https://en.wikipedia.org/wiki/Nuclear%20decommissioning
Nuclear decommissioning is the process leading to the irreversible complete or partial closure of a nuclear facility, usually a nuclear reactor, with the ultimate aim of terminating the operating licence. The process usually runs according to a decommissioning plan, including the whole or partial dismantling and decontamination of the facility, ideally resulting in restoration of the environment up to greenfield status. The decommissioning plan is fulfilled when the approved end state of the facility has been reached. The process typically takes about 15 to 30 years, or many decades longer when an interim safe storage period is applied to allow for radioactive decay. Radioactive waste that remains after the decommissioning is either moved to an on-site storage facility, where it is still under the control of the owner, or moved to a dry cask storage or disposal facility at another location. The final disposal of nuclear waste from past and future decommissioning is a growing, still-unsolved problem. Decommissioning is an administrative and technical process. The facility is dismantled to the point that it no longer requires measures for radiation protection. It includes clean-up of radioactive materials. Once a facility is fully decommissioned, no radiological danger should persist. The licence will be terminated and the site released from regulatory control. The plant licensee is then no longer responsible for nuclear safety. The costs of decommissioning are to be covered by funds that are provided for in a decommissioning plan, which is part of the facility's initial authorization. They may be saved in a decommissioning fund, such as a trust fund. There are also, worldwide, hundreds of thousands of small nuclear devices and facilities, for medical, industrial and research purposes, that will have to be decommissioned at some point. Definition Nuclear decommissioning is the administrative and technical process leading to the irreversible closure of a nuclear facility such as
https://en.wikipedia.org/wiki/Schaefer%27s%20dichotomy%20theorem
In computational complexity theory, a branch of computer science, Schaefer's dichotomy theorem, proved by Thomas Jerome Schaefer, states necessary and sufficient conditions under which a finite set S of relations over the Boolean domain yields polynomial-time or NP-complete problems when the relations of S are used to constrain some of the propositional variables. It is called a dichotomy theorem because the complexity of the problem defined by S is either in P or is NP-complete, as opposed to one of the classes of intermediate complexity that are known to exist (assuming P ≠ NP) by Ladner's theorem. Special cases of Schaefer's dichotomy theorem include the NP-completeness of SAT (the Boolean satisfiability problem) and its two popular variants 1-in-3 SAT and not-all-equal 3SAT (often denoted by NAE-3SAT). In fact, for these two variants of SAT, Schaefer's dichotomy theorem shows that their monotone versions (where negations of variables are not allowed) are also NP-complete. Original presentation Schaefer defines a decision problem that he calls the Generalized Satisfiability problem for S (denoted by SAT(S)), where S is a finite set of relations over the binary domain {0,1}. An instance of the problem is an S-formula, i.e. a conjunction of constraints of the form R(x1, ..., xk), where R is a relation in S and the xi are propositional variables. The problem is to determine whether the given formula is satisfiable, in other words if the variables can be assigned values such that they satisfy all the constraints as given by the relations from S. Schaefer identifies six classes of sets of Boolean relations for which SAT(S) is in P and proves that all other sets of relations generate an NP-complete problem. A finite set of relations S over the Boolean domain defines a polynomial time computable satisfiability problem if any one of the following conditions holds: all relations that are not constantly false are true when all their arguments are true; all relations that are not constantly false are true when a
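To make the definition of SAT(S) concrete, here is a minimal brute-force Python sketch (exponential in the number of variables, so purely illustrative); the relation names and the encoding of relations as sets of allowed tuples are choices made for this example, not part of Schaefer's presentation.

```python
from itertools import product

# Relations over the Boolean domain encoded as sets of allowed tuples.
ONE_IN_THREE = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}        # monotone 1-in-3 SAT
NOT_ALL_EQUAL = {t for t in product((0, 1), repeat=3)
                 if len(set(t)) > 1}                     # monotone NAE-3SAT

def sat_s(constraints, num_vars):
    """Decide SAT(S) by exhaustive search. `constraints` is a list of
    (relation, variable_indices) pairs; the formula is the conjunction
    of those constraints. Exponential in num_vars, illustration only."""
    for assignment in product((0, 1), repeat=num_vars):
        if all(tuple(assignment[i] for i in idx) in rel
               for rel, idx in constraints):
            return True
    return False

# R(x0, x1, x2) AND R(x1, x2, x3) with R the 1-in-3 relation: satisfiable.
print(sat_s([(ONE_IN_THREE, (0, 1, 2)), (ONE_IN_THREE, (1, 2, 3))], 4))
```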
https://en.wikipedia.org/wiki/Clause%20%28logic%29
In logic, a clause is a propositional formula formed from a finite collection of literals (atoms or their negations) and logical connectives. A clause is true either whenever at least one of the literals that form it is true (a disjunctive clause, the most common use of the term), or when all of the literals that form it are true (a conjunctive clause, a less common use of the term). That is, it is a finite disjunction or conjunction of literals, depending on the context. Clauses are usually written as follows, where the symbols l1, ..., ln are literals: l1 ∨ ... ∨ ln. Empty clauses A clause can be empty (defined from an empty set of literals). The empty clause is denoted by various symbols such as ∅, ⊥, or □. The truth evaluation of an empty disjunctive clause is always false. This is justified by considering that false is the neutral element of the monoid ({false, true}, ∨). The truth evaluation of an empty conjunctive clause is always true. This is related to the concept of a vacuous truth. Implicative form Every nonempty (disjunctive) clause is logically equivalent to an implication of a head from a body, where the head is an arbitrary literal of the clause and the body is the conjunction of the complements of the other literals. That is, if a truth assignment causes a clause to be true, and all of the literals of the body are true, then the head must also be true. This equivalence is commonly used in logic programming, where clauses are usually written as an implication in this form. More generally, the head may be a disjunction of literals. If b1, ..., bm are the literals in the body of a clause and h1, ..., hn are those of its head, the clause is usually written as follows: h1, ..., hn ← b1, ..., bm. If n = 1 and m = 0, the clause is called a (Prolog) fact. If n = 1 and m > 0, the clause is called a (Prolog) rule. If n = 0 and m > 0, the clause is called a (Prolog) query. If n > 1, the clause is no longer Horn. See also Conjunctive normal form Disjunctive normal form Horn clause
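A small Python sketch of the rewriting just described, using a hypothetical literal encoding (variable name plus polarity); it shows a disjunctive clause being turned into head ← body by complementing the non-head literals.

```python
# A literal is a (variable, polarity) pair; a disjunctive clause is a
# list of literals, e.g. p v ~q v r below.
def implicative_form(clause, head_index=0):
    """Rewrite the clause as (head, body): the head is the chosen
    literal and the body is the conjunction of the complements of the
    other literals, so the clause reads head <- body."""
    head = clause[head_index]
    body = [(var, not polarity)
            for i, (var, polarity) in enumerate(clause) if i != head_index]
    return head, body

# p v ~q v r  is equivalent to  p <- q, ~r
head, body = implicative_form([("p", True), ("q", False), ("r", True)])
print(head, "<-", body)   # ('p', True) <- [('q', True), ('r', False)]
```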
https://en.wikipedia.org/wiki/Greater%20omentum
The greater omentum (also the great omentum, omentum majus, gastrocolic omentum, epiploon, or, especially in animals, caul) is a large apron-like fold of visceral peritoneum that hangs down from the stomach. It extends from the greater curvature of the stomach, passes in front of the small intestines and doubles back to ascend to the transverse colon, before reaching the posterior abdominal wall. The greater omentum is larger than the lesser omentum, which hangs down from the liver to the lesser curvature. The common anatomical term "epiploic" derives from "epiploon", from the Greek epipleein, meaning to float or sail on, since the greater omentum appears to float on the surface of the intestines. It is the first structure observed when the abdominal cavity is opened anteriorly (from the front). Structure The greater omentum is the larger of the two peritoneal folds. It consists of a double sheet of peritoneum, folded on itself so that it has four layers. The two layers of the greater omentum descend from the greater curvature of the stomach and the beginning of the duodenum. They pass in front of the small intestines, sometimes as low as the pelvis, before turning on themselves, and ascending as far as the transverse colon, where they separate and enclose that part of the intestine. These individual layers are easily seen in the young, but in the adult they are more or less inseparably blended. The left border of the greater omentum is continuous with the gastrosplenic ligament; its right border extends as far as the beginning of the duodenum. The greater omentum is usually thin, and has a perforated appearance. It contains some adipose tissue, which can accumulate considerably in obese people. It is highly vascularised. Subdivisions The greater omentum is often defined to encompass a variety of structures. Most sources include the following three: Gastrophrenic ligament—extends to the underside of the left dome of the diaphragm Gastrocolic ligament—
https://en.wikipedia.org/wiki/Gastrocolic%20ligament
The gastrocolic ligament is a portion of the greater omentum that stretches from the greater curvature of the stomach to the transverse colon. It forms part of the anterior wall of the lesser sac. Dividing the gastrocolic ligament provides access to the anterior pancreas and the posterior wall of the stomach. This is commonly done for Whipple procedures, distal pancreatectomy, some forms of the Roux-en-Y gastric bypass, and exploratory laparotomy.
https://en.wikipedia.org/wiki/International%20Movement%20Writing%20Alphabet
The International Movement Writing Alphabet (IMWA) is a set of symbols that can be used to describe and record movement. Its creator, Valerie Sutton, also invented MovementWriting, a writing system which employs IMWA. MovementWriting in turn has several application areas, within each of which it is specialised. Application areas Sign language transcription Sutton SignWriting is optimised for sign languages and has the most development so far. Dance notation DanceWriting is a form of dance notation. Mimestry notation MimeWriting is for classic mimestry. Kinesiology SportsWriting is for the kinesiology of ice skating and gymnastics. Identification numbers The IMWA has more than 27,000 elements that are represented by unique identification numbers. Each identification number specifies six attributes as dash-separated values. The symbol is specified with a three-digit value whereas all other attributes use a two-digit value (e.g., 01-01-001-01-01-01). There are eight categories: hand, movement, face, head, upper body, full body, space, and punctuation. There are 40 groups. History The IMWA was originally designed for describing sign language and consequently was named Sutton's Sign Symbol Sequence (SSS) by its inventor, Valerie Sutton. The original symbol set, SSS-95, was limited in size due to memory constraints in personal computers at the time. The SSS-99 symbol set expanded the number of symbols, and the SSS-2002 set was the first to use the current identification numbering system. The final version, SSS-2004, was renamed International Movement Writing Alphabet (SSS-IMWA) to reflect its usefulness in applications beyond sign language. External links MovementWriting IMWA Design Documents IMWA Keyboard Design
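A small Python sketch of how such an identification number might be split into its fields, based only on the format described above (six dash-separated values, with the three-digit symbol field third); the key names used for the last three fields are hypothetical placeholders, since the text does not name all six attributes.

```python
def parse_imwa_id(identifier):
    """Split an IMWA identification number (e.g. '01-01-001-01-01-01')
    into its six dash-separated attribute values. The first three key
    names follow the categories, groups and symbols named in the text;
    the last three are hypothetical placeholders."""
    fields = identifier.split("-")
    expected_widths = (2, 2, 3, 2, 2, 2)   # the symbol field is three digits
    if len(fields) != 6 or tuple(len(f) for f in fields) != expected_widths:
        raise ValueError("malformed IMWA identification number")
    keys = ("category", "group", "symbol", "attr4", "attr5", "attr6")
    return dict(zip(keys, fields))

print(parse_imwa_id("01-01-001-01-01-01"))
```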
https://en.wikipedia.org/wiki/Acetochlor
Acetochlor is an herbicide developed by Monsanto Company and Zeneca. It is a member of the class of herbicides known as chloroacetanilides. Its mode of action is elongase inhibition, and inhibition of geranylgeranyl pyrophosphate (GGPP) cyclization enzymes, part of the gibberellin pathway. It carries high risks of environmental contamination. Uses In the United States, acetochlor was registered by the EPA as a direct substitute for many herbicides of known concern. The EPA imposed several restrictions and conditions on the use of acetochlor. It is approved for pre-emergence application, or for pre-planting application with soil incorporation, in corn (maize) at 5 litres/hectare (1,835 g/hectare of active ingredient). It is the main active ingredient in Acenit, Keystone, Guardian, Harness, Relay, Sacemid, Surpass, Top-Hand, Trophy and Winner. It is used to control weeds in corn, and is particularly useful as a replacement for atrazine in the case of some important weeds. Safety Acetochlor has been classified as a probable human carcinogen. Acetochlor, like alachlor, can cause nasal turbinate tumors via the generation of a common tissue-reactive metabolite that leads to cytotoxicity and regenerative proliferation in the nasal epithelium. It is a thyroid disruptor. Human health effects from acetochlor at low environmental doses or at biomonitored levels from low environmental exposures are unknown. A case of poisoning with extremely swollen genitals as a symptom was reported after contact. Ecologic effects In the United States, acetochlor is the third most frequently detected herbicide in natural waters. Acetochlor can accelerate metamorphosis in amphibians. It can also affect the development of fish. See also Metolachlor
https://en.wikipedia.org/wiki/Pipeflow
In hydrology, pipeflow is a type of subterranean water flow in which water travels along cracks in the soil or along passages left by the old root systems of above-ground vegetation. In such soils, which have a high vegetation content, water is able to travel along these 'pipes', allowing it to move faster than throughflow. Here, water can move at speeds between 50 and 500 m/h.
https://en.wikipedia.org/wiki/Second%20polar%20moment%20of%20area
The second polar moment of area, also known (incorrectly, colloquially) as "polar moment of inertia" or even "moment of inertia", is a quantity used to describe resistance to torsional deformation (deflection), in objects (or segments of an object) with an invariant cross-section and no significant warping or out-of-plane deformation. It is a constituent of the second moment of area, linked through the perpendicular axis theorem. Where the planar second moment of area describes an object's resistance to deflection (bending) when subjected to a force applied to a plane parallel to the central axis, the polar second moment of area describes an object's resistance to deflection when subjected to a moment applied in a plane perpendicular to the object's central axis (i.e. parallel to the cross-section). Similar to planar second moment of area calculations (Ix, Iy, and Ixy), the polar second moment of area is often denoted as Iz. While several engineering textbooks and academic publications also denote it as J or Jz, this designation should be given careful attention so that it does not become confused with the torsion constant, Jt, used for non-cylindrical objects. Simply put, the polar moment of area is a shaft or beam's resistance to being distorted by torsion, as a function of its shape. The rigidity comes from the object's cross-sectional area only, and does not depend on its material composition or shear modulus. The greater the magnitude of the second polar moment of area, the greater the torsional stiffness of the object. Definition The equation describing the polar moment of area is a multiple integral over the cross-sectional area, A, of the object: Jz = ∬A ρ² dA, where ρ is the distance to the element dA. Substituting the x and y components, using the Pythagorean theorem: Jz = ∬A (x² + y²) dA = ∬A x² dA + ∬A y² dA. Given the planar second moments of area equations, where: Ix = ∬A y² dA and Iy = ∬A x² dA, it is shown that the polar moment of area can be described as the summation of the x and y planar moments of area, Jz = Ix + Iy. This is also shown in the perpendicular a
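As a worked example of the formulas above, here is a short Python sketch (illustrative only, not from the article) that evaluates Jz for a solid circular cross-section both from the closed form πr⁴/2 and by direct numerical integration of ∬(x² + y²) dA.

```python
import math

def polar_moment_solid_circle(radius):
    """Closed form for a solid circular cross-section:
    Jz = Ix + Iy = pi*r^4/4 + pi*r^4/4 = pi*r^4/2."""
    return math.pi * radius**4 / 2

def polar_moment_numeric(radius, n=400):
    """Midpoint-rule evaluation of Jz = double integral of (x^2 + y^2) dA
    over the disc, as a sanity check on the closed form."""
    h = 2 * radius / n
    total = 0.0
    for i in range(n):
        x = -radius + (i + 0.5) * h
        for j in range(n):
            y = -radius + (j + 0.5) * h
            if x * x + y * y <= radius * radius:
                total += (x * x + y * y) * h * h
    return total

r = 0.05  # a 50 mm radius shaft, in metres
print(polar_moment_solid_circle(r))  # ~9.817e-06 m^4
print(polar_moment_numeric(r))       # agrees to within a fraction of a percent
```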
https://en.wikipedia.org/wiki/Tiktaalik
Tiktaalik is a monospecific genus of extinct sarcopterygian (lobe-finned fish) from the Late Devonian Period, about 375 Mya (million years ago), having many features akin to those of tetrapods (four-legged animals). Tiktaalik is estimated to have had a total length of up to about 2.75 m (9 ft), based on various specimens. Unearthed in Arctic Canada, Tiktaalik is a non-tetrapod member of Osteichthyes (bony fish), complete with scales and gills – but it has a triangular, flattened head and unusual, cleaver-shaped fins. Its fins have thin ray bones for paddling like most fish, but they also have sturdy interior bones that would have allowed Tiktaalik to prop itself up in shallow water and use its limbs for support as most four-legged animals do. Those fins and other mixed characteristics mark Tiktaalik as a crucial transition fossil, a link in evolution from swimming fish to four-legged vertebrates. This and similar animals might be the common ancestors of all vertebrate terrestrial fauna: amphibians, reptiles, birds, and mammals. The first well-preserved Tiktaalik fossils were found in 2004 on Ellesmere Island in Nunavut, Canada. The discovery, made by Edward B. Daeschler of the Academy of Natural Sciences, Neil H. Shubin from the University of Chicago, and Harvard University Professor Farish A. Jenkins Jr, was published in the April 6, 2006, issue of Nature and quickly recognized as a transitional form. Description Tiktaalik provides insights on the features of the extinct closest relatives of the tetrapods. Tiktaalik was a large fish: the largest known fossils have an estimated length of 2.75 m (9.02 feet), with the longest lower jaws reaching a length of 31 cm (1.0 feet). Skull and neck The skull of Tiktaalik was low and flat, more similar in shape to that of a crocodile than most fish. The rear edge of the skull was excavated by a pair of indentations known as otic notches. These notches may have housed spiracles on the top of the head, which suggest the creature had
https://en.wikipedia.org/wiki/Terrigenous%20sediment
In oceanography, terrigenous sediments are those derived from the erosion of rocks on land; that is, they are derived from terrestrial (as opposed to marine) environments. Consisting of sand, mud, and silt carried to sea by rivers, their composition is usually related to their source rocks; deposition of these sediments is largely limited to the continental shelf. Sources of terrigenous sediments include volcanoes, weathering of rocks, wind-blown dust, grinding by glaciers, and sediment carried by rivers or icebergs. Terrigenous sediments are responsible for a significant amount of the salt in today's oceans. Over time rivers continue to carry minerals to the ocean but when water evaporates, it leaves the minerals behind. Since chlorine and sodium are not consumed by biological processes, these two elements constitute the greatest portion of dissolved minerals. Quantity Some 1.35 billion tons, or 8% of global river-borne sediment (16.5-17 billion tons globally), is transported by the Ganges-Brahmaputra river system annually, according to decades-old studies. The year-to-year variance is unquantified, as is the impact of modern humans, who hold back sediment in dams but also accelerate erosion through development. Wind also transports billions of tons of sediment annually, most prominently as Saharan dust, but the amount is thought to be substantially less than that carried by rivers; again, year-to-year variance and the impact of human land use remain unquantified. It is well known that terrain influences climate conditions, and that erosive processes, along with tectonic causes, slowly but surely modify terrain, but comprehensive global-scale studies on how these land- and sea-shape factors fit in with both human-induced climate change and natural geological and astronomical climate variability have been lacking. See also Pelagic sediments Biogenous Ooze
https://en.wikipedia.org/wiki/Herbert%20Booth
Herbert Henry Howard Booth (26 August 1862 – 25 September 1926) was a Salvation Army officer, the third son and fifth child of William and Catherine Booth (née Mumford), who later went on to serve as an independent evangelist. He oversaw the Limelight Department's development and he was the writer and director for Soldiers of the Cross. Early life Herbert, who was born in Penzance, Cornwall, received little formal elementary education but became a student at Allesly Park College and the Congregational Institute at Nottingham. At the age of twenty, Herbert began helping his sister Kate Booth in building up The Salvation Army in France. Two years later, he was given charge of England's cadet officer training. He wrote many songs for The Salvation Army and became a bandmaster and a songster leader. He was the first Salvation Army officer to use the magic lantern for presentations in England. In 1886, Herbert Booth took ill and went to Australia to rest and heal. While staying in a mining town there, he found a gold nugget. He eventually forged a ring out of it for his future wife, Dutch Salvationist Cornelie Schoch. Salvation Army Herbert Booth took command of all Salvation Army operations in the British Isles when he was 26. Then, from 1892–1896, he was the Commandant for the Salvation Army in Canada. Next, he was appointed to the Australasian Territory, where his health continued to deteriorate. He struggled with depression, but was still very active in his position. In Australia, Herbert took considerable interest in the Salvation Army's Limelight Department there. He soon authorized extensive expansion, allowing Limelight to make Australia's first fictional narrative film in 1897. The following year, he and early cinematographer Joe Perry produced Social Salvation, a multimedia presentation that portrayed the work of The Salvation Army in its Australasian Territory. Whilst appointed to the territory Herbert also founded the Hamodava Tea Company, which pion
https://en.wikipedia.org/wiki/Epidemiological%20transition
In demography and medical geography, epidemiological transition is a theory which "describes changing population patterns in terms of fertility, life expectancy, mortality, and leading causes of death." For example, a phase of development marked by a sudden increase in population growth rates brought about by improved food security and innovations in public health and medicine can be followed by a re-leveling of population growth due to subsequent declines in fertility rates. Such a transition can account for the replacement of infectious diseases by chronic diseases over time due to increased life span as a result of improved health care and disease prevention. This theory was originally posited by Abdel Omran in 1971. Theory Omran divided the epidemiological transition of mortality into three phases, in the last of which chronic diseases replace infection as the primary cause of death. These phases are: The Age of Pestilence and Famine: Mortality is high and fluctuating, precluding sustained population growth, with low and variable life expectancy vacillating between 20 and 40 years. It is characterized by an increase in infectious diseases, malnutrition and famine, common during the Neolithic age. Before the first transition, the hominid ancestors were hunter-gatherers and foragers, a lifestyle partly enabled by a small and dispersed population. However, unreliable and seasonal food sources put communities at risk for periods of malnutrition. The Age of Receding Pandemics: Mortality progressively declines, with the rate of decline accelerating as epidemic peaks decrease in frequency. Average life expectancy increases steadily from about 30 to 50 years. Population growth is sustained and begins to be exponential. The Age of Degenerative and Man-Made Diseases: Mortality continues to decline and eventually approaches stability at a relatively low level. Mortality is increasingly related to degenerative diseases, cardiovascular disease (CVD), cancer, violence, acciden
https://en.wikipedia.org/wiki/Frank%20Benford
Frank Albert Benford Jr. (July 10, 1883 – December 4, 1948) was an American electrical engineer and physicist best known for rediscovering and generalizing Benford's Law, an earlier statistical statement by Simon Newcomb, about the occurrence of digits in lists of data. Benford is also known for having devised, in 1937, an instrument for measuring the refractive index of glass. An expert in optical measurements, he published 109 papers in the fields of optics and mathematics and was granted 20 patents on optical devices. Early life He was born in Johnstown, Pennsylvania. His date of birth is given variously as May 29 or July 10, 1883. At the age of 6 his family home was destroyed by the Johnstown Flood. Education He graduated from the University of Michigan in 1910. Career Benford worked for General Electric, first in the Illuminating Engineering Laboratory for 18 years, then the Research Laboratory for 20 years until retiring in July 1948. He was working as a research physicist when he made the rediscovery of Benford's law, and spent years collecting data before publishing in 1938, citing more than 20,000 values from a diverse set of sources including statistics from baseball, atomic weights, the areas of rivers and numbering of articles in magazines. Death He died suddenly at his home on December 4, 1948.
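Benford's law predicts that the leading digit d occurs with frequency log10(1 + 1/d); the short Python sketch below (illustrative only, not Benford's own data) compares that prediction with the leading digits of a classic conforming sequence, the powers of 2.

```python
import math

def benford_expected():
    """Expected leading-digit frequencies under Benford's law:
    P(d) = log10(1 + 1/d) for d = 1..9."""
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit_counts(values):
    """Tally the first significant digit of each positive number."""
    counts = {d: 0 for d in range(1, 10)}
    for v in values:
        s = f"{abs(v):e}"        # scientific notation, e.g. '3.140000e+02'
        counts[int(s[0])] += 1
    return counts

data = [2.0 ** k for k in range(1, 200)]   # powers of 2 follow the law closely
counts = leading_digit_counts(data)
n = len(data)
for d, p in benford_expected().items():
    print(d, round(p, 3), round(counts[d] / n, 3))
```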
https://en.wikipedia.org/wiki/Fluctuation%20loss
Fluctuation loss is an effect seen in radar systems as the target object moves or changes its orientation relative to the radar system. It was extensively studied during the 1950s by Peter Swerling, who introduced the Swerling models to allow the effect to be simulated. For this reason, it is sometimes known as Swerling loss or similar names. The effect occurs when the target's physical size is within a key range of values relative to the wavelength of the radar signal. As the signal reflects off various parts of the target, they may interfere as they return to the radar receiver. At any single distance from the station, this will cause the signal to be amplified or diminished compared to the baseline signal one calculates from the radar equation. As the target moves, these patterns change. This causes the signal to fluctuate in strength and may cause it to disappear entirely at certain times. The effect can be reduced or eliminated by operating on more than one frequency or using modulation techniques like pulse compression that change the frequency over the period of a pulse. In these cases, it is unlikely that the pattern of reflections from the target causes the same destructive interference at two different frequencies. Swerling modeled these effects in a famous 1954 paper, published while he was working at the RAND Corporation. Swerling's models considered the contribution of multiple small reflectors, or many small reflectors and a single large one. This offered the ability to model real-world objects like aircraft to understand the expected fluctuation loss effects. Fluctuation loss For basic considerations of the strength of a signal returned by a given target, the radar equation models the target as a single point in space with a given radar cross-section (RCS). The RCS is difficult to estimate except for the most basic cases, like a perpendicular surface or a sphere. Before the introduction of detailed computer modeling, the RCS for real-world objects was gener
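As an illustration of how the Swerling models make fluctuation simulable, here is a small Python sketch (a toy, not Swerling's formulation) of the Swerling I case, in which the RCS of a target composed of many comparable scatterers is drawn from an exponential distribution about its mean; the threshold and trial count are arbitrary choices for the example.

```python
import random

def swerling1_rcs(mean_rcs):
    """Draw a radar cross-section under the Swerling I model: the RCS
    of a target made of many comparable scatterers is exponentially
    distributed about its mean (chi-squared with 2 degrees of freedom)."""
    return random.expovariate(1.0 / mean_rcs)

def detection_rate(mean_rcs, threshold, trials=100_000):
    """Fraction of scans whose fluctuating return clears a fixed
    threshold, a toy stand-in for single-pulse detection probability."""
    hits = sum(swerling1_rcs(mean_rcs) >= threshold for _ in range(trials))
    return hits / trials

# With an exponential RCS, a return at the mean level is exceeded only
# exp(-1) ~ 37% of the time: the target 'disappears' on many scans.
print(detection_rate(mean_rcs=1.0, threshold=1.0))
```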
https://en.wikipedia.org/wiki/Memory%20T%20cell
Memory T cells are a subset of T lymphocytes that might have some of the same functions as memory B cells. Their lineage is unclear. Function Antigen-specific memory T cells specific to viruses or other microbial molecules can be found in both central memory T cells (TCM) and effector memory T cells (TEM) subsets. Although most information is currently based on observations in the cytotoxic T cells (CD8-positive) subset, similar populations appear to exist for both the helper T cells (CD4-positive) and the cytotoxic T cells. The primary function of memory cells is an augmented immune response after those cells are reactivated by reintroduction of the relevant pathogen into the body. This field is intensively studied, and some information may not yet be available. Central memory T cells (TCM): TCM lymphocytes have several attributes in common with stem cells, the most important being the ability of self-renewal, mainly because of a high level of phosphorylation of the key transcription factor STAT5. In mice, TCM proved to confer more powerful immunity against viruses, bacteria and cancer cells than TEM lymphocytes in several experimental models. Effector memory T cells (TEM): TEM and TEMRA lymphocytes are primarily active as the CD8 variants, thus being mainly responsible for cytotoxic action against pathogens. Tissue-resident memory T cells (TRM): Because TRM lymphocytes are present over long periods of time in tissues, or more importantly, barrier tissues (epithelium for example), they are crucial for quick response to barrier breach and response to any relevant pathogen present. One mechanism used by TRM to restrict pathogens is the secretion of granzyme B. Stem cell-like memory T cells (TSCM): These lymphocytes are capable of self-renewal, as are the TCM lymphocytes, and are also capable of generating both the TCM and TEM subpopulations. The presence of this population in humans is currently under investigation. Virtual memory T cell (TV
https://en.wikipedia.org/wiki/Artificial%20urinary%20bladder
The two main methods for replacing bladder function involve either redirecting urine flow or replacing the bladder in situ. Replacement can be done with an artificial urinary bladder, an artificial organ. Development On January 30, 1999, scientists announced that lab-grown bladders had been successfully transplanted into dogs. These artificial bladders worked well for almost a year in the dogs. In 2000, a new procedure for creating artificial bladders for humans was developed, called the orthotopic neobladder procedure. It involves shaping a part (usually 35 to 40 inches) of a patient's small intestine to form a new bladder; however, these bladders made of intestinal tissue produced unpleasant side-effects. The current standard for repairing a damaged urinary bladder involves partial or complete replacement using tissue from the small intestine. In 2006, the first publication of experimental transplantation of bioengineered bladders appeared in The Lancet. The trial involved seven people with spina bifida between the ages of four and nineteen who had been followed for up to five years after surgery to determine long-term effects. The bladders were prepared, and the trial run, by a team of biologists at the Wake Forest University School of Medicine and Boston Children's Hospital led by Professor Anthony Atala. Bioengineered organs which rely on a patient's own cells, autologous constructs, are not subject to transplant rejection, unlike transplants from human or animal donors.
https://en.wikipedia.org/wiki/Cognitive%20style
Cognitive style or thinking style is a concept used in cognitive psychology to describe the way individuals think, perceive and remember information. Cognitive style differs from cognitive ability (or level), the latter being measured by aptitude tests or so-called intelligence tests. There is controversy over the exact meaning of the term "cognitive style" and over whether it is a single or multiple dimension of human personality. However, it remains a key concept in the areas of education and management. If a pupil has a cognitive style that is similar to that of his/her teacher, the chances are improved that the pupil will have a more positive learning experience (Kirton, 2003). Likewise, team members with similar cognitive styles likely feel more positive about their participation in the team (Kirton, 2003). While matching cognitive styles may make participants feel more comfortable when working with one another, this alone cannot guarantee the success of the outcome. Multi-dimensional models and measures A popular multi-dimensional instrument for the measure of cognitive style is the Myers–Briggs Type Indicator. Riding (1991) developed a two-dimensional cognitive style instrument, his Cognitive Style Analysis (CSA), which is a compiled computer-presented test that measures individuals' position on two orthogonal dimensions – Wholist-Analytic (W-A) and Verbal-Imagery (V-I). The W-A dimension reflects how individuals organise and structure information. Individuals described as Analytics will deconstruct information into its component parts, whereas individuals described as Wholists will retain a global or overall view of information. The V–I dimension describes individuals' mode of information representation in memory during thinking – Verbalisers represent information in words or verbal associations, and Imagers represent information in mental pictures. The CSA test is broken down into three sub-tests, all of which are based on a comparison between response times
https://en.wikipedia.org/wiki/Compartment%20%28development%29
Compartments can be simply defined as separate, different, adjacent cell populations which, upon juxtaposition, create a lineage boundary. This boundary prevents the movement of cells between the different lineages, restricting them to their compartment. Subdivisions are established by morphogen gradients and maintained by local cell-cell interactions, providing functional units with domains of different regulatory genes, which give rise to distinct fates. Compartment boundaries are found across species. In the hindbrain of vertebrate embryos, rhombomeres are compartments of common lineage outlined by expression of Hox genes. In invertebrates, the wing imaginal disc of Drosophila provides an excellent model for the study of compartments. Although other tissues, such as the abdomen, and even other imaginal discs are compartmentalized, much of our understanding of key concepts and molecular mechanisms involved in compartment boundaries has been derived from experimentation in the wing disc of the fruit fly. Function By separating different cell populations, the fates of these compartments are highly organized and regulated. In addition, this separation creates a region of specialized cells close to the boundary, which serves as a signaling center for the patterning, polarizing and proliferation of the entire disc. Compartment boundaries establish these organizing centers by providing the source of morphogens that are responsible for the positional information required for development and regeneration. The inability of cell competition to occur across the boundary indicates that each compartment serves as an autonomous unit of growth. Differences in growth rates and patterns in each compartment keep the two lineages separate, and each compartment controls the precise size of the imaginal discs. Cell separation These two cell populations are kept separate by a mechanism of cell segregation linked to the heritable expression of a selector gene. A select
https://en.wikipedia.org/wiki/Lightest%20supersymmetric%20particle
In particle physics, the lightest supersymmetric particle (LSP) is the generic name given to the lightest of the additional hypothetical particles found in supersymmetric models. In models with R-parity conservation, the LSP is stable; in other words, it cannot decay into any Standard Model particle, since all SM particles have the opposite R-parity. There is extensive observational evidence for an additional component of the matter density in the universe, which goes under the name dark matter. The LSP of supersymmetric models is a dark matter candidate and is a weakly interacting massive particle (WIMP). Constraints on LSP from cosmology The LSP is unlikely to be a charged wino, charged higgsino, slepton, sneutrino, gluino, squark, or gravitino but is most likely a mixture of neutral higgsinos, the bino and the neutral winos, i.e. a neutralino. In particular, if the LSP were charged (and abundant in our galaxy), such particles would have been captured by the Earth's magnetic field and would have formed heavy hydrogen-like atoms. Searches for anomalous hydrogen in natural water, however, have found no evidence for such particles and thus put severe constraints on the existence of a charged LSP. LSP as a dark matter candidate Dark matter particles must be electrically neutral; otherwise they would scatter light and thus not be "dark". They must also almost certainly be non-colored. With these constraints, the LSP could be the lightest neutralino, the gravitino, or the lightest sneutrino. Sneutrino dark matter is ruled out in the Minimal Supersymmetric Standard Model (MSSM) because of the current limits on the interaction cross section of dark matter particles with ordinary matter as measured by direct detection experiments—the sneutrino interacts via Z boson exchange and would have been detected by now if it makes up the dark matter. Extended models with right-handed or sterile sneutrinos reopen the possibility of sneutrino dark matter by lowering the interaction cro
https://en.wikipedia.org/wiki/Majorana%20fermion
A Majorana fermion, also referred to as a Majorana particle, is a fermion that is its own antiparticle. They were hypothesised by Ettore Majorana in 1937. The term is sometimes used in opposition to a Dirac fermion, which describes fermions that are not their own antiparticles. With the exception of neutrinos, all of the Standard Model fermions are known to behave as Dirac fermions at low energy (lower than the electroweak symmetry breaking temperature), and none are Majorana fermions. The nature of neutrinos is not settled – they may turn out to be either Dirac or Majorana fermions. In condensed matter physics, quasiparticle excitations can appear like bound Majorana fermions. However, instead of a single fundamental particle, they are the collective movement of several individual particles (themselves composite) which are governed by non-Abelian statistics. Theory The concept goes back to Majorana's suggestion in 1937 that electrically neutral spin-1/2 particles can be described by a real-valued wave equation (the Majorana equation), and would therefore be identical to their antiparticle, because the wave functions of particle and antiparticle are related by complex conjugation, which leaves the Majorana wave equation unchanged. The difference between Majorana fermions and Dirac fermions can be expressed mathematically in terms of the creation and annihilation operators of second quantization: The creation operator f† creates a fermion in a quantum state (described by a real wave function), whereas the annihilation operator f annihilates it (or, equivalently, creates the corresponding antiparticle). For a Dirac fermion the operators f† and f are distinct, whereas for a Majorana fermion they are identical. The ordinary fermionic annihilation and creation operators f and f† can be written in terms of two Majorana operators γ1 and γ2 by f = (γ1 + iγ2)/2 and f† = (γ1 − iγ2)/2. In supersymmetry models, neutralinos – superpartners of gauge bosons and Higgs bosons – are Majorana fermions. Identities Another common c
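The operator relations above are easy to check concretely; the Python sketch below (an illustration in the two-dimensional Fock space of a single fermionic mode, not from the article) verifies that γ1 = f + f† and γ2 = i(f† − f) are Hermitian, i.e. their own conjugates, and obey the anticommutation relations {γi, γj} = 2δij.

```python
import numpy as np

# Annihilation operator of a single fermionic mode in the {|0>, |1>} basis.
f = np.array([[0, 1],
              [0, 0]], dtype=complex)
fd = f.conj().T                    # creation operator f-dagger

# Majorana combinations, inverting f = (g1 + i*g2)/2.
g1 = f + fd
g2 = 1j * (fd - f)

def anticomm(x, y):
    return x @ y + y @ x

# Each Majorana operator is Hermitian (its own conjugate) ...
print(np.allclose(g1, g1.conj().T), np.allclose(g2, g2.conj().T))
# ... and the pair satisfies {g_i, g_j} = 2 * delta_ij.
print(np.allclose(anticomm(g1, g1), 2 * np.eye(2)))
print(np.allclose(anticomm(g2, g2), 2 * np.eye(2)))
print(np.allclose(anticomm(g1, g2), np.zeros((2, 2))))
```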
https://en.wikipedia.org/wiki/Missing%20energy
In experimental particle physics, missing energy refers to energy that is not detected in a particle detector, but is expected due to the laws of conservation of energy and conservation of momentum. Missing energy is carried by particles that do not interact with the electromagnetic or strong forces and thus are not easily detectable, most notably neutrinos. In general, missing energy is used to infer the presence of non-detectable particles and is expected to be a signature of many theories of physics beyond the Standard Model. The concept of missing energy is commonly applied in hadron colliders. The initial momentum of the colliding partons along the beam axis is not known — the energy of each hadron is split, and constantly exchanged, between its constituents — so the amount of total missing energy cannot be determined. However, the initial energy in particles traveling transverse to the beam axis is zero, so any net momentum in the transverse direction indicates missing transverse energy, also called missing ET or MET. Accurate measurements of missing energy are difficult, as they require full, accurate, energy reconstruction of all particles produced in an interaction. Mismeasurement of particle energies can make it appear as if there is missing energy carried away by other particles when, in fact, no such particles were created.
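A minimal Python sketch of the quantity just described: missing transverse energy computed as the magnitude of the negative vector sum of the transverse momenta of all reconstructed particles (the event contents below are made up for illustration).

```python
import math

def missing_transverse_energy(particles):
    """MET: magnitude of the negative vector sum of the transverse
    momenta (px, py) of all reconstructed particles, in GeV."""
    sum_px = sum(px for px, _ in particles)
    sum_py = sum(py for _, py in particles)
    return math.hypot(-sum_px, -sum_py)

# Toy event: two jets not quite balancing each other, hinting at an
# undetected particle recoiling against them.
event = [(55.0, 10.0), (-40.0, -5.0)]
print(missing_transverse_energy(event))   # ~15.8 GeV of MET
```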
https://en.wikipedia.org/wiki/Inferential%20programming
In most computer programming, a programmer keeps a program's intended results in mind and painstakingly constructs a program to achieve those results. Inferential programming refers to (still mostly hypothetical) techniques and technologies enabling the inverse. This would allow describing an intended result to a computer, using a metaphor such as a fitness function, a test specification, or a logical specification, and then the computer, on its own, would construct a program needed to meet the supplied criteria. During the 1980s, approaches to achieve inferential programming mostly involved techniques for logical inference. Today the term is sometimes used in connection with evolutionary computation techniques that enable a computer to evolve a solution in response to a problem posed as a fitness or reward function. GitHub Copilot, released in 2022, is an example of inferential programming. Closely related concepts and technologies Logic programming Prolog Constraint programming Artificial intelligence Genetic programming Machine learning Artificial life Evolution Metaprogramming See also Automated reasoning Compiler theory Unit testing
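As a toy illustration of posing a problem as a fitness function and letting the computer evolve a solution, here is a minimal genetic-algorithm sketch in Python; the target string, mutation rate, and selection scheme are arbitrary choices for the example, not a method from the article.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
TARGET = "inferential"   # the 'intended result', known only to fitness()

def fitness(candidate):
    """Score a candidate by how many positions match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Resample each character with a small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size=100, generations=300):
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            break
        survivors = pop[: pop_size // 5]               # truncation selection
        pop = pop[:1] + [mutate(random.choice(survivors))
                         for _ in range(pop_size - 1)]  # keep the elite
    return max(pop, key=fitness)

print(evolve())   # usually recovers 'inferential'
```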
https://en.wikipedia.org/wiki/Proanthocyanidin
Proanthocyanidins are a class of polyphenols found in many plants, such as cranberry, blueberry, and grape seeds. Chemically, they are oligomeric flavonoids. Many are oligomers of catechin and epicatechin and their gallic acid esters. More complex polyphenols, having the same polymeric building block, form the group of tannins. Proanthocyanidins were discovered in 1947 by Jacques Masquelier, who developed and patented techniques for the extraction of oligomeric proanthocyanidins from pine bark and grape seeds. Proanthocyanidins are under preliminary research for the potential to reduce the risk of urinary tract infections (UTIs) by consuming cranberries, grape seeds or red wine. Distribution in plants Proanthocyanidins, including the less bioactive and less bioavailable polymers (four or more catechins), represent a group of condensed flavan-3-ols, such as procyanidins, prodelphinidins and propelargonidins. They can be found in many plants, most notably apples, maritime pine bark and that of most other pine species, cinnamon, aronia fruit, cocoa beans, grape seed, grape skin (procyanidins and prodelphinidins), and red wines of Vitis vinifera (the European wine grape). However, bilberry, cranberry, black currant, green tea, black tea, and other plants also contain these flavonoids. Cocoa beans contain the highest concentrations. Proanthocyanidins also may be isolated from Quercus petraea and Q. robur heartwood (wine barrel oaks). Açaí oil, obtained from the fruit of the açaí palm (Euterpe oleracea), is rich in numerous procyanidin oligomers. Apples contain on average per serving about eight times the amount of proanthocyanidin found in wine, with some of the highest amounts found in the Red Delicious and Granny Smith varieties. An extract of maritime pine bark called Pycnogenol contains 65–75 percent proanthocyanidins (procyanidins); thus a 100 mg serving would contain 65 to 75 mg of proanthocyanidins (procyanidins). Proanthocyanidin glycosides can be isolated from
https://en.wikipedia.org/wiki/Cocaine%20paste
Coca paste (paco, basuco, oxi) is a crude extract of the coca leaf which contains 40% to 91% cocaine freebase along with companion coca alkaloids and varying quantities of benzoic acid, methanol, and kerosene. In South America, coca paste, also known as cocaine base and therefore often confused with cocaine sulfate in North America, is relatively inexpensive and is widely used by low-income populations. The coca paste is smoked in tobacco or cannabis cigarettes and its use has become widespread in several Latin American countries. Traditionally, coca paste has been relatively abundant in South American countries such as Colombia, where it is processed into cocaine hydrochloride ("street cocaine") for distribution to the rest of the world. The caustic reactions associated with the local application of coca paste prevent its use by oral, intranasal, mucosal, intramuscular, intravenous or subcutaneous routes. Coca paste can only be smoked when combined with a combustible material such as tobacco or cannabis. History Coca paste use began in Bolivia and Peru in the early 1970s, first in the capital cities and then in other towns and rural areas. In a few years its use had spread to Argentina, Brazil, Colombia, Ecuador, and some Mexican cities near the border with the United States. In Argentina, cocaine paste was sold for about 30 cents per dose in 2006, enough for a powerful two-minute high. However, its price has increased because of higher demand, among other reasons. Preparation and effects Crude cocaine preparation intermediates are marketed to local markets as cheaper alternatives to pure cocaine, while the more expensive end product is exported to United States and European markets. Freebase cocaine paste preparations can be smoked. The psychological and physiological effects of paco are quite severe. Media usually report that it is extremely toxic and addictive. According to a study by Intercambios, media appear to exaggerate the effects of paco. These stere
https://en.wikipedia.org/wiki/Federation%20of%20European%20Biochemical%20Societies
The Federation of European Biochemical Societies, frequently abbreviated FEBS, is an international scientific society promoting activities in biochemistry, molecular biology and related research areas in Europe and neighbouring regions. It was founded in 1964 and includes over 35,000 members across 39 Constituent Societies. Present activities FEBS activities include: publishing journals; providing grants for scientific meetings such as an annual Congress, Young Scientists’ Forum and FEBS Advanced Courses; offering travel awards to early-stage scientists to participate in these events; offering research Fellowships for pre- and post-doctoral bioscientists; promoting molecular life science education; encouraging integration of scientists working in economically disadvantaged countries of the FEBS area; and awarding prizes and medals for research excellence. FEBS collaborates with related scientific societies such as its Constituent Societies, the International Union of Biochemistry and Molecular Biology (IUBMB) and the European Molecular Biology Organization (EMBO). Awards presented by FEBS include the Sir Hans Krebs Medal, the FEBS/EMBO Women in Science Award (presented jointly with EMBO), the Datta medal and the Theodor Bücher medal. Journals FEBS publishes four scientific journals: The FEBS Journal, FEBS Letters, Molecular Oncology and FEBS Open Bio. The FEBS Journal was previously entitled the European Journal of Biochemistry. Molecular Oncology and FEBS Open Bio are open-access journals. See also List of biochemistry awards
https://en.wikipedia.org/wiki/Copenhagen%20disease
Copenhagen disease, sometimes known as Copenhagen syndrome or progressive non-infectious anterior vertebral fusion (PAVF), is a unique spinal disorder with distinctive radiological features. This is a rare childhood disease of unknown cause, affecting females slightly more than males (60%). Prevalence is unknown, but approximately 80–100 individuals with Copenhagen disease have been reported since 1949. However, little research has been published, owing to the rarity of the disease. The disease is so rare that the National Organization for Rare Disorders does not even mention Copenhagen disease in its database. Copenhagen disease affects the lower back, as can be seen on MRI scans. It is characterized by the progressive fusion of the anterior vertebral body in the thoracolumbar region of the spine. Presentation Individuals with Copenhagen disease are often asymptomatic, but some may present with symptoms including back pain, difficulty walking, and stiffness of the spine, including neck and back, with kyphosis. Kyphosis can progress to angles of up to 85 degrees. Complete bony ankylosis occurs as the disease progresses over the years. Radiological findings may show anterior deformities in end plates, which is related to narrowing between vertebrae in specific areas. Erosion and irregularity of corresponding end plates are also seen. This is followed by fusion; however, fusion is usually not seen in the posterior disc space except in later stages of the disease. Signs and symptoms Clinical studies and case reports 15‐year‐old male A 15-year-old male presented with major thoracolumbar kyphosis with spinal stiffness. However, there was no presentation of pain, scoliosis, neurologic symptoms, or trouble with ambulating. Through a combination of MRI of the spine, radiography findings, and absence of sacroiliac joint involvement, Copenhagen disease was diagnosed. In this individual, exam findings showed anterior vertebral body fusion, as well as multilevel lumbar
https://en.wikipedia.org/wiki/Polyol%20pathway
The polyol pathway is a two-step process that converts glucose to fructose. In this pathway glucose is reduced to sorbitol, which is subsequently oxidized to fructose. It is also called the sorbitol-aldose reductase pathway. The pathway is implicated in diabetic complications, especially in microvascular damage to the retina, kidney, and nerves. Sorbitol cannot cross cell membranes, and, when it accumulates, it produces osmotic stresses on cells by drawing water into the insulin-independent tissues. Pathway Cells use glucose for energy. This normally occurs by phosphorylation by the enzyme hexokinase. However, if large amounts of glucose are present (as in diabetes mellitus), hexokinase becomes saturated and the excess glucose enters the polyol pathway when aldose reductase reduces it to sorbitol. This reaction oxidizes NADPH to NADP+. Sorbitol dehydrogenase can then oxidize sorbitol to fructose, which produces NADH from NAD+. Hexokinase can return the molecule to the glycolysis pathway by phosphorylating fructose to form fructose-6-phosphate. However, in uncontrolled diabetics with high blood glucose (more than the glycolysis pathway can handle), the reaction's mass balance ultimately favors the production of sorbitol. Activation of the polyol pathway results in a decrease of reduced NADPH and oxidized NAD+; these are necessary co-factors in redox reactions throughout the body, and under normal conditions they are not interchangeable. The decreased concentration of NADPH leads to decreased synthesis of reduced glutathione, nitric oxide, myo-inositol, and taurine. Myo-inositol is particularly required for the normal function of nerves. Sorbitol may also glycate nitrogens on proteins, such as collagen, and the products of these glycations are referred to as AGEs (advanced glycation end-products). AGEs are thought to cause disease in the human body, one effect of which is mediated by RAGE (receptor for advanced glycation end-products) and the en
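The two steps and their cofactor turnover can be written out explicitly; the reactions below simply restate the pathway described above in equation form.

```latex
\begin{align*}
\text{glucose} + \text{NADPH} + \text{H}^+
  &\xrightarrow{\text{aldose reductase}} \text{sorbitol} + \text{NADP}^+ \\
\text{sorbitol} + \text{NAD}^+
  &\xrightarrow{\text{sorbitol dehydrogenase}} \text{fructose} + \text{NADH} + \text{H}^+
\end{align*}
```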
https://en.wikipedia.org/wiki/Biozone
In biostratigraphy, biostratigraphic units or biozones are intervals of geological strata that are defined on the basis of their characteristic fossil taxa, as opposed to a lithostratigraphic unit which is defined by the lithological properties of the surrounding rock. A biostratigraphic unit is defined by the zone fossils it contains. These may be a single taxon or combinations of taxa if the taxa are relatively abundant, or variations in features related to the distribution of fossils. The same strata may be zoned differently depending on the diagnostic criteria or fossil group chosen, so there may be several, sometimes overlapping, biostratigraphic units in the same interval. Like lithostratigraphic units, biozones must have a type section designated as a stratotype. These stratotypes are named according to the typical taxon (or taxa) that are found in that particular biozone. The boundary of two distinct biostratigraphic units is called a biohorizon. Biozones can be further subdivided into subbiozones, and multiple biozones can be grouped together in a superbiozone in which the grouped biozones usually have a related characteristic. A succession of biozones is called biozonation. The length of time represented by a biostratigraphic zone is called a biochron. History The concept of a biozone was first established by the 19th century paleontologist Albert Oppel, who characterized rock strata by the species of the fossilized animals found in them, which he called zone fossils. Oppel's biozonation was mainly based on Jurassic ammonites he found throughout Europe, which he used to classify the period into 33 zones (now 60). Alcide d'Orbigny would further reinforce the concept in his Prodrome de Paléontologie Stratigraphique, in which he established comparisons between geological stages and their biostratigraphy. Types of biozone The International Commission on Stratigraphy defines the following types of biozones: Range zones Range zones are biozones defined b
https://en.wikipedia.org/wiki/United%20States%20Navy%20use%20of%20Hydrometer%20in%20the%201800s
Certain ships of the United States Navy adopted the use of the hydrometer in the 1850s. Readings taken of the density of seawater contributed to research into the dynamics of ocean currents. Adoption The first International Maritime Conference, held at Brussels in 1853 (aka the "Brussels Conference") for devising a uniform system of meteorological observations at sea, recommended the systematic use of the hydrometer. Captain John Rodgers, Lieutenant Porter, and Dr. William Samuel Waithman Ruschenberger, all of the United States Navy, did this, as did Dr. Raymond in the American steamer Golden Age and Captain Henry Toynbee (F.R.A.S., F.R.A.G.S.) of the English East Indiaman Gloriana. All of these men returned valuable observations with the hydrometer, though Captain Rodgers afforded the most extended series. Those navigators who used the hydrometer enlarged the bounds of knowledge and fields of research, leading to the discovery of new relations of the sea. The object which the Brussels Conference had in view when the specific gravity column was introduced into the sea-journal was that hydrographers might find in it data for computing the dynamical force which the sea derives for its currents from the difference in the specific gravity of its waters in different climes. The Brussels Conference agreed that a given difference as to specific gravity between the water in one part of the sea and the water in another would give rise to certain currents, and that the set and strength of these currents would be the same, whether such difference of specific gravity arose from difference of temperature or difference of saltiness, or both. Findings The observations made with it by Captain Rodgers, on board the Vincennes – the first United States warship to circumnavigate the globe – showed that the specific gravity of sea water varies but little in the trade-wind regions, notwithstanding the change of temperature. The temperature was a little greater in the southeast trade-wind
https://en.wikipedia.org/wiki/Waragi
Waragi (also known as kasese) is a generic term in Uganda for domestic distilled beverages. Waragi is also given different names, depending on region of origin, the distillation process, or both. Waragi is known as a form of homemade gin. The term "waragi" is synonymous with locally distilled gin in all parts of Uganda. However, Uganda Waragi is a particular brand of industrially distilled gin produced by East African Breweries Limited. Other brands of gin, distilled by individuals at small scale, are also available, each unique and different from the others. The most common are: 1. "Kasese-Kasese", which was originally distilled in the district of Kasese in western Uganda and is sold all over the country; 2. "Arege mogo", which originated in Lira district in northern Uganda and is likewise made and sold all over the country. These two brands of waragi have different tastes and scents. The distillation process in both cases produces gin as highly distilled as that made industrially. Moonshining and consumption of waragi and other alcoholic beverages are widespread in Uganda. In the 2004 WHO Global Status Report on Alcohol, Uganda ranked as the world's leading consumer of alcohol (per capita). Based on results from 2007, Uganda’s overall alcohol consumption was an average of 17.6 litres per capita. This is unusually high compared to surrounding countries. By 2016 this seemed no longer to be the case, with total consumption of pure alcohol down to 9.5 litres per capita (≥ 15 years of age). History The history of waragi in Uganda can be traced to the colonial era. Gins were introduced to Uganda by British soldiers who were stationed in the Uganda Protectorate, and soon became a popular drink among Ugandans. In 1960, the colonial government passed the Liquor Act, which implemented limitations on the production and consumption of locally produced gins in the colony. Although the act was os
https://en.wikipedia.org/wiki/TR-069
Technical Report 069 (TR-069) is a technical specification of the Broadband Forum that defines an application layer protocol for remote management and provisioning of customer-premises equipment (CPE) connected to an Internet Protocol (IP) network. TR-069 uses the CPE WAN Management Protocol (CWMP), which provides support functions for auto-configuration, software or firmware image management, software module management, status and performance management, and diagnostics. The CPE WAN Management Protocol is a bidirectional SOAP- and HTTP-based protocol, and provides the communication between a CPE and auto configuration servers (ACS). The protocol addresses the growing number of different Internet access devices such as modems, routers, and gateways, as well as end-user devices which connect to the Internet, such as set-top boxes and VoIP phones. TR-069 was first published in May 2004, with amendments in 2006, 2007, 2010, July 2011 (version 1.3), and November 2013 (version 1.4 am5). Other technical initiatives, such as the Home Gateway Initiative (HGI), Digital Video Broadcasting (DVB) and WiMAX Forum, endorsed CWMP as the protocol for remote management of residential networking devices and terminals. Communication Transport CWMP is a text-based protocol. Orders sent between the device (CPE) and auto configuration server (ACS) are transported over HTTP (or more frequently HTTPS). At this level (HTTP), the CPE acts as client and the ACS as HTTP server. This essentially means that control over the flow of the provisioning session is the sole responsibility of the device. Configuration parameters In order for the device to connect to the server, it needs to have certain parameters configured first. These include the URL of the server the device wants to connect to and the interval at which the device will initiate the provisioning session (PeriodicInformInterval). Additionally, if authentication is required for security reasons, data such as the username and the password
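To illustrate the transport just described, here is a heavily simplified Python sketch of a CPE posting a CWMP Inform to an ACS over HTTP; the device values are hypothetical, and a real Inform carries more header fields and a full ParameterList, so treat this as a shape of the exchange rather than a conformant message.

```python
import urllib.request

# A simplified CWMP Inform envelope (hypothetical device values).
INFORM = """<soapenv:Envelope
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:cwmp="urn:dslforum-org:cwmp-1-0">
  <soapenv:Body>
    <cwmp:Inform>
      <DeviceId>
        <Manufacturer>ExampleCo</Manufacturer>
        <OUI>001122</OUI>
        <ProductClass>Gateway</ProductClass>
        <SerialNumber>SN0001</SerialNumber>
      </DeviceId>
      <Event><EventStruct><EventCode>2 PERIODIC</EventCode></EventStruct></Event>
      <MaxEnvelopes>1</MaxEnvelopes>
      <CurrentTime>2024-01-01T00:00:00Z</CurrentTime>
      <RetryCount>0</RetryCount>
    </cwmp:Inform>
  </soapenv:Body>
</soapenv:Envelope>"""

def send_inform(acs_url):
    """POST the Inform to the ACS; as described above, at the HTTP
    level the CPE is the client and the ACS is the server."""
    req = urllib.request.Request(acs_url, data=INFORM.encode(),
                                 headers={"Content-Type": "text/xml"},
                                 method="POST")
    return urllib.request.urlopen(req)
```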
https://en.wikipedia.org/wiki/Annulus%20%28mycology%29
An annulus is the ring-like or collar-like structure sometimes found on the stipe of some species of mushrooms. The annulus represents the remnants of the partial veil after it has ruptured to expose the gills or other spore-producing surface. It can also be called a ring, which is what the Latin word annulus directly translates to. The modern usage of the Latin word originates from the early days of botany and mycology, when species descriptions were written only in Latin. Outside of the formal setting of scientific publications, which still have a Latin requirement, it will often simply be referred to as a ring or stem ring in field guides and on identification websites. Ring descriptions The way in which the structure and appearance of rings is described can vary with the author, and a description may only note the existence of a ring, without providing specific information, in cases where the ring lacks any notable features that would be useful to distinguish between similar species. Ring shapes and structures can also vary between specimens of the same species and change with the age of the mushroom, so multiple descriptions are sometimes used. Even in species with distinctly identifiable rings, they may not always be present or perfectly match descriptions, so it can be beneficial to have multiple specimens to compare at different states of maturity. Some rings may be persistent and remain through the life of the mushroom, or they may disappear with age. Such rings have a variety of synonymous terms associated with them, such as fleeting, ephemeral, evanescent, transient, fugacious or impermanent. These rings may peel away from the stem, fall apart or simply fade into the stem surface as it matures. Rings may also be described as fragile, meaning easily torn or damaged, and whilst some fragile rings could be considered permanent if undisturbed, they are often likely to vanish due to their fragility. Some rings may be fixed in place and firmly attached to the stem whilst ot
https://en.wikipedia.org/wiki/Sockets%20Direct%20Protocol
The Sockets Direct Protocol (SDP) is a transport-agnostic protocol to support stream sockets over remote direct memory access (RDMA) network fabrics. SDP was originally defined by the Software Working Group (SWG) of the InfiniBand Trade Association. Originally designed for InfiniBand (IB), SDP is currently maintained by the OpenFabrics Alliance. Protocol SDP defines a standard wire protocol over an RDMA fabric to support stream sockets (SOCK_STREAM). SDP uses various RDMA network features for high-performance zero-copy data transfers. SDP is a pure wire-protocol level specification and does not go into any socket API or implementation specifics. The purpose of the Sockets Direct Protocol is to provide an RDMA-accelerated alternative to the TCP protocol on IP. The goal is to do this in a manner which is transparent to the application. Solaris 10 and Solaris 11 Express include support for SDP. Several other Unix operating system variants plan to include support for the Sockets Direct Protocol. Windows offers a subsystem called Winsock Direct, which could be used to support SDP. SDP support was introduced in the JDK 7 release of the Java Platform, Standard Edition (July 2011) for applications deployed on Solaris and Linux operating systems (OFED 1.4.2 and 1.5). Oracle Database 11g supports connections over SDP. The Sockets Direct Protocol only deals with stream sockets, and if installed in a system, bypasses the OS-resident TCP stack for stream connections between any endpoints on the RDMA fabric. All other socket types (such as datagram, raw, packet, etc.) are supported by the Linux IP stack and operate over standard IP interfaces (i.e., IPoIB on InfiniBand fabrics). The IP stack has no dependency on the SDP stack; however, the SDP stack depends on IP drivers for local IP assignments and for IP address resolution for endpoint identification. SDP is used by the Australian telecommunications company Telstra on their 3G platform Next G to deliver streaming mobile TV. T
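Transparency to the application means the familiar stream-socket API is kept; only the address family (or a preload shim) changes. The following Python fragment is a purely hypothetical sketch: AF_INET_SDP is not part of the Python standard library, the value 27 is only the historical Linux/OFED constant, and the peer address is a placeholder.

```python
# Hypothetical sketch: create a stream socket over SDP on a Linux host with
# the (legacy) OFED SDP kernel module loaded. AF_INET_SDP is not exported by
# Python; 27 was the historical Linux value - verify against kernel headers.
import socket

AF_INET_SDP = 27  # assumed constant, an illustration only

sock = socket.socket(AF_INET_SDP, socket.SOCK_STREAM)
sock.connect(("10.0.0.2", 5000))   # placeholder RDMA-fabric peer address
sock.sendall(b"hello over SDP")    # ordinary stream-socket calls from here on
sock.close()
```

The same transparency idea underlies the JDK 7 support mentioned above, where SDP use is enabled by configuration rather than by changing application code.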
https://en.wikipedia.org/wiki/Spartan%20%28chemistry%20software%29
Spartan is a molecular modelling and computational chemistry application from Wavefunction. It contains code for molecular mechanics, semi-empirical methods, ab initio models, density functional models, post-Hartree–Fock models, and thermochemical recipes including G3(MP2) and T1. Quantum chemistry calculations in Spartan are powered by Q-Chem. Primary functions are to supply information about structures, relative stabilities and other properties of isolated molecules. Molecular mechanics calculations on complex molecules are common in the chemical community. Quantum chemical calculations, including Hartree–Fock method molecular orbital calculations, but especially calculations that include electronic correlation, are more time-consuming in comparison. Quantum chemical calculations are also called upon to furnish information about mechanisms and product distributions of chemical reactions, either directly by calculations on transition states, or based on Hammond's postulate, by modeling the steric and electronic demands of the reactants. Quantitative calculations, leading directly to information about the geometries of transition states, and about reaction mechanisms in general, are increasingly common, while qualitative models are still needed for systems that are too large to be subjected to more rigorous treatments. Quantum chemical calculations can supply information to complement existing experimental data or replace it altogether, for example, atomic charges for quantitative structure-activity relationship (QSAR) analyses, and intermolecular potentials for molecular mechanics and molecular dynamics calculations. Spartan applies computational chemistry methods (theoretical models) to many standard tasks that provide calculated data applicable to the determination of molecular shape conformation, structure (equilibrium and transition state geometry), NMR, IR, Raman, and UV-visible spectra, molecular (and atomic) properties, reactivity, and selectivity. Compu
https://en.wikipedia.org/wiki/Tide%20mill
A tide mill is a water mill driven by tidal rise and fall. A dam with a sluice is created across a suitable tidal inlet, or a section of river estuary is made into a reservoir. As the tide comes in, it enters the mill pond through a one-way gate, and this gate closes automatically when the tide begins to fall. When the tide is low enough, the stored water can be released to turn a water wheel. Tide mills are usually situated in river estuaries, away from the effects of waves but close enough to the sea to have a reasonable tidal range. Such mills have existed since the Middle Ages, and some may date back to the Roman period. A modern version of a tide mill is the electricity-generating tidal barrage. Early history Possibly the earliest tide mill in the Roman world was located in London on the River Fleet. Since the late 20th century, a number of new archaeological finds have successively pushed back the date of the earliest tide mills, all of which were discovered on the Irish coast: A 6th-century vertical-wheeled tide mill was located at Killoteran near Waterford. A twin-flume, horizontal-wheeled tide mill, dating to c. 630, was excavated on Little Island in Cork. Alongside it, another tide mill was found that was powered by a vertical undershot wheel. The Nendrum Monastery mill from 787 was situated on an island in Strangford Lough in Northern Ireland. Its millstones are 830 mm in diameter and the horizontal wheel is estimated to have developed at its peak. Remains of an earlier mill dated at 619 were also found at the site. In England, an exceptionally well-preserved tidal mill, dated by dendrochronology to the late 7th century (691–692 AD), was excavated in the Ebbsfleet Valley (a minor tributary of the River Thames) in Kent during construction of Ebbsfleet International station, on the High Speed 1 railway line (Oxford Wessex Archaeology Joint Venture 2011, Settling the Ebbsfleet Valley, CTRL Excavations
https://en.wikipedia.org/wiki/Train%20of%20thought
The train of thought or track of thought refers to the interconnection in the sequence of ideas expressed during a connected discourse or thought, as well as to the sequence itself, especially in discussions of how this sequence leads from one idea to another. When a reader or listener "loses the train of thought" (i.e., loses the relation between consecutive sentences or phrases, or the relation between non-verbal concepts in an argument or presentation), comprehension of the expressed or unexpressed thought is lost. The term "train of thoughts" was introduced and elaborated as early as 1651 by Thomas Hobbes in his Leviathan, though with a somewhat different meaning, similar to the meaning used by the British associationists. See also Absent-mindedness Association of Ideas Associationism Derailment (thought disorder) Internal monologue Mind-wandering Stream of consciousness
https://en.wikipedia.org/wiki/Chemotaxonomy
Merriam-Webster defines chemotaxonomy as the method of biological classification based on similarities and dissimilarities in the structure of certain compounds among the organisms being classified. Advocates argue that, as proteins are more closely controlled by genes and less subject to natural selection than anatomical features, they are more reliable indicators of genetic relationships. The compounds studied most are proteins, amino acids, nucleic acids and peptides. Physiology is the study of the working of organs in a living being. Since the working of the organs involves the body's chemicals, these compounds are called biochemical evidence. The study of morphological change has shown that there are changes in the structure of animals which result in evolution. When changes take place in the structure of a living organism, they will naturally be accompanied by changes in the physiological or biochemical processes. John Griffith Vaughan was one of the pioneers of chemotaxonomy. Biochemical products The body of any animal in the animal kingdom is made up of a number of chemicals. Of these, only a few biochemical products have been taken into consideration to derive evidence for evolution. Protoplasm: Every living cell, from a bacterium to an elephant, from grasses to the blue whale, has protoplasm. Though the complexity and constituents of the protoplasm increase from lower to higher living organisms, the basic compound is always the protoplasm. Evolutionary significance: From this evidence, it is clear that all living things have a common origin point or a common ancestor, which in turn had protoplasm. Its complexity increased due to changes in the mode of life and habitat. Nucleic acids: DNA and RNA are the two types of nucleic acids present in all living organisms. They are present in the chromosomes. The structure of these acids has been found to be similar in all animals. DNA always has two chains forming a double helix, and each chain is made up of nuc
https://en.wikipedia.org/wiki/Hemangioblast
Hemangioblasts are the multipotent precursor cells that can differentiate into both hematopoietic and endothelial cells. In the mouse embryo, the emergence of blood islands in the yolk sac at embryonic day 7 marks the onset of hematopoiesis. From these blood islands, the hematopoietic cells and vasculature are formed shortly after. Hemangioblasts are the progenitors that form the blood islands. To date, the hemangioblast has been identified in human, mouse and zebrafish embryos. Hemangioblasts were first extracted from embryonic cultures and manipulated by cytokines to differentiate along either a hematopoietic or an endothelial route. It has been shown that these pre-endothelial/pre-hematopoietic cells in the embryo arise out of a phenotype CD34 population. It was later found that hemangioblasts are also present in the tissue of post-natal individuals, such as newborn infants and adults. Adult hemangioblast There is now emerging evidence of hemangioblasts that continue to exist in the adult as circulating stem cells in the peripheral blood that can give rise to both endothelial cells and hematopoietic cells. These cells are thought to express both CD34 and CD133. These cells are likely derived from the bone marrow, and may even be derived from hematopoietic stem cells. History The hemangioblast was first hypothesized in 1900 by Wilhelm His. Existence of the hemangioblast was first proposed in 1917 by Florence Sabin, who observed the close spatial and temporal proximity of the emergence of blood vessels and red blood cells within the yolk sac in chick embryos. In 1932, making the same observation as Sabin, Murray coined the term "hemangioblast". The hypothesis of a bipotential precursor was further supported by the fact that endothelial cells and hematopoietic cells share many of the same markers, including Flk1, Vegf, CD34, Scl, Gata2, Runx1, and Pecam-1. Furthermore, it was shown that depletion of Flk1 in the developing embryo results in disappearance of
https://en.wikipedia.org/wiki/Fluorosilicate%20glass
Fluorosilicate glass (FSG) is a glass material composed primarily of fluorine, silicon and oxygen. It has a number of uses in industry and manufacturing, especially in semiconductor fabrication, where it forms an insulating dielectric. The related fluorosilicate glass-ceramics have good mechanical and chemical properties. Semiconductor fabrication FSG has a small relative dielectric constant (low-κ dielectric) and is used between metal copper interconnect layers during the silicon integrated circuit fabrication process. It is widely used by semiconductor fabrication plants for geometries under 0.25 microns (μm). FSG is effectively a fluorine-containing silicon dioxide (κ = 3.5, while κ of undoped silicon dioxide is 3.9). FSG is used by IBM. Intel started using Cu metal layers and FSG on its 1.2 GHz Pentium processor at 130 nm complementary metal–oxide–semiconductor (CMOS). Taiwan Semiconductor Manufacturing Company (TSMC) combined FSG and copper in the Altera APEX. Fluorosilicate glass-ceramics Fluorosilicate glass-ceramics are crystalline or semi-crystalline solids formed by careful cooling of molten fluorosilicate glass. They have good mechanical properties. Potassium fluororichterite based materials are composed of tiny interlocked rod-shaped amphibole crystals; they have good resistance to chemicals and can be used in microwave ovens. Richterite glass-ceramics are used for high-performance tableware. Fluorosilicate glass-ceramics with a sheet structure, derived from mica, are strong and machinable. They find a number of uses and can be used in high vacuum and as dielectrics and precision ceramic components. A number of mica and mica-fluoroapatite glass-ceramics were studied as biomaterials. See also Fluoride glass Glass Silicate
https://en.wikipedia.org/wiki/Oracle%20Fusion%20Middleware
Oracle Fusion Middleware (FMW, also known as Fusion Middleware) consists of several software products from Oracle Corporation. FMW spans multiple services, including Java EE and developer tools, integration services, business intelligence, collaboration, and content management. FMW depends on open standards such as BPEL, SOAP, XML and JMS. Oracle Fusion Middleware provides software for the development, deployment, and management of service-oriented architecture (SOA). It includes what Oracle calls "hot-pluggable" architecture, designed to facilitate integration with existing applications and systems from other software vendors such as IBM, Microsoft, and SAP AG. Evolution Many of the products included under the FMW banner do not themselves qualify as middleware products: "Fusion Middleware" essentially represents a re-branding of many Oracle products outside of Oracle's core database and applications-software offerings—compare Oracle Fusion. Oracle acquired many of its FMW products via acquisitions, including products from BEA Systems and Stellent. In order to provide standards-based software to assist with business process automation, HP has incorporated FMW into its "service-oriented architecture (SOA) portfolio". Oracle leveraged the Configurable Network Computing (CNC) technology acquired in its 2005 PeopleSoft/JD Edwards purchase. Oracle Fusion Applications, based on Oracle Fusion Middleware, were finally released in September 2010. According to Oracle, as of 2013, over 120,000 customers were using Fusion Middleware. This includes over 35 of the world's 50 largest companies and more than 750 of the BusinessWeek Global 1000, with FMW also supported by 7,500 partners. Assessments In January 2008, Oracle WebCenter Content (formerly Universal Content Management) won InfoWorld's "Technology of the Year" award for "Best Enterprise Content Manager", with Oracle SOA Suite winning the award for "Best Enterprise Service Bus". In 2007, Gartner wrote th
https://en.wikipedia.org/wiki/Iterated%20monodromy%20group
In geometric group theory and dynamical systems the iterated monodromy group of a covering map is a group describing the monodromy action of the fundamental group on all iterations of the covering. A single covering map between spaces is therefore used to create a tower of coverings, by placing the covering over itself repeatedly. In terms of the Galois theory of covering spaces, this construction on spaces is expected to correspond to a construction on groups. The iterated monodromy group provides this construction, and it is applied to encode the combinatorics and symbolic dynamics of the covering, and provide examples of self-similar groups. Definition The iterated monodromy group of f is the following quotient group: IMG(f) := π₁(X, t) / ⋂_{n∈ℕ} Ker(Fⁿ), where: f : X₁ → X is a covering of a path-connected and locally path-connected topological space X by its subset X₁; π₁(X, t) is the fundamental group of X with basepoint t; F is the monodromy action for f; and Fⁿ is the monodromy action of the n-th iteration of f, i.e. the action of π₁(X, t) on the fibre f⁻ⁿ(t). Action The iterated monodromy group acts by automorphisms on the rooted tree of preimages T = ⊔_{n≥0} f⁻ⁿ(t), where a vertex z ∈ f⁻ⁿ(t) is connected by an edge with f(z) ∈ f⁻⁽ⁿ⁻¹⁾(t). Examples Iterated monodromy groups of rational functions Let f be a complex rational function and let P_f be the union of the forward orbits of its critical points (the post-critical set). If P_f is finite (or has a finite set of accumulation points), then the iterated monodromy group of f is the iterated monodromy group of the covering f : Ĉ ∖ f⁻¹(P_f) → Ĉ ∖ P_f, where Ĉ is the Riemann sphere. Iterated monodromy groups of rational functions usually have exotic properties from the point of view of classical group theory. Most of them are infinitely presented, and many have intermediate growth. IMG of polynomials The Basilica group is the iterated monodromy group of the polynomial z² − 1. See also Growth rate (group theory) Amenable group Complex dynamics Julia set
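As a concrete illustration of the last example, the Basilica group can be written down by a wreath recursion on the binary rooted tree. The presentation below follows one common convention from the self-similar groups literature (author conventions for the order of the sections and the placement of the swap σ vary, so treat this as an indicative sketch rather than a canonical normal form):

```latex
% One common wreath recursion for the two generators of the Basilica group
% IMG(z^2 - 1); sigma is the automorphism swapping the two subtrees at the root.
a = \sigma\,(1, b), \qquad b = (1, a)
```

Here a pair (x, y) lists the actions induced on the two subtrees below the root; iterating these rules determines the action on the entire tree of preimages.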
https://en.wikipedia.org/wiki/Wratten%20number
Wratten numbers are a labeling system for optical filters, usually for photographic use, comprising a number sometimes followed by a letter. The number denotes the color of the filter, but it is arbitrary and does not encode any information (the 80A–80D are blue, while the next filters in numerical order, 81A–81EF, are orange); letters almost always increase with increasing strength (the exception being 2B, 2A, 2C, 2E). They are named after British inventor Frederick Wratten, who founded one of the earliest photographic supply businesses. Wratten and his partner C.E.K. Mees sold their company to Eastman Kodak in 1912, and Kodak started manufacturing Wratten filters. They remain in production and are sold under license through the Tiffen corporation. Wratten filters are often used in observational astronomy by amateur astronomers. Color filters for visual observing made by GSO, Baader, Lumicon and other companies are actually Wratten filters mounted in standard or 2 in (nominal, 48 mm actual) filter threads. For imaging, interference filters are used. Wratten filters are also used in photomicrography. Filters made by various manufacturers may be identified by Wratten numbers but not precisely match the spectral definition for that number. This is especially true for filters used for aesthetic (as opposed to technical) reasons. For example, an 81B warming filter is used to slightly "warm" the colors in a color photo, making the scene a bit less blue and more red. Many manufacturers make filters labeled as 81B with transmission curves which are similar, but not identical, to the Kodak Wratten 81B. This reflects each manufacturer's idea of how best to warm a scene, and depends on the dyes and layering techniques used in manufacturing. Some manufacturers use their own designations to avoid this confusion; for example, Singh-Ray has a warming filter which they designate A‑13, which is not a Wratten number. Filters used where precisely specified and repeatable characteristics are requi
https://en.wikipedia.org/wiki/Microserver
A data center 64-bit microserver is a server-class computer based on a system on a chip (SoC). The goal is to integrate all of the server motherboard functions onto a single microchip, except DRAM, boot flash and power circuits. Thus, the main chip contains more than only compute cores, caches, memory interfaces and PCI controllers. It typically also contains SATA, networking, serial port and boot flash interfaces on the same chip. This eliminates support chips (and therefore area, power and cost) at the board level. Multiple microservers can be put together in a small package to construct a dense data center (example: DOME MicroDataCenter). History The term "microserver" first appeared in the late 1990s and was popularized by a Palo Alto incubator, PicoStar, when incubating Cobalt Microservers. The term reappeared around 2010 and is commonly misunderstood to imply low performance. Microservers first appeared in the embedded market, where cost and space constraints led these types of SoCs to appear before they did in general-purpose computing. Indeed, recent research indicates that emerging scale-out services and popular datacenter workloads (e.g., as in CloudSuite) require a certain degree of single-thread performance (with out-of-order execution cores) which may be lower than that of conventional desktop processors but much higher than that of embedded systems. A modern microserver typically offers medium-to-high performance at high packaging densities, allowing very small compute node form factors. This can result in high energy efficiency (operations per watt), typically better than that of the highest single-thread performance processors. One of the early microservers is the 32-bit SheevaPlug. There are plenty of consumer-grade 32-bit microservers available, for instance the Banana Pi, as seen in the Comparison of single-board computers. In early 2015, a 64-bit consumer-grade microserver was announced. In mid-2017, consumer-grade 64-bit microservers started app
https://en.wikipedia.org/wiki/Sphenoidal%20emissary%20foramen
In the base of the skull, in the great wings of the sphenoid bone, medial to the foramen ovale, a small aperture, the sphenoidal emissary foramen, may occasionally be seen (it is often absent) opposite the root of the pterygoid process. When present, it opens below near the scaphoid fossa. Vesalius was the first to describe and illustrate this foramen, which is therefore also called the foramen of Vesalius. Other names include foramen venosum and canaliculus sphenoidalis. Importance If present at all, the sphenoidal emissary foramen gives passage to a small vein (the vein of Vesalius) that connects the pterygoid plexus with the cavernous sinus. The importance of this passage lies in the fact that an infected thrombus from an extracranial source may reach the cavernous sinus. The mean area of the foramen is small, which may suggest that it plays a minor role in the dynamics of blood circulation in the venous system of the head. Structure The sphenoidal emissary foramen varies in size in different individuals, and is not always present on both sides of the sphenoid bone (one on each great wing of the sphenoid). In a study of 100 skulls, the sphenoidal emissary foramen was present in only 17% of the cases, and it was always single. In another study, the differences between the right and the left side as well as the differences between the sexes were noted. Out of the 70 sides observed (35 skulls in total), the sphenoidal emissary foramen was present in 32.85% of the cases (20.0% on the right side, 12.85% on the left side). The incidence of bilateral and unilateral sphenoidal emissary foramina was 22.85% (8 out of 35 skulls) and 20% (7 out of 35 skulls), respectively. Regarding differences between the sexes, no remarkable differences were observed, although the foramen occurred more often in females than in males (found in 13 sides in females and in 10 sides in males). The skulls with one foramen were most frequent; those with two followed
https://en.wikipedia.org/wiki/Game%20development%20tool
A game development tool is a specialized software application that assists or facilitates the making of a video game. Some tasks handled by tools include the conversion of assets (such as 3D models, textures, etc.) into formats required by the game, level editing and script compilation. Almost all game development tools are custom-built by the developer for one game, or by a console manufacturer (such as Nintendo or Microsoft) as part of a game development kit. Though tools may be re-used for later games, they almost always start out as a resource for a single game. While many COTS packages are used in the production of games—such as 3D packages like Maya and 3D Studio Max, graphic editors like Photoshop and IDEs like Microsoft Visual Studio—they are not considered solely game development tools since they have uses beyond game development. The game tools may or may not be released along with the final game, depending on what the tool is used for. For contemporary games, it is common to include at least level editors with games that require them. History Early in the history of the video game industry, game programming tools were non-existent. This wasn't a hindrance for the types of games that could be created at the time, however. While today a game like Pac-Man would most likely have levels generated with a level editor, in the industry's infancy such levels were hard-coded into the game's source code. Images of the player's character were also hard-coded, being drawn, frame by frame, by source code commands. As soon as the more technologically advanced use of sprites became common, game development tools began to emerge, custom-programmed by the programmer. Today, game development tools are still often programmed by members of the game development team, often by programmers whose sole job is to develop and maintain tools. Examples Bitsy: A game development tool featured exclusively at itch.io RPG Maker, known in Japan as RPG Tsukūru for the dev
https://en.wikipedia.org/wiki/Ephemeral%20key
A cryptographic key is called ephemeral if it is generated for each execution of a key establishment process. In some cases ephemeral keys are used more than once within a single session (e.g., in broadcast applications), where the sender generates only one ephemeral key pair per message and the private key is combined separately with each recipient's public key. Contrast with a static key. Private / public ephemeral key agreement key Private (resp. public) ephemeral key agreement keys are the private (resp. public) keys of asymmetric key pairs that are used in a single key establishment transaction to establish one or more keys (e.g., key wrapping keys, data encryption keys, or MAC keys) and, optionally, other keying material (e.g., initialization vectors). See also Cryptographic key types Session key External links Recommendation for Key Management — Part 1: General, NIST Special Publication 800-57 NIST Cryptographic Toolkit
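To make the contrast with static keys concrete, the following sketch performs a single ephemeral Diffie–Hellman key agreement with the X25519 primitives of the third-party Python cryptography package; the library choice and variable names are this example's assumptions, not part of the definition above.

```python
# Minimal ephemeral key agreement sketch using X25519 (pip install cryptography).
# Each party generates a fresh (ephemeral) key pair for this single exchange.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

alice_priv = X25519PrivateKey.generate()   # ephemeral private key, used once
bob_priv = X25519PrivateKey.generate()

# Public keys are exchanged; each side combines its own private key with the
# peer's public key and derives the same shared secret.
alice_secret = alice_priv.exchange(bob_priv.public_key())
bob_secret = bob_priv.exchange(alice_priv.public_key())
assert alice_secret == bob_secret

# The ephemeral private keys are now discarded; the next key-establishment
# transaction starts from freshly generated pairs.
del alice_priv, bob_priv
```

In a real protocol the raw shared secret would additionally be passed through a key-derivation function before being used as keying material.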
https://en.wikipedia.org/wiki/Strehl%20ratio
The Strehl ratio is a measure of the quality of optical image formation, originally proposed by Karl Strehl, after whom the term is named. Used variously in situations where optical resolution is compromised due to lens aberrations or due to imaging through the turbulent atmosphere, the Strehl ratio has a value between 0 and 1, with a hypothetical, perfectly unaberrated optical system having a Strehl ratio of 1. Mathematical definition The Strehl ratio is frequently defined as the ratio of the peak aberrated image intensity from a point source compared to the maximum attainable intensity using an ideal optical system limited only by diffraction over the system's aperture. It is also often expressed in terms not of the peak intensity but of the intensity at the image center (the intersection of the optical axis with the focal plane) due to an on-axis source; in most important cases these definitions result in a very similar figure (or an identical figure, when the point of peak intensity must be exactly at the center due to symmetry). Using the latter definition, the Strehl ratio can be computed in terms of the wavefront error Φ(x, y): the offset of the wavefront due to an on-axis point source, compared to that produced by an ideal focusing system, over the aperture A(x, y). Using Fraunhofer diffraction theory, one computes the wave amplitude using the Fourier transform of the aberrated pupil function evaluated at (0, 0) (the center of the image plane), where the phase factors of the Fourier transform formula are reduced to unity. Since the Strehl ratio refers to intensity, it is found from the squared magnitude of that amplitude: S = |⟨exp(iφ)⟩|², where i is the imaginary unit, φ = 2πΦ(x, y)/λ is the phase error over the aperture at wavelength λ, and the average of the complex quantity inside the brackets is taken over the aperture A(x, y). The Strehl ratio can be estimated using only the statistics of the phase deviation φ, according to a formula rediscovered by Mahajan but known long before in antenna theory as the Ruz
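The statistical estimate mentioned above is usually written as the extended Maréchal approximation. A short sketch of the standard result, with σ the RMS of the phase error φ in radians:

```latex
% Extended Marechal approximation: sigma^2 is the variance of the phase
% error over the aperture, in radians^2.
S \approx e^{-\sigma^2}
% Worked example: an RMS wavefront error of lambda/14 gives
% sigma = 2\pi/14 \approx 0.449 rad, so S \approx e^{-0.202} \approx 0.82,
% the conventional S \approx 0.8 "diffraction-limited" criterion.
```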
https://en.wikipedia.org/wiki/Cavium
Cavium was a fabless semiconductor company based in San Jose, California, specializing in ARM-based and MIPS-based network, video and security processors and SoCs. The company was co-founded in 2000 by Syed B. Ali and M. Raghib Hussain, who were introduced to each other by a Silicon Valley entrepreneur. Cavium offered processor- and board-level products targeting routers, switches, appliances, storage and servers. The company went public in May 2007 with about 175 employees. As of 2011, following numerous acquisitions, it had about 850 employees worldwide, of whom about 250 were located at company headquarters in San Jose. Cavium was acquired by Marvell Technology Group on July 6, 2018. History Name change On June 17, 2011, Cavium Networks, Inc. changed its name to Cavium, Inc. Acquisition In November 2017, Cavium's board of directors agreed to the company's purchase by Marvell Technology Group for $6 billion in cash and stock. The merger was finalized on July 6, 2018. NSA interference On March 23, 2022, Cavium was named as an NSA "enabled" CPU vendor in a PhD thesis titled "Communication in a world of pervasive surveillance". The term "enabled" refers to a process by which a chip vendor has a backdoor introduced into its designs.
https://en.wikipedia.org/wiki/StegFS
StegFS is a free steganographic file system for Linux based on the ext2 filesystem. It is licensed under the GPL. It was principally developed by Andrew D. McDonald and Markus G. Kuhn. The last version of StegFS is 1.1.4, released February 14, 2001. This is a development release with known bugs, such as a file corruption bug; there is no stable release. The last website activity was in 2004. In 2003, Andreas C. Petter and Sebastian Urbach intended to continue development of StegFS and created a site for it on SourceForge.net. Development has since moved to using the FUSE library, and working releases are available from the development homepage. See also Filesystem-level encryption List of cryptographic file systems Further reading External links StegFS original home page StegFS development home page StegFS research paper (PDF) StegFS SourceForge project page Disk file systems Free special-purpose file systems File systems supported by the Linux kernel Steganography
https://en.wikipedia.org/wiki/Underhanded%20C%20Contest
The Underhanded C Contest is a programming contest to turn out code that is malicious but passes a rigorous inspection and looks like an honest mistake even if discovered. The contest rules define a task and a malicious component. Entries must perform the task in a malicious manner as defined by the contest, and hide the malice. Contestants are allowed to use C-like compiled languages to make their programs. The contest was organized by Dr. Scott Craver of the Department of Electrical Engineering at Binghamton University. The contest was initially inspired by Daniel Horn's Obfuscated V contest in the fall of 2004. For the 2005 to 2008 contests, the prize was a $100 gift certificate to ThinkGeek. The 2009 contest had its prize increased to $200 due to the very late announcement of winners, and the prize for the 2013 contest was also a $200 gift certificate. Contests 2005 The 2005 contest had the task of basic image processing, such as resampling or smoothing, while covertly inserting unique and useful "fingerprinting" data into the image. Winning entries from 2005 used uninitialized data structures, reuse of pointers, and an embedding of machine code in constants. 2006 The 2006 contest required entries to count word occurrences, but have vastly different runtimes on different platforms. To accomplish the task, entries used fork implementation errors, optimization problems, endianness differences and various API implementation differences. The winner called strlen() in a loop, leading to quadratic complexity which was optimized out by a compiler on Linux but not on Windows. 2007 The 2007 contest required entries to encrypt and decrypt files with a strong, readily available encryption algorithm such that a low percentage (1%–0.01%) of the encrypted files may be cracked in a reasonably short time. The contest commenced on April 16 and ended on July 4. Entries used misimplementations of RC4, misused API calls, and incorrect function prototypes. 2008 The 2008 contest re
https://en.wikipedia.org/wiki/Steganographic%20file%20system
Steganographic file systems are a kind of file system first proposed by Ross Anderson, Roger Needham, and Adi Shamir. Their paper proposed two main methods of hiding data: in a series of fixed-size files originally consisting of random bits, on top of which 'vectors' could be superimposed in such a way as to allow levels of security to decrypt all lower levels but not even know of the existence of any higher levels; or in an entire partition that is filled with random bits, with files hidden in it. In a steganographic file system using the second scheme, files are not merely stored, nor stored encrypted: the entire partition is randomized, and encrypted files strongly resemble randomized sections of the partition, so when files are stored on the partition, there is no easy way to discern between meaningless gibberish and the actual encrypted files. Furthermore, the locations of files are derived from the key for the files, and the locations are hidden and available only to programs with the passphrase. This leads to the problem that files can very quickly overwrite each other (because of the birthday paradox); this is compensated for by writing all files in multiple places to lessen the chance of data loss. Advantage While there may seem to be no point to a file system which is guaranteed either to be grossly inefficient storage-wise or to cause data loss and corruption, whether from data collisions or loss of the key (in addition to being a complex system with poor read/write performance), performance was not the goal of StegFS. Rather, StegFS is intended to thwart "rubberhose attacks", which usually work because encrypted files are distinguishable from regular files, and authorities can coerce the user until the user gives up the keys and all the files are distinguishable as regular files. However, since in a steganographic file system the number of files is unknown and every byte looks like an encrypted byte, the authorities cannot know how many files
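The birthday-paradox overwriting problem is easy to quantify. The sketch below uses an illustrative model (each file occupies one block whose location is a uniformly random function of its key) to compute the chance that at least two of k files collide on a b-block partition:

```python
# Probability that k files, each mapped to a uniformly random block out of b,
# produce at least one collision (the birthday paradox). Illustrative model:
# one block per file, uniform key-derived placement.
def collision_probability(k, b):
    p_no_collision = 1.0
    for i in range(k):
        p_no_collision *= (b - i) / b
    return 1.0 - p_no_collision

# Even a large partition collides quickly: ~1000 single-block files on a
# million-block partition already collide with roughly 39% probability.
print(round(collision_probability(1000, 1_000_000), 2))
```

This is why such schemes write each file to several independent locations: a file is lost only if every one of its replicas happens to be overwritten.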
https://en.wikipedia.org/wiki/List%20of%20boiling%20and%20freezing%20information%20of%20solvents
See also Freezing-point depression Boiling-point elevation
https://en.wikipedia.org/wiki/Economizer
Economizers (US and Oxford spelling), or economisers (UK), are mechanical devices intended to reduce energy consumption, or to perform a useful function such as preheating a fluid. The term economizer is used for other purposes as well. Boiler, power plant, heating, refrigeration, ventilating, and air conditioning (HVAC) uses are discussed in this article. In simple terms, an economizer is a heat exchanger. Stirling engine Robert Stirling's innovative contribution to the design of hot air engines of 1816 was what he called the 'Economiser'. Now known as the regenerator, it stored heat from the hot portion of the engine as the air passed to the cold side, and released heat to the cooled air as it returned to the hot side. This innovation improved the efficiency of the Stirling engine enough to make it commercially successful in particular applications, and it has since been a component of every air engine that is called a Stirling engine. Boilers In boilers, economizers are heat exchange devices that heat fluids, usually water, up to but not normally beyond the boiling point of that fluid. Economizers are so named because they can make use of the enthalpy in fluid streams that are hot, but not hot enough to be used in a boiler, thereby recovering more useful enthalpy and improving the boiler's efficiency. An economizer is a device fitted to a boiler which saves energy by using the exhaust gases from the boiler to preheat the cold water used to fill it (the feed water). Steam boilers use large amounts of energy raising feed water to the boiling temperature, converting the water to steam and sometimes superheating that steam above saturation temperature. Heat transfer efficiency is improved when the highest temperatures near the combustion sources are used for boiling and superheating, while the residual heat of the cooled combustion gases exhausting from the boiler is used in an economizer to raise the temperature of feed water entering the steam drum. An indirect conta
https://en.wikipedia.org/wiki/Contingency%20management
Contingency management (CM) is the application of the three-term contingency (or operant conditioning), which uses stimulus control and consequences to change behavior. CM originally derived from the science of applied behavior analysis (ABA), but it is sometimes implemented from a cognitive-behavioral therapy (CBT) framework as well (such as in dialectical behavior therapy, or DBT). Incentive-based contingency management is well established when used as a clinical behavior analysis (CBA) treatment for substance use disorders, in which patients earn money (vouchers) or other incentives (i.e., prizes) as a reward to reinforce drug abstinence (and, less often, are punished if they fail to adhere to program rules and regulations or their treatment plan). Another popular approach based on CM for alcoholism is the community reinforcement approach and family training (CRAFT) model, which uses self-management and shaping techniques. By most evaluations, its procedures produce one of the largest effect sizes out of all mental health and educational interventions. Token economies One form of contingency management is the token economy system. Token systems can be used in an individual or group format. Token systems are successful with a diverse array of populations, including those suffering from addiction, those with special needs, and those experiencing delinquency. However, recent research questions the use of token systems with very young children; the exception is the treatment of stuttering. The goal of such systems is to gradually thin out token delivery and to help the person begin to access the natural community of reinforcement (the reinforcement typically received in the world for performing the behavior). Walker (1990) presents an overview of token systems and of combining such procedures with other interventions in the classroom. He relates the comprehensiveness of token systems to the child's level of difficulty. Voucher programs and related applica
https://en.wikipedia.org/wiki/Beacon%20Institute%20for%20Rivers%20%26%20Estuaries
Beacon Institute for Rivers and Estuaries, Clarkson University, with offices in the City of Beacon and in Troy, New York, is a 501(c)(3) not-for-profit environmental research organization focusing on real-time monitoring of river ecosystems. The institute's mission is to "create and maintain a global center for scientific and technological innovation that advances research, education and public policy regarding rivers and estuaries." In 2011, Beacon Institute for Rivers and Estuaries and Clarkson University announced an expansion of their education and operational partnership for commercialization of new real-time river monitoring sensor technology, academic programs and public policy solutions based on real-time data to protect waterways. Clarkson University Business School Dean Timothy Sugrue, a West Point graduate, is the institute's President and Chief Executive Officer, with responsibility for R&D oversight and commercial partnership development. With the new alliance, the institute's Founding Director, Hudson River environmentalist John Cronin, joined Clarkson's faculty as a Beacon Institute Fellow. Cronin also serves as Senior Fellow for Environmental Affairs at Pace University's Academy for the Environment. In 2008, Clarkson University's James S. Bonner and a team of its researchers joined the River and Estuary Observatory Network (REON) collaboration started by Beacon Institute and IBM. Together, the partners have established a first-of-its-kind real-time environmental monitoring network for rivers and estuaries that seeks to allow continuous monitoring of physical, chemical and biological data from points in New York's Hudson, Mohawk and St. Lawrence Rivers via an integrated network of sensors, robotics, mobile monitoring and computational technology deployed in the rivers. External links Beacon Institute for Rivers and Estuaries, Clarkson University Ecology organizations Beacon, New York Hudson River
https://en.wikipedia.org/wiki/Bose%E2%80%93Hubbard%20model
The Bose–Hubbard model gives a description of the physics of interacting spinless bosons on a lattice. It is closely related to the Hubbard model that originated in solid-state physics as an approximate description of superconducting systems and the motion of electrons between the atoms of a crystalline solid. The model was introduced by Gersch and Knollman in 1963 in the context of granular superconductors. (The term 'Bose' in its name refers to the fact that the particles in the system are bosonic.) The model rose to prominence in the 1980s after it was found to capture the essence of the superfluid-insulator transition in a way that was much more mathematically tractable than fermionic metal-insulator models. The Bose–Hubbard model can be used to describe physical systems such as bosonic atoms in an optical lattice, as well as certain magnetic insulators. Furthermore, it can be generalized and applied to Bose–Fermi mixtures, in which case the corresponding Hamiltonian is called the Bose–Fermi–Hubbard Hamiltonian. Hamiltonian The physics of this model is given by the Bose–Hubbard Hamiltonian: H = −t ∑_⟨i,j⟩ (b†_i b_j + b†_j b_i) + (U/2) ∑_i n_i(n_i − 1) − μ ∑_i n_i. Here, ⟨i,j⟩ denotes summation over all neighboring lattice sites i and j, while b†_i and b_i are bosonic creation and annihilation operators such that n_i = b†_i b_i gives the number of particles on site i. The model is parametrized by the hopping amplitude t that describes boson mobility in the lattice, the on-site interaction U, which can be attractive (U < 0) or repulsive (U > 0), and the chemical potential μ, which essentially sets the number of particles. If unspecified, typically the phrase 'Bose–Hubbard model' refers to the case where the on-site interaction is repulsive. This Hamiltonian has a global U(1) symmetry, which means that it is invariant (its physical properties are unchanged) under the transformation b_i → e^{iθ} b_i. In a superfluid phase, this symmetry is spontaneously broken. Hilbert space The dimension of the Hilbert space of the Bose–Hubbard model is given by D = (N + L − 1)! / (N! (L − 1)!), where N is the total number of particles, while L denotes the number of lattice sites.
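For small lattices the Hamiltonian above can be diagonalized exactly in the fixed-N Fock basis. The following Python sketch assumes an open chain and the sign conventions written above; function and variable names are illustrative:

```python
# Exact diagonalization of the Bose-Hubbard Hamiltonian on an open chain
# with L sites and N bosons, in the occupation-number (Fock) basis.
import itertools
import numpy as np

def bose_hubbard(L, N, t, U, mu=0.0):
    # Fock states: occupation tuples (n_1, ..., n_L) with sum N.
    states = [s for s in itertools.product(range(N + 1), repeat=L)
              if sum(s) == N]
    index = {s: k for k, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    for k, s in enumerate(states):
        # On-site interaction and chemical potential (diagonal terms).
        H[k, k] = sum(0.5 * U * n * (n - 1) - mu * n for n in s)
        # Hopping: apply b†_a b_b for each directed nearest-neighbour pair.
        for i in range(L - 1):
            for a, b in ((i, i + 1), (i + 1, i)):
                if s[b] > 0:
                    r = list(s); r[b] -= 1; r[a] += 1
                    # <r| b†_a b_b |s> = sqrt(n_b (n_a + 1))
                    H[index[tuple(r)], k] -= t * np.sqrt(s[b] * (s[a] + 1))
    return states, H

# Ground-state energy of 3 bosons on 3 sites with repulsive interaction;
# the basis dimension is (N + L - 1)! / (N! (L - 1)!) = 10.
states, H = bose_hubbard(L=3, N=3, t=1.0, U=4.0)
print(len(states), np.linalg.eigvalsh(H)[0])
```

Filling only one directed hop per pair from each basis state keeps the matrix Hermitian by construction, since the reverse hop is generated when the partner state is processed.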
https://en.wikipedia.org/wiki/Battle%20management%20language
A Battle Management Language (BML) is the unambiguous language used to command and control forces and equipment conducting military operations and to provide for situational awareness and a shared, common operational picture. It can be seen as a standard digitized representation of a commander's intent to be used for real troops, for simulated troops, and for future robotic forces. BML is particularly relevant in a network-centric environment for enabling mutual understanding. The need for a common BML was identified by the U.S. Army's Simulation to C4I Interoperability Overarching Integrated Product Team (SIMCI OIPT) and was originally defined in a paper titled Standardizing Battle Management Language - A Vital Move Towards the Army Transformation, published by the Simulation Interoperability Standards Organization in 2001. The same issues that have driven the Army to embark on this program also confront the other Services' C4ISR and simulation systems, and future military operations. Acknowledging the significant need for training in joint environments to support future operations, SISO and other BML development efforts have expanded to encompass Joint BML (J-BML) for all military branches and, ultimately, more recent work on establishing a Coalition BML (C-BML) for all Services and multinational coalition members, promoting interoperability among their C4ISR systems and simulations, and also among systems employed in real-world operations. Coalition Battle Management Language Coalition Battle Management Language (C-BML, CBML) is an unambiguous language to describe a commander's intent, to be understood by both live forces and automated systems, for multi-national simulated and real-world operations. Following a meeting of subject matter experts at the Spring 2004 SIW, the Simulation Interoperability Standards Organization determined that a detailed evaluation of BML efforts at a coalition level was necessary and formally established a Coalition BML (C-BML) Stud
https://en.wikipedia.org/wiki/American%20Superconductor
American Superconductor (AMSC) is an American energy technologies company headquartered in Ayer, Massachusetts. The firm specializes in using superconductors for the development of diverse power systems, including but not limited to superconducting wire. Moreover, AMSC employs superconductors in the construction of ship protection systems. The company has a subsidiary, AMSC Windtec, located in Klagenfurt, Austria. Projects Detroit Edison Project American Superconductor installed a trial superconducting electric power transmission cable in the Detroit Edison Frisbee substation in 2001. Holbrook Superconductor Project The world's first production superconducting transmission power cable, the Holbrook Superconductor Project, was commissioned in late June 2008. The suburban Long Island electrical substation is fed by about 600 meters of high-temperature superconductor wire manufactured by American Superconductor, installed underground and chilled to superconducting temperatures with liquid nitrogen. Tres Amigas Project American Superconductor was chosen as a supplier for the Tres Amigas Project, the United States' first renewable energy market hub. The Tres Amigas renewable energy market hub will be a multi-mile, triangular electricity pathway of Superconductor Electricity Pipelines capable of transferring and balancing many gigawatts of power between three U.S. power grids (the Eastern Interconnection, the Western Interconnection and the Texas Interconnection). Unlike traditional power lines, it will transfer power as direct current (DC) instead of alternating current (AC). It will be located in Clovis, New Mexico. Korea's LS Cable AMSC will sell three million meters of wire to allow LS Cable to build 10–15 miles of superconducting cabling for the grid. This represents an order-of-magnitude increase over the size of the current largest installation, at Long Island Power. HTS rotors AMSC has demonstrated a 36.5 MW (49,000 horsepower) high-temperature superconductor (HTS) electric m
https://en.wikipedia.org/wiki/Media%20Delivery%20Index
The Media Delivery Index (MDI) is a set of measures that can be used to monitor both the quality of a delivered video stream and the system margin for IPTV systems, by providing an accurate measurement of jitter and delay at the network (Internet Protocol, IP) level, which are the main causes of quality loss. Identifying and quantifying such problems in these networks is key to maintaining high-quality video delivery and to providing indications that warn system operators with enough advance notice to allow corrective action. The Media Delivery Index is typically displayed as two numbers separated by a colon: the Delay Factor (DF) and the Media Loss Rate (MLR). Context The Media Delivery Index (MDI) may be able to identify problems caused by: Time distortion If packets are delayed by the network, some packets arrive in bursts with interpacket delays shorter than when they were transmitted, while others are delayed such that they arrive with greater delay between packets than when they were transmitted from the source. This time difference between when a packet actually arrives and its expected arrival time is defined as packet jitter or time distortion. A receiver displaying the video at its nominal rate must accommodate the varying input stream arrival times by buffering the data arriving early and ensuring that there is enough already-stored data to cover possible delays in the received data (because of this, the buffer is filled before displaying begins). Similarly, the network infrastructure (switches, routers, …) uses buffers at each node to avoid packet loss. These buffers must be sized appropriately to handle network congestion. Packet delays can be caused by multiple factors, among which are the way traffic is routed through the infrastructure and possible differences between link speeds in the infrastructure. Moreover, some methods for delivering Quality of Service (QoS) using packet metering algorithms may intentionally h
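As a sketch of how the two components can be computed from a packet trace (following the usual RFC 4445-style definitions: DF is the peak-to-peak excursion of a virtual buffer drained at the nominal media rate, divided by that rate; MLR is lost media packets per second; function and parameter names here are illustrative):

```python
# Illustrative Media Delivery Index computation: DF from the excursion of a
# virtual buffer drained at the nominal media rate, MLR as lost packets/sec.
def media_delivery_index(arrival_times, payload_bytes, media_rate_bps,
                         lost_packets, interval_seconds):
    received = 0
    start = arrival_times[0]
    vb_min = vb_max = 0.0
    for t in arrival_times:
        received += payload_bytes                     # bytes in so far
        drained = (t - start) * media_rate_bps / 8.0  # bytes out so far
        vb = received - drained                       # virtual buffer level
        vb_min, vb_max = min(vb_min, vb), max(vb_max, vb)
    df_seconds = (vb_max - vb_min) * 8.0 / media_rate_bps
    mlr = lost_packets / interval_seconds
    return df_seconds * 1000.0, mlr                   # DF in ms, MLR

# Perfectly paced 1316-byte packets at 3.76 Mb/s: DF stays near one packet time.
times = [i * (1316 * 8 / 3_760_000) for i in range(1000)]
print(media_delivery_index(times, 1316, 3_760_000, lost_packets=0,
                           interval_seconds=times[-1]))
```

A bursty trace would widen the virtual-buffer excursion and therefore the DF, which is exactly the buffering margin the receiver and network nodes must provide.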
https://en.wikipedia.org/wiki/PyBOP
PyBOP (benzotriazol-1-yloxytripyrrolidinophosphonium hexafluorophosphate) is a peptide coupling reagent used in solid-phase peptide synthesis. It is used as a substitute for the BOP reagent, avoiding the formation of the carcinogenic waste product HMPA. See also BOP reagent DEPBT, a related reagent that contains no phosphorus–nitrogen bonds HATU HBTU
https://en.wikipedia.org/wiki/Visual%20reasoning
Visual reasoning is the process of manipulating one's mental image of an object in order to reach a certain conclusion – for example, mentally constructing a piece of machinery to experiment with different mechanisms. In a frequently cited paper in the journal Science and a later book, Eugene S. Ferguson, a mechanical engineer and historian of technology, claims that visual reasoning is a widely used tool in creating technological artefacts. There is ample evidence that visual methods, particularly drawing, play a central role in creating artefacts. Ferguson's visual reasoning also has parallels in philosopher David Gooding's argument that experimental scientists work with a combination of action, instruments, objects and procedures, as well as words – that is, with a significant non-verbal component. Ferguson argues that non-verbal reasoning does not get much attention in areas like the history of technology and the philosophy of science because the people involved are verbal rather than visual thinkers. Those who use visual reasoning, notably architects, designers, engineers, and certain mathematicians, conceive and manipulate objects in "the mind's eye" before putting them on paper. Having done this, the paper or computer versions (in CAD) can be manipulated by metaphorically "building" the object on paper (or computer) before building it physically. Nikola Tesla claimed that the first alternating current motor he built ran perfectly because he had visualized and "run" models of it in his mind before building the prototype. See also Diagrammatic reasoning Scientific visualization Visual analytics
https://en.wikipedia.org/wiki/Glaisher%27s%20theorem
In number theory, Glaisher's theorem is an identity useful to the study of integer partitions. Proved in 1883 by James Whitbread Lee Glaisher, it states that the number of partitions of an integer n into parts not divisible by d is equal to the number of partitions in which no part is repeated d or more times. This generalizes a result established in 1748 by Leonhard Euler for the case d = 2. Statement The theorem states that the number of partitions of an integer n into parts not divisible by d is equal to the number of partitions in which no part is repeated d or more times, i.e. partitions of the form n = λ₁ + λ₂ + ⋯ + λ_k where λ_i ≥ λ_{i+1} and λ_i ≥ λ_{i+d−1} + 1. When d = 2 this becomes the special case known as Euler's theorem: the number of partitions of n into distinct parts is equal to the number of partitions of n into odd parts. In the following examples, we use the multiplicity notation of partitions. For example, 1⁴ 2 3² is a notation for the partition 1 + 1 + 1 + 1 + 2 + 3 + 3. Example for d = 2 (Euler's theorem case) Among the 15 partitions of the number 7, there are 5 that contain only odd parts (i.e. only odd numbers): 7, 5 + 1 + 1, 3 + 3 + 1, 3 + 1 + 1 + 1 + 1, and 1 + 1 + 1 + 1 + 1 + 1 + 1. If we now count the partitions of 7 with distinct parts (i.e. where no number is repeated), we also obtain 5: 7, 6 + 1, 5 + 2, 4 + 3, and 4 + 2 + 1. The partitions in the first and second case are not the same, and it is not obvious why their number is the same. Example for d = 3 Among the 11 partitions of the number 6, there are 7 that contain only parts not divisible by 3: 5 + 1, 4 + 2, 4 + 1 + 1, 2 + 2 + 2, 2 + 2 + 1 + 1, 2 + 1 + 1 + 1 + 1, and 1 + 1 + 1 + 1 + 1 + 1. And if we count the partitions of 6 with no part that repeats more than 2 times, we also obtain 7: 6, 5 + 1, 4 + 2, 4 + 1 + 1, 3 + 3, 3 + 2 + 1, and 2 + 2 + 1 + 1. Proof A proof of the theorem can be obtained with generating functions. If we denote by a_d(n) the number of partitions with no parts divisible by d and by b_d(n) the number of partitions with no parts repeated more than d − 1 times, then the theorem means that for all n, a_d(n) = b_d(n). The uniqueness of ordinary generating functions implies that instead of proving the equality for all n, it suffices to prove that the generating functions
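Both counts in the theorem are easy to check by brute force for small n, which also reproduces the two examples above. A minimal sketch (function names are illustrative):

```python
def partitions(n, max_part=None):
    """Generate all partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def glaisher_check(n, d):
    # Count partitions with no part divisible by d, and partitions in which
    # no part is repeated d or more times; the theorem says they are equal.
    a = sum(1 for p in partitions(n) if all(part % d != 0 for part in p))
    b = sum(1 for p in partitions(n) if all(p.count(v) < d for v in set(p)))
    return a, b

print(glaisher_check(7, 2))  # (5, 5): Euler's case from the example above
print(glaisher_check(6, 3))  # (7, 7): the d = 3 example above
```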
https://en.wikipedia.org/wiki/Crofton%20formula
In mathematics, the Crofton formula, named after Morgan Crofton (1826–1915), is a classic result of integral geometry relating the length of a curve to the expected number of times a "random" line intersects it. Statement Suppose γ is a rectifiable plane curve. Given an oriented line ℓ, let n_γ(ℓ) be the number of points at which γ and ℓ intersect. We can parametrize the general line ℓ by the direction φ in which it points and its signed distance p from the origin. The Crofton formula expresses the arc length of the curve γ in terms of an integral over the space of all oriented lines: length(γ) = (1/4) ∬ n_γ(φ, p) dφ dp. The differential form dφ ∧ dp is invariant under rigid motions of the plane, so it is a natural integration measure for speaking of an "average" number of intersections. It is usually called the kinematic measure. The right-hand side in the Crofton formula is sometimes called the Favard length. In general, the space of oriented lines in ℝⁿ is the tangent bundle of the unit sphere S^{n−1}, and we can similarly define a kinematic measure on it, which is also invariant under rigid motions of ℝⁿ. Any rectifiable surface of codimension 1 then satisfies an analogous formula, with the factor 1/4 replaced by a dimension-dependent constant. Proof sketch Both sides of the Crofton formula are additive over concatenation of curves, so it suffices to prove the formula for a single line segment. Since the right-hand side does not depend on the positioning of the line segment, it must equal some function of the segment's length. Because, again, the formula is additive over concatenation of line segments, the integral must be a constant times the length of the line segment. It remains only to determine the factor of 1/4; this is easily done by computing both sides when γ is the unit circle. The proof for the generalized version proceeds exactly as above. Poincaré's formula for intersecting curves Let E₂ be the Euclidean group of the plane. It can be parametrized as E₂ = [0, 2π) × ℝ², such that each (θ, u) ∈ E₂ is the motion that rotates by θ counterclockwise around the origin and then translates by u. Then the measure dθ du on E₂ is invariant under the action of E₂ on itself; thus we o
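The formula can be checked numerically by sampling random oriented lines. A small Monte Carlo sketch (assuming the curve lies within distance P of the origin, so lines with |p| > P never meet it; names are illustrative):

```python
import math
import random

def crofton_length_estimate(n_intersections, samples=200_000, P=2.0):
    """Monte Carlo estimate of curve length via the Crofton formula."""
    total = 0
    for _ in range(samples):
        phi = random.uniform(0.0, 2.0 * math.pi)  # line direction
        p = random.uniform(-P, P)                  # signed distance from origin
        total += n_intersections(phi, p)
    # Rescale the sample mean to the integral over [0, 2*pi) x [-P, P],
    # then apply the 1/4 factor from the Crofton formula.
    integral = (total / samples) * (2.0 * P) * (2.0 * math.pi)
    return integral / 4.0

def circle(phi, p):
    # Any line at signed distance |p| < 1 meets the unit circle twice.
    return 2 if abs(p) < 1.0 else 0

print(crofton_length_estimate(circle))  # about 2*pi = 6.283, up to noise
```

The unit-circle test mirrors the step in the proof sketch where the constant 1/4 is pinned down.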
https://en.wikipedia.org/wiki/Fruity%20Frank
Fruity Frank is a 1984 video game for the Amstrad CPC and MSX home computers, produced by Kuma Software and authored by Steve Wallis, with graphics by his brother Sean Wallis. The gameplay is very similar to Mr. Do!, though the story involves Frank protecting a garden from invading monsters. Gameplay The player has to collect the fruit lying around the garden while avoiding touching the monsters. Apples can be pushed onto the monsters to kill them and offer temporary respite. Monsters can also be killed by throwing a bouncing apple pip at them. When all pieces of fruit have been collected the player proceeds to the next level. Each level is identifiable by a different colour background and a new jocular tune. There are four types of enemies: the yellow "big nose", slow: 20 points by shooting, 40 points by squashing; the violet "eggplant", fast, digging: 50 points by shooting, 100 points by squashing; the red "strawberry", very fast, digging: 100 points by shooting, 200 points by squashing; and the green (spelling "Bonus"). Every 1000 points, Frank gains an extra life, with a maximum of two. Music Music in the game is inspired by traditional English songs and rhymes. Level 1 : "A Life on the Ocean Wave" (Royal Marines anthem) Level 2 : "Where Have You Been All the Day, Billy Boy" (Irish version) Level 3 : ??? Level 4 : "Sweet Molly Malone" Level 5 : "The Jolly Beggar" Level 6 : "My Bonnie Lies over the Ocean" Level 7 : "London Bridge Is Falling Down"
https://en.wikipedia.org/wiki/List%20of%20formulae%20involving%20%CF%80
The following is a list of significant formulae involving the mathematical constant π. Many of these formulae can be found in the article Pi, or the article Approximations of π. Euclidean geometry C = πd = 2πr, where C is the circumference of a circle, d is the diameter, and r is the radius. More generally, P = πw, where P and w are, respectively, the perimeter and the width of any curve of constant width. A = πr², where A is the area of a circle. More generally, A = πab, where A is the area enclosed by an ellipse with semi-major axis a and semi-minor axis b. A = 4πa², where A is the area between the witch of Agnesi and its asymptotic line; a is the radius of the defining circle. A = 4r²Γ(5/4)²/Γ(3/2) = √2·πr²/M(1, √2), where A is the area of a squircle with minor radius r, Γ is the gamma function and M is the arithmetic–geometric mean. A = π(R + r)(R + 2r), where A is the area of an epicycloid with the smaller circle of radius r and the larger circle of radius R (R = kr, k ∈ ℕ), assuming the initial point lies on the larger circle. A = πa²/2 (k even) or πa²/4 (k odd), where A is the area of a rose with angular frequency k (k ∈ ℕ) and amplitude a. L is the perimeter of the lemniscate of Bernoulli with focal distance c. V = (4/3)πr³, where V is the volume of a sphere and r is the radius. S = 4πr², where S is the surface area of a sphere and r is the radius. V = (1/2)π²r⁴, where V is the hypervolume of a 3-sphere and r is the radius. S = 2π²r³, where S is the surface volume of a 3-sphere and r is the radius. Regular convex polygons Sum of internal angles of a regular convex polygon with n sides: (n − 2)π. Area of a regular convex polygon with n sides and side length s: A = (ns²/4)·cot(π/n). Inradius of a regular convex polygon with n sides and side length s: r = (s/2)·cot(π/n). Circumradius of a regular convex polygon with n sides and side length s: R = (s/2)·csc(π/n). Physics The cosmological constant: Heisenberg's uncertainty principle: Δx·Δp ≥ h/(4π). Einstein's field equation of general relativity: R_{μν} − (1/2)R·g_{μν} + Λg_{μν} = (8πG/c⁴)·T_{μν}. Coulomb's law for the electric force in vacuum: F = q₁q₂/(4πε₀r²). Magnetic permeability of free space: μ₀ ≈ 4π × 10⁻⁷ N/A². Approximate period of a simple pendulum with small amplitude: T ≈ 2π√(L/g). Exact period of a simple pendulum with amplitude θ₀ (M is the arithmetic–geometric mean): T = 2π√(L/g)/M(1, cos(θ₀/2)). Kepler's third law of planetary motion: T²/a³ = 4π²/(G(M + m)).
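The regular-polygon formulas above are easy to cross-check numerically. The sketch below is our own illustration (the shoelace helper is ours): it builds a regular hexagon from the circumradius formula and compares its shoelace area against the closed-form area:

```python
import math

def shoelace_area(pts):
    """Polygon area via the shoelace formula."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

n, s = 6, 1.0                              # regular hexagon with unit side
R = s / (2 * math.sin(math.pi / n))        # circumradius: (s/2) csc(pi/n)
pts = [(R * math.cos(2 * math.pi * k / n), R * math.sin(2 * math.pi * k / n))
       for k in range(n)]

area_formula = n * s**2 / (4 * math.tan(math.pi / n))   # (n s^2 / 4) cot(pi/n)
print(abs(shoelace_area(pts) - area_formula) < 1e-9)    # True
```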
https://en.wikipedia.org/wiki/Graph%20state
In quantum computing, a graph state is a special type of multi-qubit state that can be represented by a graph. Each qubit is represented by a vertex of the graph, and there is an edge between every interacting pair of qubits. In particular, they are a convenient way of representing certain types of entangled states. Graph states are useful in quantum error-correcting codes, entanglement measurement and purification and for characterization of computational resources in measurement based quantum computing models. A graph state is a particular case of a 2-uniform hypergraph state, a generalization where the edges can have any cardinality N. Formal definition Quantum graph states can be defined in two equivalent ways: through the notion of quantum circuits and through the stabilizer formalism. Quantum circuit definition Given a graph G = (V, E), with the set of vertices V and the set of edges E, the corresponding graph state is defined as |G⟩ = ∏_{(a,b)∈E} U^{ab} |+⟩^{⊗|V|}, where |+⟩ = (|0⟩ + |1⟩)/√2 and the operator U^{ab} is the controlled-Z interaction between the two vertices (corresponding to two qubits) a and b. Stabilizer formalism definition An alternative and equivalent definition is the following, which makes use of the stabilizer formalism. Define an operator K_v for each vertex v of G: K_v = X_v ∏_{w∈N(v)} Z_w, where X and Z are the Pauli matrices and N(v) is the set of vertices adjacent to v. The K_v operators commute. The graph state |G⟩ is defined as the simultaneous +1-eigenvalue eigenstate of the |V| operators K_v: K_v|G⟩ = |G⟩ for every v. Equivalence between the two definitions A proof of the equivalence of the two definitions can be found in the literature. Examples If G is a three-vertex path, then the stabilizers are X⊗Z⊗I, Z⊗X⊗Z and I⊗Z⊗X. The corresponding quantum state is |G⟩ = (1/√8) Σ_{x,y,z∈{0,1}} (−1)^(xy+yz) |xyz⟩. If G is a triangle on three vertices, then the stabilizers are X⊗Z⊗Z, Z⊗X⊗Z and Z⊗Z⊗X. The corresponding quantum state is |G⟩ = (1/√8) Σ_{x,y,z∈{0,1}} (−1)^(xy+yz+zx) |xyz⟩. Observe that the path state and the triangle state are locally equivalent to each other, i.e., they can be mapped to each other by applying one-qubit unitary transformations. Indeed, switching X and Y on the first and last qubits, while switching Y and Z on the middle qubit, maps the stabilizer group of one state onto that of the other.
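The equivalence of the two definitions can be checked numerically for small graphs. The following sketch is our own illustration (dense numpy matrices, names ours): it builds the three-vertex path graph state via the circuit definition and verifies the stabilizer condition:

```python
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
P0 = np.diag([1., 0.])
P1 = np.diag([0., 1.])

def kron_all(ops):
    return reduce(np.kron, ops)

def cz(n, a, b):
    """Controlled-Z between qubits a and b of an n-qubit register."""
    ops0 = [I] * n; ops0[a] = P0
    ops1 = [I] * n; ops1[a] = P1; ops1[b] = Z
    return kron_all(ops0) + kron_all(ops1)

edges = [(0, 1), (1, 2)]           # three-vertex path graph
n = 3
plus = np.full(2, 1 / np.sqrt(2))  # |+> state
state = kron_all([plus] * n)       # |+>^(tensor n)
for a, b in edges:
    state = cz(n, a, b) @ state    # circuit definition of the graph state

# Stabilizer definition: K_v = X_v times Z_w over all neighbours w of v.
neighbours = {0: [1], 1: [0, 2], 2: [1]}
for v in range(n):
    ops = [I] * n; ops[v] = X
    for w in neighbours[v]:
        ops[w] = Z
    assert np.allclose(kron_all(ops) @ state, state)   # +1 eigenstate
print("all three stabilizers fix the path graph state")
```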
https://en.wikipedia.org/wiki/Radiation%20intelligence
Unintentional radiation intelligence, or RINT, is military intelligence gathered and produced from unintentional radiation created by induction from electrical wiring, usually of computers, data connections and electricity networks. See also TEMPEST
https://en.wikipedia.org/wiki/Interolog
An interolog is a conserved interaction between a pair of proteins which have interacting homologs in another organism. The term was introduced in a 2000 paper by Walhout et al. Example Suppose that A and B are two different interacting human proteins, and A' and B' are two different interacting dog proteins. Then the interaction between A and B is an interolog of the interaction between A' and B' if the following conditions all hold: A is a homolog of A'. (Protein homologs have similar amino acid sequences and derive from a common ancestral sequence). B is a homolog of B'. A and B interact. A' and B' interact. Thus, interologs are homologous pairs of protein interactions across different organisms. See also Homology (biology) Systems biology Bioinformatics
https://en.wikipedia.org/wiki/Media%20server
A media server is a computer appliance or application software that stores digital media (video, audio or images) and makes it available over a network. Media servers range from servers that provide video on demand to smaller personal computers or NAS (Network Attached Storage) for the home. Purpose By definition, a media server is a device that simply stores and shares media. This definition is vague, and can allow several different devices to be called media servers. It may be a NAS drive, a home theater PC running Windows XP Media Center Edition, MediaPortal or MythTV, or a commercial web server that hosts media for a large web site. In a home setting, a media server acts as an aggregator of information: video, audio, photos, books, etc. These different types of media (whether they originated on DVD, CD, digital camera, or in physical form) are stored on the media server's hard drive. Access to these is then available from a central location. It may also be used to run special applications that allow the user(s) to access the media from a remote location via the internet. Hardware The only requirement for a media server is a method of storing media and a network connection with enough bandwidth to allow access to that media. Depending on the uses and applications that it runs, a media server may require large amounts of RAM, or a powerful multicore CPU. A RAID array may be used to create a large amount of storage, though a RAID level chosen for its performance increase is generally not necessary in a home media server, because current home network transfer speeds are slower than those of most current hard drives. However, a RAID configuration may also be used to prevent loss of the media files due to disk failure. Many media servers also have the ability to capture media. This is done with specialized hardware such as TV tuner cards. Analog TV tuner cards can capture video from analog broadcast TV and output from cable/satellite set top boxes. Thi
https://en.wikipedia.org/wiki/GeoDNS
GeoDNS (or GeoIP) is a patch for BIND DNS server software, to allow geographical split horizon (different DNS answers based on the client's geographical location), based on MaxMind's geoip (commercial) or geolite (free) databases. The objective of this technology is to enhance the domain name lookup by address resolution based on the geographical location of the client. For example, a website might have two servers, one located in France and one in the US. With GeoDNS it is possible to create a DNS record for which clients from Europe would get the IP address of the French server and clients from the US would get the American one. This makes network access faster and possibly cheaper, compared to directing all users worldwide to the same server or to multiple servers using random distribution, such as round robin. As this technology is DNS based, it is much easier to deploy than BGP anycast. It does not require any support from the ISP and will not break existing connections when the server selected for a particular client changes. However, as it is not intimately tied into the network infrastructure it is likely to be less accurate at sending data to the nearest server. The requester that the resolving DNS server sees is typically not the end user, but the DNS server of the user's ISP doing a recursive lookup, and the recursive DNS server caches the result. As ISPs typically arrange for users to use DNS servers geographically near them, the system usually works nonetheless. External links https://kb.isc.org/docs/aa-01149 BIND9 Documentation for GeoIP feature http://news.constellix.com/dns-coach-digs-deeper-into-geoip-infographic/ Geo-IP Info graphic
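The decision logic itself is simple. The toy sketch below is entirely illustrative (stubbed geolocation rule, documentation-range IP addresses, names ours): it shows the kind of answer selection a GeoDNS-patched server performs; a real deployment would consult a MaxMind GeoIP/GeoLite database instead of the stub:

```python
# Toy GeoDNS answer selection: pick an A record based on the client's country.
SERVERS = {
    "EU": "192.0.2.10",    # documentation-range IP standing in for the French server
    "US": "198.51.100.20", # standing in for the US server
}

def country_of(ip: str) -> str:
    """Stub geolocation: a real implementation queries a GeoIP database."""
    return "EU" if ip.startswith("81.") else "US"   # hypothetical rule

def resolve(name: str, client_ip: str) -> str:
    """Return the A record for `name` appropriate to the client's region."""
    return SERVERS[country_of(client_ip)]

print(resolve("www.example.com", "81.2.69.142"))   # -> 192.0.2.10 (EU answer)
print(resolve("www.example.com", "8.8.8.8"))       # -> 198.51.100.20 (US answer)
```

Note that, as the article explains, `client_ip` in practice is usually the ISP's recursive resolver rather than the end user, which is why the technique is approximate.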
https://en.wikipedia.org/wiki/Fibre%20Channel%20electrical%20interface
The Fibre Channel electrical interface is one of two related Fibre Channel standards that can be used to physically interconnect computer devices. The other standard is a Fibre Channel optical interface, which is not covered in this article. Fibre Channel signal characteristics Fibre channel electrical signals are sent over a duplex differential interface. This usually consists of twisted-pair cables with a nominal impedance of 75 ohms (single-ended) or 150 ohms (differential). This is a genuine differential signalling system so no ground reference is carried through the cable, except for the shield. Signalling is AC-coupled, with the series capacitors located at the transmitter end of the link. The definition of the Fibre Channel signalling voltage is complex. Eye-diagrams are defined for both the transmitter and receiver. There are many eye-diagram parameters which must all be met to be compliant with the standard. In simple terms, the transmitter circuit must output a signal with a minimum of 600 mV peak-to-peak differential, maximum 2000 mV peak-to-peak differential. A good signal looks rather like a sine-wave with a fundamental frequency of half the data rate, so 1 GHz for a typical system running at 2 gigabits per second. The Bit-Error Rate (BER) objective for Fibre Channel systems is 1 in 10^12 (1 bit in 1,000,000,000,000 bits). At 2 Gbit/s this equates to seven errors per hour. Therefore, this is a common event and the receiver circuitry must contain error-handling logic. In order to achieve such a low error-rate, jitter "budgets" are defined for the transmitter and cables. Fibre Channel connector pinouts There are various Fibre Channel connectors in use in the computer industry. Details of their pinouts are distributed between different official documents. The following sections describe the most common Fibre Channel pinouts with some comments about the purpose of their electrical signals. The most familiar Fibre Channel connectors are
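The quoted error-rate figure follows from simple arithmetic, as this quick check (our own illustration) shows:

```python
bit_rate = 2e9        # bits per second at 2 Gbit/s
ber = 1e-12           # BER objective: 1 error in 10^12 bits
errors_per_hour = bit_rate * 3600 * ber
print(errors_per_hour)   # 7.2, roughly the "seven errors per hour" quoted above
```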
https://en.wikipedia.org/wiki/Super%20Cars
Super Cars is a top-view racing game from Gremlin Interactive, who later produced the Lotus series of games. Stylistically, the game is influenced by Super Sprint. There are endless tracks at each of the 4 difficulty levels, which can be raced in any order (although the last track raced is made harder than usual). In the races the player wins money, which can be spent on temporary handling and power upgrades, plus armour plating and front/rear shooting missiles that can knock out other racers. The player must finish in the top 3 of each race to progress - initially there are 4 computer opponents, but more are added as the game progresses. The car can be upgraded throughout the game via the shop section. The player is given an initial price, but also a number of options of things to say to the salesman - with the right combination, the price will drop. The NES version was released exclusively in America in 1991 by Electro Brain. It was followed by Super Cars II in 1991. Cars Three cars are available for purchase during the game, the Taraco Neoroder Turbo, the Vaug Interceptor Turbo and the Retron Parsec Turbo. Each appears to be based on a real car of the time with the Retron Parsec Turbo being based on the Cizeta-Moroder V16T, the Vaug Interceptor based on the Honda NSX and the Taraco Neoroder based on the Alfa Romeo SZ (Sprint Zagato) but with some slight changes. This is in slight contrast to the box art, where the blue "starter" car (Taraco) instead more closely resembles a contemporary European Ford Fiesta or Escort Cosworth convertible. The Retron is also portrayed differently on the box art, where it is a Lamborghini Countach instead of a Cizeta. External links Super Cars at MobyGames
https://en.wikipedia.org/wiki/Tundra%20orbit
A Tundra orbit is a highly elliptical geosynchronous orbit with a high inclination (approximately 63.4°), an orbital period of one sidereal day, and a typical eccentricity between 0.2 and 0.3. A satellite placed in this orbit spends most of its time over a chosen area of the Earth, a phenomenon known as apogee dwell, which makes such orbits particularly well suited for communications satellites serving high-latitude regions. The ground track of a satellite in a Tundra orbit is a closed figure 8 with a smaller loop over either the northern or southern hemisphere. This differentiates them from Molniya orbits designed to service high-latitude regions, which have the same inclination but half the period and do not loiter over a single region. Uses Tundra and Molniya orbits are used to provide high-latitude users with higher elevation angles than a geostationary orbit. This is desirable as broadcasting to these latitudes from a geostationary orbit (above the Earth's equator) requires considerable power due to the low elevation angles, and the extra distance and atmospheric attenuation that comes with it. Sites located above 81° latitude are unable to view geostationary satellites at all, and as a rule of thumb, elevation angles of less than 10° can cause problems, depending on the communications frequency. Highly elliptical orbits provide an alternative to geostationary ones, as they remain over their desired high-latitude regions for long periods of time at the apogee. Their convenience is mitigated by cost, however: two satellites are required to provide continuous coverage from a Tundra orbit (three from a Molniya orbit). A ground station receiving data from a satellite constellation in a highly elliptical orbit must periodically switch between satellites and deal with varying signal strengths, latency and Doppler shifts as the satellite's range changes throughout its orbit. These changes are less pronounced for satellites in a Tundra orbit, given their increased distanc
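The geometry of such an orbit follows from Kepler's third law. The sketch below is our own illustration (standard constants): it computes the semi-major axis shared by all geosynchronous orbits and the apogee/perigee radii for a representative eccentricity:

```python
import math

MU_EARTH = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
T_SIDEREAL = 86164.0905        # one sidereal day, s

# Kepler's third law: T^2 = 4 pi^2 a^3 / mu  =>  a = (mu T^2 / (4 pi^2))^(1/3)
a = (MU_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"semi-major axis: {a / 1e3:.0f} km")     # ~42164 km for any geosynchronous orbit

e = 0.25                                        # a typical Tundra eccentricity
print(f"apogee radius:  {a * (1 + e) / 1e3:.0f} km")
print(f"perigee radius: {a * (1 - e) / 1e3:.0f} km")
```

The large apogee radius (about 52,700 km here) is where the satellite lingers, which is exactly the apogee-dwell behaviour described above.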
https://en.wikipedia.org/wiki/Compactness%20measure
A compactness measure is a numerical quantity representing the degree to which a shape is compact. The circle and the sphere are the most compact planar and solid shapes, respectively. Properties Various compactness measures are used. However, these measures have the following in common: They are applicable to all geometric shapes. They are independent of scale and orientation. They are dimensionless numbers. They are not overly dependent on one or two extreme points in the shape. They agree with intuitive notions of what makes a shape compact. Examples A common compactness measure is the isoperimetric quotient, the ratio of the area of the shape to the area of a circle (the most compact shape) having the same perimeter. In the plane, this is equivalent to the Polsby–Popper test. Alternatively, the shape's area could be compared to that of its bounding circle, its convex hull, or its minimum bounding box. Similarly, a comparison can be made between the perimeter of the shape and that of its convex hull, its bounding circle, or a circle having the same area. Other tests involve determining how much area overlaps with a circle of the same area or a reflection of the shape itself. Compactness measures can be defined for three-dimensional shapes as well, typically as functions of volume and surface area. One example of a compactness measure is sphericity; other dimensionless ratios built from the volume and surface area are also in use. For raster shapes, i.e. shapes composed of pixels or cells, some tests involve distinguishing between exterior and interior edges (or faces). More sophisticated measures of compactness include calculating the shape's moment of inertia or boundary curvature. Applications A common use of compactness measures is in redistricting. The goal is to maximize the compactness of electoral districts, subject to other constraints, and thereby to avoid gerrymandering. Another use is in zoning, to regulate the manner in which land can be subdivided into building l
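As a concrete illustration of ours (not from the article), the Polsby–Popper form of the isoperimetric quotient is a one-line computation:

```python
import math

def polsby_popper(area, perimeter):
    """Isoperimetric quotient 4*pi*A / P^2: 1.0 for a circle, < 1 otherwise."""
    return 4 * math.pi * area / perimeter**2

print(polsby_popper(math.pi, 2 * math.pi))  # unit circle   -> 1.0
print(polsby_popper(1.0, 4.0))              # unit square   -> pi/4 ~ 0.785
print(polsby_popper(10.0, 22.0))            # 1x10 rectangle -> ~0.26 (elongated, less compact)
```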
https://en.wikipedia.org/wiki/Cetylpyridinium%20chloride
Cetylpyridinium chloride (CPC) is a cationic quaternary ammonium compound used in some types of mouthwashes, toothpastes, lozenges, throat sprays, breath sprays, and nasal sprays. It is an antiseptic that kills bacteria and other microorganisms. It has been shown to be effective in preventing dental plaque and reducing gingivitis. It has also been used as an ingredient in certain pesticides. Though one study seems to indicate cetylpyridinium chloride does not cause brown tooth stains, at least one mouthwash containing CPC as an active ingredient bears the warning label "In some cases, antimicrobial rinses may cause surface staining to teeth," following a failed class-action lawsuit brought by customers whose teeth were stained. The name breaks down as: cetyl- means cetyl group, which derives from cetyl alcohol, first isolated from whale oil; pyridinium refers to the cation [C5H5NH]+, the conjugate acid of pyridine; chloride refers to the anion Cl−. Medical use OTC (over the counter) products containing cetylpyridinium chloride include oral wash, oral rinse, and ingestible products, such as lozenges and over-the-counter cough syrup. The United States' federal Food and Drug Administration's monograph on oral antiseptic drug products reviewed the data regarding CPC. The National Library of Medicine Toxicology Data Network (TOXNET) reviewed the range of toxicity of CPC and stated "Significant toxicity is rare after exposure to low concentration products that are typically available in the home." The fatal dose in humans ingesting cationic detergents has been estimated to be 1 to 3 g. Therefore, a person using a typical oral ingestible product that provides 0.25 mg CPC per dose would need to take 4,000 doses at one time to reach the estimated fatal dose range. A review found that mouthwashes containing CPC "provide a small but significant additional benefit when compared with toothbrushing only or toothbrushing followed
https://en.wikipedia.org/wiki/Intermittency
In dynamical systems, intermittency is the irregular alternation of phases of apparently periodic and chaotic dynamics (Pomeau–Manneville dynamics), or different forms of chaotic dynamics (crisis-induced intermittency). Experimentally, intermittency appears as long periods of almost periodic behavior interrupted by chaotic behavior. As control variables change, the chaotic behavior becomes more frequent until the system is fully chaotic. This progression is known as the intermittency route to chaos. Pomeau and Manneville described three routes to intermittency where a nearly periodic system shows irregularly spaced bursts of chaos. These (type I, II and III) correspond to the approach to a saddle-node bifurcation, a subcritical Hopf bifurcation, or an inverse period-doubling bifurcation. In the apparently periodic phases the behaviour is only nearly periodic, slowly drifting away from an unstable periodic orbit. Eventually the system gets far enough away from the periodic orbit to be affected by chaotic dynamics in the rest of the state space, until it gets close to the orbit again and returns to the nearly periodic behaviour. Since the time spent near the periodic orbit depends sensitively on how closely the system entered its vicinity (in turn determined by what happened during the chaotic period) the length of each phase is unpredictable. Another kind, on-off intermittency, occurs when a previously transversally stable chaotic attractor with dimension less than the embedding space begins to lose stability. Near unstable orbits within the attractor, orbits can escape into the surrounding space, producing a temporary burst before returning to the attractor. In crisis-induced intermittency a chaotic attractor suffers a crisis, where two or more attractors cross the boundaries of each other's basin of attraction. As an orbit moves through the first attractor it can cross over the boundary and become attracted to the second attractor, where it will stay until its
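Type-I intermittency is easy to reproduce numerically with the logistic map just below the tangent (saddle-node) bifurcation at r = 1 + √8 ≈ 3.8284 that opens its period-3 window. The sketch below is our own illustration (the laminar-phase threshold is an arbitrary choice):

```python
import numpy as np

r = 3.8282            # slightly below 1 + sqrt(8): type-I intermittency regime
x = 0.5
xs = []
for _ in range(20_000):
    x = r * x * (1 - x)     # logistic map
    xs.append(x)
xs = np.array(xs)

# Crude laminar-phase detector: during the nearly period-3 phases, x_{n+3} ~ x_n.
dev = np.abs(xs[3:] - xs[:-3])
laminar = dev < 1e-3
print(f"fraction of time in nearly periodic (laminar) phases: {laminar.mean():.2f}")
```

Plotting `xs` shows the long, almost period-3 stretches interrupted by short chaotic bursts, exactly the alternation described above.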
https://en.wikipedia.org/wiki/Mental%20Calculation%20World%20Cup
The Mental Calculation World Cup (German: Weltmeisterschaften im Kopfrechnen, or World Championship in Mental Calculation) is an international competition for mental calculators, held every two years in Germany. Mental Calculation World Cup 2004 The first Mental Calculation World Cup was held in Annaberg-Buchholz, Germany on 30 October 2004. There were 17 participants from 10 countries. The World Cup involved the following contests (and two surprise tasks): Adding ten 10-digit numbers, 10 tasks in 10 minutes Winner: Alberto Coto (Spain); 10 correct results; 5:50 minutes, world record Multiplying two 8-digit numbers, 10 tasks in 15 minutes Winner: Alberto Coto (Spain), 8 correct results Calendar Calculations, two series of one minute each, dates from the years 1600–2100 Winner: Matthias Kesselschläger (Germany), 33 correct results, world record Square roots of 6-digit numbers, 10 tasks in 15 minutes Winner: Jan van Koningsveld (Germany) In the overall ranking, first place was taken by Robert Fountain (Great Britain); the runner-up was Jan van Koningsveld (Germany). Mental Calculation World Cup 2006 The second Mental Calculation World Cup was held on 4 November 2006 in the Mathematikum museum in Gießen, Germany. 26 calculators from 11 countries took part. The World Cup involved the following contests (and two surprise tasks): Adding ten 10-digit numbers, 10 tasks in 10 minutes Winner: Jorge Arturo Mendoza Huertas (Peru); 10 correct results Multiplying two 8-digit numbers, 10 tasks in 15 minutes Winner: Alberto Coto (Spain) Calendar Calculations, two series of one minute each, dates from the years 1600–2100 Winner: Matthias Kesselschläger (Germany) Square roots of 6-digit numbers, 10 tasks in 15 minutes Winner: Robert Fountain (Great Britain) Robert Fountain (Great Britain) defended his title in the overall competition; places 2 to 4 were won by Jan van Koningsveld (Netherlands), Gert Mittring (Germany) and Yusnier Viera Romero (Cuba). Mental Calculation
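The calendar task, naming the weekday of a date between 1600 and 2100, can be cross-checked mechanically with Zeller's congruence. A minimal sketch of ours:

```python
def day_of_week(day, month, year):
    """Zeller's congruence: returns 0 = Saturday, 1 = Sunday, ..., 6 = Friday."""
    if month < 3:                 # treat Jan/Feb as months 13/14 of the previous year
        month += 12
        year -= 1
    k, j = year % 100, year // 100
    return (day + 13 * (month + 1) // 5 + k + k // 4 + j // 4 + 5 * j) % 7

NAMES = ["Saturday", "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
print(NAMES[day_of_week(14, 3, 1879)])   # Friday (14 March 1879)
```

Competitors, of course, use mental shortcuts rather than the raw congruence, but the formula is a convenient referee.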
https://en.wikipedia.org/wiki/KKOL%20%28AM%29
KKOL (1300 kHz) is an AM radio station in Seattle, Washington. It is owned by Salem Media Group. It airs a conservative talk radio format, branded as "1300 The Answer," featuring nationally syndicated Salem Radio Network hosts including Dennis Prager, Mike Gallagher, Sebastian Gorka, Hugh Hewitt and Charlie Kirk. The radio studios and offices are on Fifth Avenue South. KKOL is the oldest radio station in Seattle, first licensed on May 23, 1922. The transmitter site is on North Madison Avenue on Bainbridge Island, co-located with KLFE 1590 AM. By day, KKOL transmits 50,000 watts, the maximum for AM radio stations in the U.S., using a two-tower array directional antenna. At night it switches to non-directional operation, but to protect other stations on 1300 AM from interference, it reduces power to 3,200 watts. History KDZE KKOL was first licensed, with the sequentially assigned call letters KDZE, on May 23, 1922. It was owned by the Rhodes Company Department Store at 1321 Second Avenue in Seattle. In the early days of broadcasting, some stations were owned by department stores and electronics stores, to promote the sale of receivers. C. B. Williams, the department store's advertising manager, coordinated the installation of the initial 50-watt transmitter. The station's glass-enclosed studio was located on the second floor of the store, where shoppers could observe its operation. At this time there was only a single wavelength, 360 meters (833 kHz), available for "entertainment" broadcasts, so KDZE was required to make a time-sharing agreement with the other stations already in operation. On June 23, Seattle stations were scheduled to operate from noon to 10:30 p.m., with KDZE assigned the 3:30 to 4:15 p.m. time period. In May 1923, the U.S. Commerce Department, which regulated radio at this time, made a range of frequencies available to "Class B" stations that had higher powers and better programming. The Seattle region was initially assigned 610 kHz, with 660
https://en.wikipedia.org/wiki/Coprinus%20comatus
Coprinus comatus, commonly known as the shaggy ink cap, lawyer's wig, or shaggy mane, is a common fungus often seen growing on lawns, along gravel roads and waste areas. The young fruit bodies first appear as white cylinders emerging from the ground, then the bell-shaped caps open out. The caps are white, and covered with scales—this is the origin of the common names of the fungus. The gills beneath the cap are white, then pink, then turn black and deliquesce ('melt') into a black liquid filled with spores (hence the "ink cap" name). This mushroom is unusual because it will turn black and dissolve itself in a matter of hours after being picked or depositing spores. When young it is an excellent edible mushroom provided that it is eaten soon after being collected (it keeps very badly because of the autodigestion of its gills and cap). If long-term storage is desired, microwaving, sauteing or simmering until limp will allow the mushrooms to be stored in a refrigerator for several days or frozen. Also, placing the mushrooms in a glass of ice water will delay the decomposition for a day or two so that one has time to incorporate them into a meal. Processing or icing must be done whether for eating or storage within four to six hours of harvest to prevent undesirable changes to the mushroom. The species is cultivated in China as food. Taxonomy The shaggy ink cap was first described by Danish naturalist Otto Friedrich Müller in 1780 as Agaricus comatus, before being given its current binomial name in 1797 by Christiaan Hendrik Persoon. Its specific name derives from coma, or "hair", hence comatus, "hairy" or "shaggy". Other common names include lawyer's wig, and shaggy mane. Coprinus comatus is the type species for the genus Coprinus. This genus was formerly considered to be a large one with well over 100 species. However, molecular analysis of DNA sequences showed that the former species belonged in two families, the Agaricaceae and the Psathyrellaceae. Coprinus coma
https://en.wikipedia.org/wiki/Hofbauer%20cell
Hofbauer cells are oval eosinophilic histiocytes with granules and vacuoles found in the placenta. They are of mesenchymal origin, arising in the mesoderm of the chorionic villus, and are particularly numerous in early pregnancy. Etymology They are named after J. Isfred Isidore Hofbauer (1871-1961), a German-American gynecologist who described the cell type in his book on the subject (translated title: Biology of the Human Placenta, with a special emphasis on the question of fetal nourishment). Function They are believed to be a type of macrophage and are most likely involved in preventing the transmission of pathogens from the mother to the fetus (vertical transmission). Although there are many studies concerning placental vasculogenesis and angiogenesis, there has been a lack of evidence on the possible roles of Hofbauer cells in these processes. According to a systems level single-cell transcriptomics based study of human placental cell-cell communication, Hofbauer cells produce HBEGF, an EGFR ligand, which drives differentiation of villous cytotrophoblasts (VCT) towards syncytiotrophoblasts (SCT). Histology In histology sections, Hofbauer cells appear with a discernible amount of cytoplasm. See also Chorionic villi
https://en.wikipedia.org/wiki/Sobolev%20inequality
In mathematical analysis there is a class of Sobolev inequalities, relating norms, including those of Sobolev spaces. These are used to prove the Sobolev embedding theorem, giving inclusions between certain Sobolev spaces, and the Rellich–Kondrachov theorem showing that under slightly stronger conditions some Sobolev spaces are compactly embedded in others. They are named after Sergei Lvovich Sobolev. Sobolev embedding theorem Let W^(k,p)(R^n) denote the Sobolev space consisting of all real-valued functions on R^n whose first k weak derivatives are functions in L^p. Here k is a non-negative integer and 1 ≤ p < ∞. The first part of the Sobolev embedding theorem states that if k > ℓ and p and q are two real numbers such that 1 ≤ p < q < ∞ and (k − ℓ)p < n with 1/q = 1/p − (k − ℓ)/n, then W^(k,p)(R^n) ⊆ W^(ℓ,q)(R^n) and the embedding is continuous. In the special case of k = 1 and ℓ = 0, Sobolev embedding gives W^(1,p)(R^n) ⊆ L^(p*)(R^n), where p* is the Sobolev conjugate of p, given by 1/p* = 1/p − 1/n. This special case of the Sobolev embedding is a direct consequence of the Gagliardo–Nirenberg–Sobolev inequality. The result should be interpreted as saying that if a function f in L^p(R^n) has one derivative in L^p, then f itself has improved local behavior, meaning that it belongs to the space L^(p*) where p* > p. (Note that 1/p* < 1/p, so that p* > p.) Thus, any local singularities in f must be more mild than for a typical function in L^p. The second part of the Sobolev embedding theorem applies to embeddings in Hölder spaces C^(r,α)(R^n). If n < pk and r + α = k − n/p with α ∈ (0, 1), then one has the embedding W^(k,p)(R^n) ⊂ C^(r,α)(R^n). This part of the Sobolev embedding is a direct consequence of Morrey's inequality. Intuitively, this inclusion expresses the fact that the existence of sufficiently many weak derivatives implies some continuity of the classical derivatives. In particular, as long as pk > n, the embedding criterion will hold with r = 0 and some positive value of α. That is, for a function f on R^n, if f has k derivatives in L^p and pk > n, then f will be continuous (and actually Hölder continuous with some positive exponent α). Generalizations The Sobolev embedding theorem holds for Sobolev spaces on other suitable domains
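For concreteness, here is the special case n = 3, p = 2 of the Gagliardo–Nirenberg–Sobolev inequality written out (a worked instance of the statement above; C denotes an unspecified constant depending only on the dimension):

```latex
p^* = \frac{np}{n-p} = \frac{3 \cdot 2}{3-2} = 6,
\qquad
\|u\|_{L^{6}(\mathbb{R}^{3})} \le C \, \|\nabla u\|_{L^{2}(\mathbb{R}^{3})}
\quad \text{for all } u \in W^{1,2}(\mathbb{R}^{3}),
```

so a function on R³ with one square-integrable weak derivative automatically lies in L⁶, the sense in which its singularities must be "mild".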
https://en.wikipedia.org/wiki/Arvid%20Noe
Arne Vidar Røed (23 July 1946 – 24 April 1976), known in medical literature by the anagram Arvid Darre Noe, was a Norwegian sailor and truck driver who contracted one of the earliest confirmed cases of HIV/AIDS. His was the first confirmed HIV case in Europe though the disease was not identified at the time of his death. The virus spread to his wife and youngest daughter both of whom also died; this was the first documented cluster of AIDS cases before the AIDS epidemic of the early 1980s. The researchers studying the cases referred to Røed as the "Norwegian sailor" or the anagram "Arvid Darre Noe" to conceal his identity; his true name, Arne Vidar Røed, became known long after his death. Illness and death Røed began his career in the merchant navy in 1961, at the age of 15. As established by the journalist Edward Hooper, Røed visited Africa twice during his travels; the first time from 1961 to 1962 on board the Hoegh Aronde, along the west coast of Africa to Douala, Cameroon. On this trip, Røed contracted gonorrhea. By 1968, Røed was no longer a sailor and was working as a long haul truck driver throughout Europe mainly in West Germany. Beginning in 1968, Røed suffered from joint pain, lymphedema, and lung infections (This was also the year American teenager Robert Rayford first presented with similar symptoms; he was later identified as the first North American AIDS case). Røed's condition stabilized with treatment until 1975, when his symptoms worsened. He developed motor control difficulties and dementia and died on 24 April 1976. His wife grew ill with similar symptoms and died in December. Although their two older children were not born infected, their third child, a daughter, died on 4 January 1976, at the age of eight, and was the first person documented to have died of AIDS outside the United States. Røed, his wife, and his daughter were buried in Borre, Vestfold, Norway. Later investigations Approximately a decade after Røed's death, tests
https://en.wikipedia.org/wiki/Stern%27s%20Pickle%20Works
Stern's Pickle Works, headquartered at 111 Powell Place off Melville Road in Farmingdale, New York, was the last remaining pickle factory on Long Island from the 19th century. History In 1888 Jarvis Andrew Lattin (1853–1941) started a pickle and sauerkraut factory in Farmingdale, New York. There were many pickling companies already established in the area. He had a house built on the land next to the factory. The factory in 1894 was sold to Aaron Stern and it became the "Stern and Lattin Pickle Company" and in 1914 "Stern and Brauner". It was also listed as "Stern Pickle Products, Inc." and "Stern's Pickle Works". It was at 111 Powell Place off Melville Road and lasted until 1985. Aaron Stern (1876-1952) He went to the US in 1893 from Austria and was naturalized in 1898. He married Anna (1889-1933) in 1910 and had the following children: Sidney Stern (1915–2008); Nathan Stern; Joseph Stern (1909-1996); Hilda Stern; and Edythe Stern (1918-2016). Aaron was living in Brooklyn. Names Lattin Pickle company (c1888) Stern and Lattin Pickle Company (c1894) Stern and Brauner Stern Pickle Products, Inc. (c1894-1985) and Stern's Pickle Works
https://en.wikipedia.org/wiki/Chronic%20inflammatory%20demyelinating%20polyneuropathy
Chronic inflammatory demyelinating polyneuropathy (CIDP) is an acquired autoimmune disease of the peripheral nervous system characterized by progressive weakness and impaired sensory function in the legs and arms. The disorder is sometimes called chronic relapsing polyneuropathy (CRP) or chronic inflammatory demyelinating polyradiculoneuropathy (because it involves the nerve roots). CIDP is closely related to Guillain–Barré syndrome and it is considered the chronic counterpart of that acute disease. Its symptoms are also similar to progressive inflammatory neuropathy. It is one of several types of neuropathy. Types Several variants have been reported. Especially important are: An asymmetrical variant of CIDP is known as Lewis-Sumner Syndrome or MADSAM (multifocal acquired demyelinating sensory and motor neuropathy). A variant with CNS involvement named combined central and peripheral demyelination (CCPD). This variant is special because it belongs at the same time to the CIDP syndrome and to the multiple sclerosis spectrum. These cases seem to be related to the presence of anti-neurofascin autoantibodies. Causes Chronic inflammatory demyelinating polyneuropathy (or polyradiculoneuropathy) is considered an autoimmune disorder destroying myelin, the protective covering of the nerves. Typical early symptoms are "tingling" (a sort of electrified vibration, or paresthesia) or numbness in the extremities, frequent (night) leg cramps, loss of reflexes (in knees), muscle fasciculations, "vibration" feelings, loss of balance, general muscle cramping and nerve pain. CIDP is extremely rare but under-recognized and under-treated due to its heterogeneous presentation (both clinical and electrophysiological) and the limitations of clinical, serologic, and electrophysiologic diagnostic criteria. Despite these limitations, early diagnosis and treatment is favoured in preventing irreversible axonal loss and improving functional recovery. There is a lack of awareness and treatmen
https://en.wikipedia.org/wiki/Active%20Format%20Description
In television technology, Active Format Description (AFD) is a standard set of codes that can be sent in the MPEG video stream or in the baseband SDI video signal and that carry information about the aspect ratio and other characteristics of the active picture. It has been used by television broadcasters to enable both 4:3 and 16:9 television sets to optimally present pictures transmitted in either format. It has also been used by broadcasters to dynamically control how down-conversion equipment formats widescreen 16:9 pictures for 4:3 displays. Standard AFD codes provide information to video devices about where in the coded picture the active video is and also the "protected area" which is the area that needs to be shown. Outside the protected area, edges at the sides or the top can be removed without the viewer missing anything significant. Video decoders and display devices can then use this information, together with knowledge of the display shape and user preferences, to choose a presentation mode. AFD can be used in the generation of Widescreen signaling, although MPEG alone contains enough information to generate this. AFDs are not part of the core MPEG standard; they were originally developed within the Digital TV Group in the UK and submitted to DVB as an extension, which has subsequently also been adopted by ATSC (with some changes). SMPTE has also adopted AFD for baseband SDI carriage as standard SMPTE 2016-1-2007, "Format for Active Format Description and Bar Data". Active Format Description is occasionally incorrectly referred to as "Active Format Descriptor". There is no "descriptor" (descriptor has a specific meaning in ISO/IEC 13818-1, MPEG syntax). The AFD data is carried in the Video Layer of MPEG, ISO/IEC 13818-2. When carried in digital video, AFDs can be stored in the Video Index Information, in line 11 of the video. By using AFDs broadcasters can also control the timing of Aspect Ratio switches more accurately than using MPEG signalling alone. T
https://en.wikipedia.org/wiki/List%20of%20PAN%20dating%20software
PAN dating software is computer software to encourage conversation with others on a similar wireless network. Bawadu BlackPeopleMeet Live Radar MobiLuck Nokia Sensor Proxy Dating See also Lovegety Toothing
https://en.wikipedia.org/wiki/ADP%20ribosylation%20factor
ADP ribosylation factors (ARFs) are members of the ARF family of GTP-binding proteins of the Ras superfamily. ARF family proteins are ubiquitous in eukaryotic cells, and six highly conserved members of the family have been identified in mammalian cells. Although ARFs are soluble, they generally associate with membranes because of N-terminus myristoylation. They function as regulators of vesicular traffic and actin remodelling. The small ADP ribosylation factor (Arf) GTP-binding proteins are major regulators of vesicle biogenesis in intracellular traffic. They are the founding members of a growing family that includes Arl (Arf-like), Arp (Arf-related proteins) and the remotely related Sar (Secretion-associated and Ras-related) proteins. Arf proteins cycle between inactive GDP-bound and active GTP-bound forms that bind selectively to effectors. The classical structural GDP/GTP switch is characterised by conformational changes at the so-called switch 1 and switch 2 regions, which bind tightly to the gamma-phosphate of GTP but poorly or not at all to the GDP nucleotide. Structural studies of Arf1 and Arf6 have revealed that although these proteins feature the switch 1 and 2 conformational changes, they depart from other small GTP-binding proteins in that they use an additional, unique switch to propagate structural information from one side of the protein to the other. The GDP/GTP structural cycles of human Arf1 and Arf6 feature a unique conformational change that affects the beta2beta3 strands connecting switch 1 and switch 2 (interswitch) and also the amphipathic helical N-terminus. In GDP-bound Arf1 and Arf6, the interswitch is retracted and forms a pocket to which the N-terminal helix binds, the latter serving as a molecular hasp to maintain the inactive conformation. In the GTP-bound form of these proteins, the interswitch undergoes a two-residue register shift that pulls switch 1 and switch 2 up, restoring an active conformation that can bind GTP. In this confo
https://en.wikipedia.org/wiki/Keyboard%20tablature
Keyboard tablature is a form of musical notation for keyboard instruments. Widely used in some parts of Europe from the 15th century, it co-existed with, and was eventually replaced by modern staff notation in the 18th century. The defining characteristic of the best known type, German organ tablature, is the use of letters to indicate pitch (with added stems or loops to indicate accidentals) as well as beams for rhythm. Spain and Portugal used a slightly different cipher tablature, called cifra. Historical details The earliest extant music manuscripts written in German tablature date from the first half of the 15th century, with the oldest example, a German manuscript dating from 1432, containing the earliest known setting of a partial organ mass as well as a piece based on a cantus firmus. These manuscripts used letters (the same as today) to identify pitch, with the upper voice typically written on a staff in mensural notation. This style was also present in other German-speaking areas, such as Austria. These manuscripts contain valuable information as to the evolution of the music from the period, with extensive evidence of the influence of vocal, and later dance music, on early instrumental music. This practice which could still be seen in collections from the 16th century eventually led to the full-fledged Baroque dance suites of later centuries. This hybrid tablature was also featured in some early printed music books, such as Arnolt Schlick’s Tabulaturen etlicher Lobgesang und Lidlein of 1512. Later notation that included the upper voice in letters as well became prevalent in the latter part of the 16th century. Even works published in open score, such as Samuel Scheidt's Tablatura Nova (1624), may have been influenced by the strict vertical alignment of so-called "new German organ tablature". Remaining in use in Germany (and neighboring areas, such as modern-day Hungary or Poland) through the time of Bach, the music of some composers of the period remains
https://en.wikipedia.org/wiki/TFNP
In computational complexity theory, the complexity class TFNP is the class of total function problems which can be solved in nondeterministic polynomial time. That is, it is the class of function problems that are guaranteed to have an answer, and this answer can be checked in polynomial time, or equivalently it is the subset of FNP where a solution is guaranteed to exist. The abbreviation TFNP stands for "Total Function Nondeterministic Polynomial". TFNP contains many natural problems that are of interest to computer scientists. These problems include integer factorization, finding a Nash equilibrium of a game, and searching for local optima. TFNP is widely conjectured to contain problems that are computationally intractable, and several such problems have been shown to be hard under cryptographic assumptions. However, there are no known unconditional intractability results or results showing NP-hardness of TFNP problems. TFNP is not believed to have any complete problems. Formal Definition The class TFNP is formally defined as follows. A binary relation P(x,y) is in TFNP if and only if there is a deterministic polynomial time algorithm that can determine whether P(x,y) holds given both x and y, and for every x, there exists a y which is at most polynomially longer than x such that P(x,y) holds. It was first defined by Megiddo and Papadimitriou in 1989, although TFNP problems and subclasses of TFNP had been defined and studied earlier. Examples Pigeonhole Principle Problem Input: A (polynomially computable) mapping f which maps a set of n+1 items to a set of n items. Question: Find two items a and b such that f(a)=f(b). Let x be a mapping, and y a 2-tuple of items in its domain. The binary relation in question P(x,y) has the meaning "the images of both entries of y under x are equal", which, since the mapping is polynomially computable, is polynomially decidable. Moreover, such a tuple y must exist for any mapping because of the pigeonhole principle. Connec
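When the mapping's domain is small and given explicitly, the pigeonhole search problem has an obvious exhaustive algorithm (the problem is only believed hard when f is presented succinctly, e.g. as a circuit over exponentially many inputs). A sketch of ours:

```python
def pigeonhole_collision(f, n):
    """Given f mapping {0,...,n} (n+1 items) into {0,...,n-1} (n items),
    return a pair (a, b) with a != b and f(a) == f(b).

    The pigeonhole principle guarantees a solution exists, so the search
    relation is total; checking a candidate pair takes polynomial time."""
    seen = {}
    for a in range(n + 1):
        y = f(a)
        if y in seen:
            return seen[y], a
        seen[y] = a
    raise AssertionError("unreachable: f cannot be injective on n+1 items")

# Toy instance: f(x) = x^2 mod 10 on the 11 items {0,...,10}
print(pigeonhole_collision(lambda x: (x * x) % 10, 10))   # e.g. (4, 6)
```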
https://en.wikipedia.org/wiki/Expressive%20power%20%28computer%20science%29
In computer science, the expressive power (also called expressiveness or expressivity) of a language is the breadth of ideas that can be represented and communicated in that language. The more expressive a language is, the greater the variety and quantity of ideas it can be used to represent. For example, the Web Ontology Language expression language profile (OWL2 EL) lacks ideas (such as negation) that can be expressed in OWL2 RL (rule language). OWL2 EL may therefore be said to have less expressive power than OWL2 RL. These restrictions allow for more efficient (polynomial time) reasoning in OWL2 EL than in OWL2 RL. So OWL2 EL trades some expressive power for more efficient reasoning (processing of the knowledge representation language). Information description The term expressive power may be used with a range of meanings. It may mean a measure of the ideas expressible in that language, either regardless of ease (theoretical expressivity) or concisely and readily (practical expressivity). The first sense dominates in areas of mathematics and logic that deal with the formal description of languages and their meaning, such as formal language theory, mathematical logic and process algebra. In informal discussions, the term often refers to the second sense, or to both. This is often the case when discussing programming languages. Efforts have been made to formalize these informal uses of the term. The notion of expressive power is always relative to a particular kind of thing that the language in question can describe, and the term is normally used when comparing languages that describe the same kind of things, or at least comparable kinds of things. The design of languages and formalisms involves a trade-off between expressive power and analyzability. The more a formalism can express, the harder it becomes to understand what instances of the formalism say. Decision problems become harder to answer or completely undecidable. Examples In formal language theory For
https://en.wikipedia.org/wiki/Quartic%20graph
In the mathematical field of graph theory, a quartic graph is a graph where all vertices have degree 4. In other words, a quartic graph is a 4-regular graph. Examples Several well-known graphs are quartic. They include: The complete graph K5, a quartic graph with 5 vertices, the smallest possible quartic graph. The Chvátal graph, another quartic graph with 12 vertices, the smallest quartic graph that both has no triangles and cannot be colored with three colors. The Folkman graph, a quartic graph with 20 vertices, the smallest semi-symmetric graph. The Meredith graph, a quartic graph with 70 vertices that is 4-connected but has no Hamiltonian cycle, disproving a conjecture of Crispin Nash-Williams. Every medial graph is a quartic plane graph, and every quartic plane graph is the medial graph of a pair of dual plane graphs or multigraphs. Knot diagrams and link diagrams are also quartic plane multigraphs, in which the vertices represent the crossings of the diagram and are marked with additional information concerning which of the two branches of the knot crosses the other branch at that point. Properties Because the degree of every vertex in a quartic graph is even, every connected quartic graph has an Euler tour. And as with regular bipartite graphs more generally, every bipartite quartic graph has a perfect matching. In this case, a much simpler and faster algorithm for finding such a matching is possible than for irregular graphs: by selecting every other edge of an Euler tour, one may find a 2-factor, which in this case must be a collection of cycles, each of even length, with each vertex of the graph appearing in exactly one cycle. By selecting every other edge again in these cycles, one obtains a perfect matching in linear time. The same method can also be used to color the edges of the graph with four colors in linear time. Quartic graphs have an even number of Hamiltonian decompositions. Open problems It is an open conjecture whether all quartic Hamil
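Both facts, the Euler tour and the every-other-edge construction, are easy to demonstrate with the networkx library. The following sketch is our own illustration (it uses a bipartite quartic example, the complete bipartite graph K4,4, so that the 2-factor's cycles are guaranteed even):

```python
import networkx as nx

G = nx.complete_bipartite_graph(4, 4)        # a bipartite quartic graph
assert all(d == 4 for _, d in G.degree())

# Step 1: every other edge of an Euler tour forms a 2-factor
# (a disjoint union of cycles covering all vertices).
tour = list(nx.eulerian_circuit(G))
two_factor = nx.Graph(tour[::2])
assert all(d == 2 for _, d in two_factor.degree())

# Step 2: since the graph is bipartite, every cycle of the 2-factor is even,
# so taking every other edge of each cycle yields a perfect matching.
matching = []
for component in nx.connected_components(two_factor):
    cycle = nx.find_cycle(two_factor.subgraph(component))
    matching.extend(cycle[::2])

matched = {v for e in matching for v in e}
assert len(matching) == G.number_of_nodes() // 2
assert len(matched) == G.number_of_nodes()
print("perfect matching:", matching)
```

The same alternation trick colors the edges with four colors in linear time: the two 2-factors from the Euler tour each split into two matchings on their even cycles.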
https://en.wikipedia.org/wiki/KZUP-CD
KZUP-CD (channel 20) is a low-power, Class A independent television station in Baton Rouge, Louisiana, United States. It is owned by Nexstar Media Group alongside Fox affiliate WGMB-TV (channel 44) and CW owned-and-operated station WBRL-CD (channel 21); Nexstar also provides certain services to NBC affiliate WVLA-TV (channel 33) under joint sales and shared services agreements (JSA/SSA) with owner White Knight Broadcasting. The stations share studios on Perkins Road in Baton Rouge, while KZUP-CD's transmitter is located near Addis, Louisiana. While "KZUP-CD" is the station's official call sign, it uses "KZUP-TV" for promotional purposes. History The station signed on the air in 1999 as WZUP, a UPN affiliate available only on cable (TCI and later Cox channel 13). It was the second UPN affiliate (of three) in the Baton Rouge area. When the station went over the air on November 26, 2002, it changed its call sign to KZUP-CA; originally it was going to air on channel 21 and WB affiliate WBRL-CA was on channel 19, but this assignment was short-lived. Channel 19 was once used as a translator station for local station and original UPN affiliate WBTR, and when KZUP went on the air, WBTR moved to previously-unused channel 41. It became an independent station after losing UPN to Raycom Media's WBXH-CA in 2003. As an independent, it called itself "Z-19," and it primarily aired African-American-oriented programming like Good Times, The Fresh Prince of Bel-Air, and The Jeffersons. For brief periods in 2005, KZUP was used to simulcast WVLA and WGMB over analog as their individual transmitter towers were turned off to allow upgrades for their digital television channels. KZUP became an affiliate of the Retro Television Network on September 15, 2008. In 2012, White Knight Broadcasting dropped RTN and resumed carrying syndicated programming. On April 24, 2013, Communications Corporation of America announced the sale of its entire group to Nexstar Broadcasting Group. WVLA and KZU
https://en.wikipedia.org/wiki/Armstrong%20limit
The Armstrong limit or Armstrong's line is a measure of altitude above which atmospheric pressure is sufficiently low that water boils at the normal temperature of the human body. Exposure to pressure below this limit results in a rapid loss of consciousness, followed by a series of changes to cardiovascular and neurological functions, and eventually death, unless pressure is restored within 60–90 seconds. On Earth, the limit is around 18–19 km (roughly 60,000 ft) above sea level, above which atmospheric air pressure drops below 0.0618 atm (6.3 kPa, 47 mmHg, or about 1 psi). The U.S. Standard Atmospheric model sets the Armstrong pressure at an altitude of 63,000 feet (19,202 m). The term is named after United States Air Force General Harry George Armstrong, who was the first to recognize this phenomenon. Effect on body fluids At or above the Armstrong limit, exposed body fluids such as saliva, tears, urine, and the liquids wetting the alveoli within the lungs—but not vascular blood (blood within the circulatory system)—will boil away without a full-body pressure suit, and no amount of breathable oxygen delivered by any means will sustain life for more than a few minutes. The NASA technical report Rapid (Explosive) Decompression Emergencies in Pressure-Suited Subjects, which discusses the brief accidental exposure of a human to near vacuum, notes: "The subject later reported that ... his last conscious memory was of the saliva on his tongue beginning to boil." At the nominal body temperature of 37 °C (98.6 °F), water has a vapour pressure of 6.3 kilopascals (47 mmHg); which is to say, at an ambient pressure of 6.3 kPa, the boiling point of water is 37 °C. A pressure of 6.3 kPa—the Armstrong limit—is about 1/16 of the standard sea-level atmospheric pressure of 101.325 kilopascals (760 mmHg). At higher altitudes water vapour from ebullism will add to the decompression bubbles of nitrogen gas and cause the body tissues to swell up, though the tissues and the skin are strong enough not to burst under the internal pressure of vapourised water. Formulas for calculating the standard pressure at a g
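The vapour-pressure claim can be checked against a standard Antoine-equation fit for water. The sketch below is our own illustration; the parameter set is a commonly tabulated one for the 1–100 °C range and should be treated as an assumption:

```python
def water_vapour_pressure_kpa(t_celsius):
    """Saturation vapour pressure of water via the Antoine equation.
    Constants (8.07131, 1730.63, 233.426) are an assumed tabulated fit
    valid roughly for 1-100 degrees C, giving pressure in mmHg."""
    p_mmhg = 10 ** (8.07131 - 1730.63 / (233.426 + t_celsius))
    return p_mmhg * 0.133322          # convert mmHg to kPa

p = water_vapour_pressure_kpa(37.0)
print(f"{p:.2f} kPa")   # ~6.3 kPa (about 47 mmHg): water boils at body temperature
```

At an ambient pressure below this value, which is exactly the condition at the Armstrong limit, exposed water at 37 °C boils.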
https://en.wikipedia.org/wiki/Atomistix%20Virtual%20NanoLab
Atomistix Virtual NanoLab (VNL) is a commercial point-and-click software for simulation and analysis of physical and chemical properties of nanoscale devices. Virtual NanoLab is developed and sold commercially by QuantumWise A/S. QuantumWise was acquired by Synopsys in 2017. Features With its graphical interface, Virtual NanoLab provides a user-friendly approach to atomic-scale modeling. The software contains a set of interactive instruments that allows the user to design nanosystems, to set up and execute numerical calculations, and to visualize the results. Samples such as molecules, nanotubes, crystalline systems, and two-probe systems (i.e. a nanostructure coupled to two electrodes) are built with a few mouse clicks. Virtual NanoLab contains a 3D visualization tool, the Nanoscope, where atomic geometries and computed results can be viewed and analyzed. One can for example plot Bloch functions of nanotubes and crystals, molecular orbitals, electron densities, and effective potentials. The numerical engine that carries out the actual simulations is Atomistix ToolKit, which combines density functional theory and non-equilibrium Green's functions to perform ab initio electronic-structure and transport calculations. Atomistix ToolKit is developed from the academic codes TranSIESTA and McDCal. See also Atomistix ToolKit NanoLanguage Atomistix
https://en.wikipedia.org/wiki/Model-driven%20engineering
Model-driven engineering (MDE) is a software development methodology that focuses on creating and exploiting domain models, which are conceptual models of all the topics related to a specific problem. Hence, it highlights and aims at abstract representations of the knowledge and activities that govern a particular application domain, rather than the computing (i.e. algorithmic) concepts. MDE is a subfield of a software design approach referred to as round-trip engineering. The scope of MDE is much wider than that of Model-Driven Architecture. Overview The MDE approach is meant to increase productivity by maximizing compatibility between systems (via reuse of standardized models), simplifying the process of design (via models of recurring design patterns in the application domain), and promoting communication between individuals and teams working on the system (via a standardization of the terminology and the best practices used in the application domain). For instance, in model-driven development, technical artifacts such as source code, documentation, tests, and more are generated algorithmically from a domain model. A modeling paradigm for MDE is considered effective if its models make sense from the point of view of a user who is familiar with the domain, and if they can serve as a basis for implementing systems. The models are developed through extensive communication among product managers, designers, developers and users of the application domain. As the models approach completion, they enable the development of software and systems. Some of the better known MDE initiatives are: The Object Management Group (OMG) initiative Model-Driven Architecture (MDA) which is leveraged by several of their standards such as Meta-Object Facility, XMI, CWM, CORBA, Unified Modeling Language (to be more precise, the OMG currently promotes the use of a subset of UML called fUML together with its action language, ALF, for model-driven architecture; a former approach r
https://en.wikipedia.org/wiki/ATLAS%20Transformation%20Language
ATL (ATLAS Transformation Language) is a model transformation language and toolkit developed and maintained by OBEO and AtlanMod. It was initiated by the AtlanMod team (previously called the ATLAS Group). In the field of Model-Driven Engineering (MDE), ATL provides ways to produce a set of target models from a set of source models. Released under the terms of the Eclipse Public License, ATL is an M2M (Eclipse) component, inside of the Eclipse Modeling Project (EMP). Overview ATL is a model transformation language (MTL) developed by OBEO and INRIA to answer the QVT Request For Proposal. QVT is an Object Management Group standard for performing model transformations. It can be used to do syntactic or semantic translation. ATL is built on top of a model transformation virtual machine. ATL is the answer of the ATLAS INRIA & LINA research group to the OMG MOF/QVT RFP. It is a model transformation language specified both as a metamodel and as a textual concrete syntax. It is a hybrid of declarative and imperative styles. The preferred style of transformation writing is declarative, which means simple mappings can be expressed simply. However, imperative constructs are provided so that mappings too complex to be handled declaratively can still be specified. An ATL transformation program is composed of rules that define how source model elements are matched and navigated to create and initialize the elements of the target models. Architecture A model-transformation-oriented virtual machine has been defined and implemented to provide execution support for ATL while maintaining a certain level of flexibility. In fact, ATL is executable simply because a specific transformation exists from its metamodel to the virtual machine bytecode. Extending ATL is therefore mainly a matter of specifying the execution semantics of the new language features in terms of simple instructions: basic actions on models (element creation and property assignment). Example An ATL program
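The entry above breaks off just as its example begins; as a stand-in, here is a toy Python analogue of ATL's declarative matched rules, where each rule pairs a guard that matches source elements with a factory that creates and initializes target elements. The Member-to-Person vocabulary echoes the classic Families2Persons ATL tutorial but is only illustrative; a real example would be written in ATL's own concrete syntax.

```python
# Toy rule engine in the spirit of ATL matched rules: each rule is a
# (guard, create) pair; the first rule whose guard matches an element
# of the source model produces the corresponding target element.
rules = [
    # Member2Female: female members map to persons titled "Ms".
    (lambda m: m["female"],
     lambda m: {"fullName": "Ms " + m["first"] + " " + m["family"]}),
    # Member2Male: all other members map to persons titled "Mr".
    (lambda m: not m["female"],
     lambda m: {"fullName": "Mr " + m["first"] + " " + m["family"]}),
]

def transform(source_model):
    """Apply the first matching rule to every source element."""
    target_model = []
    for element in source_model:
        for guard, create in rules:
            if guard(element):
                target_model.append(create(element))
                break
    return target_model

families = [
    {"first": "Marge", "family": "Simpson", "female": True},
    {"first": "Homer", "family": "Simpson", "female": False},
]
print(transform(families))
# [{'fullName': 'Ms Marge Simpson'}, {'fullName': 'Mr Homer Simpson'}]
```

As in ATL, the declarative style keeps simple mappings simple: adding a rule does not require touching the matching machinery.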
https://en.wikipedia.org/wiki/Amphiarthrosis
Amphiarthrosis is a type of continuous, slightly movable joint. Most amphiarthroses are held together by cartilage, which makes limited movement between the bones possible. For example, the joints of the vertebral column allow only small movements between adjacent vertebrae, but when added together, these movements provide the flexibility that allows the body to twist, or bend to the front, back, or side. Types In amphiarthroses, the contiguous bony surfaces can be: A symphysis: connected by broad flattened disks of fibrocartilage, of a more or less complex structure, which adhere to the ends of each bone, as in the articulations between the bodies of the vertebrae or the inferior articulation of the two hip bones (aka the pubic symphysis). The strength of the pubic symphysis is important in conferring weight-bearing stability to the pelvis. An interosseous membrane: the sheet of connective tissue joining neighboring bones (e.g. the tibia and fibula).
https://en.wikipedia.org/wiki/Power%20electronic%20substrate
The role of the substrate in power electronics is to provide the interconnections to form an electric circuit (like a printed circuit board), and to cool the components. Compared to materials and techniques used in lower-power microelectronics, these substrates must carry higher currents and provide a higher voltage isolation (up to several thousand volts). They also must operate over a wide temperature range (up to 150 or 200 °C). Direct Bonded Copper (DBC) substrate DBC substrates are commonly used in power modules because of their very good thermal conductivity. They are composed of a ceramic tile with a sheet of copper bonded to one or both sides by a high-temperature oxidation process (the copper and substrate are heated to a carefully controlled temperature in an atmosphere of nitrogen containing about 30 ppm of oxygen; under these conditions, a copper-oxygen eutectic forms which bonds successfully both to copper and to the oxides used as substrates). The top copper layer can be preformed prior to firing or chemically etched using printed circuit board technology to form an electrical circuit, while the bottom copper layer is usually kept plain. The substrate is attached to a heat spreader by soldering the bottom copper layer to it. A related technique uses a seed layer, photoimaging, and then additional copper plating to allow for fine lines (as small as 50 micrometres) and through-vias to connect the front and back sides. This can be combined with polymer-based circuits to create high-density substrates that eliminate the need for direct connection of power devices to heat sinks. One of the main advantages of DBC over other power electronic substrates is its low coefficient of thermal expansion, which is close to that of silicon (unlike that of pure copper). This ensures good thermal cycling performance (up to 50,000 cycles). DBC substrates also have excellent electrical insulation and good heat-spreading characteristics. Ceramic material used
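To put a rough number on the thermal claims above, the sketch below computes the one-dimensional conduction resistance of a typical DBC stack. The layer thicknesses (0.3 mm copper / 0.635 mm alumina / 0.3 mm copper) and conductivities are common representative values assumed here for illustration; heat spreading and interface resistances are ignored.

```python
# 1-D thermal resistance of a DBC stack: R = t / (k * A) for each layer.
layers = [
    ("top copper",    0.300e-3, 400.0),  # thickness (m), conductivity (W/m*K)
    ("alumina tile",  0.635e-3,  24.0),  # Al2O3 ceramic
    ("bottom copper", 0.300e-3, 400.0),
]
area = 1.0e-4  # footprint of 1 cm^2, in m^2

for name, t, k in layers:
    print(f"{name:13s}: {t / (k * area):.3f} K/W")
print(f"whole stack  : {sum(t / (k * area) for _, t, k in layers):.3f} K/W")
# The ceramic dominates: replacing alumina (k ~ 24 W/m*K) with aluminium
# nitride (k ~ 170 W/m*K) shrinks the ceramic term by a factor of ~7.
```

Even the dominant ceramic term stays below 0.3 K/W per square centimetre here, which is why DBC can move module heat into a spreader so effectively.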
https://en.wikipedia.org/wiki/Carrier%20preselect
Carrier preselect is a term relating to the telecommunications industry. It is a method of achieving least-cost routing (LCR) of calls without the need to program a PBX telephone system. It is the process whereby a telephone subscriber whose telephone line is maintained by one company, usually a former monopoly provider (e.g. BT), can choose to have some of their calls automatically routed across a different telephone company's network (e.g. Talk Talk) without needing to dial a special code or install special equipment. See also Local loop unbundling Wholesale line rental External links Ofcom - Carrier Pre-Selection and Wholesale Line Rental - Outbound Carrier Pre-Selection Services from GCI - Outbound Carrier Pre-Selection Services from Six Degrees Group
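To make the least-cost-routing idea concrete, here is a minimal sketch of the rate-table lookup a PBX would otherwise have to perform, and which carrier preselection instead moves into the local exchange. The carriers, prefixes, and rates are invented for illustration.

```python
# Toy least-cost routing: choose the cheapest carrier for a dialled prefix.
# Rates are pence per minute; all entries are invented examples.
rate_table = {
    "0044": [("CarrierA", 1.2), ("CarrierB", 0.9)],  # UK
    "001":  [("CarrierA", 3.5), ("CarrierB", 4.1)],  # North America
}

def route(number: str):
    """Return (carrier, rate) for the longest matching dialled prefix."""
    for length in range(len(number), 0, -1):
        candidates = rate_table.get(number[:length])
        if candidates:
            return min(candidates, key=lambda c: c[1])
    raise ValueError("no route for " + number)

print(route("00441632960000"))  # ('CarrierB', 0.9)
```

With carrier preselection the subscriber's equipment needs no such table: the chosen carrier is configured on the line itself, and the exchange routes matching calls automatically.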