https://en.wikipedia.org/wiki/Anonym.OS
|
Anonym.OS was a Live CD operating system based on OpenBSD 3.8 with strong encryption and anonymization tools. The goal of the project was to provide secure, anonymous web browsing access to everyday users. The operating system was OpenBSD 3.8, although many packages have been added to facilitate its goal. It used Fluxbox as its window manager.
The project was discontinued after the release of Beta 4 (2006).
Distributed
Designed, created and distributed by kaos.theory/security.research.
Legacy
Although this specific project is no longer updated, its successors are Incognito OS (discontinued in 2008) and FreeSBIE.
See also
Comparison of BSD operating systems
Security-focused operating system
|
https://en.wikipedia.org/wiki/Local%20nonsatiation
|
In microeconomics, the property of local nonsatiation (LNS) of consumer preferences states that for any bundle of goods there is always another bundle of goods arbitrarily close that is strictly preferred to it.
Formally, if X is the consumption set, then for any x ∈ X and every ε > 0, there exists a y ∈ X such that ‖y − x‖ ≤ ε and y is strictly preferred to x.
Several things to note are:
Local nonsatiation is implied by monotonicity of preferences. However, as the converse is not true, local nonsatiation is a weaker condition.
There is no requirement that the preferred bundle y contain more of any good – hence, some goods can be "bads" and preferences can be non-monotone.
It rules out the extreme case where all goods are "bads", since the point x = 0 would then be a bliss point.
Local nonsatiation can only occur either if the consumption set is unbounded or open (in other words, it is not compact) or if x is on a section of a bounded consumption set sufficiently far away from the ends. Near the ends of a bounded set, there would necessarily be a bliss point where local nonsatiation does not hold.
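As a concrete numeric illustration (a hypothetical sketch, not part of the formal theory; the function and utility names are invented for this example), one can probe local nonsatiation by searching a small neighbourhood of a bundle for a strictly preferred one:

```python
import math

def lns_witness(x, eps, u):
    """Search a small grid within distance eps of bundle x for a bundle
    strictly preferred under utility u.  Returns one if found, else None."""
    best = None
    for dx in (-eps / 2, 0.0, eps / 2):
        for dy in (-eps / 2, 0.0, eps / 2):
            y = (x[0] + dx, x[1] + dy)
            if math.hypot(dx, dy) <= eps and u(y) > u(x):
                best = y
    return best

u_monotone = lambda b: b[0] + b[1]               # monotone, so LNS holds
u_bliss    = lambda b: -(b[0] ** 2 + b[1] ** 2)  # bliss point at the origin

# LNS holds away from any bliss point...
assert lns_witness((1.0, 1.0), 0.1, u_monotone) is not None
# ...but fails exactly at one: no nearby bundle beats the bliss point
assert lns_witness((0.0, 0.0), 0.1, u_bliss) is None
```

A failure of the grid search is of course not a proof of satiation; the sketch only illustrates the definition's "arbitrarily close strictly preferred bundle" requirement.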
Applications of local nonsatiation
Local nonsatiation (LNS) is often applied in consumer theory, a branch of microeconomics, as an important property often assumed in theorems and propositions. Consumer theory is a study of how individuals make decisions and spend their money based on their preferences and budget. Local nonsatiation is also a key assumption for the First welfare theorem.
Indifference curve
An indifference curve is a set of all commodity bundles providing consumers with the same level of utility. The indifference curve is named so because the consumer would be indifferent between choosing any of these bundles. Under local nonsatiation, indifference curves cannot be "thick": a bundle in the interior of a thick indifference region would have no strictly preferred bundle arbitrarily close to it.
Walras’s law
Local nonsatiation is a key assumption in Walras's law. Walras's law says that if consumers have locally nonsatiated preferences, they will spend their entire budget: the budget constraint holds with equality at any optimal choice.
The indirect
|
https://en.wikipedia.org/wiki/Lemniscate%20elliptic%20functions
|
In mathematics, the lemniscate elliptic functions are elliptic functions related to the arc length of the lemniscate of Bernoulli. They were first studied by Giulio Fagnano in 1718 and later by Leonhard Euler and Carl Friedrich Gauss, among others.
The lemniscate sine and lemniscate cosine functions, usually written with the symbols sl and cl (sometimes the symbols sinlemn and coslemn or sin lemn and cos lemn are used instead), are analogous to the trigonometric functions sine and cosine. While the trigonometric sine relates the arc length to the chord length in a unit-diameter circle, the lemniscate sine relates the arc length to the chord length of a lemniscate.
The lemniscate functions have periods related to a number ϖ ≈ 2.622058 called the lemniscate constant, the ratio of a lemniscate's perimeter to its diameter. This number is a quartic analog of the (quadratic) π ≈ 3.141593, the ratio of perimeter to diameter of a circle.
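The lemniscate constant can be computed from its defining arc-length integral, ϖ = 2∫₀¹ dt/√(1 − t⁴). The sketch below (function name illustrative) removes the endpoint singularity at t = 1 with the substitution t = sin θ, which turns the integrand into 1/√(1 + sin²θ), and then applies composite Simpson's rule:

```python
import math

def lemniscate_constant(n=10_000):
    """ϖ = 2·∫₀¹ dt/√(1−t⁴).  Substituting t = sin θ gives the smooth
    integral ϖ = 2·∫₀^{π/2} dθ/√(1+sin²θ), evaluated by Simpson's rule
    (n must be even)."""
    a, b = 0.0, math.pi / 2
    h = (b - a) / n
    f = lambda th: 1.0 / math.sqrt(1.0 + math.sin(th) ** 2)
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return 2 * s * h / 3

print(lemniscate_constant())  # ≈ 2.62205755429212
```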
As complex functions, sl and cl have a square period lattice (a multiple of the Gaussian integers) with fundamental periods (1 + i)ϖ and (1 − i)ϖ, and are a special case of two Jacobi elliptic functions on that lattice: sl z = sn(z; i) and cl z = cd(z; i).
Similarly, the hyperbolic lemniscate sine and hyperbolic lemniscate cosine also have a square period lattice.
The lemniscate functions and the hyperbolic lemniscate functions are related to the Weierstrass elliptic function ℘.
Lemniscate sine and cosine functions
Definitions
The lemniscate functions sl and cl can be defined as the solution to the initial value problem:
d/dz sl z = (1 + sl² z) cl z,  d/dz cl z = −(1 + cl² z) sl z,  sl 0 = 0,  cl 0 = 1,
or equivalently as the inverses of an elliptic integral, the Schwarz–Christoffel map from the complex unit disk to a square with corners ±ϖ/2 and ±iϖ/2:
z = arcsl w = ∫₀^w dt / √(1 − t⁴).
Beyond that square, the functions can be analytically continued to the whole complex plane by a series of reflections.
By comparison, the circular sine and cosine can be defined as the solution to the initial value problem:
d/dz sin z = cos z,  d/dz cos z = −sin z,  sin 0 = 0,  cos 0 = 1,
or as inverses of a map from the upper half-plane to a half-infinite strip with real part between −π/2 and π/2 and positive imaginary part:
z = arcsin w = ∫₀^w dt / √(1 − t²).
Relation to the lemniscate co
|
https://en.wikipedia.org/wiki/Supervisory%20control%20theory
|
The supervisory control theory (SCT), also known as the Ramadge–Wonham framework (RW framework), is a method for automatically synthesizing supervisors that restrict the behavior of a plant so that as much as possible of the given specifications is fulfilled. The plant is assumed to spontaneously generate events, each of which falls into one of two categories: controllable or uncontrollable. The supervisor observes the string of events generated by the plant and may prevent the plant from generating a subset of the controllable events. However, the supervisor has no means of forcing the plant to generate an event.
In its original formulation, SCT modeled the plant and the specification as arbitrary formal languages, not necessarily the regular languages generated by finite automata that most subsequent work has used.
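A toy illustration of the synthesis idea (a sketch only; the function, the state names, and the dictionary encoding are hypothetical and far simpler than the full Ramadge–Wonham algorithm): build the product of plant and specification automata, iteratively mark states in which an uncontrollable event would leave the specification, and let the supervisor disable controllable events that lead into marked states:

```python
def synthesize(plant, spec, uncontrollable, init):
    """Prune the plant/spec product so that no uncontrollable event can
    escape the specification; return the enabled events per good state."""
    prod, frontier, seen = {}, [init], {init}
    while frontier:                              # reachable product states
        p, s = frontier.pop()
        moves = {}
        for e, p2 in plant[p].items():
            if e in spec[s]:                     # spec permits e here
                t = (p2, spec[s][e])
                moves[e] = t
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
        prod[(p, s)] = moves
    bad, changed = set(), True
    while changed:                               # backward pruning fixpoint
        changed = False
        for (p, s), moves in prod.items():
            if (p, s) in bad:
                continue
            for e in plant[p]:
                if e in uncontrollable and (e not in moves or moves[e] in bad):
                    bad.add((p, s))
                    changed = True
                    break
    return {st: {e for e, t in mv.items() if t not in bad}
            for st, mv in prod.items() if st not in bad}

# c (controllable) leads to a state where u (uncontrollable) would
# violate the spec, so the supervisor must disable c up front
plant = {"p0": {"c": "p1"}, "p1": {"u": "p2"}, "p2": {}}
spec  = {"s0": {"c": "s1"}, "s1": {}}            # spec never allows u
sup = synthesize(plant, spec, {"u"}, ("p0", "s0"))
print(sup)
```

Note how the prohibition propagates backwards through controllable events: the supervisor cannot block u itself, so it must disable the controllable c that makes u possible.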
See also
Discrete event dynamic system (DEDS)
Boolean differential calculus (BDC)
|
https://en.wikipedia.org/wiki/Isopropyl%20acetate
|
Isopropyl acetate is an ester, an organic compound which is the product of esterification of acetic acid and isopropanol. It is a clear, colorless liquid with a characteristic fruity odor.
Isopropyl acetate is a solvent with a wide variety of manufacturing uses that is miscible with most other organic solvents, and slightly soluble in water (although less so than ethyl acetate). It is used as a solvent for cellulose, plastics, oil and fats. It is a component of some printing inks and perfumes.
Isopropyl acetate decomposes slowly on contact with steel in the presence of air, producing acetic acid and isopropanol. It reacts violently with oxidizing materials and it attacks many plastics.
Isopropyl acetate is quite flammable in both its liquid and vapor forms, and it may be harmful if swallowed or inhaled.
The Occupational Safety and Health Administration has set a permissible exposure limit (PEL) of 250 ppm (950 mg/m3) over an eight-hour time-weighted average for workers handling isopropyl acetate.
|
https://en.wikipedia.org/wiki/Pleomorphism%20%28microbiology%29
|
In microbiology, pleomorphism (from Ancient Greek πλέω, pléō, "more", and μορφή, morphḗ, "form"), also pleiomorphism, is the ability of some microorganisms to alter their morphology, biological functions or reproductive modes in response to environmental conditions. Pleomorphism has been observed in some members of the Deinococcaceae family of bacteria. The modern definition of pleomorphism in the context of bacteriology is based on variation of morphology or functional methods of the individual cell, rather than a heritable change of these characters as previously believed.
Bacteria
In the first decades of the 20th century, the term "pleomorphism" was used to refer to the idea that bacteria change morphology, biological systems, or reproductive methods dramatically according to environmental cues. This claim was controversial among microbiologists of the time, and split them into two schools: the monomorphists, who opposed the claim, and the pleomorphists such as Antoine Béchamp, Ernst Almquist, Günther Enderlein, Albert Calmette, Gaston Naessens, Royal Raymond Rife, and Lida Mattman, who supported the claim. According to a 1997 journal article by Milton Wainwright, a British microbiologist, pleomorphism of bacteria lacked wide acceptance among modern microbiologists of the time.
Monomorphic theory, supported by Louis Pasteur, Rudolf Virchow, Ferdinand Cohn, and Robert Koch, emerged to become the dominant paradigm in modern medical science: it is now almost universally accepted that each bacterial cell is derived from a previously existing cell of practically the same size and shape. However it has recently been shown that certain bacteria are capable of dramatically changing shape.
Sergei Winogradsky took a middle-ground stance in the pleomorphism controversy. He agreed with the monomorphic school of thought, but disagreed with some of the foundational microbiological beliefs that the prominent monomorphists Cohn and Koch held. Winogradsky published a literature review t
|
https://en.wikipedia.org/wiki/Confluence%20%28abstract%20rewriting%29
|
In computer science, confluence is a property of rewriting systems: when a term can be rewritten in more than one way, the diverging rewrite paths can be brought back together to yield the same result. This article describes the property in the most abstract setting of an abstract rewriting system.
Motivating examples
The usual rules of elementary arithmetic form an abstract rewriting system.
For example, the expression (11 + 9) × (2 + 4) can be evaluated starting either at the left or at the right parentheses;
however, in both cases the same result is eventually obtained.
If every arithmetic expression evaluates to the same result regardless of reduction strategy, the arithmetic rewriting system is said to be ground-confluent. Arithmetic rewriting systems may be confluent or only ground-confluent depending on details of the rewriting system.
A second, more abstract example is obtained from the following proof of each group element equalling the inverse of its inverse:
This proof starts from the given group axioms A1–A3, and establishes five propositions R4, R6, R10, R11, and R12, each of them using some earlier ones, and R12 being the main theorem. Some of the proofs require non-obvious, or even creative, steps, like applying axiom A2 in reverse, thereby rewriting "1" to "a−1 ⋅ a" in the first step of R6's proof. One of the historical motivations to develop the theory of term rewriting was to avoid the need for such steps, which are difficult to find by an inexperienced human, let alone by a computer program .
If a term rewriting system is confluent and terminating, a straightforward method exists to prove equality between two expressions (also known as terms) s and t:
Starting with s, apply equalities from left to right as long as possible, eventually obtaining a term s′.
Obtain from t a term t′ in a similar way.
If both terms s′ and t′ literally agree, then s and t are proven equal.
More importantly, if they disagree, then s and t cannot be equal.
That is, any two terms s an
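The decision procedure above can be sketched for the arithmetic example (the tuple encoding of terms and the function name are illustrative): reduce both terms to their normal forms and compare:

```python
def normalize(t):
    """Innermost rewriting of arithmetic terms to normal form.
    A term is an int or a tuple ('+' | '*', left, right); the rewrite
    rules are ordinary evaluation, which is confluent and terminating."""
    if isinstance(t, int):
        return t
    op, a, b = t
    a, b = normalize(a), normalize(b)
    return a + b if op == '+' else a * b

s = ('*', ('+', 11, 9), ('+', 2, 4))    # (11 + 9) * (2 + 4)
t = ('+', ('*', 20, 5), ('+', 10, 10))  # 20 * 5 + (10 + 10)
print(normalize(s), normalize(t))       # both normal forms are 120, so s = t
```

Because the system is confluent and terminating, every reduction strategy reaches the same normal form, so comparing normal forms decides equality.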
|
https://en.wikipedia.org/wiki/Tagged%20pointer
|
In computer science, a tagged pointer is a pointer (concretely a memory address) with additional data associated with it, such as an indirection bit or reference count. This additional data is often "folded" into the pointer, meaning stored inline in the data representing the address, taking advantage of certain properties of memory addressing. The name comes from "tagged architecture" systems, which reserved bits at the hardware level to indicate the significance of each word; the additional data is called a "tag" or "tags", though strictly speaking "tag" refers to data specifying a type, not other data; however, the usage "tagged pointer" is ubiquitous.
Folding tags into the pointer
There are various techniques for folding tags into a pointer.
Most architectures are byte-addressable (the smallest addressable unit is a byte), but certain types of data will often be aligned to the size of the data, often a word or multiple thereof. This discrepancy leaves a few of the least significant bits of the pointer unused, which can be used for tags – most often as a bit field (each bit a separate tag) – as long as code that uses the pointer masks out these bits before accessing memory. E.g., on a 32-bit architecture (for both addresses and word size), a word is 32 bits = 4 bytes, so word-aligned addresses are always a multiple of 4, hence end in 00, leaving the last 2 bits available; while on a 64-bit architecture, a word is 64 bits = 8 bytes, so word-aligned addresses end in 000, leaving the last 3 bits available. In cases where data is aligned at a multiple of word size, further bits are available. In case of word-addressable architectures, word-aligned data does not leave any bits available, as there is no discrepancy between alignment and addressing, but data aligned at a multiple of word size does.
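The low-bit technique can be sketched with plain integers standing in for 64-bit addresses (the constants and helper names are illustrative, not any particular runtime's API):

```python
TAG_MASK = 0b111          # low 3 bits are free on 8-byte-aligned addresses

def set_tag(addr, tag):
    """Fold a 3-bit tag into the unused low bits of an aligned address."""
    assert addr & TAG_MASK == 0, "address must be 8-byte aligned"
    return addr | (tag & TAG_MASK)

def get_tag(tagged):
    """Read the tag back out of the low bits."""
    return tagged & TAG_MASK

def strip_tag(tagged):
    """Mask the tag off before the value may be used as an address."""
    return tagged & ~TAG_MASK

addr = 0x7ffd_c3a0_1238   # example 8-byte-aligned address (ends in binary 000)
tagged = set_tag(addr, 0b101)
assert get_tag(tagged) == 0b101
assert strip_tag(tagged) == addr
```

The essential discipline is the last helper: every dereference must go through the mask, exactly as the text describes.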
Conversely, in some operating systems, virtual addresses are narrower than the overall architecture width, which leaves the most significant bits available for tags; this
|
https://en.wikipedia.org/wiki/SU-8%20photoresist
|
SU-8 is a commonly used epoxy-based negative photoresist. Negative refers to a photoresist whereby the parts exposed to UV become cross-linked, while the remainder of the film remains soluble and can be washed away during development.
As shown in the structural diagram, SU-8 derives its name from the presence of 8 epoxy groups. This is a statistical average per moiety. It is these epoxies that cross-link to give the final structure.
It can be made into a viscous polymer that can be spun or spread over a thickness ranging from below 1 micrometer up to above 300 micrometers, or into Thick Film Dry Sheets (TFDS) for lamination up to above 1 millimetre thick. Up to 500 µm, the resist can be processed with standard contact lithography. Above 500 µm, absorption leads to increasing sidewall undercuts and poor curing at the substrate interface. It can be used to pattern high aspect ratio structures: aspect ratios above 20 have been achieved with the solution formulation, and above 40 with the dry resist. Its maximum absorption is for ultraviolet light at the i-line wavelength of 365 nm (it is not practical to expose SU-8 with g-line ultraviolet light). When exposed, SU-8's long molecular chains cross-link, causing the polymerisation of the material. SU-8 series photoresists use gamma-butyrolactone or cyclopentanone as the primary solvent.
SU-8 was originally developed as a photoresist for the microelectronics industry, to provide a high-resolution mask for fabrication of semiconductor devices.
It is now mainly used in the fabrication of microfluidics (mainly via soft lithography, but also with other imprinting techniques such as nanoimprint lithography) and microelectromechanical systems parts. It is also one of the most biocompatible materials known and is often used in bio-MEMS for life science applications.
Composition and processing
SU-8 is composed of Bisphenol A Novolac epoxy that is dissolved in an organic solvent (gamma-butyrolactone GBL
|
https://en.wikipedia.org/wiki/XX%20male%20syndrome
|
XX male syndrome, also known as de la Chapelle syndrome, is a rare congenital intersex condition in which an individual with a 46,XX karyotype (otherwise associated with females) has phenotypically male characteristics that can vary among cases. Synonyms include 46,XX testicular difference of sex development (46,XX DSD), 46,XX sex reversal, nonsyndromic 46,XX testicular DSD, and XX sex reversal.
In 90 percent of these individuals, the syndrome is caused by the Y chromosome's SRY gene, which triggers male reproductive development, being atypically included in the crossing over of genetic information that takes place between the pseudoautosomal regions of the X and Y chromosomes during meiosis in the father. When the X with the SRY gene combines with a normal X from the mother during fertilization, the result is an XX male. Less common are SRY-negative XX males, which can be caused by a mutation in an autosomal or X chromosomal gene. The masculinization of XX males is variable.
This syndrome is diagnosed through various detection methods and occurs in approximately 1:20,000 newborn males, making it much less common than Klinefelter syndrome. Treatment is medically unnecessary, although some individuals choose to undergo treatments to make them appear more male or female. The alternative name for XX male syndrome refers to Finnish scientist Albert de la Chapelle, who studied the condition and its etiology.
Signs and symptoms
The appearance of XX males can fall into one of three categories: 1) males that have normal internal and external genitalia, 2) males with external ambiguities, and 3) males that have both internal and external genital ambiguities. External genital ambiguities can include hypospadias, micropenis, and clitoromegaly. Typically, the appearance of XX males differs from that of an XY male in that they are smaller in height and weight. Most XX males have small testes, and have an increase in maldescended testicles compared to XY males. All are believe
|
https://en.wikipedia.org/wiki/Imre%20Leader
|
Imre Bennett Leader (born 30 October 1963) is a British mathematician, a professor in DPMMS at the University of Cambridge working in the field of combinatorics. He is also known as an Othello player.
Life
He is the son of the physicist Elliot Leader and his first wife Ninon Neményi, previously married to the poet Endre Kövesi; Darian Leader is his brother. Imre Lakatos was a family friend and his godfather.
Leader was educated at St Paul's School in London, from 1976 to 1980. He won a silver medal on the British team at the 1981 International Mathematical Olympiad (IMO), the international competition for pre-university students. He later served as the official leader of the British IMO team from 1999, taking over from Adam McBride, until 2001. He was the IMO's Chief Coordinator and Problems Group Chairman in 2002.
Leader went on to Trinity College, Cambridge, where he graduated B.A. in 1984, M.A. in 1989, and Ph.D. in 1989. His Ph.D., in mathematics, was for work on combinatorics, supervised by Béla Bollobás. From 1989 to 1996 he was a Fellow at Peterhouse, Cambridge, then was Reader at University College London from 1996 to 2000. He was a lecturer at Cambridge from 2000 to 2002, and Reader there from 2002 to 2005. In 2000 he became a Fellow of Trinity College.
Awards and honours
In 1999 Leader was awarded a Junior Whitehead Prize for his contributions to combinatorics. Cited results included the proof, with Reinhard Diestel, of the bounded graph conjecture of Rudolf Halin.
Othello
Leader in an interview in 2016 stated that he began to play Othello in 1981, with his friend Jeremy Rickard. Between 1983 and 2019 he was 15 times the British Othello champion. In 1983 he came second in the world individual championship, and in 1988 he played on the British team that won the world team championship. In 2019 he won the European championship, beating Matthias Berg in the final in Berlin.
|
https://en.wikipedia.org/wiki/Transitive%20reduction
|
In the mathematical field of graph theory, a transitive reduction of a directed graph G is another directed graph with the same vertices and as few edges as possible, such that for all pairs of vertices v, w, a (directed) path from v to w in G exists if and only if such a path exists in the reduction. Transitive reductions were introduced by Aho, Garey & Ullman (1972), who provided tight bounds on the computational complexity of constructing them.
More technically, the reduction is a directed graph that has the same reachability relation as G. Equivalently, G and its transitive reduction should have the same transitive closure as each other, and the transitive reduction of G should have as few edges as possible among all graphs with that property.
The transitive reduction of a finite directed acyclic graph (a directed graph without directed cycles) is unique and is a subgraph of the given graph. However, uniqueness fails for graphs with (directed) cycles, and for infinite graphs not even existence is guaranteed.
The closely related concept of a minimum equivalent graph is a subgraph of G that has the same reachability relation and as few edges as possible. The difference is that a transitive reduction does not have to be a subgraph of G. For finite directed acyclic graphs, the minimum equivalent graph is the same as the transitive reduction. However, for graphs that may contain cycles, minimum equivalent graphs are NP-hard to construct, while transitive reductions can be constructed in polynomial time.
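For a finite DAG, a straightforward (though not asymptotically optimal) construction simply removes each edge that is implied by a longer path; the dictionary encoding and names below are illustrative:

```python
def transitive_reduction(adj):
    """Transitive reduction of a DAG given as {vertex: set of successors}:
    drop every edge (u, v) for which a longer u -> v path exists."""
    def reachable(src, dst):
        # DFS from src that is not allowed to use the direct edge src -> dst
        stack = [w for w in adj[src] if w != dst]
        seen = set()
        while stack:
            n = stack.pop()
            if n == dst:
                return True
            if n not in seen:
                seen.add(n)
                stack.extend(adj[n])
        return False
    return {u: {v for v in vs if not reachable(u, v)}
            for u, vs in adj.items()}

g = {'a': {'b', 'c', 'd'}, 'b': {'d'}, 'c': {'d'}, 'd': set()}
print(transitive_reduction(g))  # a -> d is implied by a -> b -> d, so it is dropped
```

Because a DAG's transitive reduction is unique, edges can be tested and removed in any order; the same greedy removal on a cyclic graph would not be safe.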
Transitive reduction can be defined for an abstract binary relation on a set, by interpreting the pairs of the relation as arcs in a directed graph.
Classes of graphs
In directed acyclic graphs
The transitive reduction of a finite directed graph G is a graph with the fewest possible edges that has the same reachability relation as the original graph. That is, if there is a path from a vertex x to a vertex y in graph G, there must also be a path from x to y in the transitive reduction of G, and
|
https://en.wikipedia.org/wiki/David%20Jablonski
|
David Ira Jablonski (born 1953) is an American professor of geophysical sciences at the University of Chicago. His research focuses upon the ecology and biogeography of the origin of major novelties, the evolutionary role of mass extinctions—in particular the Cretaceous–Paleogene extinction event—and other large-scale processes in the history of life.
Jablonski is a proponent of the extended evolutionary synthesis.
Education
Jablonski was educated at Columbia University (earning his Bachelor of Arts degree in 1974) and completed his graduate work at Yale University (with his Master of Science degree in 1976 and Ph.D. in 1979). As an undergraduate he worked at the American Museum of Natural History in New York City, and he then continued with postdoctoral research at the University of California, Santa Barbara and the University of California, Berkeley. In 1985 he was hired by the University of Chicago.
Awards
In 1988 the Paleontological Society awarded Jablonski with the Charles Schuchert Award, which is given to persons under 40 "whose work reflects excellence and promise in paleontology". In 2010 he was elected to the National Academy of Sciences.
In 2017 the Paleontological Society awarded him their most prestigious prize, the Paleontological Society Medal
|
https://en.wikipedia.org/wiki/CDC%201604
|
The CDC 1604 was a 48-bit computer designed and manufactured by Seymour Cray and his team at the Control Data Corporation (CDC). The 1604 is known as one of the first commercially successful transistorized computers. (The IBM 7090 was delivered earlier, in November 1959.) Legend has it that the 1604 designation was chosen by adding CDC's first street address (501 Park Avenue) to Cray's former project, the ERA-UNIVAC 1103.
A cut-down 24-bit version, designated the CDC 924, was shortly thereafter produced, and delivered to NASA.
The first 1604 was delivered to the U.S. Navy Post Graduate School in January 1960 for JOVIAL applications supporting major Fleet Operations Control Centers primarily for weather prediction in Hawaii, London, and Norfolk, Virginia. By 1964, over 50 systems were built. The CDC 3600, which added five op codes, succeeded the 1604, and "was largely compatible" with it.
One of the 1604s was shipped to the Pentagon to DASA (Defense Atomic Support Agency) and used during the Cuban missile crisis to predict possible strikes by the Soviet Union against the United States.
A 12-bit minicomputer, called the CDC 160, was often used as an I/O processor in 1604 systems. A stand-alone version of the 160 called the CDC 160-A was arguably the first minicomputer.
Architecture
Memory in the CDC 1604 consisted of 32K 48-bit words of magnetic core memory with a cycle time of 6.4 microseconds. It was organized as two banks of 16K words each, with odd addresses in one bank and even addresses in the other. The two banks were phased 3.2 microseconds apart, so average effective memory access time was 4.8 microseconds. The computer executed about 100,000 operations per second.
Each 48-bit word contained two 24-bit instructions. The instruction format was 6-3-15: six bits for the operation code, three bits for a "designator" (index register for memory access instructions, condition for jump (branch) instructions) and fifteen bits for a memory address (or shift
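The 6-3-15 field layout can be sketched as a small decoder (a hypothetical helper illustrating only the bit positions described above):

```python
def decode(instr24):
    """Split a 24-bit CDC 1604 instruction into its 6-3-15 fields:
    operation code, designator, and address."""
    op   = (instr24 >> 18) & 0o77      # top 6 bits: operation code
    des  = (instr24 >> 15) & 0o7       # next 3 bits: designator
    addr =  instr24        & 0o77777   # low 15 bits: memory address
    return op, des, addr

word = 0b101010_011_000000000001111    # op = 42, designator = 3, address = 15
assert decode(word) == (42, 3, 15)
```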
|
https://en.wikipedia.org/wiki/Martingale%20representation%20theorem
|
In probability theory, the martingale representation theorem states that a random variable that is measurable with respect to the filtration generated by a Brownian motion can be written in terms of an Itô integral with respect to this Brownian motion.
The theorem only asserts the existence of the representation and does not help to find it explicitly; it is possible in many cases to determine the form of the representation using Malliavin calculus.
Similar theorems also exist for martingales on filtrations induced by jump processes, for example, by Markov chains.
Statement
Let B be a Brownian motion on a standard filtered probability space (Ω, F, (F_t), P) and let (G_t) be the augmented filtration generated by B. If X is a square integrable random variable measurable with respect to G_∞, then there exists a predictable process C which is adapted with respect to (G_t) such that
X = E[X] + ∫₀^∞ C_s dB_s.
Consequently,
E[X | G_t] = E[X] + ∫₀^t C_s dB_s.
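A numeric sanity check of the theorem's simplest instance (an illustrative sketch, not part of the theorem; the function name is invented): for X = B_T², Itô's formula gives the explicit representation E[X] = T with integrand C_s = 2 B_s, which an Euler sum recovers along a simulated path:

```python
import math
import random

random.seed(0)

def representation_error(T=1.0, n=10_000):
    """Simulate one Brownian path and compare X = B_T**2 with its
    martingale representation E[X] + ∫ C_s dB_s, where E[X] = T and
    C_s = 2*B_s.  Returns the absolute gap (discretization error)."""
    dt = T / n
    b = 0.0
    integral = 0.0
    for _ in range(n):
        db = random.gauss(0.0, math.sqrt(dt))
        integral += 2.0 * b * db     # Euler sum for the Itô integral
        b += db
    return abs(b * b - (T + integral))

print(representation_error())  # small: the Euler sum recovers B_T² - T
```

Here the integrand happens to be known in closed form; as the text notes, in general the theorem only guarantees existence, and tools such as Malliavin calculus are needed to identify C.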
Application in finance
The martingale representation theorem can be used to establish the existence
of a hedging strategy.
Suppose that M_t is a Q-martingale process whose volatility σ_t is always non-zero.
Then, if N_t is any other Q-martingale, there exists an F-previsible process φ, unique up to sets of measure 0, such that ∫₀^T φ_t² σ_t² dt < ∞ with probability one, and N can be written as:
N_t = N_0 + ∫₀^t φ_s dM_s.
The replicating strategy is defined to be:
hold φ_t units of the stock at the time t, and
hold ψ_t = C_t − φ_t Z_t units of the bond,
where Z_t is the stock price discounted by the bond price to time t and C_t is the expected payoff of the option at time t.
At the expiration day T, the value of the portfolio is:
V_T = φ_T Z_T + ψ_T = C_T,
the payoff of the option, and it is easy to check that the strategy is self-financing: the change in the value of the portfolio depends only on the change of the asset prices, dV_t = φ_t dZ_t.
See also
Backward stochastic differential equation
|
https://en.wikipedia.org/wiki/Shrub%E2%80%93steppe
|
Shrub-steppe is a type of low-rainfall natural grassland. While arid, shrub-steppes have sufficient moisture to support a cover of perennial grasses or shrubs, a feature which distinguishes them from deserts.
The primary ecological processes historically at work in shrub-steppe ecosystems are drought and fire. Shrub-steppe plant species have developed particular adaptations to low annual precipitation and summer drought conditions. Plant adaptations to different soil moisture regimes influence their distribution. A frequent fire regime in the shrub-steppe similarly adds to the patchwork pattern of shrub and grass that characterizes shrub-steppe ecosystems.
North America
The shrub-steppes of North America occur in the western United States and western Canada, in the rain shadow between the Cascades and Sierra Nevada on the west and the Rocky Mountains on the east. They extend from south-central British Columbia down into south central and south-eastern Washington, eastern Oregon, and eastern California, and across through Idaho, Nevada, and Utah into western Wyoming and Colorado, and down into northern and central New Mexico and northern Arizona. Growth is dominated primarily by low-lying shrubs, such as big sagebrush (Artemisia tridentata) and bitterbrush (Purshia tridentata), with too little rainfall to support the growth of forests, though some trees do occur. Other important plants are bunchgrasses such as Pseudoroegneria spicata, which have historically provided forage for livestock as well as wildlife, but are quickly being replaced by nonnative annual species like cheatgrass (Bromus tectorum), tumble mustard (Sisymbrium altissimum), and Russian thistle (Salsola kali). There is also a suite of animals that call the shrub-steppe home, including sage grouse, pygmy rabbit, Western rattlesnake, and pronghorn.
Historically, much of the shrub-steppe in the state of Washington was referred to as "scabland" because of the deep channels cut into pure basalt rock by
|
https://en.wikipedia.org/wiki/Narcissistic%20number
|
In number theory, a narcissistic number (also known as a pluperfect digital invariant (PPDI), an Armstrong number (after Michael F. Armstrong) or a plus perfect number) in a given number base is a number that is the sum of its own digits each raised to the power of the number of digits.
Definition
Let n be a natural number and b > 1 a number base. We define the narcissistic function for base b to be the following:
F_b(n) = Σᵢ dᵢᵏ,
where k is the number of digits in the number n in base b, and dᵢ is the value of each digit of the number. A natural number n is a narcissistic number if it is a fixed point for F_b, which occurs if F_b(n) = n. The natural numbers 0 ≤ n < b are trivial narcissistic numbers for all b; all other narcissistic numbers are nontrivial narcissistic numbers.
For example, the number 153 in base b = 10 is a narcissistic number, because k = 3 and 153 = 1³ + 5³ + 3³.
A natural number n is a sociable narcissistic number if it is a periodic point for F_b, where F_bᵖ(n) = n for a positive integer p (here F_bᵖ is the p-th iterate of F_b), and n forms a cycle of period p. A narcissistic number is a sociable narcissistic number with p = 1, and an amicable narcissistic number is a sociable narcissistic number with p = 2.
All natural numbers n are preperiodic points for F_b, regardless of the base. This is because for any given digit count k, the minimum possible value of n is b^(k−1), the maximum possible value of n is b^k − 1, and the maximum possible narcissistic function value is k(b − 1)^k. Thus, any narcissistic number must satisfy the inequality b^(k−1) ≤ k(b − 1)^k. Multiplying both sides by b/(b − 1)^k, we get b^k/(b − 1)^k ≤ kb, or equivalently, (b/(b − 1))^k ≤ kb. Since b/(b − 1) > 1, there is a maximum digit count k beyond which the inequality fails, because of the exponential growth of (b/(b − 1))^k against the linearity of kb. Beyond this value of k, F_b(n) < n always. Thus, there are a finite number of narcissistic numbers, and any natural number is guaranteed to reach a periodic point or a fixed point below this finite bound, making it a preperiodic point. Setting b equal to 10 shows that the largest narcissistic number in base 10 must be less than 10⁶⁰.
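A direct search based on the definition (function name illustrative) confirms the base-10 narcissistic numbers below 10000:

```python
def is_narcissistic(n, base=10):
    """True if n equals the sum of its base-`base` digits, each raised
    to the power of the digit count k, i.e. n is a fixed point of F_b."""
    digits = []
    m = n
    while m:
        digits.append(m % base)
        m //= base
    k = len(digits)
    return n == sum(d ** k for d in digits)

found = [n for n in range(1, 10_000) if is_narcissistic(n)]
print(found)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 153, 370, 371, 407, 1634, 8208, 9474]
```

The single-digit results are exactly the trivial narcissistic numbers 1 ≤ n < 10; the rest are the nontrivial three- and four-digit ones.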
The number of iterations needed for to reach a fixed point is the narcissistic function's persistence o
|
https://en.wikipedia.org/wiki/Molecular%20configuration
|
The molecular configuration of a molecule is the permanent geometry that results from the spatial arrangement of its bonds. The ability of the same set of atoms to form two or more molecules with different configurations is stereoisomerism. This is distinct from constitutional isomerism which arises from atoms being connected in a different order. Conformers which arise from single bond rotations, if not isolatable as atropisomers, do not count as distinct molecular configurations as the spatial connectivity of bonds is identical.
Enantiomers
Enantiomers are molecules having one or more chiral centres that are mirror images of each other. Chiral centres are designated R or S. With the lowest-priority group pointing away from the viewer, if the three groups projecting towards you are arranged clockwise from highest priority to lowest priority, that centre is designated R; if counterclockwise, the centre is S. Priority is based on atomic number: atoms with higher atomic number are higher priority. If two molecules with one or more chiral centres differ in all of those centres, they are enantiomers.
Diastereomers
Diastereomers are distinct molecular configurations that are a broader category. They usually differ in physical characteristics as well as chemical properties. If two molecules with more than one chiral centre differ in one or more (but not all) centres, they are diastereomers. All stereoisomers that are not enantiomers are diastereomers. Diastereomerism also exists in alkenes. Alkenes are designated Z or E depending on group priority on adjacent carbon atoms. E/Z notation describes the absolute stereochemistry of the double bond. Cis/trans notation is also used to describe the relative orientations of groups.
Configurations in amino acids
Amino acids are designated either L or D depending on relative group arrangements around the stereogenic carbon center. L/D designations are not related to S/R absolute configurations. Only L configured amino acids are found in biological organisms. All amino acids except for L-c
|
https://en.wikipedia.org/wiki/Privacy%20software
|
Privacy software is software built to protect the privacy of its users. The software typically works in conjunction with Internet usage to control or limit the amount of information made available to third parties. The software can apply encryption or filtering of various kinds.
Types of protection
Privacy software can refer to two different types of protection. The first protects a user's Internet privacy from the World Wide Web: some software products mask or hide the user's IP address from the outside world to protect the user from identity theft. The second hides or deletes the Internet traces left on the user's PC after surfing: some software erases all of the user's Internet traces, while other software hides and encrypts those traces so that others using the PC cannot tell where the user has been surfing.
Whitelisting and blacklisting
One solution to enhance privacy software is whitelisting. Whitelisting is a process in which a company identifies the software it will allow, rather than trying to recognize malware. Whitelisting permits acceptable software to run and either prevents anything else from running or lets new software run in a quarantined environment until its validity can be verified. Whereas whitelisting allows nothing to run unless it is on the whitelist, blacklisting allows everything to run unless it is on the blacklist. A blacklist thus includes certain types of software that are not allowed to run in the company environment. For example, a company might blacklist peer-to-peer file sharing on its systems. In addition to software, people, devices, and websites can also be whitelisted or blacklisted.
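The two default policies can be sketched as follows (the program names are invented for illustration):

```python
# Whitelist = default-deny: run only what is explicitly approved.
# Blacklist = default-allow: run everything except what is explicitly banned.

WHITELIST = {"editor.exe", "browser.exe"}
BLACKLIST = {"p2p-client.exe"}

def whitelist_allows(program):
    return program in WHITELIST

def blacklist_allows(program):
    return program not in BLACKLIST

print(whitelist_allows("p2p-client.exe"))  # False: not approved
print(blacklist_allows("unknown.exe"))     # True: unknown software runs by default
```

The contrast is visible on unknown software: a whitelist denies it (or quarantines it) until verified, while a blacklist lets it run.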
Intrusion detection systems
Intrusion detection systems are designed to detect all types of malicious network traffic and computer usage that cannot be detected by a firewall. These systems capture all network traffic flows and examine
|
https://en.wikipedia.org/wiki/Arid%20Lands%20Ecology%20Reserve
|
The Arid Land Ecology Reserve (ALE) is the largest tract of shrub-steppe ecosystem remaining in the U.S. state of Washington. It is managed by the Pacific Northwest National Laboratory, which is operated for the U.S. Department of Energy by Battelle Memorial Institute. The 320 km² area is a portion of the 1500 km² National Environmental Research Park located on the Hanford Site on the northwest boundary of Richland, Washington.
On June 27, 2000, a range fire destroyed most of the native sagebrush and bunchgrass as well as damaged the microbiotic crust. Though the US Fish and Wildlife Service has attempted to re-introduce native flora, the Arid Lands Ecology Reserve is currently dominated by non-native species such as cheatgrass, knapweeds, and Russian thistle (tumbleweed) which flourished after the 2000 fire. Other species such as spiny hop sage and Wyoming big sagebrush were decimated by the fire and in its aftermath.
Vegetation
Shrub-steppe
This vegetation type describes plant communities found in and around arid mountains, ridges and slopes. In the ALE this includes shrubs (sagebrush and rabbit brush), perennial bunchgrasses (Sandberg's blue grass and bluebunch wheat grass) as well as both annual and perennial forbs (balsamroot, phlox and fleabane).
Riparian
This vegetation type refers to plant communities located around springs and streamflow. In the ALE these areas are dominated by willow, black cottonwood, chokecherry, and mock orange.
Points of interest
The north-facing slope of Rattlesnake Mountain, the highest "treeless" mountain in the United States, lies within the reserve.
Rare plant species such as Mountain Milk Vetch and Piper's daisy can be found in this area.
History and significant dates
From the early 1800s to around the 1940s, this area was used for animal pasture, homesteading, oil drilling, and the development of infrastructure such as roads. In 1943 the United States federal government acquired the land, which is now owned by the Department of Energy (DOE). T
|
https://en.wikipedia.org/wiki/Lability
|
Lability refers to something that is constantly undergoing change or is likely to undergo change.
Biochemistry
In biochemistry, lability is an important concept as far as the kinetics of metalloproteins is concerned: labile metal centres can allow for the rapid synthesis and degradation of substrates in biological systems.
Biology
Cells
Labile cells refer to cells that constantly divide by entering and remaining in the cell cycle. These are contrasted with "stable cells" and "permanent cells".
An important example of this is in the epithelium of the cornea, where cells divide at the basal level and move upwards, and the topmost cells die and fall off.
Proteins
In medicine, the term "labile" means susceptible to alteration or destruction. For example, a heat-labile protein is one that can be changed or destroyed at high temperatures.
The opposite of labile in this context is "stable".
Soils
Compounds or materials that are easily transformed (often by biological activity) are termed labile. For example, labile phosphate is that fraction of soil phosphate that is readily transformed into soluble or plant-available phosphate. Labile organic matter is the soil organic matter that is easily decomposed by microorganisms.
Chemistry
The term is used to describe a transient chemical species. As a general example, if a molecule exists in a particular conformation for a short lifetime, before adopting a lower energy conformation (structural arrangement), the former molecular structure is said to have 'high lability' (such as C25, a 25-carbon fullerene spheroid). The term is sometimes also used in reference to reactivity – for example, a complex that quickly reaches equilibrium in solution is said to be labile (with respect to that solution). Another common example is the cis effect in organometallic chemistry, which is the labilization of CO ligands in the cis position of octahedral transition metal complexes.
See also
Emotional lability
|
https://en.wikipedia.org/wiki/Primer%20extension
|
Primer extension is a technique whereby the 5' ends of RNA can be mapped - that is, they can be sequenced and properly identified.
Primer extension can be used to determine the start site of transcription (the end site cannot be determined by this method) for a gene whose sequence is known. This technique requires a radiolabelled primer (usually 20 - 50 nucleotides in length) which is complementary to a region near the 3' end of the mRNA. The primer is allowed to anneal to the RNA, and reverse transcriptase is used to synthesize cDNA from the RNA until it reaches the 5' end of the RNA. By denaturing the hybrid and using the extended primer cDNA as a marker on an electrophoretic gel, it is possible to determine the transcriptional start site. This is usually done by comparing its location on the gel with the DNA sequence (e.g. from Sanger sequencing), preferably obtained using the same primer on the DNA template strand. The exact nucleotide at which transcription starts can be pinpointed by matching the labelled extended primer with the marker nucleotide, which share the same migration distance on the gel.
Primer extension offers an alternative to a nuclease protection assay (S1 nuclease mapping) for quantifying and mapping RNA transcripts. The hybridization probe for primer extension is a synthesized oligonucleotide, whereas S1 mapping requires isolation of a DNA fragment. Both methods provide information about where an mRNA starts, and both provide an estimate of the concentration of a transcript from the intensity of the transcript band on the resulting autoradiograph. Unlike S1 mapping, however, primer extension can only be used to locate the 5'-end of an mRNA transcript, because the DNA synthesis required for the assay relies on reverse transcriptase (which only polymerizes in the 5' → 3' direction).
Primer extension is unaffected by splice sites and is thus preferable in situations where intervening splice sites prevent S1 mapping. Finally, primer extension is more accurate t
|
https://en.wikipedia.org/wiki/Resiniferatoxin
|
Resiniferatoxin (RTX) is a naturally occurring chemical found in resin spurge (Euphorbia resinifera), a cactus-like plant commonly found in Morocco, and in Euphorbia poissonii found in northern Nigeria. It is a potent functional analog of capsaicin, the active ingredient in chili peppers.
Biological activity
Resiniferatoxin has a score of 16 billion Scoville heat units, making pure resiniferatoxin about 500 to 1000 times hotter than pure capsaicin. Resiniferatoxin activates transient vanilloid receptor 1 (TRPV1) in a subpopulation of primary afferent sensory neurons involved in nociception, the transmission of physiological pain. TRPV1 is an ion channel in the plasma membrane of sensory neurons and stimulation by resiniferatoxin causes this ion channel to become permeable to cations, especially calcium. The influx of cations causes the neuron to depolarize, transmitting signals similar to those that would be transmitted if the innervated tissue were being burned or damaged. This stimulation is followed by desensitization and analgesia, in part because the nerve endings die from calcium overload.
Total synthesis
A total synthesis of (+)-resiniferatoxin was completed by the Paul Wender group at Stanford University in 1997. The process begins with a starting material of 1,4-pentadien-3-ol and consists of more than 25 significant steps. As of 2007, this represented the only complete total synthesis of any member of the daphnane family of molecules.
One of the main challenges in synthesizing a molecule such as resiniferatoxin is forming the three-ring backbone of the structure. The Wender group was able to form the first ring of the structure by first synthesizing Structure 1 in Figure 1. By reducing the ketone of Structure 1 followed by oxidizing the furan nucleus with m-CPBA and converting the resulting hydroxy group to an oxyacetate, Structure 2 can be obtained. Structure 2 contains the first ring of the three-ring structure of RTX. It reacts through an oxidopy
|
https://en.wikipedia.org/wiki/Kronecker%27s%20lemma
|
In mathematics, Kronecker's lemma is a result about the relationship between convergence of infinite sums and convergence of sequences. The lemma is often used in the proofs of theorems concerning sums of independent random variables, such as the strong law of large numbers. The lemma is named after the German mathematician Leopold Kronecker.
The lemma
If (x_n)_{n=1}^∞ is an infinite sequence of real numbers such that

s = Σ_{n=1}^∞ x_n

exists and is finite, then we have for all 0 < b_1 ≤ b_2 ≤ b_3 ≤ … with b_n → ∞ that

lim_{n→∞} (1/b_n) Σ_{k=1}^n b_k x_k = 0.
Proof
Let S_k = Σ_{i=1}^k x_i denote the partial sums of the x's (with S_0 = 0). Using summation by parts,

(1/b_n) Σ_{k=1}^n b_k x_k = S_n − (1/b_n) Σ_{k=1}^{n−1} (b_{k+1} − b_k) S_k.
Pick any ε > 0. Now choose N so that S_k is ε-close to s for k > N. This can be done, as the sequence (S_k) converges to s. Then the right-hand side is:

S_n − (1/b_n) Σ_{k=1}^{N} (b_{k+1} − b_k) S_k − (1/b_n) Σ_{k=N+1}^{n−1} (b_{k+1} − b_k) S_k

= S_n − (1/b_n) Σ_{k=1}^{N} (b_{k+1} − b_k) S_k − ((b_n − b_{N+1})/b_n) s − (1/b_n) Σ_{k=N+1}^{n−1} (b_{k+1} − b_k) (S_k − s).

Now, let n go to infinity. The first term goes to s, which cancels with the third term. The second term goes to zero (as the sum is a fixed value). Since the sequence (b_n) is increasing, the last term is bounded by ε (b_n − b_{N+1})/b_n ≤ ε.
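As a numerical sanity check (an illustration, not a proof), take x_k = (−1)^(k+1)/k, whose series converges to ln 2, and take b_n = n. The lemma predicts that the weighted averages (1/b_n) Σ b_k x_k tend to zero:

```python
# Illustrating Kronecker's lemma with x_k = (-1)^(k+1)/k and b_n = n.
# Here b_k * x_k = (-1)^(k+1), so the weighted average is 1/n times an
# alternating +1/-1 sum, which clearly tends to zero as n grows.

def kronecker_average(n):
    s = sum(k * ((-1) ** (k + 1) / k) for k in range(1, n + 1))
    return s / n

for n in (10, 100, 1000, 10000):
    print(n, kronecker_average(n))
```

The printed values shrink toward 0, as the lemma guarantees for any admissible weight sequence.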
|
https://en.wikipedia.org/wiki/Permanent%20signal
|
Permanent signal (PS) in American telephony terminology, or permanent loop in British usage, is a condition in which a POTS line is off-hook without connection for an extended period of time. In modern switches this is indicated by silent termination after the off-hook tone times out, when the telephone exchange computer puts the telephone line on its High & Wet list (or Wetlist). In older switches, however, a Permanent Signal Holding Trunk (PSHT) would play either an off-hook tone (howler tone) or a 480/500 Hz high tone (which would subsequently bleed into adjacent lines via crosstalk). Off-hook tone is a tone of increasing intensity intended to alert telephone users to the fact that the receiver has been left off the hook without being connected in a call. On some systems, an intercept message may be announced before the off-hook tone is played. The most common message reads as follows: "If you'd like to make a call, please hang up and try again. If you need help, hang up and then dial your operator."
Permanent signal can also describe the state of a trunk that is seized but has not been dialed upon, if it remains in a busy condition (sometimes alerting with reorder).
In most mid-20th-century switching equipment, a permanent signal would tie up a junctor circuit, diminishing the ability of the switch to handle outgoing calls. When flooded cables or other conditions made this a real problem, switch staff would open the cable, or paper the off-normal contacts of the crossbar switch, or block the line relay from operating. These methods had the disadvantage of blocking all outgoing calls from that line until it was manually cleared. Manufacturers also sold devices that monitored the talk wires and held up the cutoff relay or crossbar hold magnet until the condition cleared. Some crossbar line circuit designs had a park condition allowing the line circuit itself to monitor the line.
Stored program control exchanges finally solved the problem, by sett
|
https://en.wikipedia.org/wiki/Mutation%20frequency
|
Mutation frequency and mutation rate are highly correlated with each other. Mutation frequency tests are cost-effective in laboratories; together, the two measures provide vital information for accounting for the emergence of mutations in any given germ line.
There are several tests used to measure mutation frequency and mutation rate in a particular gene pool. Some of these tests are as follows:
Avida Digital Evolution Platform
Fluctuation Analysis
Mutation frequency and rate provide vital information about how often a mutation may be expressed in a particular genetic group or sex. Yoon et al. (2009) suggested that sperm mutation frequencies increase as the age of sperm donors increases. This positive correlation suggests how males may be more likely to contribute X-linked recessive genetic disorders.
There are additional factors affecting mutation frequency and rate, including evolutionary influences. Since organisms may pass mutations to their offspring, incorporating and analyzing the mutation frequency and rate of a particular species may provide a means to adequately understand its longevity.
Aging
The time course of spontaneous mutation frequency from middle to late adulthood was measured in four different tissues of the mouse. Mutation frequencies in the cerebellum (90% neurons) and male germ cells were lower than in liver and adipose tissue. Furthermore, the mutation frequencies increased with age in liver and adipose tissue, whereas in the cerebellum and male germ cells the mutation frequency remained constant.
Dietary restricted rodents live longer and are generally healthier than their ad libitum fed counterparts. No changes were observed in the spontaneous chromosomal mutation frequency of dietary restricted mice (aged 6 and 12 months) compared to ad libitum fed control mice. Thus dietary restriction appears to have no appreciable effect on spontaneous mutation in chromosomal
|
https://en.wikipedia.org/wiki/Warp%20%26%20Warp
|
is a multidirectional shooter arcade video game developed and published by Namco in 1981. It was released by Rock-Ola in North America as Warp Warp. The game was ported to the Sord M5 and MSX. A sequel, Warpman, was released in 1985 for the Family Computer with additional enemy types, power-ups, and improved graphics and sound.
Gameplay
The player must take control of a "Monster Fighter", who must shoot tongue-sticking aliens named "Berobero" (a Japanese onomatopoeic word for licking) in the "Space World" without letting them touch him. If he kills three Berobero of the same colour in a row, a mysterious alien will appear that can be killed for extra points. When the Warp Zone in the centre of the screen flashes (with the Katakana text in the Japanese version or the English text "WARP" in the US versions), it is possible for the Monster Fighter to warp to the "Maze World", where the Berobero must be killed with time-delay bombs. The delay is controlled by how long the player holds the button down - but every time he kills one, his bombs will get stronger, making it easier for the Monster Fighter to blow himself up with his own bombs until he returns to Space World.
Reception
In a retrospective review, AllGame compared its gameplay to Wizard of Wor and Bomberman, describing it as "an obscure but endearing maze shooter".
|
https://en.wikipedia.org/wiki/Lupinine
|
Lupinine is a quinolizidine alkaloid present in the genus Lupinus (colloquially referred to as lupins) of the flowering plant family Fabaceae. The scientific literature contains many reports on the isolation and synthesis of this compound as well as a vast number of studies on its biosynthesis from its natural precursor, lysine. Studies have shown that lupinine hydrochloride is a mildly toxic acetylcholinesterase inhibitor and that lupinine has an inhibitory effect on acetylcholine receptors. The characteristically bitter taste of lupin beans, which come from the seeds of Lupinus plants, is attributable to the quinolizidine alkaloids which they contain, rendering them unsuitable for human and animal consumption unless handled properly. However, because lupin beans have potential nutritional value due to their high protein content, efforts have been made to reduce their alkaloid content through the development of "sweet" varieties of Lupinus.
Toxicity
Lupinine is a hepatotoxin prevalent in the seeds of leguminous herbs of the genus Lupinus. Lupinine and other quinolizidine alkaloids give a bitter taste to naturally growing lupin flowers. Due to the toxicity of quinolizidine alkaloids, lupin beans are soaked overnight and rinsed to remove some of their alkaloid content. However, when the cooking and rinsing procedure is insufficient, 10 grams of seeds are able to liberate as much as 100 milligrams of lupinine.
The neurotoxicity of lupinine has been known within veterinary medical circles for some time due to the use of lupins as a forage feed for grazing livestock since it has high protein content. It is found to produce lupinosis, which is a morbid, and often fatal condition that results in acute atrophy of liver function and which affects domestic animals such as cattle and sheep. When ingested by humans, quinolizidine alkaloid poisoning causes trembling, shaking, excitation, as well as convulsions. Lupinine, in addition to being orally toxic to mammals, is also
|
https://en.wikipedia.org/wiki/Linux%20Security%20Modules
|
Linux Security Modules (LSM) is a framework allowing the Linux kernel to support, without bias, a variety of computer security models. LSM is licensed under the terms of the GNU General Public License and has been a standard part of the Linux kernel since Linux 2.6. AppArmor, SELinux, Smack, and TOMOYO Linux are the currently approved security modules in the official kernel.
Design
LSM was designed to satisfy all the requirements for successfully implementing a mandatory access control module, while imposing the fewest possible changes to the Linux kernel. LSM avoids the approach of system call interposition used by Systrace, because that approach does not scale to multiprocessor kernels and is subject to TOCTTOU (race) attacks. Instead, LSM inserts "hooks" (upcalls to the module) at every point in the kernel where a user-level system call is about to result in access to an important internal kernel object, such as inodes and task control blocks.
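By analogy only (real LSM hooks are C function pointers invoked inside the kernel, and all names below are invented for illustration), the hook-and-veto pattern can be sketched like this:

```python
# Analogy for the LSM hook pattern: before granting access to an
# internal object, the "kernel" upcalls every registered security
# module; any single veto denies the access.

hooks = []  # security modules register callables here

def register_hook(fn):
    hooks.append(fn)

def open_inode(user, inode):
    for hook in hooks:
        if not hook(user, inode):
            return "EACCES"  # a module vetoed the access
    return f"opened {inode}"

# A toy "module": only root may touch anything under /etc.
register_hook(lambda user, inode: user == "root" or not inode.startswith("/etc"))

print(open_inode("alice", "/etc/shadow"))        # EACCES
print(open_inode("alice", "/home/alice/notes"))  # opened /home/alice/notes
```

The key property mirrored here is that the hook sits at the point of access to the object itself, rather than at the system-call boundary, which is what avoids the TOCTTOU window.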
LSM is narrowly scoped to solve the problem of access control, while not imposing a large and complex change-patch on the mainstream kernel. It isn't intended to be a general "hook" or "upcall" mechanism, nor does it support Operating system-level virtualization.
LSM's access-control goal is very closely related to the problem of system auditing, but is subtly different. Auditing requires that every attempt at access be recorded. LSM cannot deliver this, because it would require a great many more hooks, in order to detect cases where the kernel "short circuits" failing system-calls and returns an error code before getting near significant objects.
The LSM design is described in the paper Linux Security Modules: General Security Support for the Linux Kernel presented at USENIX Security 2002. At the same conference was the paper Using CQUAL for Static Analysis of Authorization Hook Placement which studied automatic static analysis of the kernel code to verify that all of the necessary hooks have actually been inserted into the Linu
|
https://en.wikipedia.org/wiki/Esterase
|
In biochemistry, an esterase is an enzyme that splits esters into an acid and an alcohol in a chemical reaction with water called hydrolysis (and as such, it is a type of hydrolase).
A wide range of different esterases exist that differ in their substrate specificity, their protein structure, and their biological function.
EC classification/list of enzymes
EC 3.1.1: Carboxylic ester hydrolases
Acetylesterase (EC 3.1.1.6), splits off acetyl groups
Cholinesterase
Acetylcholinesterase, inactivates the neurotransmitter acetylcholine
Pseudocholinesterase, broad substrate specificity, found in the blood plasma and in the liver
Pectinesterase (EC 3.1.1.11), clarifies fruit juices
EC 3.1.2: Thiolester hydrolases
Thioesterase
Ubiquitin carboxy-terminal hydrolase L1
EC 3.1.3: Phosphoric monoester hydrolases
Phosphatase (EC 3.1.3.x), hydrolyses phosphoric acid monoesters into a phosphate ion and an alcohol
Alkaline phosphatase, removes phosphate groups from many types of molecules, including nucleotides, proteins, and alkaloids.
Phosphodiesterase (PDE), inactivates the second messenger cAMP
cGMP specific phosphodiesterase type 5, is inhibited by Sildenafil (Viagra)
Fructose bisphosphatase (3.1.3.11), converts fructose-1,6-bisphosphate to fructose-6-phosphate in gluconeogenesis
EC 3.1.4: Phosphoric diester hydrolases
EC 3.1.5: Triphosphoric monoester hydrolases
EC 3.1.6: Sulfuric ester hydrolases (sulfatases)
EC 3.1.7: Diphosphoric monoester hydrolases
EC 3.1.8: Phosphoric triester hydrolases
Exonucleases (deoxyribonucleases and ribonucleases)
EC 3.1.11: Exodeoxyribonucleases producing 5'-phosphomonoesters
EC 3.1.13: Exoribonucleases producing 5'-phosphomonoesters
EC 3.1.14: Exoribonucleases producing 3'-phosphomonoesters
EC 3.1.15: Exonucleases active with either ribo- or deoxy-
Endonucleases (deoxyribonucleases and ribonucleases)
Endodeoxyribonuclease
Endoribonuclease
either deoxy- or ribo-
See also
Enzyme
List of enzymes
Carboxyl
|
https://en.wikipedia.org/wiki/Irish%20Defence%20Forces%20cap%20badge
|
The Irish Defence Forces Cap Badge (or "FF badge" as it is sometimes called) is common to all services and corps of the Irish Defence Forces. Although principally associated with the Irish Army (Defence Force regulations in fact describe it as "the Army Badge") it is also worn by and appears in elements of the insignia of the Naval Service and Air Corps.
Origin and early usage
The badge is said to have been designed in 1913 by Eoin MacNeill, a founding member and chairman of the Irish Volunteers, but there is also evidence pointing to other origins, notably Canon Peadar O'Leary and The O'Rahilly. Variations existed for territorial commands, but the majority of volunteers wore the Óglaigh na hÉireann badge. It was worn by republicans in the 1916 Easter Rising. It was rarely worn by the Irish Republican Army in the War of Independence, as doing so could lead to a prison term. Eventually the Free State Army adopted the badge for its new uniforms before the Irish Civil War.
Description
Design
The design of the Army Badge is prescribed in Defence Force Regulations as follows:
"...As a component of rank insignia and which is specified in the Third Schedule as the form of the cap badge, shall be a sunburst - An Gal Gréine, surmounted by an 8-pointed star, a point of the star being uppermost, bearing the letters "FF" (in Gaelic characters) encircled by a representation of an ancient warrior's sword belt on which the words "Óglaigh na hÉireann" are inscribed."
Inscription
"FF" - Fianna Fáil - "Fianna of Inis Fáil", i.e. Army of Ireland (the political party Fianna Fáil formed in 1926 adopted the same name)
"Óglaiġ na h-Éireann" - ''Irish Volunteers
Current usage and variations
Irish Army
In the Army, the badge is worn by all ranks on all head-dress. Enlisted and non-commissioned ranks wear a "Stay-Brite" anodised aluminium brass replica. Some enlisted ranks, particularly older soldiers, wear the original Brass Badge which, although no longer official issue,
|
https://en.wikipedia.org/wiki/Semantic%20mapper
|
A semantic mapper is a tool or service that aids in the transformation of data elements from one namespace into another namespace. A semantic mapper is an essential component of a semantic broker and one of the tools enabled by Semantic Web technologies.
Essentially the problems arising in semantic mapping are the same as in data mapping for data integration purposes, with the difference that here the semantic relationships are made explicit through the use of semantic nets or ontologies which play the role of data dictionaries in data mapping.
Structure
A semantic mapper must have access to three data sets:
List of data elements in source namespace
List of data elements in destination namespace
List of semantic equivalent statements between source and destination (e.g. owl:equivalentClass, owl:equivalentProperty or owl:sameAs in OWL).
A semantic mapper operates on a list of data elements in the source namespace, successively translating them into the destination namespace. The mapping does not necessarily need to be one-to-one: some source data elements may map to several data elements in the destination.
Some semantic mappers are static, in that they perform a one-time data transform. Others generate an executable program to perform the transform repeatedly. The output of such a program may be in any transformation system, such as XSLT, a Java program, or a program in some other procedural language.
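A minimal static mapper along these lines can be sketched as follows; the namespaces, property names, and one-to-many mapping are invented for illustration (the equivalence table stands in for owl:equivalentProperty statements):

```python
# Sketch of a one-pass semantic mapper: rename the keys of a record
# from a source namespace to a destination namespace using a table
# of semantic equivalence statements.

EQUIVALENCES = {
    "src:surname": ["dst:familyName"],
    "src:given":   ["dst:firstName"],
    # one-to-many: a source element may map to several destination elements
    "src:fullAddress": ["dst:street", "dst:city"],
}

def map_record(record):
    out = {}
    for key, value in record.items():
        for target in EQUIVALENCES.get(key, []):
            out[target] = value  # naive: copies the value to every target
    return out

print(map_record({"src:surname": "Curie", "src:given": "Marie"}))
# {'dst:familyName': 'Curie', 'dst:firstName': 'Marie'}
```

A "static" mapper would apply `map_record` once; a generating mapper would instead emit an equivalent XSLT stylesheet or program from the same equivalence table.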
See also
Data model
Data wrangling
Enterprise application integration
Mediation
Ontology matching
Semantic heterogeneity
Semantic integration
Semantic translation
Semantic unification
|
https://en.wikipedia.org/wiki/Time%20dependent%20vector%20field
|
In mathematics, a time dependent vector field is a construction in vector calculus which generalizes the concept of vector fields. It can be thought of as a vector field which moves as time passes. For every instant of time, it associates a vector to every point in a Euclidean space or in a manifold.
Definition
A time dependent vector field on a manifold M is a map from an open subset Ω ⊂ ℝ × M to the tangent bundle TM,

X : Ω → TM, (t, x) ↦ X(t, x) = X_t(x),

such that for every (t, x) ∈ Ω, X_t(x) is an element of T_x M, the tangent space to M at x.

For every t ∈ ℝ such that the set

Ω_t = {x ∈ M : (t, x) ∈ Ω}

is nonempty, X_t is a vector field in the usual sense defined on the open set Ω_t ⊂ M.
Associated differential equation
Given a time dependent vector field X on a manifold M, we can associate to it the following differential equation:

dx/dt = X(t, x),

which is called nonautonomous by definition.
Integral curve
An integral curve of the equation above (also called an integral curve of X) is a map

α : I ⊂ ℝ → M

such that, for every t₀ ∈ I, the pair (t₀, α(t₀)) is an element of the domain of definition of X and

α′(t₀) = X(t₀, α(t₀)).
Equivalence with time-independent vector fields
A time dependent vector field X on M can be thought of as a vector field on ℝ × M with no component in the t-direction, i.e. (t, x) ↦ (0, X(t, x)), whose value at (t, x) does not depend on t beyond the dependence already present in X.

Conversely, associated with a time-dependent vector field X on M is a time-independent one,

X̃(t, x) = (1, X(t, x)),

on ℝ × M. In coordinates, X̃ = ∂/∂t + X.

The system of autonomous differential equations for X̃ is equivalent to that of non-autonomous ones for X, and x(t) ↔ (t, x(t)) is a bijection between the sets of integral curves of X and X̃, respectively.
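This equivalence can be illustrated numerically. The sketch below (the equation x′ = t·x and the step count are illustrative choices, not from the article) integrates a nonautonomous scalar equation by Euler's method, both directly and via the associated autonomous system (t, x)′ = (1, t·x); the two computations agree step for step:

```python
# Autonomization of a non-autonomous ODE: x'(t) = X(t, x) = t*x,
# solved directly and as the suspended system (t, x)' = (1, X(t, x)).

def X(t, x):
    return t * x

def euler_nonautonomous(x0, t0, t1, steps):
    h = (t1 - t0) / steps
    t, x = t0, x0
    for _ in range(steps):
        x += h * X(t, x)
        t += h
    return x

def euler_autonomous(x0, t0, t1, steps):
    h = (t1 - t0) / steps
    t, x = t0, x0
    for _ in range(steps):
        dt, dx = 1.0, X(t, x)  # the suspended field (1, X(t, x))
        t += h * dt
        x += h * dx
    return x

a = euler_nonautonomous(1.0, 0.0, 1.0, 10000)
b = euler_autonomous(1.0, 0.0, 1.0, 10000)
print(a, b)  # both approximate the exact value exp(1/2) ≈ 1.6487
```

Time is simply adjoined as an extra state variable with unit velocity, which is exactly the bijection between integral curves described above.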
Flow
The flow of a time dependent vector field X is the unique differentiable map

F : D(X) ⊂ ℝ × Ω → M

such that for every (t₀, x) ∈ Ω,

t ↦ F(t, t₀, x)

is the integral curve α of X that satisfies α(t₀) = x.
Properties
We define F_{t,s} as F_{t,s}(x) = F(t, s, x). Then:

If t₁, t₂ and t₃ are such that both sides are defined, then F_{t₃,t₂} ∘ F_{t₂,t₁} = F_{t₃,t₁}.

For each pair (t, s) for which it is defined, F_{t,s} is a diffeomorphism with inverse F_{s,t}.
Applications
Let X and Y be smooth time dependent vector fields and F the flow of X. The following identity can be proved:

d/dt |_{t=t₁} (F_{t,s})* Y_t = (F_{t₁,s})* ( [X_{t₁}, Y_{t₁}] + d/dt |_{t=t₁} Y_t ).

Also, we can define time dependent tensor fields in an analogous way, and prove this similar identity, assuming that η_t is a smooth time dependent tensor field:

d/dt |_{t=t₁} (F_{t,s})* η_t = (F_{t₁,s})* ( L_{X_{t₁}} η_t + d/dt |_{t=t₁} η_t ),

where L denotes the Lie derivative.
This last identity is useful to prove the Darboux theorem.
|
https://en.wikipedia.org/wiki/Osteochondritis%20dissecans
|
Osteochondritis dissecans (OCD or OD) is a joint disorder primarily of the subchondral bone in which cracks form in the articular cartilage and the underlying subchondral bone. OCD usually causes pain during and after sports. In later stages of the disorder there will be swelling of the affected joint, which catches and locks during movement. Physical examination in the early stages shows only pain as a symptom; in later stages there may be an effusion, tenderness, and a crackling sound with joint movement.
OCD is caused by blood deprivation of the secondary physes around the bone core of the femoral condyle. This happens to the epiphyseal vessels under the influence of repetitive overloading of the joint during running and jumping sports. During growth, such chondronecrotic areas grow into the subchondral bone, where they show as bone defect areas under the articular cartilage. The bone then heals to the surrounding condylar bone in about 50% of cases; otherwise a pseudarthrosis develops between the condylar bone core and the osteochondritis flake, leaving the articular cartilage it supports prone to damage. The damage is driven by ongoing sports overload. The result is fragmentation (dissection) of both cartilage and bone, and the free movement of these bone and cartilage fragments within the joint space, causing pain, blockage and further damage. OCD has a typical anamnesis: pain during and after sports without any history of trauma. Some symptoms of the late stages of osteochondritis dissecans are also found in other diseases, such as rheumatoid disease of children and meniscal ruptures. The disease can be confirmed by X-rays, computed tomography (CT) or magnetic resonance imaging (MRI) scans.
Non-surgical treatment is successful in 50% of the cases. If in late stages the lesion is unstable and the cartilage is damaged, surgical intervention is an option as the ability for articular cartilage to heal is limited. When possible, non-operative forms of managemen
|
https://en.wikipedia.org/wiki/Fecal%20microbiota%20transplant
|
Fecal microbiota transplant (FMT), also known as a stool transplant, is the process of transferring fecal bacteria and other microbes from a healthy individual into another individual. FMT is an effective treatment for Clostridioides difficile infection (CDI). For recurrent CDI, FMT is more effective than vancomycin alone, and may improve the outcome after the first index infection.
Side effects may include a risk of infections, therefore the donor should be screened.
With CDI becoming more common, FMT is gaining increasing prominence, with some experts calling for it to become the first-line therapy for CDI. FMT has been used experimentally to treat other gastrointestinal diseases, including colitis, constipation, irritable bowel syndrome, and neurological conditions, such as multiple sclerosis and Parkinson's. In the United States, human feces has been regulated as an experimental drug since 2013. In the United Kingdom, FMT regulation is under the remit of the Medicines and Healthcare products Regulatory Agency.
Medical uses
Clostridioides difficile infection
Fecal microbiota transplant is approximately 85–90% effective in people with CDI for whom antibiotics have not worked or in whom the disease recurs following antibiotics. Most people with CDI recover with one FMT treatment.
A 2009 study found that fecal microbiota transplant was an effective and simple procedure that was more cost-effective than continued antibiotic administration and reduced the incidence of antibiotic resistance.
FMT was once considered a "last resort therapy" by some medical professionals, owing to its unusual nature, its invasiveness compared with antibiotics, the perceived potential risk of infection transmission, and the lack of Medicare coverage for donor stool. However, position statements by specialists in infectious diseases and other societies have been moving toward acceptance of FMT as a standard therapy for relapsing CDI, and toward Medicare coverage in the United States.
It has been recommend
|
https://en.wikipedia.org/wiki/King%20%26%20Balloon
|
King & Balloon is a fixed shooter arcade video game released by Namco in 1980 and licensed to GamePlan for U.S. manufacture and distribution. It runs on the Namco Galaxian hardware, based on the Z80 microprocessor, with an extra Zilog Z80 microprocessor driving a DAC for speech; it was one of the first games to feature speech synthesis. An MSX port was released in Japan in 1984.
Gameplay
The player controls two green men with an orange cannon, stationed on the parapet of a castle, that fires at a fleet of hot-air balloons. Below the cannon, the King moves slowly back and forth on the ground as the balloons return fire and dive toward him. If a balloon reaches the ground, it will sit there until the King walks into it, at which time it lifts off with him. The player must then shoot the balloon to free the King, who will parachute safely to the ground. At times, two or more diving balloons can combine to form a single larger one, which awards extra points and splits apart when hit.
The cannon is destroyed by collision with balloons or their shots, but is replaced after a brief delay with no effect on the number of remaining lives. One life is lost whenever a balloon carries the King off the top of the screen; the game ends when all lives are lost.
As in Galaxian, the round number stops increasing at round 48.
Voice
The King speaks when he is captured ("HELP!"), when he is rescued ("THANK YOU"), and when he is carried away ("BYE BYE!"). The balloons make the same droning sound as the aliens from Galaxian, released the previous year, and the cannon's shots make the same sound as those of the player's ship (the "Galaxip") in that game.
In the original Japanese version of the game, the King speaks English with a heavy Japanese accent, saying "herupu" ("help!"), "sankyū" ("thank you"), and "baibai" ("bye bye!"). The U.S. version of the game features a different voice for the King without the Japanese accent.
Legacy
King & Balloon was later featured in Namco Muse
|
https://en.wikipedia.org/wiki/Inner%20mitochondrial%20membrane
|
The inner mitochondrial membrane (IMM) is the mitochondrial membrane which separates the mitochondrial matrix from the intermembrane space.
Structure
The structure of the inner mitochondrial membrane is extensively folded and compartmentalized. The numerous invaginations of the membrane are called cristae, separated by crista junctions from the inner boundary membrane juxtaposed to the outer membrane. Cristae significantly increase the total membrane surface area compared to a smooth inner membrane and thereby the available working space for oxidative phosphorylation.
The inner membrane creates two compartments. The region between the inner and outer membrane, called the intermembrane space, is largely continuous with the cytosol, while the more sequestered space inside the inner membrane is called the matrix.
Cristae
For typical liver mitochondria, the area of the inner membrane is about 5 times as large as the outer membrane due to cristae. This ratio is variable and mitochondria from cells that have a greater demand for ATP, such as muscle cells, contain even more cristae. Cristae membranes are studded on the matrix side with small round protein complexes known as F1 particles, the site of proton-gradient driven ATP synthesis. Cristae affect overall chemiosmotic function of mitochondria.
Cristae junctions
Cristae and the inner boundary membranes are separated by junctions. The ends of cristae are partially closed by transmembrane protein complexes that bind head to head and link opposing crista membranes in a bottleneck-like fashion. For example, deletion of the junction protein IMMT leads to a reduced inner membrane potential, impaired growth, and dramatically aberrant inner membrane structures which form concentric stacks instead of the typical invaginations.
Composition
The inner membrane of mitochondria is similar in lipid composition to the membrane of bacteria. This phenomenon can be explained by the endosymbiont hypothesis of the origin of mito
|
https://en.wikipedia.org/wiki/Personal%20Internet%20Communicator
|
The Personal Internet Communicator (PIC) is a consumer device released by AMD in 2004 to allow people in emerging countries access to the internet. Originally part of AMD's 50x15 Initiative, the PIC has been deployed by Internet service providers (ISPs) in several developing countries. It is based on an AMD Geode CPU and uses Microsoft Windows CE and Microsoft Internet Explorer 6.
The fanless PC runs the low-power Geode x86 processor and is equipped with 128 MB DDR memory and a 10 GB 3.5-inch hard drive. The device's price is $185 with a keyboard, mouse, and preinstalled software for basic personal computing and internet/email access and $249 with a monitor included.
Transfer to Data Evolution Corporation
AMD stopped work on the Personal Internet Communicator project after nearly two years of planning and development. In December 2006 AMD transferred PIC manufacturing assets to Data Evolution Corporation and the device was then marketed as the decTOP.
Local service providers, such as telecommunications firms and government-sponsored communications initiatives, brand, promote, and sell the PIC. AMD owns the design of the PIC, but the device is manufactured by Solectron, a high-volume contract manufacturing specialist. Other companies involved in the development include Seagate and Samsung. The ISPs include the Tata Group (India), CRC (México), Telmex (México), and Cable and Wireless (in the Caribbean).
Technology
Hardware
The device is designed for minimal cost like a consumer audio/video appliance, is not internally expandable, and comes equipped with a minimum set of interfaces which include a VGA graphics display interface, four USB 1.1 ports, a built-in 56 kbit/s ITU v.92 Fax/Modem and an AC'97 audio interface providing sound capabilities. There is also an IDE-connector and the case can house a normal 3.5" HDD.
Hardware specifications
AMD Geode GX 500@1.0W processor, 366 MHz clock rate
128 MBytes DDR RAM (Up to 512 MBytes)
VGA display inte
|
https://en.wikipedia.org/wiki/PelB%20leader%20sequence
|
The pelB leader sequence is a sequence of amino acids which, when attached to a protein, directs the protein to the bacterial periplasm, where the sequence is removed by a signal peptidase. Specifically, pelB refers to pectate lyase B of Erwinia carotovora CE. The leader sequence consists of the 22 N-terminal amino acid residues. This leader sequence can be attached to any other protein (on the DNA level) resulting in a transfer of such a fused protein to the periplasmic space of Gram-negative bacteria, such as Escherichia coli, often used in genetic engineering.
Protein secretion can increase the stability of cloned gene products. For instance, the half-life of recombinant proinsulin has been shown to increase 10-fold when the protein is secreted to the periplasmic space.
One of pelB's possible applications is to direct coat protein-antigen fusions to the cell surface for the construction of engineered bacteriophages for the purpose of phage display.
The Pectobacterium carotovorum pelB leader sequence commonly used in molecular biology has the sequence MKYLLPTAAAGLLLLAAQPAMA (UniProt Q04085).
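As a minimal illustrative sketch (not part of the original article), a pelB fusion can be represented at the sequence level by prepending the 22-residue leader to a cargo protein. The helper function and the cargo sequence below are hypothetical:

```python
# pelB leader (22 N-terminal residues, UniProt Q04085), as given above.
PELB_LEADER = "MKYLLPTAAAGLLLLAAQPAMA"

def fuse_pelb(cargo: str) -> str:
    """Return the pelB-cargo fusion as translated, before the signal
    peptidase removes the leader in the periplasm. The cargo's own
    initiator methionine is dropped, since the fusion supplies one.
    (Hypothetical helper for illustration only.)"""
    return PELB_LEADER + cargo.removeprefix("M")

# Hypothetical His-tagged cargo protein:
fusion = fuse_pelb("MHHHHHHGSSG")
assert fusion.startswith(PELB_LEADER) and len(PELB_LEADER) == 22
```

In practice the fusion is of course constructed at the DNA level in an expression vector; this string manipulation only mirrors the resulting amino-acid sequence.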
|
https://en.wikipedia.org/wiki/Musical%20isomorphism
|
In mathematics—more specifically, in differential geometry—the musical isomorphism (or canonical isomorphism) is an isomorphism between the tangent bundle and the cotangent bundle of a pseudo-Riemannian manifold induced by its metric tensor. There are similar isomorphisms on symplectic manifolds. The term musical refers to the use of the symbols (flat) and (sharp).
In the notation of Ricci calculus, it is also known as raising and lowering indices.
Motivation
In linear algebra, a finite-dimensional vector space V is isomorphic to its dual space V*, but not canonically isomorphic to it. On the other hand, a finite-dimensional vector space V endowed with a non-degenerate bilinear form ⟨·,·⟩ is canonically isomorphic to its dual, the isomorphism being given by:

v ↦ ⟨v, ·⟩

An example is where V is a Euclidean space and ⟨·,·⟩ is its inner product.

Musical isomorphisms are the global version of this isomorphism and its inverse for the tangent bundle and cotangent bundle of a (pseudo-)Riemannian manifold (M, g). They are isomorphisms of vector bundles which are, at any point x, the above isomorphism applied to the (pseudo-)Euclidean space T_x M (the tangent space of M at the point x) endowed with the inner product g_x. More generally, musical isomorphisms always exist between a vector bundle endowed with a bundle metric and its dual.

Because every paracompact manifold can be endowed with a Riemannian metric, the musical isomorphisms show that on those spaces a vector bundle is always isomorphic to its dual (but not canonically unless a (pseudo-)Riemannian metric has been associated with the manifold).
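As an illustrative numerical sketch (not from the article), the flat and sharp maps in coordinates are just matrix multiplication by the metric g_ij and by its inverse; the metric below is a hypothetical example:

```python
import numpy as np

# Hypothetical pseudo-Riemannian metric on R^2 in some frame: g = diag(-1, 1).
g = np.diag([-1.0, 1.0])
g_inv = np.linalg.inv(g)

v = np.array([3.0, 4.0])   # components v^i of a tangent vector

# Flat ("lowering an index"): v_i = g_ij v^j gives the covector components.
v_flat = g @ v

# Sharp ("raising an index"): applying the inverse metric recovers v.
v_sharp = g_inv @ v_flat

assert np.allclose(v_sharp, v)   # flat and sharp are mutually inverse
```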
Discussion
Let (M, g) be a pseudo-Riemannian manifold. Suppose (e_1, ..., e_n) is a moving tangent frame (see also smooth frame) for the tangent bundle TM with, as dual frame (see also dual basis), the moving coframe (a moving tangent frame for the cotangent bundle T*M; see also coframe) (e^1, ..., e^n). Then, locally, we may express the pseudo-Riemannian metric (which is a 2-covariant tensor field that is symmetric and nondegenerate) as g = g_ij e^i ⊗ e^j (where
|
https://en.wikipedia.org/wiki/Mongoose-V
|
The Mongoose-V is a radiation-hardened and expanded 10–15 MHz version of the MIPS R3000 CPU, a 32-bit microprocessor intended for spacecraft onboard computer applications. Mongoose-V was developed by Synova of Melbourne, Florida, USA, with support from the NASA Goddard Space Flight Center.
The Mongoose-V processor first flew on NASA's Earth Observing-1 (EO-1) satellite launched in November 2000 where it functioned as the main flight computer. A second Mongoose-V controlled the satellite's solid-state data recorder.
The Mongoose-V requires 5 volts and is packaged into a 256-pin ceramic quad flatpack (CQFP).
Examples of spacecraft that use the Mongoose-V include:
Earth Observing-1 (EO-1)
NASA's Microwave Anisotropy Probe (MAP), launched in June 2001, carried a Mongoose-V flight computer similar to that on EO-1.
NASA's Space Technology 5 series of microsatellites
CONTOUR
TIMED
Pluto probe New Horizons
See also
RAD750 Power PC
LEON
ERC32
Radiation hardening
Communications survivability
Faraday cage
Institute for Space and Defense Electronics, Vanderbilt University
Mars Reconnaissance Orbiter
MESSENGER Mercury probe
Mars rovers
TEMPEST
|
https://en.wikipedia.org/wiki/Kleine%E2%80%93Levin%20syndrome
|
Kleine–Levin syndrome (KLS) is a rare neurological disorder characterized by persistent episodic hypersomnia accompanied by cognitive and behavioral changes. These changes may include disinhibition, sometimes manifested through hypersexuality, hyperphagia or emotional lability, and other symptoms, such as derealization. Patients generally experience recurrent episodes of the condition for more than a decade, which may return at a later age. Individual episodes generally last more than a week, sometimes lasting for months. The condition greatly affects the personal, professional, and social lives of those with KLS. The severity of symptoms and the course of the syndrome vary between those with KLS. Patients commonly have about 20 episodes over about a decade. Several months may elapse between episodes.
The onset of the condition usually follows a viral infection (72% of patients); several different viruses have been observed to trigger KLS. It is generally only diagnosed after similar conditions have been excluded; MRI, CT scans, lumbar puncture, and toxicology tests are used to rule out other possibilities. The syndrome's mechanism is not known, but the thalamus is thought to possibly play a role. SPECT has shown thalamic hypoperfusion in patients during episodes.
KLS is very rare, occurring at a rate of 1 in 500 000, which limits research into genetic factors. The condition primarily affects teenagers (81% of reported patients), with a bias towards males (68-72% of cases), though females can also be affected, and the age of onset varies. There is no known cure, and there is little evidence supporting drug treatment. Lithium has been reported to have limited effects in case reports, decreasing the length of episodes and duration between them in some patients. Stimulants have been shown to promote wakefulness during episodes, but they do not counteract cognitive symptoms or decrease the duration of episodes. The condition is named after Willi Kleine and Max Levin
|
https://en.wikipedia.org/wiki/Class%20I%20PI%203-kinases
|
Class I PI 3-kinases are a subgroup of the enzyme family, phosphoinositide 3-kinase that possess a common protein domain structure, substrate specificity, and method of activation. Class I PI 3-kinases are further divided into two subclasses, class IA PI 3-kinases and class IB PI 3-kinases.
Class IA PI 3-kinases
Class IA PI 3-kinases are activated by receptor tyrosine kinases (RTKs).
There are three catalytic subunits that are classified as class IA PI 3-kinases:
p110α
p110β
p110δ
There are currently five regulatory subunits that are known to associate with class IA PI 3-kinases catalytic subunits:
p85α and p85β
p55α and p55γ
p50α
Class IB PI 3-kinases
Class IB PI 3-kinases are activated by G-protein-coupled receptors (GPCRs).
The only known class IB PI 3-kinase catalytic subunit is p110γ.
There are two known regulatory subunits for p110γ:
p101
p84/ p87PIKAP.
See also
Phosphoinositide 3-kinase#Class I
Phosphoinositide 3-kinase inhibitor
|
https://en.wikipedia.org/wiki/Class%20II%20PI%203-kinases
|
Class II PI 3-kinases are a subgroup of the enzyme family, phosphoinositide 3-kinase that share a common protein domain structure, substrate specificity and method of activation.
Class II PI 3-kinases were the most recently identified class of PI 3-kinases.
There are three class II PI 3-kinase isoforms expressed in mammalian cells:
PI3K-C2α encoded by the PIK3C2A gene
PI3K-C2β encoded by the PIK3C2B gene
PI3K-C2γ encoded by the PIK3C2G gene
See also
Phosphoinositide 3-kinase
|
https://en.wikipedia.org/wiki/Brain%20size
|
The size of the brain is a frequent topic of study within the fields of anatomy, biological anthropology, animal science and evolution. Measuring brain size and cranial capacity is relevant both to humans and other animals, and can be done by weight or volume via MRI scans, by skull volume, or by neuroimaging intelligence testing. The relationship between brain size and intelligence remains a controversial although frequently investigated question.
Humans
In humans, the right cerebral hemisphere is typically larger than the left, whereas the cerebellar hemispheres are typically closer in size. The adult human brain weighs on average about 1300–1400 g. In men the average weight is about 1370 g and in women about 1200 g. The volume is around 1260 cm3 in men and 1130 cm3 in women, although there is substantial individual variation. Another study put adult human brain weight at 1300–1400 g and newborn brain weight at 350–400 g. There is a range of volumes and weights, not just one number that one can definitively rely on, as with body mass. It is also important to note that variation between individuals is not as important as variation within species, as overall the differences are much smaller. The mechanisms of interspecific and intraspecific variation also differ.
Variation and evolution
From early primates to hominids and finally to Homo sapiens, the brain becomes progressively larger, with the exception of the extinct Neanderthals, whose brain size exceeded that of modern Homo sapiens. The volume of the human brain has increased over the course of human evolution (see Homininae), starting from about 600 cm3 in Homo habilis and reaching up to 1680 cm3 in Homo neanderthalensis, the hominid with the largest brain size. Some data suggest that average brain size has decreased since then, including a study concluding the decrease "was surprisingly recent, occurring in the last 3,000 years". However, a reanalysis of the same data suggests that brain size has not decreased, and tha
|
https://en.wikipedia.org/wiki/Resampling%20%28statistics%29
|
In statistics, resampling is the creation of new samples based on one observed sample.
Resampling methods are:
Permutation tests (also re-randomization tests)
Bootstrapping
Cross validation
Permutation tests
Permutation tests rely on resampling the original data under the assumption that the null hypothesis is true. From the resampled data one can determine how likely the observed data are to occur under the null hypothesis.
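A minimal sketch of a two-sample permutation test (an illustrative implementation, not taken from any particular source; the function and parameter names are hypothetical):

```python
import numpy as np

def permutation_test(x, y, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in means.

    Under the null hypothesis the group labels are exchangeable, so the
    pooled data are repeatedly shuffled and the statistic recomputed;
    the p-value is the fraction of shuffles at least as extreme as the
    observed statistic.
    """
    rng = np.random.default_rng(seed)
    observed = np.mean(x) - np.mean(y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = np.mean(pooled[:len(x)]) - np.mean(pooled[len(x):])
        if abs(stat) >= abs(observed):
            count += 1
    return count / n_perm
```

A small p-value indicates the observed difference would be unlikely if the two samples came from the same distribution.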
Bootstrap
Bootstrapping is a statistical method for estimating the sampling distribution of an estimator by sampling with replacement from the original sample, most often with the purpose of deriving robust estimates of standard errors and confidence intervals of a population parameter like a mean, median, proportion, odds ratio, correlation coefficient or regression coefficient. It has been called the plug-in principle, as it is the method of estimation of functionals of a population distribution by evaluating the same functionals at the empirical distribution based on a sample.
For example, when estimating the population mean, this method uses the sample mean; to estimate the population median, it uses the sample median; to estimate the population regression line, it uses the sample regression line.
It may also be used for constructing hypothesis tests. It is often used as a robust alternative to inference based on parametric assumptions when those assumptions are in doubt, or where parametric inference is impossible or requires very complicated formulas for the calculation of standard errors. Bootstrapping techniques are also used in the updating-selection transitions of particle filters, genetic-type algorithms and related resample/reconfiguration Monte Carlo methods used in computational physics. In this context, the bootstrap is used to sequentially replace empirical weighted probability measures by empirical measures. The bootstrap makes it possible to replace samples with low weights by copies of the samples with high weights.
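The plug-in idea described above can be sketched as a percentile bootstrap confidence interval (an illustrative implementation; the function name and defaults are assumptions):

```python
import numpy as np

def bootstrap_ci(sample, stat=np.mean, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic.

    Draws resamples of the same size with replacement from the observed
    sample, evaluates the statistic on each (the plug-in principle), and
    reads off the empirical alpha/2 and 1 - alpha/2 quantiles.
    """
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample)
    boot_stats = np.array([
        stat(rng.choice(sample, size=sample.size, replace=True))
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(boot_stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

Swapping `np.mean` for `np.median` or any other statistic gives an interval for that quantity without new derivations, which is the method's main appeal.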
Cross-validation
Cross-va
|
https://en.wikipedia.org/wiki/Saynoto0870.com
|
saynoto0870.com is a UK website with a directory of non-geographic telephone numbers and their geographic alternatives.
The website, which primarily started as a directory of cheaper alternatives to 0870 numbers (hence the name), also lists geographic-rate (01, 02 or 03) or free-to-caller (080) alternatives for 0843, 0844, 0845, 0871, 0872, and 0873 revenue-share numbers.
The vast majority of telephone numbers are submitted by visitors to the website, but the discussion board also offers a place for visitors to request alternative telephone numbers if they are not included in the database. Some companies that advertise a non-geographic number will also offer a number for calling from abroad – usually starting +44 1 or +44 2 – this number can be used within the UK (removing the +44 and replacing it with 0) to avoid the cost of calling non-geographic telephone numbers. Some companies will also offer a geographic alternative if asked.
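The +44 conversion described above is a simple string transformation; a hypothetical sketch (the function name and example number are illustrative only — the example uses a number from the Ofcom drama range):

```python
def uk_domestic(number: str) -> str:
    """Convert an international +44 (UK) number to domestic dialling form
    by dropping the +44 country code and prefixing the digits with 0."""
    cleaned = number.replace(" ", "")
    if not cleaned.startswith("+44"):
        raise ValueError("expected a +44 (UK) number")
    return "0" + cleaned[3:]

assert uk_domestic("+44 20 7946 0000") == "02079460000"
```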
Motivations behind the website
Organisations using 084, 087 and 09 non-geographic telephone numbers (except for 0870 numbers from 1 August 2009 to 1 July 2015) automatically impose a Service Charge on all callers. Calls to 01, 02 and 03 numbers, standard 07 mobile numbers and 080 numbers do not attract such an additional charge.
The revenue derived from the Service Charge is used by the call recipient to cover the call handling and call routing costs incurred at their end of the call. Any excess is paid out under a "revenue share" agreement or may be used as a contribution towards meeting other expenses such as a lease for switchboard equipment.
Non-geographic telephone numbers beginning 084 and 087 have increasingly been employed for inappropriate uses such as bookings, renewals, refunds, cancellations, customer services and complaints, and for contacting public services including essential health services.
Most landline providers offer 'inclusive' calls of up to one hour to standard telephone numbers (those beginning with 01, 02
|
https://en.wikipedia.org/wiki/FROSTBURG
|
FROSTBURG was a Connection Machine 5 (CM-5) massively parallel supercomputer used by the US National Security Agency (NSA) to perform higher-level mathematical calculations. The CM-5 was built by the Thinking Machines Corporation, based in Cambridge, Massachusetts, at a cost of US$25 million. The system was installed at NSA in 1991 and operated until 1997. It was the first massively parallel processing computer bought by NSA, originally containing 256 processing nodes. The system was upgraded in 1993 with another 256 nodes, for a total of 512. The system had a total of 500 billion 32-bit words (≈ 2 terabytes) of storage and 2.5 billion words (≈ 10 gigabytes) of memory, and could perform at a theoretical maximum of 65.5 gigaFLOPS. Its operating system, CMost, was based on Unix but optimized for parallel processing.
FROSTBURG is now on display at the National Cryptologic Museum.
See also
HARVEST
Cryptanalytic computer
|
https://en.wikipedia.org/wiki/Salicylaldehyde
|
Salicylaldehyde (2-hydroxybenzaldehyde) is the organic compound with the formula C7H6O2, a benzaldehyde bearing a hydroxyl group in the ortho position (2-(HO)C6H4CHO). Along with 3-hydroxybenzaldehyde and 4-hydroxybenzaldehyde, it is one of the three isomers of hydroxybenzaldehyde. This colorless oily liquid has a bitter almond odor at higher concentrations. Salicylaldehyde is a key precursor to a variety of chelating agents, some of which are commercially important.
Production
Salicylaldehyde is prepared from phenol and chloroform by heating with sodium hydroxide or potassium hydroxide in a Reimer–Tiemann reaction:
Alternatively, it is produced by condensation of phenol or its derivatives with formaldehyde to give hydroxybenzyl alcohol, which is oxidized to the aldehyde.
Salicylaldehydes in general may be prepared by other ortho-selective formylation reactions from the corresponding phenol, for instance by the Duff reaction, Reimer–Tiemann reaction, or by treatment with paraformaldehyde in the presence of magnesium chloride and a base.
Natural occurrences
Salicylaldehyde was identified as a characteristic aroma component of buckwheat.
It is also one of the components of castoreum, the exudate from the castor sacs of the mature North American beaver (Castor canadensis) and the European beaver (Castor fiber), used in perfumery.
Furthermore, salicylaldehyde occurs in the larval defensive secretions of several leaf beetle species that belong to the subtribe Chrysomelina. An example of a leaf beetle species that produces salicylaldehyde is the red poplar leaf beetle Chrysomela populi.
Reactions and applications
Salicylaldehyde is used to make the following:
Oxidation with hydrogen peroxide gives catechol (1,2-dihydroxybenzene) (Dakin reaction).
Etherification with chloroacetic acid followed by cyclisation gives the heterocycle benzofuran (coumarone). The first step in this reaction to the substituted benzofuran is called the Rap–Stoermer condensation after E. Rap (1895) and R. Stoermer (1900).
Salicylaldehyde is co
|
https://en.wikipedia.org/wiki/Sphingosyl%20phosphatide
|
Sphingosyl phosphatide refers to a lipid containing phosphorus and a long-chain base.
|
https://en.wikipedia.org/wiki/Color-glass%20condensate
|
Color-glass condensate (CGC) is a type of matter theorized to exist in atomic nuclei when they collide at near the speed of light. During such collision, one is sensitive to the gluons that have very small momenta, or more precisely a very small Bjorken scaling variable.
The small momenta gluons dominate the description of the collision because their density is very large. This is because a high-momentum gluon is likely to split into smaller momentum gluons.
When the gluon density becomes large enough, gluon–gluon recombination limits how large the gluon density can be. When gluon recombination balances gluon splitting, the density of gluons saturates, producing new and universal properties of hadronic matter. This state of saturated gluon matter is called the color-glass condensate.
"Color" in the name "color-glass condensate" refers to a type of charge that quarks and gluons carry as a result of the strong nuclear force. The word "glass" is borrowed from the term for silica and other materials that are disordered and act like solids on short time scales but liquids on long time scales. In the CGC phase, the gluons themselves are disordered and do not change their positions rapidly. "Condensate" means that the gluons have a very high density.
The color-glass condensate describes an intrinsic property of matter that can only be observed under high-energy conditions such as those at RHIC and the Large Hadron Collider, as well as the future Electron–Ion Collider.
The color-glass condensate is important because it is proposed as a universal form of matter that describes the properties of all high-energy, strongly interacting particles. It has simple properties that follow from first principles in the theory of strong interactions, quantum chromodynamics. It has the potential to explain many unsolved problems such as how particles are produced in high-energy collisions, and the distribution of matter itself inside of these particles.
Researchers at CERN beli
|
https://en.wikipedia.org/wiki/Eyring%20equation
|
The Eyring equation (occasionally also known as Eyring–Polanyi equation) is an equation used in chemical kinetics to describe changes in the rate of a chemical reaction against temperature. It was developed almost simultaneously in 1935 by Henry Eyring, Meredith Gwynne Evans and Michael Polanyi. The equation follows from the transition state theory, also known as activated-complex theory. If one assumes a constant enthalpy of activation and constant entropy of activation, the Eyring equation is similar to the empirical Arrhenius equation, despite the Arrhenius equation being empirical and the Eyring equation based on statistical mechanical justification.
General form
The general form of the Eyring–Polanyi equation somewhat resembles the Arrhenius equation:

k = κ (k_B T / h) exp(−ΔG‡ / (R T))

where k is the rate constant, ΔG‡ is the Gibbs energy of activation, κ is the transmission coefficient, k_B is the Boltzmann constant, T is the temperature, and h is the Planck constant.
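A minimal numerical sketch of evaluating this equation (illustrative only; the function name, units and example values are assumptions, with ΔG‡ in J/mol so that the gas constant R appears in the exponent):

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
H = 6.62607015e-34     # Planck constant, J*s
R = 8.31446            # gas constant, J/(mol*K)

def eyring_rate(delta_g_act, temperature, kappa=1.0):
    """Eyring rate constant k = kappa * (k_B T / h) * exp(-dG_act / (R T)).

    delta_g_act is the Gibbs energy of activation in J/mol;
    kappa is the transmission coefficient (often taken to be 1).
    """
    return kappa * (K_B * temperature / H) * math.exp(
        -delta_g_act / (R * temperature))
```

For ΔG‡ = 80 kJ/mol at 298 K this evaluates to roughly 0.06 s⁻¹, and, as the equation implies, the rate grows steeply with temperature.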
The transmission coefficient is often assumed to be equal to one as it reflects what fraction of the flux through the transition state proceeds to the product without recrossing the transition state. So, a transmission coefficient equal to one means that the fundamental no-recrossing assumption of transition state theory holds perfectly. However, is typically not one because (i) the reaction coordinate chosen for the process at hand is usually not perfect and (ii) many barrier-crossing processes are somewhat or even strongly diffusive in nature. For example, the transmission coefficient of methane hopping in a gas hydrate from one site to an adjacent empty site is between 0.25 and 0.5. Typically, reactive flux correlation function (RFCF) simulations are performed in order to explicitly calculate from the resulting plateau in the RFCF. This approach is also referred to as the Bennett-Chandler approach, which yields a dynamical correction to the standard transition state theory-based rate constant.
It can be rewritten as:

k = (κ k_B T / h) exp(ΔS‡ / R) exp(−ΔH‡ / (R T))

where ΔH‡ is the enthalpy of activation and ΔS‡ is the entropy of activation.
One can put
|
https://en.wikipedia.org/wiki/Xbox%20Games%20Store
|
Xbox Games Store (formerly Xbox Live Marketplace) is a digital distribution platform used by Microsoft's Xbox 360 video game console. The service allows users to download or purchase video games (including both Xbox Live Arcade games and full Xbox 360 titles), add-ons for existing games, game demos along with other miscellaneous content such as gamer pictures and Dashboard themes.
Initially used by the Xbox One at its launch in 2013, the Xbox Games Store was replaced after five years by the Microsoft Store as the standard digital storefront for all Windows 10 devices. The subsequent Xbox Series X/S consoles also use the Microsoft Store.
The service also previously offered sections for downloading video content, such as films and television episodes; as of late 2012, this functionality was superseded by Xbox Music and Xbox Video (now known as Groove Music and Microsoft Movies & TV respectively).
In August 2023, Microsoft announced that the Xbox 360 Store will be shutting down on July 29, 2024; however, backwards-compatible Xbox 360 titles will remain available for purchase on the Microsoft Store for Xbox One and Xbox Series X/S after the Xbox 360 Store's sunset date.
Services
Xbox Live Arcade
The Xbox Live Arcade (XBLA) branding encompasses digital-only games that can be purchased from Xbox Games Store on the Xbox 360, including ports of classic games and new original titles.
Games on Demand
The Games on Demand section of Xbox Games Store allows users to purchase downloadable versions of Xbox 360 titles, along with games released for the original Xbox. Some delisted downloadable content for the respective Xbox 360 games is also included in this section.
Xbox Live Indie Games
As part of the "New Xbox Experience" update launched on November 19, 2008, Microsoft launched Xbox Live Community Games, later renamed "Xbox Live Indie Games", a service similar to Xbox Live Arcade (XBLA) with smaller and less expensive games created by independ
|
https://en.wikipedia.org/wiki/168%20%28number%29
|
168 (one hundred [and] sixty-eight) is the natural number following 167 and preceding 169.
In mathematics
168 is an even number, a composite number, an abundant number, and an idoneal number.
There are 168 primes less than 1000. 168 is the product of the first two perfect numbers.
168 is the order of the group PSL(2,7), the second smallest nonabelian simple group.
From Hurwitz's automorphisms theorem, 168 is the maximum possible number of automorphisms of a genus 3 Riemann surface, this maximum being achieved by the Klein quartic, whose symmetry group is PSL(2,7). The Fano plane has 168 symmetries.
168 is the largest known n such that 2^n does not contain all decimal digits.
168 is the fourth Dedekind number.
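Several of the facts above are easy to verify directly; a small illustrative check (standard library only, with a simple helper function whose name is hypothetical):

```python
def primes_below(n):
    """Sieve of Eratosthenes: all primes strictly below n."""
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

assert len(primes_below(1000)) == 168      # 168 primes below 1000
assert 6 * 28 == 168                       # product of the first two perfect numbers
assert 7 * (7 ** 2 - 1) // 2 == 168        # |PSL(2,7)| = q(q^2 - 1)/gcd(2, q - 1), q = 7
assert "2" not in str(2 ** 168)            # 2^168 lacks the digit 2
```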
In astronomy
168P/Hergenrother is a periodic comet in the Solar System
168 Sibylla is a dark Main belt asteroid
In the military
was an Imperial Japanese Navy during World War II
is a United States Navy fleet tugboat
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy during World War I
was a United States Navy during World War II
was a United States Navy steamer during World War II
was a La Salle-class transport during World War II
In movies
The 168 Film Project in Los Angeles
In transportation
New York City Subway stations
168th Street (New York City Subway); a subway station complex at 168th Street and Broadway consisting of:
168th Street (IRT Broadway – Seventh Avenue Line); serving the train
168th Street (IND Eighth Avenue Line); serving the trains
168th Street (BMT Jamaica Line); was the former terminal of the BMT Jamaica Line in Queens
British Rail Class 168
VASP Flight 168 from Rio de Janeiro, Brazil to Fortaleza; crashed on June 8, 1982
In other fields
168 is also:
The year AD 168 or 168 BC
168 AH is a year in the Islamic calendar that corresponds to 784 – 785 CE
The number of hours in a week, or 7 × 24
|
https://en.wikipedia.org/wiki/Mini-Europe
|
Mini-Europe is a miniature park located in the Bruparck entertainment park, at the foot of the Atomium, in Brussels, Belgium. Mini-Europe has reproductions of monuments in the European Union and other countries within the continent of Europe on display, at a scale of 1:25. Roughly 80 cities and 350 buildings are represented. Mini-Europe receives 350,000 visitors per year and has a turnover of €4 million.
Mini-Europe is the brainchild of Johannes A. Lorijn, who founded similar miniature parks in Austria and Spain. The park contains live action models such as trains, mills, an erupting Mount Vesuvius, and cable cars. A guide gives the details on all the monuments. At the end of the visit, the Spirit of Europe exhibition gives an interactive overview of the EU in the form of multimedia games.
The park is built on an area of . The initial investment was of €10 million in 1989, on its inauguration by then-Prince Philip of Belgium.
Exhibits
Building the monuments
The monuments exhibited are chosen for the quality of their architecture or their European symbolism. Most of the monuments were made using moulds. The final copy used to be cast from epoxy resin, but nowadays polyester is used. Three of the monuments were made out of stone (e.g. the Leaning Tower of Pisa, in marble). A computer-assisted milling procedure was used for two of the models. After painting, the monuments are installed on site, together with decorations and lighting.
Many of the monuments were financed by European countries or regions. The Brussels Grand-Place model cost €350,000 to make. The Cathedral of Santiago de Compostela required more than 24,000 hours of work.
Gardens
Ground cover plants, dwarf trees, bonsais and grafted trees are used alongside miniature monuments, and the paths are adorned with bushes and flowers.
List of models
Austria
Melk Abbey
Belgium
Grand-Place / City Hall, Brussels
The Belfry, Bruges
Town Hall, Leuven
Castle of Vêves, Celles
Citadel and Notre Dame Church, Dinant
|
https://en.wikipedia.org/wiki/180th%20meridian
|
The 180th meridian or antimeridian is the meridian 180° both east and west of the prime meridian in a geographical coordinate system. The longitude at this line can be given as either east or west.
On Earth, the prime and 180th meridians form a great circle that divides the planet into the Western and Eastern Hemispheres. The antimeridian passes mostly through the open waters of the Pacific Ocean but also runs across land in Russia, Fiji, and Antarctica. An important function of this meridian is its use as the basis for the International Date Line, which snakes around national borders to maintain date consistency within the territories of Russia, the United States, Kiribati, Fiji and New Zealand.
Starting at the North Pole of the Earth and heading south to the South Pole, the 180th meridian passes through:
The meridian also passes between (but not particularly close to):
through the Aleutian Island chain of US territory
the Gilbert Islands and the Phoenix Islands of Kiribati
North Island and the Kermadec Islands of New Zealand
the Bounty Islands and the Chatham Islands, also of New Zealand
The only places where roads cross this meridian are in Fiji and Russia. Fiji has several such roads and some buildings very close to it. Russia has three roads in the Chukotka Autonomous Okrug.
Software representation problems
Many geographic software libraries or data formats project the world to a rectangle; very often this rectangle is split exactly at the 180th meridian. This often makes it non-trivial to do simple tasks (like representing an area, or a line) over the 180th meridian. Some examples:
The GeoJSON specification strongly suggests splitting geometries so that neither of their parts cross the antimeridian.
In OpenStreetMap, areas (like the boundary of Russia) are split at the 180th meridian.
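A minimal sketch of such a split for a line segment, following the GeoJSON (RFC 7946) recommendation; the function name and the simple linear-interpolation approach are illustrative assumptions, not part of any library:

```python
def split_at_antimeridian(p1, p2):
    """Split a [lon, lat] segment that crosses the 180th meridian into a
    GeoJSON MultiLineString whose parts do not cross the antimeridian.
    Assumes p1 has a longitude near +180 and p2 a longitude near -180."""
    lon1, lat1 = p1
    lon2, lat2 = p2
    lon2_unwrapped = lon2 + 360.0            # make the segment continuous
    t = (180.0 - lon1) / (lon2_unwrapped - lon1)   # fraction at the crossing
    lat_cross = lat1 + t * (lat2 - lat1)     # latitude where it crosses 180
    return {
        "type": "MultiLineString",
        "coordinates": [
            [[lon1, lat1], [180.0, lat_cross]],
            [[-180.0, lat_cross], [lon2, lat2]],
        ],
    }
```

A segment from [170, 10] to [-170, 20], for example, splits at the antimeridian at latitude 15.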
See also
179th meridian east
179th meridian west
Prime meridian
International Date Line
Notes
|
https://en.wikipedia.org/wiki/Code%20signing
|
Code signing is the process of digitally signing executables and scripts to confirm the software author and guarantee that the code has not been altered or corrupted since it was signed. The process employs the use of a cryptographic hash to validate authenticity and integrity. Code signing was invented in 1995 by Michael Doyle, as part of the Eolas WebWish browser plug-in, which enabled the use of public-key cryptography to sign downloadable Web app program code using a secret key, so the plug-in code interpreter could then use the corresponding public key to authenticate the code before allowing it access to the code interpreter’s APIs.
Code signing can provide several valuable features. The most common use of code signing is to provide security when deploying; in some programming languages, it can also be used to help prevent namespace conflicts. Almost every code signing implementation will provide some sort of digital signature mechanism to verify the identity of the author or build system, and a checksum to verify that the object has not been modified. It can also be used to provide versioning information about an object or to store other metadata about an object.
The efficacy of code signing as an authentication mechanism for software depends on the security of underpinning signing keys. As with other public key infrastructure (PKI) technologies, the integrity of the system relies on publishers securing their private keys against unauthorized access. Keys stored in software on general-purpose computers are susceptible to compromise. Therefore, it is more secure, and best practice, to store keys in secure, tamper-proof, cryptographic hardware devices known as hardware security modules or HSMs.
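The sign-then-verify flow can be sketched with the pyca/cryptography library's Ed25519 primitives (the choice of library and algorithm here is illustrative; real code-signing systems typically sign via a certificate chain and, as noted above, keep the private key in an HSM rather than in memory):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

code = b"print('hello world')"              # the executable/script bytes
private_key = Ed25519PrivateKey.generate()  # in practice: generated inside an HSM
signature = private_key.sign(code)          # publisher signs the code

public_key = private_key.public_key()       # distributed with the publisher's certificate
public_key.verify(signature, code)          # raises InvalidSignature if it fails

# Any modification to the signed object invalidates the signature:
try:
    public_key.verify(signature, code + b" # tampered")
    tampered_ok = True
except InvalidSignature:
    tampered_ok = False
assert tampered_ok is False
```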
Providing security
Many code signing implementations will provide a way to sign the code using a system involving a pair of keys, one public and one private, similar to the process employed by TLS or SSH. For example, in the case of .NET, the developer uses a priv
|
https://en.wikipedia.org/wiki/SnRNP
|
snRNPs (pronounced "snurps"), or small nuclear ribonucleoproteins, are RNA-protein complexes that combine with unmodified pre-mRNA and various other proteins to form a spliceosome, a large RNA-protein molecular complex upon which splicing of pre-mRNA occurs. The action of snRNPs is essential to the removal of introns from pre-mRNA, a critical aspect of post-transcriptional modification of RNA, occurring only in the nucleus of eukaryotic cells.
Additionally, U7 snRNP is not involved in splicing at all, as U7 snRNP is responsible for processing the 3′ stem-loop of histone pre-mRNA.
The two essential components of snRNPs are protein molecules and RNA. The RNA found within each snRNP particle is known as small nuclear RNA, or snRNA, and is usually about 150 nucleotides in length. The snRNA component of the snRNP gives specificity to individual introns by "recognizing" the sequences of critical splicing signals at the 5' and 3' ends and branch site of introns. The snRNA in snRNPs is similar to ribosomal RNA in that it directly incorporates both an enzymatic and a structural role.
SnRNPs were discovered by Michael R. Lerner and Joan A. Steitz.
Thomas R. Cech and Sidney Altman also played a role in the discovery, winning the Nobel Prize in Chemistry in 1989 for their independent discoveries that RNA can act as a catalyst in living cells.
Types
At least five different kinds of snRNPs join the spliceosome to participate in splicing. They can be visualized by gel electrophoresis and are known individually as: U1, U2, U4, U5, and U6. Their snRNA components are known, respectively, as: U1 snRNA, U2 snRNA, U4 snRNA, U5 snRNA, and U6 snRNA.
In the mid-1990s, it was discovered that a variant class of snRNPs exists to help in the splicing of a class of introns found only in metazoans, with highly conserved 5' splice sites and branch sites. This variant class of snRNPs includes: U11 snRNA, U12 snRNA, U4atac snRNA, and U6atac snRNA. While different, they perform the same fu
|
https://en.wikipedia.org/wiki/Cancelbot
|
A cancelbot is an automated or semi-automated process for sending out third-party cancel messages over Usenet, commonly as a stopgap measure to combat spam.
History
One of the earliest uses of a cancelbot was by microbiology professor Richard DePew, to remove anonymous postings in science newsgroups. Perhaps the most well known early cancelbot was used in June 1994 by Arnt Gulbrandsen within minutes of the first post of Canter & Siegel's second spam wave, as it was created in response to their "Green Card spam" in April 1994. Usenet spammers have alleged that cancelbots are a tool of the mythical Usenet cabal.
Rationale
Cancelbots must follow community consensus in order to serve a useful purpose. Historically, technical criteria have been the only acceptable grounds for determining whether messages are cancelable, and only a few active cancellers have ever obtained the broad community support needed to be effective.
Pseudosites are referenced in cancel headers by legitimate cancelbots to identify the criteria on which a message is being canceled, allowing administrators of Usenet sites to determine via standard "aliasing" mechanisms which criteria that they will accept third-party cancels for.
Currently, the generally accepted criteria (and associated pseudosites) are:
By general convention, special values are given in X-Canceled-By, Message-Id and Path headers when performing third-party cancels. This allows administrators to decide which reasons for third-party cancellation are acceptable for their site:
The $alz convention states that the Message-Id: header used for a third-party cancel should always be the original Message-Id: with "cancel." prepended.
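A sketch of how a cancelbot might assemble headers under these conventions (the helper name, pseudosite, and field values are illustrative):

```python
def make_cancel_headers(original_message_id, newsgroup, pseudosite, contact):
    """Assemble headers for a third-party Usenet cancel following the
    conventions above: the $alz Message-ID prefix, a pseudosite in the Path
    so administrators can alias out unwanted cancels, and a monitored
    contact address in X-Canceled-By."""
    assert original_message_id.startswith("<")
    # $alz convention: the cancel's Message-ID is the original Message-ID
    # with "cancel." prepended inside the angle brackets.
    cancel_id = "<cancel." + original_message_id[1:]
    return {
        "Control": "cancel " + original_message_id,
        "Message-ID": cancel_id,
        "Newsgroups": newsgroup,
        "Path": pseudosite + "!not-for-mail",
        "X-Canceled-By": contact,
    }

headers = make_cancel_headers("<abc123@example.com>", "misc.test",
                              "spamcancel", "cancelbot-admin@example.org")
print(headers["Message-ID"])  # <cancel.abc123@example.com>
```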
The X-Canceled-By: convention states that the operator of a cancelbot should provide a consistent, valid, and actively monitored contact email address for their cancelbot in the X-Canceled-By: header, both to identify the canceler, and to provide a point of contact in case something goes wrong or questions
|
https://en.wikipedia.org/wiki/Banana%20passionfruit
|
Banana passionfruit (Passiflora supersect. Tacsonia), also known as taxo and curuba, is a group of around 64 Passiflora species found in South America. Most species in this section are found in high elevation cloud forest habitats. Flowers have a cylindrical hypanthium.
Species
Invasive species
P. tarminiana and P. tripartita thrive in the climate of New Zealand. They are invasive species since they can smother forest margins and forest regrowth. It is illegal to sell, cultivate and distribute the plants.
Banana passionfruit vines are now smothering more than of native forest on the islands of Hawaii and Kauai. Seeds are spread by feral pigs, birds and humans. The vine can also be found all across the highlands of New Guinea and Tasmania.
|
https://en.wikipedia.org/wiki/WinFiles
|
WinFiles, formerly Windows95.com, was an Internet download directory website. Originally, it was founded by Steve Jenkins in 1994.
CNET buyout
On February 24, 1999, CNET agreed to pay WinFiles owner Jenesys LLC US$5.75 million, with an additional $5.75 million payment 18 months after the closing of the deal, for a total of $11.5 million.
|
https://en.wikipedia.org/wiki/%CE%91-Pinene
|
α-Pinene is an organic compound of the terpene class. It is one of the two isomers of pinene, the other being β-pinene. An alkene, it contains a reactive four-membered ring. It is found in the oils of many species of many coniferous trees, notably the pine. It is also found in the essential oil of rosemary (Rosmarinus officinalis) and Satureja myrtifolia (also known as Zoufa in some regions). Both enantiomers are known in nature; (1S,5S)- or (−)-α-pinene is more common in European pines, whereas the (1R,5R)- or (+)-α-isomer is more common in North America. The enantiomers' racemic mixture is present in some oils such as eucalyptus oil and orange peel oil.
Reactivity
Commercially important derivatives of α-pinene are linalool, geraniol, nerol, α-terpineol, and camphene.
α-Pinene 1 is reactive owing to the presence of the four-membered ring adjacent to the alkene. The compound is prone to skeletal rearrangements such as the Wagner–Meerwein rearrangement. Acids typically lead to rearranged products. With concentrated sulfuric acid and ethanol the major products are terpineol 2 and its ethyl ether 3, while glacial acetic acid gives the corresponding acetate 4. With dilute acids, terpin hydrate 5 becomes the major product.
With one molar equivalent of anhydrous HCl, the simple addition product 6a can be formed at low temperature in the presence of diethyl ether, but it is very unstable. At normal temperatures, or if no ether is present, the major product is bornyl chloride 6b, along with a small amount of fenchyl chloride 6c. For many years 6b (also called "artificial camphor") was referred to as "pinene hydrochloride", until it was confirmed as identical with bornyl chloride made from camphene. If more HCl is used, achiral 7 (dipentene hydrochloride) is the major product along with some 6b. Nitrosyl chloride followed by base leads to the oxime 8 which can be reduced to "pinylamine" 9. Both 8 and 9 are stable compounds containing an intact four-membered ring, a
|
https://en.wikipedia.org/wiki/DELTA%20%28taxonomy%29
|
DELTA (DEscription Language for TAxonomy) is a data format used in taxonomy for recording descriptions of living things. It is designed for computer processing, allowing the generation of identification keys, diagnosis, etc.
It is widely accepted as a standard and many programs using this format are available for various taxonomic tasks.
It was devised by the CSIRO Australian Division of Entomology from 1971 to 2000, with a notable part played by Dr. Michael J. Dallwitz. More recently, the Atlas of Living Australia (ALA) rewrote the DELTA software in Java so it can run in a Java environment and across multiple operating systems. The software package can now be found at and downloaded from the ALA site.
DELTA System
The DELTA System is a group of integrated programs built on the DELTA format. The main program is the DELTA Editor, which provides an interface for creating a matrix of characters for any number of taxa. A whole suite of programs can be found and run from within the DELTA Editor, which allow for the output of an interactive identification key, called Intkey. Other powerful features include the output of natural language descriptions, full diagnoses, and differences among taxa.
|
https://en.wikipedia.org/wiki/Nuclear%20pumped%20laser
|
A nuclear pumped laser is a laser pumped with the energy of fission fragments. The lasing medium is enclosed in a tube lined with uranium-235 and subjected to high neutron flux in a nuclear reactor core. The fission fragments of the uranium create an excited plasma with an inverse population of energy levels, which then lases. Other methods, e.g. the ³He–Ar laser, can use the ³He(n,p)³H reaction (the transmutation of helium-3 in a neutron flux) as the energy source, or can employ the energy of the alpha particles.
This technology may achieve high excitation rates with small laser volumes.
Some example lasing media:
carbon dioxide
helium-3–argon
helium-3–krypton
helium-3–xenon
Development
Research in nuclear pumped lasers started in the early 1970s when researchers were unable to produce a laser with a wavelength shorter than 110 nm with the end goal of creating an x-ray laser. When laser wavelengths become that short, the laser requires a huge amount of energy which must also be delivered in an extremely short period of time. In 1975 it was estimated, by George Chapline and Lowell Wood from the Lawrence Livermore National Laboratory, that "pumping a 10-keV (0.12-nm) laser would require around a watt per atom" in a pulse that was "10⁻¹⁵ seconds × the square of the wavelength in angstroms." As this problem was unsolvable with the materials at hand and a laser oscillator was not working, research moved to creating pumps that used excited plasma. Early attempts used high-powered lasers to excite the plasma to create an even more highly powered laser. Results using this method were unsatisfying, and fell short of the goal. Livermore scientists first suggested using a nuclear reaction as a power source in 1975. By 1980 Livermore considered both nuclear bombs and nuclear reactors as viable energy sources for an x-ray laser.
On November 14, 1980, the first successful test of the bomb-powered x-ray laser was conducted. The use of a bomb was initially supported over that
|
https://en.wikipedia.org/wiki/The%20World%20%28Internet%20service%20provider%29
|
The World is an Internet service provider originally headquartered in Brookline, Massachusetts. It was the first commercial ISP in the world that provided a direct connection to the internet, with its first customer logging on in November 1989.
Controversy
Many government and university installations blocked, threatened to block, or attempted to shut down The World's Internet connection until Software Tool & Die was eventually granted permission by the National Science Foundation to provide public Internet access on "an experimental basis."
Domain name history
The World is operated by Software Tool & Die. The site and services were initially hosted solely under the domain name world.std.com which continues to function to this day.
Sometime in or before 1994, the domain name world.com had been purchased by Software Tool & Die and used as The World's primary domain name. In 2000, STD gave up ownership of world.com and is no longer associated with it.
In 1999, STD obtained the domain name theworld.com, promoting the PascalCase version TheWorld.com as the primary domain name of The World.
Services
The World still offers text-based dial-up and PPP dial-up, with over 9000 telephone access numbers throughout Canada, the United States, and Puerto Rico. Other features include shell access, with many historically standard shell features and utilities still offered. Additional user services include Usenet feed, personal web space, mailing lists, and email aliases. As of 2012, there were approximately 1750 active users.
More recent features include domain name hosting and complete website hosting.
Community
The World offers a community Usenet hierarchy, wstd.*, which is accessible only to users of The World. There are over 60 newsgroups in this hierarchy. The World users may send each other Memos (password protected messaging) and access a list of all personal customer websites.
Much of The World's website and associated functionality was designed and built by James
|
https://en.wikipedia.org/wiki/Neutron%20temperature
|
The neutron detection temperature, also called the neutron energy, indicates a free neutron's kinetic energy, usually given in electron volts. The term temperature is used, since hot, thermal and cold neutrons are moderated in a medium with a certain temperature. The neutron energy distribution is then adapted to the Maxwell distribution known for thermal motion. Qualitatively, the higher the temperature, the higher the kinetic energy of the free neutrons. The momentum and wavelength of the neutron are related through the de Broglie relation. The long wavelength of slow neutrons allows for their large interaction cross sections.
Neutron energy distribution ranges
Different ranges with different names are observed in different sources.
The following is a detailed classification:
Thermal
A thermal neutron is a free neutron with a kinetic energy of about 0.025 eV (about 4.0×10⁻²¹ J or 2.4 MJ/kg, hence a speed of 2.19 km/s), which is the energy corresponding to the most probable speed at a temperature of 290 K (17 °C or 62 °F), the mode of the Maxwell–Boltzmann distribution for this temperature (Epeak = kT).
After a number of collisions with nuclei (scattering) in a medium (neutron moderator) at this temperature, those neutrons which are not absorbed reach about this energy level.
Thermal neutrons have a different and sometimes much larger effective neutron absorption cross-section for a given nuclide than fast neutrons, and can therefore often be absorbed more easily by an atomic nucleus, creating a heavier, often unstable isotope of the chemical element as a result. This event is called neutron activation.
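The figures quoted above follow directly from E = ½mv² and the de Broglie relation; a quick sanity check (constants are CODATA values):

```python
import math

eV = 1.602176634e-19        # J per electronvolt
m_n = 1.67492749804e-27     # neutron mass, kg
h = 6.62607015e-34          # Planck constant, J*s

E = 0.025 * eV              # thermal kinetic energy, J
v = math.sqrt(2 * E / m_n)  # speed from E = (1/2) m v^2
wavelength = h / (m_n * v)  # de Broglie relation

print(v)           # ~2190 m/s, i.e. about 2.19 km/s
print(wavelength)  # ~1.8e-10 m: slow neutrons have long wavelengths
```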
Epithermal
Neutrons of energy greater than thermal
Greater than 0.025 eV
Cadmium
Neutrons which are strongly absorbed by cadmium
Less than 0.5 eV.
Epicadmium
Neutrons which are not strongly absorbed by cadmium
Greater than 0.5 eV.
Cold (slow) neutrons
Neutrons of lower (much lower) energy than thermal neutrons.
Less than 5 meV.
Cold (slow) neutrons are subclassif
|
https://en.wikipedia.org/wiki/Edward%20Newton
|
Sir Edward Newton (10 November 1832 – 25 April 1897) was a British colonial administrator and ornithologist.
He was born at Elveden Hall, Suffolk, the sixth and youngest son of William Newton, MP. He was the brother of ornithologist Alfred Newton. He graduated from Magdalene College, Cambridge in 1857 and was one of the twenty founding members of the British Ornithologists' Union.
Newton was the Colonial Secretary for Mauritius from 1859 to 1877. From there he sent his brother a number of specimens, including the dodo and the Rodrigues solitaire, both already extinct.
In 1878, Newton initiated the first laws anywhere specifically designed to protect indigenous land birds from persecution.
Edward was later Colonial Secretary and Lieutenant-Governor of Jamaica (1877–1883). He married Mary Louisa Cranstoun, daughter of W.W.R. Kerr in 1869. She died the following year.
He is commemorated in the binomial of the Malagasy kestrel, Falco newtoni.
Phelsuma edwardnewtoni, a species of gecko, is named in his honour.
Bibliography
(with five plates)
|
https://en.wikipedia.org/wiki/Software%20design%20description
|
A software design description (a.k.a. software design document or SDD; just design document; also Software Design Specification) is a representation of a software design that is to be used for recording design information, addressing various design concerns, and communicating that information to the design’s stakeholders. An SDD usually accompanies an architecture diagram with pointers to detailed feature specifications of smaller pieces of the design. Practically, the description is required to coordinate a large team under a single vision, needs to be a stable reference, and outline all parts of the software and how they will work.
Composition
The SDD usually contains the following information:
The data design describes structures that reside within the software. Attributes and relationships between data objects dictate the choice of data structures.
The architecture design uses information flowing characteristics, and maps them into the program structure. The transformation mapping method is applied to exhibit distinct boundaries between incoming and outgoing data. The data flow diagrams allocate control input, processing and output along three separate modules.
The interface design describes internal and external program interfaces, as well as the design of the human interface. Internal and external interface designs are based on the information obtained from the analysis model.
The procedural design describes structured programming concepts using graphical, tabular and textual notations.
These design media enable the designer to represent procedural detail that facilitates translation to code. This blueprint for implementation forms the basis for all subsequent software engineering work.
IEEE 1016
IEEE 1016-2009, titled IEEE Standard for Information Technology—Systems Design—Software Design Descriptions, is an IEEE standard that specifies "the required information content and organization" for an SDD. IEEE 1016 does not specify the medium of an
|
https://en.wikipedia.org/wiki/Stacking-fault%20energy
|
The stacking-fault energy (SFE) is a materials property on a very small scale. It is denoted γSFE and has units of energy per unit area.
A stacking fault is an interruption of the normal stacking sequence of atomic planes in a close-packed crystal structure. These interruptions carry a certain stacking-fault energy. The width of a stacking fault is a consequence of the balance between the repulsive force between two partial dislocations on the one hand and the attractive force due to the surface tension of the stacking fault on the other. The equilibrium width is thus partially determined by the stacking-fault energy. When the SFE is high, the dissociation of a full dislocation into two partials is energetically unfavorable, and the material can deform either by dislocation glide or by cross-slip. Lower-SFE materials display wider stacking faults, making cross-slip more difficult.
The SFE modifies the ability of a dislocation in a crystal to glide onto an intersecting slip plane. When the SFE is low, the mobility of dislocations in a material decreases.
Stacking faults and stacking fault energy
A stacking fault is an irregularity in the planar stacking sequence of atoms in a crystal – in FCC metals the normal stacking sequence is ABCABC etc., but if a stacking fault is introduced it may introduce an irregularity such as ABCBCABC into the normal stacking sequence. These irregularities carry a certain energy which is called stacking-fault energy.
Influences on stacking fault energy
Stacking fault energy is heavily influenced by a few major factors, specifically base metal, alloying metals, percent of alloy metals, and valence-electron to atom ratio.
Alloying elements effects on SFE
It has long been established that the addition of alloying elements significantly lowers the SFE of most metals. Which element and how much is added dramatically affects the SFE of a material. The figures on the right show how the SFE of copper lowers with the addition of two different all
|
https://en.wikipedia.org/wiki/Semiochemical
|
A semiochemical, from the Greek σημεῖον (semeion), meaning "signal", is a chemical substance or mixture released by an organism that affects the behaviors of other individuals. Semiochemical communication can be divided into two broad classes: communication between individuals of the same species (intraspecific) or communication between different species (interspecific).
It is usually used in the field of chemical ecology to encompass pheromones, allomones, kairomones, attractants and repellents.
Many insects, including parasitic insects, use semiochemicals. Pheromones are intraspecific signals that aid in finding mates, food and habitat resources, warning of enemies, and avoiding competition. Interspecific signals known as allomones and kairomones have similar functions.
In nature
Pheromone
A pheromone (from Greek phero "to bear" + hormone from Greek – "impetus") is a secreted or excreted chemical factor that triggers a social response in members of the same species. Pheromones are chemicals capable of acting outside the body of the secreting individual to impact the behavior of the receiving individual. There are alarm pheromones, food trail pheromones, sex pheromones, and many others that affect behavior or physiology. Their use among insects has been particularly well documented. In addition, some vertebrates and plants communicate by using pheromones. A notable example of pheromone usage to indicate sexual receptivity in insects can be seen in the female Dawson's burrowing bee, which uses a particular mixture of cuticular hydrocarbons to signal sexual receptivity to mating, and then another mixture to indicate sexual disinterest. These hydrocarbons, in association with other chemical signals produced in the Dufour's gland, have been implicated in male repulsion signaling as well.
The term "pheromone" was introduced by Peter Karlson and Martin Lüscher in 1959, based on the Greek word pherein (to transport) and hormone (to stimulate). They are also sometim
|
https://en.wikipedia.org/wiki/Type%20enforcement
|
The concept of type enforcement (TE), in the field of information technology, is an access control mechanism for regulating access in computer systems. Implementing TE gives priority to mandatory access control (MAC) over discretionary access control (DAC). Access clearance is first given to a subject (e.g. a process) accessing objects (e.g. files, records, messages) based on rules defined in an attached security context. A security context in a domain is defined by a domain security policy. In SELinux, which is implemented as a Linux security module (LSM), the security context is an extended attribute. Type enforcement implementation is a prerequisite for MAC, and a first step before multilevel security (MLS) or its replacement, multi-category security (MCS). It is a complement of role-based access control (RBAC).
Control
Type enforcement implies fine-grained control over the operating system, not only to have control over process execution, but also over domain transition or authorization scheme. This is why it is best implemented as a kernel module, as is the case with SELinux. Using type enforcement is a way to implement the FLASK architecture.
Access
Using type enforcement, users may (as in Microsoft Active Directory) or may not (as in SELinux) be associated with a Kerberos realm, although the original type enforcement model implies that they are. It is always necessary to define a TE access matrix containing rules about the clearance granted to a given security context, or the rights of a subject over objects according to an authorization scheme.
Security
Practically, type enforcement evaluates a set of rules from the source security context of a subject, against a set of rules from the target security context of the object. A clearance decision occurs depending on the TE access description (matrix). Then, DAC or other access control mechanisms (MLS / MCS, ...) apply.
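As an illustration of one entry in such an access matrix, a single SELinux-style TE rule might look like the following (the type names are the stock web-server ones from the SELinux reference policy; this is a sketch of the idea, not a complete policy):

```
# The subject domain httpd_t may read files labeled with the target type
# httpd_sys_content_t; any access not explicitly allowed is denied.
allow httpd_t httpd_sys_content_t:file { read getattr open };
```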
History
Type enforcement was introduced in the Secure Ada Target architecture in the late 1980s with a full implementation de
|
https://en.wikipedia.org/wiki/Regular%20grid
|
A regular grid is a tessellation of n-dimensional Euclidean space by congruent parallelotopes (e.g. bricks).
Its opposite is irregular grid.
Grids of this type appear on graph paper and may be used in finite element analysis, finite volume methods, finite difference methods, and in general for discretization of parameter spaces. Since the derivatives of field variables can be conveniently expressed as finite differences, structured grids mainly appear in finite difference methods. Unstructured grids offer more flexibility than structured grids and hence are very useful in finite element and finite volume methods.
Each cell in the grid can be addressed by index (i, j) in two dimensions or (i, j, k) in three dimensions, and each vertex has coordinates (i·dx, j·dy) in 2D or (i·dx, j·dy, k·dz) in 3D for some real numbers dx, dy, and dz representing the grid spacing.
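The index-to-coordinate mapping can be written down directly (a trivial sketch, with dx, dy, dz as in the text; the function name is illustrative):

```python
def vertex_coords(i, j, k, dx=1.0, dy=1.0, dz=1.0):
    """Map a regular-grid index (i, j, k) to its vertex coordinates.
    With unit spacings this reduces to the Cartesian-grid special case."""
    return (i * dx, j * dy, k * dz)

print(vertex_coords(2, 3, 1, dx=0.5, dy=0.5, dz=2.0))  # (1.0, 1.5, 2.0)
```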
Related grids
A Cartesian grid is a special case where the elements are unit squares or unit cubes, and the vertices are points on the integer lattice.
A rectilinear grid is a tessellation by rectangles or rectangular cuboids (also known as rectangular parallelepipeds) that are not, in general, all congruent to each other. The cells may still be indexed by integers as above, but the mapping from indexes to vertex coordinates is less uniform than in a regular grid. An example of a rectilinear grid that is not regular appears on logarithmic scale graph paper.
A skewed grid is a tessellation of parallelograms or parallelepipeds. (If the unit lengths are all equal, it is a tessellation of rhombi or rhombohedra.)
A curvilinear grid or structured grid is a grid with the same combinatorial structure as a regular grid, in which the cells are quadrilaterals or [general] cuboids, rather than rectangles or rectangular cuboids.
See also
|
https://en.wikipedia.org/wiki/Kairomone
|
A kairomone (a coinage using the Greek καιρός opportune moment, paralleling pheromone) is a semiochemical, emitted by an organism, which mediates interspecific interactions in a way that benefits an individual of another species which receives it and harms the emitter. This "eavesdropping" is often disadvantageous to the producer (though other benefits of producing the substance may outweigh this cost, hence its persistence over evolutionary time). The kairomone improves the fitness of the recipient and in this respect differs from an allomone (which is the opposite: it benefits the producer and harms the receiver) and a synomone (which benefits both parties). The term is mostly used in the field of entomology (the study of insects). Two main ecological cues are provided by kairomones; they generally either indicate a food source for the receiver, or the presence of a predator, the latter of which is less common or at least less studied.
Predators use them to find prey
An example of this can be found in the Ponderosa Pine tree (Pinus ponderosa), which produces a terpene called myrcene when it is damaged by the Western pine beetle. Instead of deterring the insect, it acts synergistically with aggregation pheromones which in turn act to lure more beetles to the tree.
Specialist predatory beetles find bark beetles (their prey) using the pheromones the bark beetles produce. In this case the chemical substance produced is both a pheromone (communication between bark beetles) and a kairomone (eavesdropping). This was discovered accidentally when the predatory beetles and other enemies were attracted to insect traps baited with bark beetle pheromones.
Pheromones of different kinds may be exploited as kairomones by receivers. The German wasp, Vespula germanica, is attracted to a pheromone produced by male Mediterranean fruit flies (Ceratitis capitata) when the males gather for a mating display, causing the death of some. In contrast, it is the alarm pheromone (used to c
|
https://en.wikipedia.org/wiki/Cocurvature
|
In mathematics in the branch of differential geometry, the cocurvature of a connection on a manifold is the obstruction to the integrability of the vertical bundle.
Definition
If M is a manifold and P is a connection on M, that is, a vector-valued 1-form on M which is a projection on TM such that P∘P = P (in components, Pab Pbc = Pac), then the cocurvature R̄_P is a vector-valued 2-form on M defined by

R̄_P(X, Y) = (Id − P)[PX, PY],
where X and Y are vector fields on M.
See also
Curvature
Lie bracket
Frölicher-Nijenhuis bracket
Differential geometry
Curvature (mathematics)
|
https://en.wikipedia.org/wiki/BLEU
|
BLEU (bilingual evaluation understudy) is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Quality is considered to be the correspondence between a machine's output and that of a human: "the closer a machine translation is to a professional human translation, the better it is" – this is the central idea behind BLEU. Invented at IBM in 2001, BLEU was one of the first metrics to claim a high correlation with human judgements of quality, and remains one of the most popular automated and inexpensive metrics.
Scores are calculated for individual translated segments—generally sentences—by comparing them with a set of good quality reference translations. Those scores are then averaged over the whole corpus to reach an estimate of the translation's overall quality. Intelligibility or grammatical correctness are not taken into account.
BLEU's output is always a number between 0 and 1. This value indicates how similar the candidate text is to the reference texts, with values closer to 1 representing more similar texts. Few human translations will attain a score of 1, since this would indicate that the candidate is identical to one of the reference translations. For this reason, it is not necessary to attain a score of 1. Because there are more opportunities to match, adding additional reference translations will increase the BLEU score.
Mathematical definition
Basic setup
A basic, first attempt at defining the BLEU score would take two arguments: a candidate string ŷ and a list of reference strings (y^(1), ..., y^(N)). The idea is that BLEU(ŷ; y^(1), ..., y^(N)) should be close to 1 when ŷ is similar to the references, and close to 0 if not.
As an analogy, the BLEU score is like a language teacher trying to score the quality of a student translation ŷ by checking how closely it follows the reference answers y^(1), ..., y^(N).
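As a concrete, simplified sketch of the clipped n-gram counting that underlies BLEU (function names and tokenized inputs are ours; this omits BLEU's geometric mean over n and the brevity penalty):

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, references, n):
    """Clipped (modified) n-gram precision, the building block of BLEU.

    Each candidate n-gram counts at most as many times as it appears in
    any single reference, which stops degenerate candidates such as
    "the the the" from scoring perfectly.
    """
    cand_counts = Counter(ngrams(candidate, n))
    max_ref_counts = Counter()
    for ref in references:
        for gram, count in Counter(ngrams(ref, n)).items():
            max_ref_counts[gram] = max(max_ref_counts[gram], count)
    clipped = sum(min(count, max_ref_counts[gram])
                  for gram, count in cand_counts.items())
    total = sum(cand_counts.values())
    return clipped / total if total else 0.0
```

Full BLEU combines these precisions for n = 1 through 4 by a geometric mean and multiplies by a brevity penalty for candidates shorter than the references.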
Since in natural language processing, one should evaluate a large set of candidate strings, one must generalize the BLEU score to the case where one has
|
https://en.wikipedia.org/wiki/Eisenstein%20ideal
|
In mathematics, the Eisenstein ideal is an ideal in the endomorphism ring of the Jacobian variety of a modular curve, consisting roughly of elements of the Hecke algebra of Hecke operators that annihilate the Eisenstein series. It was introduced by , in studying the rational points of modular curves. An Eisenstein prime is a prime in the support of the Eisenstein ideal (this has nothing to do with primes in the Eisenstein integers).
Definition
Let N be a rational prime, and define
J0(N) = J
as the Jacobian variety of the modular curve
X0(N) = X.
There are endomorphisms Tl of J for each prime number l not dividing N. These come from the Hecke operator, considered first as an algebraic correspondence on X, and from there as acting on divisor classes, which gives the action on J. There is also a Fricke involution w (and Atkin–Lehner involutions if N is composite). The Eisenstein ideal, in the (unital) subring of End(J) generated as a ring by the Tl, is generated as an ideal by the elements
Tl − l − 1
for all l not dividing N, and by
w + 1.
Geometric definition
Suppose that T* is the ring generated by the Hecke operators acting on all modular forms for Γ0(N) (not just the cusp forms). The ring T of Hecke operators on the cusp forms is a quotient of T*, so Spec(T) can be viewed as a subscheme of Spec(T*). Similarly Spec(T*) contains a line (called the Eisenstein line) isomorphic to Spec(Z) coming from the action of Hecke operators on the Eisenstein series. The Eisenstein ideal is the ideal defining the intersection of the Eisenstein line with Spec(T) in Spec(T*).
Example
The Eisenstein ideal can also be defined for higher weight modular forms. Suppose that T is the full Hecke algebra generated by Hecke operators Tn acting on the 2-dimensional space of modular forms of level 1 and weight 12. This space is spanned by the eigenforms given by the Eisenstein series E12 and the modular discriminant Δ. The map taking a Hecke operator Tn to its eige
|
https://en.wikipedia.org/wiki/Rate%20function
|
In mathematics — specifically, in large deviations theory — a rate function is a function used to quantify the probabilities of rare events. Such functions are used to formulate a large deviation principle, which quantifies the asymptotic probability of rare events for a sequence of probability measures.
A rate function is also called a Cramér function, after the Swedish probabilist Harald Cramér.
Definitions
Rate function An extended real-valued function I : X → [0, +∞] defined on a Hausdorff topological space X is said to be a rate function if it is not identically +∞ and is lower semi-continuous, i.e. all the sub-level sets {x ∈ X | I(x) ≤ c}, c ≥ 0, are closed in X.
If, furthermore, they are compact, then I is said to be a good rate function.
A family of probability measures (μδ)δ > 0 on X is said to satisfy the large deviation principle with rate function I : X → [0, +∞) (and rate 1 ⁄ δ) if, for every closed set F ⊆ X and every open set G ⊆ X,

(U) limsup_{δ → 0} δ log μδ(F) ≤ −inf_{x ∈ F} I(x), and
(L) liminf_{δ → 0} δ log μδ(G) ≥ −inf_{x ∈ G} I(x).
If the upper bound (U) holds only for compact (instead of closed) sets F, then (μδ)δ>0 is said to satisfy the weak large deviations principle (with rate 1 ⁄ δ and weak rate function I).
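A classic illustration (not from the excerpt above, but standard in the large-deviations literature) is the Gaussian family on the real line:

```latex
% Gaussian family: \mu_\delta = \mathcal{N}(0,\delta) on X = \mathbb{R}
% satisfies the large deviation principle with rate 1/\delta and good
% rate function
I(x) = \frac{x^2}{2},
% i.e. for every closed set F and every open set G,
\limsup_{\delta \downarrow 0} \delta \log \mu_\delta(F)
  \le -\inf_{x \in F} \frac{x^2}{2},
\qquad
\liminf_{\delta \downarrow 0} \delta \log \mu_\delta(G)
  \ge -\inf_{x \in G} \frac{x^2}{2}.
```

The sub-level sets of x²/2 are compact intervals, so this is a good rate function in the sense above.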
Remarks
The role of the open and closed sets in the large deviation principle is similar to their role in the weak convergence of probability measures: recall that (μδ)δ > 0 is said to converge weakly to μ if, for every closed set F ⊆ X and every open set G ⊆ X, limsup_{δ → 0} μδ(F) ≤ μ(F) and liminf_{δ → 0} μδ(G) ≥ μ(G).
There is some variation in the nomenclature used in the literature: for example, den Hollander (2000) uses simply "rate function" where this article — following Dembo & Zeitouni (1998) — uses "good rate function", and "weak rate function". Regardless of the nomenclature used for rate functions, examination of whether the upper bound inequality (U) is supposed to hold for closed or compact sets tells one whether the large deviation principle in use is strong or weak.
Properties
Uniqueness
A natural question to ask, given the somewhat abstract setting of the general framework above, i
|
https://en.wikipedia.org/wiki/Large%20deviations%20theory
|
In probability theory, the theory of large deviations concerns the asymptotic behaviour of remote tails of sequences of probability distributions. While some basic ideas of the theory can be traced to Laplace, the formalization started with insurance mathematics, namely ruin theory with Cramér and Lundberg. A unified formalization of large deviation theory was developed in 1966, in a paper by Varadhan. Large deviations theory formalizes the heuristic ideas of concentration of measures and widely generalizes the notion of convergence of probability measures.
Roughly speaking, large deviations theory concerns itself with the exponential decline of the probability measures of certain kinds of extreme or tail events.
Introductory examples
An elementary example
Consider a sequence of independent tosses of a fair coin. The possible outcomes could be heads or tails. Let us denote the possible outcome of the i-th trial by X_i, where we encode head as 1 and tail as 0. Now let M_N denote the mean value after N trials, namely M_N = (X_1 + ... + X_N)/N.
Then M_N lies between 0 and 1. From the law of large numbers it follows that as N grows, the distribution of M_N converges to 1/2 (the expected value of a single coin toss).
Moreover, by the central limit theorem, it follows that M_N is approximately normally distributed for large N. The central limit theorem can provide more detailed information about the behavior of M_N than the law of large numbers. For example, we can approximately find a tail probability of M_N (the probability that M_N is greater than x) for a fixed value of N. However, the approximation by the central limit theorem may not be accurate if x is far from 1/2 unless N is sufficiently large. Also, it does not provide information about the convergence of the tail probabilities as N → ∞. However, large deviation theory can provide answers for such problems.
Let us make this statement more precise. For a given value 1/2 < x < 1, let us compute the tail probability P(M_N > x). Define

I(x) = x ln x + (1 − x) ln(1 − x) + ln 2.

Note that the function I(x) is a convex, nonnegative function that is zero at x = 1/2 and increases as x approaches 1.
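A small Monte Carlo sketch of this exponential decay (the setup and names are ours): for fair coins, the tail probability P(M_N > x) shrinks roughly like exp(−N·I(x)), where I is the Cramér rate function I(x) = ln 2 + x ln x + (1 − x) ln(1 − x):

```python
import math
import random

def rate(x):
    """Cramér rate function for the mean of fair-coin tosses:
    I(x) = ln 2 + x ln x + (1 - x) ln(1 - x), zero at x = 1/2."""
    return math.log(2) + x * math.log(x) + (1 - x) * math.log(1 - x)

def tail_prob(N, x, trials=20000, seed=0):
    """Monte Carlo estimate of P(M_N > x) for N fair coin tosses."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() < 0.5 for _ in range(N)) / N
        if mean > x:
            hits += 1
    return hits / trials
```

For x = 0.6, rate(x) ≈ 0.0201, so each additional toss multiplies the tail bound by roughly exp(−0.0201); the estimated probability drops visibly as N grows.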
|
https://en.wikipedia.org/wiki/EMBO%20Reports
|
EMBO Reports is a peer-reviewed scientific journal covering research related to biology at a molecular level. It publishes primary research papers, reviews, essays, and opinion. It also features commentaries on the social impact of advances in the life sciences and the converse influence of society on science. A sister journal to The EMBO Journal, EMBO Reports was established in 2000 and was published on behalf of the European Molecular Biology Organization by Nature Publishing Group from 2003. It is now published by EMBO Press.
External links
Molecular biology
Molecular and cellular biology journals
Monthly journals
English-language journals
Academic journals established in 2000
European Molecular Biology Organization academic journals
|
https://en.wikipedia.org/wiki/Calcium%20benzoate
|
Calcium benzoate refers to the calcium salt of benzoic acid. When used in the food industry as a preservative, its E number is E213 (INS number 213); it is approved for use as a food additive in the EU, USA and Australia and New Zealand.
The formulas and structures of carboxylate derivatives of calcium and related metals are complex. Generally the coordination number is eight and the carboxylates form Ca–O bonds. Another variable is the degree of hydration.
Uses
It is preferred over other benzoic acid salts and their esters in foods owing to its better solubility and its safety for humans.
It is used in soft drinks, fruit juice, concentrates, soy milk, soy sauce and vinegar.
It is the most widely used preservative in making bread and other bakery products.
It is used as a preservative in sprayable, water-based insecticidal compositions, as well as in the preparation of directly sprayable calcium formate fertilizers.
It is used as a preservative in mouthwash compositions.
It is used in water hardness reducers.
See also
Sodium benzoate
Potassium benzoate
|
https://en.wikipedia.org/wiki/Normalized%20difference%20vegetation%20index
|
The normalized difference vegetation index (NDVI) is a widely-used metric for quantifying the health and density of vegetation using sensor data. It is calculated from spectrometric data at two specific bands: red and near-infrared. The spectrometric data is usually sourced from remote sensors, such as satellites.
The metric is popular in industry because of its accuracy. It has a high correlation with the true state of vegetation on the ground. The index is easy to interpret: NDVI will be a value between -1 and 1. An area with nothing growing in it will have an NDVI of zero. NDVI will increase in proportion to vegetation growth. An area with dense, healthy vegetation will have an NDVI of one. NDVI values less than 0 suggest a lack of dry land. An ocean will yield an NDVI of -1.
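The index itself is the normalized difference of the two bands, NDVI = (NIR − Red) / (NIR + Red). A minimal sketch (the function name and the zero-denominator convention are ours):

```python
def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), bounded in [-1, 1].

    nir, red: reflectance values in the near-infrared and red bands.
    Returns 0.0 when both bands are zero (our convention for empty pixels).
    """
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)
```

Healthy vegetation reflects strongly in the near-infrared and absorbs red light, pushing the ratio toward 1; water reflects more red than near-infrared, pushing it toward -1.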
Brief history
The exploration of outer space started in earnest with the launch of Sputnik 1 by the Soviet Union on 4 October 1957. This was the first man-made satellite orbiting the Earth. Subsequent successful launches, both in the Soviet Union (e.g., the Sputnik and Cosmos programs), and in the U.S. (e.g., the Explorer program), quickly led to the design and operation of dedicated meteorological satellites. These are orbiting platforms carrying instruments specially designed to observe the Earth's atmosphere and surface with a view to improving weather forecasting. Starting in 1960, the TIROS series of satellites carried television cameras and radiometers. This was later (1964 onwards) followed by the Nimbus satellites and the family of Advanced Very High Resolution Radiometer instruments on board the National Oceanic and Atmospheric Administration (NOAA) platforms. The latter measures the reflectance of the planet in red and near-infrared bands, as well as in the thermal infrared. In parallel, NASA developed the Earth Resources Technology Satellite (ERTS), which became the precursor to the Landsat program. These early sensors had minimal spectral resolution, but tended to include ba
|
https://en.wikipedia.org/wiki/Energeticism
|
Energeticism is the physical view that energy is the fundamental element in all physical change. It posits a specific ontology, or a philosophy of being, which holds that all things are ultimately composed of energy, and which is opposed to ontological idealism. Energeticism might be associated with the physicist and philosopher Ernst Mach, though his attitude to it is ambiguous. It was also propounded by the chemist Wilhelm Ostwald.
Energeticism is largely rejected today, in part due to its Aristotelian and metaphysical leanings and its rejection of the existence of a micro-world (such as the one that chemists or physicists have discovered). Ludwig Boltzmann and Max Planck posited that matter and energy are distinct from each other and, hence, that energy cannot itself be the fundamental unit of nature upon which all other units are based.
|
https://en.wikipedia.org/wiki/Calcium%20sorbate
|
Calcium sorbate is the calcium salt of sorbic acid. Calcium sorbate is a polyunsaturated fatty acid salt.
It is a commonly used food preservative; its E number is E203, but it is no longer allowed to be used in the European Union.
|
https://en.wikipedia.org/wiki/Bracket%20polynomial
|
In the mathematical field of knot theory, the bracket polynomial (also known as the Kauffman bracket) is a polynomial invariant of framed links. Although it is not an invariant of knots or links (as it is not invariant under type I Reidemeister moves), a suitably "normalized" version yields the famous knot invariant called the Jones polynomial. The bracket polynomial plays an important role in unifying the Jones polynomial with other quantum invariants. In particular, Kauffman's interpretation of the Jones polynomial allows generalization to invariants of 3-manifolds.
The bracket polynomial was discovered by Louis Kauffman in 1987.
Definition
The bracket polynomial of any (unoriented) link diagram D, denoted ⟨D⟩, is a polynomial in the variable A, characterized by the three rules:
⟨U⟩ = 1, where U is the standard diagram of the unknot
The pictures in the second rule represent brackets of the link diagrams which differ inside a disc as shown but are identical outside. The third rule means that adding a circle disjoint from the rest of the diagram multiplies the bracket of the remaining diagram by (−A² − A⁻²).
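Since the diagrammatic pictures in the second rule do not survive in plain text, here is the usual symbolic statement of the three rules, with the crossing and its two smoothings indicated schematically:

```latex
% Kauffman bracket skein rules (smoothing labels are descriptive stand-ins
% for the usual local pictures):
\langle \bigcirc \rangle = 1,
\qquad
\langle \text{crossing} \rangle
  = A \,\langle \text{A-smoothing} \rangle
  + A^{-1}\,\langle \text{B-smoothing} \rangle,
\qquad
\langle L \sqcup \bigcirc \rangle = (-A^{2} - A^{-2})\,\langle L \rangle .
```

Applying the second rule to every crossing resolves any diagram into a disjoint union of circles, from which the first and third rules compute the polynomial.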
Further reading
Louis H. Kauffman, State models and the Jones polynomial. Topology 26 (1987), no. 3, 395--407. (introduces the bracket polynomial)
External links
Knot theory
Polynomials
|
https://en.wikipedia.org/wiki/Michel%20parameters
|
The Michel parameters, usually denoted by ρ, η, ξ and δ, are four parameters used in describing the phase space distribution of leptonic decays of charged leptons. They are named after the physicist Louis Michel. Sometimes instead of δ, the product ξδ is quoted. Within the Standard Model of electroweak interactions, these parameters are expected to be

ρ = 3/4, η = 0, ξ = 1, ξδ = 3/4.
Precise measurements of energy and angular distributions of the daughter leptons in decays of polarized muons and tau leptons are so far in good agreement with these predictions of the Standard Model.
Muon decay
Consider the decay of the positive muon:
In the muon rest frame, energy and angular distributions of the positrons emitted in the decay of a polarised muon expressed in terms of Michel parameters are the following, neglecting electron and neutrino masses and the radiative corrections:
where P_μ is the muon polarisation, x = 2E_e/m_μ is the reduced positron energy, and θ is the angle between the muon spin direction and the positron momentum direction. For the decay of the negative muon, the sign of the term containing cos θ should be inverted.
For the decay of the positive muon, the expected decay distribution for the Standard Model values of Michel parameters is
Integration of this expression over electron energy gives the angular distribution of the daughter positrons:
The positron energy distribution integrated over the polar angle is
|
https://en.wikipedia.org/wiki/Decimal%20degrees
|
Decimal degrees (DD) is a notation for expressing latitude and longitude geographic coordinates as decimal fractions of a degree. DD are used in many geographic information systems (GIS), web mapping applications such as OpenStreetMap, and GPS devices. Decimal degrees are an alternative to using sexagesimal degrees (degrees, minutes, and seconds: DMS notation). The values are bounded by ±90° for latitude and ±180° for longitude.
Positive latitudes are north of the equator, negative latitudes are south of the equator. Positive longitudes are east of the Prime Meridian; negative longitudes are west of the Prime Meridian. Latitude and longitude are usually expressed in that sequence, latitude before longitude. The abbreviation dLL has been used in the scientific literature, with locations in texts being identified as a tuple within square brackets, for example [54.5798,-3.5820]. An appropriate number of decimal places is used (W. B. Whalley, 2021, "Mapping small glaciers, rock glaciers and related features in an age of retreating glaciers: using decimal latitude-longitude locations and geomorphic information tensors", Geografia Fisica e Dinamica Quaternaria 44:55–67, DOI 10.4461/GFDQ.2021.44.4), and negative values are given as hyphen-minus, Unicode 002D.
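A small sketch of the DMS-to-decimal-degrees conversion described above (the function name and the direction-letter convention are ours):

```python
def dms_to_dd(degrees, minutes, seconds, direction):
    """Convert sexagesimal degrees/minutes/seconds to decimal degrees.

    direction: one of 'N', 'S', 'E', 'W'; 'S' and 'W' yield negative values.
    """
    dd = degrees + minutes / 60.0 + seconds / 3600.0
    return -dd if direction in ("S", "W") else dd
```

For instance, 54°34'47.28"N and 3°34'55.2"W convert to 54.5798 and -3.5820, matching the example tuple quoted above.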
Precision
The radius of the semi-major axis of the Earth at the equator is 6,378,137.0 m, resulting in a circumference of 40,075.017 km. The equator is divided into 360 degrees of longitude, so each degree at the equator represents 111,319.9 m (about 111.32 km). As one moves away from the equator towards a pole, however, one degree of longitude is multiplied by the cosine of the latitude, decreasing the distance, approaching zero at the pole. The number of decimal places required for a particular precision at the equator is:
A value in decimal degrees to a precision of 4 decimal places is precise to 11.1 m at the equator. A value in decimal degrees to 5 decimal places is precise to 1.11 m at the equator. Elevation also introduces a small error: at elevation, the
|
https://en.wikipedia.org/wiki/Segmental%20analysis%20%28biology%29
|
Segmental analysis is a method of anatomical analysis for describing the connective morphology of the human body. Instead of describing anatomy in terms of spatial relativity, as in the anatomical position method, segmental analysis describes anatomy in terms of which organs, tissues, etc. connect to each other, and the characteristics of those connections.
Literature
Anderson RH, Becker AE, Freedom RM, et al. Sequential segmental analysis of congenital heart disease. Pediatric Cardiology 1984;5(4):281-7.
Anatomy
|
https://en.wikipedia.org/wiki/FRISK%20Software%20International
|
FRISK Software International was an Icelandic software company, established in 1993, that developed the F-Prot antivirus and the F-Prot AVES antivirus and anti-spam service. It was acquired by Cyren in 2012.
History
The company was founded in 1993. Its name is derived from the initial letters of the personal name and patronymic of Friðrik Skúlason, its founder. Dr. Vesselin Vladimirov Bontchev, a computer expert from Bulgaria, best known for his research on the Dark Avenger virus, worked for the company as an anti-virus researcher.
F-Prot Antivirus was first released in 1989, making it one of the longest-lived anti-virus brands on the market. It was the world's first antivirus with a heuristic engine. It is sold in both home and corporate packages, of which there are editions for Windows and Linux. There are corporate versions for Microsoft Exchange, Solaris, and certain IBM eServers. The Linux version is available to home users free of charge, with virus definition updates. Free 30-day trial versions for other platforms can be downloaded. F-Prot AVES is specifically targeted towards corporate users.
The company has also produced a genealogy program called Espólín and Púki, a spellchecker with additional features.
In the summer of 2012, FRISK was acquired by Cyren, an Israeli-American provider of security products.
F-Prot Antivirus
F-Prot Antivirus (stylized F-PROT) is an antivirus product developed by FRISK Software International. It is available in related versions for several platforms. It is available for Microsoft Windows, Microsoft Exchange Server, Linux, Solaris, AIX and IBM eServers.
FRISK Software International allows others to develop applications using their scanning engine, through the use of a SDK. Many software vendors use the F-Prot Antivirus engine, including SUSE.
F-Prot Antivirus reached end-of-life on July 31, 2021, and is no longer sold.
Friðrik Skúlason
Friðrik Skúlason, also sometimes known as "Frisk", is the founder of FR
|
https://en.wikipedia.org/wiki/Basic%20Interoperable%20Scrambling%20System
|
Basic Interoperable Scrambling System, usually known as BISS, is a satellite signal scrambling system developed by the European Broadcasting Union and a consortium of hardware manufacturers.
Prior to its development, "ad hoc" or "occasional use" satellite news feeds were transmitted either using proprietary encryption methods (e.g. RAS, or PowerVu), or without any encryption. Unencrypted satellite feeds allowed anyone with the correct equipment to view the program material.
Proprietary encryption methods were determined by encoder manufacturers, and placed major compatibility limitations on the type of satellite receiver (IRD) that could be used for each feed. BISS was an attempt to create an "open platform" encryption system that could be used across a range of manufacturers' equipment.
There are mainly two different types of BISS encryption used:
BISS-1 transmissions are protected by a 12-digit hexadecimal "session key" that is agreed by the transmitting and receiving parties prior to transmission. The key is entered into both the encoder and the decoder; it then forms part of the encryption of the digital TV signal, and any receiver with BISS support and the correct key will decrypt the signal.
BISS-E (E for encrypted) is a variation in which the decoder has stored one secret BISS key entered by, for example, a rights holder. This key is unknown to the user of the decoder. The user is then sent a 16-digit hexadecimal code, which is entered as a "session key". This session key is then combined mathematically inside the decoder to calculate a BISS-1 key that can decrypt the signal.
Only a decoder with the correct secret BISS-key will be able to decrypt a BISS-E feed. This gives the rights holder control as to exactly which decoder can be used to decrypt/decode a specific feed. Any BISS-E encrypted feed will have a corresponding BISS-1 key that will unlock it.
BISS-E is amongst others used by EBU to protect UEFA Champions League, NBC in the United States for NBC O&O and Af
|
https://en.wikipedia.org/wiki/Factorization%20of%20polynomials
|
In mathematics and computer algebra, factorization of polynomials or polynomial factorization expresses a polynomial with coefficients in a given field or in the integers as the product of irreducible factors with coefficients in the same domain. Polynomial factorization is one of the fundamental components of computer algebra systems.
The first polynomial factorization algorithm was published by Theodor von Schubert in 1793. Leopold Kronecker rediscovered Schubert's algorithm in 1882 and extended it to multivariate polynomials and coefficients in an algebraic extension. But most of the knowledge on this topic is not older than circa 1965 and the first computer algebra systems:
When the long-known finite step algorithms were first put on computers, they turned out to be highly inefficient. The fact that almost any uni- or multivariate polynomial of degree up to 100 and with coefficients of a moderate size (up to 100 bits) can be factored by modern algorithms in a few minutes of computer time indicates how successfully this problem has been attacked during the past fifteen years. (Erich Kaltofen, 1982)
Nowadays, modern algorithms and computers can quickly factor univariate polynomials of degree more than 1000 having coefficients with thousands of digits. For this purpose, even for factoring over the rational numbers and number fields, a fundamental step is a factorization of a polynomial over a finite field.
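As a toy illustration of the "long-known finite step algorithms" mentioned in the quotation above, here is the rational root theorem as an exhaustive search. It recovers only linear factors over the rationals, a small piece of full factorization, and the names are ours:

```python
from fractions import Fraction

def rational_roots(coeffs):
    """All rational roots of a polynomial with integer coefficients,
    via the rational root theorem: any root p/q (in lowest terms) has
    p dividing the constant term and q dividing the leading coefficient.
    coeffs[i] is the coefficient of x**i; constant term assumed nonzero."""
    def divisors(n):
        n = abs(n)
        return [d for d in range(1, n + 1) if n % d == 0]

    constant, leading = coeffs[0], coeffs[-1]
    roots = set()
    for p in divisors(constant):
        for q in divisors(leading):
            for cand in (Fraction(p, q), Fraction(-p, q)):
                if sum(c * cand ** i for i, c in enumerate(coeffs)) == 0:
                    roots.add(cand)
    return sorted(roots)
```

For example, 2x² − x − 1 has roots −1/2 and 1, exposing the factorization (2x + 1)(x − 1) over the rationals. Such brute-force searches are exactly the kind of finite-step procedure that modern algorithms replaced.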
Formulation of the question
Polynomial rings over the integers or over a field are unique factorization domains. This means that every element of these rings is a product of a constant and a product of irreducible polynomials (those that are not the product of two non-constant polynomials). Moreover, this decomposition is unique up to multiplication of the factors by invertible constants.
Factorization depends on the base field. For example, the fundamental theorem of algebra, which states that every polynomial with complex coefficients has complex roots,
|
https://en.wikipedia.org/wiki/Apolysis
|
Apolysis ( "discharge, lit. absolution") is the separation of the cuticle from the epidermis in arthropods and related groups (Ecdysozoa). Since the cuticle of these animals is also the skeletal support of the body and is inelastic, it is shed during growth and a new covering of larger dimensions is formed. During this process, an arthropod becomes dormant for a period of time. Enzymes are secreted to digest the inner layers of the existing cuticle, detaching the animal from the outer cuticle. This allows the new cuticle to develop without being exposed to the environmental elements.
After apolysis, ecdysis occurs. Ecdysis is the actual emergence of the arthropod into the environment and always occurs directly after apolysis. The newly emerged animal then hardens and continues its life.
|
https://en.wikipedia.org/wiki/Mario%20Livio
|
Mario Livio (born June 19, 1945) is an Israeli-American astrophysicist and an author of works that popularize science and mathematics. For 24 years (1991–2015) he was an astrophysicist at the Space Telescope Science Institute, which operates the Hubble Space Telescope. He has published more than 400 scientific articles on topics including cosmology, supernova explosions, black holes, extrasolar planets, and the emergence of life in the universe. His book on the irrational number phi, The Golden Ratio: The Story of Phi, the World's Most Astonishing Number (2002), won the Peano Prize and the International Pythagoras Prize for popular books on mathematics.
Scientific career
Livio earned a Bachelor of Science degree in physics and mathematics at the Hebrew University of Jerusalem, a Master of Science degree in theoretical particle physics at the Weizmann Institute, and a Ph.D. in theoretical astrophysics at Tel Aviv University. He was a professor of physics at the Technion – Israel Institute of Technology from 1981 to 1991, before moving to the Space Telescope Science Institute.
Livio has focused much of his research on supernova explosions and their use in determining the rate of expansion of the universe. He has also studied so-called dark energy, black holes, and the formation of planetary systems around young stars. He has contributed to hundreds of papers in peer-reviewed journals on astrophysics. Among his prominent contributions, he has authored and co-authored important papers on topics related to accretion onto compact objects (white dwarfs, neutron stars, and black holes). In 1980, he published one of the very first multi-dimensional numerical simulations of the collapse of a massive star and a supernova explosion. He was one of the pioneers in the study of common envelope evolution of binary stars, and he applied the results to the shaping of planetary nebulae as well as to the progenitors of Type Ia supernovae. Together with D. Eichler, T. Piran, and D. S
|
https://en.wikipedia.org/wiki/Petite%20mutation
|
petite (ρ–) is a mutant first discovered in the yeast Saccharomyces cerevisiae. Due to the defect in the respiratory chain, 'petite' yeast are unable to grow on media containing only non-fermentable carbon sources (such as glycerol or ethanol) and form small colonies when grown in the presence of fermentable carbon sources (such as glucose). The petite phenotype can be caused by the absence of, or mutations in, mitochondrial DNA (termed "cytoplasmic Petites"), or by mutations in nuclear-encoded genes involved in oxidative phosphorylation. A neutral petite produces all wild type progeny when crossed with wild type.
petite mutations can be induced using a variety of mutagens, including DNA-intercalating agents, as well as chemicals that can interfere with DNA synthesis in growing cells. Mutagens that create petites are implicated in increased rates of degenerative diseases and in the aging process.
Overview
A mutation that produces small ("petite") anaerobic-like colonies was first shown in the yeast Saccharomyces cerevisiae and described by Boris Ephrussi and his co-workers in 1949 in Gif-sur-Yvette, France. The cells of petite colonies were smaller than those of wild-type colonies, but the term "petite" refers only to colony size and not the individual cell size.
History
Over 50 years ago, in a lab in France, Ephrussi et al. discovered a non-Mendelian inherited factor that is essential to respiration in the yeast Saccharomyces cerevisiae. S. cerevisiae without this factor, known as the ρ factor, is characterized by the development of small colonies compared with the wild-type yeast. These smaller colonies were dubbed petite colonies. These petite mutants were observed to arise spontaneously at a rate of 0.1%–1.0% per generation. They also found that treatment of wild-type S. cerevisiae with DNA-intercalating agents produced this mutation more rapidly.
Schatz identified a region of the yeast's nuclear DNA that was associated wi
|
https://en.wikipedia.org/wiki/Rifleman%27s%20rule
|
Rifleman's rule is a "rule of thumb" that allows a rifleman to accurately fire a rifle that has been calibrated for horizontal targets at uphill or downhill targets. The rule says that only the horizontal range should be considered when adjusting a sight or performing hold-over in order to account for bullet drop. Typically, the range of an elevated target is considered in terms of the slant range, incorporating both the horizontal distance and the elevation distance (possibly negative, i.e. downhill), as when a rangefinder is used to determine the distance to target. The slant range is not compatible with standard ballistics tables for estimating bullet drop.
The Rifleman's rule provides an estimate of the horizontal range for engaging a target at a known slant range (the uphill or downhill distance from the rifle). For a bullet to strike a target at a slant range of R_S and an incline angle of α, the rifle sight must be adjusted as if the shooter were aiming at a horizontal target at a range of R_H = R_S·cos(α). Figure 1 illustrates the shooting scenario. The rule holds for inclined and declined shooting (all angles measured with respect to horizontal). Very precise computer modeling and empirical evidence suggest that the rule works with reasonable accuracy in air, with both bullets and arrows.
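The correction above can be sketched numerically. This assumes the standard cosine form of the rule (equivalent horizontal range = slant range × cosine of the incline angle); the function name is illustrative:

```python
import math

def equivalent_horizontal_range(slant_range: float, incline_deg: float) -> float:
    """Rifleman's rule: sight an inclined shot at slant_range as if it
    were a horizontal shot at slant_range * cos(incline angle).

    The same correction applies uphill and downhill, since
    cos(-a) == cos(a).
    """
    return slant_range * math.cos(math.radians(incline_deg))

# A 300 m slant shot at a 30 degree incline is sighted as if the
# target were at a horizontal range of about 260 m.
print(round(equivalent_horizontal_range(300.0, 30.0)))  # 260
```

Note that the corrected range is always shorter than the slant range, which is why shooters who ignore the rule tend to shoot high on both uphill and downhill targets.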
Background
Definitions
There is a device that is mounted on the rifle called a sight. While there are many forms of rifle sight, they all permit the shooter to set the angle between the bore of the rifle and the line of sight (LOS) to the target. Figure 2 illustrates the relationship between the LOS and bore angle.
This relationship between the LOS to the target and the bore angle is determined through a process called "zeroing." The bore angle is set to ensure that a bullet on a parabolic trajectory will intersect the LOS to the target at a specific range. A properly adjusted rifle barrel and sight are said to be "zeroed." Figure 3 illustrates how the LOS, bullet trajecto
|
https://en.wikipedia.org/wiki/Electrocorticography
|
Electrocorticography (ECoG), a type of intracranial electroencephalography (iEEG), is a type of electrophysiological monitoring that uses electrodes placed directly on the exposed surface of the brain to record electrical activity from the cerebral cortex. In contrast, conventional electroencephalography (EEG) electrodes monitor this activity from outside the skull. ECoG may be performed either in the operating room during surgery (intraoperative ECoG) or outside of surgery (extraoperative ECoG). Because a craniotomy (a surgical incision into the skull) is required to implant the electrode grid, ECoG is an invasive procedure.
History
ECoG was pioneered in the early 1950s by Wilder Penfield and Herbert Jasper, neurosurgeons at the Montreal Neurological Institute. The two developed ECoG as part of their groundbreaking Montreal procedure, a surgical protocol used to treat patients with severe epilepsy. The cortical potentials recorded by ECoG were used to identify epileptogenic zones – regions of the cortex that generate epileptic seizures. These zones would then be surgically removed from the cortex during resectioning, thus destroying the brain tissue where epileptic seizures had originated. Penfield and Jasper also used electrical stimulation during ECoG recordings in patients undergoing epilepsy surgery under local anesthesia. This procedure was used to explore the functional anatomy of the brain, mapping speech areas and identifying the somatosensory and somatomotor cortex areas to be excluded from surgical removal.
A doctor named Robert Galbraith Heath was also an early researcher of the brain at the Tulane University School of Medicine.
Electrophysiological basis
ECoG signals are composed of synchronized postsynaptic potentials (local field potentials), recorded directly from the exposed surface of the cortex. The potentials occur primarily in cortical pyramidal cells, and thus must be conducted through several layers of the cerebral cortex, cerebro
|
https://en.wikipedia.org/wiki/Fleet%20Street%20Publisher
|
Fleet Street Publisher was an Atari ST desktop publishing program produced by Mirrorsoft in the United Kingdom and released in November 1986. An IBM PC-compatible version produced by Rowan Software was planned for 1987 but never released.
Running under GEM, the program offered features such as multi-column text, the ability to design flow charts and graphics, and multiple document sizes (flyers, menus, cards, etc.). Possible font sizes ranged from 4 to 216 points, with support for accented characters. Character and line spacing were fully controllable by the user. The software came with a gallery of 150 clip-art images.
The software was superseded by Timeworks Publisher (Publish-It in the United States), which the market regarded as a much better product. This new version was produced by GST Software Products, and upgrades for the PC versions were available into the late 2000s.
Versions
Fleet Street Publisher (1986, published by Spectrum Holobyte and France Image Logiciel)
Fleet Street Publisher 1.1 (1987, published by Mirrorsoft)
Fleet Street Publisher 2.0 (1989, published by MichTron)
Fleet Street Publisher 3.0 (1989, published by MichTron)
|
https://en.wikipedia.org/wiki/Energy%20current
|
Energy current is a flow of energy defined by the Poynting vector (E × H), as opposed to normal current (a flow of charge). It was originally postulated by Oliver Heaviside. It is also an informal name for energy flux.
Explanation
"Energy current" is a somewhat informal term that is used, on occasion, to describe the process of energy transfer in situations where the transfer can usefully be viewed in terms of a flow. It is particularly used when the transfer of energy is more significant to the discussion than the process by which the energy is transferred. For instance, the flow of fuel oil in a pipeline could be considered as an energy current, although this would not be a convenient way of visualising the fullness of the storage tanks.
The units of energy current are those of power (W). This is closely related to energy flux, which is the energy transferred per unit area per unit time (measured in W/m²).
Energy current in electromagnetism
A specific use of the concept of energy current was promulgated by Oliver Heaviside in the last quarter of the 19th century. Against heavy resistance from the engineering community, Heaviside worked out the physics of signal velocity/impedance/distortion on telegraph, telephone, and undersea cables. He invented the inductor-loaded "distortionless line" later patented by Michael Pupin in the USA.
Building on the concept of the Poynting vector, which describes the flow of energy in a transverse electromagnetic wave as the vector product of its electric and magnetic fields (S = E × H), Heaviside sought to extend this by treating the transfer of energy due to the electric current in a conductor in a similar manner. In doing so he reversed the contemporary view of current, so that the electric and magnetic fields due to the current are the "prime movers", rather than being a result of the motion of the charge in the conductor.
Heaviside's approach had some adherents at the time—enough, certainly, to quarrel with the "traditionalists" in p
|
https://en.wikipedia.org/wiki/Disc%20mill
|
A disc mill is a type of crusher that can be used to grind, cut, shear, shred, fiberize, pulverize, granulate, crack, rub, curl, fluff, twist, hull, blend, or refine. It works in a similar manner to the ancient Buhrstone mill in that the feedstock is fed between opposing discs or plates. The discs may be grooved, serrated, or spiked.
Applications
Typical applications for a single-disc mill are all three stages of the wet milling of field corn, manufacture of peanut butter, processing nut shells, ammonium nitrate, urea, producing chemical slurries and recycled paper slurries, and grinding chromium metal.
Double-disc mills are typically used for alloy powders, aluminum chips, bark, barley, borax, brake lining scrap, brass chips, sodium hydroxide, chemical salts, coconut shells, copper powder, cork, cottonseed hulls, pharmaceuticals, feathers, hops, leather, oilseed cakes, phosphates, rice, rosin, sawdust, and seeds.
Disc mills are relatively expensive to run and maintain and they consume much more power than other shredding machines, and are not used where ball mills or hammermills produce the desired results at a lower cost.
Mechanism
Substances are crushed between the edge of a thick, spinning disk and something else. Some mills cover the edge of the disk in blades to chop up incoming matter rather than crush it.
|
https://en.wikipedia.org/wiki/Hypouricemia
|
Hypouricemia or hypouricaemia is a level of uric acid in blood serum that is below normal. In humans, the normal range of this blood component has a lower threshold set variously in the range of 2 mg/dL to 4 mg/dL, while the upper threshold is 530 μmol/L (6 mg/dL) for women and 619 μmol/L (7 mg/dL) for men. Hypouricemia usually is benign and sometimes is a sign of a medical condition.
Presentation
Complications
Although normally benign, idiopathic renal hypouricemia may increase the risk of exercise-induced acute kidney failure. There is also evidence that hypouricemia can worsen conditions such as rheumatoid arthritis, especially when combined with low Vitamin C uptake, due to free radical damage.
Causes
Hypouricemia is not itself a medical condition, but it is a useful medical sign. It is usually due to drugs and toxic agents, sometimes to diet or genetics, and, rarely, it suggests an underlying medical condition.
Medication
The majority of drugs that contribute to hypouricemia are uricosuric drugs that increase the excretion of uric acid from the blood into the urine. Others include drugs that reduce the production of uric acid: xanthine oxidase inhibitors, urate oxidase (rasburicase), and sevelamer.
Diet
Hypouricemia is common in vegetarians and vegans due to the low purine content of most vegetarian diets. A vegetarian diet has been found to result in mean serum uric acid values as low as 239 μmol/L (2.7 mg/dL). While a vegetarian diet is typically seen as beneficial with respect to conditions such as gout, it may be associated with some other health conditions.
Transient hypouricemia sometimes is produced by total parenteral nutrition. Paradoxically, total parenteral nutrition may produce hypouricemia followed shortly by acute gout, a condition normally associated with hyperuricemia. The reasons for this are unclear.
Genetics
Two kinds of genetic mutations are known to cause hypouricemia: mutations causing xanthine oxidase deficiency, which r
|
https://en.wikipedia.org/wiki/PC-based%20IBM%20mainframe-compatible%20systems
|
Since the rise of the personal computer in the 1980s, IBM and other vendors have created PC-based systems compatible with the larger IBM mainframe computers. For a period of time, PC-based mainframe-compatible systems had a lower price and did not require as much electricity or floor space. However, they sacrificed performance and were not as dependable as mainframe-class hardware. These products have been popular with mainframe developers, in education and training settings, with very small companies with non-critical processing, and in certain disaster-relief roles (such as field insurance-adjustment systems for hurricane relief).
Background
Up until the mid-1990s, mainframes were very large machines that often occupied entire rooms. The rooms were often air conditioned and had special power arrangements to accommodate the three-phase electric power required by the machines. Modern mainframes are now physically comparatively small and require little or no special building arrangements.
System/370
IBM had demonstrated use of a mainframe instruction set in their first desktop computer—the IBM 5100, released in 1975. This product used microcode to execute many of the System/370's processor instructions, so that it could run a slightly modified version of IBM's APL mainframe program interpreter.
In 1980 rumors spread of a new IBM personal computer, perhaps a miniaturized version of the 370. In 1981 the IBM Personal Computer appeared, but it was not based on the System/370 architecture. However, IBM did use its new PC platform to create some exotic combinations with additional hardware that could execute S/370 instructions locally.
Personal Computer XT/370
In October 1983, IBM announced the IBM Personal Computer XT/370. This was essentially a three-in-one product. It could run PC DOS locally, it could also act as a 3270 terminal, and finally—its most important distinguishing feature relative to an IBM 3270 PC—was that it could execute S/37
|