https://en.wikipedia.org/wiki/Palmar%20aponeurosis
|
The palmar aponeurosis (palmar fascia) invests the muscles of the palm, and consists of central, lateral, and medial portions.
Structure
The central portion occupies the middle of the palm, is triangular in shape, and of great strength.
Its apex is continuous with the lower margin of the transverse carpal ligament, and receives the expanded tendon of the palmaris longus.
Its base divides below into four slips, one for each finger. Each slip gives off superficial fibers to the skin of the palm and finger, those to the palm joining the skin at the furrow corresponding to the metacarpophalangeal articulations, and those to the fingers passing into the skin at the transverse fold at the bases of the fingers.
The deeper part of each slip subdivides into two processes, which are inserted into the fibrous sheaths of the flexor tendons. From the sides of these processes offsets are attached to the transverse metacarpal ligament.
By this arrangement short channels are formed on the front of the heads of the metacarpal bones; through these the flexor tendons pass. The intervals between the four slips transmit the digital vessels and nerves, and the tendons of the lumbricales.
At the points of division into the slips mentioned, numerous strong, transverse fasciculi bind the separate processes together.
The central part of the palmar aponeurosis is intimately bound to the integument by dense fibroareolar tissue forming the superficial palmar fascia, and gives origin by its medial margin to the palmaris brevis.
It covers the superficial volar arch, the tendons of the flexor muscles, and the branches of the median and ulnar nerves; and on either side it gives off a septum, which is continuous with the interosseous aponeurosis, and separates the intermediate from the collateral groups of muscles.
Lateral and medial portions
The lateral and medial portions of the palmar aponeurosis are thin, fibrous layers, which cover, on the radial side, the muscles of the ball of the thu
|
https://en.wikipedia.org/wiki/Sun-2
|
The Sun-2 series of UNIX workstations and servers was launched by Sun Microsystems in November 1983. As the name suggests, the Sun-2 represented the second generation of Sun systems, superseding the original Sun-1 series. The Sun-2 series used a 10 MHz Motorola 68010 microprocessor with a proprietary Sun-2 Memory Management Unit (MMU), which enabled it to be the first Sun architecture to run a full virtual memory UNIX implementation, SunOS 1.0, based on 4.1BSD. Early Sun-2 models were based on the Intel Multibus architecture, with later models using VMEbus, which continued to be used in the successor Sun-3 and Sun-4 families.
Sun-2 systems were supported in SunOS until version 4.0.3.
A port to support Multibus Sun-2 systems in NetBSD was begun in January 2001 from the Sun-3 support in the NetBSD 1.5 release. Code supporting the Sun-2 began to be merged into the NetBSD tree in April 2001. sun2 is considered a tier 2 support platform as of NetBSD 7.0.1.
Sun-2 models
Models are listed in approximately chronological order.
A desktop disk and tape sub-system was introduced for the Sun-2/50 desktop workstation. It could hold a 5 ¼" disk drive and 5 ¼" tape drive. It used DD-50 (sometimes erroneously referred to as DB-50) connectors for its SCSI cables, a Sun specific design. It was often referred to as a "Sun Shoebox".
Sun-1 systems upgraded with Sun-2 Multibus CPU boards were sometimes referred to as the 2/100U (upgraded Sun-100) or 2/150U (upgraded Sun-150).
A typical configuration of a monochrome 2/120 with 4 MB of memory, 71 MB SCSI disk and 20 MB 1/4" SCSI tape cost $29,300 (1986 US price list).
A color 2/160 with 8 MB of memory, two 71 MB SCSI disks and 60 MB 1/4" SCSI tape cost $48,800 (1986 US price list).
A Sun 2/170 server with 4 MB of memory, no display, two Fujitsu Eagle 380 MB disk drives, one Xylogics 450 SMD disk controller, a 6250 bpi 1/2 inch tape drive and a 72" rack cost $79,500 (1986 US price list).
Sun-2 hardware
Sun 2 Multibus systems
|
https://en.wikipedia.org/wiki/CMBFAST
|
In physical cosmology, CMBFAST is a computer code, written by Uroš Seljak and Matias Zaldarriaga, for computing the anisotropy of the cosmic microwave background. It was the first efficient program to do so, reducing the time taken to compute the anisotropy from several days to a few minutes by using a novel semi-analytic line-of-sight approach.
|
https://en.wikipedia.org/wiki/MicroMUSE
|
MicroMUSE is a MUD started in 1990. It is based on the TinyMUSE system, which allows members to interact in a virtual environment called Cyberion City, as well as to create objects and modify their environment. MicroMUSE was conceived as an environment to allow people in far-flung locations to interact with each other, particularly college students with Internet access. A core group of users remain active.
History
1990
MicroMUSE was founded as MicroMUSH by the user known as "Jin" in the summer of 1990. Based upon TinyMUSH, MicroMUSH was centered around Cyberion City, a space station orbiting Earth of the 24th century. The initial MicroMUSH database was largely due to the efforts of Jin and the Wizards who went by the online aliases "Trout_Complex", "Coyote", "Opera_Ghost", "Snooze", "Wai", "Star" and "Mama.Bear". Larry "Leet" Foard and "Bard" (later known as "Michael") were, along with Jin, the primary programmers.
The focus, at the time, primarily was communication and creativity. Users were encouraged to build "objects" and were given extensive leeway to create and communicate with other members. At times, it could be compared to a high-tech version of the wild west.
1991
Typical problems of growth and success, over time, led to issues with computing resources. In April 1991, MicroMUSH moved to MIT. The name was officially changed to MicroMUSE during this same time period.
1992
Through 1992, the focus of MicroMUSE continued to change, though not very noticeably to existing users. New users were given a smaller "quota" of objects which they could build. The game was extremely popular at this point. Users could log in at almost any time of day and find at least thirty active people.
1993
By the end of 1993, the space engine, which had been developed within the original theme of MicroMUSE, was moved out of MicroMUSE. The focus was shifting; it became less about creativity and communication between random people across the internet, and more about bring
|
https://en.wikipedia.org/wiki/Newtonian%20gauge
|
In general relativity, the Newtonian gauge is a perturbed form of the Friedmann–Lemaître–Robertson–Walker line element. The gauge freedom of general relativity is used to eliminate two scalar degrees of freedom of the metric, so that it can be written as:
$ds^2 = -(1 + 2\Psi)\,dt^2 + a^2(t)\,(1 - 2\Phi)\,\delta_{ab}\,dx^a\,dx^b,$
where the Latin indices a and b are summed over the spatial directions and $\delta_{ab}$ is the Kronecker delta. We can instead make use of conformal time $\tau$ as the time component, yielding the longitudinal or conformal Newtonian gauge:
$ds^2 = a^2(\tau)\left[-(1 + 2\Psi)\,d\tau^2 + (1 - 2\Phi)\,\delta_{ab}\,dx^a\,dx^b\right],$
which is related by the simple transformation $dt = a\,d\tau$. They are called Newtonian gauges because $\Psi$ is the Newtonian gravitational potential of classical Newtonian gravity, which satisfies the Poisson equation for non-relativistic matter and on scales where the expansion of the universe may be neglected. It includes only scalar perturbations of the metric: by the scalar-vector-tensor decomposition these evolve independently of the vector and tensor perturbations and are the predominant ones affecting the growth of structure in the universe in cosmological perturbation theory. The vector perturbations vanish in cosmic inflation and the tensor perturbations are gravitational waves, which have a negligible effect on physics except for the so-called B-modes of the cosmic microwave background polarization. The tensor perturbation is truly gauge independent, since it is the same in all gauges.
In a universe without anisotropic stress (that is, where the stress–energy tensor is invariant under spatial rotations, or the three principal pressures are identical) the Einstein equations set $\Phi = \Psi$.
|
https://en.wikipedia.org/wiki/Scalar%E2%80%93vector%E2%80%93tensor%20decomposition
|
In cosmological perturbation theory, the scalar–vector–tensor decomposition is a decomposition of the most general linearized perturbations of the Friedmann–Lemaître–Robertson–Walker metric into components according to their transformations under spatial rotations. It was first discovered by E. M. Lifshitz in 1946. It follows from Helmholtz's Theorem (see Helmholtz decomposition.) The general metric perturbation has ten degrees of freedom. The decomposition states that the evolution equations for the most general linearized perturbations of the Friedmann–Lemaître–Robertson–Walker metric can be decomposed into four scalars, two divergence-free spatial vector fields (that is, with a spatial index running from 1 to 3), and a traceless, symmetric spatial tensor field with vanishing doubly and singly longitudinal components. The vector and tensor fields each have two independent components, so this decomposition encodes all ten degrees of freedom in the general metric perturbation. Using gauge invariance four of these components (two scalars and a vector field) may be set to zero.
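The degree-of-freedom count in that statement can be made explicit (a worked tally, not taken from the article): four scalars with one component each, two divergence-free spatial vectors with two components each, and one transverse-traceless spatial tensor with two components:
$4 \times 1 \;+\; 2 \times 2 \;+\; 1 \times 2 \;=\; 10 .$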
If the perturbed metric is $g_{\mu\nu} = \bar{g}_{\mu\nu} + h_{\mu\nu}$, where $h_{\mu\nu}$ is the perturbation, then the decomposition is as follows: the time–time component $h_{00}$ is a scalar, the time–space components $h_{0i}$ form a spatial vector $w_i$, and the space–space components $h_{ij}$ split into a trace part (a scalar multiplying $\delta_{ij}$) and a traceless spatial tensor $S_{ij}$,
where the Latin indices i and j run over spatial components (1,…,3). The tensor field $S_{ij}$ is traceless under the spatial part of the background metric (i.e. $\bar{g}^{ij} S_{ij} = 0$). The spatial vector and tensor undergo further decomposition. The vector is written
$w_i = w_i^{\parallel} + w_i^{\perp},$
where $\nabla \times \mathbf{w}^{\parallel} = 0$ and $\nabla \cdot \mathbf{w}^{\perp} = 0$ ($\nabla_i$ is the covariant derivative defined with respect to the spatial metric). The notation is used because, in Fourier space, these equations indicate that the vector points parallel and perpendicular to the direction of the wavevector, respectively. The parallel component can be expressed as the gradient of a scalar, $w^{\parallel}_i = \nabla_i A$. Thus $w_i$ can be written as a combination of a scalar and a divergenceless, two-component vector.
Finally, an analogous decomposition can be performed on the traceless tensor field $S_{ij}$. It can be written
where
where is a scalar (the combina
|
https://en.wikipedia.org/wiki/Cosmological%20perturbation%20theory
|
In physical cosmology, cosmological perturbation theory is the theory by which the evolution of structure is understood in the Big Bang model. Cosmological perturbation theory may be broken into two categories: Newtonian or general relativistic. Each case uses its governing equations to compute gravitational and pressure forces which cause small perturbations to grow and eventually seed the formation of stars, quasars, galaxies and clusters. Both cases apply only to situations where the universe is predominantly homogeneous, such as during cosmic inflation and large parts of the Big Bang. The universe is believed to still be homogeneous enough that the theory is a good approximation on the largest scales, but on smaller scales more involved techniques, such as N-body simulations, must be used. When deciding whether to use general relativity for perturbation theory, note that Newtonian physics is only applicable in some cases such as for scales smaller than the Hubble horizon, where spacetime is sufficiently flat, and for which speeds are non-relativistic.
Because of the gauge invariance of general relativity, the correct formulation of cosmological perturbation theory is subtle.
In particular, when describing an inhomogeneous spacetime, there is often not a preferred coordinate choice. There are currently two distinct approaches to perturbation theory in classical general relativity:
gauge-invariant perturbation theory based on foliating a space-time with hyper-surfaces, and
1+3 covariant gauge-invariant perturbation theory based on threading a space-time with frames.
Newtonian perturbation theory
In this section, we will focus on the effect of matter on structure formation in the hydrodynamical fluid regime. This regime is useful because dark matter has dominated structure growth for most of the universe's history. In this regime, we are on sub-Hubble scales, well inside the Hubble radius $c/H$ (where $H$ is the Hubble parameter), so we can take spacetime to be flat, and ignore general relativisti
|
https://en.wikipedia.org/wiki/Congruence%20bias
|
Congruence bias is the tendency of people to over-rely on testing their initial hypothesis (the most congruent one) while neglecting to test alternative hypotheses. That is, people rarely try experiments that could disprove their initial belief, but rather try to repeat their initial results. It is a special case of the confirmation bias.
Examples
Suppose that, in an experimental setting, a subject is presented with two buttons and told that pressing one of those buttons, but not the other, will open a door. The subject adopts the hypothesis that the button on the left opens the door in question. A direct test of this hypothesis would be pressing the button on the left; an indirect test would be pressing the button on the right. The latter is still a valid test because once the result of the door's remaining closed is found, the left button is proven to be the desired button. (This example is parallel to Bruner, Goodnow, and Austin's example in the psychology classic, A Study of Thinking.)
It is possible to apply this idea of direct and indirect testing to more complicated experiments in order to explain the presence of a congruence bias in people's reasoning. Congruence bias could be said to be present if a subject tests their own (usually naive) hypothesis again and again instead of trying to disprove it.
The classic example of subjects' congruence bias was discovered by Peter Wason (1960, 1968). Here, the experimenter gave subjects the number sequence "2, 4, 6", telling the subjects that this sequence followed a particular rule and instructing subjects to find the rule underlying the sequence logic. Subjects provided their own number sequences as tests to see if they could ascertain the rule dictating which numbers could be included in the sequence and which could not. Most subjects quickly assumed that the underlying rule was "numbers ascending by 2", and provided as tests only sequences concordant with this rule, such as "8, 10, 12" or "3, 5, 7" (direct testing
|
https://en.wikipedia.org/wiki/IP%20set
|
In mathematics, an IP set is a set of natural numbers which contains all finite sums of some infinite set.
The finite sums of a set D of natural numbers are all those numbers that can be obtained by adding up the elements of some finite nonempty subset of D.
The set of all finite sums over D is often denoted as FS(D). Slightly more generally, for a sequence of natural numbers (ni), one can consider the set of finite sums FS((ni)), consisting of the sums of all finite length subsequences of (ni).
A set A of natural numbers is an IP set if there exists an infinite set D such that FS(D) is a subset of A. Equivalently, one may require that A contains all finite sums FS((ni)) of a sequence (ni).
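As a worked example (not from the article), take D to be the set of powers of two:
$D = \{1, 2, 4, 8, \ldots\}, \qquad \mathrm{FS}(D) = \{\,2^{k_1} + 2^{k_2} + \cdots + 2^{k_m} : k_1 < k_2 < \cdots < k_m\,\} = \{1, 2, 3, 4, \ldots\},$
since every positive integer is the sum of a finite set of distinct powers of two (its binary representation). Any set A that contains all positive integers therefore contains FS(D), so it is an IP set.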
Some authors give a slightly different definition of IP sets: They require that FS(D) equal A instead of just being a subset.
The term IP set was coined by Hillel Furstenberg and Benjamin Weiss to abbreviate "infinite-dimensional parallelepiped". Serendipitously, the abbreviation IP can also be expanded to "idempotent" (a set is an IP if and only if it is a member of an idempotent ultrafilter).
Hindman's theorem
If $A$ is an IP set and $A = A_1 \cup A_2 \cup \cdots \cup A_n$, then at least one $A_i$ is an IP set.
This is known as Hindman's theorem or the finite sums theorem. In different terms, Hindman's theorem states that the class of IP sets is partition regular.
Since the set of natural numbers itself is an IP set and partitions can also be seen as colorings, one can reformulate a special case of Hindman's theorem in more familiar terms: Suppose the natural numbers are "colored" with n different colors; each natural number gets one and only one color. Then there exists a color c and an infinite set D of natural numbers, all colored with c, such that every finite sum over D also has color c.
Hindman's theorem is named for mathematician Neil Hindman, who proved it in 1974.
The Milliken–Taylor theorem is a common generalisation of Hindman's theorem and Ramsey's theorem.
Semigroups
The definition of being IP has be
|
https://en.wikipedia.org/wiki/Partition%20regularity
|
In combinatorics, a branch of mathematics, partition regularity is one notion of largeness for a collection of sets.
Given a set $X$, a collection of subsets $\mathbb{S} \subset \mathcal{P}(X)$ is called partition regular if every set A in the collection has the property that, no matter how A is partitioned into finitely many subsets, at least one of the subsets will also belong to the collection. That is,
for any $A \in \mathbb{S}$, and any finite partition $A = C_1 \cup C_2 \cup \cdots \cup C_n$, there exists an i ≤ n such that $C_i$ belongs to $\mathbb{S}$. Ramsey theory is sometimes characterized as the study of which collections $\mathbb{S}$ are partition regular.
Examples
The collection of all infinite subsets of an infinite set X is a prototypical example. In this case partition regularity asserts that every finite partition of an infinite set has an infinite cell (i.e. the infinite pigeonhole principle.)
Sets with positive upper density in $\mathbb{N}$: the upper density $\overline{d}(A)$ of $A \subseteq \mathbb{N}$ is defined as $\overline{d}(A) = \limsup_{n \to \infty} \frac{|A \cap \{1, 2, \ldots, n\}|}{n}$ (Szemerédi's theorem)
For any ultrafilter $\mathbb{U}$ on a set $X$, $\mathbb{U}$ is partition regular: for any $A \in \mathbb{U}$, if $A = A_1 \cup \cdots \cup A_n$, then exactly one $A_i \in \mathbb{U}$.
Sets of recurrence: a set R of integers is called a set of recurrence if for any measure-preserving transformation $T$ of the probability space (Ω, β, μ) and $A \in \beta$ of positive measure there is a nonzero $n \in R$ so that $\mu(A \cap T^{n}A) > 0$.
Call a subset of natural numbers a.p.-rich if it contains arbitrarily long arithmetic progressions. Then the collection of a.p.-rich subsets is partition regular (Van der Waerden, 1927).
Let be the set of all n-subsets of . Let . For each n, is partition regular. (Ramsey, 1930).
For each infinite cardinal $\kappa$, the collection of stationary sets of $\kappa$ is partition regular. More is true: if $S$ is stationary and $S = \bigcup_{\alpha < \lambda} S_{\alpha}$ for some $\lambda < \kappa$, then some $S_{\alpha}$ is stationary.
The collection of $\Delta$-sets: $A \subseteq \mathbb{N}$ is a $\Delta$-set if $A$ contains the set of differences $\{s_m - s_n : m > n\}$ for some sequence $(s_n)$.
The set of barriers on $\mathbb{N}$: call a collection $\mathcal{B}$ of finite subsets of $\mathbb{N}$ a barrier if:
$\forall X, Y \in \mathcal{B}$, $X \not\subset Y$, and
for all infinite $I \subseteq \bigcup \mathcal{B}$, there is some $X \in \mathcal{B}$ such that the elements of X are the smallest elements of I; i.e. $X \subseteq I$ and $\forall i \in I \setminus X, \forall x \in X, x < i$.
This generalizes Ramsey's theorem, as each $[\mathbb{N}]^n$ is a barrier. (
|
https://en.wikipedia.org/wiki/Luminescent%20bacteria
|
Luminescent bacteria emit light as the result of a chemical reaction during which chemical energy is converted to light energy. Luminescent bacteria exist as symbiotic organisms carried within a larger organism, such as many deep-sea organisms, including the lanternfish, the anglerfish, certain jellyfish, certain clams and the gulper eel. The light is generated by an enzyme-catalyzed chemiluminescence reaction, wherein the pigment luciferin is oxidised by the enzyme luciferase. The expression of genes related to bioluminescence is controlled by an operon called the lux operon.
Some species of luminescent bacteria possess quorum sensing, the ability to determine local population by the concentration of chemical messengers. Species which have quorum sensing can turn on and off certain chemical pathways, commonly luminescence; in this way, once population levels reach a certain point the bacteria switch on light-production
Characteristics of the phenomenon
Bioluminescence is a form of luminescence, or "cold light" emission; less than 20% of the light generates thermal radiation. It should not be confused with fluorescence, phosphorescence or refraction of light. Most forms of bioluminescence are brighter (or only exist) at night, following a circadian rhythm.
See also
Dinoflagellates
Vibrionaceae (e.g. Vibrio fischeri, Vibrio harveyi, Vibrio phosphoreum)
|
https://en.wikipedia.org/wiki/Grigore%20Moisil
|
Grigore Constantin Moisil (10 January 1906 – 21 May 1973) was a Romanian mathematician, computer pioneer, and titular member of the Romanian Academy. His research was mainly in the fields of mathematical logic (Łukasiewicz–Moisil algebra), algebraic logic, MV-algebra, and differential equations. He is viewed as the father of computer science in Romania.
Moisil was also a member of the Academy of Sciences of Bologna and of the International Institute of Philosophy. In 1996, the IEEE Computer Society awarded him posthumously the Computer Pioneer Award.
Biography
Grigore Moisil was born in 1906 in Tulcea into an intellectual family. His great-grandfather, Grigore Moisil (1814–1891), a clergyman, was one of the founders of the first Romanian high school in Năsăud. His father, Constantin Moisil (1876–1958), was a history professor, archaeologist and numismatist; as a member of the Romanian Academy, he filled the position of Director of the Numismatics Office of the Academy. His mother, Elena (1863–1949), was a teacher in Tulcea, later the director of "Maidanul Dulapului" school in Bucharest (now "Ienăchiță Văcărescu" school).
Grigore Moisil attended primary school in Bucharest, then high school in Vaslui and Bucharest between 1916 and 1922. In 1924 he was admitted to the Civil Engineering School of the Polytechnic University of Bucharest, and also the Mathematics School of the University of Bucharest. He showed a stronger interest in mathematics, so he quit the Polytechnic University in 1929, despite already having passed all the third-year exams. In 1929 he defended his Ph.D. thesis, La mécanique analytique des systemes continus (Analytical mechanics of continuous systems), before a commission led by Gheorghe Țițeica, with Dimitrie Pompeiu and Anton Davidoglu as members. The thesis was published the same year by the Gauthier-Villars publishing house in Paris, and received favourable comments from Vito Volterra, Tullio Levi-Civita, and Paul Lévy.
In 1930 Mo
|
https://en.wikipedia.org/wiki/Kr%C3%BCppel%20associated%20box
|
The Krüppel associated box (KRAB) domain is a category of transcriptional repression domains present in approximately 400 human zinc finger protein-based transcription factors (KRAB zinc finger proteins). The KRAB domain typically consists of about 75 amino acid residues, while the minimal repression module is approximately 45 amino acid residues. It is predicted to function through protein-protein interactions via two amphipathic helices. The most prominent interacting protein is TRIM28, initially visualized as SMP1 and subsequently cloned as KAP1 and TIF1-beta. Substitutions for the conserved residues abolish repression.
Over 10 independently encoded KRAB domains have been shown to be effective repressors of transcription, suggesting this activity to be a common property of the domain. KRAB domains can be fused with dCas9 CRISPR tools to form even stronger repressors.
Evolution
The KRAB domain was initially identified in 1988 as a periodic array of leucine residues separated by six amino acids, located 5’ to the zinc finger region of KOX1/ZNF10, and was coined the heptad repeat of leucines (also known as a leucine zipper). Later, this domain was named the Krüppel associated box (KRAB) in association with the C2H2 zinc finger proteins. The KRAB domain is confined to genomes from tetrapod organisms. The KRAB-containing C2H2-ZNF genes constitute the largest sub-family of zinc finger genes. More than half of the C2H2-ZNF genes in the human genome are associated with a KRAB domain. They are more prone to clustering and are found in large clusters on the human genome.
The KRAB domain presents one of the strongest repressors in the human genome. Once the KRAB domain was fused to the tetracycline repressor (TetR), the TetR-KRAB fusion proteins were the first engineered drug-inducible repressor that worked in mammalian cells.
Examples
Human genes encoding KRAB-ZFPs include KOX1/ZNF10, KOX8/ZNF708, ZNF43, ZNF184, ZNF91, HPF4, HTF10 and HTF34.
|
https://en.wikipedia.org/wiki/Power-on%20reset
|
A power-on reset (PoR, POR) generator is a microcontroller or microprocessor peripheral that generates a reset signal when power is applied to the device. It ensures that the device starts operating in a known state.
PoR generator
In VLSI devices, the power-on reset (PoR) is an electronic device incorporated into the integrated circuit that detects the power applied to the chip and generates a reset impulse that goes to the entire circuit placing it into a known state.
A simple PoR uses the charging of a capacitor, in series with a resistor, to measure a time period during which the rest of the circuit is held in a reset state. A Schmitt trigger may be used to deassert the reset signal cleanly, once the rising voltage of the RC network passes the threshold voltage of the Schmitt trigger. The resistor and capacitor values should be determined so that the charging of the RC network takes long enough that the supply voltage will have stabilised by the time the threshold is reached.
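As a rough illustration of that sizing constraint (component values are hypothetical, not from the article), the following sketch computes how long an RC network takes to charge to a Schmitt-trigger threshold, using the first-order step response V(t) = Vdd·(1 − e^(−t/RC)):

/* Sketch: estimate the reset-pulse width produced by an RC power-on-reset
 * network driving a Schmitt trigger. Component values are illustrative. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double vdd = 3.3;     /* supply voltage, volts (assumed to step up instantly) */
    double vth = 1.8;     /* Schmitt-trigger rising threshold, volts */
    double r   = 100e3;   /* resistor, ohms */
    double c   = 100e-9;  /* capacitor, farads */

    /* Solve vdd * (1 - exp(-t / RC)) = vth for t. */
    double t_reset = -r * c * log(1.0 - vth / vdd);

    printf("RC time constant: %.1f ms\n", r * c * 1e3);
    printf("reset released after roughly %.1f ms\n", t_reset * 1e3);
    return 0;
}

The supply must have stabilised well within that time for the reset to be effective, which is exactly the sizing consideration described above.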
One of the issues with using an RC network to generate the PoR pulse is the sensitivity of the R and C values to the power-supply ramp characteristics. When the power-supply ramp is rapid, the R and C values can be chosen so that the time to reach the switching threshold of the Schmitt trigger yields a sufficiently long reset pulse. When the power-supply ramp itself is slow, however, the RC network tends to charge up along with the ramping supply, so by the time the input Schmitt stage is powered up and ready, the voltage from the RC network may already have crossed the Schmitt trigger point. This means that no reset pulse may be supplied to the core of the VLSI device.
Power-on reset on IBM mainframes
On an IBM mainframe, a power-on reset (POR) is a sequence of actions that the processor performs either due to a POR request from the operator or as part of turning on power. The operator requests a POR for configuration changes that cannot be recognized by a simple Syste
|
https://en.wikipedia.org/wiki/Logarithmic%20growth
|
In mathematics, logarithmic growth describes a phenomenon whose size or cost can be described as a logarithmic function of some input, e.g. y = C log(x). Any logarithm base can be used, since one base can be converted to another by multiplying by a fixed constant. Logarithmic growth is the inverse of exponential growth and is very slow.
A familiar example of logarithmic growth is the number of digits of a positive integer N in positional notation, which grows as $\log_b(N)$, where b is the base of the number system used, e.g. 10 for decimal arithmetic. In more advanced mathematics, the partial sums of the harmonic series
$1 + \tfrac{1}{2} + \tfrac{1}{3} + \tfrac{1}{4} + \cdots$
grow logarithmically. In the design of computer algorithms, logarithmic growth, and related variants such as log-linear or linearithmic growth, are very desirable indications of efficiency, and occur in the time complexity analysis of algorithms such as binary search.
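A small numerical check (illustrative code, not from the article) shows both behaviours mentioned above: the decimal digit count of n and the harmonic partial sum H_n both grow roughly like log n:

/* Sketch: digit counts and harmonic partial sums both grow logarithmically. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double harmonic = 0.0;
    long next_print = 10;

    for (long n = 1; n <= 1000000; ++n) {
        harmonic += 1.0 / n;
        if (n == next_print) {
            int digits = (int)floor(log10((double)n)) + 1;  /* grows as log10(n) */
            printf("n = %8ld  digits = %d  H_n = %.4f  ln n = %.4f\n",
                   n, digits, harmonic, log((double)n));
            next_print *= 10;
        }
    }
    /* H_n - ln n tends to the Euler-Mascheroni constant (about 0.5772),
     * so the partial sums of the harmonic series grow like ln n. */
    return 0;
}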
Logarithmic growth can lead to apparent paradoxes, as in the martingale roulette system, where the potential winnings before bankruptcy grow as the logarithm of the gambler's bankroll. It also plays a role in the St. Petersburg paradox.
In microbiology, the rapidly growing exponential growth phase of a cell culture is sometimes called logarithmic growth. During this bacterial growth phase, the number of new cells appearing is proportional to the population. This terminological confusion between logarithmic growth and exponential growth may be explained by the fact that exponential growth curves may be straightened by plotting them using a logarithmic scale for the growth axis.
See also
(an even slower growth model)
|
https://en.wikipedia.org/wiki/Ceragenin
|
Ceragenins, or cationic steroid antimicrobials (CSAs), are synthetically-produced, small-molecule chemical compounds consisting of a sterol backbone with amino acids and other chemical groups attached to them. These compounds have a net positive charge that is electrostatically attracted to the negatively charged cell membranes of certain viruses, fungi and bacteria. CSAs have a high binding affinity for such membranes (including Lipid A) and are able to rapidly disrupt the target membranes leading to rapid cell death. While CSAs have a mechanism of action that is also seen in antimicrobial peptides, which form part of the body's innate immune system, they avoid many of the difficulties associated with their use as medicines.
Ceragenins were discovered by Paul B. Savage of Brigham Young University's Department of Chemistry and Biochemistry. In data previously presented by Savage and other researchers, CSAs were shown to have broad-spectrum antibacterial activity. Derya Unutmaz, Associate Professor of Microbiology and Immunology at the Vanderbilt University School of Medicine, tested several CSAs in his laboratory for their ability to kill HIV directly. Unutmaz said in 2006 "We have some preliminary but very exciting results. But we would like to formally show this before making any claims that would cause unwanted hype."
On February 6, 2006, researchers including Savage announced, before peer review, that a ceragenin compound, CSA-54, appeared to inactivate HIV.
|
https://en.wikipedia.org/wiki/Lithogenic%20silica
|
Lithogenic silica (LSi) is silica (SiO2) derived from terrigenous rock (igneous, metamorphic, and sedimentary), lithogenic sediments composed of the detritus of pre-existing rock, volcanic ejecta, extraterrestrial material, and minerals such as silicates. Silica is the most abundant compound in the Earth's crust (59%) and the main component of almost every rock (>95%).
Lithogenic Silica in Marine Systems
LSi can either be accumulated "directly" in marine sediments as clastic particles or be transferred into dissolved silica (DSi) in the water column. Within living marine systems, DSi is the most important form of silica. Forms of DSi, such as silicic acid (Si(OH)4), are utilized by silicoflagellates and radiolarians to create their mineral skeletons, and by diatoms to develop their frustules (external shells). These structures are vitally important, as they can provide protection, amplify light for photosynthesis, and even help keep these organisms afloat in the water column. DSi more readily forms from biogenic silica (BSi) than from LSi, as the latter is less soluble in water. However, LSi is still an important supply to the silica cycle, due to it being a primary supplier of silica to the water column.
Sources
Rivers are one of the major suppliers of LSi to marine environments. As they flow, rivers pick up fine particles, such as clays, silts, and sand, through physical weathering. Lithogenic silicic acid forms through chemical weathering, as CO2-rich water comes into contact with silicate and aluminosilicate minerals from terrestrial rocks. The silicic acid is then transported to the river via runoff or groundwater flow before being transported to the ocean. Estimates of combined flux (both lithogenic and biogenic) report that about 6.2 ± 1.8 Tmol Si year−1 and 147 ± 44 Tmol Si year−1 of dissolved and particulate silica, respectively, enter estuaries.
Eolian transport occurs when wind picks up weathered particles, primarily lithogenic, and transports them into the atmosph
|
https://en.wikipedia.org/wiki/Biogenic%20silica
|
Biogenic silica (bSi), also referred to as opal, biogenic opal, or amorphous opaline silica, forms one of the most widespread biogenic minerals. For example, microscopic particles of silica called phytoliths can be found in grasses and other plants.
Silica is an amorphous metalloid oxide formed by complex inorganic polymerization processes. This is opposed to the other major biogenic minerals, comprising carbonate and phosphate, which occur in nature as crystalline iono-covalent solids (e.g. salts) whose precipitation is dictated by solubility equilibria. Chemically, bSi is hydrated silica (SiO2·nH2O), which is essential to many plants and animals.
Diatoms in both fresh and salt water extract dissolved silica from the water to use as a component of their cell walls. Likewise, some holoplanktonic protozoa (Radiolaria), some sponges, and some plants (leaf phytoliths) use silicon as a structural material. Silicon is known to be required by chicks and rats for growth and skeletal development. Silicon is in human connective tissues, bones, teeth, skin, eyes, glands and organs.
Silica in marine environments
Silicate, or silicic acid (H4SiO4), is an important nutrient in the ocean. Unlike the other major nutrients such as phosphate, nitrate, or ammonium, which are needed by almost all marine plankton, silicate is an essential chemical requirement for very specific biota, including diatoms, radiolaria, silicoflagellates, and siliceous sponges. These organisms extract dissolved silicate from open ocean surface waters for the buildup of their particulate silica (SiO2), or opaline, skeletal structures (i.e. the biota's hard parts). Some of the most common siliceous structures observed at the cell surface of silica-secreting organisms include: spicules, scales, solid plates, granules, frustules, and other elaborate geometric forms, depending on the species considered.
Marine sources of silica
Five major sources of dissolved silica to the marine environment can be distingu
|
https://en.wikipedia.org/wiki/Backjumping
|
In backtracking algorithms, backjumping is a technique that reduces search space, therefore increasing efficiency. While backtracking always goes up one level in the search tree when all values for a variable have been tested, backjumping may go up more levels. In this article, a fixed order of evaluation of variables is used, but the same considerations apply to a dynamic order of evaluation.
Definition
Whenever backtracking has tried all values for a variable without finding any solution, it reconsiders the last of the previously assigned variables, changing its value or further backtracking if no other values are to be tried. If $x_1 = a_1, \ldots, x_k = a_k$ is the current partial assignment and all values for $x_{k+1}$ have been tried without finding a solution, backtracking concludes that no solution extending $x_1 = a_1, \ldots, x_k = a_k$ exists. The algorithm then "goes up" to $x_k$, changing $x_k$'s value if possible, backtracking again otherwise.
The partial assignment is not always necessary in full to prove that no value of $x_{k+1}$ leads to a solution. In particular, a prefix of the partial assignment may have the same property, that is, there exists an index $j < k$ such that $x_1 = a_1, \ldots, x_j = a_j$ cannot be extended to form a solution with whatever value for $x_{k+1}$. If the algorithm can prove this fact, it can directly consider a different value for $x_j$ instead of reconsidering $x_k$ as it would normally do.
The efficiency of a backjumping algorithm depends on how high it is able to backjump. Ideally, the algorithm could jump from $x_{k+1}$ to whichever variable $x_j$ is such that the current assignment to $x_1, \ldots, x_j$ cannot be extended to form a solution with any value of $x_{k+1}$. If this is the case, $x_j$ is called a safe jump.
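As a concrete sketch of these ideas (a simplified Gaschnig-style example, not the article's own pseudocode), the following C program applies backjumping to a small 4-queens constraint problem: at a leaf dead end it jumps back to the deepest variable that conflicted with one of the current variable's candidate values, which is a safe jump in the sense just defined.

/* Sketch of Gaschnig-style backjumping on the 4-queens problem.
 * The problem and its size are illustrative, not taken from the article. */
#include <stdio.h>
#include <stdbool.h>

#define NVARS   4
#define NVALUES 4

static int assignment[NVARS];  /* assignment[i] = column of the queen placed in row i */

/* Binary constraint between rows i and j (i > j): no shared column or diagonal. */
static bool consistent(int i, int j)
{
    int dr = i - j;
    int dc = assignment[i] - assignment[j];
    return dc != 0 && dc != dr && dc != -dr;
}

/* Try to extend the assignment from variable k onward.
 * Returns NVARS on success, otherwise the level to jump back to
 * (-1 means the problem is unsatisfiable). */
static int solve(int k)
{
    if (k == NVARS)
        return NVARS;

    int deepest_culprit = -1;   /* deepest first-conflict level over all values of x_k */
    bool had_consistent = false;

    for (int v = 0; v < NVALUES; ++v) {
        assignment[k] = v;

        int conflict = -1;      /* earliest past variable inconsistent with this value */
        for (int j = 0; j < k; ++j) {
            if (!consistent(k, j)) { conflict = j; break; }
        }

        if (conflict == -1) {
            had_consistent = true;
            int r = solve(k + 1);
            if (r == NVARS) return NVARS;  /* solution found below */
            if (r < k) return r;           /* a deeper backjump passes over this level */
            /* r == k: chronological step back to this level; try the next value */
        } else if (conflict > deepest_culprit) {
            deepest_culprit = conflict;
        }
    }

    /* Leaf dead end (no value was even locally consistent): jump to the deepest
     * culprit, since changing the variables between it and k cannot help.
     * Otherwise fall back to chronological backtracking, one level up. */
    return had_consistent ? k - 1 : deepest_culprit;
}

int main(void)
{
    if (solve(0) == NVARS) {
        for (int i = 0; i < NVARS; ++i)
            printf("queen in row %d -> column %d\n", i, assignment[i]);
    } else {
        printf("no solution\n");
    }
    return 0;
}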
Establishing whether a jump is safe is not always feasible, as safe jumps are defined in terms of the set of solutions, which is what the algorithm is trying to find. In practice, backjumping algorithms use the lowest index they can efficiently prove to be a safe jump. Different algorithms use different methods for determining whether a jump is safe. These methods have different
|
https://en.wikipedia.org/wiki/Interfoveolar%20ligament
|
Lateral to the conjoint tendon, previously known as the inguinal aponeurotic falx, there is a ligamentous band originating from the lower margin of the transversalis fascia and extending down in front of the inferior epigastric artery to the superior ramus of the pubis; it is termed the interfoveolar ligament of Hesselbach and sometimes contains a few muscular fibers.
It is named for Franz Kaspar Hesselbach.
|
https://en.wikipedia.org/wiki/Jugular%20fossa
|
The jugular fossa is a deep depression in the inferior part of the temporal bone at the base of the skull. It lodges the bulb of the internal jugular vein.
Structure
The jugular fossa is located in the temporal bone, posterior to the carotid canal and the cochlear aqueduct.
In the bony ridge dividing the carotid canal from the jugular fossa is the small inferior tympanic canaliculus for the passage of the tympanic branch of the glossopharyngeal nerve.
In the lateral part of the jugular fossa is the mastoid canaliculus for the entrance of the auricular branch of the vagus nerve.
Behind the jugular fossa is a quadrilateral area, the jugular surface, covered with cartilage in the fresh state, and articulating with the jugular process of the occipital bone.
Variation
The jugular fossa has variable depth and size in different skulls.
Function
The jugular fossa lodges the bulb of the internal jugular vein.
Clinical significance
Abnormally shaped jugular fossae may cause ear problems. If it lies close to the cochlea, it may cause tinnitus. A high jugular fossa may be linked to Ménière's disease.
See also
Fossa (anatomy)
Additional images
|
https://en.wikipedia.org/wiki/Sun-1
|
Sun-1 was the first generation of UNIX computer workstations and servers produced by Sun Microsystems, launched in May 1982. These were based on a CPU board designed by Andy Bechtolsheim while he was a graduate student at Stanford University and funded by DARPA. The Sun-1 systems ran SunOS 0.9, a port of UniSoft's UniPlus V7 port of Seventh Edition UNIX to the Motorola 68000 microprocessor, with no window system. Affixed to the case of early Sun-1 workstations and servers is a red bas relief emblem with the word SUN spelled using only symbols shaped like the letter U. This is the original Sun logo, rather than the more familiar purple diamond shape used later.
The first Sun-1 workstation was sold to Solo Systems in May 1982. The Sun-1/100 was used in the original Lucasfilm EditDroid non-linear editing system.
Models
Hardware
The Sun-1 workstation was based on the Stanford University SUN workstation designed by Andy Bechtolsheim (advised by Vaughan Pratt and Forest Baskett), a graduate student and co-founder of Sun Microsystems. At the heart of this design were the Multibus CPU, memory, and video display cards. The cards used in the Sun-1 workstation were a second-generation design with a private memory bus allowing memory to be expanded to 2 MB without performance degradation.
The Sun 68000 board introduced in 1982 was a powerful single-board computer. It combined a 10 MHz Motorola 68000 microprocessor, a Sun-designed memory management unit (MMU), 256 KB of zero wait state memory with parity, up to 32 KB of EPROM memory, two serial ports, a 16-bit parallel port and an Intel Multibus (IEEE 796 bus) interface in a single Multibus form factor.
By using the Motorola 68000 processor tightly coupled with the Sun-1 MMU, the Sun 68000 CPU board was able to support a multi-tasking operating system such as UNIX. It included an advanced Sun-designed multi-process two-level MMU with facilities for memory protection, code sharing and demand paging of memory. The Sun
|
https://en.wikipedia.org/wiki/Dynamic%20synchronous%20transfer%20mode
|
Dynamic synchronous transfer mode (DTM) is an optical networking technology standardized by the European Telecommunications Standards Institute (ETSI) in 2001 beginning with specification ETSI ES 201 803-1. DTM is a time-division multiplexing and a circuit-switching network technology that combines switching and transport. It is designed to provide a guaranteed quality of service (QoS) for streaming video services, but can be used for packet-based services as well. It is marketed for professional media networks, mobile TV networks, digital terrestrial television (DTT) networks, in content delivery networks and in consumer oriented networks, such as "triple play" networks.
History
The DTM architecture was conceived in 1985 and developed at the Royal Institute of Technology (KTH) in Sweden.
It was published in February 1996.
The research team was split into two spin-off companies, reflecting two different approaches to use the technology. One of these companies remains active in the field and delivers commercial products based on the DTM technology. Its name is Net Insight.
See also
Broadband Integrated Services Digital Network
|
https://en.wikipedia.org/wiki/Multiscale%20modeling
|
Multiscale modeling or multiscale mathematics is the field of solving problems that have important features at multiple scales of time and/or space. Important problems include multiscale modeling of fluids, solids, polymers, proteins, nucleic acids as well as various physical and chemical phenomena (like adsorption, chemical reactions, diffusion).
An example of such problems involves the Navier–Stokes equations for incompressible fluid flow.
In a wide variety of applications, the stress tensor $\tau$ is given as a linear function of the velocity gradient $\nabla u$. Such a choice for $\tau$ has been proven to be sufficient for describing the dynamics of a broad range of fluids. However, its use for more complex fluids such as polymers is dubious. In such a case, it may be necessary to use multiscale modeling to accurately model the system such that the stress tensor can be extracted without requiring the computational cost of a full microscale simulation.
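For a Newtonian fluid this linear closure takes the familiar textbook form shown below (the symbols $\tau$, $\sigma$, $u$, $p$ and $\mu$ are generic notation for this illustration, not taken from the article):
$\tau_{ij} = \mu \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right), \qquad \sigma_{ij} = -p\,\delta_{ij} + \tau_{ij},$
where $u$ is the velocity field, $p$ the pressure and $\mu$ the dynamic viscosity. For polymeric fluids no such simple local relation is adequate, which is what motivates coupling the macroscopic solver to a microscale model of the polymer configuration.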
History
Horstemeyer 2009, 2012 presented a historical review of the different disciplines (mathematics, physics, and materials science) for solid materials related to multiscale materials modeling.
The aforementioned DOE multiscale modeling efforts were hierarchical in nature. The first concurrent multiscale model occurred when Michael Ortiz (Caltech) took the molecular dynamics code, Dynamo, (developed by Mike Baskes at Sandia National Labs) and with his students embedded it into a finite element code for the first time. Martin Karplus, Michael Levitt, Arieh Warshel 2013 were awarded a Nobel Prize in Chemistry for the development of a multiscale model method using both classical and quantum mechanical theory which were used to model large complex chemical systems and reactions.
Areas of research
In physics and chemistry, multiscale modeling is aimed at the calculation of material properties or system behavior on one level using information or models from different levels. On each level, particular approaches are used for the description of
|
https://en.wikipedia.org/wiki/Model%20horse
|
Model horses are scale replicas of real horses. They originated simultaneously – but independently – in the United States, Canada, and the United Kingdom, followed later by Sweden (UK-influenced), Germany (US-influenced), and Australia. They encompass a wide variety of fanbase activities, from those who simply like to collect, to those who show their models at model horse shows. Unlike model cars or trains, model horse collectibles do not need to be assembled from kits, although they can be altered to the collector's liking.
Brief history
In the late 1960s, UK collectors came together through PONY magazine, and several clubs and newsletters were born, the most important being The Postal Pony Club. From this was created the Lindfield Model Showing Association and later Model Horse News (MHN), a bi-monthly magazine which ran until 1989. In 1979 The International Arabist magazine appeared, which though restricted to Arab horses and their descendants, was the first magazine to actively seek to unite hobbyists from around the world. While MHN remained largely in the original tradition of Julips, etc., TIA promoted realism through custom Breyers. TIA changed its name to Model Horse International (MHI) to reflect its move away from purely Arabian interest, but the magazine folded around 1989. MHN also folded around this time, but was replaced by Model Horses Unlimited (MHU) catering for both realistic and more traditional models, and which is still in existence today.
During the 1960s, hobbyists such as Ellen Hitchins, Simone Smiljanic, and Marney Walerius began to organize photo shows. One of the earliest known clubs was the IMHA, or International Model Horse Association, which was run by Hitchins and Smiljanic. Many young hobbyists got their start after reading a short article about model horse collecting, which was published in the March 1969 issue of Western Horseman magazine. In the 1970s, US model horse collectors decided that their horses should be doing somethin
|
https://en.wikipedia.org/wiki/Uniform%20binary%20search
|
Uniform binary search is an optimization of the classic binary search algorithm invented by Donald Knuth and given in Knuth's The Art of Computer Programming. It uses a lookup table to update a single array index, rather than taking the midpoint of an upper and a lower bound on each iteration; therefore, it is optimized for architectures (such as Knuth's MIX) on which
a table lookup is generally faster than an addition and a shift, and
many searches will be performed on the same array, or on several arrays of the same length.
C implementation
The uniform binary search algorithm looks like this, when implemented in C.
#include <stdio.h>

#define LOG_N 5 /* table size: needs at least floor(log2(N)) + 2 entries (5 for N = 10) */

static int delta[LOG_N];

/* Precompute the table of probe-index adjustments for an array of length N. */
void make_delta(int N)
{
    int power = 1;
    int i = 0;
    do {
        int half = power;
        power <<= 1;
        delta[i] = (N + half) / power;
    } while (delta[i++] != 0);
}

/* Search a for key, using only table lookups to move the probe index. */
int unisearch(int *a, int key)
{
    int i = delta[0] - 1; /* midpoint of array */
    int d = 0;
    while (1) {
        if (i < 0) {
            return -1; /* probe stepped below the array: key is smaller than every element */
        } else if (key == a[i]) {
            return i;
        } else if (delta[d] == 0) {
            return -1;
        } else {
            if (key < a[i]) {
                i -= delta[++d];
            } else {
                i += delta[++d];
            }
        }
    }
}

/* Example of use: */
#define N 10

int main(void)
{
    int a[N] = {1, 3, 5, 6, 7, 9, 14, 15, 17, 19};
    make_delta(N);
    for (int i = 0; i < 20; ++i)
        printf("%d is at index %d\n", i, unisearch(a, i));
    return 0;
}
|
https://en.wikipedia.org/wiki/Quotient%20category
|
In mathematics, a quotient category is a category obtained from another category by identifying sets of morphisms. Formally, it is a quotient object in the category of (locally small) categories, analogous to a quotient group or quotient space, but in the categorical setting.
Definition
Let C be a category. A congruence relation R on C is given by: for each pair of objects X, Y in C, an equivalence relation RX,Y on Hom(X,Y), such that the equivalence relations respect composition of morphisms. That is, if
$f_1, f_2 : X \to Y$ are related in Hom(X, Y) and $g_1, g_2 : Y \to Z$
are related in Hom(Y, Z), then $g_1 f_1$ and $g_2 f_2$ are related in Hom(X, Z).
Given a congruence relation R on C we can define the quotient category C/R as the category whose objects are those of C and whose morphisms are equivalence classes of morphisms in C. That is,
$\mathrm{Hom}_{C/R}(X, Y) = \mathrm{Hom}_{C}(X, Y) / R_{X,Y}.$
Composition of morphisms in C/R is well-defined since R is a congruence relation.
Properties
There is a natural quotient functor from C to C/R which sends each morphism to its equivalence class. This functor is bijective on objects and surjective on Hom-sets (i.e. it is a full functor).
Every functor F : C → D determines a congruence on C by saying f ~ g iff F(f) = F(g). The functor F then factors through the quotient functor C → C/~ in a unique manner. This may be regarded as the "first isomorphism theorem" for categories.
Examples
Monoids and groups may be regarded as categories with one object. In this case the quotient category coincides with the notion of a quotient monoid or a quotient group.
The homotopy category of topological spaces hTop is a quotient category of Top, the category of topological spaces. The equivalence classes of morphisms are homotopy classes of continuous maps.
Let k be a field and consider the abelian category Mod(k) of all vector spaces over k with k-linear maps as morphisms. To "kill" all finite-dimensional spaces, we can call two linear maps f,g : X → Y congruent iff their difference has finite-dimensional image. In the resulting quo
|
https://en.wikipedia.org/wiki/Nitrogen%20rule
|
The nitrogen rule states that organic compounds containing exclusively hydrogen, carbon, nitrogen, oxygen, silicon, phosphorus, sulfur, and the halogens either have (1) an odd nominal mass that indicates an odd number of nitrogen atoms are present or (2) an even nominal mass that indicates an even number of nitrogen atoms in the molecular formula of the neutral compound. The nitrogen rule is not a rule as much as a general principle which may prove useful when attempting to solve organic mass spectrometry structures.
Formulation of the rule
This rule is derived from the fact that, perhaps coincidentally, for the most common chemical elements in neutral organic compounds (hydrogen, carbon, nitrogen, oxygen, silicon, phosphorus, sulfur, and the halogens), elements with even numbered nominal masses form even numbers of covalent bonds, while elements with odd numbered nominal masses form odd numbers of covalent bonds, with the exception of nitrogen, which has a nominal (or integer) mass of 14, but has a valency of 3.
The nitrogen rule is only true for neutral structures in which all of the atoms in the molecule have a number of covalent bonds equal to their standard valency (counting each sigma bond and pi bond as a separate covalent bond for the purposes of the calculation). Therefore, the rule is typically only applied to the molecular ion signal in the mass spectrum.
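A small sanity check (the molecules below are illustrative examples, not from the article) makes the parity statement concrete: the nominal mass of a neutral formula has the same parity as its nitrogen count.

/* Sketch: nominal (integer) masses of a few neutral molecules, showing that
 * the parity of the mass matches the parity of the nitrogen count. */
#include <stdio.h>

struct formula {
    const char *name;
    int c, h, n, o;   /* atom counts */
};

int main(void)
{
    /* Nominal masses: C = 12, H = 1, N = 14, O = 16. */
    struct formula mols[] = {
        { "pyridine C5H5N",     5,  5, 1, 0 },
        { "glycine C2H5NO2",    2,  5, 1, 2 },
        { "caffeine C8H10N4O2", 8, 10, 4, 2 },
        { "benzene C6H6",       6,  6, 0, 0 },
    };

    for (int i = 0; i < 4; ++i) {
        int mass = 12 * mols[i].c + mols[i].h + 14 * mols[i].n + 16 * mols[i].o;
        printf("%-22s nominal mass %3d (%s), %d nitrogen(s) (%s)\n",
               mols[i].name, mass, mass % 2 ? "odd" : "even",
               mols[i].n, mols[i].n % 2 ? "odd" : "even");
    }
    return 0;
}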
Mass spectrometry generally operates by measuring the mass of ions. If the measured ion is generated by creating or breaking a single covalent bond (such as protonating an amine to form an ammonium center or removing a hydride from a molecule to leave a positively charged ion) then the nitrogen rule becomes reversed (odd numbered masses indicate even numbers of nitrogens and vice versa). However, for each consecutive covalent bond that is broken or formed, the nitrogen rule again reverses.
Therefore, a more rigorous definition of the nitrogen rule for organic compounds containing exclusively hyd
|
https://en.wikipedia.org/wiki/H%C3%B6lder%20condition
|
In mathematics, a real or complex-valued function f on d-dimensional Euclidean space satisfies a Hölder condition, or is Hölder continuous, when there are real constants C ≥ 0, α > 0, such that
$|f(x) - f(y)| \le C\, \| x - y \|^{\alpha}$
for all x and y in the domain of f. More generally, the condition can be formulated for functions between any two metric spaces. The number α is called the exponent of the Hölder condition. A function on an interval satisfying the condition with α > 1 is constant. If α = 1, then the function satisfies a Lipschitz condition. For any α > 0, the condition implies the function is uniformly continuous. The condition is named after Otto Hölder.
We have the following chain of strict inclusions for functions defined on a closed and bounded interval [a, b] of the real line with a < b :
Continuously differentiable ⊂ Lipschitz continuous ⊂ α-Hölder continuous ⊂ uniformly continuous ⊂ continuous,
where 0 < α ≤ 1.
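A standard example (not from the article) showing that these inclusions are strict: $f(x) = \sqrt{x}$ on [0, 1] is Hölder continuous with exponent 1/2 but not Lipschitz continuous, since for $x \ne y$
$|\sqrt{x} - \sqrt{y}| = \frac{|x - y|}{\sqrt{x} + \sqrt{y}} \le \frac{|x - y|}{\sqrt{|x - y|}} = |x - y|^{1/2}$
(using $\sqrt{x} + \sqrt{y} \ge \sqrt{|x - y|}$), so the condition holds with C = 1 and α = 1/2, while $|\sqrt{x} - \sqrt{0}|/|x - 0| = 1/\sqrt{x}$ is unbounded as x → 0⁺.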
Hölder spaces
Hölder spaces consisting of functions satisfying a Hölder condition are basic in areas of functional analysis relevant to solving partial differential equations, and in dynamical systems. The Hölder space Ck,α(Ω), where Ω is an open subset of some Euclidean space and k ≥ 0 an integer, consists of those functions on Ω having continuous derivatives up through order k and such that the kth partial derivatives are Hölder continuous with exponent α, where 0 < α ≤ 1. This is a locally convex topological vector space. If the Hölder coefficient
$| f |_{C^{0,\alpha}} = \sup_{x \ne y \in \Omega} \frac{| f(x) - f(y) |}{\| x - y \|^{\alpha}}$
is finite, then the function f is said to be (uniformly) Hölder continuous with exponent α in Ω. In this case, the Hölder coefficient serves as a seminorm. If the Hölder coefficient is merely bounded on compact subsets of Ω, then the function f is said to be locally Hölder continuous with exponent α in Ω.
If the function f and its derivatives up to order k are bounded on the closure of Ω, then the Hölder space $C^{k,\alpha}(\overline{\Omega})$ can be assigned the norm
$\| f \|_{C^{k,\alpha}} = \| f \|_{C^{k}} + \max_{|\beta| = k} \left| D^{\beta} f \right|_{C^{0,\alpha}},$
where β ranges over multi-indices and
$\| f \|_{C^{k}} = \max_{|\beta| \le k} \sup_{x \in \Omega} \left| D^{\beta} f(x) \right|.$
These seminorms and norms are often denoted
|
https://en.wikipedia.org/wiki/Samuel%20Goldflam
|
Samuel Wulfowicz Goldflam (15 February 1852 – 26 August 1932) was a Polish-Jewish neurologist best known for his brilliant 1893 analysis of myasthenia gravis (Erb-Goldflam syndrome).
Biography
Goldflam received his education in his native city of Warsaw. He graduated from secondary school in 1869, then studied medicine at Warsaw University. He qualified as a physician in 1875, then worked in internal medicine at Holy Ghost Hospital under Professor Wilhelm Dusan Lambl (1824-95), known for the giardia parasite, Lamblia intestinalis. Lambl was not much of a mentor, so Goldflam worked largely by himself. His position at the internal-medicine clinic supplied him with ample research material. At that time, both internal-medicine and neurology patients were seen at Lambl’s clinic.
In 1882 Goldflam studied with the famous neurologists Karl Friedrich Otto Westphal (1833-90) and Jean-Martin Charcot (1825-93), then returned to Warsaw to teach neurology in the manner of the great masters. After a new period in Lambl’s clinic at Holy Ghost Hospital, he established his own clinic at Graniczna Street, no. 10, in Warsaw, for underprivileged patients, which he ran for nearly 40 years.
During World War I, Goldflam worked as a volunteer in the Jewish Hospital with his great friend, the neurologist, Edward Flatau (1869-1932). During the war he was one of the first to notice a correspondence between malnourishment and diseases, and he documented a bone and joint disease under the name osteoarthropathia dysalimentaria (1918). His main interest, however, was in the significance of reflexes, the neurological aspects of syphilis, and eye reflexes.
Goldflam was a sharp clinician with the ability to recognize small clues of illness which often escaped the attention of his colleagues. He not only worked with patients but was a pathologist. His profound observations and publications were recognized in Poland and abroad.
Goldflam established the Jewish Society for Mental Disorders and e
|
https://en.wikipedia.org/wiki/Melanocortin
|
The melanocortins are a family of neuropeptide hormones which are the ligands of the melanocortin receptors. The melanocortin system consists of melanocortin receptors, ligands, and accessory proteins. The genes of the melanocortin system are found in chordates. Melanocortins were originally so named because their earliest known function was in melanogenesis. It is now known that the melanocortin system regulates diverse functions throughout the body, including inflammatory response, fibrosis, melanogenesis, steroidogenesis, energy homeostasis, sexual function, and exocrine gland function.
There are four endogenous melanocortin agonists, which are derived from post-translational processing of the precursor molecule proopiomelanocortin (POMC) (Figure 1). They are adrenocorticotropic hormone (ACTH), α-melanocyte stimulating hormone (α-MSH), β-MSH, and γ-MSH. In addition to agonists which activate melanocortin receptors, there are two antagonists which inhibit receptor activity, Agouti and Agouti-related protein (AgRP). Lastly, the ligand β-defensin 3 acts as a neutral melanocortin receptor antagonist.
Receptors
The 5 melanocortin receptors are seven-transmembrane G-protein coupled receptors with differing ligand affinities, tissue and cell type expression, and downstream functions (Figure 2). MC1R is expressed on melanocytes, macrophages, epithelial cells, endothelial cells, fibroblasts, monocytes and numerous other immune cells, but is also present in brain, testis, and intestine. Its main functions are in melanogenesis and anti-inflammatory signaling. MC2R is expressed in the adrenal cortex and adipocytes and promotes steroidogenesis. MC3R and MC4R are primarily expressed in the brain and regulate energy homeostasis. MC3R is additionally involved in immunomodulation while MC4R has a role in sexual function. MC5R is highly expressed in skin and adrenal glands and has a role in exocrine function. MC2R is activated exclusively by ACTH, whereas the other 4 receptors
|
https://en.wikipedia.org/wiki/Eternal%20inflation
|
Eternal inflation is a hypothetical inflationary universe model, which is itself an outgrowth or extension of the Big Bang theory.
According to eternal inflation, the inflationary phase of the universe's expansion lasts forever throughout most of the universe. Because the regions expand exponentially rapidly, most of the volume of the universe at any given time is inflating. Eternal inflation, therefore, produces a hypothetically infinite multiverse, in which only an insignificant fractal volume ends inflation.
Paul Steinhardt, one of the original researchers of the inflationary model, introduced the first example of eternal inflation in 1983, and Alexander Vilenkin showed that it is generic.
Alan Guth's 2007 paper, "Eternal inflation and its implications", states that under reasonable assumptions "Although inflation is generically eternal into the future, it is not eternal into the past." Guth detailed what was known about the subject at the time, and demonstrated that eternal inflation was still considered the likely outcome of inflation, more than 20 years after eternal inflation was first introduced by Steinhardt.
Overview
Development of the theory
Inflation, or the inflationary universe theory, was originally developed as a way to overcome the few remaining problems with what was otherwise considered a successful theory of cosmology, the Big Bang model.
In 1979, Alan Guth introduced the inflationary model of the universe to explain why the universe is flat and homogeneous (which refers to the smooth distribution of matter and radiation on a large scale). The basic idea was that the universe underwent a period of rapidly accelerating expansion a few instants after the Big Bang. He offered a mechanism for causing the inflation to begin: false vacuum energy. Guth coined the term "inflation," and was the first to discuss the theory with other scientists worldwide.
Guth's original formulation was problematic, as there was no consistent way to bring an end
|
https://en.wikipedia.org/wiki/Transactional%20memory
|
In computer science and engineering, transactional memory attempts to simplify concurrent programming by allowing a group of load and store instructions to execute in an atomic way. It is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing.
Transactional memory systems provide high-level abstraction as an alternative to low-level thread synchronization. This abstraction allows for coordination between concurrent reads and writes of shared data in parallel systems.
Motivation
In concurrent programming, synchronization is required when parallel threads attempt to access a shared resource. Low-level thread synchronization constructs such as locks are pessimistic and prohibit threads that are outside a critical section from running the code protected by the critical section. The process of applying and releasing locks often functions as an additional overhead in workloads with little conflict among threads. Transactional memory provides optimistic concurrency control by allowing threads to run in parallel with minimal interference. The goal of transactional memory systems is to transparently support regions of code marked as transactions by enforcing atomicity, consistency and isolation.
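As a rough illustration of this optimistic approach, the following Python sketch runs a transaction speculatively, validates it against a global version counter, and retries on conflict. It is a deliberately simplified model invented for illustration; real transactional memory systems track per-location read/write sets and are implemented in hardware or by the compiler/runtime, not with a single global counter.

import threading

class VersionedStore:
    """Shared data plus a global version counter used for (coarse) conflict detection."""
    def __init__(self):
        self.data = {}
        self.version = 0
        self.commit_lock = threading.Lock()   # held only briefly while publishing a commit

def atomic(store, transaction):
    """Run `transaction` speculatively and commit it only if no other commit intervened."""
    while True:
        start_version = store.version
        writes = {}                                   # speculative, private write set

        def read(key):
            return writes.get(key, store.data.get(key, 0))

        def write(key, value):
            writes[key] = value

        transaction(read, write)                      # execute against a snapshot
        with store.commit_lock:
            if store.version == start_version:        # validate: no commit intervened
                store.data.update(writes)             # publish the whole write set at once
                store.version += 1
                return
        # conflict detected: discard the speculative writes and rerun the transaction

# Usage: move 10 units between two "accounts" without per-account locking.
store = VersionedStore()
store.data.update({"a": 100, "b": 0})

def transfer(read, write):
    write("a", read("a") - 10)
    write("b", read("b") + 10)

atomic(store, transfer)
print(store.data)   # {'a': 90, 'b': 10}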
A transaction is a collection of operations that can execute and commit changes as long as a conflict is not present. When a conflict is detected, a transaction will revert to its initial state (prior to any changes) and will rerun until all conflicts are removed. Before a successful commit, the outcome of any operation is purely speculative inside a transaction. In contrast to lock-based synchronization where operations are serialized to prevent data corruption, transactions allow for additional parallelism as long as few operations attempt to modify a shared resource. Since the programmer is not responsible for explicitly identifying locks or the order in which they are acquired, programs that utilize t
|
https://en.wikipedia.org/wiki/Whip%20%28tree%29
|
A whip is a slender, unbranched shoot or plant. This term is used typically in forestry to refer to unbranched young tree seedlings of approximately 0.5-1.0 m (1 ft 7 in-3 ft 3 in) in height and 2–3 years old, that have been grown for planting out.
|
https://en.wikipedia.org/wiki/Gene%20expression%20profiling
|
In the field of molecular biology, gene expression profiling is the measurement of the activity (the expression) of thousands of genes at once, to create a global picture of cellular function. These profiles can, for example, distinguish between cells that are actively dividing, or show how the cells react to a particular treatment. Many experiments of this sort measure an entire genome simultaneously, that is, every gene present in a particular cell.
Several transcriptomics technologies can be used to generate the data necessary for analysis. DNA microarrays measure the relative activity of previously identified target genes. Sequence-based techniques, like RNA-Seq, provide information on the sequences of genes in addition to their expression level.
Background
Expression profiling is a logical next step after sequencing a genome: the sequence tells us what the cell could possibly do, while the expression profile tells us what it is actually doing at a point in time. Genes contain the instructions for making messenger RNA (mRNA), but at any moment each cell makes mRNA from only a fraction of the genes it carries. If a gene is used to produce mRNA, it is considered "on", otherwise "off". Many factors determine whether a gene is on or off, such as the time of day, whether or not the cell is actively dividing, its local environment, and chemical signals from other cells. For instance, skin cells, liver cells and nerve cells turn on (express) somewhat different genes and that is in large part what makes them different. Therefore, an expression profile allows one to deduce a cell's type, state, environment, and so forth.
Expression profiling experiments often involve measuring the relative amount of mRNA expressed in two or more experimental conditions. This is because altered levels of a specific sequence of mRNA suggest a changed need for the protein coded by the mRNA, perhaps indicating a homeostatic response or a pathological condition. For example, higher leve
|
https://en.wikipedia.org/wiki/May%E2%80%93Thurner%20syndrome
|
May–Thurner syndrome (MTS), also known as the iliac vein compression syndrome, is a condition in which compression of the common venous outflow tract of the left lower extremity may cause discomfort, swelling, pain or iliofemoral deep vein thrombosis.
Specifically, the problem is due to left common iliac vein compression by the overlying right common iliac artery. This leads to stasis of blood, which predisposes to the formation of blood clots. Uncommon variations of MTS have been described, such as the right common iliac vein getting compressed by the right common iliac artery.
In the 21st century, the May–Thurner syndrome definition has been expanded to a broader disease profile known as nonthrombotic iliac vein lesions (NIVL) which can involve both the right and left iliac veins as well as multiple other named venous segments. This syndrome frequently manifests as pain when the limb is dependent (hanging down the edge of a bed/chair) and/or significant swelling of the whole limb.
Signs and symptoms
Because of its similarities to deep vein thrombosis (DVT), May–Thurner syndrome is rarely diagnosed amongst the general population. In this condition, the right iliac artery sequesters and compresses the left common iliac vein against the lumbar section of the spine, resulting in swelling of the legs and ankles, pain, tingling, and/or numbness in the legs and feet. The pain is often described as dull, and may progress up and down the leg depending on the patient. Lower extremities may feel warm to the touch, and swelling may persist or dissipate throughout the day.
Mechanism
In contrast to the right common iliac vein, which ascends almost vertically to the inferior vena cava, the left common iliac vein traverses diagonally from left to right to enter the inferior vena cava. Along this course, it goes under the right common iliac artery, which may compress it against the lumbar spine and limit the flow of blood out of the left leg. There are case reports of the
|
https://en.wikipedia.org/wiki/Immunogenicity
|
Immunogenicity is the ability of a foreign substance, such as an antigen, to provoke an immune response in the body of a human or other animal. It may be wanted or unwanted:
Wanted immunogenicity typically relates to vaccines, where the injection of an antigen (the vaccine) provokes an immune response against the pathogen, protecting the organism from future exposure. Immunogenicity is a central aspect of vaccine development.
Unwanted immunogenicity is an immune response by an organism against a therapeutic antigen. This reaction leads to production of anti-drug-antibodies (ADAs), inactivating the therapeutic effects of the treatment and potentially inducing adverse effects.
A challenge in biotherapy is predicting the immunogenic potential of novel protein therapeutics. For example, immunogenicity data from high-income countries are not always transferable to low-income and middle-income countries. Another challenge is considering how the immunogenicity of vaccines changes with age. Therefore, as stated by the World Health Organization, immunogenicity should be investigated in a target population since animal testing and in vitro models cannot precisely predict immune response in humans.
Antigenicity is the capacity of a chemical structure (either an antigen or hapten) to bind specifically with a group of certain products that have adaptive immunity: T cell receptors or antibodies (a.k.a. B cell receptors). Antigenicity was more commonly used in the past to refer to what is now known as immunogenicity, and the two are still often used interchangeably. However, strictly speaking, immunogenicity refers to the ability of an antigen to induce an adaptive immune response. Thus an antigen might bind specifically to a T or B cell receptor, but not induce an adaptive immune response. If the antigen does induce a response, it is an 'immunogenic antigen', which is referred to as an immunogen.
Antigenic immunogenic potency
Many lipids and nucleic acids are relatively s
|
https://en.wikipedia.org/wiki/Renal%20circulation
|
The renal circulation supplies the blood to the kidneys via the renal arteries, left and right, which branch directly from the abdominal aorta. Despite their relatively small size, the kidneys receive approximately 20% of the cardiac output.
Each renal artery branches into segmental arteries, dividing further into interlobar arteries, which penetrate the renal capsule and extend through the renal columns between the renal pyramids. The interlobar arteries then supply blood to the arcuate arteries that run through the boundary of the cortex and the medulla. Each arcuate artery supplies several interlobular arteries that feed into the afferent arterioles that supply the glomeruli.
After filtration occurs, the blood moves through a small network of venules that converge into interlobular veins. As with the arteriole distribution, the veins follow the same pattern: the interlobular veins drain into the arcuate veins and then into the interlobar veins, which join to form the renal vein, by which blood exits the kidney.
Structure
Arterial system
The table below shows the path that blood takes when it travels through the glomerulus, traveling "down" the arteries and "up" the veins. However, this model is greatly simplified for clarity and symmetry. Some of the other paths and complications are described at the bottom of the table. The interlobar artery and vein (not to be confused with interlobular) are between two renal lobes, also known as the renal column (cortex region between two pyramids).
Note 1: The renal artery also provides a branch to the inferior suprarenal artery to supply the adrenal gland.
Note 2: Also called the cortical radiate arteries. The interlobular artery also supplies to the stellate veins.
Note 3: The efferent arterioles do not directly drain into the interlobular vein, but rather they go to the peritubular capillaries first. The efferent arterioles of the juxtamedullary nephron drain into the vasa recta.
Segmental arteries
The
|
https://en.wikipedia.org/wiki/Viral%20eukaryogenesis
|
Viral eukaryogenesis is the hypothesis that the cell nucleus of eukaryotic life forms evolved from a large DNA virus in a form of endosymbiosis within a methanogenic archaeon or a bacterium. The virus later evolved into the eukaryotic nucleus by acquiring genes from the host genome and eventually usurping its role. The hypothesis was first proposed by Philip Bell in 2001 and was further popularized with the discovery of large, complex DNA viruses (such as Mimivirus) that are capable of protein biosynthesis.
Viral eukaryogenesis has been controversial for several reasons. For one, it is sometimes argued that the posited evidence for the viral origins of the nucleus can be conversely used to suggest the nuclear origins of some viruses. Secondly, this hypothesis has further inflamed the longstanding debate over whether viruses are living organisms.
Hypothesis
The viral eukaryogenesis hypothesis posits that eukaryotes are composed of three ancestral elements: a viral component that became the modern nucleus; a prokaryotic cell (an archaeon according to the eocyte hypothesis) which donated the cytoplasm and cell membrane of modern cells; and another prokaryotic cell (here bacterium) that, by endocytosis, became the modern mitochondrion or chloroplast.
In 2006, researchers suggested that the transition from RNA to DNA genomes first occurred in the viral world. A DNA-based virus may have provided storage for an ancient host that had previously used RNA to store its genetic information (such a host is called a ribocell or ribocyte). Viruses may initially have adopted DNA as a way to resist RNA-degrading enzymes in the host cells. Hence, the contribution from such a new component may have been as significant as the contribution from chloroplasts or mitochondria. Following this hypothesis, archaea, bacteria, and eukaryotes each obtained their DNA informational system from a different virus. In the original paper it was also an RNA cell at the origin of eukaryotes, but eventu
|
https://en.wikipedia.org/wiki/Kramers%E2%80%93Wannier%20duality
|
The Kramers–Wannier duality is a symmetry in statistical physics. It relates the free energy of a two-dimensional square-lattice Ising model at a low temperature to that of another Ising model at a high temperature. It was discovered by Hendrik Kramers and Gregory Wannier in 1941. With the aid of this duality Kramers and Wannier found the exact location of the critical point for the Ising model on the square lattice.
Similar dualities establish relations between free energies of other statistical models. For instance, in 3 dimensions the Ising model is dual to an Ising gauge model.
Intuitive idea
The 2-dimensional Ising model exists on a lattice, which is a collection of squares in a chessboard pattern. With the finite lattice, the edges can be connected to form a torus. In theories of this kind, one constructs an involutive transform. For instance, Lars Onsager suggested that the Star-Triangle transformation could be used for the triangular lattice. Now the dual of the discrete torus is itself. Moreover, the dual of a highly disordered system (high temperature) is a well-ordered system (low temperature). This is because the Fourier transform takes a high bandwidth signal (more standard deviation) to a low one (less standard deviation). So one has essentially the same theory with an inverse temperature.
When one raises the temperature in one theory, one lowers the temperature in the other. If there is only one phase transition, it will be at the point at which they cross, at which the temperature is equal. Because the 2D Ising model goes from a disordered state to an ordered state, there is a near one-to-one mapping between the disordered and ordered phases.
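In the usual textbook notation (a standard statement of the duality, not quoted from this article), write K = J/(k_B T) for the reduced coupling of the square-lattice Ising model. The duality pairs K with a dual coupling K* through

\sinh(2K)\,\sinh(2K^{*}) = 1,

so the self-dual point, where Kramers and Wannier located the critical temperature, satisfies \sinh(2K_c) = 1, that is K_c = \tfrac{1}{2}\ln(1+\sqrt{2}) \approx 0.4407.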
The theory has been generalized, and is now blended with many other ideas. For instance, the square lattice is replaced by a circle, random lattice, nonhomogeneous torus, triangular lattice, labyrinth, lattices with twisted boundaries, chiral Potts model, and many others.
One of the consequ
|
https://en.wikipedia.org/wiki/Edinburgh%20Parallel%20Computing%20Centre
|
EPCC, formerly the Edinburgh Parallel Computing Centre, is a supercomputing centre based at the University of Edinburgh. Since its foundation in 1990, its stated mission has been to accelerate the effective exploitation of novel computing throughout industry, academia and commerce.
The University has supported high performance computing (HPC) services since 1982. Through EPCC, it supports the UK's national high-end computing system, ARCHER (Advanced Research Computing High End Resource), and the UK Research Data Facility (UK-RDF).
Overview
EPCC's activities include: consultation and software development for industry and academia; research into high-performance computing; hosting advanced computing facilities and supporting their users; and training and education.
The Centre offers two Masters programmes: MSc in High-Performance Computing and MSc in High-Performance Computing with Data Science.
It is a member of the Globus Alliance and, through its involvement with the OGSA-DAI project, it works with the Open Grid Forum DAIS-WG.
Around half of EPCC's annual turnover comes from collaborative projects with industry and commerce. In addition to privately funded projects with businesses, EPCC receives funding from Scottish Enterprise, the Engineering and Physical Sciences Research Council and the European Commission.
History
EPCC was established in 1990, following on from the earlier Edinburgh Concurrent Supercomputer Project and chaired by Jeffery Collins from 1991. From 2002 to 2016 EPCC was part of the University's School of Physics & Astronomy, becoming an independent Centre of Excellence within the University's College of Science and Engineering in August 2016.
It was extensively involved in all aspects of Grid computing including: developing Grid middleware and architecture tools to facilitate the uptake of e-Science; developing business applications and collaborating in scientific applications and demonstration projects.
The Centre was a founder member o
|
https://en.wikipedia.org/wiki/Canine%20discoid%20lupus%20erythematosus
|
Discoid lupus erythematosus (DLE) is an uncommon autoimmune disease of the basal cell layer of the skin. It occurs in humans and cats, and occurs more frequently in dogs. It was first described in dogs by Griffin and colleagues in 1979. DLE is one form of cutaneous lupus erythematosus (CLE). DLE occurs in dogs in two forms: a classical facial-predominant form or a generalized form with other areas of the body affected. Other non-discoid variants of CLE include vesicular CLE, exfoliative CLE and mucocutaneous CLE. It does not progress to systemic lupus erythematosus (SLE) in dogs. SLE can also have skin symptoms, but it appears that the two are separate diseases. DLE in dogs differs from SLE in humans in that plasma cells predominate histologically instead of T lymphocytes. Because worsening of symptoms occurs with increased ultraviolet light exposure, sun exposure most likely plays a role in DLE, although certain breeds (see below) are predisposed. After pemphigus foliaceus, DLE is the second most common autoimmune skin disease in dogs.
Symptoms
The most common initial symptom is scaling and loss of pigment on the nose. The surface of the nose becomes smooth, gray, and ulcerated, instead of having the normal black cobblestone texture. Over time the lips, the skin around the eyes, the ears, and the genitals may become involved. Lesions may progress to ulceration and lead to tissue destruction. DLE is often worse in summer due to increased sun exposure.
Diagnosis
DLE is easily confused with solar dermatitis, pemphigus, ringworm, and other types of dermatitis. Biopsy is required to make the distinction. Histopathologically, there is inflammation at the dermoepidermal junction and degeneration of the basal cell layer. Unlike in SLE, an anti-nuclear antibody test is usually negative.
Treatment
Avoiding sun exposure and the use of sunscreens (not containing zinc oxide as this is toxic to dogs) is important. Topical therapy includes corticosteroid and tacrolimus use.
|
https://en.wikipedia.org/wiki/Bubble%20nest
|
Bubble nests, also called foam nests, are created by some fish and frog species as floating masses of bubbles blown with an oral secretion, saliva bubbles, and occasionally aquatic plants. Fish that build and guard bubble nests are known as aphrophils. Aphrophils include gouramis (including Betta species) and the synbranchid eel Monopterus alba in Asia, Microctenopoma (Anabantidae), Polycentropsis (Nandidae), and Hepsetus odoe (the only member of Hepsetidae) in Africa, and callichthyines and the electric eel in South America. Most, if not all, fish that construct floating bubble nests live in tropical, oxygen-depleted standing waters. Osphronemidae, containing the Bettas and Gouramies, are the most commonly recognized family of bubble nest makers, though some members of that family mouthbrood instead. The nests are constructed as a place for fertilized eggs to be deposited while incubating and guarded by one or both parents (usually solely the male) until the fry hatch.
Bubble nests can also be found in the habitats of domesticated male Betta fish. Nests found in these types of habitats indicate a healthy and happy fish.
Construction and Nest Characteristics
Bubble nests are built even when the male is not in the presence of a female or fry (though often a female swimming past will trigger frantic construction of the nest). Males will build bubble nests of various sizes and thicknesses, depending on the male's territory and personality. Some males build constantly, some occasionally, some when introduced to a female, and some do not even begin until after spawning. Some nests will be large, some small, some thick. Nest size does not directly correlate with the number of eggs.
Bigger males build larger bubble nests. Large bubble nests can hold more eggs and larval fish, and thus can only be maintained by larger males. Larger males are also more successful in protecting their eggs and juvenile fish from predators.
Most nests are found in shallow bodies and margina
|
https://en.wikipedia.org/wiki/MSin3%20interaction%20domain
|
The mSin3 interaction domain (SID) is an interaction domain which is present on several transcriptional repressor proteins including TGFβ (transforming growth factor β) and Mad. It interacts with the paired amphipathic alpha-helix 2 (PAH2) domain of mSin3, a transcriptional repressor domain that is attached to transcription repressor proteins such as the mSin3A corepressor.
Action of histone deacetylase 1 and 2 (HDAC1/2) is induced by the interaction of mSin3A with a multi-protein complex containing HDAC1/2. Transcription is also repressed by histone deacetylase-independent means.
External links
A 13-Amino Acid Amphipathic α-Helix Is Required for the Functional Interaction between the Transcriptional Repressor Mad1 and mSin3A
Protein domains
|
https://en.wikipedia.org/wiki/Mercury%20telluride
|
Mercury telluride (HgTe) is a binary chemical compound of mercury and tellurium. It is a semi-metal related to the II-VI group of semiconductor materials. Alternative names are mercuric telluride and mercury(II) telluride.
HgTe occurs in nature as the mineral form coloradoite.
Physical properties
All properties are at standard temperature and pressure unless stated otherwise. The lattice parameter is about 0.646 nm in the cubic crystalline form. The bulk modulus is about 42.1 GPa. The thermal expansion coefficient is about 5.2×10⁻⁶/K. The static dielectric constant is 20.8 and the dynamic dielectric constant is 15.1. Thermal conductivity is low, at about 2.7 W/(m·K). HgTe bonds are weak, leading to low hardness values; the hardness is about 2.7×10⁷ kg/m².
Doping
N-type doping can be achieved with elements such as boron, aluminium, gallium, or indium. Iodine and iron will also dope n-type. HgTe is naturally p-type due to mercury vacancies. P-type doping is also achieved by introducing zinc, copper, silver, or gold.
Topological insulation
Mercury telluride was the first topological insulator discovered, in 2007. Topological insulators cannot support an electric current in the bulk, but electronic states confined to the surface can serve as charge carriers.
Chemistry
HgTe bonds are weak. Their enthalpy of formation, around −32 kJ/mol, is less than a third of the value for the related compound cadmium telluride. HgTe is easily etched by acids, such as hydrobromic acid.
Growth
Bulk growth is from a mercury and tellurium melt in the presence of a high mercury vapour pressure. HgTe can also be grown epitaxially, for example, by sputtering or by metalorganic vapour phase epitaxy.
Nanoparticles of mercury telluride can be obtained via cation exchange from cadmium telluride nanoplatelets.
See also
Cadmium telluride
Mercury selenide
Mercury cadmium telluride
|
https://en.wikipedia.org/wiki/F-score
|
In statistical analysis of binary classification, the F-score or F-measure is a measure of a test's accuracy. It is calculated from the precision and recall of the test, where the precision is the number of true positive results divided by the number of all positive results, including those not identified correctly, and the recall is the number of true positive results divided by the number of all samples that should have been identified as positive. Precision is also known as positive predictive value, and recall is also known as sensitivity in diagnostic binary classification.
The F1 score is the harmonic mean of the precision and recall. It thus symmetrically represents both precision and recall in one metric. The more generic Fβ score applies additional weights, valuing one of precision or recall more than the other.
The highest possible value of an F-score is 1.0, indicating perfect precision and recall, and the lowest possible value is 0, if either precision or recall are zero.
Etymology
The F-measure is believed to be named after a different F function in Van Rijsbergen's book, when it was introduced at the Fourth Message Understanding Conference (MUC-4, 1992).
Definition
The traditional F-measure or balanced F-score (F1 score) is the harmonic mean of precision and recall:
F1 = 2 · precision · recall / (precision + recall).
Fβ score
A more general F score, Fβ, that uses a positive real factor β, where β is chosen such that recall is considered β times as important as precision, is:
Fβ = (1 + β²) · precision · recall / (β² · precision + recall).
In terms of Type I and type II errors this becomes:
Fβ = (1 + β²) · TP / ((1 + β²) · TP + β² · FN + FP).
Two commonly used values for β are 2, which weighs recall higher than precision, and 0.5, which weighs recall lower than precision.
The F-measure was derived so that Fβ "measures the effectiveness of retrieval with respect to a user who attaches β times as much importance to recall as precision". It is based on Van Rijsbergen's effectiveness measure
E = 1 − 1 / (α/P + (1 − α)/R).
Their relationship is Fβ = 1 − E, where α = 1/(1 + β²).
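As a concrete illustration of these formulas, here is a minimal Python sketch; the function name and the example counts are made up for illustration.

def f_beta(tp, fp, fn, beta=1.0):
    """F_beta from raw counts of true positives, false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Example with 8 true positives, 2 false positives and 4 false negatives:
print(f_beta(8, 2, 4))            # F1 ≈ 0.727
print(f_beta(8, 2, 4, beta=2))    # F2 weighs recall more heavily
print(f_beta(8, 2, 4, beta=0.5))  # F0.5 weighs precision more heavily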
Diagnostic testing
This is related to the field of binary classification where recall i
|
https://en.wikipedia.org/wiki/Pseudomonas%20virus%20phi6
|
Φ6 (Phi 6) is the best-studied bacteriophage of the virus family Cystoviridae. It infects Pseudomonas bacteria (typically plant-pathogenic P. syringae). It has a three-part, segmented, double-stranded RNA genome, totalling ~13.5 kb in length. Φ6 and its relatives have a lipid membrane around their nucleocapsid, a rare trait among bacteriophages. It is a lytic phage, though under certain circumstances has been observed to display a delay in lysis which may be described as a "carrier state".
Proteins
The genome of Φ6 codes for 12 proteins. P1 is a major capsid protein responsible for forming the skeleton of the polymerase complex. In the interior of the shell formed by P1 is the P2 viral replicase and transcriptase protein. The spikes that bind to receptors on the Φ6 virion are formed by the protein P3. P4 is a nucleoside-triphosphatase required for genome packaging and transcription. P5 is a lytic enzyme. The spike protein P3 is anchored to the fusogenic envelope protein P6. P7 is a minor capsid protein, P8 forms the nucleocapsid surface shell, and P9 is a major envelope protein. P12 is a non-structural morphogenic protein shown to take part in envelope assembly. P10 and P13 are proteins associated with the viral envelope, and P14 is a non-structural protein.
Life cycle
Φ6 typically attaches to the Type IV pilus of P. syringae with its attachment protein, P3. It is thought that the cell then retracts its pilus, pulling the phage toward the bacterium. Fusion of the viral envelope with the bacterial outer membrane is facilitated by the phage protein, P6. The muralytic (peptidoglycan-digesting) enzyme, P5, then digests a portion of the cell wall, and the nucleocapsid enters the cell coated with the bacterial outer membrane.
A copy of the sense strand of the large genome segment (6374 bases) is then synthesized (transcription) on the vertices of the capsid, with the RNA-dependent RNA polymerase, P2,
|
https://en.wikipedia.org/wiki/Reconstituted%20meat
|
A reconstituted meat, meat slurry, or emulsified meat is a liquefied meat product that contains less fat, pigment, and myoglobin than unprocessed dark meat. Meat slurry is more malleable than dark meat and eases the process of meat distribution, as pipelines may be used.
Meat slurry is not designed to sell for general consumption; rather, it is used as a meat supplement in food products for humans, such as chicken nuggets, and food for domestic animals. Poultry is a common meat slurry. Beef and pork are also used.
Properties and production
The characteristics of dark meat from poultry, such as its color, low plasticity, and high fat content, are caused by myoglobin, a pigmented chemical compound found in muscle tissue that undergoes frequent use. Because domestic poultry rarely fly, the flight muscles in the breast contain little myoglobin and appear white. Dark meat, which is high in myoglobin, is less useful in industry, especially fast food, because it is difficult to mold into shapes. Processing dark meat into a slurry makes it more like white meat and easier to prepare.
The meat is first finely ground and mixed with water. The mixture is then used in a centrifuge or with an emulsifier to separate the fats and myoglobin from the muscle. The product is then allowed to settle into three layers: meat, excess water, and fat. The remaining liquefied meat is then flash-frozen and packaged.
See also
Meat emulsion
Mechanically separated meat
Offal
Pink slime
Surimi
|
https://en.wikipedia.org/wiki/SageMath
|
SageMath (previously Sage or SAGE, "System for Algebra and Geometry Experimentation") is a computer algebra system (CAS) with features covering many aspects of mathematics, including algebra, combinatorics, graph theory, numerical analysis, number theory, calculus and statistics.
The first version of SageMath was released on 24 February 2005 as free and open-source software under the terms of the GNU General Public License version 2, with the initial goals of creating an "open source alternative to Magma, Maple, Mathematica, and MATLAB". The originator and leader of the SageMath project, William Stein, was a mathematician at the University of Washington.
SageMath uses a syntax resembling Python's, supporting procedural, functional and object-oriented constructs.
Development
Stein realized when designing Sage that there were many open-source mathematics software packages already written in different languages, namely C, C++, Common Lisp, Fortran and Python.
Rather than reinventing the wheel, Sage (which is written mostly in Python and Cython) integrates many specialized CAS software packages into a common interface, for which a user needs to know only Python. However, Sage contains hundreds of thousands of unique lines of code adding new functions and creating the interfaces among its components.
Both students and professionals contribute to SageMath's development, which is supported by volunteer work and by grants. However, it was not until 2016 that the first full-time Sage developer was hired (funded by an EU grant). The same year, Stein described his disappointment with the lack of academic funding and credentials for software development, citing it as the reason for his decision to leave his tenured academic position to work full-time on the project in a newly founded company, SageMath, Inc.
Achievements
2007: first prize in the scientific software division of Les Trophées du Libre, an international competition for free software.
2012: on
|
https://en.wikipedia.org/wiki/Proteomyxa
|
Proteomyxa is a name given by E. Ray Lankester to a group of Sarcodina. This is an obsolete group.
Many of the species are endoparasites in living cells, mostly of algae or fungi, but not exclusively. At least two species of Pseudospora have been taken for reproductive stages in the life history of their hosts—whence indeed the generic name. Plasmodiophora brassicae gives rise to the disease known as Hanburies or fingers and toes in Cruciferae; Lymphosporidium causes a virulent epidemic among the American brook trout (Salvelinus fontinalis). Archerina boltoni is remarkable for containing a pair of chlorophyll corpuscles in each cell; no nucleus has been made out, but the chlorophyll bodies divide previous to fission. It is a fresh-water form. The cells of this species form loose aggregates or filoplasmodia, like those of Mikrogromia or Leydenia.
|
https://en.wikipedia.org/wiki/Irvingia%20gabonensis
|
Irvingia gabonensis is a species of African trees in the genus Irvingia, sometimes known by the common names wild mango, African mango, or bush mango. They bear edible mango-like fruits, and are especially valued for their fat- and protein-rich nuts.
Distribution and habitat
Irvingia gabonensis is indigenous to the humid forest zone from the northern tip of Angola, including Congo, DR Congo, Nigeria, Ivory Coast and south-western Uganda. Since 2009, the Gabonese government has prohibited logging of the andok tree until 2034.
Biophysical limits
The tree is present in the tropical wet and dry climate zone. African bush mango grows naturally in canopied jungle, gallery forests and semi-deciduous forests. It grows at altitudes from with annual rainfalls from . Supported temperature ranges from . Soils more than deep are needed, with a moderate fertility and good drainage. pH can range from 4.5 to 7.5.
Description
Irvingia gabonensis grows straight, up to a height of and in diameter. It has buttresses to a height of 3m (10 ft). The outer bark is smooth to scaly with grey to yellow-grey color. The crown is evergreen, spherical and dense. Leaves are elliptic, one margin is often a little rounder than the other, acuminate, dark green and glossy on the upper surface. Flowers are yellow to greenish-white in small panicles. The flowers are bisexual.
The fruit is nearly spherical, green when ripe with a bright orange pulp. The stone is woody and contains one seed. Seeds germinate epigeally (above ground).
Ecology
Irvingia gabonensis is insect-pollinated by Coleoptera, Diptera, Hymenoptera and Lepidoptera. It flowers from March to June and has two fruiting seasons: from April to July and from September to October.
Seeds are dispersed by vertebrates, including elephants and gorillas. With a reduction in the number of those animals, the spread and regeneration of African bush mango decreases and it becomes more dependent on human planting.
Cultivation
In the past, 90% o
|
https://en.wikipedia.org/wiki/Normalization%20%28image%20processing%29
|
In image processing, normalization is a process that changes the range of pixel intensity values. Applications include photographs with poor contrast due to glare, for example. Normalization is sometimes called contrast stretching or histogram stretching. In more general fields of data processing, such as digital signal processing, it is referred to as dynamic range expansion.
The purpose of dynamic range expansion in the various applications is usually to bring the image, or other type of signal, into a range that is more familiar or normal to the senses, hence the term normalization. Often, the motivation is to achieve consistency in dynamic range for a set of data, signals, or images to avoid mental distraction or fatigue. For example, a newspaper will strive to make all of the images in an issue share a similar range of grayscale.
Normalization transforms an n-dimensional grayscale image I with intensity values in the range (Min, Max) into a new image I_N with intensity values in the range (newMin, newMax).
The linear normalization of a grayscale digital image is performed according to the formula
I_N = (I − Min) · (newMax − newMin) / (Max − Min) + newMin.
For example, if the intensity range of the image is 50 to 180 and the desired range is 0 to 255, the process entails subtracting 50 from each pixel's intensity, making the range 0 to 130. Then each pixel intensity is multiplied by 255/130, making the range 0 to 255.
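A minimal NumPy sketch of this linear stretch, using the same 50 to 180 example range; the function and the small array are illustrative only.

import numpy as np

def normalize(image, new_min=0.0, new_max=255.0):
    # Linear contrast stretch: map [old_min, old_max] onto [new_min, new_max].
    old_min, old_max = image.min(), image.max()
    return (image - old_min) * (new_max - new_min) / (old_max - old_min) + new_min

img = np.array([[50.0, 100.0], [150.0, 180.0]])  # intensity range 50..180
print(normalize(img))                            # values now span 0..255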
Normalization might also be non-linear; this happens when there is not a linear relationship between I and I_N. An example of non-linear normalization is when the normalization follows a sigmoid function; in that case, the normalized image is computed according to the formula
I_N = (newMax − newMin) · 1 / (1 + e^(−(I − β)/α)) + newMin,
where α defines the width of the input intensity range, and β defines the intensity around which the range is centered.
Auto-normalization in image processing software typically normalizes to the full dynamic range of the number system specified in the image file format.
See also
Audio normalization, audio analog
Histogram equalization
|
https://en.wikipedia.org/wiki/Metastatic%20calcification
|
Metastatic calcification is deposition of calcium salts in otherwise normal tissue, because of elevated serum levels of calcium, which can occur because of deranged metabolism as well as increased absorption or decreased excretion of calcium and related minerals, as seen in hyperparathyroidism.
In contrast, dystrophic calcification is caused by abnormalities or degeneration of tissues resulting in mineral deposition, though blood levels of calcium remain normal. These differences in pathology also mean that metastatic calcification is often found in many tissues throughout a person or animal, whereas dystrophic calcification is localized.
Metastatic calcification can occur widely throughout the body but principally affects the interstitial tissues of the vasculature, kidneys, lungs, and gastric mucosa. For the latter three, acid secretions or rapid changes in pH levels contribute to the formation of salts.
Causes
Hypercalcemia, elevated blood calcium, has numerous causes, including
Elevated levels of parathyroid hormone due to hyperparathyroidism, leading to bone resorption and subsequent hypercalcemia by reducing phosphate concentration.
Secretion of parathyroid hormone-related protein by certain tumors.
Resorption of bone due to
Primary bone marrow tumors (e.g. multiple myeloma and leukemia)
Metastasis of other tumors, breast cancer for example, to bone.
Paget disease
Immobilization
Vitamin D related disorders
Vitamin D intoxication
Williams syndrome (increased sensitivity to vitamin D)
Sarcoidosis
Kidney failure
|
https://en.wikipedia.org/wiki/Homogentisic%20acid
|
Homogentisic acid (2,5-dihydroxyphenylacetic acid) is a phenolic acid usually found in Arbutus unedo (strawberry-tree) honey. It is also present in the bacterial plant pathogen Xanthomonas campestris pv. phaseoli as well as in the yeast Yarrowia lipolytica where it is associated with the production of brown pigments. It is oxidatively dimerised to form hipposudoric acid, one of the main constituents of the 'blood sweat' of hippopotamuses.
It is less commonly known as melanic acid, the name chosen by William Prout.
Human pathology
Accumulation of excess homogentisic acid and its oxide, named alkapton, is a result of the failure of the enzyme homogentisic acid 1,2-dioxygenase (typically due to a mutation) in the degradative pathway of tyrosine, consequently associated with alkaptonuria.
Intermediate
It is an intermediate in the catabolism of aromatic amino acids such as phenylalanine and tyrosine.
4-Hydroxyphenylpyruvate (produced by transamination of tyrosine) is acted upon by the enzyme 4-hydroxyphenylpyruvate dioxygenase to yield homogentisate. If active and present, the enzyme homogentisate 1,2-dioxygenase further degrades homogentisic acid to yield 4-maleylacetoacetic acid.
|
https://en.wikipedia.org/wiki/Systematic%20reconnaissance%20flight
|
Systematic Reconnaissance Flight (SRF) is a scientific method in wildlife survey for assessing the distribution and abundance of wild animals. It is widely used in Africa, Australia and North America for assessment of plains and woodland wildlife and other species.
The method involves systematic or random flight lines (transects) over the target area at a constant height above ground, with at least one observer recording wildlife in a calibrated strip on at least one side of the aircraft.
The method has been often criticised for low accuracy and precision, but is considered to be the best option for relatively inexpensive coverage of large game areas.
|
https://en.wikipedia.org/wiki/Frans%C3%A9n%E2%80%93Robinson%20constant
|
The Fransén–Robinson constant, sometimes denoted F, is the mathematical constant that represents the area between the graph of the reciprocal Gamma function, 1/Γ(x), and the positive x axis. That is,
F = ∫₀^∞ 1/Γ(x) dx.
Other expressions
The Fransén–Robinson constant has numerical value approximately 2.8077702420, and continued fraction representation [2; 1, 4, 4, 1, 18, 5, 1, 3, 4, 1, 5, 3, 6, ...]. The constant is somewhat close to Euler's number e ≈ 2.71828. This fact can be explained by approximating the integral by a sum:
F ≈ ∑ 1/Γ(n) over n = 1, 2, 3, ..., which equals ∑ 1/n! over n = 0, 1, 2, ...,
and this sum is the standard series for e. The difference is
F − e = ∫₀^∞ e^(−x) / (π² + (ln x)²) dx
or equivalently
The Fransén–Robinson constant can also be expressed using the Mittag-Leffler function as the limit
It is however unknown whether F can be expressed in closed form in terms of other known constants.
Calculation history
A fair amount of effort has been made to calculate the numerical value of the Fransén–Robinson constant with high accuracy.
The value was computed to 36 decimal places by Herman P. Robinson using 11 point Newton–Cotes quadrature, to 65 digits by A. Fransén using Euler–Maclaurin summation, and to 80 digits by Fransén and S. Wrigge using Taylor series and other methods. William A. Johnson computed 300 digits, and Pascal Sebah was able to compute 1025 digits using Clenshaw–Curtis integration.
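As a rough numerical check of the defining integral, here is a quick quadrature sketch using SciPy's reciprocal gamma function; it is only an illustration, not one of the high-precision methods described above.

import numpy as np
from scipy.integrate import quad
from scipy.special import rgamma   # rgamma(x) = 1 / Gamma(x)

value, error = quad(rgamma, 0, np.inf)
print(value)   # ≈ 2.80777..., slightly larger than e ≈ 2.71828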
|
https://en.wikipedia.org/wiki/Packet%20crafting
|
Packet crafting is a technique that allows network administrators to probe firewall rule-sets and find entry points into a targeted system or network. This is done by manually generating packets to test network devices and behaviour, instead of using existing network traffic. Testing may target the firewall, IDS, TCP/IP stack, router or any other component of the network. Packets are usually created by using a packet generator or packet analyzer which allows for specific options and flags to be set on the created packets. The act of packet crafting can be broken into four stages: Packet Assembly, Packet Editing, Packet Play and Packet Decoding. Tools exist for each of the stages - some tools are focused only on one stage while others such as Ostinato try to encompass all stages.
Packet assembly
Packet Assembly is the creation of the packets to be sent. Some popular programs used for packet assembly are Hping, Nemesis, Ostinato, Cat Karat packet builder, Libcrafter, libtins, PcapPlusPlus, Scapy, Wirefloss and Yersinia. Packets may be of any protocol and are designed to test specific rules or situations. For example, a TCP packet may be created with a set of erroneous flags to ensure that the target machine sends a RESET command or that the firewall blocks any response.
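For illustration, a packet of this kind can be assembled in a few lines with Scapy; the destination address, port, and flag combination below are placeholder values for a test network you control, not values taken from this article.

from scapy.all import IP, TCP, send

# A TCP segment with an unusual SYN+FIN flag combination, of the kind used to
# probe how a firewall or TCP/IP stack reacts to malformed traffic.
pkt = IP(dst="192.0.2.10") / TCP(dport=80, flags="SF")
pkt.show()    # inspect the crafted headers before sending
# send(pkt)   # uncomment to transmit (requires raw-socket/root privileges)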
Packet editing
Packet Editing is the modification of created or captured packets. This involves modifying packets in manners which are difficult or impossible to do in the Packet Assembly stage, such as modifying the payload of a packet. Programs such as Scapy, Ostinato, Netdude allow a user to modify recorded packets' fields, checksums and payloads quite easily. These modified packets can be saved in packet streams which may be stored in pcap files to be replayed later.
Packet play
Packet Play or Packet Replay is the act of sending a pre-generated or captured series of packets. Packets may come from Packet Assembly and Editing or from captured network attacks. This allows for testing of a given us
|
https://en.wikipedia.org/wiki/Coordinative%20definition
|
A coordinative definition is a postulate which assigns a partial meaning to the theoretical terms of a scientific theory by correlating the mathematical objects of the pure or formal/syntactical aspects of a theory with physical objects in the world. The idea was formulated by the logical positivists and arises out of a formalist vision of mathematics as pure symbol manipulation.
Formalism
In order to get a grasp on the motivations which inspired the development of the idea of coordinative definitions, it is important to understand the doctrine of formalism as it is conceived in the philosophy of mathematics. For the formalists, mathematics, and particularly geometry, is divided into two parts: the pure and the applied. The first part consists in an uninterpreted axiomatic system, or syntactic calculus, in which terms such as point, straight line and between (the so-called primitive terms) have their meanings assigned to them implicitly by the axioms in which they appear. On the basis of deductive rules eternally specified in advance, pure geometry provides a set of theorems derived in a purely logical manner from the axioms. This part of mathematics is therefore a priori but devoid of any empirical meaning, not synthetic in the sense of Kant.
It is only by connecting these primitive terms and theorems with physical objects such as rulers or rays of light that, according to the formalist, pure mathematics becomes applied mathematics and assumes an empirical meaning. The method of correlating the abstract mathematical objects of the pure part of theories with physical objects consists in coordinative definitions.
It was characteristic of logical positivism to consider a scientific theory to be nothing more than a set of sentences, subdivided into the class of theoretical sentences, the class of observational sentences, and the class of mixed sentences. The first class contains terms which refer to theoretical entities, that is to entities not directly observabl
|
https://en.wikipedia.org/wiki/Symptomatic%20treatment
|
Symptomatic treatment, supportive care, supportive therapy, or palliative treatment is any medical therapy of a disease that only affects its symptoms, not the underlying cause. It is usually aimed at reducing the signs and symptoms for the comfort and well-being of the patient, but it also may be useful in reducing organic consequences and sequelae of these signs and symptoms of the disease. In many diseases, even in those whose etiologies are known (e.g., most viral diseases, such as influenza and Rift Valley fever), symptomatic treatment is the only treatment available so far. For more detail, see supportive therapy. For conditions like cancer, arthritis, neuropathy, tendinopathy, and injury, it can be useful to distinguish treatments that are supportive/palliative and cannot alter the natural history of the disease from disease-modifying treatments.
Examples
Examples of symptomatic treatments:
Analgesics, to reduce pain
Anti-inflammatory agents, for inflammation caused by arthritis
Antitussives, for cough
Antihistaminics (also known as antihistamines), for allergy
Antipyretics, for fever
Enemas for constipation
Treatments that reduce unwanted side effects from drugs
Uses
When the etiology (the cause, set of causes, or manner of causation of a disease or condition) for the disease is known, then specific treatment may be instituted, but it is generally associated with symptomatic treatment, as well.
Symptomatic treatment is not always recommended, and in fact, it may be dangerous, because it may mask the presence of an underlying etiology which will then be forgotten or treated with great delay. Examples:
Low-grade fever for 15 days or more is sometimes the only symptom of bacteremia by staphylococcus bacteria. Suppressing it by symptomatic treatment will hide the disease from effective diagnosis and treatment with antibiotics. The consequence may be severe (rheumatic fever, nephritis, endocarditis, etc.)
Chronic headache may be caused simply by a cons
|
https://en.wikipedia.org/wiki/KYK-13
|
The KYK-13 Electronic Transfer Device is a common fill device designed by the United States National Security Agency for the transfer and loading of cryptographic keys with their corresponding check word. The KYK-13 is battery powered and uses the DS-102 protocol for key transfer. Its National Stock Number is 5810-01-026-9618.
Even though the KYK-13 was first introduced in 1976 and was supposed to have been made obsolete by the AN/CYZ-10 Data Transfer Device, it is still widely used because of its simplicity and reliability. A simpler device than the CYZ-10, the KIK-30 "Really Simple Key Loader" (RASKL) is now planned to replace the KYK-13s, with up to $200 million budgeted to procure them in quantity.
Components
P1 and J1 Connectors - Electrically the same connection. Used to connect to a fill cable, COMSEC device, KOI-18, KYX-15, another KYK-13, or AN/CYZ-10.
Battery Compartment - Holds battery which powers KYK-13.
Mode Switch - Three-position rotary switch used to select operation modes.
"Z" - Used to zeroize selected keys.
ON - Used to fill and transfer keys.
OFF CHECK - Used to conduct parity checks.
Parity Lamp - Blinks when parity is checked or fill is transferred.
Initiate Push Button - Push this button when loading or zeroizing the KYK-13.
Address Select Switch - Seven-position rotary switch.
"Z" ALL - Zeroizes all six storage registers when Mode Switch is set to "Z."
1 THROUGH 6 - Six storage registers for storing keys in KYK-13.
|
https://en.wikipedia.org/wiki/Password%20notification%20email
|
Password notification email is a common password recovery technique used by websites. If a user forgets their password then a password notification email is sent containing enough information for the user to access their account again. This method of password retrieval relies on the assumption that only the legitimate owner of the account has access to the inbox for that particular email address.
The process is often initiated by the user clicking on a forgotten password link on the website where, after entering their username or email address, the password notification email would be automatically sent to the inbox of the account holder. This email may contain a temporary password or a URL that can be followed to enter a new password for that account. The new password or the URL often contain a randomly generated string of text that can only be obtained by reading that particular email.
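A minimal sketch of how such a random, expiring, single-use token might be generated and redeemed (the function names, in-memory store, and URL are illustrative assumptions, not any particular site's implementation):

import secrets
import time

RESET_TOKENS = {}   # token -> (username, expiry timestamp); a real site would persist this

def create_reset_link(username, ttl_seconds=3600):
    token = secrets.token_urlsafe(32)                      # unguessable random string
    RESET_TOKENS[token] = (username, time.time() + ttl_seconds)
    return "https://example.org/reset?token=" + token

def redeem_reset_token(token):
    entry = RESET_TOKENS.pop(token, None)                  # single use: removed on redemption
    if entry is None or time.time() > entry[1]:            # unknown or expired token
        return None
    return entry[0]                                        # account allowed to set a new password

print(create_reset_link("alice"))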
Another method used is to send all or part of the original password in the email. Sending only a few characters of the password can help the user remember their original password without having to reveal the whole password to them.
Security Concerns
The main issue is that the contents of the password notification email can be easily discovered by anyone with access to the inbox of the account owner. This could be as a result of shoulder surfing or if the inbox itself is not password protected. The contents could then be used to compromise the security of the account. The user would therefore have the responsibility of either securely deleting the email or ensuring that its contents are not revealed to anyone else. A partial solution to this problem is to cause any links contained within the email to expire after a period of time, making the email useless if it is not used quickly after it is sent.
Any method that sends part of the original password means that the password is stored in plain text and leaves the password open to an attack from hackers. This is why it is typi
|
https://en.wikipedia.org/wiki/Tarsus%20%28eyelids%29
|
The tarsi (singular: tarsus) or tarsal plates are two comparatively thick, elongated plates of dense connective tissue, about in length for the upper eyelid and 5 mm for the lower eyelid; one is found in each eyelid, and contributes to its form and support. They are located directly above the lid margins. The tarsus has a lower and upper part making up the palpebrae.
Superior
The superior tarsus (tarsus superior; superior tarsal plate), the larger, is of a semilunar form, about in breadth at the center, and gradually narrowing toward its extremities. It is adjoined by the superior tarsal muscle.
To the anterior surface of this plate the aponeurosis of the levator palpebrae superioris is attached.
Inferior
The inferior tarsus (tarsus inferior; inferior tarsal plate) is smaller, is thin, is elliptical in form, and has a vertical diameter of about . The free or ciliary margins of these plates are thick and straight.
Relations
The attached or orbital margins are connected to the circumference of the orbit by the orbital septum.
The lateral angles are attached to the zygomatic bone by the lateral palpebral raphe.
The medial angles of the two plates end at the lacrimal lake, and are attached to the frontal process of the maxilla by the medial palpebral ligament.
The sulcus subtarsalis is a groove in the inner surface of each eyelid.
Along the inner margin of the tarsus are modified sebaceous glands known as tarsal glands (or meibomian glands), aligned vertically within the tarsi: 30 to 40 glands in the upper lid, and 20 to 30 in the lower lid, which secrete a lipid-rich product which helps keep the lacrimal secretions or tears from evaporating too quickly, thus keeping the eye moist.
Additional images
See also
List of specialized glands within the human integumentary system
|
https://en.wikipedia.org/wiki/Filamentation
|
Filamentation is the anomalous growth of certain bacteria, such as Escherichia coli, in which cells continue to elongate but do not divide (no septa formation). The cells that result from elongation without division have multiple chromosomal copies.
In the absence of antibiotics or other stressors, filamentation occurs at a low frequency in bacterial populations (4–8% short filaments and 0–5% long filaments in 1- to 8-hour cultures). The increased cell length can protect bacteria from protozoan predation and neutrophil phagocytosis by making ingestion of cells more difficult. Filamentation is also thought to protect bacteria from antibiotics, and is associated with other aspects of bacterial virulence such as biofilm formation.
The number and length of filaments within a bacterial population increases when the bacteria are exposed to different physical, chemical and biological agents (e.g. UV light, DNA synthesis-inhibiting antibiotics, bacteriophages). This is termed conditional filamentation. Some of the key genes involved in filamentation in E. coli include sulA, minCD and damX.
Filament formation
Antibiotic-induced filamentation
Some peptidoglycan synthesis inhibitors (e.g. cefuroxime, ceftazidime) induce filamentation by inhibiting the penicillin binding proteins (PBPs) responsible for crosslinking peptidoglycan at the septal wall (e.g. PBP3 in E. coli and P. aeruginosa). Because the PBPs responsible for lateral wall synthesis are relatively unaffected by cefuroxime and ceftazidime, cell elongation proceeds without any cell division and filamentation is observed.
DNA synthesis-inhibiting and DNA damaging antibiotics (e.g. metronidazole, mitomycin C, the fluoroquinolones, novobiocin) induce filamentation via the SOS response. The SOS response inhibits septum formation until the DNA can be repaired, this delay stopping the transmission of damaged DNA to progeny. Bacteria inhibit septation by synthesizing protein SulA, an FtsZ inhibitor that halts Z-ri
|
https://en.wikipedia.org/wiki/Flying%20and%20gliding%20animals
|
A number of animals are capable of aerial locomotion, either by powered flight or by gliding. This trait has appeared by evolution many times, without any single common ancestor. Flight has evolved at least four times in separate animals: insects, pterosaurs, birds, and bats. Gliding has evolved on many more occasions. Usually the development is to aid canopy animals in getting from tree to tree, although there are other possibilities. Gliding, in particular, has evolved among rainforest animals, especially in the rainforests in Asia (most especially Borneo) where the trees are tall and widely spaced. Several species of aquatic animals, and a few amphibians and reptiles have also evolved this gliding flight ability, typically as a means of evading predators.
Types
Animal aerial locomotion can be divided into two categories: powered and unpowered. In unpowered modes of locomotion, the animal uses aerodynamic forces exerted on the body due to wind or falling through the air. In powered flight, the animal uses muscular power to generate aerodynamic forces to climb or to maintain steady, level flight. Those who can find air that is rising faster than they are falling can gain altitude by soaring.
Unpowered
These modes of locomotion typically require that an animal start from a raised location, converting that potential energy into kinetic energy and using aerodynamic forces to control trajectory and angle of descent. Energy is continually lost to drag without being replaced, thus these methods of locomotion have limited range and duration.
Falling: decreasing altitude under the force of gravity, using no adaptations to increase drag or provide lift.
Parachuting: falling at an angle greater than 45° from the horizontal with adaptations to increase drag forces. Very small animals may be carried up by the wind. Some gliding animals may use their gliding membranes for drag rather than lift, to safely descend.
Gliding flight: falling at an angle less than 45° from the horizo
|
https://en.wikipedia.org/wiki/Andy%20Lennon
|
Andy Lennon (September 1, 1914 - November 24, 2007) was most notably associated with his work in advanced model aircraft design.
Background
Lennon was involved in aviation since the age of 15, when he went for a short ride in a Curtiss Robin. He soon joined the Montreal Flying Club and began flying D.H. Gypsy Moths and early two-place Aeronca cabin monoplanes. He was educated in Canada at Edward VII School, Strathcona Academy, Montreal Technical School, McGill University and the University of Western Ontario, (London, Ontario).
Involvement in Manufacturing
Lennon entered the Canadian aircraft manufacturing industry and later moved to general manufacturing as an industrial engineer. Throughout his career, he continued to study aeronautics, particularly aircraft design, through aviation texts, NACA and NASA reports, and aviation periodicals. He tested many aeronautics theories by designing, building and flying nearly 25 experimental R/C models, miniatures of potential light aircraft. One model, the Seagull III, was a flying boat with wide aerobatic capabilities. Lennon was a licensed pilot in the United States and Canada.
Contributions in Literature
Lennon was a contributing editor to Model Airplane News, Model Aviation, Model Builder, RC Modeler, Fly RC and RC Models and Electronics. He wrote several books: "Basics of R/C Model Aircraft Design", "R/C Model Airplane Design" and "Canard: A Revolution in Flight." His last book, published in 1996, has been reprinted twice since. Lennon's authority in aerodynamics and related studies is well acknowledged by leaders in the aviation industry. His book "Canard: A Revolution in Flight" had a foreword written by Burt Rutan, a fitting authority on canard design. For his last book, "Basics of R/C Model Aircraft Design", Bob Kress, who designed the F-14 among other aircraft, wrote the introduction.
Model Design Development
Beginning in 1957, Lennon designed and published a wide range of model aircraft in various publications. These d
|
https://en.wikipedia.org/wiki/Charge-transfer%20amplifier
|
The charge-transfer amplifier (CTA) is an electronic amplifier circuit. Also known as transconveyance amplifiers, CTAs amplify electronic signals by dynamically conveying charge between capacitive nodes in proportion to the size of a differential input voltage. By appropriately selecting the relative node capacitances, voltage amplification occurs by the charge-voltage relationship of capacitors. CTAs are clocked, or sampling, amplifiers. They consume zero static power and can be designed to consume (theoretically) arbitrarily low dynamic power, proportional to the size of input signals being sampled. CMOS technology is most commonly used for implementation. CTAs were introduced in memory circuits in the 1970s, and more recently have been applied in multi-bit analog-to-digital converters (ADCs). They are also used in dynamic voltage comparator circuits.
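The capacitance-ratio argument above can be sketched in a few lines of Python. This is only an idealized illustration of the charge-voltage relationship (Q = C·V), not a model of any particular CTA circuit; the capacitor values and function names are assumptions chosen for the example.

```python
# Illustrative sketch (not a circuit simulation): voltage gain in an idealized
# charge-transfer amplifier arises from the charge-voltage relation Q = C * V.
# A charge packet proportional to the differential input is conveyed from a
# larger "transfer" capacitance C_T onto a smaller output capacitance C_O,
# so the sampled output swing is scaled by the ratio C_T / C_O.

def cta_output_swing(delta_v_in: float, c_transfer: float, c_output: float) -> float:
    """Ideal sampled output swing for one clock phase of a charge-transfer amplifier."""
    delta_q = c_transfer * delta_v_in      # charge conveyed during the transfer phase
    return delta_q / c_output              # same charge on a smaller capacitor -> larger voltage

if __name__ == "__main__":
    C_T = 1.0e-12    # 1 pF transfer capacitance (assumed value)
    C_O = 0.1e-12    # 0.1 pF output capacitance (assumed value)
    dv_in = 5e-3     # 5 mV differential input sample
    gain = C_T / C_O
    print(f"ideal gain ~ {gain:.1f}x, output swing ~ {cta_output_swing(dv_in, C_T, C_O) * 1e3:.1f} mV")
```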
See also
Comparator
Mixed-signal integrated circuit
Charge amplifier
|
https://en.wikipedia.org/wiki/Sicherman%20dice
|
Sicherman dice are a pair of 6-sided dice with non-standard numbers: one with the sides 1, 2, 2, 3, 3, 4 and the other with the sides 1, 3, 4, 5, 6, 8. They are notable as the only pair of 6-sided dice that are not normal dice, bear only positive integers, and have the same probability distribution for the sum as normal dice. They were invented in 1978 by George Sicherman of Buffalo, New York.
Mathematics
A standard exercise in elementary combinatorics is to calculate the number of ways of rolling any given value with a pair of fair six-sided dice (by taking the sum of the two rolls). The table shows the number of such ways of rolling a given value:

Total:          2  3  4  5  6  7  8  9  10  11  12
Number of ways: 1  2  3  4  5  6  5  4  3   2   1
Crazy dice is a mathematical exercise in elementary combinatorics, involving a re-labeling of the faces of a pair of six-sided dice to reproduce the same frequency of sums as the standard labeling. The Sicherman dice are crazy dice that are re-labeled with only positive integers. (If the integers need not be positive, to get the same probability distribution, the number on each face of one die can be decreased by k and that of the other die increased by k, for any natural number k, giving infinitely many solutions.)
The table below lists all possible totals of dice rolls with standard dice and Sicherman dice. One Sicherman die is coloured for clarity: 1–2–2–3–3–4, and the other is all black, 1–3–4–5–6–8.
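The claim that the two labelings produce the same distribution of sums is easy to verify by enumeration; the following short Python check (names are illustrative) counts all 36 ordered rolls for each pair of dice.

```python
from collections import Counter
from itertools import product

standard = [1, 2, 3, 4, 5, 6]
sicherman_a = [1, 2, 2, 3, 3, 4]
sicherman_b = [1, 3, 4, 5, 6, 8]

def sum_distribution(die1, die2):
    """Count how many of the 36 ordered rolls give each total."""
    return Counter(a + b for a, b in product(die1, die2))

std = sum_distribution(standard, standard)
sich = sum_distribution(sicherman_a, sicherman_b)

print(std == sich)                      # True: identical distribution of sums
for total in range(2, 13):
    print(total, std[total], sich[total])
```

Running it prints True, confirming that every total from 2 to 12 occurs with the same frequency for both pairs.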
History
The Sicherman dice were discovered by George Sicherman of Buffalo, New York and were originally reported by Martin Gardner in a 1978 article in Scientific American.
The numbers can be arranged so that all pairs of numbers on opposing sides sum to equal numbers, 5 for the first and 9 for the second.
Later, in a letter to Sicherman, Gardner mentioned that a magician he knew had anticipated Sicherman's discovery. For generalizations of the Sicherman dice to more than two dice and noncubical dice, see Broline (1979), Gallian and Rusin (1979), Brunson and Swift (1997/1998), and Fowler and Swift (1999).
M
|
https://en.wikipedia.org/wiki/Reliable%20Server%20Pooling
|
Reliable Server Pooling (RSerPool) is a computer protocol framework for management of and access to multiple, coordinated (pooled) servers. RSerPool is an IETF standard, which has been developed by the IETF RSerPool Working Group and documented in RFC 5351, RFC 5352, RFC 5353, RFC 5354, RFC 5355 and RFC 5356.
Introduction
In the terminology of RSerPool a server is denoted as a Pool Element (PE). In its Pool, it is identified by its Pool Element Identifier (PE ID), a 32-bit number. The PE ID is randomly chosen upon a PE's registration to its pool. The set of all pools is denoted as the Handlespace. In older literature, it may be denoted as Namespace. This denomination has been dropped in order to avoid confusion with the Domain Name System (DNS). Each pool in a handlespace is identified by a unique Pool Handle (PH), which is represented by an arbitrary byte vector. Usually, this is an ASCII or Unicode name of the pool, e.g. "DownloadPool" or "WebServerPool".
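As a rough illustration of this bookkeeping, the sketch below models a handlespace as a mapping from Pool Handles to Pool Elements keyed by randomly chosen 32-bit PE IDs. It is a toy data structure for exposition only; the class and method names are assumptions and do not correspond to the RSerPool protocol messages or any real API.

```python
import secrets

class Handlespace:
    """Toy model of an RSerPool handlespace: Pool Handle -> {PE ID -> address}."""

    def __init__(self):
        self.pools = {}

    def register(self, pool_handle: bytes, address: str) -> int:
        """Register a Pool Element; the PE ID is a randomly chosen 32-bit number."""
        pe_id = secrets.randbits(32)
        self.pools.setdefault(pool_handle, {})[pe_id] = address
        return pe_id

    def deregister(self, pool_handle: bytes, pe_id: int) -> None:
        """Remove a Pool Element from its pool, if present."""
        self.pools.get(pool_handle, {}).pop(pe_id, None)

    def elements(self, pool_handle: bytes) -> dict:
        """Return a copy of the pool's PE ID -> address mapping."""
        return dict(self.pools.get(pool_handle, {}))

hs = Handlespace()
pe_id = hs.register(b"WebServerPool", "10.0.0.1:8080")
print(hs.elements(b"WebServerPool"))
```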
Each handlespace has a certain scope (e.g. an organization or company), denoted as Operation Scope. It is explicitly not a goal of RSerPool to manage the global Internet's pools within a single handlespace. Due to the localisation of Operation Scopes, it is possible to keep the handlespace "flat". That is, PHs do not have any hierarchy in contrast to the Domain Name System with its top-level and sub-domains. This constraint results in a significant simplification of handlespace management.
Within an operation scope, the handlespace is managed by redundant Pool Registrars (PR), also denoted as ENRP servers or Name Servers (NS). PRs have to be redundant in order to avoid a PR to become a Single Point of Failure (SPoF). Each PR of an operation scope is identified by its Registrar ID (PR ID), which is a 32-bit random number. It is not necessary to ensure uniqueness of PR IDs. A PR contains a complete copy of the operation scope's handlespace. PRs of an operation scope synchronize their view of the handlespace usi
|
https://en.wikipedia.org/wiki/David%20Eppstein
|
David Arthur Eppstein (born 1963) is an American computer scientist and mathematician. He is a Distinguished Professor of computer science at the University of California, Irvine. He is known for his work in computational geometry, graph algorithms, and recreational mathematics. In 2011, he was named an ACM Fellow.
Biography
Born in Windsor, England, in 1963, Eppstein received a B.S. in Mathematics from Stanford University in 1984, and later an M.S. (1985) and Ph.D. (1989) in computer science from Columbia University, after which he took a postdoctoral position at Xerox's Palo Alto Research Center. He joined the UC Irvine faculty in 1990, and was co-chair of the Computer Science Department there from 2002 to 2005. In 2014, he was named a Chancellor's Professor. In October 2017, Eppstein was one of 396 members elected as fellows of the American Association for the Advancement of Science.
Eppstein is also an amateur digital photographer, as well as a Wikipedia editor and administrator with over 200,000 edits.
Research interests
In computer science, Eppstein's research has included work on minimum spanning trees, shortest paths, dynamic graph data structures, graph coloring, graph drawing and geometric optimization. He has published also in application areas such as finite element meshing, which is used in engineering design, and in computational statistics, particularly in robust, multivariate, nonparametric statistics.
Eppstein served as the program chair for the theory track of the ACM Symposium on Computational Geometry in 2001, the program chair of the ACM-SIAM Symposium on Discrete Algorithms in 2002, and the co-chair for the International Symposium on Graph Drawing in 2009.
Selected publications
Republished in
Books
See also
Eppstein's algorithm
|
https://en.wikipedia.org/wiki/Cuckoo%20hashing
|
Cuckoo hashing is a scheme in computer programming for resolving hash collisions of values of hash functions in a table, with worst-case constant lookup time. The name derives from the behavior of some species of cuckoo, where the cuckoo chick pushes the other eggs or young out of the nest when it hatches in a variation of the behavior referred to as brood parasitism; analogously, inserting a new key into a cuckoo hashing table may push an older key to a different location in the table.
History
Cuckoo hashing was first described by Rasmus Pagh and Flemming Friche Rodler in a 2001 conference paper. The paper was awarded the European Symposium on Algorithms Test-of-Time award in 2020.
Operations
Cuckoo hashing is a form of open addressing in which each non-empty cell of a hash table contains a key or key–value pair. A hash function is used to determine the location for each key, and its presence in the table (or the value associated with it) can be found by examining that cell of the table. However, open addressing suffers from collisions, which happen when more than one key is mapped to the same cell.
The basic idea of cuckoo hashing is to resolve collisions by using two hash functions instead of only one. This provides two possible locations in the hash table for each key. In one of the commonly used variants of the algorithm, the hash table is split into two smaller tables of equal size, and each hash function provides an index into one of these two tables. It is also possible for both hash functions to provide indexes into a single table.
Lookup
Cuckoo hashing uses two hash tables, T1 and T2. Assuming r is the length of each table, the hash functions for the two tables are defined as h1(x) = x mod r and h2(x) = ⌊x/r⌋ mod r, where x is the key. A key x is stored either in cell h1(x) of T1 or in cell h2(x) of T2. The lookup operation is as follows:

Lookup(x) := (T1[h1(x)] = x) ∨ (T2[h2(x)] = x)

The logical or (∨) denotes that the key x is found in either T1 or T2, so lookup takes O(1) time in the worst case.
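A minimal sketch of these operations, using the same two hash functions h1(x) = x mod r and h2(x) = ⌊x/r⌋ mod r, might look as follows (Python, integer keys only, with a fixed displacement bound standing in for the rehashing step a real implementation would perform):

```python
class CuckooHash:
    """Minimal cuckoo hash table with two sub-tables (integer keys only)."""

    MAX_KICKS = 32  # displacement bound; a real implementation would rehash here

    def __init__(self, r: int = 11):
        self.r = r
        self.t1 = [None] * r
        self.t2 = [None] * r

    def _h1(self, x: int) -> int:
        return x % self.r

    def _h2(self, x: int) -> int:
        return (x // self.r) % self.r

    def lookup(self, x: int) -> bool:
        # Worst-case constant time: at most two cells are probed.
        return self.t1[self._h1(x)] == x or self.t2[self._h2(x)] == x

    def insert(self, x: int) -> None:
        if self.lookup(x):
            return
        for _ in range(self.MAX_KICKS):
            i = self._h1(x)
            x, self.t1[i] = self.t1[i], x   # place x in T1; x becomes the evicted key (or None)
            if x is None:
                return
            j = self._h2(x)
            x, self.t2[j] = self.t2[j], x   # place the evictee in T2; repeat if it evicts another key
            if x is None:
                return
        raise RuntimeError("too many displacements; new hash functions and a rehash are needed")

    def delete(self, x: int) -> None:
        # Constant time: only the two candidate cells need to be checked.
        if self.t1[self._h1(x)] == x:
            self.t1[self._h1(x)] = None
        elif self.t2[self._h2(x)] == x:
            self.t2[self._h2(x)] = None

table = CuckooHash()
for key in (20, 50, 53, 75, 100, 67, 105):
    table.insert(key)
print(table.lookup(53), table.lookup(54))   # True False
```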
Deletion
Deletion is performed in O(1) time since there isn't involvement
|
https://en.wikipedia.org/wiki/History%20of%20plant%20systematics
|
The history of plant systematics—the biological classification of plants—stretches from the work of ancient Greek to modern evolutionary biologists. As a field of science, plant systematics came into being only slowly, early plant lore usually being treated as part of the study of medicine. Later, classification and description was driven by natural history and natural theology. Until the advent of the theory of evolution, nearly all classification was based on the scala naturae. The professionalization of botany in the 18th and 19th century marked a shift toward more holistic classification methods, eventually based on evolutionary relationships.
Antiquity
The peripatetic philosopher Theophrastus (372–287 BC), as a student of Aristotle in Ancient Greece, wrote Historia Plantarum, the earliest surviving treatise on plants, where he listed the names of over 500 plant species. He did not articulate a formal classification scheme, but relied on the common groupings of folk taxonomy combined with growth form: tree; shrub; undershrub; or herb.
The De Materia Medica of Dioscorides was an important early compendium of plant descriptions (over five hundred), classifying plants chiefly by their medicinal effects.
Medieval
The Byzantine emperor Constantine VII sent a copy of Dioscorides' pharmacopeia to the Umayyad Caliph Abd al-Rahman III, who ruled Córdoba in the 10th century, and also sent a monk named Nicolas to translate the book into Arabic. It was in use from its publication in the 1st century until the 16th century, making it one of the major herbals throughout the Middle Ages. The taxonomy criteria of medieval texts are different from what is used today. Plants with similar external appearance were usually grouped under the same species name, though in modern taxonomy they are considered different.
Abū l-Khayr's botanical work is the most complete Andalusi botanical text known to modern scholars. It is noted for its detailed descriptions of plant morphology and phe
|
https://en.wikipedia.org/wiki/Aggregate%20Server%20Access%20Protocol
|
As a communications protocol, the Aggregate Server Access Protocol is used by the Reliable server pooling (RSerPool) framework for the communication between
Pool Elements and Pool Registrars (Application Layer)
Pool Users and Pool Registrars (Application Layer)
Pool Users and Pool Elements (Session Layer)
Standards Documents
Aggregate Server Access Protocol (ASAP)
Aggregate Server Access Protocol (ASAP) and Endpoint Handlespace Redundancy Protocol (ENRP) Parameters
Threats Introduced by Reliable Server Pooling (RSerPool) and Requirements for Security in Response to Threats
Reliable Server Pooling Policies
External links
Thomas Dreibholz's Reliable Server Pooling (RSerPool) Page
IETF RSerPool Working Group
Internet protocols
Internet Standards
Session layer protocols
|
https://en.wikipedia.org/wiki/Algaecide
|
Algaecide or algicide is a biocide used for killing and preventing the growth of algae, often defined in a loose sense that, beyond the biological definition, also includes cyanobacteria ("blue-green algae"). An algaecide may be used for controlled bodies of water (reservoirs, golf ponds, swimming pools), but may also be used on land for locations such as turfgrass.
Types
Inorganic compounds
Because of their simplicity, some inorganic compounds have been known since antiquity for their algicidal action.
Copper(II) sulfate remains "the most effective algicidal treatment". A related traditional use is the Bordeaux mixture, used to control fungus on fruits.
Hydrated lime, as a biocide, is allowed in the production of organic foods.
Barley straw
Barley straw, in England, is placed in mesh bags and floated in fish ponds or water gardens to help reduce algal growth without harming pond plants and animals. Barley straw has not been approved by the United States Environmental Protection Agency (EPA) for use as a pesticide and its effectiveness as an algaecide in ponds has produced mixed results during university testing in the United States and England. It is unclear how straw actually works.
Synthetic algicides
Synthetic algicides include:
benzalkonium chloride – "quat" disinfectant that attacks membranes
bethoxazin – "new broad spectrum industrial microbicide" in 2012, noted as "Canceled in U.S." in 2022 PubChem-EPA query
cybutryne – banned since 2023 in ship paint
dichlone – quinone fungicide/algaecide, not persistent in soil
dichlorophen – also kills invertebrate animals and bacteria
diuron – herbicide/algaecide, inhibits photosynthesis
endothal – herbicide/algaecide, inhibits protein phosphatase 2A
fentin – quinone fungicide/algaecide, discontinued
isoproturon – selective substituted urea herbicide, discontinued
methabenzthiazuron – substituted urea herbicide, discontinued
nabam – fungicide/algicide discontinued in the EU over cancer
oxyfluorfen – herbicide,
|
https://en.wikipedia.org/wiki/Endpoint%20Handlespace%20Redundancy%20Protocol
|
The Endpoint Handlespace Redundancy Protocol is used by the Reliable server pooling (RSerPool) framework for the communication between Pool Registrars to maintain and synchronize a handlespace.
Like the Aggregate Server Access Protocol, it operates at the application layer. It is a work in progress within the IETF.
External links
Thomas Dreibholz's Reliable Server Pooling (RSerPool) Page
IETF RSerPool Working Group
Endpoint Handlespace Redundancy Protocol (ENRP)
Aggregate Server Access Protocol (ASAP) and Endpoint Handlespace Redundancy Protocol (ENRP) Parameters
Threats Introduced by Reliable Server Pooling (RSerPool) and Requirements for Security in Response to Threats
Reliable Server Pooling Policies
Internet protocols
Internet Standards
Session layer protocols
|
https://en.wikipedia.org/wiki/Plant%20taxonomy
|
Plant taxonomy is the science that finds, identifies, describes, classifies, and names plants. It is one of the main branches of taxonomy (the science that finds, describes, classifies, and names living things).
Plant taxonomy is closely allied to plant systematics, and there is no sharp boundary between the two. In practice, "plant systematics" involves relationships between plants and their evolution, especially at the higher levels, whereas "plant taxonomy" deals with the actual handling of plant specimens. The precise relationship between taxonomy and systematics, however, has changed along with the goals and methods employed.
Plant taxonomy is well known for being turbulent, traditionally lacking any close agreement on the circumscription and placement of taxa. See the list of systems of plant taxonomy.
Background
Classification systems serve the purpose of grouping organisms by characteristics common to each group. Plants are distinguished from animals by various traits: they have cell walls made of cellulose, they exhibit polyploidy, and they show sedentary growth. Whereas animals must consume organic molecules, plants are able to convert light energy into organic energy by the process of photosynthesis. The basic unit of classification is the species, a group able to breed amongst themselves and bearing mutual resemblance; a broader classification is the genus. Several genera make up a family, and several families an order.
History of classification
The botanical term "angiosperm", from the Greek words ἀγγεῖον (angeion, 'bottle, vessel') and σπέρμα (sperma, 'seed'), was coined in the form "Angiospermae" by Paul Hermann in 1690, but he used this term to refer to a group of plants which form only a subset of what today are known as angiosperms. Hermann's Angiospermae included only flowering plants possessing seeds enclosed in capsules, distinguished from his Gymnospermae, which were flowering plants with achenial or schizo-carpic fruits, the whole fruit or each of its pieces being here regarded
|
https://en.wikipedia.org/wiki/Pool%20Registrar
|
In computing, a Pool Registrar (PR) is a component of the reliable server pooling (RSerPool) framework which manages a handlespace. PRs are also denoted as ENRP server or Name Server (NS).
The responsibilities of a PR are the following:
Register Pool Elements into a handlespace,
Deregister Pool Elements from a handlespace,
Monitor Pool Elements by keep-alive messages,
Provide handle resolution (i.e. server selection) to Pool Users,
Audit the consistency of a handlespace between multiple PRs,
Synchronize a handlespace with another PR.
Standards Documents
Aggregate Server Access Protocol (ASAP)
Endpoint Handlespace Redundancy Protocol (ENRP)
Aggregate Server Access Protocol (ASAP) and Endpoint Handlespace Redundancy Protocol (ENRP) Parameters
Reliable Server Pooling Policies
External links
Thomas Dreibholz's Reliable Server Pooling (RSerPool) Page
IETF RSerPool Working Group
Internet protocols
Internet Standards
|
https://en.wikipedia.org/wiki/Pool%20User
|
A Pool User (PU) is a client in the Reliable Server Pooling (RSerPool) framework.
In order to use the service provided by a pool, a PU has to perform the following steps (a client-side sketch follows the list):
Ask a Pool Registrar for server selection (the Pool Registrar will return a list of servers, called Pool Elements),
Select one Pool Element, establish a connection and use the actual service,
Repeat the server selection and connection establishment procedure in case of server failures,
Perform an application-specific session failover to a new server to resume an interrupted session,
Report failed servers to the Pool Registrar.
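The helper names in the sketch below (handle_resolution, connect, resume, report_failure) are placeholders standing in for the ASAP interactions described in the standards documents, not an actual RSerPool API; the code only illustrates the order of the steps above and the failover loop.

```python
import random

def use_pool_service(pool_handle, registrar, request, max_attempts=5):
    """Sketch of a Pool User's failover loop; all helper names are illustrative only."""
    session_state = None
    for _ in range(max_attempts):
        pool_elements = registrar.handle_resolution(pool_handle)   # step 1: server selection
        if not pool_elements:
            break
        pe = random.choice(pool_elements)                          # step 2: pick one Pool Element
        try:
            connection = pe.connect()
            if session_state is not None:
                connection.resume(session_state)                   # step 4: application-specific failover
            return connection.send(request)                        # use the actual service
        except ConnectionError as failure:
            session_state = getattr(failure, "partial_state", session_state)
            registrar.report_failure(pool_handle, pe)              # step 5: report the failed server
            continue                                               # step 3: repeat selection and connection
    raise RuntimeError("no working Pool Element available")
```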
Standards Documents
Aggregate Server Access Protocol (ASAP)
Endpoint Handlespace Redundancy Protocol (ENRP)
Aggregate Server Access Protocol (ASAP) and Endpoint Handlespace Redundancy Protocol (ENRP) Parameters
Reliable Server Pooling Policies
External links
Thomas Dreibholz's Reliable Server Pooling (RSerPool) Page
IETF RSerPool Working Group
Internet protocols
Internet Standards
|
https://en.wikipedia.org/wiki/Compaq%20Presario
|
Presario is a discontinued line of consumer desktop computers and notebooks originally produced by Compaq. The Presario family of computers was introduced in September 1993.
In the mid-1990s, Compaq began manufacturing PC monitors under the Presario brand. A series of all-in-one units, containing both the PC and the monitor in the same case, were also released.
After Compaq merged with HP in 2002, the Presario line of desktops and laptops was sold concurrently with HP's other products, such as the HP Pavilion. The Presario laptops subsequently replaced the then-discontinued HP OmniBook line of notebooks around that same year.
The Presario brand name continued to be used for low-end home desktops and laptops from 2002 up until the Compaq brand name was discontinued by HP in 2013.
Desktop PC series
Compaq Presario 2100
Compaq Presario 2200
Compaq Presario 2240
Compaq Presario 2254
Compaq Presario 2256
Compaq Presario 2285V
Compaq Presario 2286
Compaq Presario 2288
Compaq Presario 4108
Compaq Presario 4110
Compaq Presario 4160
Compaq Presario 4505
Compaq Presario 4508
Compaq Presario 4528
Compaq Presario 4532
Compaq Presario 4540
Compaq Presario 4600
Compaq Presario 4620
Compaq Presario 4712
Compaq Presario 4800
Compaq Presario 5000 series
Compaq Presario 5000
Compaq Presario 5006US
Compaq Presario 5008US
Compaq Presario 5000A
Compaq Presario 5000T
Compaq Presario 5000Z
Compaq Presario 5010
Compaq Presario 5030
Compaq Presario 5050
Compaq Presario 5080
Compaq Presario 5100 series
Compaq Presario 5150
Compaq Presario 5170
Compaq Presario 5184
Compaq Presario 5185
Compaq Presario 5190
Compaq Presario 5200 series
Compaq Presario 5202
Compaq Presario 5222
Compaq Presario 5240
Compaq Presario 5280
Compaq Presario 5285
Compaq Presario 5360
Compaq Presario 5400
Compaq Presario 5460
Compaq Presario 5477
Compaq Presario 5500
Compaq Presario 5520
Compaq Presario 5599
Compaq Presario 5600 series
Compaq Presario 5660
Compa
|
https://en.wikipedia.org/wiki/Lauricella%20hypergeometric%20series
|
In 1893 Giuseppe Lauricella defined and studied four hypergeometric series FA, FB, FC, FD of three variables. They are:

F_A^{(3)}(a, b_1, b_2, b_3; c_1, c_2, c_3; x_1, x_2, x_3) = \sum_{i_1, i_2, i_3 = 0}^{\infty} \frac{(a)_{i_1+i_2+i_3} (b_1)_{i_1} (b_2)_{i_2} (b_3)_{i_3}}{(c_1)_{i_1} (c_2)_{i_2} (c_3)_{i_3} \, i_1! \, i_2! \, i_3!} \, x_1^{i_1} x_2^{i_2} x_3^{i_3}

for |x1| + |x2| + |x3| < 1 and

F_B^{(3)}(a_1, a_2, a_3; b_1, b_2, b_3; c; x_1, x_2, x_3) = \sum_{i_1, i_2, i_3 = 0}^{\infty} \frac{(a_1)_{i_1} (a_2)_{i_2} (a_3)_{i_3} (b_1)_{i_1} (b_2)_{i_2} (b_3)_{i_3}}{(c)_{i_1+i_2+i_3} \, i_1! \, i_2! \, i_3!} \, x_1^{i_1} x_2^{i_2} x_3^{i_3}

for |x1| < 1, |x2| < 1, |x3| < 1 and

F_C^{(3)}(a; b; c_1, c_2, c_3; x_1, x_2, x_3) = \sum_{i_1, i_2, i_3 = 0}^{\infty} \frac{(a)_{i_1+i_2+i_3} (b)_{i_1+i_2+i_3}}{(c_1)_{i_1} (c_2)_{i_2} (c_3)_{i_3} \, i_1! \, i_2! \, i_3!} \, x_1^{i_1} x_2^{i_2} x_3^{i_3}

for |x1|½ + |x2|½ + |x3|½ < 1 and

F_D^{(3)}(a; b_1, b_2, b_3; c; x_1, x_2, x_3) = \sum_{i_1, i_2, i_3 = 0}^{\infty} \frac{(a)_{i_1+i_2+i_3} (b_1)_{i_1} (b_2)_{i_2} (b_3)_{i_3}}{(c)_{i_1+i_2+i_3} \, i_1! \, i_2! \, i_3!} \, x_1^{i_1} x_2^{i_2} x_3^{i_3}

for |x1| < 1, |x2| < 1, |x3| < 1. Here the Pochhammer symbol (q)_i indicates the i-th rising factorial of q, i.e.

(q)_i = q (q+1) \cdots (q+i-1) = \frac{\Gamma(q+i)}{\Gamma(q)},

where the second equality is true for all complex q except q = 0, −1, −2, ....
These functions can be extended to other values of the variables x1, x2, x3 by means of analytic continuation.
Lauricella also indicated the existence of ten other hypergeometric functions of three variables. These were named FE, FF, ..., FT and studied by Shanti Saran in 1954. There are therefore a total of 14 Lauricella–Saran hypergeometric functions.
Generalization to n variables
These functions can be straightforwardly extended to n variables. One writes for example

F_A^{(n)}(a, b_1, \ldots, b_n; c_1, \ldots, c_n; x_1, \ldots, x_n) = \sum_{i_1, \ldots, i_n = 0}^{\infty} \frac{(a)_{i_1 + \cdots + i_n} (b_1)_{i_1} \cdots (b_n)_{i_n}}{(c_1)_{i_1} \cdots (c_n)_{i_n} \, i_1! \cdots i_n!} \, x_1^{i_1} \cdots x_n^{i_n},
where |x1| + ... + |xn| < 1. These generalized series too are sometimes referred to as Lauricella functions.
When n = 2, the Lauricella functions correspond to the Appell hypergeometric series of two variables: F_A^{(2)} ≡ F_2, F_B^{(2)} ≡ F_3, F_C^{(2)} ≡ F_4, F_D^{(2)} ≡ F_1.
When n = 1, all four functions reduce to the Gauss hypergeometric function: F_A^{(1)}(a, b; c; x) ≡ F_B^{(1)}(a, b; c; x) ≡ F_C^{(1)}(a, b; c; x) ≡ F_D^{(1)}(a, b; c; x) ≡ {}_2F_1(a, b; c; x).
Integral representation of FD
In analogy with Appell's function F1, Lauricella's FD can be written as a one-dimensional Euler-type integral for any number n of variables:

F_D^{(n)}(a; b_1, \ldots, b_n; c; x_1, \ldots, x_n) = \frac{\Gamma(c)}{\Gamma(a)\,\Gamma(c-a)} \int_0^1 t^{a-1} (1-t)^{c-a-1} (1 - x_1 t)^{-b_1} \cdots (1 - x_n t)^{-b_n} \, dt, \qquad \operatorname{Re} c > \operatorname{Re} a > 0.
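Assuming the series definition of FD given above, the integral representation can be checked numerically; the sketch below (Python with SciPy, parameter values chosen arbitrarily) compares a truncated triple series against the Euler-type integral.

```python
import numpy as np
from math import gamma, factorial
from scipy.integrate import quad

def poch(q, n):
    """Pochhammer symbol (q)_n = q (q+1) ... (q+n-1)."""
    out = 1.0
    for k in range(n):
        out *= q + k
    return out

def lauricella_fd_series(a, b, c, x, terms=30):
    """Truncated defining series of Lauricella F_D in three variables."""
    total = 0.0
    for i1 in range(terms):
        for i2 in range(terms):
            for i3 in range(terms):
                n = i1 + i2 + i3
                total += (poch(a, n) * poch(b[0], i1) * poch(b[1], i2) * poch(b[2], i3)
                          / poch(c, n)
                          * x[0]**i1 * x[1]**i2 * x[2]**i3
                          / (factorial(i1) * factorial(i2) * factorial(i3)))
    return total

def lauricella_fd_integral(a, b, c, x):
    """One-dimensional Euler-type integral, valid for Re(c) > Re(a) > 0."""
    integrand = lambda t: (t**(a - 1) * (1 - t)**(c - a - 1)
                           * np.prod([(1 - xi * t)**(-bi) for xi, bi in zip(x, b)]))
    value, _ = quad(integrand, 0.0, 1.0)
    return gamma(c) / (gamma(a) * gamma(c - a)) * value

a, b, c = 0.5, (0.3, 0.2, 0.4), 1.7        # arbitrary example parameters
x = (0.1, 0.2, 0.15)                       # small arguments, well inside the unit polydisc
print(lauricella_fd_series(a, b, c, x), lauricella_fd_integral(a, b, c, x))
```

Both numbers agree to several decimal places for the small arguments used here.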
This representation can be easily verified by means of Taylor expansion of the integrand, followed by termwise integration. The representation implies that the incomplete elliptic integral Π is a special case of Lauricella's function FD with three variables:
Finite-sum solutions of FD
Case 1 : , a positive integer
One can relate FD to the Carlson R function via
with the iterative sum
and
where it can be exploited that the Carlson R function with has an exact representation (see for more information).
The vectors are defined as
where the length of and is , while the vectors and have length .
Case 2: , a positive i
|
https://en.wikipedia.org/wiki/Giardiniera
|
Giardiniera is an Italian relish of pickled vegetables in vinegar or oil.
Varieties and uses
Italian giardiniera is also called sottaceti ("under vinegar"), a common term for pickled foods. It is typically eaten as an antipasto or with salads.
In the United States, giardiniera is commonly available in traditional or spicy varieties, and the latter is sometimes referred to as "hot mix".
Giardiniera is a versatile condiment that can be used on a variety of different foods, such as bratwurst, bruschetta, burgers, pasta salad, eggs (omelets), hot dogs, tuna salad, sandwiches, and much more. In the U.S. it is not uncommon to use giardiniera on pasta.
In the cuisine of Chicago, an oil-based giardiniera is often used as a condiment, typically as a topping on Italian beef sandwiches, subs, and pizza.
A milder variety of giardiniera is used for the olive salad in the muffuletta sandwich.
Ingredients
The Italian version includes bell peppers, celery, carrots, cauliflower and gherkins. The pickled vegetables are marinated in oil, red- or white-wine vinegar, herbs and spices.
Chicago-style giardiniera is commonly made spicy with sport peppers or chili flakes, along with a combination of assorted vegetables, including bell peppers, celery, carrots, cauliflower, and sometimes gherkins or olives, all marinated in vegetable oil, olive oil, soybean oil, or any combination of the three. Some commercially prepared versions are labeled "Chicago-style giardiniera".
See also
Encurtido – a pickled vegetable appetizer, side dish and condiment in the Mesoamerican region
|
https://en.wikipedia.org/wiki/International%20Framework%20for%20Nuclear%20Energy%20Cooperation
|
The International Framework for Nuclear Energy Cooperation (IFNEC) is a forum of states and organizations that share a common vision of a safe and secure development of nuclear energy for worldwide purposes. Formerly the Global Nuclear Energy Partnership (GNEP), IFNEC began as a U.S. proposal, announced by United States Secretary of Energy Samuel Bodman on February 6, 2006, to form an international partnership to promote the use of nuclear power and close the nuclear fuel cycle in a way that reduces nuclear waste and the risk of nuclear proliferation. This proposal would divide the world into "fuel supplier nations," which supply enriched uranium fuel and take back spent fuel, and "user nations," which operate nuclear power plants.
As GNEP the proposal proved controversial in the United States and internationally. The U.S. Congress provided far less funding for GNEP than President George W. Bush requested. U.S. arms control organizations criticized the proposal to resume reprocessing as costly and increasing proliferation risks. Some countries and analysts criticized the GNEP proposal for discriminating between countries as nuclear fuel cycle "haves" and "have-nots." In April 2009 the U.S. Department of Energy announced the cancellation of the U.S. domestic component of GNEP.
In 2010, the GNEP was renamed the International Framework for Nuclear Energy Cooperation. IFNEC is now an international partnership with 34 participant and 31 observer countries, and three international organization observers. The international organization observers are: the International Atomic Energy Agency, the Generation IV International Forum, and the European Atomic Energy Community. Since 2015, the Nuclear Energy Agency provides Technical Secretariat support. IFNEC operates by consensus among its partners based on an agreed GNEP Statement of Mission.
GNEP in the United States
The GNEP proposal began as part of the Advanced Energy Initiative announced by President Bush in his 200
|
https://en.wikipedia.org/wiki/Reciprocal%20gamma%20function
|
In mathematics, the reciprocal gamma function is the function

f(z) = \frac{1}{\Gamma(z)},

where Γ(z) denotes the gamma function. Since the gamma function is meromorphic and nonzero everywhere in the complex plane, its reciprocal is an entire function. As an entire function, it is of order 1 (meaning that log log M(r) grows no faster than log r, where M(r) is the maximum of |1/Γ(z)| on |z| = r), but of infinite type (meaning that log M(r) grows faster than any multiple of r, since its growth is approximately proportional to r log r in the left-half plane).
The reciprocal is sometimes used as a starting point for numerical computation of the gamma function, and a few software libraries provide it separately from the regular gamma function.
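For example, SciPy exposes the reciprocal gamma function separately as scipy.special.rgamma (assuming SciPy is installed); a quick comparison against 1/Γ(z):

```python
import numpy as np
from scipy.special import gamma, rgamma

z = np.array([0.5, 1.0, 3.5, -0.5])
print(rgamma(z))             # reciprocal gamma evaluated directly
print(1.0 / gamma(z))        # same values via the ordinary gamma function

# Mathematically, 1/Gamma(z) is simply zero at the poles of the gamma function
# (the non-positive integers), so a dedicated routine avoids dividing by infinity:
print(rgamma(np.array([0.0, -1.0, -2.0])))
```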
Karl Weierstrass called the reciprocal gamma function the "factorielle" and used it in his development of the Weierstrass factorization theorem.
Infinite product expansion
Following from the infinite product definitions for the gamma function, due to Euler and Weierstrass respectively, we get the following infinite product expansions for the reciprocal gamma function:

\frac{1}{\Gamma(z)} = z \prod_{n=1}^{\infty} \frac{1 + \frac{z}{n}}{\left(1 + \frac{1}{n}\right)^{z}}

and

\frac{1}{\Gamma(z)} = z\, e^{\gamma z} \prod_{n=1}^{\infty} \left(1 + \frac{z}{n}\right) e^{-z/n}

where γ is the Euler–Mascheroni constant. These expansions are valid for all complex numbers z.
Taylor series
Taylor series expansion around 0 gives:

\frac{1}{\Gamma(z)} = z + \gamma z^2 + \left(\frac{\gamma^2}{2} - \frac{\pi^2}{12}\right) z^3 + \cdots

where γ is the Euler–Mascheroni constant. Writing \frac{1}{\Gamma(z)} = \sum_{k=1}^{\infty} a_k z^k, for k > 2 the coefficient a_k of the z^k term can be computed recursively as

a_k = \frac{\gamma\, a_{k-1} - \zeta(2)\, a_{k-2} + \zeta(3)\, a_{k-3} - \cdots + (-1)^k \zeta(k-1)\, a_1}{k-1}

where ζ(s) is the Riemann zeta function. An integral representation for these coefficients was recently found by Fekih-Ahmed (2014):
For small values, these give the following values:
Fekih-Ahmed (2014) also gives an approximation for :
where and is the minus-first branch of the Lambert W function.
The Taylor expansion around has the same (but shifted) coefficients, i.e.:
(the reciprocal of Gauss' pi-function).
Asymptotic expansion
As |z| goes to infinity at a constant arg(z) we have:
Contour integral representation
An integral representation due to Hermann Hankel is
where is the Hankel contour, that is, the path encircling 0 in the positive direction, beginning at and returning to positive infinity with respect for the
|
https://en.wikipedia.org/wiki/Algebraic%20specification
|
Algebraic specification is a software engineering technique for formally specifying system behavior. It was a very active subject of computer science research around 1980.
Overview
Algebraic specification seeks to systematically develop more efficient programs by:
formally defining types of data, and mathematical operations on those data types
abstracting implementation details, such as the size of representations (in memory) and the efficiency of obtaining outcome of computations
formalizing the computations and operations on data types
allowing for automation by formally restricting operations to this limited set of behaviors and data types.
An algebraic specification achieves these goals by defining one or more data types, and specifying a collection of functions that operate on those data types. These functions can be divided into two classes (a minimal sketch follows the list):
Constructor functions: Functions that create or initialize the data elements, or construct complex elements from simpler ones. The set of available constructor functions is implied by the specification's signature. Additionally, a specification can contain equations defining equivalences between the objects constructed by these functions. Whether the underlying representation is identical for different but equivalent constructions is implementation-dependent.
Additional functions: Functions that operate on the data types, and are defined in terms of the constructor functions.
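The sketch referred to above illustrates the split in Python, using the Boolean data type of the example below: constructor terms for true and false, and additional functions defined only through equations over those constructors. The function names and term encoding are arbitrary choices for exposition, not a standard notation.

```python
# Constructor functions build canonical terms of the Boolean data type.
def true():
    return ("true",)

def false():
    return ("false",)

# Additional functions are defined purely by equations over constructor terms.
def not_(b):
    # Defining equations: not(true) = false, not(false) = true
    return false() if b == true() else true()

def and_(a, b):
    # Defining equations: and(true, y) = y, and(false, y) = false
    return b if a == true() else false()

def xor_(a, b):
    # Defining equations: xor(true, y) = not(y), xor(false, y) = y
    return not_(b) if a == true() else b

assert and_(true(), true()) == true()
assert xor_(true(), false()) == true()
```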
Examples
Consider a formal algebraic specification for the boolean data type.
One possible algebraic specification may provide two constructor functions for the data-element: a true constructor and a false constructor. Thus, a boolean data element could be declared, constructed, and initialized to a value. In this scenario, all other connective elements, such as XOR and AND, would be additional functions. Thus, a data element could be instantiated with either "true" or "false" value, and additional functions could be used to perform an
|
https://en.wikipedia.org/wiki/Stylohyoid%20ligament
|
The stylohyoid ligament is a ligament that extends between the hyoid bone, and the temporal styloid process (of the temporal bone of the skull).
Anatomy
Attachments
It attaches at the lesser horn of hyoid bone inferiorly, and (the apex of) the styloid process of the temporal bone superiorly.
The ligament gives attachment to the superior-most fibres of the middle pharyngeal constrictor muscle.
Relations
The ligament is adjacent to the lateral wall of the oropharynx.
Inferiorly, it is adjacent to the hyoglossus.
Clinical significance
The stylohyoid ligament frequently contains a little cartilage in its center, which is sometimes partially ossified in Eagle syndrome.
Other animals
In many animals, the epihyal is a distinct bone in the centre of the stylohyoid ligament, which is similar to that seen in Eagle syndrome.
|
https://en.wikipedia.org/wiki/Lateral%20thyrohyoid%20ligament
|
The lateral thyrohyoid ligament (lateral hyothyroid ligament) is a round elastic cord, which forms the posterior border of the thyrohyoid membrane and passes between the tip of the superior cornu of the thyroid cartilage and the extremity of the greater cornu of the hyoid bone. The internal branch of the superior laryngeal nerve typically lies lateral to this ligament.
Triticeal cartilage
A small cartilaginous nodule (cartilago triticea), sometimes bony, is frequently found in the lateral thyrohyoid ligament.
|
https://en.wikipedia.org/wiki/Thyrohyoid%20membrane
|
The thyrohyoid membrane (or hyothyroid membrane) is a broad, fibro-elastic sheet of the larynx. It connects the upper border of the thyroid cartilage to the hyoid bone.
Structure
The thyrohyoid membrane is attached below to the upper border of the thyroid cartilage and to the front of its superior cornu, and above to the upper margin of the posterior surface of the body and greater cornu of the hyoid bone. It passes behind the posterior surface of the body of the hyoid. It is separated from the hyoid bone by a mucous bursa, which allows for the upward movement of the larynx during swallowing.
Its middle thicker part is termed the median thyrohyoid ligament. Its lateral thinner portions are pierced by the superior laryngeal vessels and the internal branch of the superior laryngeal nerve. Its anterior surface is in relation with the thyrohyoid muscle, sternohyoid muscle, and omohyoid muscles, and with the body of the hyoid bone. It is pierced by the superior laryngeal nerve. It is also pierced by the superior thyroid artery, where there is a thickening of the membrane.
Clinical significance
Superior laryngeal artery
The thyrohyoid membrane needs to be manipulated to access the superior thyroid artery.
History
The name thyrohyoid membrane refers to the two structures it connects: the thyroid cartilage and the hyoid bone. It may also be known as the hyothyroid membrane, with the order of the two elements reversed.
Additional images
|
https://en.wikipedia.org/wiki/Substrate%20coupling
|
In an integrated circuit, a signal can couple from one node to another via the substrate. This phenomenon is referred to as substrate coupling or substrate noise coupling.
The push for reduced cost, more compact circuit boards, and added customer features has provided incentives for the inclusion of analog functions on primarily digital MOS integrated circuits (ICs), forming mixed-signal ICs. In these systems, the speed of digital circuits is constantly increasing, chips are becoming more densely packed, interconnect layers are added, and analog resolution is increased. In addition, the recent increase in wireless applications and its growing market are introducing a new set of aggressive design goals for realizing mixed-signal systems. Here, the designer integrates radio frequency (RF) analog and baseband digital circuitry on a single chip. The goal is to make single-chip radio frequency integrated circuits (RFICs) on silicon, where all the blocks are fabricated on the same chip.

One of the advantages of this integration is low power dissipation for portability due to a reduction in the number of package pins and associated bond wire capacitance. Another reason that an integrated solution offers lower power consumption is that routing high-frequency signals off-chip often requires a 50 Ω impedance match, which can result in higher power dissipation. Other advantages include improved high-frequency performance due to reduced package interconnect parasitics, higher system reliability, smaller package count, and higher integration of RF components with VLSI-compatible digital circuits. In fact, the single-chip transceiver is now a reality.

The design of such systems, however, is a complicated task. There are two main challenges in realizing mixed-signal ICs. The first challenging task, specific to RFICs, is to fabricate good on-chip passive elements such as high-Q inductors. The second challenging task, applicable to any mixed-signal IC and the subject of this chap
|
https://en.wikipedia.org/wiki/Half-power%20point
|
The half-power point is the point at which the output power has dropped to half of its peak value; that is, at a level of approximately -3 dB.
In filters, optical filters, and electronic amplifiers, the half-power point is also known as half-power bandwidth and is a commonly used definition for the cutoff frequency.
In the characterization of antennas the half-power point is also known as half-power beamwidth and relates to measurement position as an angle and describes directionality.
Amplifiers and filters
This occurs when the output voltage has dropped to 1/√2 (~0.707) of the maximum output voltage and the power has dropped by half. A bandpass amplifier will have two half-power points, while a low-pass amplifier or a high-pass amplifier will have only one.
The bandwidth of a filter or amplifier is usually defined as the difference between the lower and upper half-power points. This is, therefore, also known as the 3 dB bandwidth. There is no lower half-power point for a low-pass amplifier, so the bandwidth is measured relative to DC, i.e., 0 Hz. There is no upper half-power point for an ideal high-pass amplifier, its bandwidth is theoretically infinite. In practice the stopband and transition band are used to characterize a high-pass.
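As a numerical illustration, the sketch below locates the two half-power points of an assumed second-order band-pass magnitude response and reports the 3 dB bandwidth as their difference; the centre frequency, quality factor, and response formula are example assumptions, not tied to any particular device.

```python
import numpy as np

# Assumed example: magnitude response of a series-RLC-style band-pass filter,
# |H(f)| = 1 / sqrt(1 + Q^2 (f/f0 - f0/f)^2), with peak value 1 at f0.
f0, Q = 1000.0, 5.0                       # illustrative centre frequency (Hz) and quality factor
f = np.linspace(1.0, 5000.0, 200000)
H = 1.0 / np.sqrt(1.0 + (Q * (f / f0 - f0 / f))**2)

half_power_level = 1.0 / np.sqrt(2.0)     # voltage ratio at the half-power (-3 dB) points
above = f[H >= half_power_level]
f_low, f_high = above[0], above[-1]

print(f"lower half-power point ~ {f_low:.1f} Hz")
print(f"upper half-power point ~ {f_high:.1f} Hz")
print(f"3 dB bandwidth ~ {f_high - f_low:.1f} Hz (theory: f0/Q = {f0 / Q:.1f} Hz)")
```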
Antenna beams
In antennas, the expression half-power point does not relate to frequency: instead, it describes the extent in space of an antenna beam. The half-power point is the angle off boresight at which the antenna gain first falls to half power (approximately -3 dB) from the peak. The angle between the points is known as the half-power beam width (or simply beam width).
Beamwidth is usually but not always expressed in degrees and for the horizontal plane.
It refers to the main lobe, when referenced to the peak effective radiated power of the main lobe.
Note that other definitions of beam width exist, such as the distance between nulls and distance between first side lobes.
Calculation
The beamwidth can be computed fo
|
https://en.wikipedia.org/wiki/Anti-Life%20Equation
|
The Anti-Life Equation is a fictional concept appearing in American comic books published by DC Comics. In Jack Kirby's Fourth World setting, the Anti-Life Equation is a formula for total control over the minds of sentient beings that is sought by Darkseid, who, for this reason, sends his forces to Earth, as he believes part of the equation exists in the subconsciousness of humanity. Various comics have defined the equation in different ways, but a common interpretation is that the equation may be seen as a mathematical proof of the futility of living, or of life as incarceration of spirit, per predominant religious and modern cultural suppositions.
History
Jack Kirby's original comics established the Anti-Life Equation as giving the being who learns it power to dominate the will of all sentient and sapient races. It is called the Anti-Life Equation because "if someone possesses absolute control over you — you're not really alive". Most stories featuring the Equation use this concept. The Forever People's Mother Box found the Anti-Life Equation in Sonny Sumo, but Darkseid, unaware of this, stranded him in ancient Japan. A man known as Billion-Dollar Bates had control over the Equation's power even without the Mother Box's aid, but was accidentally killed by one of his own guards.
When Metron and Swamp Thing attempt to breach the Source, which drives Swamp Thing temporarily mad, Darkseid discovers that part of the formula is love. Upon being told by the Dominators of their planned invasion of Earth, Darkseid promises not to interfere on the condition that the planet is not destroyed so his quest for the equation is not thwarted.
It is later revealed in Martian Manhunter (vol. 2) #33 that Darkseid first became aware of the equation approximately 300 years ago when he made contact with the people of Mars. Upon learning of the Martian philosophy that free will and spiritual purpose could be defined by a Life Equation, Darkseid postulated that there must exist a negat
|
https://en.wikipedia.org/wiki/Sony%20HDR-HC1
|
The Sony HDR-HC1, introduced in mid-2005 (MSRP US$1999), is the first consumer HDV camcorder to support 1080i.
The CMOS sensor has a resolution of 1920x1440 for digital still pictures and captures video at 1440x1080 interlaced, which is the resolution defined for HDV 1080i. The camera may also use the extra pixels for digital image stabilization.
The camcorder can also convert the captured HDV data to DV data for editing the video using non-linear editing systems which do not support HDV or for creating edits which are viewable on non-HDTV television sets.
The HVR-A1 is the prosumer version of the HDR-HC1. It has more manual controls and XLR ports.
Unique features
Expanded focus
Expanded focus lets the user magnify the image temporarily to obtain better manual focus. Expanded focus works in pause mode only; it is not possible to magnify the frame during recording.
A similar feature, named Focus Assist, appeared on the Canon HV20, which was released two years after the HDR-HC1. Focus Assist on Canon camcorders also works only when recording is paused.
Spot meter and spot focus
Spot meter and Spot focus are possible thanks to a touch-sensitive LCD screen, employed on most modern Sony consumer camcorders.
The user can touch the screen to specify a specific region of the image; the camcorder automatically adjusts focus or exposure according to distance to the object and to illumination of the selected spot.
Depending on a scene, changing focus with Spot Focus can cause focus "breathing" or "hunting", when the subject goes in and out of focus several times before the image stabilizes.
Shot transition
Shot transition allows for a smooth automatic scene transition. In particular, it makes rack focus easy.
Two sets of focus and zoom can be preset and stored in "Store-A" and "Store-B" memory slots. The settings can then be gradually applied from one to another within 4 seconds. The transition time is not adjustable.
Presently, the HDR-HC1 is the only consumer c
|
https://en.wikipedia.org/wiki/Microdissection
|
Microdissection refers to a variety of techniques where a microscope is used to assist in dissection.
Different kinds of techniques involve microdissection:
Chromosome microdissection — use of fine glass needle under a microscope to remove a portion from a complete chromosome.
Laser microdissection — use of a laser through a microscope to dissect selected cells.
Laser capture microdissection — use of a laser through a microscope to cause selected cells to adhere to a film.
Microscopy
|
https://en.wikipedia.org/wiki/Laser%20capture%20microdissection
|
Laser capture microdissection (LCM), also called microdissection, laser microdissection (LMD), or laser-assisted microdissection (LMD or LAM), is a method for isolating specific cells of interest from microscopic regions of tissue/cells/organisms (dissection on a microscopic scale with the help of a laser).
Principle
Laser-capture microdissection (LCM) is a method to procure subpopulations of tissue cells under direct microscopic visualization. LCM technology can harvest the cells of interest directly or can isolate specific cells by cutting away unwanted cells to give histologically pure enriched cell populations. A variety of downstream applications exist: DNA genotyping and loss of heterozygosity (LOH) analysis, RNA transcript profiling, cDNA library generation, proteomics discovery and signal-pathway profiling. The total time required to carry out this protocol is typically 1–1.5 h.
Extraction
A laser is coupled into a microscope and focused onto the tissue on the slide. By moving the laser with the optics or by moving the stage, the focus follows a trajectory which is predefined by the user. This trajectory, also called an element, is then cut out and separated from the adjacent tissue. After the cutting process, an extraction step follows, if extraction is desired. More recent technologies utilize non-contact microdissection.
There are several ways to extract tissue from a microscope slide with a histopathology sample on it. One is to press a sticky surface onto the sample and tear it out; this extracts the desired region, but because the surface is not selective it can also remove particles or unwanted tissue from the surface. Another is to melt a plastic membrane onto the sample and tear it out. The heat is introduced, for example, by a red or infrared (IR) laser onto a membrane stained with an absorbing dye. As this adheres the desired sample onto the membrane, as with any membrane that is put close to the histopathology sample surface, there might be some debris extracted. Another
|
https://en.wikipedia.org/wiki/Escape%20response
|
Escape response, escape reaction, or escape behavior is a mechanism by which animals avoid potential predation. It consists of a rapid sequence of movements, or lack of movement, that position the animal in such a way that allows it to hide, freeze, or flee from the supposed predator. Often, an animal's escape response is representative of an instinctual defensive mechanism, though there is evidence that these escape responses may be learned or influenced by experience.
The classical escape response follows this generalized, conceptual timeline: threat detection, escape initiation, escape execution, and escape termination or conclusion. Threat detection notifies an animal to a potential predator or otherwise dangerous stimulus, which provokes escape initiation, through neural reflexes or more coordinated cognitive processes. Escape execution refers to the movement or series of movements that will hide the animal from the threat or will allow for the animal to flee. Once the animal has effectively avoided the predator or threat, the escape response is terminated. Upon completion of the escape behavior or response, the animal may integrate the experience with its memory, allowing it to learn and adapt its escape response.
Escape responses are anti-predator behaviour that can vary from species to species. The behaviors themselves differ depending upon the species, but may include camouflaging techniques, freezing, or some form of fleeing (jumping, flying, withdrawal, etc.). In fact, variation between individuals is linked to increased survival. In addition, it is not merely increased speed that contributes to the success of the escape response; other factors, including reaction time and the individual's context can play a role. The individual escape response of a particular animal can vary based on an animal's previous experiences and its current state.
Evolutionary importance
The ability to perform an effective escape maneuver directly affects the fitness of the
|
https://en.wikipedia.org/wiki/Chromosome%20microdissection
|
Chromosome microdissection is a technique that physically removes a large section of DNA from a complete chromosome. The smallest portion of DNA that can be isolated using this method comprises 10 million base pairs - hundreds or thousands of individual genes.
Scientists who study chromosomes are known as cytogeneticists. They are able to identify each chromosome based on its unique pattern of dark and light bands. Certain abnormalities, however, cause chromosomes to have unusual banding patterns. For example, one chromosome may have a piece of another chromosome inserted within it, creating extra bands. Or, a portion of a chromosome may be repeated over and over again, resulting in an unusually wide, dark band (known as a homogeneously staining region). Some chromosomal aberrations have been linked to cancer and inherited genetic disorders, and the chromosomes of many tumor cells exhibit irregular bands. To understand more about what causes these conditions, scientists hope to determine which genes and DNA sequences are located near these unusual bands. Chromosome microdissection is a specialized way of isolating these regions by removing the DNA from the band and making that DNA available for further study.
To prepare cells for chromosome microdissection, a scientist first treats them with a chemical that forces them into metaphase: a phase of the cell's life-cycle where the chromosomes are tightly coiled and highly visible. Next, the cells are dropped onto a microscope slide so that the nucleus, which holds all of the genetic material together, breaks apart and releases the chromosomes onto the slide. Then, under a microscope, the scientist locates the specific band of interest, and, using a very fine needle, tears that band away from the rest of the chromosome. The researcher next produces multiple copies of the isolated DNA using a procedure called PCR (polymerase chain reaction). The scientist uses these copies to study the DNA from the unusual region of the
|
https://en.wikipedia.org/wiki/Transcription%20factor%20II%20D
|
Transcription factor II D (TFIID) is one of several general transcription factors that make up the RNA polymerase II preinitiation complex. RNA polymerase II holoenzyme is a form of eukaryotic RNA polymerase II that is recruited to the promoters of protein-coding genes in living cells. It consists of RNA polymerase II, a subset of general transcription factors, and regulatory proteins known as SRB proteins. Before the start of transcription, the transcription Factor II D (TFIID) complex binds to the core promoter DNA of the gene through specific recognition of promoter sequence motifs, including the TATA box, Initiator, Downstream Promoter, Motif Ten, or Downstream Regulatory elements.
Functions
Coordinates the activities of more than 70 polypeptides required for initiation of transcription by RNA polymerase II
Binds to the core promoter to position the polymerase properly
Serves as the scaffold for assembly of the remainder of the transcription complex
Acts as a channel for regulatory signals
Structure
TFIID is itself composed of TBP and several subunits called TATA-binding protein Associated Factors (TBP-associated factors, or TAFs). In a test tube, only TBP is necessary for transcription at promoters that contain a TATA box. TAFs, however, add promoter selectivity, especially if there is no TATA box sequence for TBP to bind to. TAFs are included in two distinct complexes, TFIID and B-TFIID. The TFIID complex is composed of TBP and more than eight TAFs. But, the majority of TBP is present in the B-TFIID complex, which is composed of TBP and TAFII170 (BTAF1) in a 1:1 ratio. TFIID and B-TFIID are not equivalent, since transcription reactions utilizing TFIID are responsive to gene specific transcription factors such as SP1, while reactions reconstituted with B-TFIID are not.
Subunits in the TFIID complex include:
TBP (TATA binding protein), or:
TBP-related factors in animals (TBPL1; TBPL2)
TAF1 (TAFII250)
TAF2 (CIF150)
TAF3 (TAFII140)
TAF4 (TAFII130/135
|
https://en.wikipedia.org/wiki/Regular%20economy
|
A regular economy is an economy characterized by an excess demand function which has the property that its slope at any equilibrium price vector is non-zero. In other words, if we graph the excess demand function against prices, then the excess demand function "cuts" the x-axis assuring that each equilibrium is locally unique. Local uniqueness in turn permits the use of comparative statics - an analysis of how the economy responds to external shocks - as long as these shocks are not too large.
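A one-good toy example (entirely hypothetical) makes the local-uniqueness point concrete: if the excess demand is z(p) = e/p − 1 for an endowment parameter e, the equilibrium p* = e satisfies z'(p*) = −1/e ≠ 0, so the equilibrium is locally unique and shifts smoothly with a small change in e.

```python
# Hypothetical excess demand for a single (normalized) good: z(p) = e/p - 1,
# where e is an endowment parameter. The equilibrium price solves z(p*) = 0,
# i.e. p* = e. Regularity: dz/dp evaluated at p* equals -1/e, which is non-zero,
# so the equilibrium is locally unique and comparative statics are well defined.

def excess_demand(p, e=1.0):
    return e / p - 1.0

def equilibrium_price(e):
    return e   # closed form for this toy economy

for e in (1.0, 1.1):   # a small endowment shock moves the equilibrium smoothly
    p_star = equilibrium_price(e)
    slope = -e / p_star**2
    print(f"e = {e}: p* = {p_star:.3f}, z(p*) = {excess_demand(p_star, e):.1e}, dz/dp(p*) = {slope:.3f}")
```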
An important result due to Debreu (1970) states that almost any economy, defined by an initial distribution of consumers' endowments, is regular. In technical terms, the set of nonregular economies is of Lebesgue measure zero.
Combined with the index theorem this result implies that almost any economy will have a finite (and odd) number of equilibria.
|
https://en.wikipedia.org/wiki/LaSalle%27s%20invariance%20principle
|
LaSalle's invariance principle (also known as the invariance principle, Barbashin-Krasovskii-LaSalle principle, or Krasovskii-LaSalle principle) is a criterion for the asymptotic stability of an autonomous (possibly nonlinear) dynamical system.
Global version
Suppose a system is represented as

ẋ = f(x),

where x is the vector of variables, with

f(0) = 0.

If a C1 (see Smoothness) function V(x) can be found such that

V̇(x) ≤ 0 for all x (negative semidefinite),

then the set of accumulation points of any trajectory is contained in I, where I is the union of complete trajectories contained entirely in the set {x : V̇(x) = 0}.

If we additionally have that the function V is positive definite, i.e.

V(x) > 0 for all x ≠ 0, and V(0) = 0,

and if I contains no trajectory of the system except the trivial trajectory x(t) = 0 for t ≥ 0, then the origin is asymptotically stable.

Furthermore, if V is radially unbounded, i.e.

V(x) → ∞ as ‖x‖ → ∞,

then the origin is globally asymptotically stable.
Local version
If

V(x) > 0 when x ≠ 0 and V̇(x) ≤ 0

hold only for x in some neighborhood D of the origin, and the set

{x ∈ D : V̇(x) = 0}

does not contain any trajectories of the system besides the trajectory x(t) = 0, t ≥ 0, then the local version of the invariance principle states that the origin is locally asymptotically stable.
Relation to Lyapunov theory
If V̇(x) is negative definite, then the global asymptotic stability of the origin is a consequence of Lyapunov's second theorem. The invariance principle gives a criterion for asymptotic stability in the case when V̇(x) is only negative semidefinite.
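A numerical illustration of this semidefinite case is the pendulum with friction treated below. Taking m = l = g = 1 and a damping coefficient k (assumed values), the energy-like function V = ω²/2 + (1 − cos θ) has V̇ = −kω² ≤ 0, which vanishes whenever ω = 0; the sketch simulates the system and checks that V is (essentially) nonincreasing while the state still converges to the origin.

```python
import numpy as np

# Damped pendulum with m = l = g = 1 and damping coefficient k (assumed values):
#   theta'' + k*theta' + sin(theta) = 0
# Candidate function V = 0.5*omega**2 + (1 - cos(theta)) has
# dV/dt = -k*omega**2 <= 0, which is only negative semidefinite (it vanishes
# whenever omega = 0), so Lyapunov's second theorem alone is inconclusive;
# LaSalle's invariance principle still yields asymptotic stability of the origin.

k = 0.5
dt, steps = 1e-3, 40000
theta, omega = 2.0, 0.0                         # start well away from the origin

def V(theta, omega):
    return 0.5 * omega**2 + (1.0 - np.cos(theta))

v_prev, max_increase = V(theta, omega), 0.0
for _ in range(steps):
    theta += dt * omega                         # semi-implicit Euler step
    omega += dt * (-k * omega - np.sin(theta))
    v_now = V(theta, omega)
    max_increase = max(max_increase, v_now - v_prev)
    v_prev = v_now

print(f"final state: theta = {theta:.4f}, omega = {omega:.4f}")   # close to (0, 0)
print(f"largest single-step increase in V: {max_increase:.2e}")   # essentially zero
```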
Examples
Simple example
Example taken from.
Consider the vector field in the plane. The function satisfies , and is radially unbounded, showing that the origin is globally asymptotically stable.
Pendulum with friction
This section will apply the invariance principle to establish the local asymptotic stability of a simple system, the pendulum with friction. This system can be modeled with the differential equation
where is the angle the pendulum makes with the vertical normal, is the mass of the pendulum, is the
|
https://en.wikipedia.org/wiki/IBM%20TPNS
|
Teleprocessing Network Simulator (TPNS) is an IBM licensed program, first released in 1976 as a test automation tool to simulate the end-user activity of network terminal(s) to a mainframe computer system, for functional testing, regression testing, system testing, capacity management, benchmarking and stress testing.
In 2002, IBM re-packaged TPNS and released Workload Simulator for z/OS and S/390 (WSim) as a successor product.
History
Teleprocessing Network Simulator (TPNS) Version 1 Release 1 (V1R1) was introduced as Program Product 5740-XT4 in February 1976, followed by four additional releases up to V1R5 (1981).
In August 1981, IBM announced TPNS Version 2 Release 1 (V2R1) as Program Product 5662-262, followed by three additional releases up to V2R4 (1987).
In January 1989, IBM announced TPNS Version 3 Release 1 (V3R1) as Program Product 5688-121, followed by four additional releases up to 1996.
In December 1997, IBM announced a Service Level 9711 Functional and Service Enhancements release.
In September 1998, IBM announced the TPNS Test Manager as a usability enhancement that further automates the test process, improving productivity through a logical flow and streamlining TPNS-based testing of IBM 3270 applications or CPI-C transaction programs.
In December 2001, IBM announced a Service Level 0110 Functional and Service Enhancements release.
In August 2002, IBM announced Workload Simulator for z/OS and S/390 (WSim) V1.1 as Program Number 5655-I39, a re-packaged successor product to TPNS, alongside the WSim Test Manager V1.1, a re-packaged successor to the TPNS Test Manager.
In November 2012, IBM announced a maintenance update of Workload Simulator for z/OS and S/390 (WSim) V1.1, to simplify the installation of updates to the product.
In December 2015, IBM announced enhancements to Workload Simulator for z/OS and S/390 (WSim) V1.1, providing new utilities for TCP/IP data capture and script generation.
Features
Simulation support
Telep
|
https://en.wikipedia.org/wiki/Nuchal%20ligament
|
The nuchal ligament is a ligament at the back of the neck that is continuous with the supraspinous ligament.
Structure
The nuchal ligament extends from the external occipital protuberance on the skull and median nuchal line to the spinous process of the seventh cervical vertebra in the lower part of the neck.
From the anterior border of the nuchal ligament, a fibrous lamina is given off. This is attached to the posterior tubercle of the atlas, and to the spinous processes of the cervical vertebrae, and forms a septum between the muscles on either side of the neck.
The trapezius and splenius capitis muscle attach to the nuchal ligament.
Function
It is a tendon-like structure that has developed independently in humans and other animals well adapted for running. In some four-legged animals, particularly ungulates and canids, the nuchal ligament serves to sustain the weight of the head.
Clinical significance
In Chiari malformation treatment, decompression and duraplasty with a harvested nuchal ligament showed similar outcomes to pericranial and artificial grafts.
Other animals
In sheep and cattle it is known as the paxwax. It relieves the animal of the weight of its head.
The nuchal ligament is unusual in being a ligament with an elastic component, allowing for stretch. Most ligaments are made mostly of highly aligned collagen fibres, which do not permit stretching.
Structurally, the nuchal ligament is formed by the association of elastin proteins with type III collagen (45%). The collagen fibrils share a consistent size and helical pattern, which gives the ligament its tensile strength, while the elastin allows for flexibility. These two components maintain a balance that permits constant weight-bearing of the head and multidirectional movement without the ligament being damaged by over-use or over-stretching.
In most other mammals, including the great apes, the
|
https://en.wikipedia.org/wiki/Coactivator%20%28genetics%29
|
A coactivator is a type of transcriptional coregulator that binds to an activator (a transcription factor) to increase the rate of transcription of a gene or set of genes. The activator contains a DNA binding domain that binds either to a DNA promoter site or a specific DNA regulatory sequence called an enhancer. Binding of the activator-coactivator complex increases the speed of transcription by recruiting general transcription machinery to the promoter, therefore increasing gene expression. The use of activators and coactivators allows for highly specific expression of certain genes depending on cell type and developmental stage.
Some coactivators also have histone acetyltransferase (HAT) activity. HATs form large multiprotein complexes that weaken the association of histones to DNA by acetylating the N-terminal histone tail. This provides more space for the transcription machinery to bind to the promoter, therefore increasing gene expression.
Activators are found in all living organisms, but coactivator proteins are typically found only in eukaryotes, because eukaryotes are more complex and require a more intricate mechanism of gene regulation. In eukaryotes, coactivators are usually proteins that are localized in the nucleus.
Mechanism
Some coactivators indirectly regulate gene expression by binding to an activator and inducing a conformational change that then allows the activator to bind to the DNA enhancer or promoter sequence. Once the activator-coactivator complex binds to the enhancer, RNA polymerase II and other general transcription machinery are recruited to the DNA and transcription begins.
Histone acetyltransferase
Nuclear DNA is normally wrapped tightly around histones, making it hard or impossible for the transcription machinery to access the DNA. This association is due primarily to the electrostatic attraction between the DNA and histones as the DNA phosphate backbone is negatively charged and histones are rich in lysine residues, which are posi