https://en.wikipedia.org/wiki/Convex%20geometry
|
In mathematics, convex geometry is the branch of geometry studying convex sets, mainly in Euclidean space. Convex sets occur naturally in many areas: computational geometry, convex analysis, discrete geometry, functional analysis, geometry of numbers, integral geometry, linear programming, probability theory, game theory, etc.
Classification
According to the Mathematics Subject Classification MSC2010, the mathematical discipline Convex and Discrete Geometry includes three major branches:
general convexity
polytopes and polyhedra
discrete geometry
(though only portions of the latter two are included in convex geometry).
General convexity is further subdivided as follows:
axiomatic and generalized convexity
convex sets without dimension restrictions
convex sets in topological vector spaces
convex sets in 2 dimensions (including convex curves)
convex sets in 3 dimensions (including convex surfaces)
convex sets in n dimensions (including convex hypersurfaces)
finite-dimensional Banach spaces
random convex sets and integral geometry
asymptotic theory of convex bodies
approximation by convex sets
variants of convex sets (star-shaped, (m, n)-convex, etc.)
Helly-type theorems and geometric transversal theory
other problems of combinatorial convexity
length, area, volume
mixed volumes and related topics
valuations on convex bodies
inequalities and extremum problems
convex functions and convex programs
spherical and hyperbolic convexity
Historical note
Convex geometry is a relatively young mathematical discipline. Although the first known contributions to convex geometry date back to antiquity and can be traced in the works of Euclid and Archimedes, it became an independent branch of mathematics at the turn of the 20th century, mainly due to the works of Hermann Brunn and Hermann Minkowski in dimensions two and three. Many of their results were soon generalized to spaces of higher dimensions, and in 1934
|
https://en.wikipedia.org/wiki/Northwest%20Atlantic%20Fisheries%20Centre
|
The Northwest Atlantic Fisheries Centre is a Government of Canada research facility and office complex located in St. John's, Newfoundland and Labrador.
It is primarily used by the Department of Fisheries and Oceans.
External links
Northwest Atlantic Fisheries Centre (NAFC)
The DFO: watchdog of our waters
|
https://en.wikipedia.org/wiki/Maurice%20Lamontagne%20Institute
|
The Maurice Lamontagne Institute is a marine science research institute located in Mont-Joli, Quebec, and is part of the Canadian Department of Fisheries and Oceans.
History
The Maurice Lamontagne Institute was created in 1987. Its mission was to help the Department of Fisheries and Oceans gather and organize documentary resources for the needs of the Québec region.
Description
The Maurice Lamontagne Institute employs around 300 people. Its fields of focus are ocean science and aquatic ecosystems management. Its activities include research on aquatic invasive species, fish stocks and marine mammals, and ocean ecosystem dynamics, forecasting and monitoring of water levels, and developing technological solutions for navigation.
The institute covers an area of 25,000 square metres and has 70 labs onsite.
Library
The library of the Maurice Lamontagne Institute focuses on specialized collections concerning the oceans in Québec and in Canada. The library's collection comprises 61,000 monographs and 1,100 periodicals.
Research
Researchers at the institute have access to the following vessels:
CCGS Calanus II
CCGS Frederick G. Creed
CCGS Martha L. Black
CCGS Alfred Needler
CCGS Hudson
CCGS Teleost
|
https://en.wikipedia.org/wiki/Thin-film%20bulk%20acoustic%20resonator
|
A thin-film bulk acoustic resonator (FBAR or TFBAR) is a device consisting of a piezoelectric material manufactured by thin film methods between two conductive – typically metallic – electrodes and acoustically isolated from the surrounding medium. The operation is based on the piezoelectricity of the piezolayer between the electrodes.
FBAR devices using piezoelectric films with thicknesses ranging from several micrometres down to tenths of micrometres resonate in the frequency range of 100 MHz to 20 GHz. FBAR or TFBAR resonators fall in the category of bulk acoustic wave (BAW) resonators and piezoelectric resonators, and they are used in applications where high frequency and small size and weight are needed.
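The quoted frequency range follows from the fundamental thickness-mode relation: at resonance the film is one half acoustic wavelength thick, so f ≈ v/(2t). A minimal sketch (the half-wavelength relation is the standard textbook approximation, and the AlN acoustic velocity is an assumed round number, not device data):

```python
# Fundamental thickness-mode resonance of a free piezoelectric plate:
# f = v / (2 * t), with v the acoustic velocity and t the film thickness.

V_ALN = 11_000.0  # m/s, rough longitudinal acoustic velocity in AlN (assumed)

def fbar_resonance_hz(thickness_m: float, velocity_m_s: float = V_ALN) -> float:
    return velocity_m_s / (2.0 * thickness_m)

for t_um in (10.0, 1.0, 0.3):  # several micrometres down to tenths
    print(f"{t_um:5.1f} um -> {fbar_resonance_hz(t_um * 1e-6) / 1e9:6.2f} GHz")
# 10 um -> 0.55 GHz; 1 um -> 5.50 GHz; 0.3 um -> 18.33 GHz
```

The outputs land in the 100 MHz to 20 GHz window described above.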
Piezoelectricity in thin films
The crystallographic orientation of a thin film depends on the piezoelectric material selected and on many other factors, such as the surface on which the film is grown and the thin-film growth conditions during manufacturing (temperature, pressure, process gases, vacuum conditions, etc.).
Any material from the list of piezoelectric materials, such as lead zirconate titanate (PZT) or barium strontium titanate (BST), could act as the active material in an FBAR. However, the binary compounds aluminium nitride (AlN) and zinc oxide (ZnO) are the two most studied piezoelectric materials manufactured for high-frequency FBAR realisations. This is because properties such as stoichiometry are easier to control in binary compounds than in ternary compounds manufactured by thin-film methods. For example, it is known that thin-film ZnO with the C axis of the crystal structure (the crystalline Z axis) normal to the substrate surface excites longitudinal (L) waves. Shear (transverse) (S) waves are excited if the C axis of the film's crystal structure is tilted by 41°. It is also possible, depending on the crystal structure of the film, that both wave types (L and S) are excited. Therefore the understanding and control of the crystal structure of the manuf
|
https://en.wikipedia.org/wiki/Simpson%E2%80%93Golabi%E2%80%93Behmel%20syndrome
|
Simpson–Golabi–Behmel syndrome (SGBS) is a rare inherited congenital disorder that can cause craniofacial, skeletal, vascular, cardiac, and renal abnormalities. There is a high prevalence of associated cancers in those with SGBS, including Wilms tumors, neuroblastoma, and tumors of the adrenal gland, liver, lungs, and abdominal organs.
The syndrome is inherited in an X-linked recessive manner. Females who possess one copy of the mutation are considered carriers of the syndrome but may still express the phenotype to varying degrees, suffering mild to severe disease. Affected males experience a higher likelihood of fetal death.
Types
There are two types of SGBS, each linked to a different gene.
SGBS is also considered to be an overgrowth syndrome (OGS). OGS is characterized by a 2–3 standard deviation increase in weight, height, or head circumference above the average for sex and age. One of the most noted features of OGSs is the increased risk of neoplasms in certain OGSs. SGBS in particular has been found to have a 10% tumor predisposition frequency, with 94% of cases occurring in the abdominal region, most being malignant. It is common for the tumors to be embryonal in type and to appear before the age of 10.
There are five different types of tumors that patients with SGBS might develop, all intra-abdominal: Wilms tumor, hepatoblastoma, hepatocarcinoma, gonadoblastoma, and neuroblastoma.
The most common types of tumors developed in patients are the Wilms tumor and hepatoblastoma.
Symptoms and signs
Signs and symptoms may include one or more of the following:
Macrosomia
Macroglossia
Advanced bone age
Organomegaly, especially of the liver and spleen
Malformations of the kidneys (which may result in sporadic hypokalemia)
Gastrointestinal and malabsorption disorders
Muscle weakness
Bone pain
Neonatal hypoglycemia
Neoplasms
Congenital diaphragmatic hernia
Characteristic facial features (wide or protruding jaw and tongue, widened nasal bridge, upturned nasal tip), most commonly observed in males, or elonga
|
https://en.wikipedia.org/wiki/Hitachimycin
|
Hitachimycin, also known as stubomycin, is a cyclic polypeptide produced by Streptomyces that acts as an antibiotic. It exhibits cytotoxic activity against mammalian cells, Gram-positive bacteria, yeast, and fungi, as well as hemolytic activity; this is mediated by changes at the cell membrane and subsequent lysis. Owing to its cytotoxic activity against mammalian cells and tumors, it was first proposed as an antitumor antibiotic.
As of 2007, it has not been used in a clinical setting.
|
https://en.wikipedia.org/wiki/Active%20chromatin%20sequence
|
An active chromatin sequence (ACS) is a region of DNA in a eukaryotic chromosome in which histone modifications such as acetylation lead to exposure of the DNA sequence, thus allowing binding of transcription factors and transcription to take place. Active chromatin may also be called euchromatin. ACSs may occur in non-expressed gene regions, which are assumed to be "poised" for transcription. Once exposed, the sequence often contains a promoter at which transcription begins. At this site, acetylation or methylation can take place, causing a conformational change in the chromatin. At the active chromatin sequence site, deacetylation can cause the gene to be repressed when it is not being expressed.
See also
Chromatin
|
https://en.wikipedia.org/wiki/Quadratic%20form%20%28statistics%29
|
In multivariate statistics, if $\varepsilon$ is a vector of $n$ random variables, and $\Lambda$ is an $n$-dimensional symmetric matrix, then the scalar quantity $\varepsilon^T\Lambda\varepsilon$ is known as a quadratic form in $\varepsilon$.
Expectation
It can be shown that
$$\operatorname{E}\left[\varepsilon^T\Lambda\varepsilon\right] = \operatorname{tr}(\Lambda\Sigma) + \mu^T\Lambda\mu,$$
where $\mu$ and $\Sigma$ are the expected value and variance-covariance matrix of $\varepsilon$, respectively, and $\operatorname{tr}$ denotes the trace of a matrix. This result only depends on the existence of $\mu$ and $\Sigma$; in particular, normality of $\varepsilon$ is not required.
A book treatment of the topic of quadratic forms in random variables is that of Mathai and Provost.
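The expectation formula is easy to check numerically. A minimal NumPy sketch (the dimension, mean, covariance, and $\Lambda$ below are arbitrary illustrative choices; the Gaussian is used only as a convenient sampler, since the result does not require normality):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
mu = np.array([1.0, -2.0, 0.5])        # mean vector of eps
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * np.eye(n)        # a valid (positive definite) covariance
L = rng.standard_normal((n, n))
Lam = (L + L.T) / 2                    # symmetric matrix of the quadratic form

eps = rng.multivariate_normal(mu, Sigma, size=200_000)
q = np.einsum("ij,jk,ik->i", eps, Lam, eps)   # eps_i' Lam eps_i per sample

mc_mean = q.mean()                                # Monte Carlo estimate
closed = np.trace(Lam @ Sigma) + mu @ Lam @ mu    # tr(Lam Sigma) + mu' Lam mu
print(mc_mean, closed)  # the two values agree up to Monte Carlo error
```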
Proof
Since the quadratic form is a scalar quantity, $\varepsilon^T\Lambda\varepsilon = \operatorname{tr}(\varepsilon^T\Lambda\varepsilon)$.
Next, by the cyclic property of the trace operator,
$$\operatorname{E}[\operatorname{tr}(\varepsilon^T\Lambda\varepsilon)] = \operatorname{E}[\operatorname{tr}(\Lambda\varepsilon\varepsilon^T)].$$
Since the trace operator is a linear combination of the components of the matrix, it therefore follows from the linearity of the expectation operator that
$$\operatorname{E}[\operatorname{tr}(\Lambda\varepsilon\varepsilon^T)] = \operatorname{tr}(\Lambda\operatorname{E}[\varepsilon\varepsilon^T]).$$
A standard property of variances then tells us that this is
$$\operatorname{tr}(\Lambda(\Sigma + \mu\mu^T)).$$
Applying the cyclic property of the trace operator again, we get
$$\operatorname{tr}(\Lambda\Sigma) + \mu^T\Lambda\mu.$$
Variance in the Gaussian case
In general, the variance of a quadratic form depends greatly on the distribution of $\varepsilon$. However, if $\varepsilon$ does follow a multivariate normal distribution, the variance of the quadratic form becomes particularly tractable. Assume for the moment that $\Lambda$ is a symmetric matrix. Then,
$$\operatorname{var}\left[\varepsilon^T\Lambda\varepsilon\right] = 2\operatorname{tr}(\Lambda\Sigma\Lambda\Sigma) + 4\mu^T\Lambda\Sigma\Lambda\mu.$$
In fact, this can be generalized to find the covariance between two quadratic forms on the same $\varepsilon$ (once again, $\Lambda_1$ and $\Lambda_2$ must both be symmetric):
$$\operatorname{cov}\left[\varepsilon^T\Lambda_1\varepsilon,\; \varepsilon^T\Lambda_2\varepsilon\right] = 2\operatorname{tr}(\Lambda_1\Sigma\Lambda_2\Sigma) + 4\mu^T\Lambda_1\Sigma\Lambda_2\mu.$$
In addition, a quadratic form such as this follows a generalized chi-squared distribution.
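Continuing the NumPy sketch from the Expectation section (same `q`, `Lam`, `Sigma`, and `mu`), the Gaussian variance formula can be checked the same way:

```python
mc_var = q.var()  # Monte Carlo variance of the quadratic form

# Closed form under normality: 2 tr(Lam Sigma Lam Sigma) + 4 mu' Lam Sigma Lam mu
closed_var = (2 * np.trace(Lam @ Sigma @ Lam @ Sigma)
              + 4 * mu @ Lam @ Sigma @ Lam @ mu)
print(mc_var, closed_var)  # agreement up to Monte Carlo error
```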
Computing the variance in the non-symmetric case
Some texts incorrectly state that the above variance or covariance results hold without requiring $\Lambda$ to be symmetric. The case for general $\Lambda$ can be derived by noting that
$$\varepsilon^T\Lambda^T\varepsilon = \varepsilon^T\Lambda\varepsilon,$$
so
$$\varepsilon^T\Lambda\varepsilon = \varepsilon^T\left(\frac{\Lambda + \Lambda^T}{2}\right)\varepsilon$$
is a quadratic form in the symmetric matrix $\tilde{\Lambda} = (\Lambda + \Lambda^T)/2$, so the mean and variance expressions are the same, provided $\Lambda$ is replaced by $\tilde{\Lambda}$ therein.
Examples of quadratic forms
In the setting where one has a set of observations $y$ and an operator matrix $H$, the residual sum of squares can be written as a quadratic form in $y$:
$$\textrm{RSS} = y^T(I - H)^T(I - H)y.$$
|
https://en.wikipedia.org/wiki/Biological%20effects%20of%20high-energy%20visible%20light
|
High-energy visible light (HEV light) is short-wave light in the violet/blue band from 400 to 450 nm in the visible spectrum, which has a number of purported negative biological effects, namely on circadian rhythm and retinal health (blue-light hazard), which can lead to age-related macular degeneration. Increasingly, blue blocking filters are being designed into glasses to avoid blue light's purported negative effects. However, there is no good evidence that filtering blue light with spectacles has any effect on eye health, eye strain, sleep quality or vision quality.
Background
Blue LED light
Blue LEDs are often the target of blue-light research due to the increasing prevalence of LED displays and solid-state lighting (e.g., LED illumination), as well as their bluer appearance (higher color temperature) compared with traditional sources. However, natural sunlight has a relatively high spectral density of blue light, so exposure to high levels of blue light is not a new or unique phenomenon, despite the relatively recent emergence of LED display technologies. While LED displays emit white by exciting all RGB LEDs, white light from lighting is generally produced by pairing a blue LED emitting primarily near 450 nm with a phosphor that down-converts some of the blue light to longer wavelengths; the blue and down-converted light then combine to form white light. This is often considered "the next generation of illumination", as SSL technology dramatically reduces energy resource requirements.
Luminous efficiency
Blue LEDs, particularly those used in white LEDs, operate at around 450 nm, where V(λ)=0.038. This means that blue light at 450 nm requires about 25 times the radiant flux (energy) for one to perceive the same luminous flux as green light at 555 nm. For comparison, UV-A at 380 nm (V(λ)=0.000 039) requires 25 641 times the amount of radiometric energy to be perceived at the same intensity as green, three orders of magnitude greater than blue LEDs. Studies often compare animal
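The ratios quoted above follow directly from the luminous efficiency function: the radiant flux needed to match a given luminous flux scales as 1/V(λ), since luminous flux = 683 lm/W × V(λ) × radiant flux. A quick check (a sketch using the V(λ) values quoted above; 683 lm/W is the standard peak luminous efficacy):

```python
# Radiant flux required to match the perceived brightness of 555 nm green
# scales as 1 / V(lambda).
V = {"green 555 nm": 1.0, "blue 450 nm": 0.038, "UV-A 380 nm": 0.000039}

for name, v in V.items():
    print(f"{name}: {1 / v:,.0f}x the radiant flux of green")
# blue 450 nm -> ~26x; UV-A 380 nm -> ~25,641x
```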
|
https://en.wikipedia.org/wiki/Hesperidin
|
Hesperidin is a flavanone glycoside found in citrus fruits. Its aglycone is hesperetin. Its name is derived from the word "hesperidium", for fruit produced by citrus trees.
Hesperidin was first isolated in 1828 by French chemist M. Lebreton from the white inner layer of citrus peels (mesocarp, albedo).
Hesperidin is believed to play a role in plant defense.
Sources
Rutaceae
700–2,500 ppm in fruit of Citrus aurantium (bitter orange, petitgrain)
in orange juice (Citrus sinensis)
in Zanthoxylum gilletii
in lemon
in lime
in leaves of Agathosma serratifolia
Lamiaceae
Peppermint contains hesperidin.
Content in foods
Approximate hesperidin content, per 100 g (dried herb) or per 100 ml (juice):
481 mg peppermint, dried
44 mg blood orange, pure juice
26 mg orange, pure juice
18 mg lemon, pure juice
14 mg lime, pure juice
1 mg grapefruit, pure juice
Metabolism
Hesperidin 6-O-α-L-rhamnosyl-β-D-glucosidase, an enzyme that uses hesperidin and water to produce hesperetin and rutinose, is found in Ascomycetes species.
Research
As a flavanone found in the rinds of citrus fruits (such as oranges or lemons), hesperidin is under preliminary research for its possible biological properties in vivo. One review did not find evidence that hesperidin affected blood lipid levels or hypertension. Another review found that hesperidin may improve endothelial function in humans, but the overall results were inconclusive.
Biosynthesis
The biosynthesis of hesperidin stems from the phenylpropanoid pathway, in which the natural amino acid L-phenylalanine undergoes a deamination by phenylalanine ammonia lyase to afford (E)-cinnamate. The resulting monocarboxylate undergoes an oxidation by cinnamate 4-hydroxylase to afford (E)-4-coumarate, which is transformed into (E)-4-coumaroyl-CoA by 4-coumarate-CoA ligase. (E)-4-coumaroyl-CoA is then subjected to the type III polyketide synthase naringenin chalcone synthase, undergoing successive condensation reactions and ultimately a ring-closing Claisen condensation
|
https://en.wikipedia.org/wiki/Pot-in-pot%20refrigerator
|
A pot-in-pot refrigerator, clay pot cooler or zeer () is an evaporative cooling refrigeration device which does not use electricity. It uses a porous outer clay pot (lined with wet sand) containing an inner pot (which can be glazed to prevent penetration by the liquid) within which the food is placed. The evaporation of the outer liquid draws heat from the inner pot. The device can cool any substance, and requires only a flow of relatively dry air and a source of water.
History
Many clay pots from around 3000 BC were discovered in the Indus Valley civilization and are considered to have been used for cooling as well as storing water. The pots are similar to the present-day ghara and matki used in India and Pakistan.
There is evidence that evaporative cooling may have been used in North Africa as early as the Old Kingdom of Egypt, circa 2500 BC. Frescoes show slaves fanning water jars, which would increase air flow around the porous jars to aid evaporation and cool the contents. These jars exist even today and are called zeer, hence the name of the pot cooler. Despite having been developed in Northern Africa, the technology appears to have been largely forgotten since the advent of modern electrical refrigerators.
However, in the Indian subcontinent, the ghara, matka and surahi, all of which are different types of clay water pots, are in everyday use to cool water. In Spain, botijos are popular. A botijo is a porous clay container used to keep and cool water; they have been in use for centuries and are still in relatively widespread use. Botijos are well suited to the dry Mediterranean climate; locally, the cooling effect is known as the "botijo effect".
In the 1890s, gold miners in Australia developed the Coolgardie safe, based on the same principles.
In rural northern Nigeria in the 1990s, Mohamed Bah Abba developed the Pot-in-Pot Preservation Cooling System, consisting of a small clay pot placed inside a larger one, and the space between the two filled with moist sand. The i
|
https://en.wikipedia.org/wiki/Round-trip%20translation
|
Round-trip translation (RTT), also known as back-and-forth translation, recursive translation and bi-directional translation, is the process of translating a word, phrase or text into another language (forward translation), then translating the result back into the original language (back translation), using machine translation (MT) software. It is often used by laypeople to evaluate a machine translation system, or to test whether a text is suitable for MT when they are unfamiliar with the target language. Because the resulting text can often differ substantially from the original, RTT can also be a source of entertainment.
Software quality
To compare the quality of different machine translation systems, users perform RTT and compare the resulting text to the original. The theory is that the closer the result of the RTT is to the original text, the higher the quality of the machine translation system. One of the problems with this technique is that if there is a problem with the resulting text it is impossible to know whether the error occurred in the forward translation, in the back translation, or in both. In addition, it is possible to get a good back translation from a bad forward translation. A study using the automatic evaluation methods BLEU and F-score compared five different free online translation programs, evaluating the quality of both the forward translation and the back translation, and found no correlation between the quality of the forward translation and the quality of the back translation (i.e., a high quality forward translation did not always correspond to a high quality back translation). The author concluded that RTT was a poor method of predicting the quality of machine translation software. This conclusion was reinforced by a more in-depth study also using automatic evaluation methods. A subsequent study which included human evaluation of the back translation in addition to automatic evaluation methods found that RTT might have some abili
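The RTT comparison itself is straightforward to script. A minimal sketch using NLTK's sentence-level BLEU (the `translate` function is a hypothetical stand-in for whatever MT system is being evaluated; it is not a real API):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def translate(text: str, src: str, dst: str) -> str:
    """Hypothetical placeholder for a call to a real MT system."""
    raise NotImplementedError

def round_trip_bleu(original: str, pivot: str = "de") -> float:
    forward = translate(original, "en", pivot)   # forward translation
    back = translate(forward, pivot, "en")       # back translation
    # BLEU between the original text and the round-trip result
    return sentence_bleu([original.split()], back.split(),
                         smoothing_function=SmoothingFunction().method1)
```

As the studies above found, a high score from such a script reflects the forward and back translations jointly, so it is a weak predictor of forward-translation quality on its own.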
|
https://en.wikipedia.org/wiki/Flavorist
|
A flavorist, also known as a flavor chemist, is someone who uses chemistry to engineer artificial and natural flavors. The tools and materials used by flavorists are almost the same as those used by perfumers, with the exception that flavorists seek to mimic or modify both the olfactory and gustatory properties of various food products rather than creating just abstract smells. Additionally, the materials and chemicals that a flavorist utilizes for flavor creation must be safe for human consumption.
The profession of flavorist came about when affordable home refrigeration spurred the growth of food processing technology, which could affect the quality of the flavor of food. In some cases, these technologies can remove naturally occurring flavors. To remedy the flavor loss, the food processing industry created the flavor industry. The chemists who met this demand from the food processing industry became known as flavorists.
Education
Educational requirements for flavorists vary. Flavorists often hold degrees in chemistry, biology, or food science, up to PhDs in subjects such as biochemistry and chemistry. However, because the training of a flavorist is mostly done on the job, specifically at a flavor company known as a flavor house, this training resembles the apprentice system.
Located in Versailles, France, the ISIPCA school offers two years of education in food flavoring, including a 12-month traineeship at a flavor company. This program provides students with a solid background in flavor formulation, flavor application, and flavor chemistry (analysis and sensory).
The British Society of Flavourists, together with Reading University, provides an annual three-week training course for flavorists from all around the world.
Flavorist societies
In the United States, a certified flavorist must be a member of the Society of Flavor Chemists, which meets in New Jersey, Cincin
|
https://en.wikipedia.org/wiki/Jarisch%E2%80%93Herxheimer%20reaction
|
A Jarisch–Herxheimer reaction is a sudden and typically transient reaction that may occur within 24 hours of being administered antibiotics for an infection by a spirochete, including syphilis, leptospirosis, Lyme disease, and relapsing fever. Signs and symptoms include fever, chills, shivers, feeling sick, headache, fast heart rate, low blood pressure, fast breathing, flushing of the skin, muscle aches, and worsening of skin lesions. It may sometimes be mistaken for an allergy to the antibiotic.
Jarisch–Herxheimer reactions can be life-threatening because they can cause a significant drop in blood pressure and cause acute end-organ injury, eventually leading to multi-organ failure.
Signs and symptoms
It comprises part of what is known as sepsis and occurs after initiation of antibacterials when treating Gram-negative infections such as Escherichia coli and louse- and tick-borne infections. It usually manifests 1–3 hours after the first dose of antibiotics as fever, chills, rigor, hypotension, headache, tachycardia, hyperventilation, vasodilation with flushing, myalgia (muscle pain), exacerbation of skin lesions, and anxiety. The intensity of the reaction indicates the severity of inflammation. The reaction commonly occurs within two hours of drug administration and is usually self-limiting.
Causes
The Jarisch–Herxheimer reaction is traditionally associated with antimicrobial treatment of syphilis. The reaction is also seen in the other diseases caused by spirochetes: Lyme disease, relapsing fever, and leptospirosis. There have been case reports of the Jarisch–Herxheimer reaction accompanying treatment of other infections, including Q fever, bartonellosis, brucellosis, trichinellosis, and African trypanosomiasis.
Pathophysiology
Lipoproteins released from treatment of Treponema pallidum infections are believed to induce the Jarisch–Herxheimer reaction. The Herxheimer reaction has shown an increase in inflammatory cytokines during the period of exacerbation, includin
|
https://en.wikipedia.org/wiki/Cell%20cortex
|
The cell cortex, also known as the actin cortex, cortical cytoskeleton or actomyosin cortex, is a specialized layer of cytoplasmic proteins on the inner face of the cell membrane. It functions as a modulator of membrane behavior and cell surface properties. In most eukaryotic cells lacking a cell wall, the cortex is an actin-rich network consisting of F-actin filaments, myosin motors, and actin-binding proteins. The actomyosin cortex is attached to the cell membrane via membrane-anchoring proteins called ERM proteins, and it plays a central role in cell shape control. The protein constituents of the cortex undergo rapid turnover, making the cortex both mechanically rigid and highly plastic, two properties essential to its function. In most cases, the cortex is in the range of 100 to 1000 nanometers thick.
In some animal cells, the protein spectrin may be present in the cortex. Spectrin helps to create a network of cross-linked actin filaments; the proportions of spectrin and actin vary with cell type. Spectrin proteins and actin microfilaments are linked to transmembrane proteins by attachment proteins. The cell cortex is attached to the inner cytosolic face of the plasma membrane, where the spectrin proteins and actin microfilaments form a mesh-like structure that is continuously remodeled by polymerization, depolymerization and branching.
Many proteins are involved in cortex regulation and dynamics, including formins, with roles in actin polymerization; Arp2/3 complexes, which give rise to actin branching; and capping proteins. Due to the branching process and the density of the actin cortex, the cortical cytoskeleton can form a highly complex meshwork, such as a fractal structure. Specialized cells are usually characterized by a very specific cortical actin cytoskeleton. For example, in red blood cells, the cell cortex consists of a two-dimensional cross-linked elastic network with pentagonal or hexagonal symme
|
https://en.wikipedia.org/wiki/Hyperboloid%20structure
|
Hyperboloid structures are architectural structures designed using a hyperboloid of one sheet. Often these are tall structures, such as towers, where the hyperboloid geometry's structural strength is used to support an object high above the ground. Hyperboloid geometry is often used for decorative effect as well as structural economy. The first hyperboloid structures were built by Russian engineer Vladimir Shukhov (1853–1939), including the Shukhov Tower in Polibino, Dankovsky District, Lipetsk Oblast, Russia.
Properties
Hyperboloid structures have negative Gaussian curvature, meaning they curve inward rather than curving outward or being straight. As doubly ruled surfaces, they can be made with a lattice of straight beams, and hence are easier to build than curved surfaces that do not have a ruling and must instead be built with curved beams.
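The double ruling is easy to verify for the circular hyperboloid of one sheet; the following is a standard computation, included here for concreteness:

```latex
% The circular hyperboloid of one sheet
\[ x^2 + y^2 - z^2 = 1 \]
% contains, for every fixed angle \theta, the straight line
\[ \ell_\theta(t) = (\cos\theta - t\sin\theta,\ \sin\theta + t\cos\theta,\ t), \]
% since substituting gives
\[ (\cos\theta - t\sin\theta)^2 + (\sin\theta + t\cos\theta)^2 - t^2
   = 1 + t^2 - t^2 = 1 . \]
% Flipping the sign of the t-terms in the first two coordinates yields a
% second, distinct family of lines, so the surface is doubly ruled: every
% point lies on two straight beams of a lattice.
```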
Hyperboloid structures are superior in stability against outside forces compared with "straight" buildings, but their shapes often create large amounts of unusable volume (low space efficiency). Hence they are more commonly used in purpose-driven structures, such as water towers (to support a large mass), cooling towers, and aesthetic features.
A hyperbolic structure is beneficial for cooling towers. At the bottom, the widening of the tower provides a large area for installation of fill to promote thin film evaporative cooling of the circulated water. As the water first evaporates and rises, the narrowing effect helps accelerate the laminar flow, and then as it widens out, contact between the heated air and atmospheric air supports turbulent mixing.
Work of Shukhov
In the 1880s, Shukhov began to work on the problem of the design of roof systems to use a minimum of materials, time and labor. His calculations were most likely derived from mathematician Pafnuty Chebyshev's work on the theory of best approximations of functions. Shukhov's mathematical explorations of efficient roof structures led to his invention of a new syst
|
https://en.wikipedia.org/wiki/Constrained%20optimization
|
In mathematical optimization, constrained optimization (in some contexts called constraint optimization) is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables. The objective function is either a cost function or energy function, which is to be minimized, or a reward function or utility function, which is to be maximized. Constraints can be either hard constraints, which set conditions for the variables that are required to be satisfied, or soft constraints, which have some variable values that are penalized in the objective function if, and based on the extent that, the conditions on the variables are not satisfied.
Relation to constraint-satisfaction problems
The constrained-optimization problem (COP) is a significant generalization of the classic constraint-satisfaction problem (CSP) model. COP is a CSP that includes an objective function to be optimized. Many algorithms are used to handle the optimization part.
General form
A general constrained minimization problem may be written as follows:
$$\begin{array}{rl} \min & f(x) \\ \text{subject to} & g_i(x) = c_i \quad \text{for } i = 1, \ldots, n \\ & h_j(x) \ge d_j \quad \text{for } j = 1, \ldots, m \end{array}$$
where $g_i(x) = c_i$ and $h_j(x) \ge d_j$ are constraints that are required to be satisfied (these are called hard constraints), and $f(x)$ is the objective function that needs to be optimized subject to the constraints.
In some problems, often called constraint optimization problems, the objective function is actually the sum of cost functions, each of which penalizes the extent (if any) to which a soft constraint (a constraint which is preferred but not required to be satisfied) is violated.
Solution methods
Many unconstrained optimization algorithms can be adapted to the constrained case, often via the use of a penalty method. However, search steps taken by the unconstrained method may be unacceptable for the constrained problem, leading to a lack of convergence. This is referred to as the Maratos effect.
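To make the penalty idea concrete, here is a minimal sketch (the quadratic penalty form and the toy problem, minimizing $x_0^2 + x_1^2$ subject to $x_0 + x_1 = 1$, are illustrative assumptions, not taken from the article):

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] ** 2 + x[1] ** 2      # objective
g = lambda x: x[0] + x[1] - 1.0          # hard constraint: g(x) = 0

def penalized(x, rho):
    # Quadratic penalty: constraint violation is charged rho * g(x)^2,
    # turning the constrained problem into an unconstrained one.
    return f(x) + rho * g(x) ** 2

x = np.zeros(2)
for rho in (1.0, 10.0, 100.0, 1000.0):   # tighten the penalty gradually
    x = minimize(penalized, x, args=(rho,)).x

print(x)  # approaches the true constrained minimizer [0.5, 0.5] as rho grows
```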
Equality constraints
Substitution method
For very simple problems, say a function of two variables subject to
|
https://en.wikipedia.org/wiki/Franz%20Rellich
|
Franz Rellich (September 14, 1906 – September 25, 1955) was an Austrian-German mathematician. He made important contributions in mathematical physics, in particular for the foundations of quantum mechanics and for the theory of partial differential equations. The Rellich–Kondrachov theorem is named after him.
Biography
Rellich was born in Tramin, then in the County of Tyrol. He studied from 1924 to 1929 at the universities of Graz and Göttingen and received his doctorate in 1929 under Richard Courant at the Georg August University of Göttingen with the thesis "Verallgemeinerung der Riemannschen Integrationsmethode auf Differentialgleichungen n-ter Ordnung in zwei Veränderlichen" ("Generalization of Riemann's integration method to differential equations of n-th order in two variables").
When the great mathematical-physical tradition in Göttingen came to an end with the Nazi Machtergreifung in 1933, Rellich, who had taken an active position against Nazism, was among those forced to leave.
In 1934 he became Privatdozent in Marburg, in 1942 professor in Dresden, and in 1946 director of the Mathematical Institute in Göttingen, where he was instrumental in its reconstruction. Erhard Heinz, Konrad Jörgens, and Jürgen Moser were among his doctoral students.
His sister Camilla Juliana Anna was the wife of mathematician Bartel Leendert van der Waerden. Rellich died in Göttingen.
Contributions
Among Rellich's most important mathematical contributions are his work in the perturbation theory of linear operators on Hilbert spaces: he studied how the spectral family of a self-adjoint operator depends on a parameter.
Although the origins and applications of the problem are in quantum mechanics, Rellich's approach was completely abstract.
Rellich successfully worked on many partial differential equations with degeneracies.
For instance, he showed that in the elliptic case, the Monge-Ampère differential equation, while not necessarily uniquely soluble, can
|
https://en.wikipedia.org/wiki/Solar%20transition%20region
|
The solar transition region is a region of the Sun's atmosphere between the upper chromosphere and corona. It is important because it is the site of several unrelated but important transitions in the physics of the solar atmosphere:
Below, gravity tends to dominate the shape of most features, so that the Sun may often be described in terms of layers and horizontal features (like sunspots); above, dynamic forces dominate the shape of most features, so that the transition region itself is not a well-defined layer at a particular altitude.
Below, most of the helium is not fully ionized, so that it radiates energy very effectively; above, it becomes fully ionized. This has a profound effect on the equilibrium temperature (see below).
Below, the material is opaque to the particular colors associated with spectral lines, so that most spectral lines formed below the transition region are absorption lines in infrared, visible light, and near ultraviolet, while most lines formed at or above the transition region are emission lines in the far ultraviolet (FUV) and X-rays. This makes radiative transfer of energy within the transition region very complicated.
Below, gas pressure and fluid dynamics usually dominate the motion and shape of structures; above, magnetic forces dominate the motion and shape of structures, giving rise to different simplifications of magnetohydrodynamics. The transition region itself is not well studied in part because of the computational cost, uniqueness, and complexity of Navier–Stokes combined with electrodynamics.
Helium ionization is important because it is a critical part of the formation of the corona: when solar material is cool enough that the helium within it is only partially ionized (i.e. retains one of its two electrons), the material cools by radiation very effectively via both black-body radiation and direct coupling to the helium Lyman continuum. This condition holds at the top of the chromosphere, where the equilibrium tempe
|
https://en.wikipedia.org/wiki/Brain%20mapping
|
Brain mapping is a set of neuroscience techniques predicated on the mapping of (biological) quantities or properties onto spatial representations of the (human or non-human) brain resulting in maps.
According to the definition established in 2013 by Society for Brain Mapping and Therapeutics (SBMT), brain mapping is specifically defined, in summary, as the study of the anatomy and function of the brain and spinal cord through the use of imaging, immunohistochemistry, molecular & optogenetics, stem cell and cellular biology, engineering, neurophysiology and nanotechnology.
Overview
All neuroimaging is considered part of brain mapping. Brain mapping can be conceived as a higher form of neuroimaging, producing brain images supplemented by the result of additional (imaging or non-imaging) data processing or analysis, such as maps projecting (measures of) behavior onto brain regions (see fMRI). One such map, called a connectogram, depicts cortical regions around a circle, organized by lobes. Concentric circles within the ring represent various common neurological measurements, such as cortical thickness or curvature. In the center of the circles, lines representing white matter fibers illustrate the connections between cortical regions, weighted by fractional anisotropy and strength of connection. At higher resolutions brain maps are called connectomes. These maps incorporate individual neural connections in the brain and are often presented as wiring diagrams.
Brain mapping techniques are constantly evolving, and rely on the development and refinement of image acquisition, representation, analysis, visualization and interpretation techniques. Functional and structural neuroimaging are at the core of the mapping aspect of brain mapping.
Some scientists have criticized the brain image-based claims made in scientific journals and the popular press, such as the discovery of "the part of the brain responsible for" things like love or musical abilities or a specific memory.
|
https://en.wikipedia.org/wiki/Polynomial%20expansion
|
In mathematics, an expansion of a product of sums expresses it as a sum of products by using the fact that multiplication distributes over addition. Expansion of a polynomial expression can be obtained by repeatedly replacing subexpressions that multiply two other subexpressions, at least one of which is an addition, by the equivalent sum of products, continuing until the expression becomes a sum of (repeated) products. During the expansion, simplifications such as grouping of like terms or cancellations of terms may also be applied. Instead of multiplications, the expansion steps could also involve replacing powers of a sum of terms by the equivalent expression obtained from the binomial formula; this is a shortened form of what would happen if the power were treated as a repeated multiplication, and expanded repeatedly. It is customary to reintroduce powers in the final result when terms involve products of identical symbols.
Simple examples of polynomial expansions are the well known rules
$$(x+y)^2 = x^2 + 2xy + y^2,$$
$$(x+y)(x-y) = x^2 - y^2,$$
when used from left to right. A more general single-step expansion will introduce all products of a term of one of the sums being multiplied with a term of the other:
$$(a+b+c)(x+y) = ax + ay + bx + by + cx + cy.$$
An expansion which involves multiple nested rewrite steps is that of working out a Horner scheme to the (expanded) polynomial it defines, for instance
$$1 + x(-3 + x(4 + 2x)) = 1 - 3x + 4x^2 + 2x^3.$$
The opposite process of trying to write an expanded polynomial as a product is called polynomial factorization.
Expansion of a polynomial written in factored form
To multiply two factors, each term of the first factor must be multiplied by each term of the other factor. If both factors are binomials, the FOIL rule can be used, which stands for "First Outer Inner Last", referring to the terms that are multiplied together. For example, expanding
$$(x+2)(2x-5)$$
yields
$$2x^2 - 5x + 4x - 10 = 2x^2 - x - 10.$$
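Computer algebra systems carry out exactly this rewriting. A quick sketch with SymPy, using the example above:

```python
from sympy import symbols, expand, factor

x, y = symbols("x y")

print(expand((x + 2) * (2 * x - 5)))  # 2*x**2 - x - 10, as in the FOIL example
print(expand((x + y) ** 3))           # x**3 + 3*x**2*y + 3*x*y**2 + y**3
print(factor(2 * x**2 - x - 10))      # (x + 2)*(2*x - 5): the opposite process
```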
Expansion of $(x+y)^n$
When expanding $(x+y)^n$, a special relationship exists between the coefficients of the terms when written in order of descending powers of x and ascending powers of y. The coefficients will b
|
https://en.wikipedia.org/wiki/Tiller%20%28botany%29
|
A tiller is a shoot that arises from the base of a grass plant. The term refers to all shoots that grow after the initial parent shoot grows from a seed. Tillers are segmented, each segment possessing its own two-part leaf. They are involved in vegetative propagation and, in some cases, also seed production.
"Tillering" refers to the production of side shoots and is a property possessed by many species in the grass family. This enables them to produce multiple stems (tillers) starting from the initial single seedling. This ensures the formation of dense tufts and multiple seed heads. Tillering rates are heavily influenced by soil water quantity. When soil moisture is low, grasses tend to develop more sparse and deep root systems (as opposed to dense, lateral systems). Thus, in dry soils, tillering is inhibited: the lateral nature of tillering is not supported by lateral root growth.
See also
Crown (botany)
|
https://en.wikipedia.org/wiki/Algaculture
|
Algaculture is a form of aquaculture involving the farming of species of algae.
The majority of algae that are intentionally cultivated fall into the category of microalgae (also referred to as phytoplankton, microphytes, or planktonic algae). Macroalgae, commonly known as seaweed, also have many commercial and industrial uses, but due to their size and the specific requirements of the environment in which they need to grow, they do not lend themselves as readily to cultivation (this may change, however, with the advent of newer seaweed cultivators, which are basically algae scrubbers using upflowing air bubbles in small containers).
Commercial and industrial algae cultivation has numerous uses, including production of nutraceuticals such as omega-3 fatty acids (as algal oil) or natural food colorants and dyes, food, fertilizers, bioplastics, chemical feedstock (raw material), protein-rich animal/aquaculture feed, pharmaceuticals, and algal fuel, and can also be used as a means of pollution control and natural carbon sequestration.
Global production of farmed aquatic plants, overwhelmingly dominated by seaweeds, grew in output volume from 13.5 million tonnes in 1995 to just over 30 million tonnes in 2016. Cultured microalgae already contribute to a wide range of sectors in the emerging bioeconomy. Research suggests there are large potentials and benefits of algaculture for the development of a future healthy and sustainable food system.
Uses of algae
Food
Several species of algae are raised for food. While algae have qualities of a sustainable food source, "producing highly digestible proteins, lipids, and carbohydrates", are "rich in essential fatty acids, vitamins, and minerals", and offer, for example, a high protein productivity per acre, there are several challenges "between current biomass production and large-scale economic algae production for the food market".
Micro-algae can be used to create microbial protein used as a powder or in a variety of products.
|
https://en.wikipedia.org/wiki/Piczo
|
Piczo was a social networking and blogging website for teens. It was founded in 2003 by Jim Conning in San Francisco, California. Early investors included Catamount, Sierra Ventures, U.S. Venture Partners, and Mangrove Capital Partners.
In March 2009, Piczo was acquired by Stardoll (Stardoll AB with CEO Mattias Miksche). After the acquisition Piczo was led by Stardoll's CEO Mattias Miksche and his Stardoll team.
In September 2012, Piczo.com was acquired by Posh Media Group (PMGcom Publishing AB with CEO Christofer Båge).
In November 2012, Piczo.com shut down.
The company's eponymous service, Piczo (Piczo.com), was an online photo website builder and community offering free, advertising-supported websites.
Launched in 2005, Piczo allowed users to add images, text, guestbooks, message boxes, videos, music and other content to their sites using plain text and HTML. Partners included YouTube, VideoEgg, Photobucket, Flock, Yahoo and PollDaddy. When it began, the company's focus was individual web-page design, and blogs were not included as a feature.
In addition to the website development aspect of the site, Piczo once had a user-generated content repository (the Piczo Zone) where users could browse, post, and consume content that they or others had used on their sites. Later on, Piczo remodelled the entire site, and this feature, along with many others, was no longer available.
One of the features that remained is "The Board", where Piczo informed users about HTML and Internet safety, though most of its pages were designed for the old Piczo.
In August 2010, Piczo announced "Piczo Plus", a feature that allowed users to buy an "ad-free" site; it is no longer available to purchase.
Popularity
Piczo saw around 10 million unique visitors a month. While primarily offering services in English and German, Piczo was also available in French, Spanish, Romanian, Russian, Japanese, and Korean beta versions.
The service was very popular with a teenage audience in B
|
https://en.wikipedia.org/wiki/Shockley%20diode%20equation
|
The Shockley diode equation, or the diode law, named after transistor co-inventor William Shockley of Bell Labs, models the exponential current–voltage (I–V) relationship of semiconductor diodes in moderate constant current forward bias or reverse bias:
$$I_D = I_S \left( e^{\frac{V_D}{n V_T}} - 1 \right)$$
where
$I_D$ is the diode current,
$I_S$ is the reverse-bias saturation current (or scale current),
$V_D$ is the voltage across the diode,
$V_T$ is the thermal voltage, and
$n$ is the ideality factor, also known as the quality factor or emission coefficient.
The equation is called the Shockley ideal diode equation when the ideality factor $n$ equals 1, so $n$ is sometimes omitted. The ideality factor typically varies from 1 to 2 (though it can in some cases be higher), depending on the fabrication process and semiconductor material. The ideality factor was added to account for imperfect junctions observed in real transistors, mainly due to carrier recombination as charge carriers cross the depletion region.
The thermal voltage $V_T$ is approximately 25.852 mV at 300 K (about 27 °C). At an arbitrary temperature, it is a known constant defined by:
$$V_T = \frac{kT}{q}$$
where
$k$ is the Boltzmann constant,
$T$ is the absolute temperature of the p–n junction, and
$q$ is the elementary charge (the magnitude of an electron's charge).
The reverse saturation current $I_S$ is not constant for a given device, but varies with temperature; usually more significantly than $V_T$, so that $V_D$ typically decreases as $T$ increases.
Under reverse bias, the diode equation's exponential term is near 0, so the current is near the somewhat constant reverse current value $-I_S$ (roughly a picoampere for silicon diodes or a microampere for germanium diodes, although this is obviously a function of size).
For moderate forward bias voltages the exponential becomes much larger than 1, since the thermal voltage is very small in comparison. The $-1$ in the diode equation is then negligible, so the forward diode current will approximate
$$I_D \approx I_S e^{\frac{V_D}{n V_T}}.$$
The use of the diode equation in circuit problems is illustrated in the article on diode modeling.
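A short numerical sketch of the equation (the saturation current and ideality factor below are typical illustrative values for a small silicon diode, not figures from the article):

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
Q_E = 1.602176634e-19   # elementary charge, C

def diode_current(v_d, i_s=1e-12, n=1.0, t=300.0):
    """Shockley diode equation: I = I_S * (exp(V_D / (n * V_T)) - 1)."""
    v_t = K_B * t / Q_E              # thermal voltage, ~25.85 mV at 300 K
    return i_s * (math.exp(v_d / (n * v_t)) - 1.0)

print(diode_current(0.0))    # 0 A at zero bias
print(diode_current(-1.0))   # ~ -1e-12 A: the near-constant reverse current
print(diode_current(0.6))    # ~ 0.01 A: the exponential term dominates
```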
Limitat
|
https://en.wikipedia.org/wiki/Geosat
|
The GEOSAT (GEOdetic SATellite) was a U.S. Navy Earth observation satellite, launched on March 12, 1985 into an 800 km, 108° inclination orbit, with a nodal period of about 6040 seconds. The satellite carried a radar altimeter capable of measuring the distance from the satellite to the sea surface with a relative precision of about 5 cm. The initial phase was an 18-month classified Geodetic Mission (GM), whose ground-track had a near-23-day repeat with closure to within 50 kilometers. The effect of atmospheric drag was such that by fall 1986 GEOSAT was in an almost exact 23-day repeat orbit.
Mission
The Geosat GM goal was to provide information on the marine gravity field. If the ocean surface were at rest, and no forces such as tides or winds were acting on it, the water surface would lie along the geoid. To first order, the Earth's shape is an oblate spheroid. Subsurface features such as seamounts create a gravitational pull, and features such as ocean trenches create lower-gravity areas. Spatial variations in gravity exert influence on the ocean surface and thereby cause spatial structure in the geoid. The deviations of the geoid from the first-order spheroid are on the order of ±100 m. By measuring the position of the water surface relative to the Earth's center, the geoid is observed, and the gravity field can be computed through inverse calculations.
Exact Repeat Mission
After the GM concluded on 30 September 1986, GEOSAT's scientific Exact Repeat Mission (ERM) began on November 8, 1986, after the satellite was maneuvered into a 17.05-day, 244-pass exact repeat orbit that was more favorable for oceanographic applications. When the ERM ended in January 1990, due to failure of the two on-board tape recorders, more than three years of ERM data had been collected and made available to the scientific community.
Once the GM goal had been reached, the satellite still had a useful life. An opportunity existed to observe the next order physical process that affects the ocean surface. Curre
|
https://en.wikipedia.org/wiki/Dog%20behaviourist
|
A dog behaviourist is a person who works in modifying or changing behaviour in dogs. They can be experienced dog handlers, who have developed their expertise over many years of hands-on work, or may have formal training up to degree level. Some have backgrounds in veterinary science, animal science, zoology, sociology, biology, or animal behaviour, and have applied their experience and knowledge to the interaction between humans and dogs. Professional certification may be offered through either industry associations or local educational institutions. There is, however, no requirement for behaviourists to be members of a professional body or to undergo formal training.
Overview
While any person who works to modify a dog's behaviour might be considered a dog behaviourist in the broadest sense of the term, animal behaviourist is a title given only to individuals who have obtained relevant professional qualifications. The professional fields and courses of study for dog behaviourists include, but are not limited to, animal science, zoology, sociology, biology, psychology, ethology, and veterinary science. People with these credentials usually refer to themselves as clinical animal behaviourists, applied animal behaviourists (PhD) or veterinary behaviourists (veterinary degree). If they limit their practice to a particular species, they might refer to themselves as a dog/cat/bird behaviourist.
While there are many dog trainers who work with behavioural issues, there are relatively few qualified dog behaviourists. For the majority of the general public, the cost of the services of a dog behaviourist usually reflects both the supply/demand inequity, as well as the level of training they have obtained.
Some behaviourists can be identified in the U.S. by the post-nominals "CAAB", indicating that they are a Certified Applied Animal Behaviourist (which requires a Ph.D. or veterinary degree), or "DACVB", indicating that they are a diplomate of the American College of Vet
|
https://en.wikipedia.org/wiki/Thermal%20management%20%28electronics%29
|
All electronic devices and circuitry generate excess heat and thus require thermal management to improve reliability and prevent premature failure. The amount of heat output is equal to the power input, if there are no other energy interactions. There are several techniques for cooling including various styles of heat sinks, thermoelectric coolers, forced air systems and fans, heat pipes, and others. In cases of extreme low environmental temperatures, it may actually be necessary to heat the electronic components to achieve satisfactory operation.
Overview
Thermal resistance of devices
This is usually quoted as the thermal resistance from junction to case of the semiconductor device, in units of °C/W. For example, a heatsink rated at 10 °C/W will get 10 °C hotter than the surrounding air when it dissipates 1 watt of heat. Thus, a heatsink with a low °C/W value is more efficient than one with a high °C/W value.
Given two semiconductor devices in the same package, a lower junction-to-case resistance (RθJ-C) indicates a more efficient device. However, when comparing two devices with different die-free package thermal resistances (e.g. DirectFET MT vs. wirebond 5x6 mm PQFN), their junction-to-ambient or junction-to-case resistance values may not correlate directly with their comparative efficiencies. Different semiconductor packages may have different die orientations, different copper (or other metal) mass surrounding the die, different die attach mechanics, and different molding thickness, all of which could yield significantly different junction-to-case or junction-to-ambient resistance values, and could thus obscure overall efficiency numbers.
Thermal time constants
A heatsink's thermal mass can be considered as a capacitor (storing heat instead of charge) and the thermal resistance as an electrical resistance (giving a measure of how fast stored heat can be dissipated). Together, these two components form a thermal RC circuit with an associated time co
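The RC analogy can be made concrete with a small simulation (a sketch; the resistance, capacitance, and power values are arbitrary illustrative numbers, not data for a specific part):

```python
# Heatsink temperature as a thermal RC circuit: C * dT/dt = P - (T - T_amb) / R
R = 10.0      # thermal resistance, degC/W
C = 50.0      # thermal capacitance (thermal mass), J/degC
P = 1.0       # dissipated power, W
T_AMB = 25.0  # ambient temperature, degC

tau = R * C   # thermal time constant, seconds (500 s here)
T, dt = T_AMB, 1.0
for _ in range(int(5 * tau)):             # ~5 time constants to settle
    T += dt * (P - (T - T_AMB) / R) / C   # forward-Euler integration step

print(tau, T)  # T approaches the steady state T_AMB + P * R = 35 degC
```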
|
https://en.wikipedia.org/wiki/XMK%20%28operating%20system%29
|
The eXtreme Minimal Kernel (XMK) is a real-time operating system (RTOS) designed for minimal RAM/ROM use. It achieves this goal even though it is almost entirely written in the C programming language; as a consequence, it can be easily ported to any 8-, 16-, or 32-bit microcontroller.
XMK comes as two independent packages: the XMK Scheduler, which contains the core kernel and everything necessary to run a multithreaded embedded application, and the Application Programming Layer (APL), which provides higher-level functions atop the XMK Scheduler API.
The XMK distribution contains no standard libraries such as libc; these are expected to be part of the development tools for the target system.
External links
XMK: eXtreme Minimal Kernel project home page (broken link)
|
https://en.wikipedia.org/wiki/Sidney%20Siegel
|
Sidney Siegel (4 January 1916 in New York City – 29 November 1961) was an American psychologist who became especially well known for his work in popularizing non-parametric statistics for use in the behavioral sciences. He was a co-developer of the statistical test known as the Siegel–Tukey test.
In 1951 Siegel completed a B.A. in vocational arts at San Jose State College (now San Jose State University), then in 1953 a Ph.D. in Psychology at Stanford University. Except for a year spent at the Center for Advanced Study in the Behavioral Sciences at Stanford, he thereafter taught at Pennsylvania State University, until his death in November 1961 of a coronary thrombosis.
His parents, Jacob and Rebecca Siegel, were Jewish immigrants from Romania.
See also
Siegel–Tukey test
|
https://en.wikipedia.org/wiki/Additive%20identity
|
In mathematics, the additive identity of a set that is equipped with the operation of addition is an element which, when added to any element $x$ in the set, yields $x$. One of the most familiar additive identities is the number 0 from elementary mathematics, but additive identities occur in other mathematical structures where addition is defined, such as in groups and rings.
Elementary examples
The additive identity familiar from elementary mathematics is zero, denoted 0. For example,
$$5 + 0 = 5 = 0 + 5.$$
In the natural numbers $\mathbb{N}$ (if 0 is included), the integers $\mathbb{Z}$, the rational numbers $\mathbb{Q}$, the real numbers $\mathbb{R}$, and the complex numbers $\mathbb{C}$, the additive identity is 0. This says that for a number $n$ belonging to any of these sets,
$$n + 0 = n = 0 + n.$$
Formal definition
Let $N$ be a group that is closed under the operation of addition, denoted $+$. An additive identity for $N$, denoted $e$, is an element in $N$ such that for any element $n$ in $N$,
$$e + n = n = n + e.$$
Further examples
In a group, the additive identity is the identity element of the group, is often denoted 0, and is unique (see below for proof).
A ring or field is a group under the operation of addition and thus these also have a unique additive identity 0. This is defined to be different from the multiplicative identity 1 if the ring (or field) has more than one element. If the additive identity and the multiplicative identity are the same, then the ring is trivial (proved below).
In the ring of $m$-by-$n$ matrices over a ring $R$, the additive identity is the zero matrix, denoted $O$ or $0$, and is the $m$-by-$n$ matrix whose entries consist entirely of the identity element 0 in $R$. For example, in the 2×2 matrices over the integers $M_2(\mathbb{Z})$ the additive identity is
$$O = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.$$
In the quaternions, 0 is the additive identity.
In the ring of functions from $\mathbb{R}$ to $\mathbb{R}$, the function mapping every number to 0 is the additive identity.
In the additive group of vectors in $\mathbb{R}^n$, the origin or zero vector is the additive identity.
Properties
The additive identity is unique in a group
Let $(G, +)$ be a group and let $0$ and $0'$ in $G$ both denote additive identities, so f
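The standard one-line argument runs as follows (a reconstruction of the classic proof, not necessarily the article's exact wording):

```latex
% 0' is an identity, so adding it on the right changes nothing; 0 is an
% identity, so adding it on the left changes nothing:
\[ 0 = 0 + 0' = 0' , \]
% hence any two additive identities of a group coincide.
```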
|
https://en.wikipedia.org/wiki/Long%20Term%20Ecological%20Research%20Network
|
The Long Term Ecological Research Network (LTER) consists of a group of over 1800 scientists and students studying ecological processes over extended temporal and spatial scales. Twenty-eight LTER sites cover a diverse set of ecosystems. It is part of the International Long Term Ecological Research Network (ILTER). The project was established in 1980 and is funded by the National Science Foundation. Data from LTER sites is publicly available in the Environmental Data Initiative repository and findable through DataONE search.
LTER sites
There are 28 sites within the LTER Network across the United States, Puerto Rico, and Antarctica, each conducting research on different ecosystems.
LTER sites are both physical places and communities of researchers. Some of the physical places are remote or protected from development, others are deliberately located in cities or agricultural areas. Either way, the program of research for each LTER is tailored to the most pressing and promising questions for that location and the program of research determines the group of researchers with the skills and interests to pursue those questions.
While each LTER site has a unique situation—with different organizational partners and different scientific challenges—the members of the Network apply several common approaches to understanding long-term ecological phenomena. These include observation, large-scale experiments, modeling, synthesis science and partnerships.
Andrews Forest LTER (AND)
Arctic LTER (ARC)
Baltimore Ecosystem Study LTER (BES)
Beaufort Lagoon Ecosystem LTER (BLE)
Bonanza Creek LTER (BNZ)
Central Arizona - Phoenix LTER (CAP)
California Current Ecosystem LTER (CCE)
Cedar Creek LTER (CDR)
Coweeta LTER (CWT) - NSF LTER funding from 1980-2020
Florida Coastal Everglades LTER (FCE)
Georgia Coastal Ecosystems LTER (GCE)
Harvard Forest LTER (HFR)
Hubbard Brook LTER (HBR)
Jornada Basin LTER (JRN)
Kellogg Biological Station LTER (KBS)
Konza Prairie LTER (KNZ)
Luquillo LTER (LUQ)
|
https://en.wikipedia.org/wiki/Hazchem
|
Hazchem (from hazardous chemicals) is a warning plate system used in Australia, Hong Kong, Malaysia, New Zealand, India and the United Kingdom for vehicles transporting hazardous substances, and on storage facilities. The top-left section of the plate gives the Emergency Action Code (EAC), telling the fire brigade what actions to take if there is an accident or fire. The middle-left section, containing a four-digit number, gives the UN Substance Identification Number describing the material. The lower-left section gives the telephone number that should be called if special advice is needed. The warning symbol in the top right indicates the general hazard class of the material. The bottom-right of the plate carries a company logo or name.
There is also a standard null Hazchem plate to indicate the transport of non-hazardous substances. The null plate does not include an EAC or substance identification.
The National Chemical Emergency Centre (NCEC) in the United Kingdom provides a Free Online Hazchem Guide.
Emergency Action Code
The Emergency Action Code (EAC) is a three-character code displayed on all carriers of classified dangerous goods. It provides a quick assessment for first responders and emergency responders (i.e. fire fighters and police) of what actions to take should a carrier of such goods become involved in an incident (a traffic collision, for example). EACs consist of a single number (1 to 4) and either one or two letters (depending on the hazard).
NCEC was commissioned by the Department for Communities and Local Government (CLG) to edit the EAC List 2013 publication, outlining the application of Hazchem Emergency Actions Codes (EACs) in Britain for 2013. The Dangerous Goods Emergency Action Code (EAC) List is reviewed every two years and is an essential compliance document for all emergency services, local government and for those who may control the planning for, and prevention of, emergencies involving dangerous goods. The current EAC Lis
|
https://en.wikipedia.org/wiki/Brettanomyces%20bruxellensis
|
Brettanomyces bruxellensis (the anamorph of Dekkera bruxellensis) is a yeast associated with the Senne valley near Brussels, Belgium. Despite its Latin species name, B. bruxellensis is found all over the globe. In the wild, it is often found on the skins of fruit.
Beer production
B. bruxellensis plays a key role in the production of typical Belgian beer styles such as lambic, Flanders red ales, gueuze and kriek, and is part of the spontaneous fermentation biota. The Trappist beer Orval contains a small amount of it as well. It is naturally found in the brewery environment, living within the oak barrels that are used for the storage of beer during the secondary conditioning stage. Here it completes the long, slow fermentation or super-attenuation of beer, often in symbiosis with Pediococcus sp. Macroscopically visible colonies look whitish and show a dome-shaped aspect, depending on their age and size.
B. bruxellensis is increasingly being used by American craft brewers, especially in Maine, California and Colorado. Jolly Pumpkin Artisan Ales, Allagash Brewing Company, Port Brewing Company, Sierra Nevada Brewing Company, Russian River Brewing Company and New Belgium Brewing Company have all brewed beers fermented with B. bruxellensis. The beers have a slightly sour, earthy character. Some have described them as having a "barnyard" or "wet horse blanket" flavor.
Wine production
In the wine industry, B. bruxellensis is generally considered a spoilage yeast and it and other members of the genus are often referred to as Brettanomyces ("brett"). Its metabolic products can impart "sweaty saddle leather", "barnyard", "burnt plastic" or "band-aid" aromas to wine. Some winemakers in France, and occasionally elsewhere, consider it a desirable addition to wine, e.g., in Château de Beaucastel, but New World vintners generally consider it a defect. Some authorities consider brett to be responsible for 90% of the spoilage problems in premium red wines.
One defense against brett is to limit pot
|
https://en.wikipedia.org/wiki/Crypt%20%28Unix%29
|
In Unix computing, crypt or enigma is a utility program used for encryption. Due to the ease of breaking it, it is considered to be obsolete.
The program is usually used as a filter, and it has traditionally been implemented using a "rotor machine" algorithm based on the Enigma machine. It is considered to be cryptographically far too weak to provide any security against brute-force attacks by modern, commodity personal computers.
Some versions of Unix shipped with an even weaker version of the crypt(1) command in order to comply with contemporaneous laws and regulations that limited the exportation of cryptographic software. Some of these were simply implementations of the Caesar cipher (effectively no more secure than ROT13, which is implemented as a Caesar cipher with a well-known key).
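To illustrate how weak such export-grade ciphers are, here is a minimal Python sketch of a Caesar cipher (ROT13 is the special case of shift 13); this is a generic illustration, not the source of any actual Unix crypt implementation:

```python
import string

def caesar(text, shift):
    """Shift letters by `shift` positions; other characters pass through."""
    lower, upper = string.ascii_lowercase, string.ascii_uppercase
    table = str.maketrans(
        lower + upper,
        lower[shift:] + lower[:shift] + upper[shift:] + upper[:shift],
    )
    return text.translate(table)

assert caesar("Hello", 13) == "Uryyb"                            # ROT13
assert caesar(caesar("attack at dawn", 3), -3) == "attack at dawn"
```

Recovering the key is trivial: an attacker simply tries all 26 shifts.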
History
Cryptographer Robert Morris wrote an M-209-based crypt, which first appeared in Version 3 Unix, to encourage codebreaking experiments; Morris managed to break crypt by hand. Dennis Ritchie automated decryption with a method by James Reeds, and a new Enigma-based version appeared in Version 7, which Reeds and Peter J. Weinberger also broke.
crypt(1) under Linux
Linux distributions generally do not include a Unix compatible version of the crypt command. This is largely due to a combination of three major factors:
crypt is relatively obscure and rarely used for e-mail attachments or as a file format
crypt is considered to be cryptographically far too weak to withstand brute-force attacks by modern computing systems (Linux systems generally ship with GNU Privacy Guard which is considered to be reasonably secure by modern standards)
During the early years of Linux development and adoption there was some concern that even as weak as the algorithm used by crypt was, that it might still run afoul of ITAR's export controls; so mainstream distribution developers in the United States generally excluded it, leaving their customers to fetch GnuPG or other strong cryptographic software fr
|
https://en.wikipedia.org/wiki/PC%20Exchange
|
PC Exchange was a utility program for Apple Macintosh computers. It was a control panel for the classic Mac OS that let the operating system mount FAT file systems and map file extensions to user-defined type and creator codes.
It was first made available in 1992 as a commercial software product from Apple, but in 1993, it was no longer a commercial product on its own, and shipped with System 7 Pro as part of Apple's push to become more compatible with Microsoft Windows. It worked transparently, mounting the disks on the desktop as if they were normal Mac disks, with the exception of the large letters PC on the icon which were visually similar to the IBM logotype. Originally only floppy disks were supported, but later versions added support for hard drives, CD-ROMs and other types of media.
PC Exchange is not used in macOS, which uses a completely different driver architecture to read the FAT file system.
Compatibility layers
Classic Mac OS
Apple Inc. file systems
|
https://en.wikipedia.org/wiki/Temple%20elephant
|
Temple elephants are a type of captive elephant. Many major temples own elephants; others hire or are donated elephants during the festive seasons. Temple elephants are usually wild animals, poached at a young age from wild herds in the forests of North East India and then sold into captivity to temples. Their treatment in captivity has been the subject of controversy and condemnation by some, while others claim that elephants form a vital part of the socio-economic framework of many temple ceremonies and festivals in India, particularly in the South.
The largest elephant stable in India is Punnathurkotta of the temple of Guruvayur; it currently houses 58 captive elephants, of which 53 are adult males and 5 are females.
Gallery on elephants
See also
Animal worship
Guruvayur Keshavan
Thrissur Pooram
|
https://en.wikipedia.org/wiki/Tolman%E2%80%93Oppenheimer%E2%80%93Volkoff%20limit
|
The Tolman–Oppenheimer–Volkoff limit (or TOV limit) is an upper bound to the mass of cold, non-rotating neutron stars, analogous to the Chandrasekhar limit for white dwarf stars. If the mass of a neutron star reaches the limit it will collapse to a denser form, most likely a black hole.
Theoretical work in 1996 placed the limit at approximately 1.5 to 3.0 solar masses, corresponding to an original stellar mass of 15 to 20 solar masses; additional work in the same year gave a more precise range of 2.2 to 2.9 solar masses.
Observations of GW170817, the first gravitational wave event due to merging neutron stars (which are thought to have collapsed into a black hole within a few seconds after merging), placed the limit in the range of 2.01 to 2.17 solar masses.
In the case of a rigidly spinning neutron star, the mass limit is thought to increase by up to 18–20%.
History
The idea that there should be an absolute upper limit for the mass of a cold (as distinct from thermal pressure supported) self-gravitating body dates back to the 1932 work of Lev Landau, based on the Pauli exclusion principle. Pauli's principle shows that the fermionic particles in sufficiently compressed matter would be forced into energy states so high that their rest mass contribution would become negligible when compared with the relativistic kinetic contribution (RKC). RKC is determined just by the relevant quantum wavelength $\lambda$, which would be of the order of the mean interparticle separation. In terms of Planck units, with the reduced Planck constant $\hbar$, the speed of light $c$, and the gravitational constant $G$ all set equal to one, there will be a corresponding pressure given roughly by $P \sim \frac{1}{\lambda^4}$.
At the upper mass limit, that pressure will equal the pressure needed to resist gravity. The pressure to resist gravity for a body of mass $M$ will be given according to the virial theorem roughly by
$P^3 \sim M^2 \rho^4,$
where $\rho$ is the density. This will be given by $\rho \sim m / \lambda^3$, where $m$ is the relevant mass per particle. It can be seen that the
|
https://en.wikipedia.org/wiki/Wallace%20tree
|
A Wallace multiplier is a hardware implementation of a binary multiplier, a digital circuit that multiplies two integers. It uses a selection of full and half adders (the Wallace tree or Wallace reduction) to sum partial products in stages until two numbers are left. Wallace multipliers reduce as much as possible on each layer, whereas Dadda multipliers try to minimize the required number of gates by postponing the reduction to the upper layers.
Wallace multipliers were devised by the Australian computer scientist Chris Wallace in 1964.
The Wallace tree has three steps:
Multiply each bit of one of the arguments by each bit of the other.
Reduce the number of partial products to two by layers of full and half adders.
Group the wires in two numbers, and add them with a conventional adder.
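The following Python sketch models these three steps in software (for illustration only; an actual Wallace tree is a combinational hardware circuit, and the helper names here are invented for the example):

```python
def wallace_multiply(a, b):
    """Software model of Wallace-style multiplication of nonnegative ints."""
    # Step 1: AND each bit of a with each bit of b; group bits by weight.
    cols = {}
    for i in range(a.bit_length()):
        for j in range(b.bit_length()):
            bit = ((a >> i) & 1) & ((b >> j) & 1)
            cols.setdefault(i + j, []).append(bit)

    # Step 2: reduce with full adders (3 bits -> sum, carry) and half
    # adders (2 bits -> sum, carry) until every column holds <= 2 bits.
    while any(len(bits) > 2 for bits in cols.values()):
        nxt = {}
        for w, bits in cols.items():
            k = 0
            while len(bits) - k >= 3:              # full adder
                s = bits[k] ^ bits[k + 1] ^ bits[k + 2]
                c = int(bits[k] + bits[k + 1] + bits[k + 2] >= 2)
                nxt.setdefault(w, []).append(s)
                nxt.setdefault(w + 1, []).append(c)
                k += 3
            if len(bits) - k == 2:                 # half adder
                nxt.setdefault(w, []).append(bits[k] ^ bits[k + 1])
                nxt.setdefault(w + 1, []).append(bits[k] & bits[k + 1])
            elif len(bits) - k == 1:               # pass through
                nxt.setdefault(w, []).append(bits[k])
        cols = nxt

    # Step 3: group the remaining bits into two numbers; add conventionally.
    x = y = 0
    for w, bits in cols.items():
        x |= bits[0] << w
        if len(bits) == 2:
            y |= bits[1] << w
    return x + y

assert wallace_multiply(1234, 5678) == 1234 * 5678
```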
Compared to naively adding partial products with regular adders, the benefit of the Wallace tree is its faster speed. It has $O(\log n)$ reduction layers, but each layer has only $O(1)$ propagation delay. A naive addition of partial products would require $O(\log^2 n)$ time.
As making the partial products is $O(1)$ and the final addition is $O(\log n)$, the total multiplication is $O(\log n)$, not much slower than addition. From a complexity theoretic perspective, the Wallace tree algorithm puts multiplication in the class NC1.
The downside of the Wallace tree, compared to naive addition of partial products, is its much higher gate count.
These computations only consider gate delays and don't deal with wire delays, which can also be very substantial.
The Wallace tree can be also represented by a tree of 3/2 or 4/2 adders.
It is sometimes combined with Booth encoding.
Detailed explanation
The Wallace tree is a variant of long multiplication. The first step is to multiply each digit (each bit) of one factor by each digit of the other. Each of these partial products has weight equal to the product of the weights of its factors. The final product is calculated by the weighted sum of all these partial products.
The first step, as said above, is to
|
https://en.wikipedia.org/wiki/Dadda%20multiplier
|
The Dadda multiplier is a hardware binary multiplier design invented by computer scientist Luigi Dadda in 1965. It uses a selection of full and half adders to sum the partial products in stages (the Dadda tree or Dadda reduction) until two numbers are left. The design is similar to the Wallace multiplier, but the different reduction tree reduces the required number of gates (for all but the smallest operand sizes) and makes it slightly faster (for all operand sizes).
Dadda and Wallace multipliers have the same three steps for two bit strings $a$ and $b$ of lengths $n_1$ and $n_2$ respectively:
Multiply (logical AND) each bit of $a$ by each bit of $b$, yielding $n_1 n_2$ results, grouped by weight in columns
Reduce the number of partial products by stages of full and half adders until we are left with at most two bits of each weight.
Add the final result with a conventional adder.
As with the Wallace multiplier, the multiplication products of the first step carry different weights reflecting the magnitude of the original bit values in the multiplication. For example, the product of bits $a_i$ and $b_j$ has weight $2^{i+j}$.
Unlike Wallace multipliers that reduce as much as possible on each layer, Dadda multipliers attempt to minimize the number of gates used, as well as input/output delay. Because of this, Dadda multipliers have a less expensive reduction phase, but the final numbers may be a few bits longer, thus requiring slightly bigger adders.
Description
To achieve a more optimal final product, the structure of the reduction process is governed by slightly more complex rules than in Wallace multipliers.
The progression of the reduction is controlled by a maximum-height sequence $d_j$, defined by:
$d_1 = 2$, and $d_{j+1} = \lfloor 1.5 \, d_j \rfloor$
This yields a sequence like so: $2, 3, 4, 6, 9, 13, 19, 28, 42, 63, \ldots$
The initial value of $j$ is chosen as the largest value such that $d_j < \min(n_1, n_2)$, where $n_1$ and $n_2$ are the number of bits in the input multiplicand and multiplier. The lesser of the two bit lengths will be the maximum height of each column of weights after the first stage of multiplication.
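A short Python sketch of this maximum-height sequence, under the common formulation $d_1 = 2$, $d_{j+1} = \lfloor 1.5\, d_j \rfloor$:

```python
def dadda_heights(count):
    """First `count` terms of the Dadda maximum-height sequence."""
    d, seq = 2, []
    for _ in range(count):
        seq.append(d)
        d = d * 3 // 2            # d_{j+1} = floor(1.5 * d_j)
    return seq

print(dadda_heights(8))           # [2, 3, 4, 6, 9, 13, 19, 28]
```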
|
https://en.wikipedia.org/wiki/Progressive%20familial%20intrahepatic%20cholestasis
|
Progressive familial intrahepatic cholestasis (PFIC) is a group of familial cholestatic conditions caused by defects in biliary epithelial transporters. The clinical presentation usually occurs first in childhood with progressive cholestasis. This usually leads to failure to thrive, cirrhosis, and the need for liver transplantation.
Types
Types of progressive familial intrahepatic cholestasis are as follows:
Type 1 (OMIM #211600), also called Byler disease
Type 2 (OMIM #601847), also called ABCB11 deficiency or BSEP deficiency
Type 3 (OMIM #602347), also called ABCB4 deficiency or MDR3 deficiency
Type 4 (OMIM #615878), from mutation in TJP2
Signs and symptoms
The onset of the disease is usually before age 2, but patients have been diagnosed with PFIC even into adolescence. Of the three entities, PFIC-1 usually presents earliest. Patients usually present in early childhood with cholestasis, jaundice, and failure to thrive. Intense pruritus is characteristic; in patients who present in adolescence, it has been linked with suicide. Patients may have fat malabsorption, leading to fat soluble vitamin deficiency, and complications, including osteopenia.
Pathogenesis
PFIC-1 is caused by a variety of mutations in ATP8B1, a gene coding for a P-type ATPase protein, FIC-1, that is responsible for phospholipid translocation across membranes. It was previously identified as clinical entities known as Byler's disease and Greenland-Eskimo familial cholestasis. Patients with PFIC-1 may also have watery diarrhea, in addition to the clinical features below, due to FIC-1's expression in the intestine. How ATP8B1 mutation leads to cholestasis is not yet well understood.
PFIC-2 is caused by a variety of mutations in ABCB11, the gene that codes for the bile salt export pump, or BSEP. Retention of bile salts within hepatocytes, which are the only cell type to express BSEP, causes hepatocellular damage and cholestasis.
PFIC-3 is caused by a variety of mutations in ABCB4,
|
https://en.wikipedia.org/wiki/Comparison%20of%20hex%20editors
|
The following is a comparison of notable hex editors.
General
Features
See also
Comparison of HTML editors
Comparison of integrated development environments
Comparison of text editors
Comparison of word processors
Notes
ao: ANSI is the Windows character set, OEM is the DOS character set. Both are based on ASCII.
|
https://en.wikipedia.org/wiki/Correlation%20integral
|
In chaos theory, the correlation integral is the mean probability that the states at two different times are close:
$C(\varepsilon) = \lim_{N \to \infty} \frac{1}{N^2} \sum_{\substack{i,j=1 \\ i \neq j}}^{N} \Theta\!\left(\varepsilon - \| \vec{x}(i) - \vec{x}(j) \| \right)$
where $N$ is the number of considered states $\vec{x}(i)$, $\varepsilon$ is a threshold distance, $\| \cdot \|$ a norm (e.g. Euclidean norm) and $\Theta$ the Heaviside step function. If only a time series is available, the phase space can be reconstructed by using a time delay embedding (see Takens' theorem):
$\vec{x}(i) = \big( u(i),\, u(i+\tau),\, \ldots,\, u(i+(m-1)\tau) \big)$
where $u(i)$ is the time series, $m$ the embedding dimension and $\tau$ the time delay.
The correlation integral is used to estimate the correlation dimension.
An estimator of the correlation integral is the correlation sum:
$C(\varepsilon, N) = \frac{2}{N(N-1)} \sum_{i < j} \Theta\!\left(\varepsilon - \| \vec{x}(i) - \vec{x}(j) \| \right)$
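A small NumPy sketch of the estimator (the function names are invented for this example; the delay embedding and the pairwise threshold count follow the definitions above):

```python
import numpy as np

def delay_embed(u, m, tau):
    """Vectors x(i) = (u(i), u(i+tau), ..., u(i+(m-1)*tau))."""
    n = len(u) - (m - 1) * tau
    return np.column_stack([u[k * tau : k * tau + n] for k in range(m)])

def correlation_sum(x, eps):
    """Fraction of distinct pairs (i, j), i < j, closer than eps."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    iu = np.triu_indices(len(x), k=1)      # distinct pairs only
    return np.mean(d[iu] < eps)

u = np.sin(0.1 * np.arange(1000))          # toy time series
x = delay_embed(u, m=3, tau=7)
print(correlation_sum(x, eps=0.5))
```

Estimating the correlation dimension then amounts to examining the slope of $\log C(\varepsilon)$ against $\log \varepsilon$ over a range of thresholds.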
See also
Recurrence quantification analysis
|
https://en.wikipedia.org/wiki/Corps%20Badge
|
A corps badge is a distinct badge worn by military personnel to distinguish themselves from other corps (known as branch insignia in the United States), where a corps is a grouping by common function (i.e. infantry, artillery, cavalry, signals, etc.). The use of corps badges, and how a corps is identified, varies around the world.
British Military
In the British Army each Corps has a distinct Cap badge for their corps as well as Stable belt.
Irish Defence Forces
In the Irish Defence Forces, the Air Corps and Army use distinct collar corps badges for each corps, as the same cap badge is used by all corps. The Air Corps has its own corps badge, worn mainly by line personnel, while corps units retain their own corps badges.
American Civil War
Corps Badges were originally worn by Union soldiers on the top of their army forage cap (kepi), left side of the hat, or over their left breast.
Military insignia
Military badges
|
https://en.wikipedia.org/wiki/Tablet%20computer
|
A tablet computer, commonly shortened to tablet, is a mobile device, typically with a mobile operating system and touchscreen display processing circuitry, and a rechargeable battery in a single, thin and flat package. Tablets, being computers, do what other personal computers do, but lack some input/output (I/O) abilities that others have. Modern tablets largely resemble modern smartphones, the only differences being that tablets are relatively larger than smartphones, with screens 7 inches (18 cm) or larger, measured diagonally, and may not support access to a cellular network. Unlike laptops (which have traditionally run off operating systems usually designed for desktops), tablets usually run mobile operating systems, alongside smartphones.
The touchscreen display is operated by gestures executed by finger or digital pen (stylus), instead of the mouse, touchpad, and keyboard of larger computers. Portable computers can be classified according to the presence and appearance of physical keyboards. Two species of tablet, the slate and booklet, do not have physical keyboards and usually accept text and other input by use of a virtual keyboard shown on their touchscreen displays. To compensate for their lack of a physical keyboard, most tablets can connect to independent physical keyboards by Bluetooth or USB; 2-in-1 PCs have keyboards, distinct from tablets.
The form of the tablet was conceptualized in the middle of the 20th century (Stanley Kubrick depicted fictional tablets in the 1968 science fiction film 2001: A Space Odyssey) and prototyped and developed in the last two decades of that century. In 2010, Apple released the iPad, the first mass-market tablet to achieve widespread popularity. Thereafter, tablets rapidly rose in ubiquity and soon became a large product category used for personal, educational and workplace applications. Popular uses for a tablet PC include viewing presentations, video-conferencing, reading e-books, watching movies, sharing photos and more. As of 2021 there
|
https://en.wikipedia.org/wiki/Septum%20transversum
|
The septum transversum is a thick mass of cranial mesenchyme, formed in the embryo, that gives rise to parts of the thoracic diaphragm and the ventral mesentery of the foregut in the developed human being and other mammals.
Origins
The septum transversum originally arises as the most cranial part of the mesenchyme on day 22. During craniocaudal folding, it assumes a position cranial to the developing heart at the level of the cervical vertebrae. During subsequent weeks the dorsal end of the embryo grows much faster than its ventral counterpart resulting in an apparent descent of the ventrally located septum transversum. At week 8, it can be found at the level of the thoracic vertebrae.
Nerve supply
After successful craniocaudal folding the septum transversum picks up innervation from the adjacent ventral rami of spinal nerves C3, C4 and C5, thus forming the precursor of the phrenic nerve. During the descent of the septum, the phrenic nerve is carried along and assumes its descending pathway.
During embryonic development of the thoracic diaphragm, myoblast cells from the septum invade the other components of the diaphragm. They thus give rise to the motor and sensory innervation of the muscular diaphragm by the phrenic nerve.
Derivatives
The cranial part of the septum transversum gives rise to the central tendon of the diaphragm, and is the origin of the myoblasts that invade the pleuroperitoneal folds resulting in the formation of the muscular diaphragm.
The caudal part of the septum transversum is invaded by the hepatic diverticulum which divides within it to form the liver and thus gives rise to the ventral mesentery of the foregut, which in turn is the precursor of the lesser omentum, the visceral peritoneum of the liver and the falciform ligament.
Though not derived from the septum transversum, development of the liver is highly dependent upon signals originating here. Bone morphogenetic protein 2 (BMP-2), BMP-4 and BMP-7 produced from the septum transv
|
https://en.wikipedia.org/wiki/High-Speed%20Serial%20Interface
|
The High-Speed Serial Interface (HSSI) is a differential ECL serial interface standard developed by Cisco Systems and T3plus Networking primarily for use in WAN router connections. It is capable of speeds up to 52 Mbit/s with cables up to 50 feet (15 m) in length.
While HSSI uses a 50-pin connector physically similar to that used by SCSI-2, it requires a cable with an impedance of 110 Ω (as opposed to the 75 Ω of a SCSI-2 cable).
The physical layer of the standard is defined by EIA-613 and the electrical layer by EIA-612.
It is supported by the Linux kernel since version 3.4-rc2.
|
https://en.wikipedia.org/wiki/Platform%20Invocation%20Services
|
Platform Invocation Services, commonly referred to as P/Invoke, is a feature of Common Language Infrastructure implementations, like Microsoft's Common Language Runtime, that enables managed code to call native code.
Managed code, such as C# or VB.NET, provides native access to classes, methods, and types defined within the libraries that make up the .NET Framework. While the .NET Framework provides an extensive set of functionality, it may lack access to many lower-level operating system libraries normally written in unmanaged code, or to third-party libraries also written in unmanaged code. P/Invoke is the technique a programmer can use to access functions in these libraries. Calls to functions within these libraries occur by declaring the signature of the unmanaged function within managed code, which serves as the actual function that can be called like any other managed method. The declaration references the library's file path and defines the function's parameters and return type in managed types that are most likely to be implicitly marshaled to and from the unmanaged types by the common language runtime (CLR). When the unmanaged data types become too complex for a simple implicit conversion from and to managed types, the framework allows the user to define attributes on the function, return type, and/or the parameters to explicitly refine how the data should be marshaled so as not to lead to exceptions in trying to do so implicitly.
There are many abstractions of lower-level programming concepts available to managed code programmers as compared to programming in unmanaged languages. As a result, a programmer with only managed code experience will need to brush up on programming concepts such as pointers, structures, and passing by reference to overcome some of the obstacles in using P/Invoke.
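P/Invoke itself is specific to .NET languages, but the core idea, declaring a native function's signature so the runtime can marshal arguments across the managed/unmanaged boundary, can be illustrated with Python's analogous ctypes facility (a sketch, not P/Invoke; the C library lookup is platform-dependent):

```python
import ctypes
import ctypes.util

# Load the platform C runtime (the "unmanaged" library); POSIX systems.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the native signature, much as a DllImport declaration does:
# int abs(int) from the C runtime.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # 42 -- the argument is marshaled to a native int
```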
Architecture
Overview
Two variants of P/Invoke currently in use are:
Explicit
Native code is imported via dynamic-linked libraries (DLLs)
Metadata embedded in the caller's
|
https://en.wikipedia.org/wiki/SIMDIS
|
SIMDIS is a software toolset developed by Code 5770 at the US Naval Research Laboratory (NRL). The software provides 2D and 3D interactive graphical and video displays of live and postprocessed simulation, test, and operational data. SIMDIS is a portmanteau of simulation and display.
Features
SIMDIS runs on Windows, Linux, and Sun Microsystems workstations with hardware-accelerated 3D graphics and provides identical execution and "look and feel" for all supported platforms.
SIMDIS provides either a 2D or a 3D display of the normally "seen" data such as platform position and orientation, as well as the "unseen" data such as the interactions of sensor systems with targets, countermeasures, and the environment. It includes custom tools for interactively analyzing and displaying data for equipment modes, spatial grids, ranges, angles, antenna patterns, line of sight and RF propagation. Capability for viewing time synchronized data from either a standalone workstation or multiple networked workstations is also provided.
To meet the needs of range operators, simulation users, analysts, and decision makers SIMDIS provides multiple modes of operation including live display, interactive playback, and scripted multimedia modes. It also provides capability for manipulation of post-processed data and integration with charts, graphs, pictures, audio and video for use in the development and delivery of 3-D visual presentations.
SIMDIS binaries are released under U.S. Department of Defense (DoD) Distribution Statement A, meaning the binaries are approved for public release with unlimited distribution. SIMDIS has been independently accredited and certified by Commander Operational Test and Evaluation Force (COMOPTEVFOR) and Joint Forces Command (JFCOM). It has also been approved for use on the Navy/Marine Corps Intranet (NMCI).
Operational use
SIMDIS provides support for analysis and display of test and training mission data to more than 4000 users. At the Naval Research La
|
https://en.wikipedia.org/wiki/F-ratio%20%28oceanography%29
|
In oceanic biogeochemistry, the f-ratio is the fraction of total primary production fuelled by nitrate (as opposed to that fuelled by other nitrogen compounds such as ammonium). The ratio was originally defined by Richard Eppley and Bruce Peterson in one of the first papers estimating global oceanic production. This fraction was originally believed to be significant because it appeared to relate directly to the sinking (export) flux of organic marine snow from the surface ocean by the biological pump. However, this interpretation relied on the assumption of a strong depth-partitioning of a parallel process, nitrification, that more recent measurements have questioned.
Overview
Gravitational sinking of organisms (or the remains of organisms) transfers particulate organic carbon from the surface waters of the ocean to its deep interior. This process is known as the biological pump, and quantifying it is of interest to scientists because it is an important aspect of the Earth's carbon cycle. Essentially, this is because carbon transported to the deep ocean is isolated from the atmosphere, allowing the ocean to act as a reservoir of carbon. This biological mechanism is accompanied by a physico-chemical mechanism known as the solubility pump which also acts to transfer carbon to the ocean's deep interior.
Measuring the flux of sinking material (so-called marine snow) is usually done by deploying sediment traps which intercept and store material as it sinks down the water column. However, this is a relatively difficult process, since traps can be awkward to deploy or recover, and they must be left in situ over a long period to integrate the sinking flux. Furthermore, they are known to experience biases and to integrate horizontal as well as vertical fluxes because of water currents. For this reason, scientists are interested in ocean properties that can be more easily measured, and that act as a proxy for the sinking flux. The f-ratio is one such proxy.
"New" and "rege
|
https://en.wikipedia.org/wiki/Compensation%20law%20of%20mortality
|
The compensation law of mortality (or late-life mortality convergence) states that the relative differences in death rates between different populations of the same biological species decrease with age, because the higher initial death rates in disadvantaged populations are compensated by a lower pace of mortality increase with age. The age at which this imaginary (extrapolated) convergence of mortality trajectories takes place is named the "species-specific life span" (see Gavrilov and Gavrilova, 1979). For human beings, this species-specific life span is close to 95 years (Gavrilov and Gavrilova, 1979; 1991).
The compensation law of mortality is a paradoxical empirical observation, and it represents a challenge for methods of survival analysis based on the proportionality assumption (proportional hazard models). The compensation law of mortality also represents a great challenge for many theories of aging and mortality, which usually fail to explain this phenomenon. On the other hand, the compensation law follows directly from reliability theory, when the compared systems have different initial levels of redundancy.
See also
Ageing
Biodemography of human longevity
Biogerontology
Demography
Mortality
Reliability theory of aging and longevity
|
https://en.wikipedia.org/wiki/Chetaev%20instability%20theorem
|
The Chetaev instability theorem for dynamical systems states that if there exists, for the system $\dot{\mathbf{x}} = X(\mathbf{x})$ with an equilibrium point at the origin, a continuously differentiable function $V(\mathbf{x})$ such that
the origin is a boundary point of the set $\Omega = \{ \mathbf{x} : V(\mathbf{x}) > 0 \}$;
there exists a neighborhood $U$ of the origin such that $\dot{V}(\mathbf{x}) > 0$ for all $\mathbf{x} \in \Omega \cap U$,
then the origin is an unstable equilibrium point of the system.
This theorem is somewhat less restrictive than the Lyapunov instability theorems, since a complete sphere (circle) around the origin for which $V$ and $\dot{V}$ both are of the same sign does not have to be produced.
It is named after Nicolai Gurevich Chetaev.
Applications
Chetaev instability theorem has been used to analyze the unfolding dynamics of proteins under the effect of optical tweezers.
See also
Lyapunov function — a function whose existence guarantees stability
|
https://en.wikipedia.org/wiki/GoForce
|
The Nvidia GoForce was a line of chipsets that was used mainly in handheld devices such as PDAs and mobile phones. Nvidia acquired graphics display processor firm MediaQ in 2003, and rebranded the division as GoForce. It has since been replaced by the Nvidia Tegra series of SoCs.
Features
GoForce 2100
Featuring VGA image capture, 2D graphics acceleration, JPEG support, and MJPEG acceleration. Used in the Samsung SCH-M500 Palm OS based flip-phone.
GoForce 2150
Featuring 1.3-megapixel camera support, JPEG support, and 2D graphics acceleration.
Supports 3-megapixel images with the upgrade.
GoForce 3000/GoForce 4000
The GoForce 4000 supports 3.0-megapixel camera and MPEG-4/H.263 codec, whilst GoForce 3000 is a low-cost version of the GoForce 4000 with limited features.
GoForce 4500
Features 3D graphics support with a geometry processor and programmable pixel shaders, used in the Gizmondo device.
GoForce 4800
Supports 3.0-megapixel camera and a 3D graphics engine.
GoForce 5500
The GoForce 5500 is a multimedia processor incorporating the Tensilica Xtensa HiFi 2 Audio Engine (based on the Xtensa LX processor licensed in 2005). It can decode video and audio formats such as WMV, WMA, MP3, MP4, MPEG and JPEG, and supports H.264. It also includes a 24-bit, 64-voice sound processor, supports up to 32 MB of external memory, 10-megapixel camera support, and version 2 of the 3D graphics engine.
GoForce 5300
Equipped with 2.25 MiB embedded DRAM (eDRAM) on TSMC's 65 nm process, being the first in the GoForce product line. Its multimedia technology is claimed to be similar to the 5500's, but it does not sport a 3D engine and only supports much smaller screens.
GoForce 6100
The latest chipset in the series, the GoForce 6100 (showcased in 2007), claimed as the first applications processor by NVIDIA, adds 10-megapixel camera support and integrated 802.11b/g support (with external RF), based on a 130 nm process. It contains a 250 MHz ARM1176JZ-S core.
Implementations
Acer N300 PDA
HTC Fores
|
https://en.wikipedia.org/wiki/Mixed-species%20foraging%20flock
|
A mixed-species feeding flock, also termed a mixed-species foraging flock, mixed hunting party or informally bird wave, is a flock of usually insectivorous birds of different species that join each other and move together while foraging. These are different from feeding aggregations, which are congregations of several species of bird at areas of high food availability.
While it is currently unknown how mixed-species foraging flocks originate, researchers have proposed a few mechanisms for their initiation. Many believe that nuclear species play a vital role in mixed-species flock initiation. Additionally, the forest structure is hypothesized to play a vital role in these flocks' formation. In Sri Lanka, for example, vocal mimicry by the greater racket-tailed drongo might have a key role in the initiation of mixed-species foraging flocks, while in parts of the American tropics packs of foraging golden-crowned warblers might play the same role.
Composition
Mixed-species foraging flocks tend to form around a "nuclear" species. Researchers believe nuclear species both stimulate the formation of a mixed-species flock and maintain the cohesion between bird species. They tend to have a disproportionately large influence on the flock. Nuclear species have a few universal qualities. Typically, they are both generalists that employ a gleaning foraging strategy and intraspecifically social birds. "Associate" or "attendant" species are birds that trail the flock only after it has entered their territory. Researchers have shown that these species tend to have a higher fitness following mixed-species foraging flocks. The third class of birds found in mixed-species flocks have been termed "sentinel" species. Unlike nuclear species, sentinels are fly-catching birds that are rarely gregarious. Their role is to alert the other birds in the mixed-species flock to the arrival of potential predators.
Benefits
Ecologists generally assume that species in the same ecological niche comp
|
https://en.wikipedia.org/wiki/Cultural%20selection%20theory
|
Cultural selection theory is the study of cultural change modelled on theories of evolutionary biology. Cultural selection theory has so far never been a separate discipline. However, it has been proposed that
human culture exhibits key Darwinian evolutionary properties, and "the structure of a science of cultural evolution should share fundamental features with the structure of the science of biological evolution".
In addition to Darwin's work, the term historically covers a diverse range of theories from both the sciences and the humanities, including those of Lamarck, politics and economics e.g. Bagehot, anthropology e.g. Edward B. Tylor, literature e.g. Ferdinand Brunetière, evolutionary ethics e.g. Leslie Stephen, sociology e.g. Albert Keller, anthropology e.g. Bronislaw Malinowski, Biosciences e.g.
Alex Mesoudi, geography e.g.
Richard Ormrod, sociobiology and biodiversity e.g. E.O. Wilson, computer programming e.g. Richard Brodie, and other fields e.g. Neoevolutionism, and Evolutionary archaeology.
Outline
Crozier suggests that Cultural Selection emerges from three bases: Social contagion theory, Evolutionary epistemology, and Memetics.
This theory is an extension of memetics. In memetics, memes, much like biology's genes, are informational units passed through generations of culture. However, unlike memetics, cultural selection theory moves past these isolated "memes" to encompass selection processes, including continuous and quantitative parameters. Two other approaches to cultural selection theory are social contagion and evolutionary epistemology.
Social contagion theory’s epidemiological approach construes social entities as analogous to parasites that are transmitted virally through a population of biological organisms. Evolutionary epistemology's focus lies in causally connecting evolutionary biology and rationality by generating explanations for why traits for rational behaviour or thought patterns would have been selected for in a species’ evolut
|
https://en.wikipedia.org/wiki/Parrondo%27s%20paradox
|
Parrondo's paradox, a paradox in game theory, has been described as: A combination of losing strategies becomes a winning strategy. It is named after its creator, Juan Parrondo, who discovered the paradox in 1996. A more explanatory description is:
There exist pairs of games, each with a higher probability of losing than winning, for which it is possible to construct a winning strategy by playing the games alternately.
Parrondo devised the paradox in connection with his analysis of the Brownian ratchet, a thought experiment about a machine that can purportedly extract energy from random heat motions popularized by physicist Richard Feynman. However, the paradox disappears when rigorously analyzed. Winning strategies consisting of various combinations of losing strategies were explored in biology before Parrondo's paradox was published.
Illustrative examples
According to Harmer and Abbott, other than the last two examples (i.e. the saw-tooth example and the coin-tossing example), the other examples do not truly exhibit Parrondo's paradox in its strictest definition. In particular, since Parrondo's paradox was physically motivated from the flashing Brownian ratchet, a skip free process must be used to properly preserve this ratchet action and so only a +1 or -1 unit payoff structure can be used. See the coin-tossing example for an explicit illustration of this.
The simple example
Consider two games Game A and Game B, with the following rules:
In Game A, you lose $1 every time you play.
In Game B, you count how much money you have left — if it is an even number you win $3, otherwise you lose $5.
Say you begin with $100 in your pocket. If you start playing Game A exclusively, you will obviously lose all your money in 100 rounds. Similarly, if you decide to play Game B exclusively, you will also lose all your money in 100 rounds.
However, consider playing the games alternately, starting with Game B, followed by A, then by B, and so on (BABABA...). It should be clear that this alternation steadily gains money, netting $2 for every BA pair, even though each game played exclusively loses money.
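A short Python simulation of the example (deterministic, exactly as described: Game A always loses $1; Game B wins $3 on an even balance and loses $5 otherwise):

```python
def play(sequence, start=100, rounds=100):
    """Simulate the deterministic games; `sequence` cycles, e.g. 'BA'."""
    money = start
    for n in range(rounds):
        if sequence[n % len(sequence)] == "A":
            money -= 1                             # Game A: always lose $1
        else:
            money += 3 if money % 2 == 0 else -5   # Game B
    return money

print(play("A"))   # 0   -- Game A alone loses everything
print(play("B"))   # 0   -- Game B alone also loses everything
print(play("BA"))  # 200 -- alternating BABA... gains $2 per pair
```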
|
https://en.wikipedia.org/wiki/Avoidance%20response
|
An avoidance response is a response that prevents an aversive stimulus from occurring. It is a kind of negative reinforcement. An avoidance response is a behavior based on the concept that animals will avoid performing behaviors that result in an aversive outcome. This can involve learning through operant conditioning when it is used as a training technique. It is a reaction to undesirable sensations or feedback that leads to avoiding the behavior that is followed by this unpleasant or fear-inducing stimulus.
Whether the aversive stimulus is brought on intentionally by another or is naturally occurring, it is adaptive to learn to avoid situations that have previously yielded negative outcomes. A simple example of this is conditioned food aversion, or the aversion developed to food that has previously resulted in sickness. Food aversions can also be conditioned using classical conditioning, so that an animal learns to avoid a stimulus previously neutral that has been associated with a negative outcome. This is displayed nearly universally in animals since it is a defense against potential poisoning. A wide variety of species, even slugs, have developed the ability to learn food aversions.
Experiments
An experiment conducted by Solomon and Wynne in 1953 shows the properties of negative reinforcement. The subjects, dogs, were put in a shuttle box (a chamber containing two rectangular compartments divided by a barrier a few inches high). The dogs had the ability to move freely between compartments by going over the barrier. Both compartments had a metal floor designed to administer an unpleasant electric shock. Each compartment also had a light above each, which would turn on and off. Every few minutes, the light in the room the dog was occupying was turned off, while the other remained on. If after 10 seconds in the dark, the dog did not move to the lit compartment, a shock was delivered to the floor of the room the dog was in. The shock continued until the dog mov
|
https://en.wikipedia.org/wiki/Gompertz%E2%80%93Makeham%20law%20of%20mortality
|
The Gompertz–Makeham law states that the human death rate is the sum of an age-dependent component (the Gompertz function, named after Benjamin Gompertz), which increases exponentially with age and an age-independent component (the Makeham term, named after William Makeham). In a protected environment where external causes of death are rare (laboratory conditions, low mortality countries, etc.), the age-independent mortality component is often negligible. In this case the formula simplifies to a Gompertz law of mortality. In 1825, Benjamin Gompertz proposed an exponential increase in death rates with age.
The Gompertz–Makeham law of mortality describes the age dynamics of human mortality rather accurately in the age window from about 30 to 80 years of age. At more advanced ages, some studies have found that death rates increase more slowly – a phenomenon known as the late-life mortality deceleration – but more recent studies disagree.
The decline in the human mortality rate before the 1950s was mostly due to a decrease in the age-independent (Makeham) mortality component, while the age-dependent (Gompertz) mortality component was surprisingly stable. Since the 1950s, a new mortality trend has started in the form of an unexpected decline in mortality rates at advanced ages and "rectangularization" of the survival curve.
The hazard function for the Gompertz–Makeham distribution is most often characterised as $h(x) = \alpha e^{\beta x} + \lambda$. The empirical magnitude of the beta-parameter is about 0.085, implying a doubling of mortality every $0.69/0.085 \approx 8$ years (Denmark, 2006).
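A small Python sketch of this hazard function and the implied mortality-doubling time (the $\alpha$ and $\lambda$ values below are arbitrary illustrative numbers, not the Danish estimates):

```python
import math

def gm_hazard(x, alpha, beta, lam):
    """Gompertz-Makeham hazard: h(x) = alpha * exp(beta * x) + lambda."""
    return alpha * math.exp(beta * x) + lam

beta = 0.085
print(math.log(2) / beta)   # ~8.2 years between doublings of mortality
print(gm_hazard(70.0, alpha=5e-5, beta=beta, lam=5e-4))  # example values
```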
The quantile function can be expressed in a closed-form expression using the Lambert W function.
The Gompertz law is the same as a Fisher–Tippett distribution for the negative of age, restricted to negative values for the random variable (positive values for age).
Future of human longevity
The birth year cohorts of those born after 1950 should be the first to see a considerable delay in the historical course of death,
|
https://en.wikipedia.org/wiki/H%C3%B3rreo
|
An hórreo is a typical granary from the northwest of the Iberian Peninsula (Asturias, Galicia, where it might be called a Galician granary, and Northern Portugal), built in wood or stone and raised from the ground on pillars to keep rodents out; the pillars end in flat staddle stones (vira-ratos in Galician, mueles or tornarratos in Asturian, or zubiluzea in Basque) that prevent access by rodents. Ventilation is allowed by the slits in its walls.
Names
In some areas, hórreos are known as horriu (Asturian), or as hórreo, paneira, canastro, piorno or cabazo (Galician), with further regional names in Leonese, Cantabrian, Portuguese and Basque.
Distribution
Hórreos are mainly found in the Northwest of Spain (Galicia and Asturias) and Northern Portugal. There are two main types of hórreo, rectangular-shaped, the more extended, usually found in Galicia and coastal areas of Asturias; and square-shaped hórreos from Asturias, León, western Cantabria and eastern Galicia.
Origins
The oldest document containing an image of an hórreo is the Cantigas de Santa Maria by Alfonso X "El Sabio" (song CLXXXVII) from the 13th century. In this depiction, three rectangular hórreos of gothic style are illustrated.
Types
There are several types of Asturian hórreo, according to the characteristics of the roof (thatched, tiled, slate, pitched or double pitched), the materials used for the pillars or the decoration. The oldest still standing date from the 15th century, and even nowadays they are built ex novo. There are an estimated 18,000 hórreos and paneras in Asturias, some are poorly preserved but there is a growing awareness from owners and authorities to maintain them in good shape.
The longest hórreo in Galicia is located in Carnota, A Coruña.
Other similar granary structures include Asturian paneras (basically, big hórreos with more than four pillars), cabaceiras (Galician round basketwork hórreo), trojes or in
|
https://en.wikipedia.org/wiki/Readers%E2%80%93writer%20lock
|
In computer science, a readers–writer lock (also called a single-writer lock, a multi-reader lock, a push lock, or an MRSW lock) is a synchronization primitive that solves one of the readers–writers problems. An RW lock allows concurrent access for read-only operations, whereas write operations require exclusive access. This means that multiple threads can read the data in parallel but an exclusive lock is needed for writing or modifying data. When a writer is writing the data, all other writers and readers will be blocked until the writer is finished writing. A common use might be to control access to a data structure in memory that cannot be updated atomically and is invalid (and should not be read by another thread) until the update is complete.
Readers–writer locks are usually constructed on top of mutexes and condition variables, or on top of semaphores.
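Python's standard library does not ship a readers–writer lock, but a minimal read-preferring one can be sketched on top of a mutex and condition variable, as just described (illustrative, not production-grade; note it can starve writers under sustained read load):

```python
import threading

class ReadersWriterLock:
    """Minimal read-preferring RW lock built on a mutex + condition."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0        # number of active readers
        self._writer = False     # whether a writer is active

    def acquire_read(self):
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()    # let a waiting writer proceed

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers > 0:
                self._cond.wait()
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()        # wake readers and writers
```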
Upgradable RW lock
Some RW locks allow the lock to be atomically upgraded from being locked in read-mode to write-mode, as well as being downgraded from write-mode to read-mode. Upgrading a lock from read-mode to write-mode is prone to deadlocks, since whenever two threads holding reader locks both attempt to upgrade to writer locks, a deadlock is created that can only be broken by one of the threads releasing its reader lock. The deadlock can be avoided by allowing only one thread to acquire the lock in "read-mode with intent to upgrade to write" while there are no threads in write mode and possibly non-zero threads in read-mode.
Priority policies
RW locks can be designed with different priority policies for reader vs. writer access. The lock can either be designed to always give priority to readers (read-preferring), to always give priority to writers (write-preferring) or be unspecified with regards to priority. These policies lead to different tradeoffs with regards to concurrency and starvation.
Read-preferring RW locks allow for maximum concurrency, but can lead to write-starvation if contention is high. This is becau
|
https://en.wikipedia.org/wiki/Ethyl%20benzoate
|
Ethyl benzoate, C9H10O2, is an ester formed by the condensation of benzoic acid and ethanol. It is a colorless liquid that is almost insoluble in water, but miscible with most organic solvents.
As with many volatile esters, ethyl benzoate has a pleasant odor described as sweet, wintergreen, fruity, medicinal, cherry and grape. It is a component of some fragrances and artificial fruit flavors.
Preparation
A simple and commonly used method for the preparation of ethyl benzoate in the laboratory is the acidic esterification of benzoic acid with ethanol, using sulfuric acid as catalyst:
C6H5COOH + CH3CH2OH → C6H5COOCH2CH3 + H2O
|
https://en.wikipedia.org/wiki/Mermin%E2%80%93Wagner%20theorem
|
In quantum field theory and statistical mechanics, the Mermin–Wagner theorem (also known as Mermin–Wagner–Hohenberg theorem, Mermin–Wagner–Berezinskii theorem, or Coleman theorem) states that continuous symmetries cannot be spontaneously broken at finite temperature in systems with sufficiently short-range interactions in dimensions $d \le 2$. Intuitively, this means that long-range fluctuations can be created with little energy cost, and since they increase the entropy, they are favored.
This is because if such a spontaneous symmetry breaking occurred, then the corresponding Goldstone bosons, being massless, would have an infrared divergent correlation function.
The absence of spontaneous symmetry breaking in $d \le 2$ dimensional infinite systems was rigorously proved by David Mermin, Herbert Wagner (1966), and Pierre Hohenberg (1967) in statistical mechanics and by Sidney Coleman (1973) in quantum field theory. That the theorem does not apply to discrete symmetries can be seen in the two-dimensional Ising model.
Introduction
Consider the free scalar field $\varphi$ of mass $m$ in two Euclidean dimensions. Its propagator is:
$G(x) = \langle \varphi(x)\, \varphi(0) \rangle = \int \frac{d^2 k}{(2\pi)^2} \, \frac{e^{i k \cdot x}}{k^2 + m^2}.$
For small $m$, $G$ is a solution to Laplace's equation with a point source:
$\nabla^2 G = \delta^2(x),$
since the propagator is the reciprocal of $k^2$ in $k$ space. To use Gauss's law, define the electric field analog to be $E = \nabla G$. The divergence of the electric field is zero. In two dimensions, using a large Gaussian ring, $E = \frac{1}{2\pi r}$, so that the function $G$ has a logarithmic divergence both at small and large $r$.
The interpretation of the divergence is that the field fluctuations cannot stay centred around a mean. If you start at a point where the field has the value 1, the divergence tells you that as you travel far away, the field is arbitrarily far from the starting value. This makes a two dimensional massless scalar field slightly tricky to define mathematically. If you define the field by a Monte Carlo simulation, it doesn't stay put, it slides to infinitely large values with time.
This happens in one dimension too, when the fi
|
https://en.wikipedia.org/wiki/Sinoconodon
|
Sinoconodon is an extinct genus of mammaliamorphs that appears in the fossil record of the Lufeng Formation of China in the Sinemurian stage of the Early Jurassic period, about 193 million years ago. While sharing many plesiomorphic traits with other non-mammaliaform cynodonts, it possessed a special, secondarily evolved jaw joint between the dentary and the squamosal bones, which in more derived taxa would replace the primitive tetrapod one between the articular and quadrate bones. The presence of a dentary-squamosal joint is a trait historically used to define mammals.
Description
This animal had a skull whose size suggests a presacral body length and weight comparable to those of the European hedgehog. Sinoconodon closely resembled early mammaliaforms like Morganucodon, but it is regarded as more basal, differing substantially from Morganucodon in its dentition and growth habits. Like most other non-mammalian tetrapods, such as reptiles and amphibians, it was polyphyodont, replacing many of its teeth throughout its lifetime, and it seems to have grown slowly but continuously until its death. It was thus somewhat less mammal-like than mammaliaforms such as morganucodonts and docodonts. The combination of basal tetrapod and mammalian features makes it a unique transitional fossil.
Taxonomy
Sinoconodon was named by Patterson and Olson in 1961. Its type is Sinoconodon rigneyi. It was assigned to Triconodontidae by Patterson and Olson in 1961; to Triconodonta by Jenkins and Crompton in 1979; to Sinoconodontidae by Carroll in 1988; to Mammaliamorpha by Wible in 1991; to Mammalia by Luo and Wu in 1994; to Mammalia by Kielan-Jaworowska et al. in 2004; and to Mammaliaformes by Luo et al. in 2001 and Bi et al. in 2014.
Phylogeny
|
https://en.wikipedia.org/wiki/Gravity%20%28alcoholic%20beverage%29
|
Gravity, in the context of fermenting alcoholic beverages, refers to the specific gravity (abbreviated SG), or relative density compared to water, of the wort or must at various stages in the fermentation. The concept is used in the brewing and wine-making industries. Specific gravity is measured by a hydrometer, refractometer, pycnometer or oscillating U-tube electronic meter.
The density of a wort is largely dependent on the sugar content of the wort. During alcohol fermentation, yeast converts sugars into carbon dioxide and alcohol. By monitoring the decline in SG over time the brewer obtains information about the health and progress of the fermentation and determines that it is complete when gravity stops declining. If the fermentation is finished, the specific gravity is called the final gravity (abbreviated FG). For example, for a typical strength beer, original gravity (abbreviated OG) could be 1.050 and FG could be 1.010.
Several different scales have been used for measuring the original gravity. For historical reasons, the brewing industry largely uses the Plato scale (°P), which is essentially the same as the Brix scale used by the wine industry. For example, OG 1.050 is roughly equivalent to 12°P.
By considering the original gravity, the brewer or vintner obtains an indication as to the probable ultimate alcoholic content of their product. The OE (Original Extract) is often referred to as the "size" of the beer and is, in Europe, often printed on the label, sometimes just as a per cent. In the Czech Republic, for example, common descriptions are "10 degree beers" and "12 degree beers", which refers to the gravity in Plato of the wort before the fermentation.
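Two widely used homebrewing approximations relate these quantities; a Python sketch (both formulas are rules of thumb, not exact definitions):

```python
def sg_to_plato(sg):
    """Approximate degrees Plato from specific gravity (cubic fit)."""
    return -616.868 + 1111.14 * sg - 630.272 * sg**2 + 135.997 * sg**3

def abv_estimate(og, fg):
    """Rule-of-thumb alcohol by volume from original and final gravity."""
    return (og - fg) * 131.25

print(round(sg_to_plato(1.050), 1))          # ~12.4 degrees Plato
print(round(abv_estimate(1.050, 1.010), 2))  # ~5.25 % ABV
```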
Low vs. high gravity beers
The difference between the original gravity of the wort and the final gravity of the beer is an indication of how much sugar has been turned into alcohol. The bigger the difference, the greater the amount of alcohol present and hence the stronger the beer. This is wh
|
https://en.wikipedia.org/wiki/Higman%27s%20lemma
|
In mathematics, Higman's lemma states that the set of finite sequences over a finite alphabet, as partially ordered by the subsequence relation, is well-quasi-ordered. That is, if $w_1, w_2, \ldots$ is an infinite sequence of words over some fixed finite alphabet, then there exist indices $i < j$ such that $w_i$ can be obtained from $w_j$ by deleting some (possibly none) symbols. More generally this remains true when the alphabet is not necessarily finite, but is itself well-quasi-ordered, and the subsequence relation allows the replacement of symbols by earlier symbols in the well-quasi-ordering of labels. This is a special case of the later Kruskal's tree theorem. It is named after Graham Higman, who published it in 1952.
Reverse-mathematical calibration
Higman's lemma has been reverse mathematically calibrated (in terms of subsystems of second-order arithmetic) as equivalent to $\mathrm{ACA}_0$ over the base theory $\mathrm{RCA}_0$.
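The subsequence (scattered subword) relation at the heart of the lemma is easy to state in code; a small Python sketch:

```python
def is_subsequence(v, w):
    """True if v can be obtained from w by deleting some symbols."""
    it = iter(w)
    return all(ch in it for ch in v)   # `in` advances the iterator

# Higman's lemma: any infinite list of words over a finite alphabet
# contains indices i < j with words[i] a subsequence of words[j].
assert is_subsequence("abc", "axbycz")
assert not is_subsequence("ba", "ab")
```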
|
https://en.wikipedia.org/wiki/Lozani%C4%87%27s%20triangle
|
Lozanić's triangle (sometimes called Losanitsch's triangle) is a triangular array of binomial coefficients in a manner very similar to that of Pascal's triangle. It is named after the Serbian chemist Sima Lozanić, who researched it in his investigation into the symmetries exhibited by rows of paraffins (archaic term for alkanes).
The first few lines of Lozanić's triangle are
1
1 1
1 1 1
1 2 2 1
1 2 4 2 1
1 3 6 6 3 1
1 3 9 10 9 3 1
1 4 12 19 19 12 4 1
1 4 16 28 38 28 16 4 1
1 5 20 44 66 66 44 20 5 1
1 5 25 60 110 126 110 60 25 5 1
1 6 30 85 170 236 236 170 85 30 6 1
1 6 36 110 255 396 472 396 255 110 36 6 1
1 7 42 146 365 651 868 868 651 365 146 42 7 1
1 7 49 182 511 1001 1519 1716 1519 1001 511 182 49 7 1
1 8 56 231 693 1512 2520 3235 3235 2520 1512 693 231 56 8 1
listed in the On-Line Encyclopedia of Integer Sequences.
Like Pascal's triangle, outer edge diagonals of Lozanić's triangle are all 1s, and most of the enclosed numbers are the sum of the two numbers above. But for numbers at odd positions k in even-numbered rows n (starting the numbering for both with 0), after adding the two numbers above, subtract the number at position (k − 1)/2 in row n/2 − 1 of Pascal's triangle.
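A direct Python transcription of this rule (Pascal's addition, with the stated Pascal-triangle correction at odd positions of even rows) reproduces the rows listed above:

```python
from math import comb

def lozanic_row(n, prev):
    """Row n of Lozanic's triangle, given row n - 1 as `prev`."""
    row = [1]
    for k in range(1, n):
        v = prev[k - 1] + prev[k]                 # sum of the two above
        if n % 2 == 0 and k % 2 == 1:             # even row, odd position
            v -= comb(n // 2 - 1, (k - 1) // 2)   # subtract Pascal entry
        row.append(v)
    row.append(1)
    return row

rows = [[1]]
for n in range(1, 9):
    rows.append(lozanic_row(n, rows[-1]))
print(rows[8])   # [1, 4, 16, 28, 38, 28, 16, 4, 1]
```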
The diagonals next to the edge diagonals contain the positive integers in order, but with each integer stated twice.
|
https://en.wikipedia.org/wiki/University%20Marine%20Biological%20Station%20Millport
|
The University Marine Biological Station Millport (UMBSM) was a higher education institution located on the island of Great Cumbrae in the Firth of Clyde, Scotland, and run by the University of London (of which it was a central academic body). It closed in 2013 and is now Millport Field Centre, run by the Field Studies Council.
Located just outside the town, it had an extensive curriculum and research programme, with an influx of students throughout the academic year. A Museum and Aquarium (named after the founder, David Robertson) were open to visitors. In May 2003, the station took delivery of the Macduff-built, 22-metre marine research vessel RV Aora. The station also functioned as a Meteorological Office Weather Station and Admiralty Tide Monitor.
History
The Ark, a lighter, was fitted out as a floating laboratory by the father of modern oceanography, Sir John Murray. She formed the Scottish Marine Station for 12 years from 1884. In 1885 she was moved from Granton and drawn up on the shore at Port Loy, Cumbrae. She attracted a stream of distinguished scientists, drawn by the richness of the fauna and flora of the Firth of Clyde, but closed in 1903.
In Millport, an amateur naturalist, David Robertson, was encouraged by meeting Anton Dohrn and by the wealth of findings from the Challenger expedition. In 1894 he formed a committee to build a marine station in Millport and took over The Ark. Millport Marine Biological Station was opened in 1897 by Sir John Murray. The Ark was totally destroyed by a great storm on the night of 20 January 1900.
On 21 July 1904 Scotia, the ship of Dr William Speirs Bruce's Scottish National Antarctic Expedition, returned to her first Scottish landing site, on the Isle of Cumbrae.
From this beginning the station was gradually built up to its present size. The original building proved too small for the purpose and an architectural copy was built alongside. From 1966 to 1987 the station ran under the Directorship of Ronald Ian Cur
|
https://en.wikipedia.org/wiki/Meckel%27s%20cartilage
|
In humans, the cartilaginous bar of the mandibular arch is formed by what are known as Meckel's cartilages (right and left) also known as Meckelian cartilages; above this the incus and malleus are developed. Meckel's cartilage arises from the first pharyngeal arch.
The dorsal end of each cartilage is connected with the ear-capsule and is ossified to form the malleus; the ventral ends meet each other in the region of the symphysis menti, and are usually regarded as undergoing ossification to form that portion of the mandible which contains the incisor teeth.
The intervening part of the cartilage disappears; the portion immediately adjacent to the malleus is replaced by fibrous membrane, which constitutes the sphenomandibular ligament, while from the connective tissue covering the remainder of the cartilage the greater part of the mandible is ossified.
Johann Friedrich Meckel the Younger discovered this cartilage in 1820.
Evolution
Meckel's cartilage is a piece of cartilage from which the mandibles (lower jaws) of vertebrates evolved. Originally it was the lower of two cartilages which supported the first branchial arch in early fish. Then it grew longer and stronger, and acquired muscles capable of closing the developing jaw.
In early fish and in chondrichthyans (cartilaginous fish such as sharks), the Meckelian cartilage continued to be the main component of the lower jaw. But in the adult forms of osteichthyans (bony fish) and their descendants (amphibians, reptiles, birds, and mammals), the cartilage is covered in bone – although in their embryos the jaw initially develops as the Meckelian cartilage. In all tetrapods the cartilage partially ossifies (changes to bone) at the rear end of the jaw and becomes the articular bone, which forms part of the jaw joint in all tetrapods except mammals.
In some extinct mammal groups like eutriconodonts, the Meckel's cartilage still connected otherwise entirely modern ear bones to the jaw.
|
https://en.wikipedia.org/wiki/Interlocus%20contest%20evolution
|
Interlocus contest evolution (ICE) is a process of intergenomic conflict by which different loci within a single genome antagonistically coevolve. ICE supposes that the Red Queen process, which is characterized by a never-ending antagonistic evolutionary arms race, does not only apply to species but also to genes within the genome of a species.
Because sexual recombination allows different gene loci to evolve semi-autonomously, genes have the potential to coevolve antagonistically. ICE occurs when "an allelic substitution at one locus selects for a new allele at the interacting locus, and vice versa." As a result, ICE can lead to a chain reaction of perpetual gene substitution at antagonistically interacting loci, and no stable equilibrium can be achieved. The rate of evolution thus increases at that locus.
ICE is thought to be the dominant mode of evolution for genes controlling social behavior. The ICE process can explain many biological phenomena, including intersexual conflict, parent–offspring conflict, and interference competition.
Intersexual conflict
A fundamental conflict between the sexes lies in differences in investment: males generally invest predominantly in fertilization while females invest predominantly in offspring. This conflict manifests itself in many traits associated with sexual reproduction. Genes expressed in only one sex are selectively neutral in the other sex; male- and female-linked genes can therefore be acted upon separately by selection and will evolve semi-autonomously. Thus, one sex of a species may evolve to better itself rather than better the species as a whole, sometimes with negative results for the opposite sex: loci will antagonistically coevolve to enhance male reproductive success at females’ expense on the one hand, and to enhance female resistance to male coercion on the other. This is an example of interlocus sexual conflict, and is unlikely to be resolved fully throughout the genome. However, in some cases this confl
|
https://en.wikipedia.org/wiki/Primitive%20streak
|
The primitive streak is a structure that forms in the early embryo in amniotes. In amphibians the equivalent structure is the blastopore. During early embryonic development, the embryonic disc becomes oval shaped, and then pear-shaped with the broad end towards the anterior, and the narrower region projected to the posterior. The primitive streak forms a longitudinal midline structure in the narrower posterior (caudal) region of the developing embryo on its dorsal side. At first formation the primitive streak extends for half the length of the embryo. In the human embryo this appears by stage 6, about 17 days.
The primitive streak establishes bilateral symmetry, determines the site of gastrulation, and initiates germ layer formation. To form the primitive streak, mesenchymal stem cells are arranged along the prospective midline, establishing the second embryonic axis and the site where cells will ingress and migrate during the process of gastrulation and germ layer formation.
The primitive streak extends through this midline and creates the left–right and cranial–caudal body axes. Gastrulation involves the ingression of mesoderm progenitors and their migration to their ultimate position, where they will differentiate into the mesoderm germ layer that, together with endoderm and ectoderm germ layers, will give rise to all the tissues of the adult organism.
Structure
The epiblast, a single epithelial layer of the bilaminar embryonic disc, is the source of all embryonic material in amniotes, and some of its cells will give rise to the primitive streak. In amphibians the equivalent structure is the blastopore. The primitive streak forms a longitudinal midline structure in the narrower caudal (posterior) region of the developing embryo on its dorsal side. At first formation the primitive streak extends for half the length of the embryo. In the human embryo this appears by Carnegie stage 6, about 17 days.
Towards the cranial (anterior) end of the disc the primitive
|
https://en.wikipedia.org/wiki/224%20%28number%29
|
224 (two hundred [and] twenty-four) is the natural number following 223 and preceding 225.
In mathematics
224 is a practical number,
and a sum of two positive cubes: 224 = 2³ + 6³. It is also 2³ + 3³ + 4³ + 5³, making it one of the smallest numbers to be the sum of distinct positive cubes in more than one way.
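A quick brute-force check (an illustration of our own, not from the article) confirms these are the only two ways to write 224 as a sum of distinct positive cubes:

from itertools import combinations

# Search all subsets of {1, ..., 6}; 7^3 = 343 already exceeds 224.
for size in range(2, 7):
    for combo in combinations(range(1, 7), size):
        if sum(n ** 3 for n in combo) == 224:
            print(combo)  # prints (2, 6) and (2, 3, 4, 5)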
224 is the smallest k with λ(k) = 24, where λ(k) is the Carmichael function.
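The Carmichael-function claim can likewise be verified directly; the sketch below (our own, with an illustrative function name) computes λ from the prime factorization and searches for the smallest k with λ(k) = 24:

from math import lcm

def carmichael(n: int) -> int:
    # lambda(2) = 1, lambda(4) = 2, lambda(2^e) = 2^(e-2) for e >= 3;
    # lambda(p^e) = p^(e-1) * (p - 1) for odd primes p; take the lcm over prime powers.
    result, m, e = 1, n, 0
    while m % 2 == 0:
        m //= 2
        e += 1
    if e == 2:
        result = 2
    elif e >= 3:
        result = 2 ** (e - 2)
    p = 3
    while p * p <= m:
        if m % p == 0:
            pe = 1
            while m % p == 0:
                m //= p
                pe *= p
            result = lcm(result, (pe // p) * (p - 1))
        p += 2
    if m > 1:  # remaining factor is prime
        result = lcm(result, m - 1)
    return result

print(next(k for k in range(1, 10_000) if carmichael(k) == 24))  # 224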
The mathematician and philosopher Alex Bellos suggested in 2014 that a candidate for the lowest uninteresting number would be 224 because it was, at the time, "the lowest number not to have its own page on [the English-language version of] Wikipedia".
In other areas
In the SHA-2 family of six cryptographic hash functions, the weakest is SHA-224, named because it produces 224-bit hash values. It was defined in this way so that the number of bits of security it provides (half of its output length, 112 bits) would match the key length of two-key Triple DES.
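As a quick illustration (not from the article), Python's standard hashlib module exposes SHA-224, and the digest length can be checked directly:

import hashlib

digest = hashlib.sha224(b"hello").digest()
print(len(digest) * 8)                       # 224 bits
print(hashlib.sha224(b"hello").hexdigest())  # 56 hexadecimal characters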
The ancient Phoenician shekel was a standardized measure of silver, equal to 224 grains, although other forms of the shekel employed in other ancient cultures (including the Babylonians and Hebrews) had different measures. Likely not coincidentally, silver was measured as far away as ancient Burma and Thailand in a unit called a tikal, equal to 224 grains.
See also
224 (disambiguation)
|
https://en.wikipedia.org/wiki/River%20ecosystem
|
River ecosystems are flowing waters that drain the landscape, and include the biotic (living) interactions amongst plants, animals and micro-organisms, as well as abiotic (nonliving) physical and chemical interactions of its many parts. River ecosystems are part of larger watershed networks or catchments, where smaller headwater streams drain into mid-size streams, which progressively drain into larger river networks. The major zones in river ecosystems are determined by the river bed's gradient or by the velocity of the current. Faster moving turbulent water typically contains greater concentrations of dissolved oxygen, which supports greater biodiversity than the slow-moving water of pools. These distinctions form the basis for the division of rivers into upland and lowland rivers.
The food base of streams within riparian forests is mostly derived from the trees, but wider streams and those that lack a canopy derive the majority of their food base from algae. Anadromous fish are also an important source of nutrients. Environmental threats to rivers include loss of water, dams, chemical pollution and introduced species. A dam produces negative effects that continue down the watershed. The most important negative effects are the reduction of spring flooding, which damages wetlands, and the retention of sediment, which leads to the loss of deltaic wetlands.
River ecosystems are prime examples of lotic ecosystems. Lotic refers to flowing water, from the Latin , meaning washed. Lotic waters range from springs only a few centimeters wide to major rivers kilometers in width. Much of this article applies to lotic ecosystems in general, including related lotic systems such as streams and springs. Lotic ecosystems can be contrasted with lentic ecosystems, which involve relatively still terrestrial waters such as lakes, ponds, and wetlands. Together, these two ecosystems form the more general study area of freshwater or aquatic ecology.
The following unifying characterist
|
https://en.wikipedia.org/wiki/Dryness%20%28taste%29
|
Dryness is a property of beverages that describes the lack of a sweet taste. This may be due to a lack of sugars, the presence of some other taste that masks sweetness, or an underabundance of simple carbohydrates that can be converted to sugar by enzymes in the mouth (amylase in particular). The term "dry" may be applied to types of beer, wine, cider, distilled spirits, or any other beverage.
In a dry martini, "dry" originally referred to the inclusion of dry gin; however, it is often incorrectly used to refer to the amount of vermouth used in the drink. A "perfect" martini – or any other cocktail that uses vermouth, such as a Perfect Manhattan – is made with equal parts dry and sweet vermouth.
|
https://en.wikipedia.org/wiki/IBM%2037xx
|
IBM 37xx (or 37x5) is a family of IBM Systems Network Architecture (SNA) programmable communications controllers used mainly in mainframe environments.
All members of the family ran one of three IBM-supplied programs.
Emulation Program (EP) mimicked the operation of the older IBM 270x non-programmable controllers.
Network Control Program (NCP) supported Systems Network Architecture devices.
Partitioned Emulation Program (PEP) combined the functions of the two.
Models
370x series
3705 — the oldest of the family, introduced in 1972 to replace the non-programmable IBM 270x family. The 3705 could control up to 352 communications lines.
3704 was a smaller version, introduced in 1973. It supported up to 32 lines.
371x
The 3710 communications controller was introduced in 1984.
372x series
The 3725 and the 3720 systems were announced in 1983. The 3725 replaced the hardware line scanners used on previous 370x machines with multiple microcoded processors.
The 3725 was a large-scale node and front end processor.
The 3720 was a smaller version of the 3725, which was sometimes used as a remote concentrator.
The 3726 was an expansion unit for the 3725.
With the expansion unit, the 3725 could support up to 256 lines at data rates up to 256 kbit/s, and connect to up to eight mainframe channels.
Marketing of the 372x machines was discontinued in 1989.
IBM discontinued support for the 3705, 3720, 3725 in 1999.
374x series
The 3745, announced in 1988, provides up to eight T1 circuits. At the time of the announcement, IBM was estimated to hold nearly 85% of the more than US$825 million market for communications controllers, ahead of rivals such as NCR Comten and Amdahl Corporation. The 3745 is no longer marketed, but is still supported and used.
The 3746 "Nways Controller" model 900, unveiled in 1992, was an expansion unit for the 3745 supporting additional Token Ring and ESCON connections. A stand-alone model 950 appeared in 1995.
Successors
IBM no longer manufactures 37xx p
|
https://en.wikipedia.org/wiki/Ethnopsychopharmacology
|
A growing body of research has begun to highlight differences in the way racial and ethnic groups respond to psychiatric medication.
It has been noted that there are "dramatic cross-ethnic and cross-national variations in the dosing practices and side-effect profiles in response to practically all classes of psychotropics."
Differences in drug metabolism
Drug metabolism is controlled by a number of specific enzymes, and the action of these enzymes varies among individuals. For example, most individuals show normal activity of the IID6 isoenzyme that is responsible for the metabolism of many tricyclic antidepressant medications and most antipsychotic drugs. However, studies have found that one-third of Asian Americans and African Americans have a genetic alteration that decreases the metabolic rate of the IID6 isoenzyme, leading to a greater risk of side effects and toxicity. The CYP2D6 enzyme, important for the way in which the liver clears many drugs from the body, varies greatly between individuals in ways that can be ethnically specific.
Though enzyme activity is genetically influenced, it can also be altered by cultural and environmental factors such as diet, the use of other medications, alcohol and disease states.
Differences in pharmacodynamics
If two individuals have the same blood level of a medication there may still be differences in the way that the body responds due to pharmacodynamic differences; pharmacodynamic responses may also be influenced by racial and cultural factors.
Cultural factors
In addition to biology and environment, culturally determined attitudes toward illness and its treatment may affect how an individual responds to psychiatric medication. Some cultures see suffering and illness as unavoidable and not amenable to medication, while others treat symptoms with polypharmacy, oft
|
https://en.wikipedia.org/wiki/Plant%20defense%20against%20herbivory
|
Plant defense against herbivory or host-plant resistance (HPR) is a range of adaptations evolved by plants which improve their survival and reproduction by reducing the impact of herbivores. Plants can sense being touched, and they can use several strategies to defend against damage caused by herbivores. Many plants produce secondary metabolites, known as allelochemicals, that influence the behavior, growth, or survival of herbivores. These chemical defenses can act as repellents or toxins to herbivores or reduce plant digestibility. Another defensive strategy of plants is changing their attractiveness. To prevent overconsumption by large herbivores, plants alter their appearance by changing their size or quality, reducing the rate at which they are consumed.
Other defensive strategies used by plants include escaping or avoiding herbivores at any time and in any place, for example by growing in a location where plants are not easily found or accessed by herbivores, or by changing seasonal growth patterns. Another approach diverts herbivores toward eating non-essential parts or enhances the ability of a plant to recover from the damage caused by herbivory. Some plants encourage the presence of natural enemies of herbivores, which in turn protect the plant. Each type of defense can be either constitutive (always present in the plant) or induced (produced in reaction to damage or stress caused by herbivores).
Historically, insects have been the most significant herbivores, and the evolution of land plants is closely associated with the evolution of insects. While most plant defenses are directed against insects, other defenses have evolved that are aimed at vertebrate herbivores, such as birds and mammals. The study of plant defenses against herbivory is important, not only from an evolutionary viewpoint, but also for the direct impact that these defenses have on agriculture, including human and livestock food sources; as beneficial 'biological control agents' in biologica
|
https://en.wikipedia.org/wiki/Limitation%20of%20size
|
In the philosophy of mathematics, specifically the philosophical foundations of set theory, limitation of size is a concept developed by Philip Jourdain and/or Georg Cantor to avoid Cantor's paradox. It identifies certain "inconsistent multiplicities", in Cantor's terminology, that cannot be sets because they are "too large". In modern terminology these are called proper classes.
Use
The axiom of limitation of size is an axiom in some versions of von Neumann–Bernays–Gödel set theory or Morse–Kelley set theory. This axiom says that any class that is not "too large" is a set, and a set cannot be "too large". "Too large" is defined as being large enough that the class of all sets can be mapped one-to-one into it.
|
https://en.wikipedia.org/wiki/Overdispersion
|
In statistics, overdispersion is the presence of greater variability (statistical dispersion) in a data set than would be expected based on a given statistical model.
A common task in applied statistics is choosing a parametric model to fit a given set of empirical observations. This necessitates an assessment of the fit of the chosen model. It is usually possible to choose the model parameters in such a way that the theoretical population mean of the model is approximately equal to the sample mean. However, especially for simple models with few parameters, theoretical predictions may not match empirical observations for higher moments. When the observed variance is higher than the variance of a theoretical model, overdispersion has occurred. Conversely, underdispersion means that there was less variation in the data than predicted. Overdispersion is a very common feature in applied data analysis because in practice, populations are frequently heterogeneous (non-uniform) contrary to the assumptions implicit within widely used simple parametric models.
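A small simulation (our own illustration; the parameter values are arbitrary) makes the contrast concrete: Poisson counts with a fixed rate have variance roughly equal to the mean, while letting the rate itself vary (a gamma–Poisson mixture, i.e. a negative binomial, as discussed in the Poisson example below) inflates the variance:

import numpy as np

rng = np.random.default_rng(0)

# Equidispersed: Poisson counts with a fixed rate have variance ~ mean.
poisson = rng.poisson(lam=5.0, size=100_000)

# Overdispersed: the rate itself is random (gamma-Poisson mixture),
# so the extra heterogeneity inflates the variance.
rates = rng.gamma(shape=2.0, scale=2.5, size=100_000)  # mean rate = 5.0
mixed = rng.poisson(lam=rates)

print(poisson.mean(), poisson.var())  # about 5 and 5
print(mixed.mean(), mixed.var())      # about 5 and 17.5 (= 5 + 5^2 / 2)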
Examples
Poisson
Overdispersion is often encountered when fitting very simple parametric models, such as those based on the Poisson distribution. The Poisson distribution has one free parameter and does not allow for the variance to be adjusted independently of the mean. The choice of a distribution from the Poisson family is often dictated by the nature of the empirical data. For example, Poisson regression analysis is commonly used to model count data. If overdispersion is a feature, an alternative model with additional free parameters may provide a better fit. In the case of count data, a Poisson mixture model like the negative binomial distribution can be proposed instead, in which the mean of the Poisson distribution can itself be thought of as a random variable drawn – in this case – from the gamma distribution thereby introducing an additional free parameter (note the resulting negative binomial distribution
|
https://en.wikipedia.org/wiki/Phytogeography
|
Phytogeography (from Greek φυτόν, phytón = "plant" and γεωγραφία, geographía = "geography", also in the sense of distribution) or botanical geography is the branch of biogeography that is concerned with the geographic distribution of plant species and their influence on the earth's surface. Phytogeography is concerned with all aspects of plant distribution, from the controls on the distribution of individual species ranges (at both large and small scales, see species distribution) to the factors that govern the composition of entire communities and floras. Geobotany, by contrast, focuses on the geographic space's influence on plants.
Fields
Phytogeography is part of a more general science known as biogeography. Phytogeographers are concerned with patterns and processes in plant distribution. Most of the major questions and kinds of approaches taken to answer such questions are held in common between phyto- and zoogeographers.
Phytogeography in the wider sense (or geobotany, in German literature) encompasses four fields, focused respectively on environment, flora (taxa), vegetation (plant community), and origin:
plant ecology (or mesology – however, the physiognomic-ecological approach on vegetation and biome study are also generally associated with this field);
plant geography (or phytogeography in strict sense, chorology, floristics);
plant sociology (or phytosociology, synecology – however, this field does not prescind from flora study, as its approach to study vegetation relies upon a fundamental unit, the plant association, which is defined upon flora).
historical plant geography (or paleobotany, paleogeobotany)
Phytogeography is often divided into two main branches: ecological phytogeography and historical phytogeography. The former investigates the role of current day biotic and abiotic interactions in influencing plant distributions; the latter are concerned with historical reconstruction of the origin, dispersal, and extinction of taxa.
Ove
|
https://en.wikipedia.org/wiki/Dimethyl%20dicarbonate
|
Dimethyl dicarbonate (DMDC) is a colorless liquid at room temperature with a pungent odor at high concentration. It is primarily used as a beverage preservative, processing aid, or sterilant (INS No. 242), being highly active against typical beverage-spoiling microorganisms such as yeast, bacteria, and mould.
Usage
Dimethyl dicarbonate is used to stabilize beverages by preventing microbial spoilage. It can be used in various non-alcoholic as well as alcoholic drinks like wine, cider, beer-mix beverages or hard seltzers. Beverage spoiling microbes are killed by methoxycarbonylation of proteins.
It acts by inhibiting enzymes involved in the microbial metabolism, e.g. acetate kinase and L-glutamic acid decarboxylase. It has also been proposed that DMDC inhibits the enzymes alcohol dehydrogenase and glyceraldehyde 3-phosphate dehydrogenase by causing the methoxycarbonylation of their histidine components.
In wine, it is often used to replace potassium sorbate, as it inactivates wine spoilage yeasts such as Brettanomyces. Once it has been added to beverages, the efficacy of the chemical is provided by the following reactions:
DMDC + water → methanol + carbon dioxide
DMDC + ethanol → ethyl methyl carbonate
DMDC + ammonia → methyl carbamate
DMDC + amino acid → methoxycarbonylated derivatives
The application of DMDC is particularly useful when wine needs to be sterilized but cannot be sterile filtered, pasteurized, or sulfured. DMDC is also used to stabilize non-alcoholic beverages such as carbonated or non-carbonated juice beverages, isotonic sports beverages, iced teas and flavored waters. DMDC is produced by Lanxess under the trade name Velcorin®.
DMDC is added before the filling of the beverage. It then breaks down into small amounts of methanol and carbon dioxide, which are both natural constituents of fruit and vegetable juices.
The EU Scientific Committee on Food, the FDA in the United States and the JECFA of the WHO have confirmed the safe use in beverages. The FDA approved i
|
https://en.wikipedia.org/wiki/PNP%20agar
|
PNP Agar is an agar medium used in microbiology to identify Staphylococcus species that have phosphatase activity. The medium changes color when p-nitrophenylphosphate disodium (PNP) is dephosphorylated.
PNP agar is composed of Mueller–Hinton agar buffered to pH 5.6 to 5.8, with the addition of 0.495 mg/mL PNP.
|
https://en.wikipedia.org/wiki/Aigo
|
Beijing Huaqi Information Digital Technology Co., Ltd, trading as Aigo (stylized as aigo), is a Chinese consumer electronics company. It is headquartered in the Ideal Plaza () in Haidian District, Beijing.
History
Beijing Huaqi Information Digital Technology Co Ltd (北京华旗资讯科技发展有限公司) is a consumer electronics manufacturer headquartered in Beijing. It was founded by Féng Jūn, who is the current president, in 1993. The company initially produced keyboards. aigo may be participating in a trend that sees Chinese nationals preferring to purchase locally produced durable goods.
Products
aigo's products include MIDs, digital media players, computer cases, digital cameras, CPU cooling fans, computer peripherals, and monitors.
Subsidiaries
aigo has 27 subsidiaries and several R&D facilities.
aigo Music
Established in 1993 and located in Beijing, aigo Music operates a digital music service much like iTunes. The first of its kind in China, it is, as of 2009, the biggest portal for legal downloading of music in the country. Strategic partnerships with Warner Music, EMI and Sony allow a wide range of music to be offered at 0.99 yuan per song.
Beijing aifly Education and Technology Co Ltd
aigo set up this English as a Second Language brand with help from Crazy English founder Li Yang.
Beijing aigo Digital Animation Institution
An aigo subsidiary that specializes in 3D animated films.
Huaqi Information Technology (Singapore) Pte Ltd
Set up in October 2003, it operates two official aigo outlet stores in Singapore.
Shenzhen aigo R&D Co Ltd
Established in 2006, this Shenzhen-based research and development facility focuses on the development of mobile multimedia software.
Sponsorships
aigo is a sponsor of a number of sporting events, the majority involving automobile racing.
Motorsport
aigo was an official partner of the Vodafone McLaren Mercedes Formula One team.
As of 2008, aigo sponsors Chinese driver "Frankie"
|
https://en.wikipedia.org/wiki/Lateral%20cutaneous%20nerve%20of%20forearm
|
The lateral cutaneous nerve of forearm (or lateral antebrachial cutaneous nerve) is a sensory nerve representing the continuation of the musculocutaneous nerve beyond the lateral edge of the tendon of the biceps brachii muscle. The lateral cutaneous nerve provides sensory innervation to the skin of the lateral forearm. It pierces the deep fascia of forearm to enter the subcutaneous compartment before splitting into a volar branch and a dorsal branch.
Anatomy
Course and relations
It passes behind the cephalic vein and divides opposite the elbow-joint into a volar branch and a dorsal branch.
Branches
Volar branch
The volar branch (ramus volaris; anterior branch) descends along the radial border of the forearm to the wrist, and supplies the skin over the lateral half of its volar surface.
At the wrist-joint it is placed in front of the radial artery, and some filaments, piercing the deep fascia, accompany that vessel to the dorsal surface of the carpus.
The nerve then passes downward to the ball of the thumb, where it ends in cutaneous filaments.
It communicates with the superficial branch of the radial nerve, and with the palmar cutaneous branch of the median nerve.
Dorsal branch
The dorsal branch (ramus dorsalis; posterior branch) descends, along the dorsal surface of the radial side of the forearm to the wrist.
It supplies the skin of the lower two-thirds of the dorso-lateral surface of the forearm, communicating with the superficial branch of the radial nerve and with the posterior cutaneous nerve of forearm, a branch of the radial nerve.
See also
Medial cutaneous nerve of forearm
Superior lateral cutaneous nerve of arm
|
https://en.wikipedia.org/wiki/Medial%20cutaneous%20nerve%20of%20forearm
|
The medial cutaneous nerve of the forearm (also known as the medial antebrachial cutaneous nerve) is a sensory branch of the medial cord of the brachial plexus derived from the ventral rami of spinal nerves C8-T1. It provides sensory innervation to the skin of the medial forearm and skin overlying the olecranon. It descends through the (upper) arm within the brachial fascia alongside the basilic vein, then divides into an anterior branch and a posterior branch upon emerging from the brachial fascia; the two terminal branches travel as far distally as the wrist.
Anatomy
Course and relations
It gives off a branch near the axilla, which pierces the fascia and supplies the skin covering the biceps brachii, nearly as far as the elbow.
The nerve then runs down the ulnar side of the arm medial to the brachial artery, pierces the deep fascia with the basilic vein, about the middle of the arm, and divides into a volar and an ulnar branch.
Branches
Volar branch
The volar branch (ramus volaris; anterior branch), the larger, passes usually in front of, but occasionally behind, the vena mediana cubiti (median basilic vein).
It then descends on the front of the ulnar side of the forearm, distributing filaments to the skin as far as the wrist, and communicating with the palmar cutaneous branch of the ulnar nerve.
Ulnar branch
The ulnar branch (ramus ulnaris; posterior branch) passes obliquely downward on the medial side of the basilic vein, in front of the medial epicondyle of the humerus, to the back of the forearm, and descends on its ulnar side as far as the wrist, distributing filaments to the skin.
It communicates with the medial brachial cutaneous, the dorsal antebrachial cutaneous branch of the radial, and the dorsal branch of the ulnar.
See also
Dorsal antebrachial cutaneous nerve
Lateral antebrachial cutaneous nerve
Medial brachial cutaneous nerve
|
https://en.wikipedia.org/wiki/Medial%20cutaneous%20nerve%20of%20arm
|
The medial brachial cutaneous nerve (lesser internal cutaneous nerve; medial cutaneous nerve of arm) is a sensory branch of the medial cord of the brachial plexus derived from spinal nerves C8-T1. It provides sensory innervation to the medial arm. It descends accompanied by the basilic vein.
Anatomy
Origin
It is the smallest and most medial branch of the brachial plexus and, arising from the medial cord, receives its fibers from the eighth cervical and first thoracic spinal nerves.
Course
It passes through the axilla, at first lying behind, and then medial to the axillary vein, and communicates with the intercostobrachial nerve.
It descends along the medial side of the brachial artery to the middle of the arm, where it pierces the deep fascia, and is distributed to the skin of the back of the lower third of the arm, extending as far as the elbow, where some filaments are lost in the skin in front of the medial epicondyle, and others over the olecranon.
It communicates with the ulnar branch of the medial antebrachial cutaneous nerve.
Eponym
The term nerve of Wrisberg (after Heinrich August Wrisberg) has been used to describe this nerve.
However, the term "nerve of Wrisberg" can also refer to the nervus intermedius branch of the facial nerve.
See also
Superior lateral cutaneous nerve of arm
Inferior lateral cutaneous nerve of arm
Posterior cutaneous nerve of arm
|
https://en.wikipedia.org/wiki/International%20Centre%20for%20Theoretical%20Physics
|
The Abdus Salam International Centre for Theoretical Physics (ICTP) is an international research institute for physical and mathematical sciences that operates under a tripartite agreement between the Italian Government, United Nations Educational, Scientific and Cultural Organization (UNESCO), and International Atomic Energy Agency (IAEA). It is located near the Miramare Park, about 10 kilometres from the city of Trieste, Italy. The centre was founded in 1964 by Pakistani Nobel Laureate Abdus Salam.
ICTP is part of the Trieste System, a network of national and international scientific institutes in Trieste, promoted by the Italian physicist Paolo Budinich.
Mission
Foster the growth of advanced studies and research in physical and mathematical sciences, especially in support of excellence in developing countries;
Develop high-level scientific programmes keeping in mind the needs of developing countries, and provide an international forum of scientific contact for scientists from all countries;
Conduct research at the highest international standards and maintain a conducive environment of scientific inquiry for the entire ICTP community.
Research
Research at ICTP is carried out by seven scientific sections:
High Energy, Cosmology and Astroparticle Physics
Condensed Matter and Statistical Physics
Mathematics
Earth System Physics
Science, Technology and Innovation
Quantitative Life Sciences
New Research Areas (which includes studies related to Energy and Sustainability and Computing Sciences)
The scientific community at ICTP includes staff research scientists, postdoctoral fellows and long- and short-term visitors engaged in independent or collaborative research. Throughout the year, the sections organize conferences, workshops, seminars and colloquiums in their respective fields. ICTP also has visitor programmes specifically for scientific visitors from developing countries, including programmes under federation and associateship schemes.
Postgraduate progra
|
https://en.wikipedia.org/wiki/Stealth%20wallpaper
|
For computer network security, stealth wallpaper is a material designed to prevent an indoor Wi-Fi network from extending or "leaking" to the outside of a building, where malicious persons may attempt to eavesdrop or attack a network. While it is simple to prevent all electronic signals from passing through a building by covering the interior with metal, stealth wallpaper accomplishes the more difficult task of blocking Wi-Fi signals while still allowing cellphone signals to pass through.
The first stealth wallpaper was originally designed by UK defense contractor BAE Systems.
In 2012, The Register reported that a commercial wallpaper had been developed by Institut Polytechnique Grenoble and the Centre Technique du Papier, with sale planned for 2013. This wallpaper blocks three selected Wi-Fi frequencies. Nevertheless, it allows GSM and 4G signals to pass through, so cell phone use remains unaffected by the wallpaper.
See also
Electromagnetic shielding
Faraday cage
TEMPEST
Wallpaper
Wireless security
|
https://en.wikipedia.org/wiki/Fritz%20John
|
Fritz John (14 June 1910 – 10 February 1994) was a German-born mathematician specialising in partial differential equations and ill-posed problems. His early work was on the Radon transform and he is remembered for John's equation. He was a 1984 MacArthur Fellow.
Life and career
John was born in Berlin, Imperial Germany, the son of Hedwig (née Bürgel) and Hermann Jacobson-John. He studied mathematics from 1929 to 1933 in Göttingen, where he was influenced by Richard Courant, among others. Following Hitler's rise to power in 1933, "non-Aryans" were expelled from teaching posts, and John, being half Jewish, emigrated from Germany to England.
John published his first paper in 1934 on Morse theory. He was awarded his doctorate in 1934 with a thesis entitled Determining a function from its integrals over certain manifolds from Göttingen. With Richard Courant's assistance he spent a year at St John's College, Cambridge. During this time he published papers on the Radon transform, a theme to which he would return throughout his career.
John was appointed an assistant professor at the University of Kentucky in 1935 and he emigrated to the United States, becoming naturalised in 1941. He stayed at Kentucky until 1946, apart from between 1943 and 1945, during which he did war service for the Ballistic Research Laboratory at the Aberdeen Proving Ground in Maryland. In 1946 he moved to New York University, where he remained for the rest of his career.
Throughout the 1940s and 1950s he continued to work on the Radon transform, in particular its application to linear partial differential equations, convex geometry, and the mathematical theory of water waves. He also worked in numerical analysis and on ill-posed problems. His textbook on partial differential equations was highly influential and was re-edited many times. He made several contributions to convex geometry, including his famous result that within every convex body there is one unique ellipsoid of maximal vol
|
https://en.wikipedia.org/wiki/Microspore
|
Microspores are land plant spores that develop into male gametophytes, whereas megaspores develop into female gametophytes. The male gametophyte gives rise to sperm cells, which are used for fertilization of an egg cell to form a zygote. Megaspores are structures that are part of the alternation of generations in many seedless vascular cryptogams, all gymnosperms and all angiosperms. Plants with heterosporous life cycles using microspores and megaspores arose independently in several plant groups during the Devonian period. Microspores are haploid, and are produced from diploid microsporocytes by meiosis.
Morphology
The microspore has three different types of wall layers. The outer layer is called the perispore, the next is the exospore, and the inner layer is the endospore. The perispore is the thickest of the three layers while the exospore and endospore are relatively equal in width.
Seedless vascular plants
In heterosporous seedless vascular plants, modified leaves called microsporophylls bear microsporangia containing many microsporocytes that undergo meiosis, each producing four microspores. Each microspore may develop into a male gametophyte consisting of a somewhat spherical antheridium within the microspore wall. Either 128 or 256 sperm cells with flagella are produced in each antheridium. The only heterosporous ferns are aquatic or semi-aquatic, including the genera Marsilea, Regnellidium, Pilularia, Salvinia, and Azolla. Heterospory also occurs in the lycopods in the spikemoss genus Selaginella and in the quillwort genus Isoëtes.
Types of seedless vascular plants:
Water ferns
Spikemosses
Quillworts
Gymnosperms
In seed plants the microspores develop into pollen grains each containing a reduced, multicellular male gametophyte. The megaspores, in turn, develop into reduced female gametophytes that produce egg cells that, once fertilized, develop into seeds. Pollen cones or microstrobili usually develop toward the tips of the lower branches in cluste
|
https://en.wikipedia.org/wiki/Ventriculitis
|
Ventriculitis is the inflammation of the ventricles in the brain. The ventricles are responsible for containing and circulating cerebrospinal fluid throughout the brain. Ventriculitis is caused by infection of the ventricles, leading to swelling and inflammation. This is especially prevalent in patients with external ventricular drains and intraventricular stents. Ventriculitis can cause a wide variety of short-term symptoms and long-term side effects, ranging from headaches and dizziness to unconsciousness and death if not treated early. It is treated with an appropriate combination of antibiotics in order to rid the patient of the underlying infection. Much of the current research involving ventriculitis focuses on defining the disease and its causes, which should enable further advances in the field. Considerable attention is also being paid to possible treatments and prevention methods to help make the disease less prevalent and dangerous.
Signs and symptoms
There is a great deal of variety in the symptoms associated with ventriculitis. The symptoms vary based on a number of different factors, including the severity of inflammation, the underlying cause, and the individual patient.
Patients often present with headaches, painful cranial pressure, and neck pain early in the progression of the disease. Patients with a more advanced infection have been known to complain of many neurological effects such as dizziness, vertigo, confusion, and slurred speech. Very advanced cases can lead to mental instability, nausea, vomiting, rigors, and temporary loss of consciousness. Many patients with ventriculitis also experience some degree of hydrocephalus, which is the buildup of cerebrospinal fluid due to the inability of the ventricles to reabsorb and correctly circulate the fluid. Brain abscess is another common disorder resulting from the inflammation. If left untreated, ventriculitis can lead to serious inhibition of mental function and even
|
https://en.wikipedia.org/wiki/Alor%20Setar%20Tower
|
Alor Setar Tower, also known as Kedah Tower, is a 4-story, 165.5-meter-tall telecommunication tower in Alor Setar, Kedah, Malaysia. It is Malaysia's third tallest tower and the tallest structure in both the state and the city.
Apart from serving as a telecommunication tower, it also caters to tourists visiting the town and houses several restaurants and a souvenir shop. The tower is also used as an observatory for sighting the crescent moon that marks the beginning of the Muslim months of Ramadhan, Shawwal, and Zulhijjah, for the observances of Ramadhan, Hari Raya Aidilfitri and Hari Raya Aidiladha respectively.
The observation deck is at a height of from the base of the structure.
Also, the open deck or skydeck is at a height of and the antenna tower is with the height of .
History
Before Alor Setar Tower was built, the tallest structure in Kedah was the Holiday Villa Alor Setar. Located at the Alor Setar City Centre, it was built in 1994, and was the second structure in the Northern Region of Peninsular Malaysia, after Penang's Komtar, to reach such a height. Construction of Alor Setar Tower started in 1994; topped out in 1995, it surpassed the Holiday Villa Alor Setar, becoming the third structure to pass that mark. The tower was completed in 1996 and opened on 14 August 1997 by Mahathir Mohamad, who was the prime minister at the time.
Following the completion of the Alor Setar Tower, many high-rise and low-rise buildings started taking shape, including Amansuri Residences, Aman Central, SADA Tower, G Residence Alor Setar, VIVRE Residence, AS100, and AMEX Alor Mengkudu.
Channels listed by frequency
Television
TV Alhijrah UHF 559.25 MHz (Ch 32)
Radio
Fly FM 99.1 MHz
Buletin FM 107.3 MHz
|
https://en.wikipedia.org/wiki/Arc%20converter
|
The arc converter, sometimes called the arc transmitter, or Poulsen arc after Danish engineer Valdemar Poulsen who invented it in 1903, was a variety of spark transmitter used in early wireless telegraphy. The arc converter used an electric arc to convert direct current electricity into radio frequency alternating current. It was used as a radio transmitter from 1903 until the 1920s when it was replaced by vacuum tube transmitters. One of the first transmitters that could generate continuous sinusoidal waves, it was one of the first technologies used to transmit sound (amplitude modulation) by radio. It is on the list of IEEE Milestones as a historic achievement in electrical engineering.
History
Elihu Thomson discovered that a carbon arc shunted with a series tuned circuit would "sing". This "singing arc" was probably limited to audio frequencies. The Bureau of Standards credits William Duddell with the shunt resonant circuit around 1900.
The English engineer William Duddell discovered how to make a resonant circuit using a carbon arc lamp. Duddell's "musical arc" operated at audio frequencies, and Duddell himself concluded that it was impossible to make the arc oscillate at radio frequencies.
Valdemar Poulsen succeeded in raising the efficiency and frequency to the desired level. Poulsen's arc could generate frequencies of up to 200 kilohertz and was patented in 1903.
After a few years of development the arc technology was transferred to Germany and Great Britain in 1906 by Poulsen, his collaborator Peder Oluf Pedersen and their financial backers. In 1909 the American patents as well as a few arc converters were bought by Cyril Frank Elwell. The subsequent development in Europe and the United States was rather different, since in Europe there were severe difficulties for many years implementing the Poulsen technology, whereas in the United States an extended commercial radiotelegraph system was soon established with the Federal Telegraph Company. Later the US Nav
|
https://en.wikipedia.org/wiki/Fixation%20index
|
The fixation index (FST) is a measure of population differentiation due to genetic structure. It is frequently estimated from genetic polymorphism data, such as single-nucleotide polymorphisms (SNP) or microsatellites. Developed as a special case of Wright's F-statistics, it is one of the most commonly used statistics in population genetics. Its values range from 0 to 1, with values above roughly 0.15 indicating substantial differentiation and 1 indicating complete differentiation.
Interpretation
This comparison of genetic variability within and between populations is frequently used in applied population genetics. The values range from 0 to 1. A zero value implies complete panmixis; that is, that the two populations are interbreeding freely. A value of one implies that all genetic variation is explained by the population structure, and that the two populations do not share any genetic diversity.
For idealized models such as Wright's finite island model, FST can be used to estimate migration rates. Under that model, the migration rate is

M ≈ (1/FST − 1) / 4,

where M = m/μ; m is the migration rate per generation, and μ is the mutation rate per generation.
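A minimal sketch (our own, for a single biallelic locus in two demes; the function name and allele frequencies are illustrative) computes FST as (HT − HS)/HT from expected heterozygosities and applies the island-model estimate above:

def fst(p1: float, p2: float) -> float:
    # Mean expected heterozygosity within demes, and in the pooled population.
    hs = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2
    p = (p1 + p2) / 2
    ht = 2 * p * (1 - p)
    return (ht - hs) / ht

print(fst(0.5, 0.5))   # 0.0 -> panmixis, no differentiation
print(fst(1.0, 0.0))   # 1.0 -> complete differentiation

f = fst(0.7, 0.3)      # 0.16: substantial differentiation
print((1 / f - 1) / 4) # island-model estimate of M = m/mu, about 1.3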
The interpretation of FST can be difficult when the data analyzed are highly polymorphic. In this case, the probability of identity by descent is very low and FST can have an arbitrarily low upper bound, which might lead to misinterpretation of the data. Also, strictly speaking FST is not a distance in the mathematical sense, as it does not satisfy the triangle inequality.
For populations of plants which clearly belong to the same species, values of FST greater than 15% are considered "great" or "significant" differentiation, while values below 5% are considered "small" or "insignificant" differentiation.
For mammal populations, values between subspecies or closely related species are typically of the order of 5% to 20%. FST between the Eurasian and North American populations of the gray wolf were reported at 9.9%, those between the Red wolf and Gray wolf popu
|
https://en.wikipedia.org/wiki/Cuneiform%20cartilages
|
In the human larynx, the cuneiform cartilages (from Latin: cuneus 'wedge' + forma 'form'; also known as cartilages of Wrisberg) are two small, elongated pieces of yellow elastic cartilage, placed one on either side, in the aryepiglottic fold.
The cuneiforms are paired cartilages that sit on top of and move with the arytenoids. They are located above and in front of the corniculate cartilages, and the presence of these two pairs of cartilages result in small bulges on the surface of the mucous membrane.
Covered by the aryepiglottic folds, the cuneiforms form the lateral aspect of the laryngeal inlet, while the corniculates form the posterior aspect, and the epiglottis the anterior.
The function of the cuneiform cartilages is to support the vocal folds and the lateral aspects of the epiglottis. They also provide a degree of solidity to the folds in which they are embedded.
|
https://en.wikipedia.org/wiki/Manual%20scavenging
|
Manual scavenging is a term used mainly in India for "manually cleaning, carrying, disposing of, or otherwise handling, human excreta in an insanitary latrine or in an open drain or sewer or in a septic tank or a pit". Manual scavengers usually use hand tools such as buckets, brooms and shovels. The workers have to move the excreta, using brooms and tin plates, into baskets, which they carry to disposal locations sometimes several kilometers away. The practice of employing human labour for cleaning of sewers and septic tanks is also prevalent in Bangladesh and Pakistan. These sanitation workers, called "manual scavengers", rarely have any personal protective equipment. The work is regarded as a dehumanizing practice.
The occupation of sanitation work is intrinsically linked with caste in India. All kinds of cleaning are considered lowly and are assigned to people from the lowest rung of the social hierarchy. In the caste-based society, it is mainly the Dalits who work as sanitation workers: as manual scavengers, cleaners of drains, garbage collectors, and sweepers of roads. It was estimated in 2019 that between 40 and 60 percent of the six million households of Dalit sub-castes are engaged in sanitation work. The most common Dalit caste performing sanitation work is the Valmiki (also Balmiki) caste.
The construction of dry toilets and employment of manual scavengers to clean such dry toilets was prohibited in India in 1993. The law was extended and clarified to include ban on use of human labour for direct cleaning of sewers, ditches, pits and septic tanks in 2013. However, despite the laws, manual scavenging was reported in many states including Maharashtra, Gujarat, Madhya Pradesh, Uttar Pradesh, and Rajasthan in 2014. In 2021, the NHRC observed that eradication of manual scavenging as claimed by state and local governments is far from over. Government data shows that in the period 1993–2021, 971 people died due to cleaning of sewers and septic tanks.
The te
|
https://en.wikipedia.org/wiki/Lingual%20artery
|
The lingual artery arises from the external carotid artery between the superior thyroid artery and facial artery. It can be located easily in the tongue.
Structure
The lingual artery first branches off from the external carotid artery. It runs obliquely upward and medially to the greater horns of the hyoid bone.
It then curves downward and forward, forming a loop which is crossed by the hypoglossal nerve. It then passes beneath the digastric muscle and stylohyoid muscle running horizontally forward, beneath the hyoglossus. This takes it through the sublingual space. Finally, ascending almost perpendicularly to the tongue, it turns forward on its lower surface as far as the tip of the tongue, now called the deep lingual artery (profunda linguae).
Branches
The lingual artery gives 4 main branches: the deep lingual artery, the sublingual artery, the suprahyoid branch, and the dorsal lingual branch.
Deep lingual artery
The deep lingual artery (or ranine artery) is the terminal portion of the lingual artery after the sublingual artery is given off. It travels superiorly in a tortuous course along the under (ventral) surface of the tongue, below the longitudinalis inferior, and above the mucous membrane.
It lies on the lateral side of the genioglossus, the main large extrinsic tongue muscle, accompanied by the lingual nerve. The deep lingual artery passes inferior to the hyoglossus, while the lingual nerve and the hypoglossal nerve pass superior to it. At the tip of the tongue, it is said to anastomose with the artery of the opposite side, but this is denied by Hyrtl. In the mouth, these vessels are placed one on either side of the frenulum linguæ.
Sublingual artery
The sublingual artery arises at the anterior margin of the hyoglossus, and runs forward between the genioglossus and mylohyoid muscle to th
|
https://en.wikipedia.org/wiki/Iron%28II%29%20gluconate
|
Iron(II) gluconate, or ferrous gluconate, is a black compound often used as an iron supplement. It is the iron(II) salt of gluconic acid. It is marketed under brand names such as Fergon, Ferralet and Simron.
Uses
Medical
Ferrous gluconate is effectively used in the treatment of hypochromic anemia. The use of this compound, compared with other iron preparations, results in satisfactory reticulocyte responses, a high percentage utilization of iron, and a daily increase in hemoglobin such that a normal level occurs in a reasonably short time.
Food additive
Ferrous gluconate is also used as a food additive when processing black olives. It is represented by the food labeling E number E579 in Europe. It imparts a uniform jet black color to the olives.
Toxicity
Ferrous gluconate may be toxic in case of overdose. Children may show signs of toxicity with ingestions of 10–20 mg/kg of elemental iron. Serious toxicity may result from ingestions of more than 60 mg/kg. Iron exerts both local and systemic effects: it is corrosive to the gastrointestinal mucosa, it can have a negative impact on the heart and blood (dehydration, low blood pressure, fast and weak pulse, shock), lungs, liver, gastrointestinal system (diarrhea, nausea, vomiting blood), nervous system (chills, dizziness, coma, convulsions, headache), and skin (flushing, loss of color, bluish-colored lips and fingernails). The symptoms may disappear in a few hours, but then emerge again after 1 or more days.
See also
Acceptable daily intake
Iron poisoning
|
https://en.wikipedia.org/wiki/Critical%20heat%20flux
|
In the study of heat transfer, critical heat flux (CHF) is the heat flux at which boiling ceases to be an effective form of transferring heat from a solid surface to a liquid.
Description
Boiling systems are those in which liquid coolant absorbs energy from a heated solid surface and undergoes a change in phase. In flow boiling systems, the saturated fluid progresses through a series of flow regimes as vapor quality is increased. In systems that utilize boiling, the heat transfer rate is significantly higher than if the fluid were a single phase (i.e. all liquid or all vapor). The more efficient heat transfer from the heated surface is due to heat of vaporization and sensible heat. Therefore, boiling heat transfer has played an important role in industrial heat transfer processes such as macroscopic heat transfer exchangers in nuclear and fossil power plants, and in microscopic heat transfer devices such as heat pipes and microchannels for cooling electronic chips.
The use of boiling as a means of heat removal is limited by a condition called critical heat flux (CHF). The most serious problem that can occur around CHF is that the temperature of the heated surface may increase dramatically due to significant reduction in heat transfer. In industrial applications such as electronics cooling or instrumentation in space, the sudden increase in temperature may possibly compromise the integrity of the device.
Two-phase heat transfer
The convective heat transfer between a uniformly heated wall and the working fluid is described by Newton's law of cooling:

q″ = h (Tw − Tf)

where q″ represents the heat flux, h represents the proportionality constant called the heat transfer coefficient, Tw represents the wall temperature and Tf represents the fluid temperature. If h decreases significantly due to the occurrence of the CHF condition, Tw will increase for fixed q″ and Tf, while q″ will decrease for fixed Tw and Tf.
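A small sketch (our own; the coefficient values are illustrative, not measured data) shows why the wall temperature jumps when the heat transfer coefficient collapses at CHF:

def wall_superheat(q: float, h: float) -> float:
    # Newton's law of cooling rearranged: Tw - Tf = q'' / h.
    return q / h

q = 1.0e6                      # imposed heat flux, W/m^2
for h in (50_000.0, 1_000.0):  # nucleate boiling vs. post-CHF film boiling
    print(f"h = {h:8.0f} W/(m^2 K) -> wall superheat = {wall_superheat(q, h):6.0f} K")
    # prints 20 K before CHF, 1000 K after: the same flux demands a far hotter wall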
Modes of CHF
The understanding of CHF phenomenon and an accurate prediction of the CHF condition
|
https://en.wikipedia.org/wiki/Entomopathogenic%20nematode
|
Entomopathogenic nematodes (EPN) are a group of nematodes (thread worms) that cause death to insects. The term entomopathogenic has a Greek origin, with entomon meaning insect and pathogenic meaning causing disease. They are animals that occupy a biological-control middle ground between microbial pathogens and predators/parasitoids. Although many other parasitic thread worms cause diseases in living organisms (sterilizing or otherwise debilitating their host), entomopathogenic nematodes are specific in only infecting insects. Entomopathogenic nematodes (EPNs) live parasitically inside the infected insect host, and so they are termed endoparasitic. They infect many different types of insects living in the soil, such as the larval forms of moths, butterflies, flies and beetles, as well as adult forms of beetles, grasshoppers and crickets. EPNs have been found all over the world in a range of ecologically diverse habitats. They are highly diverse, complex and specialized. The most commonly studied entomopathogenic nematodes are those that can be used in the biological control of harmful insects, the members of Steinernematidae and Heterorhabditidae. They are the only insect-parasitic nematodes possessing an optimal balance of biological control attributes.
Classification
Life cycle
Because of their economic importance, the life cycles of the genera belonging to families Heterorhabditidae and Steinernematidae are well studied. Although not closely related, phylogenetically, both share similar life histories (Poinar 1993). The cycle begins with an infective juvenile, whose only function is to seek out and infect new hosts. When a host has been located, the nematodes penetrate into the insect body cavity, usually via natural body openings (mouth, anus, spiracles) or areas of thin cuticle. (Shapiro-Ilan, David I., and Randy Gaugler. "Nematodes.") After entering an insect, infective juveniles release an associated mutualistic bacterium from their gut which multiplies rapi
|
https://en.wikipedia.org/wiki/List%20of%20captive-bred%20meat%20animals
|
The following is a list of animals that are or may have been raised in captivity for consumption by people. For other animals commonly eaten by people, see Game (food).
See also
Game (food)
List of meat dishes
Marine mammals as food
|