https://en.wikipedia.org/wiki/Climatic%20adaptation
|
Climatic adaptation refers to adaptations of an organism that are triggered by the patterns of variation of abiotic factors that determine a specific climate. Annual means, seasonal variation, and daily patterns of abiotic factors are properties of a climate to which organisms can be adapted. Changes in behavior, physical structure, internal mechanisms, and metabolism are forms of adaptation caused by climate properties. Organisms of the same species that occur in different climates can be compared to determine which adaptations are due to climate and which are influenced mainly by other factors. Climatic adaptation is limited to adaptations that have already been established, characterizing species that live within the specific climate. It differs from climate change adaptation, which refers to the ability to adapt to gradual changes of a climate. Once a climate has changed, the climate change adaptation that led to the survival of the specific organisms as a species can be seen as a climatic adaptation. Climatic adaptation is constrained by the genetic variability of the species in question.
Climate patterns
The patterns of variation of abiotic factors determine a climate and thus climatic adaptation. There are many different climates around the world, each with its unique patterns. Because of this, the manner of climatic adaptation shows large differences between the climates. A subarctic climate, for instance, has daylight time and temperature fluctuations as its most important factors, while a rainforest climate is characterized by a stable, high precipitation rate and a high average temperature that does not fluctuate much. A humid continental climate is marked by seasonal temperature variances, which commonly lead to seasonal climatic adaptations. Because the variance of these abiotic factors differs depending on the type of climate, differences in the manner of climatic adaptation are expected.
Research
Research on climatic adaptat
|
https://en.wikipedia.org/wiki/How%20to%20Solve%20it%20by%20Computer
|
How to Solve It by Computer is a computer science book by R. G. Dromey, first published by Prentice-Hall in 1982.
It is occasionally used as a textbook, especially in India.
It is an introduction to the whys of algorithms and data structures.
Features of the book:
The design factors associated with problems
The creative process behind coming up with innovative solutions for algorithms and data structures
The line of reasoning behind the constraints, factors and the design choices made.
The fundamental algorithms presented in the book are given mostly in pseudocode and/or Pascal notation.
See also
How to Solve It, by George Pólya, the author's mentor and inspiration for writing the book.
|
https://en.wikipedia.org/wiki/Ogden%27s%20lemma
|
In the theory of formal languages, Ogden's lemma (named after William F. Ogden) is a generalization of the pumping lemma for context-free languages.
Statement
We will use underlines to indicate "marked" positions.
Special cases
Ogden's lemma is often stated in the following form, which can be obtained by "forgetting about" the grammar, and concentrating on the language itself:
If a language $L$ is context-free, then there exists some number $p \geq 1$ (where $p$ may or may not be a pumping length) such that for any string $s$ of length at least $p$ in $L$ and every way of "marking" $p$ or more of the positions in $s$, $s$ can be written as
$s = uvwxy$
with strings $u, v, w, x, y$, such that
$vx$ has at least one marked position,
$vwx$ has at most $p$ marked positions, and
$u v^n w x^n y \in L$ for all $n \geq 0$.
In the special case where every position is marked, Ogden's lemma is equivalent to the pumping lemma for context-free languages. Ogden's lemma can be used to show that certain languages are not context-free in cases where the pumping lemma is not sufficient. An example is the language $\{a^i b^j c^k d^l : i = 0 \text{ or } j = k = l\}$.
Example applications
Non-context-freeness
The special case of Ogden's lemma is often sufficient to prove some languages are not context-free. For example, $\{a^n b^n c^n : n \geq 0\}$ is a standard example of a non-context-free language (p. 128).
Similarly, one can prove that the "copy twice" language $\{www : w \in \{0, 1\}^*\}$ is not context-free, by applying Ogden's lemma to a suitably marked string.
The example given in the previous section is likewise not context-free, again by applying Ogden's lemma to a suitably marked string.
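A sketch of the standard argument for $\{a^n b^n c^n : n \geq 0\}$, using the special case in which every position of the chosen string is marked (so the conditions reduce to those of the pumping lemma):
\[
s = a^p b^p c^p = uvwxy, \qquad |vwx| \le p, \qquad vx \neq \varepsilon.
\]
Since $|vwx| \le p$, the substring $vwx$ cannot contain both an $a$ and a $c$. Pumping to $uv^2wx^2y$ therefore increases the count of at most two of the three letters, so the three counts can no longer all be equal, and $uv^2wx^2y \notin \{a^n b^n c^n\}$, a contradiction.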
Inherent ambiguity
Ogden's lemma can be used to prove the inherent ambiguity of some languages, which is implied by the title of Ogden's paper.
Example: Let $L = \{a^n b^n c^m : n, m \geq 1\} \cup \{a^n b^m c^m : n, m \geq 1\}$. The language $L$ is inherently ambiguous. (Example from page 3 of Ogden's paper.)
Similarly, there are languages that are inherently ambiguous to an unbounded degree: for any CFG of such a language, letting $p$ be the constant for Ogden's lemma, one finds strings with arbitrarily many different parses. Thus the language has an unbounded degree of inherent ambiguity.
Undecidability
The proof can be extended to show that deciding whether a CFG is inherently ambiguous is undecida
|
https://en.wikipedia.org/wiki/GPS%20tracking%20unit
|
A GPS tracking unit, geotracking unit, satellite tracking unit, or simply tracker is a navigation device, normally carried by a vehicle, asset, person or animal, that uses satellite navigation to determine its movement and its WGS84/UTM geographic position (geotracking). Satellite tracking devices may send special satellite signals that are processed by a receiver.
Locations are stored in the tracking unit or transmitted to an Internet-connected device using the cellular network (GSM/GPRS/CDMA/LTE or SMS), radio, WiFi, or a satellite modem embedded in the unit; satellite units work worldwide.
GPS antenna size limits tracker size; trackers are often smaller than a half-dollar coin (30.61 mm in diameter). In 2020, tracking was a $2 billion business, in addition to military applications; in the Gulf War, 10% or more of targets used trackers. Virtually every cellphone tracks its movements.
Tracks can be map displayed in real time, using GPS tracking software and devices with GPS capability.
Architecture
A GPS "track me" essentially contains a GPS module that receives the GPS signal and calculates the coordinates. For data loggers, it contains large memory to store the coordinates. Data pushers additionally contain a GSM/GPRS/CDMA/LTE modem to transmit this information to a central computer either via SMS or GPRS in form of IP packets. Satellite-based GPS tracking units will operate anywhere on the globe using satellite technology such as GlobalStar or Iridium. They do not require a cellular connection.
Types
There are three types of GPS trackers, though most GPS-equipped phones can work in any of these modes depending on the mobile applications installed:
Data loggers
GPS loggers log the position of the device at regular intervals in their internal memory. GPS loggers may have either a memory card slot or internal flash memory, and a USB port. Some act as a USB flash drive, which allows downloading the track log data for further computer analysis. The track list or point-of-interest list may be in GPX
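A minimal sketch of such a logger in Python, writing its track in (simplified) GPX form; get_fix is a hypothetical stand-in for reading a position from the GPS module.

```python
import time

def get_fix():
    # placeholder: a real logger would read this from the GPS receiver
    return 51.4816, -3.1791

def log_gpx(path: str, n_points: int, interval_s: float = 1.0) -> None:
    # record one fix per interval and store the track as a GPX file
    with open(path, "w") as f:
        f.write('<?xml version="1.0"?>\n<gpx version="1.1" creator="sketch">\n')
        f.write("<trk><trkseg>\n")
        for _ in range(n_points):
            lat, lon = get_fix()
            f.write(f'<trkpt lat="{lat}" lon="{lon}"></trkpt>\n')
            time.sleep(interval_s)
        f.write("</trkseg></trk>\n</gpx>\n")

log_gpx("track.gpx", n_points=3)
```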
|
https://en.wikipedia.org/wiki/Bounce%20Address%20Tag%20Validation
|
In computing, Bounce Address Tag Validation (BATV) is a method, defined in an Internet Draft, for determining whether the bounce address specified in an E-mail message is valid. It is designed to reject backscatter, that is, bounce messages to forged return addresses.
Overview
The basic idea is to send all e-mail with a return address that includes a timestamp and a cryptographic token that cannot be forged. Any e-mail that is returned as a bounce without a valid signature can then be rejected. E-mail that is being bounced back should have an empty (null) return address so that bounces are never created for a bounce, thereby preventing messages from bouncing back and forth forever.
BATV replaces an envelope sender like mailbox@example.com with prvs=tag-value=mailbox@example.com, where prvs, called "Simple Private Signature", is just one of the possible tagging schemes; in fact, it is the only one fully specified in the draft. The BATV draft gives a framework that other possible techniques can fit into. Other types of implementations, such as using public key signatures that can be verified by third parties, are mentioned but left undefined. The overall framework is flexible enough that similar systems such as the Sender Rewriting Scheme can fit into it.
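As a rough illustration of the idea, here is a minimal prvs-style tagger and validator in Python, assuming HMAC-SHA1, a three-digit rolling day counter, and a six-hex-digit signature; the exact field layout and key handling in the BATV draft differ in detail.

```python
import hashlib
import hmac
import time

def batv_sign(local: str, domain: str, key: bytes) -> str:
    # tag the envelope sender with an expiry day counter and a truncated HMAC
    day = (int(time.time()) // 86400) % 1000
    mac = hmac.new(key, f"{day:03d}{local}@{domain}".encode(), hashlib.sha1)
    return f"prvs={day:03d}{mac.hexdigest()[:6]}={local}@{domain}"

def batv_verify(address: str, key: bytes, max_age_days: int = 7) -> bool:
    # accept a bounce only if its recipient carries a fresh, valid tag
    try:
        scheme, tag, orig = address.split("=", 2)
        if scheme != "prvs":
            return False
        day, sig = int(tag[:3]), tag[3:]
    except ValueError:
        return False
    today = (int(time.time()) // 86400) % 1000
    if (today - day) % 1000 > max_age_days:
        return False
    mac = hmac.new(key, f"{day:03d}{orig}".encode(), hashlib.sha1)
    return hmac.compare_digest(sig, mac.hexdigest()[:6])

signed = batv_sign("mailbox", "example.com", b"secret-key")
print(signed, batv_verify(signed, b"secret-key"))
```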
History
Sami Farin proposed an Anti-Bogus Bounce System in 2003 in news.admin.net-abuse.email, which used the same basic idea of putting a hard to forge hash in a message's bounce address.
In late 2004, Goodman et al. proposed a much more complex "Signed Envelope Sender" that included a hash of the message body and was intended to address a wide variety of forgery threats, including bounces from forged mail. Several months later, Levine and Crocker proposed BATV under its current name and close to its current form.
Problems
The draft anticipates some problems running BATV.
Some mailing list managers (e.g. ezmlm) still key on the bounce address, and will not recognize it after BATV mangling.
|
https://en.wikipedia.org/wiki/Geiriadur%20Prifysgol%20Cymru
|
Geiriadur Prifysgol Cymru (GPC) (The University of Wales Dictionary) is the only standard historical dictionary of the Welsh language, aspiring to be "comparable in method and scope to the Oxford English Dictionary". Vocabulary is defined in Welsh, and English equivalents are given. Detailed attention is given to variant forms, collocations, and etymology.
The first edition was published in four volumes between 1967 and 2002, containing 7.3 million words of text in 3,949 pages, documenting 106,000 headwords. There are almost 350,000 dated citations dating from the year 631 up to 2000, with 323,000 Welsh definitions and 290,000 English equivalents, of which 85,000 have included etymologies.
History
In 1921, a small team at the National Library of Wales, Aberystwyth, organised by the Rev. J. Bodvan Anwyl, arranged for volunteer readers to record words. The task of editing the dictionary was undertaken by R. J. Thomas in the 1948/49 academic year.
The first edition of Volume I appeared in 1967, followed by Volume II in 1987, Volume III in 1998, and Volume IV, edited by Gareth A. Bevan and P. J. Donovan, in December 2002. Work then immediately began on a second edition.
Following the retirement of the previous editors Gareth A. Bevan and Patrick J. Donovan, Andrew Hawke was appointed as managing editor in January 2008.
The second edition is based not only on the Dictionary's own collection of citation slips, but also on a wide range of electronic resources such as the Welsh Prose 1300-1425 website (Cardiff University), JISC Historic Books including EEBO and EECO (The British Library), Welsh Newspapers Online and Welsh Journals Online (National Library of Wales), and the National Terminology Portal (Bangor University).
In 2011, collaborative work began to convert the Dictionary data so that it could be used in the XML-based iLEX dictionary writing system, as well as to produce an online dictionary. After three years of work, on 26 June 2014, GPC Online was launche
|
https://en.wikipedia.org/wiki/Standard%20components%20%28food%20processing%29
|
Standard components is a food technology term: when manufacturers buy in a standard component, they use a pre-made product in the production of their food.
Standard components help products to be consistent, and they are quick and easy to use in the batch production of food products.
Some examples are pre-made stock cubes, marzipan, icing, ready made pastry.
Usage
Manufacturers use standard components because they save time, sometimes cost considerably less, and help with consistency in products.
If a manufacturer is to use a standard component from another supplier it is essential that a precise and accurate specification is produced by the manufacturer so that the component meets the standards set by the manufacturer.
Advantages
Saves preparation time.
Fewer steps in the production process
Less effort and skill required by staff
Less machinery and equipment needed
Good quality
Saves money from all aspects
Can be bought in bulk
High-quality consistency
Food preparation is hygienic
Disadvantages
Have to rely on other manufacturers to supply products
Fresh ingredients may taste better
May require special storage conditions
Less reliable than doing it yourself
May cost more than making the component in-house
Can't control the nutritional value of the product
There is a larger risk of cross contamination.
GCSE food technology
|
https://en.wikipedia.org/wiki/Vestigial%20twin
|
A vestigial twin is a form of parasitic twinning, where the parasitic "twin" is so malformed and incomplete that it typically consists entirely of extra limbs or organs. It can also be a complete living being trapped inside the host person; however, the parasitic twin is anencephalic and lacks consciousness.
This phenomenon occurs when a fertilized ovum or partially formed embryo splits incompletely. The result can be anything from two whole people joined by a bit of skin (conjoined twins), to one person with extra body parts belonging to the vestigial twin.
Most vestigial limbs are non-functional, and although they may have bones, muscles and nerve endings, they are not under the control of the host. The possession of six or more digits on the hands and feet (polydactyly) usually has a genetic or chromosomal cause, and is not a case of vestigial twinning.
|
https://en.wikipedia.org/wiki/Emerging%20infectious%20disease
|
An emerging infectious disease (EID) is an infectious disease whose incidence has increased recently (in the past 20 years), and could increase in the near future. The minority that are capable of developing efficient transmission between humans can become major public and global concerns as potential causes of epidemics or pandemics. Their many impacts can be economic and societal, as well as clinical. EIDs have been increasing steadily since at least 1940.
For every decade since 1940, there has been a consistent increase in the number of EID events from wildlife-related zoonosis. Human activity is the primary driver of this increase, with loss of biodiversity a leading mechanism.
Emerging infections account for at least 12% of all human pathogens. EIDs can be caused by newly identified microbes, including novel species or strains of virus (e.g. novel coronaviruses, ebolaviruses, HIV). Some EIDs evolve from a known pathogen, as occurs with new strains of influenza. EIDs may also result from spread of an existing disease to a new population in a different geographic region, as occurs with West Nile fever outbreaks. Some known diseases can also emerge in areas undergoing ecologic transformation (as in the case of Lyme disease). Others can experience a resurgence as a re-emerging infectious disease, like tuberculosis (following drug resistance) or measles. Nosocomial (hospital-acquired) infections, such as methicillin-resistant Staphylococcus aureus are emerging in hospitals, and are extremely problematic in that they are resistant to many antibiotics. Of growing concern are adverse synergistic interactions between emerging diseases and other infectious and non-infectious conditions leading to the development of novel syndemics.
Many EIDs are zoonotic, deriving from pathogens present in animals, with only occasional cross-species transmission into human populations. For instance, most emergent viruses are zoonotic (whereas other novel viruses may have been circula
|
https://en.wikipedia.org/wiki/Pelargonic%20acid
|
Pelargonic acid, also called nonanoic acid, is an organic compound with structural formula CH3(CH2)7COOH. It is a nine-carbon fatty acid. Nonanoic acid is a colorless oily liquid with an unpleasant, rancid odor. It is nearly insoluble in water, but very soluble in organic solvents. The esters and salts of pelargonic acid are called pelargonates or nonanoates.
The acid is named after the pelargonium plant, since oil from its leaves contains esters of the acid.
Preparation
Together with azelaic acid, it is produced industrially by ozonolysis of oleic acid.
Alternatively, pelargonic acid can be produced in a two-step process beginning with coupled dimerization and hydroesterification of 1,3-butadiene. This step produces a doubly unsaturated C9-ester, which can be hydrogenated to give esters of pelargonic acid.
2 CH2=CH−CH=CH2 (1,3-butadiene) + CO + CH3OH → CH2=CH(CH2)3CH=CHCH2CO2CH3
CH2=CH(CH2)3CH=CHCH2CO2CH3 + 2 H2 → CH3(CH2)7CO2CH3 (pelargonic acid ester)
A laboratory preparation involves permanganate oxidation of 1-decene.
Occurrence and uses
Pelargonic acid occurs naturally as esters in the oil of pelargonium.
Synthetic esters of pelargonic acid, such as methyl pelargonate, are used as flavorings. Pelargonic acid is also used in the preparation of plasticizers and lacquers. The derivative 4-nonanoylmorpholine is an ingredient in some pepper sprays. The ammonium salt of pelargonic acid, ammonium pelargonate, is a herbicide. It is commonly used in conjunction with glyphosate, a non-selective herbicide, for a quick burn-down effect in the control of weeds in turfgrass.
The methyl form and ethylene glycol pelargonate act as nematicides against Meloidogyne javanica on Solanum lycopersicum, and the methyl against Heterodera glycines and M. incognita on Glycine max.
Esters of pelargonic acid are precursors to lubricants.
Pharmacological effects
Pelargonic acid may be more potent than valproic acid in treating seizures. Moreover, in contrast to val
|
https://en.wikipedia.org/wiki/Wiener%27s%20Tauberian%20theorem
|
In mathematical analysis, Wiener's tauberian theorem is any of several related results proved by Norbert Wiener in 1932. They provide a necessary and sufficient condition under which any function in $L^1$ or $L^2$ can be approximated by linear combinations of translations of a given function.
Informally, if the Fourier transform of a function $f$ vanishes on a certain set $Z$, the Fourier transform of any linear combination of translations of $f$ also vanishes on $Z$. Therefore, the linear combinations of translations of $f$ cannot approximate a function whose Fourier transform does not vanish on $Z$.
Wiener's theorems make this precise, stating that linear combinations of translations of $f$ are dense if and only if the zero set of the Fourier transform of $f$ is empty (in the case of $L^1$) or of Lebesgue measure zero (in the case of $L^2$).
Gelfand reformulated Wiener's theorem in terms of commutative C*-algebras, where it states that the spectrum of the group ring $L^1(\mathbb{R})$ of the group of real numbers $\mathbb{R}$ is the dual group of $\mathbb{R}$. A similar result is true when $\mathbb{R}$ is replaced by any locally compact abelian group.
The condition in $L^1$
Let $f \in L^1(\mathbb{R})$ be an integrable function. The span of its translations $f_a(x) = f(x + a)$ is dense in $L^1(\mathbb{R})$ if and only if the Fourier transform of $f$ has no real zeros.
Tauberian reformulation
The following statement is equivalent to the previous result, and explains why Wiener's result is a Tauberian theorem:
Suppose the Fourier transform of $f \in L^1(\mathbb{R})$ has no real zeros, and suppose the convolution $f * h$ tends to zero at infinity for some $h \in L^\infty(\mathbb{R})$. Then the convolution $g * h$ tends to zero at infinity for any $g \in L^1(\mathbb{R})$.
More generally, if $\lim_{x\to\infty} (f * h)(x) = A \int f(y)\,dy$ for some $f \in L^1(\mathbb{R})$ the Fourier transform of which has no real zeros, then also $\lim_{x\to\infty} (g * h)(x) = A \int g(y)\,dy$ for any $g \in L^1(\mathbb{R})$.
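To see why the zero-free condition cannot be dropped (a standard observation, not specific to this article), take the convention $\hat f(\xi) = \int f(y) e^{-i\xi y}\,dy$. If $\hat f(\xi_0) = 0$ for some real $\xi_0$, choose the bounded function $h(x) = e^{i\xi_0 x}$; then
\[
(f * h)(x) = \int f(y)\, e^{i\xi_0 (x - y)}\, dy = e^{i\xi_0 x}\,\hat f(\xi_0) = 0,
\]
so $f * h$ vanishes identically, while $(g * h)(x) = e^{i\xi_0 x}\,\hat g(\xi_0)$ need not tend to zero at infinity.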
Discrete version
Wiener's theorem has a counterpart in $\ell^1(\mathbb{Z})$: the span of the translations of $f \in \ell^1(\mathbb{Z})$ is dense if and only if the Fourier series $\varphi(\theta) = \sum_{n \in \mathbb{Z}} f(n) e^{in\theta}$ has no real zeros. The following statements are an equivalent version of this result:
Suppose the Fourier series of $f \in \ell^1(\mathbb{Z})$ has no real zeros, and for some bounded sequence $h$ the convolution $f * h$ tends to zero a
|
https://en.wikipedia.org/wiki/Semiconductor%20fabrication%20plant
|
In the microelectronics industry, a semiconductor fabrication plant (commonly called a fab; sometimes foundry) is a factory for semiconductor device fabrication.
Fabs require many expensive devices to function. Estimates put the cost of building a new fab at over one billion U.S. dollars, with values as high as $3–4 billion not uncommon. TSMC invested $9.3 billion in its Fab 15 300 mm wafer manufacturing facility in Taiwan. The same company's estimates suggest that a future fab might cost $20 billion. A foundry model emerged in the 1990s: foundries that produced their own designs were known as integrated device manufacturers (IDMs). Companies that farmed out manufacturing of their designs to foundries were termed fabless semiconductor companies. Those foundries, which did not create their own designs, were called pure-play semiconductor foundries.
The central part of a fab is the clean room, an area where the environment is controlled to eliminate all dust, since even a single speck can ruin a microcircuit, which has nanoscale features much smaller than dust particles. The clean room must also be damped against vibration to enable nanometer-scale alignment of machines and must be kept within narrow bands of temperature and humidity. Vibration control may be achieved by using deep piles in the cleanroom's foundation that anchor the cleanroom to the bedrock, careful selection of the construction site, and/or the use of vibration dampers. Controlling temperature and humidity is critical for minimizing static electricity. Corona discharge sources can also be used to reduce static electricity. Often, a fab will be constructed in the following manner, from top to bottom: the roof, which may contain air-handling equipment that draws, purifies and cools outside air; an air plenum for distributing the air to several floor-mounted fan filter units, which are also part of the cleanroom's ceiling; the cleanroom itself, which may or may not have more than one story; a re
|
https://en.wikipedia.org/wiki/Cell%20signaling
|
In biology, cell signaling (cell signalling in British English) or cell communication is the ability of a cell to receive, process, and transmit signals with its environment and with itself. Cell signaling is a fundamental property of all cellular life in prokaryotes and eukaryotes. Signals that originate from outside a cell (or extracellular signals) can be physical agents like mechanical pressure, voltage, temperature, light, or chemical signals (e.g., small molecules, peptides, or gas). Cell signaling can occur over short or long distances, and as a result can be classified as autocrine, juxtacrine, intracrine, paracrine, or endocrine. Signaling molecules can be synthesized from various biosynthetic pathways and released through passive or active transports, or even from cell damage.
Receptors play a key role in cell signaling as they are able to detect chemical signals or physical stimuli. Receptors are generally proteins located on the cell surface or within the interior of the cell such as the cytoplasm, organelles, and nucleus. Cell surface receptors usually bind with extracellular signals (or ligands), which causes a conformational change in the receptor that leads it to initiate enzymic activity, or to open or close ion channel activity. Some receptors do not contain enzymatic or channel-like domains but are instead linked to enzymes or transporters. Other intracellular receptors like nuclear receptors have a different mechanism such as changing their DNA binding properties and cellular localization to the nucleus.
Signal transduction begins with the transformation (or transduction) of a signal into a chemical one, which can directly activate an ion channel (ligand-gated ion channel) or initiate a second messenger system cascade that propagates the signal through the cell. Second messenger systems can amplify a signal, in which activation of a few receptors results in multiple secondary messengers being activated, thereby amplifying the initial sig
|
https://en.wikipedia.org/wiki/Logical%20matrix
|
A logical matrix, binary matrix, relation matrix, Boolean matrix, or (0, 1)-matrix is a matrix with entries from the Boolean domain $B = \{0, 1\}$. Such a matrix can be used to represent a binary relation between a pair of finite sets. It is an important tool in combinatorial mathematics and theoretical computer science.
Matrix representation of a relation
If $R$ is a binary relation between the finite indexed sets $X$ and $Y$ (so $R \subseteq X \times Y$), then $R$ can be represented by the logical matrix $M$ whose row and column indices index the elements of $X$ and $Y$, respectively, such that the entries of $M$ are defined by
$M_{i,j} = \begin{cases} 1 & (x_i, y_j) \in R \\ 0 & (x_i, y_j) \notin R \end{cases}$
In order to designate the row and column numbers of the matrix, the sets X and Y are indexed with positive integers: i ranges from 1 to the cardinality (size) of X, and j ranges from 1 to the cardinality of Y. See the article on indexed sets for more detail.
Example
The binary relation $R$ on the set $\{1, 2, 3, 4\}$ is defined so that $aRb$ holds if and only if $a$ divides $b$ evenly, with no remainder. For example, $2R4$ holds because 2 divides 4 without leaving a remainder, but $3R4$ does not hold because when 3 divides 4, there is a remainder of 1. The following set is the set of pairs for which the relation $R$ holds.
{(1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 4), (3, 3), (4, 4)}.
The corresponding representation as a logical matrix is
$\begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$
which includes a diagonal of ones, since each number divides itself.
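The same matrix can be computed mechanically; a minimal Python sketch for the divisibility relation above:

```python
# build the logical matrix of the divisibility relation on {1, 2, 3, 4}
X = [1, 2, 3, 4]
M = [[1 if b % a == 0 else 0 for b in X] for a in X]  # row a, column b
for row in M:
    print(row)
# [1, 1, 1, 1]
# [0, 1, 0, 1]
# [0, 0, 1, 0]
# [0, 0, 0, 1]
```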
Other examples
A permutation matrix is a (0, 1)-matrix, all of whose columns and rows each have exactly one nonzero element.
A Costas array is a special case of a permutation matrix.
An incidence matrix in combinatorics and finite geometry has ones to indicate incidence between points (or vertices) and lines of a geometry, blocks of a block design, or edges of a graph.
A design matrix in analysis of variance is a (0, 1)-matrix with constant row sums.
A logical matrix may represent an adjacency matrix in graph theory: non-symmetric matrices correspond to directed graphs, symmetric matrices to ordinary grap
|
https://en.wikipedia.org/wiki/Fan-in
|
Fan-in is the number of inputs a logic gate can handle. For instance, the fan-in for the AND gate shown in the figure is 3. Physical logic gates with a large fan-in tend to be slower than those with a small fan-in, because the complexity of the input circuitry increases the input capacitance of the device. Using logic gates with higher fan-in reduces the depth of a logic circuit; conversely, if the target logic family only provides gates with small fan-in, any large fan-in gate must be realized by chaining smaller fan-in gates together, which increases the depth of the circuit.
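A small sketch of the depth trade-off: an n-input gate built from fan-in-k gates needs a balanced tree of roughly log base k of n levels, so a larger available fan-in means a shallower circuit.

```python
def tree_depth(n_inputs: int, fan_in: int) -> int:
    # levels of a balanced tree of fan-in-k gates realizing one n-input gate
    depth, width = 0, n_inputs
    while width > 1:
        width = -(-width // fan_in)  # ceiling division: gates needed at this level
        depth += 1
    return depth

print(tree_depth(64, 2))  # 6 levels of 2-input gates
print(tree_depth(64, 4))  # 3 levels of 4-input gates
```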
Fan-in tree of a node refers to a collection of signals that contribute to the input signal of that node.
In quantum logic gates the fan-in always has to be equal to the number of outputs (the fan-out). Gates for which the numbers of inputs and outputs differ would not be reversible (unitary) and are therefore not allowed.
See also
Fan-out, a related concept, which is the number of inputs that a given logic output drives.
|
https://en.wikipedia.org/wiki/Lippmann%E2%80%93Schwinger%20equation
|
The Lippmann–Schwinger equation (named after Bernard Lippmann and Julian Schwinger) is one of the most used equations to describe particle collisions – or, more precisely, scattering – in quantum mechanics. It may be used in scattering of molecules, atoms, neutrons, photons or any other particles and is important mainly in atomic, molecular, and optical physics, nuclear physics and particle physics, but also for seismic scattering problems in geophysics. It relates the scattered wave function with the interaction that produces the scattering (the scattering potential) and therefore allows calculation of the relevant experimental parameters (scattering amplitude and cross sections).
The most fundamental equation to describe any quantum phenomenon, including scattering, is the Schrödinger equation. In physical problems, this differential equation must be solved with the input of an additional set of initial and/or boundary conditions for the specific physical system studied. The Lippmann–Schwinger equation is equivalent to the Schrödinger equation plus the typical boundary conditions for scattering problems. In order to embed the boundary conditions, the Lippmann–Schwinger equation must be written as an integral equation. For scattering problems, the Lippmann–Schwinger equation is often more convenient than the original Schrödinger equation.
The Lippmann–Schwinger equation's general form is (in reality, two equations are shown below, one for the $+$ sign and the other for the $-$ sign):
$|\psi^{(\pm)}\rangle = |\phi\rangle + \dfrac{1}{E - H_0 \pm i\epsilon}\, V\, |\psi^{(\pm)}\rangle.$
The potential energy $V$ describes the interaction between the two colliding systems. The Hamiltonian $H_0$ describes the situation in which the two systems are infinitely far apart and do not interact. Its eigenfunctions are $|\phi\rangle$ and its eigenvalues are the energies $E$. Finally, $i\epsilon$ is a mathematical technicality necessary for the calculation of the integrals needed to solve the equation. It is a consequence of causality, ensuring that scattered waves consist only of outgoing waves. This is made rigorous by
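A standard textbook rearrangement (a sketch, not this article's full derivation) shows the equivalence with the Schrödinger equation. Writing the full Hamiltonian as $H = H_0 + V$, the eigenvalue problem $(H_0 + V)|\psi\rangle = E|\psi\rangle$ becomes $(E - H_0)|\psi\rangle = V|\psi\rangle$, and formally inverting $E - H_0$ gives
\[
|\psi^{(\pm)}\rangle = |\phi\rangle + \frac{1}{E - H_0 \pm i\epsilon}\, V\, |\psi^{(\pm)}\rangle,
\]
where the free solution $|\phi\rangle$ may be added because $(E - H_0)|\phi\rangle = 0$, and the $\pm i\epsilon$ makes the otherwise singular inverse well defined.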
|
https://en.wikipedia.org/wiki/Fascia%20lata
|
The fascia lata is the deep fascia of the thigh. It encloses the thigh muscles and forms the outer limit of the fascial compartments of thigh, which are internally separated by the medial intermuscular septum and the lateral intermuscular septum. The fascia lata is thickened at its lateral side where it forms the iliotibial tract, a structure that runs to the tibia and serves as a site of muscle attachment.
Structure
The fascia lata is an investment for the whole of the thigh, but varies in thickness in different parts. It is thicker in the upper and lateral part of the thigh, where it receives a fibrous expansion from the gluteus maximus, and where the tensor fasciae latae is inserted between its layers; it is very thin behind and at the upper and medial part, where it covers the adductor muscles, and again becomes stronger around the knee, receiving fibrous expansions from the tendon of the biceps femoris laterally, from the sartorius medially, and from the quadriceps femoris in front.
Function
The fascia lata surrounds the tensor fasciae latae muscle. It is a fibrous sheath that encircles the thigh subcutaneously. This encircling of the muscle allows the muscles to be bound together tightly.
Above and behind
The fascia lata is attached, above and behind (i.e. proximal and posterior), to the back of the sacrum and coccyx; laterally, to the iliac crest; in front, to the inguinal ligament, and to the superior ramus of the pubis; and medially, to the inferior ramus of the pubis, to the inferior ramus and tuberosity of the ischium, and to the lower border of the sacrotuberous ligament.
From its attachment to the iliac crest it passes down over the gluteus medius to the upper border of the gluteus maximus, where it splits into two layers, one passing superficial to and the other beneath this muscle; at the lower border of the muscle the two layers reunite.
Laterally
Laterally, the fascia lata receives the greater part of the tendon of insertion of the gluteus maxi
|
https://en.wikipedia.org/wiki/Wigner%20distribution%20function
|
The Wigner distribution function (WDF) is used in signal processing as a transform in time-frequency analysis.
The WDF was first proposed in physics to account for quantum corrections to classical statistical mechanics in 1932 by Eugene Wigner, and it is of importance in quantum mechanics in phase space (see, by way of comparison: Wigner quasi-probability distribution, also called the Wigner function or the Wigner–Ville distribution).
Given the shared algebraic structure between position-momentum and time-frequency conjugate pairs, it also usefully serves in signal processing, as a transform in time-frequency analysis, the subject of this article. Compared to a short-time Fourier transform, such as the Gabor transform, the Wigner distribution function provides the highest possible temporal vs frequency resolution which is mathematically possible within the limitations of the uncertainty principle. The downside is the introduction of large cross terms between every pair of signal components and between positive and negative frequencies, which makes the original formulation of the function a poor fit for most analysis applications. Subsequent modifications have been proposed which preserve the sharpness of the Wigner distribution function but largely suppress cross terms.
Mathematical definition
There are several different definitions for the Wigner distribution function. The definition given here is specific to time-frequency analysis. Given the time series $x(t)$, its non-stationary auto-covariance function is given by
$C_x(t_1, t_2) = \left\langle \left(x(t_1) - \mu(t_1)\right)\left(x(t_2) - \mu(t_2)\right)^* \right\rangle,$
where $\langle \cdots \rangle$ denotes the average over all possible realizations of the process and $\mu(t)$ is the mean, which may or may not be a function of time. The Wigner function $W_x(t, f)$ is then given by first expressing the auto-covariance function in terms of the average time $t = (t_1 + t_2)/2$ and time lag $\tau = t_2 - t_1$, and then Fourier transforming the lag:
$W_x(t, f) = \int_{-\infty}^{\infty} C_x\!\left(t - \tfrac{\tau}{2},\, t + \tfrac{\tau}{2}\right) e^{-2\pi i \tau f}\, d\tau.$
So for a single (mean-zero) time series, the Wigner function is simply given by
$W_x(t, f) = \int_{-\infty}^{\infty} x\!\left(t + \tfrac{\tau}{2}\right) x^*\!\left(t - \tfrac{\tau}{2}\right) e^{-2\pi i \tau f}\, d\tau.$
The motivation for the Wigner function is that it reduces to
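A minimal numerical sketch (assuming uniform sampling and NumPy, and not a standard library routine) of the discrete counterpart of the last formula; because the lag runs over 2m samples, the frequency axis is scaled accordingly.

```python
import numpy as np

def wigner(x):
    """Discrete Wigner distribution of a (possibly complex) sampled signal.

    W[k, n] is the value at time sample n; since the lag is 2m samples,
    frequency bin k corresponds to k / (2 * N * dt) in hertz.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.empty((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)        # largest lag staying inside the signal
        r = np.zeros(N, dtype=complex)
        for m in range(-mmax, mmax + 1):
            r[m % N] = x[n + m] * np.conj(x[n - m])  # x(t + tau/2) x*(t - tau/2)
        W[:, n] = np.fft.fft(r).real    # Fourier transform over the lag
    return W

# usage: Wigner distribution of a short linear chirp
t = np.arange(128)
W = wigner(np.exp(1j * 2 * np.pi * (0.001 * t) * t))
```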
|
https://en.wikipedia.org/wiki/Penrose%20interpretation
|
The Penrose interpretation is a speculation by Roger Penrose about the relationship between quantum mechanics and general relativity. Penrose proposes that a quantum state remains in superposition until the difference of space-time curvature attains a significant level.
Overview
Penrose's idea is inspired by quantum gravity, because it uses both the physical constants $\hbar$ and $G$. It is an alternative to the Copenhagen interpretation, which posits that superposition fails when an observation is made (but that it is non-objective in nature), and the many-worlds interpretation, which states that alternative outcomes of a superposition are equally "real", while their mutual decoherence precludes subsequent observable interactions.
Penrose's idea is a type of objective collapse theory. For these theories, the wavefunction is a physical wave, which experiences wave function collapse as a physical process, with observers not having any special role. Penrose theorises that the wave function cannot be sustained in superposition beyond a certain energy difference between the quantum states. He gives an approximate value for this difference: a Planck mass worth of matter, which he calls the "'one-graviton' level". He then hypothesizes that this energy difference causes the wave function to collapse to a single state, with a probability based on its amplitude in the original wave function, a procedure derived from standard quantum mechanics. Penrose's "'one-graviton' level" criterion forms the basis of his prediction, providing an objective criterion for wave function collapse. Despite the difficulties of specifying this in a rigorous way, he proposes that the basis states into which the collapse takes place are mathematically described by the stationary solutions of the Schrödinger–Newton equation.
Recent work indicates an increasingly deep inter-relation between quantum mechanics and gravitation.
Physical consequences
Accepting that wavefunctions are physically real, Penrose
|
https://en.wikipedia.org/wiki/Integrated%20injection%20logic
|
Integrated injection logic (IIL, I2L, or I²L) is a class of digital circuits built with multiple-collector bipolar junction transistors (BJT). When introduced it had speed comparable to TTL yet was almost as low-power as CMOS, making it ideal for use in VLSI (and larger) integrated circuits. The gates can be made smaller with this logic family than with CMOS because complementary transistors are not needed. Although the logic voltage levels are very close (High: 0.7 V, Low: 0.2 V), I²L has high noise immunity because it operates by current instead of voltage. I²L was developed in 1971 by Siegfried K. Wiedmann and Horst H. Berger, who originally called it merged-transistor logic (MTL).
A disadvantage of this logic family is that the gates draw power when not switching, unlike CMOS gates.
Construction
The I²L inverter gate is constructed with a PNP common-base current source transistor and an NPN common-emitter open-collector inverter transistor (i.e., its emitter is connected to ground). On a wafer, these two transistors are merged. A small voltage (around 1 volt) is supplied to the emitter of the current source transistor to control the current supplied to the inverter transistor. Transistors are used as current sources on integrated circuits because they are much smaller than resistors.
Because the inverter is open collector, a wired AND operation may be performed by connecting an output from each of two or more gates together. Thus the fan-out of an output used in such a way is one. However, additional outputs may be produced by adding more collectors to the inverter transistor. The gates can be constructed very simply with just a single layer of interconnect metal.
In a discrete implementation of an I2L circuit, bipolar NPN transistors with multiple collectors can be replaced with multiple discrete 3-terminal NPN transistors connected in parallel having their bases connected together and their emitters connected likewise. The current source transistor may be replaced
|
https://en.wikipedia.org/wiki/Lingual%20nerve
|
The lingual nerve carries sensory innervation from the anterior two-thirds of the tongue. It contains fibres from both the mandibular division of the trigeminal nerve (CN V) and from the facial nerve (CN VII). The fibres from the trigeminal nerve are for touch, pain and temperature (general sensation), and the ones from the facial nerve are for taste (special sensation).
Structure
Origin
The lingual nerve arises from the posterior trunk of mandibular nerve (CN V) within the infratemporal fossa.
Course
The lingual nerve first courses deep to the lateral pterygoid muscle and superior to the tensor veli palatini muscle; while passing between these two muscles, it is joined by the chorda tympani, and often by a communicating branch from the inferior alveolar nerve.
The nerve then comes to pass inferoanteriorly upon the medial pterygoid muscle towards the medial aspect of the ramus of mandible, eventually meeting the mandible at the junction of the ramus and body of mandible. Here, the lingual nerve is anterior and somewhat medial (deep) to the inferior alveolar nerve.
It crosses obliquely to the side of the tongue beneath the constrictor pharyngis superior and styloglossus, and then between the hyoglossus and the deep part of the submandibular gland; it finally runs from lateral to medial, inferiorly crossing the duct of the submandibular gland, and along the tongue to its tip, becoming the sublingual nerve, lying immediately beneath the mucous membrane.
The submandibular ganglion is suspended by two nerve filaments from the lingual nerve.
Distribution
General sensory
The lingual nerve supplies general somatic afferent (i.e. general sensory) innervation to the mucous membrane of the anterior two-thirds of the tongue (i.e. body of tongue) (whereas the posterior one-third (i.e. root of tongue) is innervated via the glossopharyngeal nerve (CN IX)), the floor of the oral cavity, and the mandibular/inferior lingual gingiva.
Special sensory and parasympathetic autono
|
https://en.wikipedia.org/wiki/Swansea%20Devil
|
The Swansea Devil, also called Old Nick, is a wood carving of the Devil in Swansea, Wales. It was carved by an architect whose designs for St. Mary's Church had been rejected by a committee. Some years later when designing an office building across the road, he placed a carving of Satan facing the church and prophesied "When your church is destroyed and burnt to the ground my devil will remain laughing". This prophecy later came true when the church was bombed during the Second World War.
History
In the 1890s it was decided that St. Mary's Church in the centre of Swansea would be rebuilt. The task of designing the new church was put to tender. Among those who applied were a local architect and Sir Arthur Blomfield. The committee accepted Blomfield's designs and the church was built. The local man took his rejection as a slight against his talent. After several years a row of cottages adjacent to the church became available for purchase. The offended architect bought these houses, and tore them down. In their place he erected a red brick building to house the brewery offices, on which he placed a carving of Satan, facing the church. The local man is reputed to have prophesied: "When your church is destroyed and burnt to the ground my devil will remain laughing."
Blitz
Swansea, being a major strategic target in South Wales, was bombed heavily during World War II. One of the buildings destroyed during the three-night blitz in February 1941 was St. Mary's Church. The building on which Old Nick was mounted was not hit and remained standing through the war, thus allowing Old Nick to continue laughing over the burnt remains of the church.
Post war
In 1962 the brewery offices were torn down, while St. Mary's was rebuilt to the original designs. The devil was left to rot in a garage in Hereford, until a local historian returned him to Swansea during the 1980s.
Present day
Occupying the land of the brewery offices is now the Quadrant Shopping Centre, opened in 1979. Once re
|
https://en.wikipedia.org/wiki/Anchor%20Stone%20Blocks
|
Anchor Stone Blocks (German: Anker-Steinbaukasten) are components of stone construction sets made in Rudolstadt, Germany, marketed as a construction toy.
Description
Anchor Stone pieces are made of a mixture of quartz sand, chalk, and linseed oil (German Patent 13,770; US Patent 233,780), precisely pressed in molds so that they fit together perfectly. The stones come in three colors in imitation of the red brick, tan limestone, and blue slate of European buildings. They are not recommended for play by children under 3 years of age because of their small size (CE No. 0494).
History
Origin
Anchor stones originated with the wooden building blocks designed by Friedrich Fröbel, the creator of the Kindergarten system of education. He had observed how children enjoyed playing with geometrically-shaped blocks.
The first Anchor Stone was produced when Otto Lilienthal and his brother Gustav decided to make a model of a stone building, using miniature blocks of stone. To this end, they started production of a limited number of blocks, made of a mixture of quartz sand, chalk, and linseed oil. Unfortunately, the Lilienthals, though brilliant inventors, had limited commercial success.
The stone blocks saw little popularity until 1880, when Friedrich Adolf Richter, a wealthy businessman who had built a small empire in Rudolstadt, purchased the rights to the process for 1000 marks, plus about 4800 marks (including 800 marks still owing) for the tooling and machines being used to produce them. He developed a series of sets of individually-packaged stones, which quickly became popular. Promoted by extensive advertising, 42,000 sets were sold in 1883 (Annual Report for 1883 of the Schwarzburg-Rudolstadt Factory Inspection Service, Archives, Heidecksburg, Rudolstadt). In 1894, Richter applied his "Anchor" trademark to the Richter's Anchor Stone Building Sets (Richters Anker-Steinbaukasten). More than 600 different sets were produced over the multi-decade life of these sets; more than 1,000 stone shapes wer
|
https://en.wikipedia.org/wiki/Round%20window
|
The round window is one of the two openings from the middle ear into the inner ear. It is sealed by the secondary tympanic membrane (round window membrane), which vibrates with opposite phase to vibrations entering the inner ear through the oval window. It allows fluid in the cochlea to move, which in turn ensures that hair cells of the basilar membrane will be stimulated and that audition will occur.
Structure
The round window is situated below (inferior to) and a little behind (posterior to) the oval window, from which it is separated by a rounded elevation, the promontory.
It is located at the bottom of a funnel-shaped depression (the round window niche) and, in the macerated bone, opens into the cochlea of the internal ear; in the fresh state it is closed by a membrane, the secondary tympanic membrane or round window membrane, which is a complex saddle-point shape. The visible central portion is concave (curved inwards) toward the tympanic cavity and convex (curved outwards) toward the cochlea; but towards the edges, where it is hidden in the round window niche, it curves the other way.
This membrane consists of three layers:
an external, or mucous, derived from the mucous lining of the tympanic cavity;
an internal, from the lining membrane of the cochlea;
and an intermediate, or fibrous layer.
The membrane vibrates with opposite phase to vibrations entering the cochlea through the oval window as the fluid in the cochlea is displaced when pressed by the stapes at the oval window. This ensures that hair cells of the basilar membrane will be stimulated and that audition will occur.
Both the oval and round windows are about the same size. The entrance to the round window niche is often much smaller than this.
Function
The stapes bone transmits movement to the oval window. As the stapes footplate moves into the oval window, the round window membrane moves out, and this allows movement of the fluid within the cochlea, leading to mo
|
https://en.wikipedia.org/wiki/NProtect%20GameGuard
|
nProtect GameGuard (sometimes called GG) is an anti-cheating rootkit developed by INCA Internet. It is widely installed in many online games to block possibly malicious applications and prevent common methods of cheating. nProtect GameGuard provides B2B2C (Business to Business to Consumer) security services for online game companies and portal sites. The software is considered to be one of three software programs which "dominate the online game security market".
GameGuard uses rootkits to proactively prevent cheat software from running. GameGuard hides the game application process, monitors the entire memory range, terminates applications defined by the game vendor and INCA Internet to be cheats (QIP for example), blocks certain calls to DirectX functions and Windows APIs, keylogs keyboard input, and auto-updates itself to change as new possible threats surface.
Since GameGuard essentially works like a rootkit, players may experience unintended and potentially unwanted side effects. If set, GameGuard blocks any installation or activation of hardware and peripherals (e.g., a mouse) while the program is running. Since GameGuard monitors any changes in the computer's memory, it will cause performance issues when the protected game loads multiple or large resources all at once.
Additionally, some versions of GameGuard had an unpatched privilege escalation bug, allowing any program to issue commands as if they were running under an Administrator account.
GameGuard possesses a database on game hacks based on security references from more than 260 game clients. Some editions of GameGuard are now bundled with INCA Internet's Tachyon anti-virus/anti-spyware library, and others with nProtect Key Crypt, an anti-key-logger software that protects the keyboard input information.
List of online games using GameGuard
GameGuard is used in many online games.
9Dragons
Atlantica Online
Blackshot
Blade & Soul
Cabal Online
City Racer
Combat Arms: Reloaded
Combat Arms: Th
|
https://en.wikipedia.org/wiki/Ethics%20of%20care
|
The ethics of care (alternatively care ethics or EoC) is a normative ethical theory that holds that moral action centers on interpersonal relationships and care or benevolence as a virtue. EoC is one of a cluster of normative ethical theories that were developed by some feminists and environmentalists since the 1980s. While consequentialist and deontological ethical theories emphasize generalizable standards and impartiality, the ethics of care emphasizes the importance of response to the individual. The distinction between the general and the individual is reflected in their different moral questions: "what is just?" versus "how to respond?" Carol Gilligan, who is considered the originator of the ethics of care, criticized the application of generalized standards as "morally problematic, since it breeds moral blindness or indifference".
Assumptions of the framework include: persons are understood to have varying degrees of dependence and interdependence; other individuals affected by the consequences of one's choices deserve consideration in proportion to their vulnerability; and situational details determine how to safeguard and promote the interests of individuals.
Historical background
The originator of the ethics of care was Carol Gilligan, an American ethicist and psychologist. Gilligan created this model as a critique of the model of moral development devised by her mentor, developmental psychologist Lawrence Kohlberg. Gilligan observed that measuring moral development by Kohlberg's stages of moral development found boys to be more morally mature than girls, and this result held for adults as well (although when education is controlled for there are no gender differences). Gilligan argued that Kohlberg's model was not objective, but rather a masculine perspective on morality, founded on principles of justice and rights. In her 1982 book In a Different Voice, she further posited that men and women have tendencies to view morality in different terms. Her theory claimed women t
|
https://en.wikipedia.org/wiki/Head%20shadow
|
A head shadow (or acoustic shadow) is a region of reduced amplitude of a sound because it is obstructed by the head. It is an example of diffraction.
Sound may have to travel through and around the head in order to reach an ear. The obstruction caused by the head can account for attenuation (reduced amplitude) of overall intensity as well as cause a filtering effect. The filtering effects of head shadowing are an essential element of sound localisation—the brain weighs the relative amplitude, timbre, and phase of a sound heard by the two ears and uses the difference to interpret directional information.
The shadowed ear, the ear further from the sound source, receives sound slightly later (up to approximately 0.7 ms later) than the unshadowed ear, and the timbre, or frequency spectrum, of the shadowed sound wave is different because of the obstruction of the head.
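A back-of-the-envelope check of the ~0.7 ms figure, assuming a ~0.24 m extra path length around an adult head and a sound speed of 343 m/s (both assumed round numbers):

```python
speed_of_sound = 343.0   # m/s in air at roughly 20 °C
extra_path = 0.24        # m, assumed extra travel distance around the head
itd = extra_path / speed_of_sound
print(f"{itd * 1000:.2f} ms")   # ~0.70 ms, matching the figure quoted above
```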
The head shadow causes particular difficulty in sound localisation in people suffering from unilateral hearing loss. It is a factor to consider when correcting hearing loss with directional hearing aids.
See also
Interaural intensity difference
Hearing
|
https://en.wikipedia.org/wiki/Digital%20Compression%20System
|
Digital Compression System, or DCS, is a sound system developed by Williams Electronics. This advanced sound board was used in Williams and Bally pinball games, coin-op arcade video games by Midway Manufacturing, and mechanical and video slot machines by Williams Gaming. This sound system became the standard for these game platforms.
The DCS Sound system was created by Williams sound engineers Matt Booty and Ed Keenan, and further developed by Andrew Eloff.
Versions of DCS
DCS ROM-based mono: The first version of DCS used an Analog Devices ADSP2105 DSP (clocked at 10 MHz) and a DMA-driven DAC, outputting in mono. This was used for the majority of Williams and Midway's pinball games (starting with 1993's Indiana Jones: The Pinball Adventure), as well as Midway's video games, up until the late 1990s. The pinball game, The Twilight Zone, was originally supposed to use the DCS System, but because the DCS board was still in development at the time, all of the music and sounds for this game were reprogrammed for the Yamaha YM2151 / Harris CVSD sound board.
DCS-95: This was a revised version of the original DCS System (allowing for 16MB of data instead of 8MB to be addressed), used for Williams and Midway's WPC-95 pinball system.
DCS2 ROM-based stereo: This version used the ADSP2104 DSP and two DMA-driven DACs, outputting in stereo. This was used in Midway's Zeus-based hardware, and in the short-lived Pinball 2000 platform.
DCS2 RAM-based stereo: This version used the ADSP2115 DSP and two DMA-driven DACs, outputting in stereo. This was used in Midway's 3DFX-based hardware (NFL Blitz, etc.). This system would be adopted by Atari Games, following their acquisition by WMS Industries.
DCS2 RAM-based multi-channel: This version used the ADSP2181 DSP and up to six DMA-driven DACs, outputting in multichannel sound.
Pinball games using DCS
Attack From Mars (1995) (DCS95)
Cactus Canyon (1998) (DCS95)
The Champion Pub (1998) (DCS95)
Cirqus Voltaire (1997) (DCS95)
Congo (1995) (DC
|
https://en.wikipedia.org/wiki/Hoffman%E2%80%93Singleton%20graph
|
In the mathematical field of graph theory, the Hoffman–Singleton graph is a 7-regular undirected graph with 50 vertices and 175 edges. It is the unique strongly regular graph with parameters (50,7,0,1). It was constructed by Alan Hoffman and Robert Singleton while trying to classify all Moore graphs, and is the highest-order Moore graph known to exist. Since it is a Moore graph where each vertex has degree 7, and the girth is 5, it is a (7,5)-cage.
Construction
Here are some constructions of the Hoffman–Singleton graph.
Construction from pentagons and pentagrams
Take five pentagons $P_h$ and five pentagrams $Q_i$. Join vertex $j$ of $P_h$ to vertex $h \cdot i + j$ of $Q_i$. (All indices are modulo 5.)
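A minimal Python sketch of this construction, assuming the usual convention that vertex $j$ of a pentagon is adjacent to $j \pm 1 \pmod 5$ and vertex $j$ of a pentagram to $j \pm 2 \pmod 5$; the assertions check the vertex count, edge count, and 7-regularity stated above.

```python
from itertools import product

# vertices: ('P', h, j) are pentagon vertices, ('Q', i, j) pentagram vertices
adj = {v: set() for v in product("PQ", range(5), range(5))}

def connect(u, v):
    adj[u].add(v)
    adj[v].add(u)

for h, j in product(range(5), repeat=2):
    connect(("P", h, j), ("P", h, (j + 1) % 5))       # pentagon edge j ~ j+1
    connect(("Q", h, j), ("Q", h, (j + 2) % 5))       # pentagram edge j ~ j+2
for h, i, j in product(range(5), repeat=3):
    connect(("P", h, j), ("Q", i, (h * i + j) % 5))   # vertex j of P_h ~ h*i+j of Q_i

assert len(adj) == 50                                 # 50 vertices
assert sum(len(n) for n in adj.values()) // 2 == 175  # 175 edges
assert all(len(n) == 7 for n in adj.values())         # 7-regular
```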
Construction from PG(3,2)
Take a Fano plane on seven elements, such as {abc, ade, afg, bef, bdg, cdf, ceg} and apply all 2520 even permutations on the 7-set abcdefg. Canonicalize each such Fano plane (e.g. by reducing to lexicographic order) and discard duplicates. Exactly 15 Fano planes remain. Each 3-set (triplet) of the set abcdefg is present in exactly 3 Fano planes. The incidence between the 35 triplets and 15 Fano planes induces PG(3,2), with 15 points and 35 lines. To make the Hoffman-Singleton graph, create a graph vertex for each of the 15 Fano planes and 35 triplets. Connect each Fano plane to its 7 triplets, like a Levi graph, and also connect disjoint triplets to each other like the odd graph O(4).
A very similar construction from PG(3,2) is used to build the Higman–Sims graph, which has the Hoffman-Singleton graph as a subgraph.
Construction on a groupoid
Let $S$ be a set equipped with a binary operation $\circ$, defined so that for each pair of elements $(a, b)$ and $(c, d)$ in $S$ the operation yields another element of $S$. Then the Hoffman–Singleton graph has the elements of $S$ as vertices, and there exists an edge between two vertices whenever a certain relation, expressed in terms of $\circ$, holds between them.
Algebraic properties
The automorphism group of the Hoffman–Singleton graph is a group of order 252,000 isomorphic to $P\Sigma U(3, 5^2)$, the semidirect product of the projective special unitary group $PSU(3, 5^2)$ with the cyclic group of order 2 gener
|
https://en.wikipedia.org/wiki/Moore%20graph
|
In graph theory, a Moore graph is a regular graph whose girth (the shortest cycle length) is more than twice its diameter (the distance between the farthest two vertices). If the degree of such a graph is $d$ and its diameter is $k$, its girth must equal $2k + 1$. This is true, for a graph of degree $d$ and diameter $k$, if and only if its number of vertices equals
$1 + d\sum_{i=0}^{k-1}(d-1)^i,$
an upper bound on the largest possible number of vertices in any graph with this degree and diameter. Therefore, these graphs solve the degree diameter problem for their parameters.
Another equivalent definition of a Moore graph $G$ is that it has girth $g = 2k + 1$ and precisely $\frac{n}{g}(m - n + 1)$ cycles of length $g$, where $n$ and $m$ are, respectively, the numbers of vertices and edges of $G$. They are in fact extremal with respect to the number of cycles whose length is the girth of the graph.
Moore graphs were named by Hoffman and Singleton (1960) after Edward F. Moore, who posed the question of describing and classifying these graphs.
As well as having the maximum possible number of vertices for a given combination of degree and diameter, Moore graphs have the minimum possible number of vertices for a regular graph with given degree and girth. That is, any Moore graph is a cage. The formula for the number of vertices in a Moore graph can be generalized to allow a definition of Moore graphs with even girth as well as odd girth, and again these graphs are cages.
Bounding vertices by degree and diameter
Let $G$ be any graph with maximum degree $d$ and diameter $k$, and consider the tree formed by breadth-first search starting from any vertex $v$. This tree has 1 vertex at level 0 ($v$ itself), and at most $d$ vertices at level 1 (the neighbors of $v$). In the next level, there are at most $d(d-1)$ vertices: each neighbor of $v$ uses one of its adjacencies to connect to $v$ and so can have at most $d - 1$ neighbors at level 2. In general, a similar argument shows that at any level $1 \leq i \leq k$, there can be at most $d(d-1)^{i-1}$ vertices. Thus, the total number of vertices can be at most
$1 + d\sum_{i=0}^{k-1}(d-1)^i.$
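As a quick check of the bound, take degree $d = 7$ and diameter $k = 2$ (the parameters of the Hoffman–Singleton graph):
\[
1 + d\sum_{i=0}^{k-1}(d-1)^i = 1 + 7\,(1 + 6) = 50,
\]
which is exactly the number of vertices of that graph, so the bound is attained.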
Hoffman and Singleton (1960) originally defined a Moore graph as a graph for which this boun
|
https://en.wikipedia.org/wiki/Tychoplankton
|
Tychoplankton (Greek, "tycho", accident, chance) are organisms, such as free-living or attached benthic organisms and other non-planktonic organisms, that are carried into the plankton through a disturbance of their benthic habitat, or by winds and currents. This can occur by direct turbulence or by disruption of the substrate and subsequent entrainment in the water column. Tychoplankton are, therefore, a primary subdivision for sorting planktonic organisms by duration of lifecycle spent in the plankton, as neither their entire lives nor particular reproductive portions are confined to planktonic existence.
They are also known as accidental plankton or pseudo-plankton (compare: pseudoplankton), although "pseudoplankton" also defines organisms that do not themselves float but, rather, are attached to other organisms that float.
See also
Pseudoplankton
|
https://en.wikipedia.org/wiki/Cogan%20syndrome
|
Cogan syndrome (also Cogan's syndrome) is a rare disorder characterized by recurrent inflammation of the front of the eye (the cornea), often accompanied by fever, fatigue, weight loss, episodes of vertigo (dizziness), tinnitus (ringing in the ears) and hearing loss. It can lead to deafness or blindness if untreated. The classic form of the disease was first described by D. G. Cogan in 1945.
Signs and symptoms
Cogan syndrome is a rare, rheumatic disease characterized by inflammation of the ears and eyes. Cogan syndrome can lead to vision difficulty, hearing loss and dizziness. The condition may also be associated with blood-vessel inflammation (called vasculitis) in other areas of the body that can cause major organ damage in 15% of those affected or, in a small number of cases, even death. It most commonly occurs in a person's 20s or 30s. The cause is not known. However, one theory is that it is an autoimmune disorder in which the body's immune system mistakenly attacks tissue in the eye and ear.
Causes
It is currently thought that Cogan syndrome is an autoimmune disease. The inflammation in the eye and ear are due to the patient's own immune system producing antibodies that attack the inner ear and eye tissue. Autoantibodies can be demonstrated in the blood of some patients, and these antibodies have been shown to attack inner ear tissue in laboratory studies. Infection with the bacteria Chlamydia pneumoniae has been demonstrated in some patients prior to the development of Cogan syndrome, leading some researchers to hypothesize that the autoimmune disease may be initiated by the infection. C. pneumoniae is a common cause of mild pneumonia, and the vast majority of patients who are infected with the bacteria do not develop Cogan syndrome.
Diagnosis
While the white blood cell count, erythrocyte sedimentation rate, and C-reactive protein tests may be abnormal and there may be abnormally high levels of platelets in the blood or too few red blood cells in the blood, none o
|
https://en.wikipedia.org/wiki/Pax%20genes
|
In evolutionary developmental biology, Paired box (Pax) genes are a family of genes coding for tissue-specific transcription factors containing an N-terminal paired domain and usually a partial homeodomain, or in the case of four family members (PAX3, PAX4, PAX6 and PAX7) a complete homeodomain, towards the C-terminus. An octapeptide as well as a Pro-Ser-Thr-rich C terminus may also be present. Pax proteins are important in early animal development for the specification of specific tissues, as well as during epimorphic limb regeneration in animals capable of such.
The paired domain was initially described in 1987 as the "paired box" in the Drosophila protein paired (prd).
Groups
Within the mammalian family, there are four well-defined groups of Pax genes.
Pax group 1 (Pax 1 and 9),
Pax group 2 (Pax 2, 5 and 8),
Pax group 3 (Pax 3 and 7) and
Pax group 4 (Pax 4 and 6).
Two more families, Pox-neuro and Pax-α/β, exist in basal bilaterian species. Orthologous genes exist throughout the Metazoa; murine Pax6, for example, has been studied extensively through its ectopic expression in Drosophila. The two rounds of whole-genome duplication in vertebrate evolution are responsible for the creation of as many as four paralogs for each Pax protein.
Members
PAX1 has been identified in mice as involved in the development of the vertebrae and embryo segmentation, and there is some evidence this is also true in humans. It transcribes a 440 amino acid protein from 4 exons and 1,323 bases in humans. In the mouse, Pax1 mutation has been linked to the undulated mutant, which suffers from skeletal malformations.
PAX2 has been identified with kidney and optic nerve development. It transcribes a 417 amino acid protein from 11 exons and 4,261 bases in humans. Mutation of PAX2 in humans has been associated with renal-coloboma syndrome as well as oligomeganephronia.
PAX3 has been identified with ear, eye and facial development. It transcribes a 479 amino acid protein in humans. Mutations in it can cause Waardenburg syndrome. PAX3 is frequently expressed in me
|
https://en.wikipedia.org/wiki/X.75
|
X.75 is an International Telecommunication Union (ITU) (formerly CCITT) standard specifying the interface for interconnecting two X.25 networks. X.75 is almost identical to X.25. The significant difference is that while X.25 specifies the interface between a subscriber (Data Terminal Equipment (DTE)) and the network (Data Circuit-terminating Equipment (DCE)), X.75 specifies the interface between two networks (Signalling Terminal Equipment (STE)), and refers to these two STE as STE-X and STE-Y. This gives rise to some subtle differences in the protocol compared with X.25. For example, X.25 only allows network-generated reset and clearing causes to be passed from the network (DCE) to the subscriber (DTE), and not the other way around, since the subscriber is not a network. However, at the interconnection of two X.25 networks, either network might reset or clear an X.25 call, so X.75 allows network-generated reset and clearing causes to be passed in either direction.
Although outside the scope of both X.25 and X.75, which define external interfaces to an X.25 network, X.75 can also be found as the protocol operating between switching nodes inside some X.25 networks.
Further reading
External links
ITU-T Recommendation X.75
Network layer protocols
Wide area networks
ITU-T recommendations
ITU-T X Series Recommendations
X.25
|
https://en.wikipedia.org/wiki/Ohio%20Scientific
|
Ohio Scientific, Inc. (OSI, originally Ohio Scientific Instruments, Inc.), was a privately owned American computer company based in Ohio that built and marketed computer systems, expansions, and software from 1975 to 1986. Their best-known products were the Challenger series of microcomputers and Superboard single-board computers. The company was the first to market microcomputers with hard disk drives in 1977.
The company was incorporated as Ohio Scientific Instruments in Hiram, Ohio, by husband and wife Mike and Charity Cheiky and business associate Dale A. Dreisbach in 1975. Originally a maker of electronic teaching aids, the company leaned quickly into microcomputer production, after their original educational products failed in the marketplace while their computer-oriented products sparked high interest in the hobbyist community. The company moved to Aurora, Ohio, occupying a 72,000-square-foot factory. The company reached the $1 million revenue mark in 1976; by the end of 1980, the company generated $18 million in revenue. Ohio Scientific's manufacturing presence likewise expanded into greater Ohio as well as California and Puerto Rico.
In 1980, the company was acquired by telecommunications conglomerate M/A-COM of Burlington, Massachusetts, for $5 million. M/A-COM soon consolidated the company's product lines, in order to focus their new subsidiary on manufacturing business systems. During their tenure under M/A-COM, Ohio Scientific was renamed M/A-COM Office Systems. M/A-COM struggled financially themselves and sold the division in 1983 to Kendata Inc. of Trumbull, Connecticut, which immediately renamed it back to Ohio Scientific. Kendata, previously only a corporate reseller of computer systems, failed to maintain Ohio Scientific's manufacturing lines and subsequently sold the division to AB Fannyudde of Sweden. The flagship Aurora factory, by then employing only 16 people, was finally shut down in October 1983.
Beginnings (1975–1976)
Ohio Scientific was
|
https://en.wikipedia.org/wiki/Digital%20Control%20Bus
|
DCB (Digital Control Bus, Digital Connection Bus or Digital Communication Bus in some sources) was a proprietary data interchange interface by Roland Corporation, developed in 1981 and introduced in 1982 in their Roland Juno-60 and Roland Jupiter-8 products. DCB functions were basically the same as MIDI's, but unlike MIDI (which is capable of transmitting a wide array of information), DCB could provide only note on/off, program change and VCF/VCA control. DCB-to-MIDI adapters were produced for a number of early Roland products. The DCB interface was made in two variants: the earlier used 20-pin sockets and cables, the later a 14-pin Amphenol DDK connector vaguely resembling a parallel port.
Supporting equipment
DCB was quickly replaced in the early 1980s by MIDI, which Roland helped co-develop with Sequential Circuits. The only DCB-equipped instruments produced were the Roland Jupiter-8 and JUNO-60; Roland produced at least two DCB sequencers, the JSQ-60 and the MSQ-700. The latter could save eight sequences, or a total of 3000 notes, and could transmit and receive data via MIDI (though it could not convert signals between DCB and MIDI, nor use both protocols simultaneously). Roland later released the MD-8, a rather large black box capable of converting MIDI signals to DCB and vice versa. While this allows note on/off messages to be sent to a JUNO-60 over MIDI, the solution pales in comparison to the full MIDI implementation on the JUNO-60's successor, the Roland Juno-106. A few other companies offer similar conversion boxes to connect DCB instruments to regular MIDI systems, supporting vintage synthesizers in modern sound-production environments; one of the more fully featured devices is the Kenton PRO-DCB Mk3, which offers some bi-directional control limited to a few parameters.
Implementation
The following information comes from the Roland JUNO-60 Service Notes, First Edition, pages 17–19.
Physical connection
DCB
|
https://en.wikipedia.org/wiki/PComb3H
|
pComb3H, a derivative of pComb3 optimized for expression of human fragments, is a phagemid used to express proteins such as zinc finger proteins and antibody fragments on the phage coat protein pIII for the purpose of phage display selection.
For the purpose of phage production, it contains the bacterial ampicillin resistance gene (encoding β-lactamase), allowing the growth of only transformed bacteria.
|
https://en.wikipedia.org/wiki/Initial%20algebra
|
In mathematics, an initial algebra is an initial object in the category of -algebras for a given endofunctor . This initiality provides a general framework for induction and recursion.
Examples
Functor
Consider the endofunctor sending to , where is the one-point (singleton) set, the terminal object in the category. An algebra for this endofunctor is a set (called the carrier of the algebra) together with a function . Defining such a function amounts to defining a point and a function .
Define
and
Then the set of natural numbers together with the function is an initial -algebra. The initiality (the universal property for this case) is not hard to establish; the unique homomorphism to an arbitrary -algebra , for an element of and a function on , is the function sending the natural number to , that is, , the -fold application of to .
The set of natural numbers is the carrier of an initial algebra for this functor: the point is zero and the function is the successor function.
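As a concrete sketch (an illustration, not part of the original article), the unique homomorphism out of this initial algebra can be written as a fold in Python: given any algebra presented as a point and a unary function, it sends n to the n-fold application of the function to the point.

    def fold_nat(point, step, n):
        """Unique homomorphism from the initial algebra (N, zero, succ)
        to an algebra given by (point, step): n maps to step applied n times to point."""
        value = point
        for _ in range(n):
            value = step(value)
        return value

    # Interpreting the naturals in the algebra (1, doubling) yields powers of two:
    print(fold_nat(1, lambda x: 2 * x, 10))  # 1024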
Functor
For a second example, consider the endofunctor on the category of sets, where is the set of natural numbers. An algebra for this endofunctor is a set together with a function . To define such a function, we need a point and a function . The set of finite lists of natural numbers is an initial algebra for this functor. The point is the empty list, and the function is cons, taking a number and a finite list, and returning a new finite list with the number at the head.
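Analogously (again an illustrative sketch, not from the article), an algebra here is a point together with a two-argument function, and the unique homomorphism from finite lists is the familiar right fold:

    def fold_list(point, step, xs):
        """Unique homomorphism from the initial algebra of finite lists
        (empty list, cons) to an algebra given by (point, step)."""
        value = point
        for x in reversed(xs):
            value = step(x, value)
        return value

    # Folding with (empty list, cons) rebuilds the list itself:
    print(fold_list([], lambda head, tail: [head] + tail, [1, 2, 3]))  # [1, 2, 3]
    # Folding with (0, +) is the sum homomorphism:
    print(fold_list(0, lambda head, tail: head + tail, [1, 2, 3]))     # 6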
In categories with binary coproducts, the definitions just given are equivalent to the usual definitions of a natural number object and a list object, respectively.
Final coalgebra
Dually, a final coalgebra is a terminal object in the category of -coalgebras. The finality provides a general framework for coinduction and corecursion.
For example, using the same functor as before, a coalgebra is defined as a set together with a function . Defining such a function amounts to defining a partial funct
|
https://en.wikipedia.org/wiki/Adherence%20%28medicine%29
|
In medicine, patient compliance (also adherence, capacitance) describes the degree to which a patient correctly follows medical advice. Most commonly, it refers to medication or drug compliance, but it can also apply to other situations such as medical device use, self care, self-directed exercises, or therapy sessions. Both patient and health-care provider affect compliance, and a positive physician-patient relationship is the most important factor in improving compliance. Access to care also plays a role in patient adherence, with greater wait times to access care contributing to greater absenteeism. The cost of prescription medication also plays a major role.
Compliance can be confused with concordance, which is the process by which a patient and clinician make decisions together about treatment.
Worldwide, non-compliance is a major obstacle to the effective delivery of health care. 2003 estimates from the World Health Organization indicated that only about 50% of patients with chronic diseases living in developed countries follow treatment recommendations with particularly low rates of adherence to therapies for asthma, diabetes, and hypertension. Major barriers to compliance are thought to include the complexity of modern medication regimens, poor health literacy and not understanding treatment benefits, the occurrence of undiscussed side effects, poor treatment satisfaction, cost of prescription medicine, and poor communication or lack of trust between a patient and his or her health-care provider. Efforts to improve compliance have been aimed at simplifying medication packaging, providing effective medication reminders, improving patient education, and limiting the number of medications prescribed simultaneously. Studies show a great variation in terms of characteristics and effects of interventions to improve medicine adherence. It is still unclear how adherence can consistently be improved in order to promote clinically important effects.
Terminology
In me
|
https://en.wikipedia.org/wiki/Acoustic%20cleaning
|
Acoustic cleaning is a maintenance method used in material-handling and storage systems that handle bulk granular or particulate materials, such as grain elevators, to remove the buildup of material on surfaces. An acoustic cleaning apparatus, usually built into the material-handling equipment, works by generating powerful sound waves which shake particulates loose from surfaces, reducing the need for manual cleaning.
History and design
An acoustic cleaner consists of a sound source similar to an air horn found on trucks and trains, attached to the material-handling equipment, which directs a loud sound into the interior. It is powered by compressed air rather than electricity so there is no danger of sparking, which could set off an explosion. It consists of two parts:
The acoustic driver. In the driver, compressed air escaping past a diaphragm causes it to vibrate, generating the sound. It is usually made from solid machined stainless steel. The diaphragm, the only moving part, is usually manufactured from special aerospace grade titanium to ensure performance and longevity.
The bell, a flaring horn, usually made from spun 316 grade stainless steel. The bell serves as a sound resonator, and its flaring shape couples the sound efficiently to the air, increasing the volume of sound radiated.
The overall length of acoustic cleaner horns ranges from 430 mm to over 3 metres. The devices operate at pressures of 4.8 to 6.2 bar (70 to 90 psi). The resultant sound pressure level is around 200 dB.
There are generally four ways to control the operation of an acoustic cleaner:
The most common is by a simple timer
Supervisory control and data acquisition (SCADA)
Programmable logic controller (PLC)
Manually by ball valve
An acoustic cleaner will typically sound for 10 seconds and then wait for a further 500 seconds before sounding again. This on/off ratio is approximately proportional to the working life of the diaphragm. Provided the operatin
|
https://en.wikipedia.org/wiki/Hugo%20von%20Seeliger
|
Hugo von Seeliger (23 September 1849 – 2 December 1924), also known as Hugo Hans Ritter von Seeliger, was a German astronomer, often considered the most important astronomer of his day.
Biography
He was born in Biala, completed high school in Teschen in 1867, and studied at the Universities of Heidelberg and Leipzig. He earned a doctorate in astronomy in 1872 from the latter, studying under Carl Christian Bruhns. He was on the staff of the University of Bonn Observatory until 1877, as an assistant to Friedrich Wilhelm Argelander. In 1874, he directed the German expedition to the Auckland Islands to observe the transit of Venus. In 1881, he became the Director of the Gotha Observatory, and in 1882 became a Professor of Astronomy and Director of the Observatory at the University of Munich, which post he held until his death. His students included Hans Kienle, Ernst Anding, Julius Bauschinger, Paul ten Bruggencate, Gustav Herglotz, Richard Schorr, and especially Karl Schwarzschild, who earned a doctorate under him in 1898, and acknowledged Seeliger's influence in speeches throughout his career.
Seeliger was elected an Associate of the Royal Astronomical Society in 1892, and President of the Astronomische Gesellschaft from 1897 to 1921. He received numerous honours and medals, including knighthood (Ritter), between 1896 and 1917.
His contributions to astronomy include an explanation of the anomalous motion of the perihelion of Mercury (later one of the main tests of general relativity), a theory of novae as arising from the collision of a star with a cloud of gas, and his confirmation of James Clerk Maxwell's theories of the composition of the rings of Saturn by studying variations in their albedo. He is also the discoverer of an apparent paradox in Newton's gravitational law, known as Seeliger's Paradox. However, his main interest was in the stellar statistics of the Bonner Durchmusterung and Bonn section of the Astronomische Gesellschaft star catalogues, and in the conc
|
https://en.wikipedia.org/wiki/Neovascularization
|
Neovascularization is the natural formation of new blood vessels (neo- + vascular + -ization), usually in the form of functional microvascular networks, capable of perfusion by red blood cells, that form to serve as collateral circulation in response to local poor perfusion or ischemia.
Growth factors that mediate neovascularization include those that affect endothelial cell division and differentiation. These growth factors often act in a paracrine or autocrine fashion; they include fibroblast growth factor, placental growth factor, insulin-like growth factor, hepatocyte growth factor, and platelet-derived endothelial growth factor.
There are three different pathways that comprise neovascularization: (1) vasculogenesis, (2) angiogenesis, and (3) arteriogenesis.
Three pathways of neovascularization
Vasculogenesis
Vasculogenesis is the de novo formation of blood vessels. This primarily occurs in the developing embryo with the development of the first primitive vascular plexus, but also occurs to a limited extent with post-natal vascularization. Embryonic vasculogenesis occurs when endothelial cell precursors (hemangioblasts) begin to proliferate and migrate into avascular areas. There, they aggregate to form the primitive network of vessels characteristic of embryos. This primitive vascular system is necessary to provide adequate blood flow to cells, supplying oxygen and nutrients, and removing metabolic wastes.
Angiogenesis
Angiogenesis is the most common type of neovascularization seen in development and growth, and is important to both physiological and pathological processes. Angiogenesis occurs through the formation of new vessels from pre-existing vessels. This occurs through the sprouting of new capillaries from post-capillary venules, requiring precise coordination of multiple steps and the participation and communication of multiple cell types. The complex process is initiated in response to local tissue ischemia or hypoxia, leading to the release of
|
https://en.wikipedia.org/wiki/Conditional%20random%20field
|
Conditional random fields (CRFs) are a class of statistical modeling methods often applied in pattern recognition and machine learning and used for structured prediction. Whereas a classifier predicts a label for a single sample without considering "neighbouring" samples, a CRF can take context into account. To do so, the predictions are modelled as a graphical model, which represents the presence of dependencies between the predictions. What kind of graph is used depends on the application. For example, in natural language processing, "linear chain" CRFs are popular, for which each prediction is dependent only on its immediate neighbours. In image processing, the graph typically connects locations to nearby and/or similar locations to enforce that they receive similar predictions.
Other examples where CRFs are used are: labeling or parsing of sequential data for natural language processing or biological sequences, part-of-speech tagging, shallow parsing, named entity recognition, gene finding, peptide critical functional region finding, and object recognition and image segmentation in computer vision.
Description
CRFs are a type of discriminative undirected probabilistic graphical model.
Lafferty, McCallum and Pereira define a CRF on observations and random variables as follows:
Let be a graph such that , so that is indexed by the vertices of .
Then is a conditional random field when each random variable , conditioned on , obeys the Markov property with respect to the graph; that is, its probability is dependent only on its neighbours in G:
, where means
that and are neighbors in .
What this means is that a CRF is an undirected graphical model whose nodes can be divided into exactly two disjoint sets and , the observed and output variables, respectively; the conditional distribution is then modeled.
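To make the linear-chain case concrete, here is a minimal NumPy sketch (an illustration, not from the article; the parameter names are invented): the conditional probability of a label sequence is an exponentiated sum of emission and transition potentials, normalized by a partition function computed with the forward recursion.

    import numpy as np

    def path_score(emissions, transitions, labels):
        """Unnormalized score of one label path: per-position emission
        potentials plus transition potentials between adjacent labels."""
        score = emissions[0, labels[0]]
        for t in range(1, len(labels)):
            score += transitions[labels[t - 1], labels[t]] + emissions[t, labels[t]]
        return score

    def log_partition(emissions, transitions):
        """Forward algorithm: log-sum-exp of path_score over all label paths."""
        alpha = emissions[0]
        for t in range(1, len(emissions)):
            scores = alpha[:, None] + transitions + emissions[t][None, :]
            m = scores.max(axis=0)
            alpha = m + np.log(np.exp(scores - m).sum(axis=0))
        m = alpha.max()
        return m + np.log(np.exp(alpha - m).sum())

    def conditional_prob(emissions, transitions, labels):
        """P(labels | observations), the observations being summarized
        here by the emission potentials."""
        return np.exp(path_score(emissions, transitions, labels)
                      - log_partition(emissions, transitions))

    # Three positions, two labels:
    E = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 2.0]])
    T = np.array([[0.2, -0.1], [-0.3, 0.4]])
    print(conditional_prob(E, T, [0, 0, 1]))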
Inference
For general graphs, the problem of exact inference in CRFs is intractable. The inference problem for a CRF is basically the same as for an MR
|
https://en.wikipedia.org/wiki/Global%20Descriptor%20Table
|
The Global Descriptor Table (GDT) is a data structure used by Intel x86-family processors starting with the 80286 in order to define the characteristics of the various memory areas used during program execution, including the base address, the size, and access privileges like executability and writability. These memory areas are called segments in Intel terminology.
Description
The GDT is a table of 8-byte entries. Each entry may refer to a segment descriptor, Task State Segment (TSS), Local Descriptor Table (LDT), or call gate. Call gates were designed for transferring control between x86 privilege levels, although this mechanism is not used on most modern operating systems.
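As an illustration of these 8-byte entries, the following Python sketch (not from the article) packs a segment descriptor in the classic 80386 layout; the example access and flag values are common choices, not prescribed by the text:

    import struct

    def gdt_entry(base, limit, access, flags):
        """Pack an 8-byte x86 segment descriptor (classic 80386 layout):
        16-bit limit low, 24 bits of base, access byte,
        4-bit flags plus limit bits 16-19, and the top base byte."""
        return struct.pack(
            "<HHBBBB",
            limit & 0xFFFF,                                # limit bits 0-15
            base & 0xFFFF,                                 # base bits 0-15
            (base >> 16) & 0xFF,                           # base bits 16-23
            access & 0xFF,                                 # present, DPL, type
            ((flags & 0xF) << 4) | ((limit >> 16) & 0xF),  # flags | limit bits 16-19
            (base >> 24) & 0xFF,                           # base bits 24-31
        )

    # A flat 4 GiB ring-0 code segment (base 0, limit 0xFFFFF, 4 KiB granularity):
    entry = gdt_entry(0x00000000, 0xFFFFF, 0x9A, 0xC)
    print(entry.hex())  # ffff0000009acf00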
There is also a Local Descriptor Table (LDT). Multiple LDTs can be defined in the GDT, but only one is current at any one time: usually associated with the current Task. While the LDT contains memory segments which are private to a specific process, the GDT contains global segments. The x86 processors have facilities for automatically switching the current LDT on specific machine events, but no facilities for automatically switching the GDT.
Every memory access performed by a process always goes through a segment. On the 80386 processor and later, because of 32-bit segment offsets and limits, it is possible to make segments cover the entire addressable memory, which makes segment-relative addressing transparent to the user.
In order to reference a segment, a program must use its index inside the GDT or the LDT. Such an index is called a segment selector (or selector). The selector must be loaded into a segment register to be used. Apart from the machine instructions which allow one to set/get the position of the GDT, and of the Interrupt Descriptor Table (IDT), in memory, every machine instruction referencing memory has an implicit segment register, occasionally two. Most of the time this segment register can be overridden by adding a segment prefix before the instruction.
Loading a select
|
https://en.wikipedia.org/wiki/Anopheles%20gambiae
|
The Anopheles gambiae complex consists of at least seven morphologically indistinguishable species of mosquitoes in the genus Anopheles. The complex was recognised in the 1960s and includes the most important vectors of malaria in sub-Saharan Africa, particularly of the most dangerous malaria parasite, Plasmodium falciparum. It is one of the most efficient malaria vectors known. The An. gambiae mosquito additionally transmits Wuchereria bancrofti which causes lymphatic filariasis, a symptom of which is elephantiasis.
Discovery and elements
The Anopheles gambiae complex or Anopheles gambiae sensu lato was recognized as a species complex only in the 1960s. The A. gambiae complex consists of:
Anopheles arabiensis
Anopheles bwambae
Anopheles melas
Anopheles merus
Anopheles quadriannulatus
Anopheles gambiae sensu stricto (s.s.)
Anopheles amharicus
The individual species of the complex are morphologically difficult to distinguish from each other, although it is possible for larvae and adult females. The species exhibit different behavioural traits. For example, Anopheles bwambae is both a saltwater and mineralwater species, A. melas and A. merus are saltwater species, while the remainder are freshwater species.
Anopheles quadriannulatus generally takes its blood meal from animals (zoophilic), whereas Anopheles gambiae sensu stricto generally feeds on humans, i.e. is considered anthropophilic.
Identification to the individual species level using the molecular methods of Scott et al. (1993) can have important implications in subsequent control measures.
Anopheles gambiae in the strict sense
An. gambiae sensu stricto (s.s.) has been discovered to be currently in a state of diverging into two different species—the Mopti (M) and Savannah (S) strains—though as of 2007, the two strains are still considered to be a single species.
The mechanism of species recognition appears to be sounds emitted by the wings and identified by Johnston's organ.
Genome
An.
|
https://en.wikipedia.org/wiki/Psychology%20of%20programming
|
The psychology of programming (PoP) is the field of research that deals with the psychological aspects of writing programs (often computer programs). The field has also been called the empirical studies of programming (ESP). It covers research into computer programmers' cognition, tools and methods for programming-related activities, and programming education.
Psychologically, computer programming is a human activity which involves cognitions such as reading and writing computer language, learning, problem solving, and reasoning.
History
The history of the psychology of programming dates back to the late 1970s and early 1980s, when researchers realized that computational power should not be the only thing evaluated in programming tools and technologies, but also their usability for the user. In the first Workshop on Empirical Studies of Programmers, Ben Shneiderman listed several important destinations for researchers. These destinations include refining the use of current languages, improving present and future languages, developing special-purpose languages, and improving tools and methods. Two important workshop series have been devoted to the psychology of programming in the last two decades: the Workshop on Empirical Studies of Programmers (ESP), based primarily in the US, and the Psychology of Programming Interest Group Workshop (PPIG), which has a European character. ESP has a broader scope than pure psychology in programming, while PPIG is more focused on the field of PoP. However, the PPIG workshops, and the organization PPIG itself, are informal in nature: a group of people interested in PoP who come together and publish their discussions.
Goals and purposes
It is desirable to achieve a programming performance such that creating a program meets its specifications, is on schedule, is adaptable for the future and runs efficiently. Being able to satisfy all these goals at a low cost is a difficult and common problem in software engineering a
|
https://en.wikipedia.org/wiki/Rydberg%20ionization%20spectroscopy
|
Rydberg ionization spectroscopy is a spectroscopy technique in which multiple photons are absorbed by an atom causing the removal of an electron to form an ion.
Resonance ionization spectroscopy
The ionization threshold energy of atoms and small molecules is typically larger than the photon energies that are most easily available experimentally. However, it can be possible to span this ionization threshold energy if the photon energy is resonant with an intermediate electronically excited state. While it is often possible to observe the lower Rydberg levels in conventional spectroscopy of atoms and small molecules, Rydberg states are even more important in laser ionization experiments. Laser spectroscopic experiments often involve ionization through a photon energy resonance at an intermediate level, with an unbound final electron state and an ionic core. On resonance for phototransitions permitted by selection rules, the intensity of the laser in combination with the excited-state lifetime makes ionization an expected outcome. This RIS approach and its variations permit sensitive detection of specific species.
Low Rydberg levels and resonance enhanced multiphoton ionization
High photon intensity experiments can involve multiphoton processes with the absorption of integer multiples of the photon energy. In experiments that involve a multiphoton resonance, the intermediate is often a Rydberg state, and the final state is often an ion. The initial state of the system, photon energy, angular momentum and other selection rules can help in determining the nature of the intermediate state. This approach is exploited in resonance enhanced multiphoton ionization spectroscopy (REMPI). An advantage of this spectroscopic technique is that the ions can be detected with almost complete efficiency and even resolved for their mass. It is also possible to gain additional information by performing experiments to look at the energy of the liberated photoelectron in these experiments.
|
https://en.wikipedia.org/wiki/Zero-crossing%20rate
|
The zero-crossing rate (ZCR) is the rate at which a signal changes from positive to zero to negative or from negative to zero to positive. Its value has been widely used in both speech recognition and music information retrieval, being a key feature to classify percussive sounds.
ZCR is defined formally as
where is a signal of length and is an indicator function.
In some cases only the "positive-going" or "negative-going" crossings are counted, rather than all the crossings, since between a pair of adjacent positive zero-crossings there must be a single negative zero-crossing.
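A minimal NumPy sketch (an illustration, assuming the usual indicator-of-sign-change form of the definition, whose formula did not survive extraction):

    import numpy as np

    def zero_crossing_rate(signal):
        """Fraction of adjacent sample pairs where the signal changes sign."""
        s = np.asarray(signal, dtype=float)
        return np.mean(s[:-1] * s[1:] < 0)

    def positive_going_rate(signal):
        """Count only the negative-to-nonnegative crossings, as noted above."""
        s = np.asarray(signal, dtype=float)
        return np.mean((s[:-1] < 0) & (s[1:] >= 0))

    print(zero_crossing_rate(np.sin(np.linspace(0, 8 * np.pi, 1000))))  # about 0.007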
For monophonic tonal signals, the zero-crossing rate can be used as a primitive pitch detection algorithm. Zero-crossing rates are also used for voice activity detection (VAD), which determines whether human speech is present in an audio segment or not.
See also
Zero crossing
Digital signal processing
|
https://en.wikipedia.org/wiki/Q-difference%20polynomial
|
In combinatorial mathematics, the q-difference polynomials or q-harmonic polynomials are a polynomial sequence defined in terms of the q-derivative. They are a generalized type of Brenke polynomial, and generalize the Appell polynomials. See also Sheffer sequence.
Definition
The q-difference polynomials satisfy the relation
where the derivative symbol on the left is the q-derivative. In the limit of , this becomes the definition of the Appell polynomials:
Generating function
The generalized generating function for these polynomials is of the type of generating function for Brenke polynomials, namely
where is the q-exponential:
Here, is the q-factorial and
is the q-Pochhammer symbol. The function is arbitrary but assumed to have an expansion
Any such gives a sequence of q-difference polynomials.
|
https://en.wikipedia.org/wiki/Dominating%20decision%20rule
|
In decision theory, a decision rule is said to dominate another if the performance of the former is sometimes better, and never worse, than that of the latter.
Formally, let and be two decision rules, and let be the risk of rule for parameter . The decision rule is said to dominate the rule if for all , and the inequality is strict for some .
This defines a partial order on decision rules; the maximal elements with respect to this order are called admissible decision rules.
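A small numerical illustration (an invented example, not from the article): for estimating the mean of a unit-variance normal from n = 10 observations under squared-error loss, the sample mean has risk 1/n at every parameter value, while the rule returning only the first observation has risk 1, so the former dominates the latter.

    import numpy as np

    rng = np.random.default_rng(0)

    def risk(rule, theta, n=10, reps=200_000):
        """Monte Carlo estimate of the squared-error risk R(theta, rule)."""
        samples = rng.normal(theta, 1.0, size=(reps, n))
        return np.mean((rule(samples) - theta) ** 2)

    sample_mean = lambda x: x.mean(axis=1)   # rule 1
    first_obs = lambda x: x[:, 0]            # rule 2

    for theta in (-2.0, 0.0, 3.0):
        print(theta, risk(sample_mean, theta), risk(first_obs, theta))
    # Rule 1's risk (about 0.1) is below rule 2's (about 1.0) at every theta,
    # so the sample mean dominates the first-observation rule.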
|
https://en.wikipedia.org/wiki/Weinstein%20conjecture
|
In mathematics, the Weinstein conjecture refers to a general existence problem for periodic orbits of Hamiltonian or Reeb vector flows. More specifically, the conjecture claims that on a compact contact manifold, its Reeb vector field should carry at least one periodic orbit.
By definition, a level set of contact type admits a contact form obtained by contracting the Hamiltonian vector field into the symplectic form. In this case, the Hamiltonian flow is a Reeb vector field on that level set. It is a fact that any contact manifold (M,α) can be embedded into a canonical symplectic manifold, called the symplectization of M, such that M is a contact type level set (of a canonically defined Hamiltonian) and the Reeb vector field is a Hamiltonian flow. That is, any contact manifold can be made to satisfy the requirements of the Weinstein conjecture. Since, as is trivial to show, any orbit of a Hamiltonian flow is contained in a level set, the Weinstein conjecture is a statement about contact manifolds.
It has been known that any contact form is isotopic to a form that admits a closed Reeb orbit; for example, for any contact manifold there is a compatible open book decomposition, whose binding is a closed Reeb orbit. This is not enough to prove the Weinstein conjecture, though, because the Weinstein conjecture states that every contact form admits a closed Reeb orbit, while an open book determines a closed Reeb orbit for a form which is only isotopic to the given form.
The conjecture was formulated in 1978 by Alan Weinstein. In several cases, the existence of a periodic orbit was known. For instance, Rabinowitz showed that on star-shaped level sets of a Hamiltonian function on a symplectic manifold, there were always periodic orbits (Weinstein independently proved the special case of convex level sets). Weinstein observed that the hypotheses of several such existence theorems could be subsumed in the condition that the level set be of contact type. (Weinstein's ori
|
https://en.wikipedia.org/wiki/Q-exponential
|
In combinatorial mathematics, a q-exponential is a q-analog of the exponential function,
namely the eigenfunction of a q-derivative. There are many q-derivatives, for example, the classical q-derivative, the Askey-Wilson operator, etc. Therefore, unlike the classical exponentials, q-exponentials are not unique. For example, is the q-exponential corresponding to the classical q-derivative while are eigenfunctions of the Askey-Wilson operators.
Definition
The q-exponential is defined as
where is the q-factorial and
is the q-Pochhammer symbol. That this is the q-analog of the exponential follows from the property
where the derivative on the left is the q-derivative. The above is easily verified by considering the q-derivative of the monomial
Here, is the q-bracket.
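A small Python sketch (an illustration, assuming the standard convention e_q(z) = Σ zⁿ/[n]_q! with [n]_q = 1 + q + … + q^(n−1), since the inline formulas did not survive extraction), truncating the series:

    import math

    def q_bracket(n, q):
        """q-bracket [n]_q = 1 + q + ... + q**(n - 1); equals n at q = 1."""
        return sum(q ** k for k in range(n))

    def q_factorial(n, q):
        """q-factorial [n]_q! = [1]_q [2]_q ... [n]_q (empty product for n = 0)."""
        out = 1
        for k in range(1, n + 1):
            out *= q_bracket(k, q)
        return out

    def q_exp(z, q, terms=60):
        """Truncated series for e_q(z) = sum over n of z**n / [n]_q!."""
        return sum(z ** n / q_factorial(n, q) for n in range(terms))

    # As q -> 1 the q-exponential recovers the classical exponential:
    print(q_exp(1.0, 0.9999), math.exp(1.0))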
For other definitions of the q-exponential function, see , , and .
Properties
For real , the function is an entire function of . For , is regular in the disk .
Note the inverse, .
Addition formula
The analogue of does not hold for real numbers and . However, if these are operators satisfying the commutation relation , then holds true.
Relations
For , a closely related function is . It is a special case of the basic hypergeometric series,
Clearly,
Relation with Dilogarithm
has the following infinite product representation:
On the other hand, holds.
When ,
By taking the limit ,
where is the dilogarithm.
In physics
The q-exponential function is also known as the quantum dilogarithm.
|
https://en.wikipedia.org/wiki/Windows%20Mobility%20Center
|
Windows Mobility Center is a component of Microsoft Windows, introduced in Windows Vista, that centralizes information and settings most relevant to mobile computing.
History
A mobility center that displayed device settings pertinent to mobile devices was first shown during the Windows Hardware Engineering Conference of 2004. It was based on the Activity Center user interface design that originated with Microsoft's abandoned Windows "Neptune" project, and was slated for inclusion in Windows Vista, then known by its codename Longhorn.
Overview
The Windows Mobility Center user interface consists of square tiles that each contain information and settings related to a component, such as audio settings, battery life and power schemes, display brightness, and wireless network strength and status. The tiles that appear within the interface depend on the hardware of the system and device drivers.
Windows Mobility Center is located in the Windows Control Panel and can also be launched by pressing the Windows key and X in Windows Vista and 7. By default, WMC is inaccessible on desktop computers, but this limitation can be bypassed by modifying the Windows Registry.
Windows Mobility Center is extensible; original equipment manufacturers can customize the interface with additional tiles and company branding. Though not supported by Microsoft, it is possible for individual developers to create tiles for the interface as well.
See also
Features new to Windows Vista
|
https://en.wikipedia.org/wiki/Relativistic%20dynamics
|
For classical dynamics at relativistic speeds, see relativistic mechanics.
Relativistic dynamics refers to a combination of relativistic and quantum concepts to describe the relationships between the motion and properties of a relativistic system and the forces acting on the system. What distinguishes relativistic dynamics from other physical theories is the use of an invariant scalar evolution parameter to monitor the historical evolution of space-time events. In a scale-invariant theory, the strength of particle interactions does not depend on the energy of the particles involved.
Twentieth century experiments showed that the physical description of microscopic and submicroscopic objects moving at or near the speed of light raised questions about such fundamental concepts as space, time, mass, and energy. The theoretical description of the physical phenomena required the integration of concepts from relativity and quantum theory.
Vladimir Fock was the first to propose an evolution parameter theory for describing relativistic quantum phenomena, but the evolution parameter theory introduced by Ernst Stueckelberg is more closely aligned with recent work. Evolution parameter theories were used by Feynman, Schwinger and others to formulate quantum field theory in the late 1940s and early 1950s. Silvan S. Schweber wrote a historical exposition of Feynman's investigation of such a theory. A resurgence of interest in evolution parameter theories began in the 1970s with the work of Horwitz and Piron, and Fanchi and Collins.
Invariant Evolution Parameter Concept
Some researchers view the evolution parameter as a mathematical artifact while others view the parameter as a physically measurable quantity. To understand the role of an evolution parameter and the fundamental difference between the standard theory and evolution parameter theories, it is necessary to review the concept of time.
Time t played the role of a monotonically increasing evolution parameter in c
|
https://en.wikipedia.org/wiki/CDC%206000%20series
|
The CDC 6000 series is a discontinued family of mainframe computers manufactured by Control Data Corporation in the 1960s. It consisted of the CDC 6200, CDC 6300, CDC 6400, CDC 6500, CDC 6600 and CDC 6700 computers, which were all extremely rapid and efficient for their time. Each is a large, solid-state, general-purpose, digital computer that performs scientific and business data processing as well as multiprogramming, multiprocessing, Remote Job Entry, time-sharing, and data management tasks under the control of the operating system called SCOPE (Supervisory Control Of Program Execution). By 1970 there also was a time-sharing oriented operating system named KRONOS. They were part of the first generation of supercomputers. The 6600 was the flagship of Control Data's 6000 series.
Overview
The CDC 6000 series computers are composed of four main functional devices:
the central memory
one or two high-speed central processors
ten peripheral processors (Peripheral Processing Unit, or PPU) and
a display console.
The 6000 series has a distributed architecture.
The family's members differ primarily by the number and kind of central processor(s):
The CDC 6600 is a single CPU with 10 functional units that can operate in parallel, each working on an instruction at the same time.
The CDC 6400 is a single CPU with an identical instruction set, but with a single unified arithmetic function unit that can only do one instruction at a time.
The CDC 6500 is a dual-CPU system with two 6400 central processors.
The CDC 6700 is also a dual-CPU system, with a 6600 and a 6400 central processor.
Certain features and nomenclature had also been used in the earlier CDC 3000 series:
Arithmetic was ones complement.
The name COMPASS was used by CDC for the assembly languages on both families.
The name SCOPE was used for its operating system implementations on the 3000 and 6000 series.
The only currently (as of 2018) running CDC 6000 series machine, a 6500, has been restored by Liv
|
https://en.wikipedia.org/wiki/2000%20Today
|
2000 Today was an internationally broadcast television special to commemorate the beginning of the Year 2000. This program included New Year's Eve celebrations, musical performances, and other features from participating nations.
Most international broadcasts, such as Olympic Games coverage, originate from a limited area for worldwide distribution. 2000 Today was rare in that its live and taped programming originated from member countries and represented all continents, including Europe, Asia, Africa, South America, North America and Oceania.
Development
2000 Today was conceived as part of the Millennium celebrations, given the numerical significance of the change from 1999 to 2000. 2000 Today was commissioned by the BBC as one of the five main millennium projects that were broadcast across TV, radio and online services throughout 1999 and 2000.
Most nations that observe the Islamic calendar were not involved in 2000 Today. However, a few predominantly Muslim nations were represented among the programme's worldwide broadcasters such as Egypt (ERTU) and Indonesia (RCTI). Africa was minimally represented in 2000 Today. The only participating nations from that continent were Egypt and South Africa. Portugal-based RTP África distributed the programme to some African nations.
Antarctica was mentioned on the programme schedule, although it was unclear if 2000 Today coverage was recorded or live.
Production
The programme was produced and televised by an international consortium of 60 broadcasters, headed by the BBC in the United Kingdom and WGBH (now known as GBH) in Boston, United States. The editorial board also included representatives from ABC (Australia), CBC (Canada), CCTV (China), ETC (Egypt), RTL (Germany), SABC (South Africa), TF1 (France), TV Asahi (Japan), TV Globo (Brazil) and ABC (USA). The BBC provided the production hub for receiving and distributing the 78 international satellite feeds required for this broadcast. The idents for the programme were de
|
https://en.wikipedia.org/wiki/SS-50%20bus
|
The SS-50 bus was an early computer bus designed as a part of the SWTPC 6800 Computer System that used the Motorola 6800 CPU. The SS-50 motherboard would have around seven 50-pin connectors for CPU and memory boards plus eight 30-pin connectors for I/O boards. The I/O section was sometimes called the SS-30 bus.
Southwest Technical Products Corporation introduced this bus in November 1975 and soon other companies were selling add-in boards. Some of the early boards were floppy disk systems from Midwest Scientific Instruments, Smoke Signal Broadcasting, and Percom Data; an EPROM programmer from the Micro Works; video display boards from Gimix; memory boards from Seals. By 1978 there were a dozen SS-50 board suppliers and several compatible SS-50 computers.
In 1979 SWTPC modified the SS-50 bus to support the new Motorola MC6809 processor. These changes were compatible with most existing boards and this upgrade gave the SS-50 Bus a long life. SS-50 based computers were made until the late 1980s.
The SS-50C bus, the S/09 version of the SS-50 bus, extended the address bus by four lines, to 20 address lines, to allow up to a megabyte of memory in a system.
Boards for the SS-50 bus were typically 9 inches wide and 5.5 inches high. The board had Molex 0.156 inch connectors while the motherboard had the pins. This arrangement made for low cost printed circuit boards that did not need gold plated edge connectors. The tin plated Molex connectors were only rated for a few insertions and were sometimes a problem in hobbyist systems where the boards were being swapped often. Later systems would often come with gold plated Molex connectors.
The SS-30 I/O Bus had the address decoding on the motherboard. Each slot was allocated 4 addresses (the later MC6809 version upped this to 16 addresses). This made for very simple I/O boards; the Motorola peripheral chips connected directly to this bus. Cards designed using the SS-30 bus often had their external connectors mounted such that
|
https://en.wikipedia.org/wiki/Pichia%20kudriavzevii
|
Pichia kudriavzevii (formerly Candida krusei) is a budding yeast (a species of fungus) involved in chocolate production. P. kudriavzevii is an emerging fungal nosocomial pathogen primarily found in the immunocompromised and those with hematological malignancies. It has natural resistance to fluconazole, a standard antifungal agent. It is most often found in patients who have had prior fluconazole exposure, sparking debate and conflicting evidence as to whether fluconazole should be used prophylactically. Mortality due to P. kudriavzevii fungemia is much higher than that of the more common C. albicans. Other Candida species that also fit this profile are C. parapsilosis, C. glabrata, C. tropicalis, C. guilliermondii and C. rugosa.
P. kudriavzevii can be successfully treated with voriconazole, amphotericin B, and echinocandins (micafungin, caspofungin, and anidulafungin).
Role in chocolate production
Cacao beans have to be fermented to remove the bitter taste and break them down. This takes place with two fungi: P. kudriavzevii and Geotrichum. Most of the time, the two fungi are already present on the seed pods and seeds of the cacao plant, but specific strains are used in modern chocolate making. Each chocolate company uses its own strains, which have been selected to provide optimum flavor and aroma to the chocolate.
The yeasts produce enzymes to break down the pulp on the outside of the beans and generate acetic acid, killing the cacao embryo inside the seed, developing a chocolatey aroma and eliminating the bitterness in the beans.
Growth and metabolism
P. kudriavzevii grows at a maximum temperature of 43–45 °C. Candida species are a major differential diagnosis and these generally require biotin for growth and some have additional vitamin requirements, but P. kudriavzevii can grow in vitamin-free media. Also, P. kudriavzevii grows on Sabouraud's dextrose agar as spreading colonies with a matte or a rough whitish-yellow surface, in contrast to the convex colonies
|
https://en.wikipedia.org/wiki/Modularity%20%28biology%29
|
Modularity refers to the ability of a system to organize discrete, individual units in a way that can increase the efficiency of network activity overall and, in a biological sense, facilitate selective forces upon the network. Modularity is observed in all model systems, and can be studied at nearly every scale of biological organization, from molecular interactions all the way up to the whole organism.
Evolution of modularity
The exact evolutionary origins of biological modularity have been debated since the 1990s. In the mid 1990s, Günter Wagner argued that modularity could have arisen and been maintained through the interaction of four evolutionary modes of action:
[1] Selection for the rate of adaptation: If different complexes evolve at different rates, then those evolving more quickly reach fixation in a population faster than other complexes. Thus, common evolutionary rates could be forcing the genes for certain proteins to evolve together while preventing other genes from being co-opted unless there is a shift in evolutionary rate.
[2] Constructional selection: When a gene exists in many duplicated copies, it may be maintained because of the many connections it has (also termed pleiotropy). There is evidence that this is so following whole genome duplication, or duplication at a single locus. However, the direct relationship that duplication processes have with modularity has yet to be directly examined.
[3] Stabilizing selection: While seeming antithetical to forming novel modules, Wagner maintains that it is important to consider the effects of stabilizing selection as it may be "an important counter force against the evolution of modularity". Stabilizing selection, if ubiquitously spread across the network, could then be a "wall" that makes the formation of novel interactions more difficult and maintains previously established interactions. Against such strong positive selection, other evolutionary forces acting on the network must exist, with gaps of relaxed
|
https://en.wikipedia.org/wiki/Computer%20network
|
A computer network is a set of computers sharing resources located on or provided by network nodes. Computers use common communication protocols over digital interconnections to communicate with each other. These interconnections are made up of telecommunication network technologies based on physically wired, optical, and wireless radio-frequency methods that may be arranged in a variety of network topologies.
The nodes of a computer network can include personal computers, servers, networking hardware, or other specialized or general-purpose hosts. They are identified by network addresses and may have hostnames. Hostnames serve as memorable labels for the nodes and are rarely changed after initial assignment. Network addresses serve for locating and identifying the nodes by communication protocols such as the Internet Protocol.
Computer networks may be classified by many criteria, including the transmission medium used to carry signals, bandwidth, communications protocols to organize network traffic, the network size, the topology, traffic control mechanisms, and organizational intent.
Computer networks support many applications and services, such as access to the World Wide Web, digital video and audio, shared use of application and storage servers, printers and fax machines, and use of email and instant messaging applications.
History
Computer networking may be considered a branch of computer science, computer engineering, and telecommunications, since it relies on the theoretical and practical application of the related disciplines. Computer networking was influenced by a wide array of technology developments and historical milestones.
In the late 1950s, a network of computers was built for the U.S. military Semi-Automatic Ground Environment (SAGE) radar system using the Bell 101 modem. It was the first commercial modem for computers, released by AT&T Corporation in 1958. The modem allowed digital data to be transmitted over regular unconditioned telephone
|
https://en.wikipedia.org/wiki/Size%E2%80%93weight%20illusion
|
The size–weight illusion, also known as the Charpentier illusion, is named after the French physician Augustin Charpentier because he was the first to demonstrate the illusion experimentally.
It is also called De Moor's illusion, named after Belgian physician Jean Demoor (1867–1941).
Description
The illusion occurs when a person underestimates the weight of a larger object (e.g. a box) when compared to a smaller object of the same mass. The illusion also occurs when the objects are not lifted against gravity but accelerated horizontally, so it should be called a size–mass illusion. Similar illusions occur with differences in material and colour: metal containers feel lighter than wooden containers of the same size and mass, and darker objects feel heavier than brighter objects of the same size and mass. These illusions have all been described as contrast with the expected weight, although the size–weight illusion occurs independently of visual estimates of the volume of material, and it does not depend on expectations: it also occurs if visual size information is provided only while the object is already being lifted. The expected weight or density can be measured by matching visible and hidden weights, lifted in the same manner. This gives an expected density of about 1.7 for metal canisters and 0.14 for polystyrene blocks. Density expectations may assist in selecting suitable objects to throw.
Explanation
An early explanation of these illusions was that people judge the weight of an object from its appearance and then lift it with a pre-determined force. They expect a larger object to be heavier and therefore lift it with greater force: the larger object is then lifted more easily than the smaller one, causing it to be perceived as lighter. This hypothesis was disproved by an experiment in which two objects of the same mass, same cross section, but different height were placed on observers' supported hands, and produced a passive size–weight illusion. Recent studies ha
|
https://en.wikipedia.org/wiki/Cyrillic%20Projector
|
The Cyrillic Projector is a sculpture created by American artist Jim Sanborn in the early 1990s, and purchased by the University of North Carolina at Charlotte in 1997. It is currently installed between the campus's Friday and Fretwell Buildings.
An encrypted trilogy
The encrypted sculpture Cyrillic Projector is part of an encrypted family of three intricate puzzle-sculptures by Sanborn, the other two named Kryptos and Antipodes. The Kryptos sculpture (located at CIA headquarters in Langley, Virginia) has text which is duplicated on Antipodes. Antipodes has two sides — one with the Latin alphabet and one with Cyrillic. The Latin side is similar to Kryptos. The Cyrillic side is similar to the Cyrillic Projector.
Solution
The encrypted text of the Cyrillic Projector was first reportedly solved by Frank Corr in early July 2003, followed by an equivalent decryption by Mike Bales in September of the same year. Both endeavors gave results in the Russian language. The first English translation of the text was led by Elonka Dunin.
The sculpture includes two messages. The first is a Russian text that explains the use of psychological control to develop and maintain potential sources of information. The second is a partial quote about the Soviet dissident and Nobel Peace Prize-winning scientist Andrei Sakharov. The text is from a classified KGB memo, detailing concerns that his report at the 1982 Pugwash conference was going to be used by the U.S. for anti-Soviet propaganda purposes.
Notes
|
https://en.wikipedia.org/wiki/Rydberg%E2%80%93Ritz%20combination%20principle
|
The Rydberg–Ritz combination principle is an empirical rule proposed by Walther Ritz in 1908 to describe the relationship of the spectral lines for all atoms, as a generalization of an earlier rule by Johannes Rydberg for the hydrogen atom and the alkali metals. The principle states that the spectral lines of any element include frequencies that are either the sum or the difference of the frequencies of two other lines. Lines of the spectra of elements could be predicted from existing lines. Since the frequency of light is proportional to the wavenumber or reciprocal wavelength, the principle can also be expressed in terms of wavenumbers which are the sum or difference of wavenumbers of two other lines.
Another related version is that the wavenumber or reciprocal wavelength of each spectral line can be written as the difference of two terms. The simplest example is the hydrogen atom, described by the Rydberg formula
$$\frac{1}{\lambda} = R \left( \frac{1}{n_1^2} - \frac{1}{n_2^2} \right)$$
where $\lambda$ is the wavelength, $R$ is the Rydberg constant, and $n_1$ and $n_2$ are positive integers such that $n_1 < n_2$. This is the difference of two terms of the form $R/n^2$.
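Numerically the formula is straightforward to evaluate. Below is a brief Python sketch (an illustration added here, not part of the article); the constant is the standard Rydberg constant for hydrogen, and the final check verifies the combination principle for one pair of lines:

```python
# Rydberg formula for hydrogen: 1/lambda = R * (1/n1^2 - 1/n2^2)
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in reciprocal metres

def wavenumber(n1: int, n2: int) -> float:
    """Wavenumber (1/lambda) of the line emitted in the transition n2 -> n1."""
    return R_H * (1.0 / n1**2 - 1.0 / n2**2)

# Balmer series (n1 = 2), wavelengths in nanometres: ~656, 486, 434, 410 nm
for n2 in range(3, 7):
    print(f"n = {n2} -> 2: {1e9 / wavenumber(2, n2):.1f} nm")

# Combination principle: the 3 -> 1 wavenumber is the sum of 2 -> 1 and 3 -> 2
assert abs(wavenumber(1, 3) - (wavenumber(1, 2) + wavenumber(2, 3))) < 1.0
```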
The exact Ritz Combination formula was mathematically derived from this as
$$\tilde{\nu} = \tilde{\nu}_\infty - \frac{N_0}{(m + \mu)^2}$$
where:
$\tilde{\nu}$ is the wavenumber,
$\tilde{\nu}_\infty$ is the limit of the series,
$N_0$ is a universal constant (now known as the Rydberg constant $R$),
$m$ is the numeral (now known as the principal quantum number $n$),
and $\tilde{\nu}_\infty$ and $\mu$ are constants for a given series.
Relation to quantum theory
The combination principle is explained using quantum theory. Light consists of photons whose energy $E$ is proportional to the frequency and wavenumber of the light: $E = h\nu = hc/\lambda$ (where $h$ is the Planck constant, $c$ is the speed of light, and $\lambda$ is the wavelength). A combination of frequencies or wavenumbers is then equivalent to a combination of energies.
According to the quantum theory of the hydrogen atom proposed by Niels Bohr in 1913, an atom can have only certain energy levels. Absorption or emission of a particle of light or photon corresponds to a transition between two possible energy levels, and the photon energy equals the difference between their two ener
|
https://en.wikipedia.org/wiki/Schottky%20problem
|
In mathematics, the Schottky problem, named after Friedrich Schottky, is a classical question of algebraic geometry, asking for a characterisation of Jacobian varieties amongst abelian varieties.
Geometric formulation
More precisely, one should consider algebraic curves $C$ of a given genus $g$, and their Jacobians $\operatorname{Jac}(C)$. There is a moduli space $\mathcal{M}_g$ of such curves, and a moduli space $\mathcal{A}_g$ of abelian varieties of dimension $g$ which are principally polarized. There is a morphism
$$J \colon \mathcal{M}_g \to \mathcal{A}_g$$
which on points (geometric points, to be more accurate) takes the isomorphism class $[C]$ to the class of the Jacobian $[\operatorname{Jac}(C)]$. The content of Torelli's theorem is that $J$ is injective (again, on points). The Schottky problem asks for a description of the image of $J$, denoted $\mathcal{J}_g = J(\mathcal{M}_g)$.
The dimension of $\mathcal{M}_g$ is $3g - 3$ for $g \geq 2$, while the dimension of $\mathcal{A}_g$ is $g(g + 1)/2$. This means that the dimensions are the same (0, 1, 3, 6) for $g$ = 0, 1, 2, 3. Therefore $g = 4$ is the first case where the dimensions change, and this was studied by F. Schottky in the 1880s. Schottky applied the theta constants, which are modular forms for the Siegel upper half-space, to define the Schottky locus in $\mathcal{A}_4$. A more precise form of the question is to determine whether the image of $J$ essentially coincides with the Schottky locus (in other words, whether it is Zariski dense there).
Dimension 1 case
All elliptic curves are the Jacobians of themselves, hence the moduli stack of elliptic curves is a model for $\mathcal{A}_1$.
Dimensions 2 and 3
In the case of Abelian surfaces, there are two types of Abelian varieties: the Jacobian of a genus 2 curve, or the product of Jacobians of elliptic curves. This means the moduli spaces $\mathcal{M}_2$ and $\mathcal{M}_1 \times \mathcal{M}_1$ embed into $\mathcal{A}_2$. There is a similar description for dimension 3 since an Abelian variety can be the product of Jacobians.
Period lattice formulation
If one describes the moduli space in intuitive terms, as the parameters on which an abelian variety depends, then the Schottky problem asks simply what condition on the parameters implies that the abelian variety comes from a curve's Jacobian. The classical cas
|
https://en.wikipedia.org/wiki/Electroelution
|
Electroelution is a method used to extract a nucleic acid or a protein sample from an electrophoresis gel by applying a negative current in the plane of the smallest dimension of the gel, drawing the macromolecule to the surface for extraction and subsequent analysis. Electroblotting is based upon the same principle.
DNA extraction
Using this method, DNA fragments can be recovered from a particular region of agarose or polyacrylamide gels. The gel piece containing the fragment is excised (cut out from the whole gel) and placed in a dialysis bag with buffer. Electrophoresis causes the DNA to migrate out of the gel into the dialysis bag buffer. The DNA fragments are recovered from this buffer and purified, using phenol–chloroform extraction followed by ethanol precipitation. This method is simple, rapid and yields high (75%) recovery of DNA fragments from gel pieces.
|
https://en.wikipedia.org/wiki/Olfactory%20tubercle
|
The olfactory tubercle (OT), also known as the tuberculum olfactorium, is a multi-sensory processing center that is contained within the olfactory cortex and ventral striatum and plays a role in reward cognition. The OT has also been shown to play a role in locomotor and attentional behaviors, particularly in relation to social and sensory responsiveness, and it may be necessary for behavioral flexibility. The OT is interconnected with numerous brain regions, especially the sensory, arousal, and reward centers, thus making it a potentially critical interface between processing of sensory information and the subsequent behavioral responses.
The OT is a composite structure that receives direct input from the olfactory bulb and contains the morphological and histochemical characteristics of the ventral pallidum and the striatum of the forebrain. The dopaminergic neurons of the mesolimbic pathway project onto the GABAergic medium spiny neurons of the nucleus accumbens and olfactory tubercle (the D3 receptor is abundant in these two areas). In addition, the OT contains tightly packed cell clusters known as the islands of Calleja, which consist of granule cells. Even though it is part of the olfactory cortex and receives direct input from the olfactory bulb, it has not been shown to play a role in processing of odors.
Structure
The olfactory tubercle differs in location and relative size between humans, other primates, rodents, birds, and other animals. In most cases, the olfactory tubercle is identified as a round bulge along the basal forebrain anterior to the optic chiasm and posterior to the olfactory peduncle. In humans and other primates, visual identification of the olfactory tubercle is not easy because the basal forebrain bulge is small in these animals. With regard to functional anatomy, the olfactory tubercle can be considered to be a part of three larger networks. First, it is considered to be part of the basal forebrain, the nucleus accumbens, and the amygdalo
|
https://en.wikipedia.org/wiki/Pyrophyte
|
Pyrophytes are plants which have adapted to tolerate fire.
Fire acts favourably for some species. "Passive pyrophytes" resist the effects of fire, particularly when it passes over quickly, and hence can out-compete less resistant plants, which are damaged. "Active pyrophytes" have a similar competing advantage to passive pyrophytes, but they also contain volatile oils and hence encourage the incidence of fires which are beneficial to them. "Pyrophile" plants are plants which require fire in order to complete their cycle of reproduction.
Passive pyrophytes
These resist fire with adaptations including thick bark, tissue with high moisture content, or underground storage structures. Examples include:
Longleaf pine (Pinus palustris)
Giant sequoia (Sequoiadendron giganteum)
Coast redwood (Sequoia sempervirens)
Cork oak (Quercus suber)
Niaouli (Melaleuca quinquenervia), which is spreading in areas where bush fires are used as a mode of clearing (e.g. New Caledonia).
Venus fly trap (Dionaea muscipula) – this grows low to the ground in acid marshes in North Carolina, and resists fires passing over due to being close to the moist soil; fire suppression threatens the species in its natural environment.
White asphodel (Asphodelus albus)
For some species of pine, such as Aleppo pine (Pinus halepensis), European black pine (Pinus nigra) and lodgepole pine (Pinus contorta), the effects of fire can be antagonistic: if moderate, it helps pine cone bursting, seed dispersion and the cleaning of the underwoods; if intense, it destroys these resinous trees.
Active pyrophytes
Some trees and shrubs such as the Eucalyptus of Australia actually encourage the spread of fires by producing inflammable oils, and are dependent on their resistance to the fire which keeps other species of tree from invading their habitat.
Pyrophile plants
Other plants which need fire for their reproduction are called pyrophilic. Longleaf pine (Pinus palustris) is a pyrophile, depending on fire to clear the
|
https://en.wikipedia.org/wiki/Dichroic%20glass
|
Dichroic glass is glass which can display multiple different colors depending on lighting conditions.
One dichroic material is a modern composite non-translucent glass that is produced by stacking layers of metal oxides, which give the glass shifting colors depending on the angle of view, causing an array of colors to be displayed as an example of thin-film optics. The resulting glass is used for decorative purposes such as stained glass, jewelry and other forms of glass art. Glass sold under the commercial title of "dichroic" can also display three or more colors (trichroic or pleochroic) and even iridescence in some cases. The term dichroic is used more precisely when labelling interference filters for laboratory use.
Another dichroic glass material first appeared in a few pieces of Roman glass from the 4th century and consists of a translucent glass containing colloidal gold and silver particles dispersed in the glass matrix in certain proportions so that the glass has the property of displaying a particular transmitted color and a completely different reflected color, as certain wavelengths of light either pass through or are reflected. In ancient dichroic glass, as seen in the most famous piece, the 4th-century Lycurgus cup in the British Museum, the glass has a green color when lit from in front in reflected light, and another, purple-ish red, when lit from inside or behind the cup so that the light passes through the glass. This is not due to alternating thin metal films but colloidal silver and gold particles dispersed throughout the glass, in an effect similar to that seen in gold ruby glass, though that has only one color whatever the lighting.
Invention
Modern dichroic glass is available as a result of materials research carried out by NASA and its contractors, who developed it for use in dichroic filters. However, color changing glass dates back to at least the 4th century AD, though only very few pieces, mostly fragments, survive. It was also made in the Renaissanc
|
https://en.wikipedia.org/wiki/Tree%20shelter
|
A tree shelter, tree guard or tree tube (sometimes also Tuley tube) is a structure that protects planted tree saplings from browsing animals and other dangers as the trees grow.
The purpose of tree shelters is to protect young trees from browsing by herbivores by forming a physical barrier along with providing a barrier to chemical spray applications. Additionally, tree tubes accelerate growth by providing a mini-greenhouse environment that reduces moisture stress, channels growth into the main stem and roots and allows efficient control of weeds that can rob young seedlings of soil moisture and sunlight. Young trees protected in this way have a survival rate of around 85%, but without a tree guard only about half of all planted trees grow to adulthood.
Wrought iron, wire and wooden tree guards were used in Victorian England from the 1820s, though not universally because of their cost. Plastic tube tree shelters were invented in Scotland in 1979 by Graham Tuley. They are particularly popular in the UK in landscape-scale planting schemes and their use has been established in the United States since 2000. About 1 million shelters were in use in the United Kingdom in 1983–1984, and 10 million were produced in 1991.
Many variations of tree shelters exist. There is considerable debate among tree shelter manufacturers as to the ideal colour, size, shape and texture for optimal plant growth. One style used in northern climates of North America has a height of 5 feet to offer the best protection from deer browse, with vent holes in the upper portion of the tube to allow for hardening off of hardwood trees going into the winter months and no vent holes in the lower portion to shield seedlings from herbicide spray and rodent damage.
The use of plastic tube tree shelters leads to the contamination of the environment with microplastics as the tubes, which are normally not collected, degrade over time. Alternatives include wooden or metal fencing to keep animals out.
|
https://en.wikipedia.org/wiki/GHP%20formalism
|
The GHP formalism (or Geroch–Held–Penrose formalism) is a technique used in the mathematics of general relativity that involves singling out a pair of null directions at each point of spacetime. It is a rewriting of the Newman–Penrose formalism which respects the covariance of Lorentz transformations preserving two null directions. This is desirable for Petrov Type D spacetimes, where the pair is made up of degenerate principal null directions, and spatial surfaces, where the null vectors are the natural null orthogonal vectors to the surface.
The New Covariance
The GHP formalism notices that, given a spin-frame $(o^A, \iota^A)$ with normalization $o_A \iota^A = 1$, the complex rescaling $o^A \to \lambda o^A$, $\iota^A \to \lambda^{-1} \iota^A$ does not change the normalization. The magnitude of this transformation is a boost, and the phase tells one how much to rotate. A quantity of weight $(p, q)$ is one that transforms like $\eta \to \lambda^p \bar{\lambda}^q \eta$. One then defines derivative operators which take tensors under these transformations to tensors. This simplifies many NP equations, and allows one to define scalars on 2-surfaces in a natural way.
See also
General relativity
NP formalism
|
https://en.wikipedia.org/wiki/Center%20for%20Veterinary%20Medicine
|
The Center for Veterinary Medicine (CVM) is a branch of the U.S. Food and Drug Administration (FDA) that regulates the manufacture and distribution of food, food additives, and drugs that will be given to animals. These include animals from which human foods are derived, as well as food additives and drugs for pets or companion animals. CVM is responsible for regulating drugs, devices, and food additives given to, or used on, over one hundred million companion animals, plus millions of poultry, cattle, swine, and minor animal species. Minor animal species include animals other than cattle, swine, chickens, turkeys, horses, dogs, and cats.
CVM monitors the safety of animal foods and medications. Much of the center's work focuses on animal medications used in food animals to ensure that significant drug residues are not present in the meat or other products from these animals.
CVM does not regulate vaccines for animals; these are handled by the United States Department of Agriculture.
History
In 1953, a Veterinary Medical Branch of the FDA was created within the Bureau of Medicine. A separate Bureau of Veterinary Medicine (BVM) was established in 1965. At this time, the BVM included a Division of Veterinary Medical Review, Division of Veterinary New Drugs, and a Division of Veterinary Research. In 1970, the Division of Compliance and Division of Nutritional Sciences were added. The Bureau underwent reorganization in 1976 and in 1984, it was renamed the Center for Veterinary Medicine.
Dr. Steven Solomon, DVM became the Director of the Center in 2017. He received his Doctor of Veterinary Medicine (DVM) degree from The Ohio State University and a Master's in Public Health from Johns Hopkins University. He succeeded Tracey Forfa, who had been acting director for a few months. The previous director was Dr. Bernadette Dunham; she served as Director from 2008 to 2016.
Mission and vision
The mission of the center is "protecting human and animal health" and the vision
|
https://en.wikipedia.org/wiki/Hermitian%20symmetric%20space
|
In mathematics, a Hermitian symmetric space is a Hermitian manifold which at every point has an inversion symmetry preserving the Hermitian structure. First studied by Élie Cartan, they form a natural generalization of the notion of Riemannian symmetric space from real manifolds to complex manifolds.
Every Hermitian symmetric space is a homogeneous space for its isometry group and has a unique decomposition as a product of irreducible spaces and a Euclidean space. The irreducible spaces arise in pairs as a non-compact space that, as Borel showed, can be embedded as an open subspace of its compact dual space. Harish-Chandra showed that each non-compact space can be realized as a bounded symmetric domain in a complex vector space. The simplest case involves the groups SU(2), SU(1,1) and their common complexification SL(2,C). In this case the non-compact space is the unit disk, a homogeneous space for SU(1,1). It is a bounded domain in the complex plane C. The one-point compactification of C, the Riemann sphere, is the dual space, a homogeneous space for SU(2) and SL(2,C).
Irreducible compact Hermitian symmetric spaces are exactly the homogeneous spaces of simple compact Lie groups by maximal closed connected subgroups which contain a maximal torus and have center isomorphic to the circle group. There is a complete classification of irreducible spaces, with four classical series, studied by Cartan, and two exceptional cases; the classification can be deduced from Borel–de Siebenthal theory, which classifies closed connected subgroups containing a maximal torus. Hermitian symmetric spaces appear in the theory of Jordan triple systems, several complex variables, complex geometry, automorphic forms and group representations, in particular permitting the construction of the holomorphic discrete series representations of semisimple Lie groups.
Hermitian symmetric spaces of compact type
Definition
Let H be a connected compact semisimple Lie group, σ an automorphism of H
|
https://en.wikipedia.org/wiki/Urban%20prairie
|
Urban prairie is a term used to describe vacant urban land that has reverted to green space. Previous structures occupying the urban lots have been demolished, leaving patchy areas of green space that are usually untended and unmanaged, forming an involuntary park. Sometimes, however, the prairie spaces are intentionally created to provide amenities such as green belts, community gardens and wildlife reserve habitats.
History
Urban prairies can result from several factors. The value of aging buildings may fall too low to provide financial incentives for their owners to maintain them. Vacant properties may have resulted from deurbanization or crime, or may have been seized by local government as a response to unpaid property taxes. Since vacant structures can pose health and safety threats (such as fire hazards), or be used as a location for criminal activity, cities often demolish them.
Sometimes areas are cleared of buildings as part of a revitalization plan with the intention of redeveloping the land. In flood-prone areas, government agencies may purchase developed lots and then demolish the structures to improve drainage during floods. Some neighborhoods near major industrial or environmental clean-up sites are acquired and leveled to create a buffer zone and minimize the risks associated with pollution or industrial accidents. Such areas may become nothing more than fields of overgrown vegetation, which then provide habitat for wildlife. Sometimes it is possible for residents of the city to fill up the unplanned empty space with urban parks or community gardens.
Urban prairie is sometimes planned by the government or non-profit groups for community gardens and conservation, to restore or reintroduce a wildlife habitat, help the environment, and educate people about the prairie. Detroit, Michigan is one particular city that has many urban prairies.
In the case of the city of Christchurch in New Zealand, earthquake damage from the 2011 earthquake caused the und
|
https://en.wikipedia.org/wiki/Krull%27s%20theorem
|
In mathematics, and more specifically in ring theory, Krull's theorem, named after Wolfgang Krull, asserts that a nonzero ring has at least one maximal ideal. The theorem was proved in 1929 by Krull, who used transfinite induction. The theorem admits a simple proof using Zorn's lemma, and in fact is equivalent to Zorn's lemma, which in turn is equivalent to the axiom of choice.
Variants
For noncommutative rings, the analogues for maximal left ideals and maximal right ideals also hold.
For pseudo-rings, the theorem holds for regular ideals.
A slightly stronger (but equivalent) result, which can be proved in a similar fashion, is as follows:
Let R be a ring, and let I be a proper ideal of R. Then there is a maximal ideal of R containing I.
This result implies the original theorem, by taking I to be the zero ideal (0). Conversely, applying the original theorem to R/I leads to this result.
To prove the stronger result directly, consider the set S of all proper ideals of R containing I. The set S is nonempty since I ∈ S. Furthermore, for any chain T of S, the union of the ideals in T is an ideal J, and a union of ideals not containing 1 does not contain 1, so J ∈ S. By Zorn's lemma, S has a maximal element M. This M is a maximal ideal containing I.
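The stronger statement has also been formalized; the following Lean 4 sketch assumes Mathlib, whose lemma name `Ideal.exists_le_maximal` (an assumption of this sketch) packages exactly the Zorn's-lemma argument above:

```lean
import Mathlib

-- Krull's theorem, stronger form: every proper ideal I of a commutative
-- ring R is contained in some maximal ideal M. The lemma name below is
-- assumed from Mathlib; its proof content is the Zorn's lemma argument.
example {R : Type*} [CommRing R] (I : Ideal R) (hI : I ≠ ⊤) :
    ∃ M : Ideal R, M.IsMaximal ∧ I ≤ M :=
  Ideal.exists_le_maximal I hI
```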
Krull's Hauptidealsatz
Another theorem commonly referred to as Krull's theorem:
Let $R$ be a Noetherian ring and $x$ an element of $R$ which is neither a zero divisor nor a unit. Then every minimal prime ideal containing $x$ has height 1.
Notes
|
https://en.wikipedia.org/wiki/Dot%20blot
|
A dot blot (or slot blot) is a technique in molecular biology used to detect proteins. It represents a simplification of the western blot method, with the exception that the proteins to be detected are not first separated by electrophoresis. Instead, the sample is applied directly on a membrane in a single spot, and the blotting procedure is performed.
The technique offers significant savings in time, as chromatography or gel electrophoresis and the complex blotting procedures for the gel are not required. However, it offers no information on the size of the target protein.
Uses
Performing a dot blot is similar in idea to performing a western blot, with the advantage of faster speed and lower cost.
Dot blots are also performed to screen the binding capabilities of an antibody.
Methods
A general dot blot protocol involves spotting 1–2 microliters of a sample onto a nitrocellulose or PVDF membrane and letting it air dry. Samples can be in the form of tissue culture supernatants, blood serum, cell extracts, or other preparations.
The membrane is incubated in blocking buffer to prevent non-specific binding of antibodies. It is then incubated with a primary antibody followed by a detection antibody or a primary antibody conjugated to a detection molecule (commonly HRP or alkaline phosphatase). After antibody binding, the membrane is incubated with a chemiluminescent substrate and imaged.
Apparatus
Dot blot is conventionally performed on a piece of nitrocellulose membrane or PVDF membrane. After the protein samples are spotted onto the membrane, the membrane is placed in a plastic container and sequentially incubated in blocking buffer, antibody solutions, or rinsing buffer on a shaker. Finally, for chemiluminescence imaging, the piece of membrane needs to be wrapped in a transparent plastic film filled with enzyme substrate.
Vacuum-assisted dot blot apparatus has been used to facilitate the rinsing and incubating process by using vacuum to extract the solution
|
https://en.wikipedia.org/wiki/Cyclic%20module
|
In mathematics, more specifically in ring theory, a cyclic module or monogenous module is a module over a ring that is generated by one element. The concept is a generalization of the notion of a cyclic group, that is, an Abelian group (i.e. Z-module) that is generated by one element.
Definition
A left R-module M is called cyclic if M can be generated by a single element, i.e. $M = Rx = \{ rx \mid r \in R \}$ for some $x$ in $M$. Similarly, a right R-module N is cyclic if $N = yR$ for some $y \in N$.
Examples
2Z as a Z-module is a cyclic module.
In fact, every cyclic group is a cyclic Z-module.
Every simple R-module M is a cyclic module since the submodule generated by any non-zero element x of M is necessarily the whole module M. In general, a module is simple if and only if it is nonzero and is generated by each of its nonzero elements.
If the ring R is considered as a left module over itself, then its cyclic submodules are exactly its left principal ideals as a ring. The same holds for R as a right R-module, mutatis mutandis.
If R is F[x], the ring of polynomials over a field F, and V is an R-module which is also a finite-dimensional vector space over F, then the Jordan blocks of x acting on V are cyclic submodules. (The Jordan blocks are all isomorphic to $F[x]/(x - \lambda)^n$; there may also be other cyclic submodules with different annihilators; see below.)
Properties
Given a cyclic R-module M that is generated by x, there exists a canonical isomorphism between M and $R / \operatorname{Ann}_R(x)$, where $\operatorname{Ann}_R(x)$ denotes the annihilator of x in R; the isomorphism is induced by the surjective map $R \to M$, $r \mapsto rx$, whose kernel is exactly $\operatorname{Ann}_R(x)$.
Every module is a sum of cyclic submodules.
See also
Finitely generated module
|
https://en.wikipedia.org/wiki/Bafilomycin
|
The bafilomycins are a family of macrolide antibiotics produced from a variety of Streptomycetes. Their chemical structure is defined by a 16-membered lactone ring scaffold. Bafilomycins exhibit a wide range of biological activity, including anti-tumor, anti-parasitic, immunosuppressant and anti-fungal activity. The most used bafilomycin is bafilomycin A1, a potent inhibitor of cellular autophagy. Bafilomycins have also been found to act as ionophores, transporting potassium ions (K+) across biological membranes and leading to mitochondrial damage and cell death.
Bafilomycin A1 specifically targets the vacuolar-type H+ -ATPase (V-ATPase) enzyme, a membrane-spanning proton pump that acidifies either the extracellular environment or intracellular organelles such as the lysosome of animal cells or the vacuole of plants and fungi. At higher micromolar concentrations, bafilomycin A1 also acts on P-type ATPases, which have a phosphorylated transitional state.
Bafilomycin A1 serves as an important tool compound in many in vitro research applications; however, its clinical use is limited by a substantial toxicity profile.
Discovery and history
Bafilomycin A1, B1 and C1 were first isolated from Streptomyces griseus in 1983. During a screen seeking to identify microbial secondary metabolites whose activity mimicked that of two cardiac glycosides, bafilomycin C1 was identified as an inhibitor of P-ATPase with a Ki of 11 μM. Bafilomycin C1 was found to have activity against Caenorhabditis elegans, ticks, and tapeworms, in addition to stimulating the release of γ-aminobutyric acid (GABA) from rat synaptosomes. Independently, bafilomycin A1 and other derivatives were isolated from S. griseus and shown to have antibiotic activity against some yeast, Gram-positive bacteria and fungi. Bafilomycin A1 was also shown to have an anti-proliferative effect on concanavalin-A-stimulated T cells. However, its high toxicity has prevented use in clinical trials.
Two years later, bafilomycins D a
|
https://en.wikipedia.org/wiki/Fura-2-acetoxymethyl%20ester
|
Fura-2-acetoxymethyl ester, often abbreviated Fura-2AM, is a membrane-permeant derivative of the ratiometric calcium indicator Fura-2 used in biochemistry to measure cellular calcium concentrations by fluorescence. When added to cells, Fura-2AM crosses cell membranes and once inside the cell, the acetoxymethyl groups are removed by cellular esterases. Removal of the acetoxymethyl esters regenerates "Fura-2", the pentacarboxylate calcium indicator. Measurement of Ca2+-induced fluorescence at both 340 nm and 380 nm allows for calculation of calcium concentrations based on 340/380 ratios. The use of the ratio automatically cancels out certain variables such as local differences in fura-2 concentration or cell thickness that would otherwise lead to artifacts when attempting to image calcium concentrations in cells.
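The conversion from a 340/380 ratio R to a concentration is conventionally done with the Grynkiewicz et al. (1985) equation. The Python sketch below is illustrative only: the calibration values are hypothetical placeholders, and the dissociation constant shown is a commonly cited in vitro figure for Fura-2:

```python
def ca_concentration(ratio: float, r_min: float, r_max: float,
                     kd_nM: float, sf2_sb2: float) -> float:
    """Grynkiewicz equation: [Ca2+] = Kd * (R - Rmin)/(Rmax - R) * (Sf2/Sb2).

    R is the measured 340/380 nm ratio; Rmin and Rmax are the ratios at
    zero and saturating Ca2+; Sf2/Sb2 is the 380 nm fluorescence of free
    versus Ca2+-bound dye. Returns nM if kd_nM is given in nM.
    """
    return kd_nM * (ratio - r_min) / (r_max - ratio) * sf2_sb2

# Hypothetical calibration values, for illustration only
print(ca_concentration(ratio=1.2, r_min=0.5, r_max=8.0, kd_nM=224.0, sf2_sb2=5.0))
```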
|
https://en.wikipedia.org/wiki/Dynamin
|
Dynamin is a GTPase responsible for endocytosis in the eukaryotic cell. Dynamin is part of the "dynamin superfamily", which includes classical dynamins, dynamin-like proteins, Mx proteins, OPA1, mitofusins, and GBPs. Members of the dynamin family are principally involved in the scission of newly formed vesicles from the membrane of one cellular compartment and their targeting to, and fusion with, another compartment, both at the cell surface (particularly caveolae internalization) as well as at the Golgi apparatus. Dynamin family members also play a role in many processes including division of organelles, cytokinesis and microbial pathogen resistance.
Structure
Dynamin itself is a 96 kDa enzyme, and was first isolated when researchers were attempting to isolate new microtubule-based motors from the bovine brain. Dynamin has been extensively studied in the context of clathrin-coated vesicle budding from the cell membrane. Beginning from the N-terminus, Dynamin consists of a GTPase domain connected to a helical stalk domain via a flexible neck region containing a Bundle Signalling Element and GTPase Effector Domain. At the opposite end of the stalk domain is a loop that links to a membrane-binding Pleckstrin homology domain. The protein strand then loops back towards the GTPase domain and terminates with a Proline Rich Domain that binds to the Src Homology domains of many proteins.
Function
During clathrin-mediated endocytosis, the cell membrane invaginates to form a budding vesicle. Dynamin binds to and assembles around the neck of the endocytic vesicle, forming a helical polymer arranged such that the GTPase domains dimerize in an asymmetric manner across helical rungs. The polymer constricts the underlying membrane upon GTP binding and hydrolysis via conformational changes emanating from the flexible neck region that alters the overall helical symmetry. Constriction around the vesicle neck leads to the formation of a hemi-fission membrane state that ultima
|
https://en.wikipedia.org/wiki/Charles%20H.%20Bennett%20%28illustrator%29
|
Charles Henry Bennett (26 July 1828 – 2 April 1867) was a British Victorian illustrator who pioneered techniques in comic illustration.
Beginnings
Charles Henry Bennett was born at 3 Tavistock Row in Covent Garden on 26 July 1828 and was baptised a month later in St. Paul's Church, Covent Garden. He was the eldest of the three children of Charles and Harriet Bennett, originally from Teston in Kent. His father was a boot-maker. Little is known of Charles' childhood, although some speculate that he received some education, possibly at St. Clement Dane's School.
At the age of twenty, Charles married Elizabeth Toon, the daughter of a Shoreditch warehouseman, on Christmas Day 1848, also in St. Paul's Church. Their first son, whom they named Charles after his father, was born a year later, and by 1851 the family was settled in Lyon's Inn in the Strand. At the time of their wedding, Charles was attempting to support his family by selling newspapers; however, in the 1851 census three years later he described himself as an artist and portrait painter.
By 1861, Charles and Elizabeth had six children and lived in Wimbledon. Charles, the eldest, was by this time at school, while the youngest, George, was just seven months old.
Early career
As a child, Charles developed a passion for art, drawing for his inspiration the motley crowds he saw daily in the market. His father did not support Charles' artwork and considered it a waste of time.
As an adult, Charles became part of the London bohemian scene, and was a founder member of the Savage Club, each member of which was "a working man in literature or art". As well as socializing over convivial dinners, members of the club published a magazine called The Train. Charles Bennett contributed many illustrations, signed 'Bennett' rather than with his CHB monogram, but the magazine was short-lived.
The mid-nineteenth century saw the launch of many cheap, mostly short-lived periodicals in London, and Bennett contributed small illustrat
|
https://en.wikipedia.org/wiki/Romance%20of%20the%20Three%20Kingdoms%20II
|
Romance of the Three Kingdoms II is the second in the Romance of the Three Kingdoms series of turn-based strategy games produced by Koei and based on the historical novel Romance of the Three Kingdoms.
Gameplay
Upon starting the game, players choose from one of six scenarios that determine the initial layout of power in ancient China. The scenarios loosely depict allegiances and territories controlled by the warlords as according to the novel, although gameplay does not follow events in the novel after the game begins.
The six scenarios are listed as follows:
Dong Zhuo seizes control of Luoyang (AD 189)
Warlords struggle for power (AD 194)
Liu Bei seeks shelter in Jing Province (AD 201)
Cao Cao covets supremacy over China (AD 208)
The empire divides into three (AD 215)
Rise of Wei, Wu and Shu (AD 220)
After choosing the scenario, players determine which warlord(s) they will control. Custom characters may be inserted into territories unoccupied by other forces, as well. A total of 41 different provinces exist, as well as over 200 unique characters. Each character has three statistics, which range from 10 to 100 (the higher the better). A warlord's Intelligence, War Ability and Charm influence how successful he or she will be when performing certain tasks, such as dueling or increasing land value in a province.
The player wins the game by conquering all territories in China. This is accomplished by being in control of every province on the map.
New features
A reputation system that affects the rate of officers' loyalties towards their lords
Added treasures and special items that can increase an officer's stats
Advisers can help their lords predict the chances of success in executing a plan. An adviser with an Intelligence stat of 100 will always accurately predict the result.
Intercepting messengers
Ability to create new lords on the map based on custom characters created by players
Reception
Computer Gaming World stated that Romance of the Three Kingdoms II "did a better job of simulating
|
https://en.wikipedia.org/wiki/Signage
|
Signage is the design or use of signs and symbols to communicate a message. Signage also refers to signs collectively, or signs considered as a group. The term is documented to have been popularized between 1975 and 1980.
Signs are any kind of visual graphics created to display information to a particular audience. This is typically manifested in the form of wayfinding information in places such as streets or on the inside and outside of buildings. Signs vary in form and size based on location and intent, from more expansive banners, billboards, and murals, to smaller street signs, street name signs, sandwich boards and lawn signs. Newer signs may also use digital or electronic displays.
The main purpose of signs is to communicate, to convey information designed to assist the receiver with decision-making based on the information provided. Alternatively, promotional signage may be designed to persuade receivers of the merits of a given product or service. Signage is distinct from labeling, which conveys information about a particular product or service.
Definition and etymology
The term 'sign' comes from the Old French signe (noun) and signer (verb), meaning a gesture or a motion of the hand. This, in turn, stems from the Latin signum, indicating an "identifying mark, token, indication, symbol; proof; military standard, ensign; a signal, an omen; sign in the heavens, constellation." In English, the term is also associated with a flag or ensign. In France, a banner not infrequently took the place of signs or sign boards in the Middle Ages. Signs, however, are best known in the form of painted or carved advertisements for shops, inns, cinemas, etc. They are one of various emblematic methods for publicly calling attention to the place to which they refer.
The term, 'signage' appears to have come into use in the 20th century as a collective noun used to describe a class of signs, especially advertising and promotional signs which came to prominence in the first decades of the twentieth century.
|
https://en.wikipedia.org/wiki/Darboux%27s%20theorem%20%28analysis%29
|
In mathematics, Darboux's theorem is a theorem in real analysis, named after Jean Gaston Darboux. It states that every function that results from the differentiation of another function has the intermediate value property: the image of an interval is also an interval.
When ƒ is continuously differentiable (ƒ in C1([a,b])), this is a consequence of the intermediate value theorem. But even when ƒ′ is not continuous, Darboux's theorem places a severe restriction on what it can be.
Darboux's theorem
Let $I$ be a closed interval, and let $f \colon I \to \mathbb{R}$ be a real-valued differentiable function. Then $f'$ has the intermediate value property: if $a$ and $b$ are points in $I$ with $a < b$, then for every $y$ between $f'(a)$ and $f'(b)$, there exists an $x$ in $[a, b]$ such that $f'(x) = y$.
Proofs
Proof 1. The first proof is based on the extreme value theorem.
If $y$ equals $f'(a)$ or $f'(b)$, then setting $x$ equal to $a$ or $b$, respectively, gives the desired result. Now assume that $y$ is strictly between $f'(a)$ and $f'(b)$, and in particular that $f'(a) > y > f'(b)$. Let $\varphi(x) = f(x) - yx$, so that $\varphi'(x) = f'(x) - y$. If it is the case that $f'(a) < y < f'(b)$ we adjust our below proof, instead asserting that $\varphi$ has its minimum on $[a, b]$.
Since $\varphi$ is continuous on the closed interval $[a, b]$, the maximum value of $\varphi$ on $[a, b]$ is attained at some point in $[a, b]$, according to the extreme value theorem.
Because $\varphi'(a) = f'(a) - y > 0$, we know $\varphi$ cannot attain its maximum value at $a$. (If it did, then $\frac{\varphi(t) - \varphi(a)}{t - a} \leq 0$ for all $t \in (a, b]$, which implies $\varphi'(a) \leq 0$.)
Likewise, because $\varphi'(b) = f'(b) - y < 0$, we know $\varphi$ cannot attain its maximum value at $b$.
Therefore, $\varphi$ must attain its maximum value at some point $x \in (a, b)$. Hence, by Fermat's theorem, $\varphi'(x) = 0$, i.e. $f'(x) = y$.
Proof 2. The second proof is based on combining the mean value theorem and the intermediate value theorem.
Define $c = \frac{a + b}{2}$.
For $a \leq t \leq c$, define $\alpha(t) = a$ and $\beta(t) = 2t - a$.
And for $c \leq t \leq b$, define $\alpha(t) = 2t - b$ and $\beta(t) = b$.
Thus, for $t \in (a, b)$ we have $a \leq \alpha(t) < \beta(t) \leq b$.
Now, define $g(t) = \dfrac{f(\beta(t)) - f(\alpha(t))}{\beta(t) - \alpha(t)}$ for $a < t < b$.
$g$ is continuous on $(a, b)$.
Furthermore, $g(t) \to f'(a)$ when $t \to a$ and $g(t) \to f'(b)$ when $t \to b$; therefore, from the intermediate value theorem, if $y$ is strictly between $f'(a)$ and $f'(b)$, then there exists $t_0 \in (a, b)$ such that $g(t_0) = y$.
Let's fix $t_0$.
From the mean value theorem, there exists a point $x \in (\alpha(t_0), \beta(t_0))$ such that $f'(x) = g(t_0)$.
Hence, $f'(x) = y$.
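The theorem can be illustrated numerically with the classic function $f(x) = x^2 \sin(1/x)$ (with $f(0) = 0$): its derivative is discontinuous at 0 yet, as Darboux's theorem guarantees, still attains every intermediate value. A short Python sketch added here purely for illustration:

```python
import math

def f_prime(x: float) -> float:
    """Derivative of f(x) = x^2 sin(1/x), f(0) = 0; discontinuous at x = 0."""
    if x == 0.0:
        return 0.0
    return 2.0 * x * math.sin(1.0 / x) - math.cos(1.0 / x)

a, b = -0.1, 0.15
target = (f_prime(a) + f_prime(b)) / 2.0  # a value between f'(a) and f'(b)

# Scan [a, b] for a point where f' comes close to the target value
xs = (a + (b - a) * k / 200_000 for k in range(200_001))
best = min(xs, key=lambda x: abs(f_prime(x) - target))
print(f"f'({best:.6f}) = {f_prime(best):.6f}, target = {target:.6f}")
```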
Darboux function
A Darboux function is a real-valued function ƒ which has the "intermediate value property": for
|
https://en.wikipedia.org/wiki/Insulator%20%28genetics%29
|
An insulator is a type of cis-regulatory element known as a long-range regulatory element. Found in multicellular eukaryotes and working over distances from the promoter element of the target gene, an insulator is typically 300 bp to 2000 bp in length. Insulators contain clustered binding sites for sequence specific DNA-binding proteins and mediate intra- and inter-chromosomal interactions.
Insulators function either as an enhancer-blocker or a barrier, or both. The mechanisms by which an insulator performs these two functions include loop formation and nucleosome modifications. There are many examples of insulators, including the CTCF insulator, the gypsy insulator, and the β-globin locus. The CTCF insulator is especially important in vertebrates, while the gypsy insulator is implicated in Drosophila. The β-globin locus was first studied in chicken and then in humans for its insulator activity, both of which utilize CTCF.
The genetic implications of insulators lie in their involvement in a mechanism of imprinting and their ability to regulate transcription. Mutations to insulators are linked to cancer as a result of cell cycle dysregulation, tumourigenesis, and silencing of growth suppressors.
Function
Insulators have two main functions:
Enhancer-blocking insulators prevent distal enhancers from acting on the promoter of neighbouring genes
Barrier insulators prevent silencing of euchromatin by the spread of neighbouring heterochromatin
While enhancer-blocking is classified as an inter-chromosomal interaction, acting as a barrier is classified as an intra-chromosomal interaction. The need for insulators arises where two adjacent genes on a chromosome have very different transcription patterns; it is critical that the inducing or repressing mechanisms of one do not interfere with the neighbouring gene. Insulators have also been found to cluster at the boundaries of topologically associating domains (TADs) and may have a role in partitioning the genome into "chr
|
https://en.wikipedia.org/wiki/Basic%20Math%20%28video%20game%29
|
Basic Math is an educational cartridge for the Atari Video Computer System (later called the Atari 2600) developed by Gary Palmer of Atari, Inc. It was one of the nine launch titles offered when the console went on sale in September 1977. In 1980, Basic Math was renamed Fun with Numbers.
Gameplay
The player's objective is to solve basic arithmetic problems. Game variations determine whether the player solves addition, subtraction, multiplication, or division problems, and whether the player can select the top number (the console randomly selects the lower number).
The player uses the joystick to enter an answer to the math problem. The game uses sound effects to signal whether the answer is right or wrong. If the player's answer is incorrect the game will then show the correct answer.
Reception
Basic Math was reviewed in Video magazine as part of a general review of the Atari VCS. It was described as "very basic" with reviewers drolly noting that "the controls of this game may be a little more complicated than the actual problems", and the game was scored a 5 out of 10.
In a retrospective look, Kevin Bunch wrote:
Legacy
On April 1, 2022, Atari announced Basic Math: Recharged, a web-based re-imagining of the original title.
See also
Math Gran Prix (1982)
|
https://en.wikipedia.org/wiki/Saltation%20%28biology%29
|
In biology, saltation () is a sudden and large mutational change from one generation to the next, potentially causing single-step speciation. This was historically offered as an alternative to Darwinism. Some forms of mutationism were effectively saltationist, implying large discontinuous jumps.
Speciation, such as by polyploidy in plants, can sometimes be achieved in a single and in evolutionary terms sudden step. Evidence exists for various forms of saltation in a variety of organisms.
History
Prior to Charles Darwin, most evolutionary scientists had been saltationists. Jean-Baptiste Lamarck was a gradualist but, like other scientists of the period, had written that saltational evolution was possible. Étienne Geoffroy Saint-Hilaire endorsed a theory of saltational evolution that "monstrosities could become the founding fathers (or mothers) of new species by instantaneous transition from one form to the next." Geoffroy wrote that environmental pressures could produce sudden transformations to establish new species instantaneously. In 1864, Albert von Kölliker revived Geoffroy's theory that evolution proceeds by large steps, under the name of heterogenesis.
With the publication of On the Origin of Species in 1859 Charles Darwin wrote that most evolutionary changes proceeded gradually but he did not deny the existence of jumps.
From 1860 to 1880 saltation had a minority interest, but by 1890 it had become a major interest to scientists. In their paper on evolutionary theories in the 20th century, Levit et al. wrote:
The advocates of saltationism deny the Darwinian idea of slowly and gradually growing divergence of character as the only source of evolutionary progress. They would not necessarily completely deny gradual variation, but claim that cardinally new ‘body plans’ come into being as a result of saltations (sudden, discontinuous and crucial changes, for example, the series of macromutations). The latter are responsible for the sudden appearance of new higher
|
https://en.wikipedia.org/wiki/Peter%20B.%20Andrews
|
Peter Bruce Andrews (born 1937) is an American mathematician and Professor of Mathematics, Emeritus at Carnegie Mellon University in Pittsburgh, Pennsylvania, and the creator of the mathematical logic Q0. He received his Ph.D. from Princeton University in 1964 under the tutelage of Alonzo Church. He received the Herbrand Award in 2003. His research group designed the TPS automated theorem prover. A subsystem ETPS (Educational Theorem Proving System) of TPS is used to help students learn logic by interactively constructing natural deduction proofs.
Publications
Andrews, Peter B. (1965). A Transfinite Type Theory with Type Variables. North Holland Publishing Company, Amsterdam.
Andrews, Peter B. (1971). "Resolution in type theory". Journal of Symbolic Logic 36, 414–432.
Andrews, Peter B. (1981). "Theorem proving via general matings". J. Assoc. Comput. Mach. 28, no. 2, 193–214.
Andrews, Peter B. (1986). An introduction to mathematical logic and type theory: to truth through proof. Computer Science and Applied Mathematics. . Academic Press, Inc., Orlando, FL.
Andrews, Peter B. (1989). "On connections and higher-order logic". J. Automat. Reason. 5, no. 3, 257–291.
Andrews, Peter B.; Bishop, Matthew; Issar, Sunil; Nesmith, Dan; Pfenning, Frank; Xi, Hongwei (1996). "TPS: a theorem-proving system for classical type theory". J. Automat. Reason. 16, no. 3, 321–353.
Andrews, Peter B. (2002). An introduction to mathematical logic and type theory: to truth through proof. Second edition. Applied Logic Series, 27. . Kluwer Academic Publishers, Dordrecht.
|
https://en.wikipedia.org/wiki/International%20Society%20of%20Biometeorology
|
The International Society of Biometeorology (ISB) is a professional society for scientists interested in biometeorology, specifically environmental and ecological aspects of the interaction of the atmosphere and biosphere. The organization's stated purpose is: "to provide one international organization for the promotion of interdisciplinary collaboration of meteorologists, physicians, physicists, biologists, climatologists, ecologists and other scientists and to promote the development of Biometeorology".
The International Society of Biometeorology was founded in 1956 at UNESCO headquarters in Paris, France, by S. W. Tromp, a Dutch geologist, H. Ungeheuer, a German meteorologist, and several human physiologists, of whom F. Sargent II of the United States became the first President of the society.
ISB affiliated organizations include: the International Association for Urban Climate, the International Society for Agricultural Meteorology, the International Union of Biological Sciences, the World Health Organization, and the World Meteorological Organization. ISB affiliate members include: the American Meteorological Society, the Centre for Renewable Energy Sources, the German Meteorological Society, the Society for the Promotion of Medicine-Meteorological Research e.V., International Society of Medical Hydrology and Climatology, and the UK Met Office.
Publications
ISB publishes the following journals:
Bulletin of the Society of Biometeorology
International Journal of Biometeorology
|
https://en.wikipedia.org/wiki/Greg%20Fahy
|
Gregory M. Fahy is a California-based cryobiologist, biogerontologist, and businessman. He is Vice President and Chief Scientific Officer at Twenty-First Century Medicine, Inc, and has co-founded Intervene Immune, a company developing clinical methods to reverse immune system aging. He is the 2022–2023 president of the Society for Cryobiology.
Education
A native of California, Fahy holds a Bachelor of Science degree in Biology from the University of California, Irvine and a PhD in pharmacology and cryobiology from the Medical College of Georgia in Augusta.
He currently serves on the board of directors of two organizations and as a referee for numerous scientific journals and funding agencies, and holds 35 patents on cryopreservation methods, aging interventions, transplantation, and other topics.
Career
Fahy is the world's foremost expert in organ cryopreservation by vitrification. Fahy introduced the modern successful approach to vitrification for cryopreservation in cryobiology and he is widely credited, along with William F. Rall, for introducing vitrification into the field of reproductive biology.
In 2005, as a keynote speaker at the annual Society for Cryobiology meeting, Fahy announced that Twenty-First Century Medicine had successfully cryopreserved a rabbit kidney at −130 °C by vitrification and transplanted it into a rabbit after rewarming, with subsequent long-term life support by the vitrified-rewarmed kidney as the sole kidney. This research breakthrough was later published in the peer-reviewed journal Organogenesis.
Fahy is also a biogerontologist and is the originator and Editor-in-Chief of The Future of Aging: Pathways to Human Life Extension, a multi-authored book on the future of biogerontology. He currently serves on the editorial boards of Rejuvenation Research and the Open Geriatric Medicine Journal and served for 16 years as a Director of the American Aging Association and for 6 years as the editor of AGE News, the organization
|
https://en.wikipedia.org/wiki/BitLocker
|
BitLocker is a full volume encryption feature included with Microsoft Windows versions starting with Windows Vista. It is designed to protect data by providing encryption for entire volumes. By default, it uses the Advanced Encryption Standard (AES) algorithm in cipher block chaining (CBC) or "xor–encrypt–xor (XEX)-based Tweaked codebook mode with ciphertext Stealing" (XTS) mode with a 128-bit or 256-bit key. CBC is not used over the whole disk; it is applied to each individual sector.
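To illustrate why CBC is applied per sector rather than across the whole disk, the sketch below encrypts each 512-byte sector independently, deriving the IV from the sector index so that any sector can be read or rewritten without touching its neighbours. This is a simplified, hypothetical scheme using the third-party Python `cryptography` package, not BitLocker's actual key or IV derivation:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR_SIZE = 512  # bytes; a multiple of the 16-byte AES block size

def encrypt_sector(key: bytes, sector_index: int, plaintext: bytes) -> bytes:
    """Encrypt one sector with AES-256-CBC, using a per-sector IV.

    Deriving the IV from the sector index (here simply the big-endian
    index, a hypothetical scheme) keeps sectors independently decryptable,
    which is why CBC is not chained over the whole disk.
    """
    assert len(plaintext) == SECTOR_SIZE
    iv = sector_index.to_bytes(16, "big")
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

volume_key = os.urandom(32)  # 256-bit volume key
sector7 = encrypt_sector(volume_key, 7, b"\x00" * SECTOR_SIZE)
```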
History
BitLocker originated as a part of Microsoft's Next-Generation Secure Computing Base architecture in 2004 as a feature tentatively codenamed "Cornerstone" and was designed to protect information on devices, particularly if a device was lost or stolen. Another feature, titled "Code Integrity Rooting", was designed to validate the integrity of Microsoft Windows boot and system files. When used in conjunction with a compatible Trusted Platform Module (TPM), BitLocker can validate the integrity of boot and system files before decrypting a protected volume; an unsuccessful validation will prohibit access to a protected system. BitLocker was briefly called Secure Startup before Windows Vista's release to manufacturing.
BitLocker is available on:
Enterprise and Ultimate editions of Windows Vista and Windows 7
Pro and Enterprise editions of Windows 8 and 8.1
Windows Server 2008 and later
Pro, Enterprise, and Education editions of Windows 10
Pro, Enterprise, and Education editions of Windows 11
Features
Initially, the graphical BitLocker interface in Windows Vista could only encrypt the operating system volume. Starting with Windows Vista with Service Pack 1 and Windows Server 2008, volumes other than the operating system volume could be encrypted using the graphical tool. Still, some aspects of BitLocker (such as turning autolocking on or off) had to be managed through a command-line tool called manage-bde.wsf.
The version of BitLocker included in Windows 7 and Windows S
|
https://en.wikipedia.org/wiki/Adipocyte%20protein%202
|
aP2 (adipocyte Protein 2) is a carrier protein for fatty acids that is primarily expressed in adipocytes and macrophages. aP2 is also called fatty acid binding protein 4 (FABP4). Blocking this protein either through genetic engineering or drugs has the possibility of treating heart disease and the metabolic syndrome.
See also
Fatty acid-binding protein
|
https://en.wikipedia.org/wiki/Whitehead%27s%20theory%20of%20gravitation
|
In theoretical physics, Whitehead's theory of gravitation was introduced by the mathematician and philosopher Alfred North Whitehead in 1922. While never broadly accepted, at one time it was a scientifically plausible alternative to general relativity. However, after further experimental and theoretical consideration, the theory is now generally regarded as obsolete.
Principal features
Whitehead developed his theory of gravitation by considering how the world line of a particle is affected by those of nearby particles. He arrived at an expression for what he called the "potential impetus" of one particle due to another, which modified Newton's law of universal gravitation by including a time delay for the propagation of gravitational influences. Whitehead's formula for the potential impetus involves the Minkowski metric, which is used to determine which events are causally related and to calculate how gravitational influences are delayed by distance. The potential impetus calculated by means of the Minkowski metric is then used to compute a physical spacetime metric , and the motion of a test particle is given by a geodesic with respect to the metric . Unlike the Einstein field equations, Whitehead's theory is linear, in that the superposition of two solutions is again a solution. This implies that Einstein's and Whitehead's theories will generally make different predictions when more than two massive bodies are involved.
Following the notation of Chiang and Hamity, introduce a Minkowski spacetime with metric tensor $\eta_{\mu\nu}$, where the indices $\mu, \nu$ run from 0 through 3, and let the masses of a set of gravitating particles $a$ be $m_a$.
The Minkowski arc length of particle $a$ is denoted by $\sigma_a$. Consider an event $P$ with co-ordinates $x^\mu$. A retarded event $P_a$ with co-ordinates $x_a^\mu$ on the world-line of particle $a$ is defined by the relations $y_a^\mu y_{a\mu} = 0$, $y_a^0 > 0$, where $y_a^\mu = x^\mu - x_a^\mu$. The unit tangent vector at $P_a$ is $u_a^\mu = \mathrm{d}x_a^\mu / \mathrm{d}\sigma_a$. We also need the invariants $r_a = \eta_{\mu\nu} \, y_a^\mu u_a^\nu$. Then, a gravitational tensor potential is defined by
$$g_{\mu\nu} = \eta_{\mu\nu} - 2 \sum_a \frac{m_a}{r_a^3} \, y_{a\mu} y_{a\nu}$$
where the sum is taken over the retarded positions $P_a$ of the particles.
It is the metric that appears in th
|
https://en.wikipedia.org/wiki/Specific%20strength
|
The specific strength is a material's (or muscle's) strength (force per unit area at failure) divided by its density. It is also known as the strength-to-weight ratio or strength/weight ratio or strength-to-mass ratio. In fiber or textile applications, tenacity is the usual measure of specific strength. The SI unit for specific strength is Pa⋅m3/kg, or N⋅m/kg, which is dimensionally equivalent to m2/s2, though the latter form is rarely used. Specific strength has the same units as specific energy, and is related to the maximum specific energy of rotation that an object can have without flying apart due to centrifugal force.
Another way to describe specific strength is breaking length, also known as self support length: the maximum length of a vertical column of the material (assuming a fixed cross-section) that could suspend its own weight when supported only at the top. For this measurement, the definition of weight is the force of gravity at the Earth's surface (standard gravity, 9.80665 m/s2) applying to the entire length of the material, not diminishing with height. This usage is more common with certain specialty fiber or textile applications.
The materials with the highest specific strengths are typically fibers such as carbon fiber, glass fiber and various polymers, and these are frequently used to make composite materials (e.g. carbon fiber-epoxy). These materials and others such as titanium, aluminium, magnesium and high strength steel alloys are widely used in aerospace and other applications where weight savings are worth the higher material cost.
Note that strength and stiffness are distinct. Both are important in design of efficient and safe structures.
Calculations of breaking length
$$L = \frac{\sigma}{\rho g}$$
where $L$ is the breaking length, $\sigma$ is the tensile strength, $\rho$ is the density and $g$ is the acceleration due to gravity ($\approx 9.8$ m/s$^2$).
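For a feel of the magnitudes involved, breaking length can be computed directly from the formula above; the material values in this Python sketch are approximate, best-case illustrations rather than authoritative data:

```python
g = 9.80665  # standard gravity, m/s^2

# name: (tensile strength in Pa, density in kg/m^3) -- approximate values
materials = {
    "structural steel": (400e6, 7850),
    "aluminium alloy":  (600e6, 2700),
    "Kevlar fibre":     (3620e6, 1440),
}

for name, (sigma, rho) in materials.items():
    breaking_length_km = sigma / (rho * g) / 1000.0
    print(f"{name}: ~{breaking_length_km:.0f} km")
```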
Examples
The data in this table are best-case values, and have been assembled to give rough figures.
Note: Multiwalled carbon nanotubes have the h
|
https://en.wikipedia.org/wiki/Conductor%20gallop
|
Conductor gallop is the high-amplitude, low-frequency oscillation of overhead power lines due to wind. The movement of the wires occurs most commonly in the vertical plane, although horizontal or rotational motion is also possible. The natural frequency mode tends to be around 1 Hz, leading the often graceful periodic motion to also be known as conductor dancing. The oscillations can exhibit amplitudes in excess of a metre, and the displacement is sometimes sufficient for the phase conductors to infringe operating clearances (coming too close to other objects) and cause flashover. The forceful motion also adds significantly to the loading stress on insulators and electricity pylons, raising the risk of mechanical failure of either.
The mechanisms that initiate gallop are not always clear, though it is thought to be often caused by asymmetric conductor aerodynamics due to ice build up on one side of a wire. The crescent of encrusted ice approximates an aerofoil, altering the normally round profile of the wire and increasing the tendency to oscillate.
Gallop can be a significant problem for transmission system operators, particularly where lines cross open, windswept country and are at risk of ice loading. If gallop is likely to be a concern, designers can employ smooth-faced conductors, whose improved icing and aerodynamic characteristics reduce the motion. Additionally, anti-gallop devices may be mounted to the line to convert the lateral motion to a less damaging twisting one. Increasing the tension in the line and adopting more rigid insulator attachments have the effect of reducing galloping motion. These measures can be costly, are often impractical after the line has been constructed, and can increase the tendency for the line to exhibit high frequency oscillations.
If ice loading is suspected, it may be possible to increase power transfer on the line, and so raise its temperature by Joule heating, melting the ice. The sudden loss of ice from a line can r
|
https://en.wikipedia.org/wiki/Interphalangeal%20joints%20of%20the%20hand
|
The interphalangeal joints of the hand are the hinge joints between the phalanges of the fingers that provide flexion towards the palm of the hand.
There are two sets in each finger (except in the thumb, which has only one joint):
"proximal interphalangeal joints" (PIJ or PIP), those between the first (also called proximal) and second (intermediate) phalanges
"distal interphalangeal joints" (DIJ or DIP), those between the second (intermediate) and third (distal) phalanges
Anatomically, the proximal and distal interphalangeal joints are very similar. There are some minor differences in how the palmar plates are attached proximally and in the segmentation of the flexor tendon sheath, but the major differences are the smaller dimension and reduced mobility of the distal joint.
Joint structure
The PIP joint exhibits great lateral stability. Its transverse diameter is greater than its antero-posterior diameter, and its thick collateral ligaments are tight in all positions during flexion, unlike those of the metacarpophalangeal joint.
Dorsal structures
The capsule, extensor tendon, and skin are very thin and lax dorsally, allowing both phalanx bones to flex more than 100° until the base of the middle phalanx makes contact with the condylar notch of the proximal phalanx.
At the level of the PIP joint the extensor mechanism splits into three bands. The central slip attaches to the dorsal tubercle of the middle phalanx near the PIP joint. The pair of lateral bands, to which the extensor tendons contribute, continues past the PIP joint dorsal to the joint axis. These three bands are united by a transverse retinacular ligament, which runs from the palmar border of the lateral band to the flexor sheath at the level of the joint and prevents dorsal displacement of that lateral band. On the palmar side of the joint axis of motion lies the oblique retinacular ligament [of Landsmeer], which stretches from the flexor sheath over the proximal phalanx to the te
|
https://en.wikipedia.org/wiki/Medial%20meniscus
|
The medial meniscus is a fibrocartilage semicircular band that spans the knee joint medially, located between the medial condyle of the femur and the medial condyle of the tibia. It is also referred to as the internal semilunar fibrocartilage. The medial meniscus has more of a crescent shape while the lateral meniscus is more circular. The anterior aspects of both menisci are connected by the transverse ligament. It is a common site of injury, especially if the knee is twisted.
Structure
The meniscus attaches to the tibia via coronary ligaments.
Its anterior end, thin and pointed, is attached to the anterior intercondyloid fossa of the tibia, in front of the anterior cruciate ligament;
Its posterior end is fixed to the posterior intercondyloid fossa of the tibia, between the attachments of the lateral meniscus and the posterior cruciate ligament.
It is fused with the tibial collateral ligament which makes it far less mobile than the lateral meniscus. The points of attachment are relatively widely separated and, because the meniscus is wider posteriorly than anteriorly, the anterior crus is considerably thinner than the posterior crus. The greatest displacement of the meniscus is caused by external rotation, while internal rotation relaxes it.
During rotational movements of the tibia (with the knee flexed 90 degrees), the medial meniscus remains relatively fixed while the lateral part of the lateral meniscus is displaced across the tibial condyle below.
Function
The medial meniscus separates the tibia and femur to decrease the contact area between the bones, and serves as a shock absorber reducing the peak contact force experienced. It also reduces friction between the two bones to allow smooth movement in the knee and distribute load during movement.
Clinical significance
Injury
Acute injury to the medial meniscus frequently accompanies an injury to the ACL (anterior cruciate ligament) or MCL (medial collateral ligament). A person occasionally injures the
|
https://en.wikipedia.org/wiki/Lateral%20meniscus
|
The lateral meniscus (external semilunar fibrocartilage) is a fibrocartilaginous band that spans the lateral side of the interior of the knee joint. It is one of two menisci of the knee, the other being the medial meniscus. It is nearly circular and covers a larger portion of the articular surface than the medial. It can occasionally be injured or torn by twisting the knee or applying direct force, as seen in contact sports.
Structure
The lateral meniscus is grooved laterally for the tendon of the popliteus, which separates it from the fibular collateral ligament.
Its anterior end is attached in front of the intercondyloid eminence of the tibia, lateral to, and behind, the anterior cruciate ligament, with which it blends; the posterior end is attached behind the intercondyloid eminence of the tibia and in front of the posterior end of the medial meniscus.
The anterior attachment of the lateral meniscus is twisted on itself so that its free margin looks backward and upward, its anterior end resting on a sloping shelf of bone on the front of the lateral process of the intercondyloid eminence.
Close to its posterior attachment it sends off a strong fasciculus, the ligament of Wrisberg, which passes upward and medialward, to be inserted into the medial condyle of the femur, immediately behind the attachment of the posterior cruciate ligament.
The lateral meniscus gives off from its anterior convex margin a fasciculus which forms the transverse ligament.
Variation
Occasionally a small fasciculus passes forward to be inserted into the lateral part of the anterior cruciate ligament.
Clinical significance
The lateral meniscus is less likely to be injured or torn than the medial meniscus. Diagnosis of a lateral meniscus tear is made with McMurray's test. If a tear is detected, treatment depends on the type and size of the tear. Small tears can be treated conservatively, with rest, ice, and pain medications until the pain is under control; then exercise may be starte
|
https://en.wikipedia.org/wiki/Contact%20immunity
|
Contact immunity is the property of some vaccines, where a vaccinated individual can confer immunity upon unimmunized individuals through contact with bodily fluids or excrement. In other words, if person “A” has been vaccinated for virus X and person “B” has not, person “B” can receive immunity to virus X just by coming into contact with person “A”. The term was coined by Romanian physician Ioan Cantacuzino.
The potential for contact immunity exists primarily in "live" or attenuated vaccines. Vaccination with a live, but attenuated, virus can produce immunity to more dangerous forms of the virus. These attenuated viruses produce little or no illness in most people. However, the live virus multiplies briefly, may be shed in body fluids or excrement, and can be contracted by another person. If this contact produces immunity and carries no notable risk, it benefits an additional person, and further increases the immunity of the group.
The most prominent example of contact immunity was the oral polio vaccine (OPV). This live, attenuated polio vaccine was widely used in the US between 1960 and 1990; it continues to be used in polio eradication programs in developing countries because of its low cost and ease of administration. It is popular, in part, because it is capable of contact immunity. Recently immunized children "shed" live virus in their feces for a few days after immunization. About 25 percent of people coming into contact with someone immunized with OPV gained protection from polio through this form of contact immunity. Although contact immunity is an advantage of OPV, the risk of vaccine-associated paralytic poliomyelitis—affecting 1 child per 2.4 million OPV doses administered—led the Centers for Disease Control and Prevention (CDC) to cease recommending its use in the US as of January 1, 2000, in favor of inactivated poliovirus vaccine (IPV). The CDC continues to recommend OPV over IPV for global polio eradication activities.
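The trade-off quoted above is straightforward arithmetic; this sketch simply restates those rates in code, with a purely hypothetical campaign size as the assumed input.

```python
# Illustrative arithmetic for the quoted OPV figures. The campaign and
# contact counts are hypothetical assumptions; the rates come from the text.

doses = 10_000_000              # hypothetical OPV doses administered
vapp_rate = 1 / 2_400_000       # vaccine-associated paralysis risk per dose
contact_protection = 0.25       # share of contacts gaining immunity

print(f"Expected VAPP cases: {doses * vapp_rate:.1f}")             # ~4.2

contacts = 5_000_000            # hypothetical unvaccinated contacts
print(f"Contacts protected: {contacts * contact_protection:,.0f}")  # 1,250,000
```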
The main drawback of liv
|
https://en.wikipedia.org/wiki/Salo%20Finkelstein
|
Salo Finkelstein (born 1896 or 1897, date of death unknown) was a mental calculator. He was born in Łódź (then within the Russian Empire, now in Poland) to a Jewish family.
While at school he was above average in mathematics, and discovered his calculating abilities as well as his faculty for memorizing numbers. At the age of 23, he began demonstrating these abilities in public, but lost interest for some time. He found employment with the Polish government in the State Statistical Office.
In 1928 he performed before Professor Hans Henning in the Free City of Danzig. Henning had previously tested other calculators, Dr. Ferrol and Gottfried Ruckle, and found Finkelstein to be superior. In 1931 Finkelstein went on an international tour, demonstrating his abilities and submitting himself for tests.
In 1932 he arrived in the United States and tried without success to find employment in a bank as a checker of calculations. In 1937 an article was published that described and analyzed his abilities, with the general conclusion that although he could perform calculations much more rapidly than most people, his thinking processes seemed to obey the same laws as anyone else's and were not indicative of any unnatural powers. In particular, during multiplication, the time for performing operations was proportional not to the number of digits in the multiplied numbers, but to the number of separate "acts of attention" necessary to perform the multiplication by ordinary rules. Also, his results were not always correct; accuracy decreased rapidly as the number of "acts of attention" grew, and apparently depended on concentration.
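One plausible reading of the "acts of attention" finding (my interpretation, not the article's) is that in schoolbook multiplication the work is dominated by the count of single-digit products, which grows with the product of the operands' digit counts rather than their sum:

```python
# Sketch: counting single-digit multiplications in schoolbook long
# multiplication. Treating each digit-by-digit product as one "act of
# attention" is an assumption made for illustration.

def digit_products(a: int, b: int) -> int:
    """Single-digit multiplications needed to multiply a and b by hand."""
    return len(str(abs(a))) * len(str(abs(b)))

# Same total digit count (eight digits), very different workloads:
print(digit_products(1234567, 8))   # 7 * 1 = 7 acts
print(digit_products(1234, 5678))   # 4 * 4 = 16 acts
```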
After failing to secure a job that matched his abilities, and unwilling to become a stage calculator, he attempted a career as a chess player between 1941 and 1949. His subsequent fate is unknown.
Notes
|
https://en.wikipedia.org/wiki/Laurel%20wreath
|
A laurel wreath is a round wreath made of connected branches and leaves of the bay laurel (Laurus nobilis), an aromatic broadleaf evergreen, or later from spineless butcher's broom (Ruscus hypoglossum) or cherry laurel (Prunus laurocerasus). It is a symbol of triumph and is worn as a chaplet around the head, or as a garland around the neck.
Wreaths and crowns in antiquity, including the laurel wreath, trace back to Ancient Greece. In Greek mythology, the god Apollo, who is patron of lyrical poetry, musical performance and skill-based athletics, is conventionally depicted wearing a laurel wreath on his head in all three roles. Wreaths were awarded to victors in athletic competitions, including the ancient Olympics; for victors in athletics (sc. at Olympia) they were made of the wild olive known as "kotinos" (κότινος), and the same for winners of musical and poetic competitions. In Rome they were symbols of martial victory, crowning a successful commander during his triumph. Whereas ancient laurel wreaths are most often depicted as a horseshoe shape, modern versions are usually complete rings.
In common modern idiomatic usage, a laurel wreath or "crown" refers to a victory. The expression "resting on one's laurels" refers to someone relying entirely on long-past successes for continued fame or recognition, while to "look to one's laurels" means to be careful of losing rank to competition.
Background
Apollo, the patron of sport, is associated with the wearing of a laurel wreath. This association arose from the ancient Greek mythology story of Apollo and Daphne. Apollo mocked the god of love, Eros (Cupid), for his use of bow and arrow, since Apollo is also patron of archery. The insulted Eros then prepared two arrows—one of gold and one of lead. He shot Apollo with the gold arrow, instilling in the god a passionate love for the river nymph Daphne. He shot Daphne with the lead arrow, instilling in her a hatred of Apollo. Apollo pursued Daphne until she begged to be free of him and w
|