passage: The adjoint state method is a numerical method for efficiently computing the gradient of a function or operator in a numerical optimization problem. It has applications in geophysics, seismic imaging, photonics and more recently in neural networks.
The adjoint state space is chosen to simplify the physical interpretation of equation constraints.
Adjoint state techniques allow the use of integration by parts, resulting in a form which explicitly contains the physically interesting quantity. An adjoint state equation is introduced, including a new unknown variable.
The adjoint method formulates the gradient of a function with respect to its parameters in a constrained-optimization form. By using the dual form of this constrained optimization problem, the gradient can be computed very efficiently. A useful property is that the number of computations is independent of the number of parameters for which the gradient is required.
The adjoint method is derived from the dual problem and is used e.g. in the Landweber iteration method.
The name adjoint state method refers to the dual form of the problem, where the adjoint matrix
$$
A^*=\overline A ^T
$$
is used.
When the initial problem consists of calculating the product
$$
s^T x
$$
and
$$
x
$$
must satisfy
$$
Ax=b
$$
, the dual problem can be realized as calculating the product
$$
r^T b
$$
, where
$$
r
$$
must satisfy
$$
A^* r = s
$$
.
And
$$
r
$$
is called the adjoint state vector.
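A minimal numerical sketch of this dual relationship (using NumPy; the matrix, vectors, and sizes are illustrative assumptions, not from the passage): computing the product by solving the adjoint system A* r = s gives the same value as solving Ax = b directly, and the single adjoint solve can be reused for many right-hand sides b.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)
s = rng.standard_normal(n)

# Primal: solve A x = b, then form s^T x.
x = np.linalg.solve(A, b)
primal = s @ x

# Adjoint (dual): solve A^* r = s, then form r^T b.
r = np.linalg.solve(A.conj().T, s)
dual = r @ b

print(np.isclose(primal, dual))  # True: s^T x == r^T b
```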
|
https://en.wikipedia.org/wiki/Adjoint_state_method
|
passage: Thomson referred to the correctness of Joule's formula as "Mayer's hypothesis", on account of it having been first assumed by Mayer. Thomson arranged numerous experiments in coordination with Joule, eventually concluding by 1854 that Joule's formula was correct and the effect of temperature on the density of saturated steam accounted for all discrepancies with Regnault's data. Therefore, in terms of the modern Kelvin scale
$$
T
$$
, the first scale could be expressed as follows:
$$
T_{1848} = 100 \times \frac{\log(T / \text{273 K})}{\log(\text{373 K} / \text{273 K})}
$$
The parameters of the scale were arbitrarily chosen to coincide with the Celsius scale at 0° and 100 °C or 273 and 373 K (the melting and boiling points of water). On this scale, an increase of approximately 222 degrees corresponds to a doubling of Kelvin temperature, regardless of the starting temperature, and "infinite cold" (absolute zero) has a numerical value of negative infinity.
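A quick numerical check of these two properties (a sketch; the helper function simply implements the displayed formula):

```python
import math

def t_1848(T_kelvin):
    """Thomson's 1848 scale as given above, with T in kelvins."""
    return 100 * math.log(T_kelvin / 273) / math.log(373 / 273)

print(t_1848(273))   # 0.0   -> melting point of water
print(t_1848(373))   # 100.0 -> boiling point of water
# Doubling the Kelvin temperature always adds the same number of degrees:
print(t_1848(2 * 300) - t_1848(300))    # ~222
print(t_1848(2 * 1000) - t_1848(1000))  # ~222 again; T -> 0 gives -infinity
```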
#### Modern absolute scale
Thomson understood that with Joule's proposed formula for
$$
\mu
$$
, the relationship between work and heat for a perfect thermodynamic engine was simply the constant
$$
J
$$
. In 1854, Thomson and Joule thus formulated a second absolute scale that was more practical and convenient, agreeing with air thermometers for most purposes.
|
https://en.wikipedia.org/wiki/Kelvin
|
passage: According to inferentialist semantics, the meaning of the word brother is determined by these and all similar inferences that can be drawn.
## History
Semantics was established as an independent field of inquiry in the 19th century but the study of semantic phenomena began as early as the ancient period as part of philosophy and logic. In ancient Greece, Plato (427–347 BCE) explored the relation between names and things in his dialogue Cratylus. It considers the positions of naturalism, which holds that things have their name by nature, and conventionalism, which states that names are related to their referents by customs and conventions among language users. The book On Interpretation by Aristotle (384–322 BCE) introduced various conceptual distinctions that greatly influenced subsequent works in semantics. He developed an early form of the semantic triangle by holding that spoken and written words evoke mental concepts, which refer to external things by resembling them. For him, mental concepts are the same for all humans, unlike the conventional words they associate with those concepts. The Stoics incorporated many of the insights of their predecessors to develop a complex theory of language through the perspective of logic. They discerned different kinds of words by their semantic and syntactic roles, such as the contrast between names, common nouns, and verbs. They also discussed the difference between statements, commands, and prohibitions.
In ancient India, the orthodox school of Nyaya held that all names refer to real objects.
|
https://en.wikipedia.org/wiki/Semantics
|
passage: In aerodynamics, a hypersonic speed is one that exceeds five times the speed of sound, often stated as starting at speeds of Mach 5 and above.
The precise Mach number at which a craft can be said to be flying at hypersonic speed varies, since individual physical changes in the airflow (like molecular dissociation and ionization) occur at different speeds; these effects collectively become important around Mach 5–10. The hypersonic regime can also be alternatively defined as speeds where specific heat capacity changes with the temperature of the flow as kinetic energy of the moving object is converted into heat.
## Characteristics of flow
While the definition of hypersonic flow can be quite vague and is generally debatable (especially because of the absence of discontinuity between supersonic and hypersonic flows), a hypersonic flow may be characterized by certain physical phenomena that can no longer be analytically discounted as in supersonic flow. The peculiarities in hypersonic flows are as follows:
1. Shock layer
1. Aerodynamic heating
1. Entropy layer
1. Real gas effects
1. Low density effects
1. Independence of aerodynamic coefficients with Mach number.
### Small shock stand-off distance
As a body's Mach number increases, the density behind a bow shock generated by the body also increases, which corresponds to a decrease in volume behind the shock due to conservation of mass. Consequently, the distance between the bow shock and the body decreases at higher Mach numbers.
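The density rise behind the shock can be made concrete with the normal-shock density ratio for a calorically perfect gas, ρ2/ρ1 = (γ+1)M² / ((γ−1)M² + 2). This relation is a standard aerodynamics result assumed here for illustration, not something stated in the passage:

```python
def density_ratio(M, gamma=1.4):
    """Density ratio across a normal shock for a calorically perfect gas."""
    return (gamma + 1) * M**2 / ((gamma - 1) * M**2 + 2)

for M in (2, 5, 10, 25):
    print(M, round(density_ratio(M), 2))
# The ratio rises toward (gamma+1)/(gamma-1) = 6 for gamma = 1.4, so the shock
# layer becomes thinner (smaller stand-off distance) as Mach number increases.
```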
### Entropy layer
|
https://en.wikipedia.org/wiki/Hypersonic_speed
|
passage: ### British North America 1612–1783
An English lottery, authorized by King James I in 1612, granted the Virginia Company of London the right to raise money to help establish settlers in the first permanent English colony at Jamestown, Virginia.
Lotteries in colonial America played a significant part in the financing of both private and public ventures. It has been recorded that more than 200 lotteries were sanctioned between 1744 and 1776, and played a major role in financing roads, libraries, churches, colleges, canals, bridges, etc.
In the 1740s, the foundation of Princeton and Columbia Universities was financed by lotteries, as was the University of Pennsylvania by the Academy Lottery in 1755.
During the French and Indian Wars, several colonies used lotteries to help finance fortifications and their local militia. In May 1758, the Province of Massachusetts Bay raised money with a lottery for the "Expedition against Canada".
Benjamin Franklin organized a lottery to raise money to purchase cannons for the defense of Philadelphia. Several of these lotteries offered prizes in the form of "Pieces of Eight". George Washington's Mountain Road Lottery in 1768 was unsuccessful, but these rare lottery tickets bearing Washington's signature became collectors' items; one example sold for about $15,000 in 2007. Washington was also a manager for Col. Bernard Moore's "Slave Lottery" in 1769, which advertised land and slaves as prizes in The Virginia Gazette.
|
https://en.wikipedia.org/wiki/Lottery
|
passage: ### Examples
White noise is the simplest example of a stationary process.
An example of a discrete-time stationary process where the sample space is also discrete (so that the random variable may take one of N possible values) is a Bernoulli scheme. Other examples of a discrete-time stationary process with continuous sample space include some autoregressive and moving average processes which are both subsets of the autoregressive moving average model. Models with a non-trivial autoregressive component may be either stationary or non-stationary, depending on the parameter values, and important non-stationary special cases are where unit roots exist in the model.
#### Example 1
Let
$$
Y
$$
be any scalar random variable, and define a time-series
$$
\left\{X_t\right\}
$$
, by
$$
X_t=Y \qquad \text{ for all } t.
$$
Then
$$
\left\{X_t\right\}
$$
is a stationary time series, for which realisations consist of a series of constant values, with a different constant value for each realisation. A law of large numbers does not apply in this case, as the limiting value of an average from a single realisation takes the random value determined by
$$
Y
$$
, rather than taking the expected value of
$$
Y
$$
.
The time average of
$$
X_t
$$
does not converge since the process is not ergodic.
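A short simulation of Example 1 (a sketch; the distribution chosen for Y is an arbitrary assumption): every realisation of the series is a constant path, so its time average equals that realisation's draw of Y rather than the expected value of Y.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000  # length of each realisation

for _ in range(3):
    Y = rng.normal(loc=0.0, scale=1.0)  # one draw of the scalar random variable Y
    X = np.full(T, Y)                   # X_t = Y for all t
    print(X.mean())                     # equals this realisation's Y, not E[Y] = 0
```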
#### Example 2
As a further example of a stationary process for which any single realisation has an apparently noise-free structure,
|
https://en.wikipedia.org/wiki/Stationary_process
|
passage: The distinguished sequences are called "exact sequences", hence the name. The precise axioms for this distinguished class do not matter for the construction of the Grothendieck group.
The Grothendieck group is defined in the same way as before as the abelian group with one generator [M] for each (isomorphism class of) object(s) of the category
$$
\mathcal{A}
$$
and one relation
$$
[A]-[B]+[C] = 0
$$
for each exact sequence
$$
A\hookrightarrow B\twoheadrightarrow C
$$
.
Alternatively and equivalently, one can define the Grothendieck group using a universal property: A map
$$
\chi: \mathrm{Ob}(\mathcal{A})\to X
$$
from
$$
\mathcal{A}
$$
into an abelian group X is called "additive" if for every exact sequence
$$
A\hookrightarrow B\twoheadrightarrow C
$$
one has
$$
\chi(A)-\chi(B)+\chi(C)=0
$$
; an abelian group G together with an additive mapping
$$
\phi: \mathrm{Ob}(\mathcal{A})\to G
$$
is called the Grothendieck group of
$$
\mathcal{A}
$$
iff every additive map
$$
\chi: \mathrm{Ob}(\mathcal{A})\to X
$$
factors uniquely through
$$
\phi
$$
.
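As a standard illustration of the universal property (a routine example, not drawn from the passage itself): take the category of finite-dimensional vector spaces over a field k with its usual short exact sequences. Every exact sequence
$$
V'\hookrightarrow V\twoheadrightarrow V''
$$
satisfies
$$
\dim V' - \dim V + \dim V'' = 0,
$$
so dimension is an additive map into the integers. Since every object satisfies [V] = (dim V)[k], any additive map is determined by its value on the one-dimensional space k and therefore factors uniquely through dimension; the Grothendieck group of this category is thus isomorphic to the integers.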
|
https://en.wikipedia.org/wiki/Grothendieck_group
|
passage: If a complete type is realized by b in
$$
\mathcal{M}
$$
, then the type is typically denoted
$$
tp_n^{\mathcal{M}}(\boldsymbol{b}/A)
$$
and referred to as the complete type of b over A.
A type p(x) is said to be isolated by φ, for
$$
\varphi \in p(x)
$$
, if for all
$$
\psi(\boldsymbol{x}) \in p(\boldsymbol{x}),
$$
we have
$$
\operatorname{Th}(\mathcal M) \models \varphi(\boldsymbol{x}) \rightarrow \psi(\boldsymbol{x})
$$
. Since finite subsets of a type are always realized in
$$
\mathcal{M}
$$
, there is always an element b ∈ M^n such that φ(b) is true in
$$
\mathcal{M}
$$
; i.e.
$$
\mathcal{M} \models \varphi(\boldsymbol{b})
$$
, thus b realizes the entire isolated type. So isolated types will be realized in every elementary substructure or extension. Because of this, isolated types can never be omitted (see below).
A model that realizes the maximum possible variety of types is called a saturated model, and the ultrapower construction provides one way of producing saturated models.
## Examples of types
Consider the language L with one binary relation symbol, which we denote as
$$
\in
$$
.
|
https://en.wikipedia.org/wiki/Type_%28model_theory%29
|
passage: For the example, we have a canonical form available that reduces any string to one of length at most three, by decreasing the length monotonically. In general, it is not true that one can get a canonical form for the elements, by stepwise cancellation. One may have to use relations to expand a string many-fold, in order eventually to find a cancellation that brings the length right down.
The upshot is, in the worst case, that the relation between strings that says they are equal in
$$
G
$$
is an undecidable problem.
## Examples
The following groups have a solvable word problem:
- Automatic groups, including:
- Finite groups
- Negatively curved (aka.
|
https://en.wikipedia.org/wiki/Word_problem_for_groups
|
passage: ## Practical implications
Biologists and conservationists need to categorise and identify organisms in the course of their work. Difficulty assigning organisms reliably to a species constitutes a threat to the validity of research results, for example making measurements of how abundant a species is in an ecosystem moot. Surveys using a phylogenetic species concept reported 48% more species and accordingly smaller populations and ranges than those using nonphylogenetic concepts; this was termed "taxonomic inflation", which could cause a false appearance of change to the number of endangered species and consequent political and practical difficulties. Some observers claim that there is an inherent conflict between the desire to understand the processes of speciation and the need to identify and to categorise.
Conservation laws in many countries make special provisions to prevent species from going extinct. Hybridization zones between two species, one that is protected and one that is not, have sometimes led to conflicts between lawmakers, land owners and conservationists. One of the classic cases in North America is that of the protected northern spotted owl which hybridises with the unprotected California spotted owl and the barred owl; this has led to legal debates.
It has been argued that, since species are not comparable, simply counting them is not a valid measure of biodiversity; alternative measures of phylogenetic biodiversity have been proposed.
## History
|
https://en.wikipedia.org/wiki/Species
|
passage: In this type of fuzzy classification, generally, an input vector
$$
\textbf{x}
$$
is associated with multiple classes, each with a different confidence value.
Boosted ensembles of FDTs have been recently investigated as well, and they have shown performances comparable to those of other very efficient fuzzy classifiers.
## Metrics
Algorithms for constructing decision trees usually work top-down, by choosing a variable at each step that best splits the set of items. Different algorithms use different metrics for measuring "best". These generally measure the homogeneity of the target variable within the subsets. Some examples are given below. These metrics are applied to each candidate subset, and the resulting values are combined (e.g., averaged) to provide a measure of the quality of the split. Depending on the underlying metric, the performance of various heuristic algorithms for decision tree learning may vary significantly.
### Estimate of Positive Correctness
A simple and effective metric can be used to identify the degree to which true positives outweigh false positives (see Confusion matrix). This metric, "Estimate of Positive Correctness", is defined below:
$$
E_P = TP - FP
$$
In this equation, the total false positives (FP) are subtracted from the total true positives (TP). The resulting number gives an estimate on how many positive examples the feature could correctly identify within the data, with higher numbers meaning that the feature could correctly classify more positive samples.
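A minimal sketch of this metric in code (the function and split names are illustrative, not from the passage), applied to per-split confusion counts:

```python
def estimate_of_positive_correctness(tp, fp):
    """E_P = TP - FP: positives correctly identified beyond the false alarms."""
    return tp - fp

# Hypothetical candidate splits described by (true positives, false positives):
candidate_splits = {"split_a": (40, 10), "split_b": (55, 35), "split_c": (30, 2)}

scores = {name: estimate_of_positive_correctness(tp, fp)
          for name, (tp, fp) in candidate_splits.items()}
print(scores)                       # {'split_a': 30, 'split_b': 20, 'split_c': 28}
print(max(scores, key=scores.get))  # 'split_a' would be preferred under E_P
```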
|
https://en.wikipedia.org/wiki/Decision_tree_learning
|
passage: Thus Φ is a natural isomorphism with inverse Φ−1 = Ψ.
### Hom-set adjunction induces all of the above
Given functors F : D → C, G : C → D, and a hom-set adjunction Φ : homC(F−,−) → homD(−,G−), one can construct a counit–unit adjunction
$$
(\varepsilon,\eta):F\dashv G
$$
,
which defines families of initial and terminal morphisms, in the following steps:
- Let
$$
\varepsilon_X=\Phi_{GX,X}^{-1}(1_{GX})\in\mathrm{hom}_C(FGX,X)
$$
for each X in C, where
$$
1_{GX}\in\mathrm{hom}_D(GX,GX)
$$
is the identity morphism.
- Let
$$
\eta_Y=\Phi_{Y,FY}(1_{FY})\in\mathrm{hom}_D(Y,GFY)
$$
for each Y in D, where
$$
1_{FY}\in\mathrm{hom}_C(FY,FY)
$$
is the identity morphism.
- The bijectivity and naturality of Φ imply that each (GX, εX) is a terminal morphism from F to X in C, and each (FY, ηY) is an initial morphism from Y to G in D.
-
|
https://en.wikipedia.org/wiki/Adjoint_functors
|
passage: Important complexity classes
The complexity class P/poly is the set of languages that are decidable by polynomial-size circuit families. It turns out that there is a natural connection between circuit complexity and time complexity. Intuitively, a language with small time complexity (that is, requires relatively few sequential operations on a Turing machine), also has a small circuit complexity (that is, requires relatively few Boolean operations). Formally, it can be shown that if a language is in
$$
\mathsf{DTIME}(t(n))
$$
, where
$$
t
$$
is a function
$$
t:\mathbb{N} \to \mathbb{N}
$$
, then it has circuit complexity
$$
O(t^2(n))
$$
. It follows directly from this fact that
$$
\textsf{P} \subseteq \textsf{P/poly}.
$$
In other words, any problem that can be solved in polynomial time by a deterministic Turing machine can also be solved by a polynomial-size circuit family. It is further the case that the inclusion is proper, i.e.
$$
\textsf{P}\subsetneq \textsf{P/poly}
$$
(for example, there are some undecidable problems that are in P/poly).
P/poly has a number of properties that make it highly useful in the study of the relationships between complexity classes. In particular, it is helpful in investigating problems related to P versus NP.
|
https://en.wikipedia.org/wiki/Complexity_class
|
passage: ### Bits per pixel
The number of distinct colors that can be represented by a pixel depends on the number of bits per pixel (bpp). A 1 bpp image uses 1 bit for each pixel, so each pixel can be either on or off. Each additional bit doubles the number of colors available, so a 2 bpp image can have 4 colors, and a 3 bpp image can have 8 colors:
- 1 bpp, 2^1 = 2 colors (monochrome)
- 2 bpp, 2^2 = 4 colors
- 3 bpp, 2^3 = 8 colors
- 4 bpp, 2^4 = 16 colors
- 8 bpp, 2^8 = 256 colors
- 16 bpp, 2^16 = 65,536 colors ("Highcolor")
- 24 bpp, 2^24 = 16,777,216 colors ("Truecolor")
For color depths of 15 or more bits per pixel, the depth is normally the sum of the bits allocated to each of the red, green, and blue components. Highcolor, usually meaning 16 bpp, normally has five bits for red and blue each, and six bits for green, as the human eye is more sensitive to errors in green than in the other two primary colors. For applications involving transparency, the 16 bits may be divided into five bits each of red, green, and blue, with one bit left for transparency. A 24-bit depth allows 8 bits per component. On some systems, 32-bit depth is available: this means that each 24-bit pixel has an extra 8 bits to describe its opacity (for purposes of combining with another image).
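A small sketch of this arithmetic (helper names are ad hoc, and the 5-6-5 bit layout with red in the high bits is one common convention, assumed here for illustration):

```python
def num_colors(bpp):
    """Each extra bit per pixel doubles the number of representable colors."""
    return 2 ** bpp

for bpp in (1, 2, 3, 4, 8, 16, 24):
    print(bpp, num_colors(bpp))

def unpack_565(pixel):
    """Split a 16-bit highcolor value into 5-bit red, 6-bit green, 5-bit blue."""
    r = (pixel >> 11) & 0b11111
    g = (pixel >> 5) & 0b111111
    b = pixel & 0b11111
    return r, g, b

print(unpack_565(0xFFFF))  # (31, 63, 31): full intensity in each channel
```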
|
https://en.wikipedia.org/wiki/Pixel
|
passage: This led to the question of whether philosophical problems are really firstly linguistic problems. The resurgence of the view that language plays a significant role in the creation and circulation of concepts, and that the study of philosophy is essentially the study of language, is associated with what has been called the linguistic turn and philosophers such as Wittgenstein in 20th-century philosophy. These debates about language in relation to meaning and reference, cognition and consciousness remain active today.
### Mental faculty, organ or instinct
One definition sees language primarily as the mental faculty that allows humans to undertake linguistic behaviour: to learn languages and to produce and understand utterances. This definition stresses the universality of language to all humans, and it emphasizes the biological basis for the human capacity for language as a unique development of the human brain. Proponents of the view that the drive to language acquisition is innate in humans argue that this is supported by the fact that all cognitively normal children raised in an environment where language is accessible will acquire language without formal instruction. Languages may even develop spontaneously in environments where people live or grow up together without a common language; for example, creole languages and spontaneously developed sign languages such as Nicaraguan Sign Language. This view, which can be traced back to the philosophers Kant and Descartes, understands language to be largely innate, for example, in Chomsky's theory of universal grammar, or American philosopher Jerry Fodor's extreme innatist theory. These kinds of definitions are often applied in studies of language within a cognitive science framework and in neurolinguistics.
|
https://en.wikipedia.org/wiki/Language
|
passage: This field can represent any logic obtainable with the system
$$
(\land, \lor)
$$
and has the added benefit of the arsenal of algebraic analysis tools for fields.
More specifically, if one associates
$$
F
$$
with 0 and
$$
T
$$
with 1, one can interpret the logical "AND" operation as multiplication on
$$
\mathbb{F}_2
$$
and the "XOR" operation as addition on
$$
\mathbb{F}_2
$$
:
$$
\begin{matrix}
r = p \land q & \Leftrightarrow & r = p \cdot q \pmod 2 \\[3pt]
r = p \oplus q & \Leftrightarrow & r = p + q \pmod 2 \\
\end{matrix}
$$
The description of a Boolean function as a polynomial in
$$
\mathbb{F}_2
$$
, using this basis, is called the function's algebraic normal form.
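A tiny sanity check of this correspondence (a sketch that simply enumerates the four input pairs):

```python
from itertools import product

for p, q in product((0, 1), repeat=2):
    assert (p and q) == (p * q) % 2   # AND  <->  multiplication in F_2
    assert (p ^ q) == (p + q) % 2     # XOR  <->  addition in F_2
print("AND matches multiplication and XOR matches addition over F_2")
```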
## Exclusive or in natural language
Disjunction is often understood exclusively in natural languages. In English, the disjunctive word "or" is often understood exclusively, particularly when used with the particle "either". The English example below would normally be understood in conversation as implying that Mary is not both a singer and a poet. Jennings quotes numerous authors saying that the word "or" has an exclusive sense. See Chapter 3, "The First Myth of 'Or'":
1. Mary is a singer or a poet.
However, disjunction can also be understood inclusively, even in combination with "either".
|
https://en.wikipedia.org/wiki/Exclusive_or
|
passage: Ultra prefilters as maximal prefilters
To characterize ultra prefilters in terms of "maximality," the following relation is needed.
Given two families of sets
$$
M
$$
and
$$
N,
$$
the family
$$
M
$$
is said to be coarser than
$$
N,
$$
and
$$
N
$$
is finer than and subordinate to
$$
M,
$$
written
$$
M \leq N
$$
or , if for every
$$
C \in M,
$$
there is some
$$
F \in N
$$
such that
$$
F \subseteq C.
$$
The families
$$
M
$$
and
$$
N
$$
are called equivalent if
$$
M \leq N
$$
and
$$
N \leq M.
$$
The families
$$
M
$$
and
$$
N
$$
are comparable if one of these sets is finer than the other.
The subordination relationship, i.e.
$$
\,\geq,\,
$$
is a preorder so the above definition of "equivalent" does form an equivalence relation.
If
$$
M \subseteq N
$$
then
$$
M \leq N
$$
but the converse does not hold in general.
However, if
$$
N
$$
is upward closed, such as a filter, then
$$
M \leq N
$$
if and only if
$$
M \subseteq N.
$$
Every prefilter is equivalent to the filter that it generates. This shows that it is possible for filters to be equivalent to sets that are not filters.
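For finite families of sets the subordination relation defined above can be checked directly; a small sketch (the names and the example families are ad hoc):

```python
def is_coarser(M, N):
    """Return True if M <= N: every C in M contains some F in N."""
    return all(any(F <= C for F in N) for C in M)

M = [{1, 2, 3}, {2, 3, 4}]
N = [{2, 3}, {3}]
print(is_coarser(M, N))  # True: each member of M has a subset in N
print(is_coarser(N, M))  # False, so M and N are not equivalent
```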
|
https://en.wikipedia.org/wiki/Ultrafilter_on_a_set
|
passage: O(log (n)) O(log (n)) O(log (n)) O(log (n)) O(n) Red–black tree O(log (n)) O(log (n)) O(log (n)) O(log (n)) O(log (n)) O(log (n)) O(log (n)) O(log (n)) O(n) Splay tree O(log (n)) O(log (n)) O(log (n)) O(log (n)) O(log (n)) O(log (n)) O(n) AVL tree O(log (n)) O(log (n)) O(log (n)) O(log (n)) O(log (n)) O(log (n)) O(log (n)) O(log (n)) O(n) K-d tree O(log (n)) O(log (n)) O(log (n)) O(log (n)) O(n) O(n) O(n) O(n) O(n)
- Linear search on a list of n elements. In the absolute worst case, the search must visit every element once. This happens when the value being searched for is either the last element in the list, or is not in the list.
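A minimal linear-search sketch illustrating the worst case described above (counting comparisons alongside the result):

```python
def linear_search(items, target):
    """Return (index, comparisons) scanning left to right; index -1 if absent."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 1, 5]
print(linear_search(data, 7))  # (0, 1)  best case: first element
print(linear_search(data, 5))  # (4, 5)  worst case: last element, n comparisons
print(linear_search(data, 8))  # (-1, 5) worst case: absent, n comparisons
```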
|
https://en.wikipedia.org/wiki/Best%2C_worst_and_average_case
|
passage: In mathematics, the Menger sponge (also known as the Menger cube, Menger universal curve, Sierpinski cube, or Sierpinski sponge) is a fractal curve. It is a three-dimensional generalization of the one-dimensional Cantor set and two-dimensional Sierpinski carpet. It was first described by Karl Menger in 1926, in his studies of the concept of topological dimension.
## Construction
The construction of a Menger sponge can be described as follows:
1. Begin with a cube.
1. Divide every face of the cube into nine squares in a similar manner to a Rubik's Cube. This sub-divides the cube into 27 smaller cubes.
1. Remove the smaller cube in the middle of each face and remove the smaller cube in the center of the larger cube, leaving 20 smaller cubes. This is a level 1 Menger sponge (resembling a void cube).
1. Repeat steps two and three for each of the remaining smaller cubes and continue to iterate ad infinitum.
The second iteration gives a level 2 sponge, the third iteration gives a level 3 sponge, and so on. The Menger sponge itself is the limit of this process after an infinite number of iterations.
## Properties
The
$$
n
$$
th stage of the Menger sponge,
$$
M_n
$$
, is made up of
$$
20^n
$$
smaller cubes, each with a side length of (1/3)^n.
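A short sketch of these counts (pure arithmetic from the construction): at stage n there are 20^n cubes of side (1/3)^n, so the total volume (20/27)^n tends to zero.

```python
def menger_stage(n):
    """Number of cubes, side length of each cube, and total volume at stage n."""
    cubes = 20 ** n
    side = (1 / 3) ** n
    volume = cubes * side ** 3  # = (20/27)**n, which tends to 0
    return cubes, side, volume

for n in range(5):
    print(n, menger_stage(n))
```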
|
https://en.wikipedia.org/wiki/Menger_sponge
|
passage: However Lavrentiev in 1926 showed that there are circumstances where there is no optimum solution but one can be approached arbitrarily closely by increasing numbers of sections. The Lavrentiev Phenomenon identifies a difference in the infimum of a minimization problem across different classes of admissible functions. For instance the following problem, presented by Manià in 1934:
$$
L[x] = \int_0^1 (x^3-t)^2 x'^6 \, dt,
$$
$$
{A} = \{x \in W^{1,1}(0,1) : x(0)=0,\ x(1)=1\}.
$$
Clearly,
$$
x(t) = t^{\frac{1}{3}}
$$
minimizes the functional, but we find any function
$$
x \in W^{1, \infty}
$$
gives a value bounded away from the infimum.
|
https://en.wikipedia.org/wiki/Calculus_of_variations
|
passage: The ocean has absorbed 20 to 30% of emitted CO2 over the last two decades. CO2 is only removed from the atmosphere for the long term when it is stored in the Earth's crust, which is a process that can take millions of years to complete.
### Land surface changes
Around 30% of Earth's land area is largely unusable for humans (glaciers, deserts, etc.), 26% is forests, 10% is shrubland and 34% is agricultural land. Deforestation is the main land use change contributor to global warming, as the destroyed trees release CO2 and are not replaced by new trees, removing that carbon sink. Between 2001 and 2018, 27% of deforestation was from permanent clearing to enable agricultural expansion for crops and livestock. Another 24% has been lost to temporary clearing under the shifting cultivation agricultural systems. 26% was due to logging for wood and derived products, and wildfires have accounted for the remaining 23%. Some forests have not been fully cleared, but were already degraded by these impacts. Restoring these forests also recovers their potential as a carbon sink.
Local vegetation cover impacts how much of the sunlight gets reflected back into space (albedo), and how much heat is lost by evaporation. For instance, the change from a dark forest to grassland makes the surface lighter, causing it to reflect more sunlight. Deforestation can also modify the release of chemical compounds that influence clouds, and can change wind patterns.
|
https://en.wikipedia.org/wiki/Climate_change
|
passage: In mathematics, Borel summation is a summation method for divergent series, introduced by . It is particularly useful for summing divergent asymptotic series, and in some sense gives the best possible sum for such series. There are several variations of this method that are also called Borel summation, and a generalization of it called Mittag-Leffler summation.
## Definition
There are (at least) three slightly different methods called Borel summation. They differ in which series they can sum, but are consistent, meaning that if two of the methods sum the same series they give the same answer.
Throughout, let A(z) denote a formal power series
$$
A(z) = \sum_{k = 0}^\infty a_kz^k,
$$
and define the Borel transform of to be its corresponding exponential series
$$
\mathcal{B}A(t) \equiv \sum_{k=0}^\infty \frac{a_k}{k!}t^k.
$$
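A small sketch of the Borel transform acting on the coefficients of a (truncated) formal power series; the example series is an arbitrary choice for illustration:

```python
from math import factorial

def borel_transform(coeffs):
    """Map coefficients a_k of A(z) to a_k / k!, the coefficients of (BA)(t)."""
    return [a / factorial(k) for k, a in enumerate(coeffs)]

# Coefficients a_k = k! give the divergent series sum k! z^k; its Borel
# transform has coefficients all equal to 1, the geometric series 1/(1 - t).
a = [factorial(k) for k in range(6)]
print(borel_transform(a))  # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```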
|
https://en.wikipedia.org/wiki/Borel_summation
|
passage: ## Historical development
In the 19th century, several disparate mathematical properties were understood that would later be seen as consequences of compactness. On the one hand, Bernard Bolzano (1817) had been aware that any bounded sequence of points (in the line or plane, for instance) has a subsequence that must eventually get arbitrarily close to some other point, called a limit point.
Bolzano's proof relied on the method of bisection: the sequence was placed into an interval that was then divided into two equal parts, and a part containing infinitely many terms of the sequence was selected.
The process could then be repeated by dividing the resulting smaller interval into smaller and smaller parts – until it closes down on the desired limit point. The full significance of Bolzano's theorem, and its method of proof, would not emerge until almost 50 years later when it was rediscovered by Karl Weierstrass.
In the 1880s, it became clear that results similar to the Bolzano–Weierstrass theorem could be formulated for spaces of functions rather than just numbers or geometrical points.
The idea of regarding functions as themselves points of a generalized space dates back to the investigations of Giulio Ascoli and Cesare Arzelà.
The culmination of their investigations, the Arzelà–Ascoli theorem, was a generalization of the Bolzano–Weierstrass theorem to families of continuous functions, the precise conclusion of which was that it was possible to extract a uniformly convergent sequence of functions from a suitable family of functions. The uniform limit of this sequence then played precisely the same role as Bolzano's "limit point".
|
https://en.wikipedia.org/wiki/Compact_space
|
passage: ### For the digestive system
- Lower digestive tract: laxatives, antispasmodics, antidiarrhoeals, bile acid sequestrants, opioids.
- Upper digestive tract: antacids, reflux suppressants, antiflatulents, antidopaminergics, proton pump inhibitors (PPIs), H2-receptor antagonists, cytoprotectants, prostaglandin analogues.
### For the cardiovascular system
- Affecting blood pressure/(antihypertensive drugs): ACE inhibitors, angiotensin receptor blockers, beta-blockers, α blockers, calcium channel blockers, thiazide diuretics, loop diuretics, aldosterone inhibitors.
- Coagulation: anticoagulants, heparin, antiplatelet drugs, fibrinolytics, anti-hemophilic factors, haemostatic drugs.
- General: β-receptor blockers ("beta blockers"), calcium channel blockers, diuretics, cardiac glycosides, antiarrhythmics, nitrate, antianginals, vasoconstrictors, vasodilators.
- HMG-CoA reductase inhibitors (statins) for lowering LDL cholesterol: hypolipidaemic agents.
### For the central nervous system
Drugs affecting the central nervous system include psychedelics, hypnotics, anaesthetics, antipsychotics, eugeroics, antidepressants (including tricyclic antidepressants, monoamine oxidase inhibitors, lithium salts, and selective serotonin reuptake inhibitors (SSRIs)), antiemetics, anticonvulsants/antiepileptics, anxiolytics, barbiturates, movement disorder (e.g., Parkinson's disease) drugs, nootropics, stimulants (including amphetamines), benzodiazepines, cyclopyrrolones, dopamine antagonists, antihistamines, cholinergics, anticholinergics, emetics, cannabinoids, and 5-HT (serotonin) antagonists.
|
https://en.wikipedia.org/wiki/Medication
|
passage: For this reason, orbitals with the same value of n are said to comprise a "shell". Orbitals with the same value of n and also the same value of are even more closely related, and are said to comprise a "subshell".
## Quantum numbers
Because of the quantum mechanical nature of the electrons around a nucleus, atomic orbitals can be uniquely defined by a set of integers known as quantum numbers. These quantum numbers occur only in certain combinations of values, and their physical interpretation changes depending on whether real or complex versions of the atomic orbitals are employed.
### Complex orbitals
In physics, the most common orbital descriptions are based on the solutions to the hydrogen atom, where orbitals are given by the product between a radial function and a pure spherical harmonic. The quantum numbers, together with the rules governing their possible values, are as follows:
The principal quantum number n describes the energy of the electron and is always a positive integer. In fact, it can be any positive integer, but for reasons discussed below, large numbers are seldom encountered. Each atom has, in general, many orbitals associated with each value of n; these orbitals together are sometimes called electron shells.
The azimuthal quantum number ℓ describes the orbital angular momentum of each electron and is a non-negative integer. Within a shell where n is some integer n_0, ℓ ranges across all (integer) values satisfying the relation
$$
0 \le \ell \le n_0-1
$$
.
|
https://en.wikipedia.org/wiki/Atomic_orbital
|
passage: In this case classifying maps give rise to the first Chern class of
$$
X
$$
, in
$$
H^2(X)
$$
(integral cohomology).
There is a further, analogous theory with quaternionic (real dimension four) line bundles. This gives rise to one of the Pontryagin classes, in real four-dimensional cohomology.
In this way foundational cases for the theory of characteristic classes depend only on line bundles. According to a general splitting principle this can determine the rest of the theory (if not explicitly).
There are theories of holomorphic line bundles on complex manifolds, and invertible sheaves in algebraic geometry, that work out a line bundle theory in those areas.
|
https://en.wikipedia.org/wiki/Line_bundle
|
passage: Let there exist a set of values h_1, ..., h_N of some function at scattered points c_1, ..., c_N. Find a function
$$
\mathbf{f}(\mathbf{x})
$$
that will meet the condition
$$
\mathbf{f}(\mathbf{x})=1
$$
for points lying on the shape and
$$
\mathbf{f}(\mathbf{x})\neq1
$$
for points not lying on the shape
As J. C. Carr et al. showed, this function takes the form
$$
\mathbf{f}(\mathbf{x})=\sum_{i=1}^N \lambda_i \varphi(\mathbf{x},\mathbf{c}_i)
$$
where
$$
\varphi
$$
is a radial basis function and
$$
\lambda
$$
are the coefficients that are the solution of the following linear system of equations:
$$
\begin{bmatrix} \varphi(c_1,c_1) & \varphi(c_1,c_2) & \cdots & \varphi(c_1,c_N) \\ \varphi(c_2,c_1) & \varphi(c_2,c_2) & \cdots & \varphi(c_2,c_N) \\ \vdots & \vdots & \ddots & \vdots \\ \varphi(c_N,c_1) & \varphi(c_N,c_2) & \cdots & \varphi(c_N,c_N) \end{bmatrix} \begin{bmatrix} \lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_N \end{bmatrix}=\begin{bmatrix} h_1 \\ h_2 \\ \vdots \\ h_N \end{bmatrix}
$$
For determination of surface, it is necessary to estimate the value of function
$$
\mathbf{f}(\mathbf{x})
$$
in specific points x.
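A compact NumPy sketch of this fitting step (the Gaussian kernel, the sample points, and the target values are illustrative assumptions; other radial basis functions can be used): build the matrix of φ(c_i, c_j), solve for the coefficients λ, and evaluate f at new points.

```python
import numpy as np

def phi(x, c, eps=1.0):
    """Gaussian radial basis function (one common choice)."""
    return np.exp(-eps * np.sum((x - c) ** 2, axis=-1))

# Illustrative scattered points c_i and prescribed values h_i.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
h = np.array([1.0, 1.0, 1.0, 0.5])

# Interpolation matrix Phi[i, j] = phi(c_i, c_j); solve Phi @ lam = h.
Phi = np.array([[phi(ci, cj) for cj in centers] for ci in centers])
lam = np.linalg.solve(Phi, h)

def f(x):
    """f(x) = sum_i lambda_i * phi(x, c_i)."""
    return sum(l * phi(x, c) for l, c in zip(lam, centers))

print(f(centers[0]))            # reproduces h[0] = 1.0 at a data point
print(f(np.array([0.5, 0.5])))  # value at a new query point
```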
|
https://en.wikipedia.org/wiki/Hierarchical_RBF
|
passage: The following are the approximated solutions after five iterations.
| Iteration | x1 | x2 | x3 | x4 |
|---|---|---|---|---|
| 1 | 0.6 | 2.27272 | -1.1 | 1.875 |
| 2 | 1.04727 | 1.7159 | -0.80522 | 0.88522 |
| 3 | 0.93263 | 2.05330 | -1.0493 | 1.13088 |
| 4 | 1.01519 | 1.95369 | -0.9681 | 0.97384 |
| 5 | 0.98899 | 2.0114 | -1.0102 | 1.02135 |
The exact solution of the system is (1, 2, -1, 1).
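A small sketch of the iteration that produces such a table (NumPy; the particular 4×4 system below is assumed for illustration and was chosen so that its first Jacobi iterate matches the first row of the table above):

```python
import numpy as np

def jacobi(A, b, x0, iterations):
    """Plain Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k)."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = x0.astype(float)
    for _ in range(iterations):
        x = (b - R @ x) / D
    return x

A = np.array([[10., -1.,  2.,  0.],
              [-1., 11., -1.,  3.],
              [ 2., -1., 10., -1.],
              [ 0.,  3., -1.,  8.]])
b = np.array([6., 25., -11., 15.])

print(jacobi(A, b, np.zeros(4), 1))   # first iterate: 0.6, 2.2727..., -1.1, 1.875
print(jacobi(A, b, np.zeros(4), 25))  # close to the exact solution (1, 2, -1, 1)
```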
|
https://en.wikipedia.org/wiki/Jacobi_method
|
passage: This is backed up by studies in the Midwest.
## Cost factors
Cogenerators find favor because most buildings already burn fuels, and the cogeneration can extract more value from the fuel. Local production has no electricity transmission losses on long distance power lines or energy losses from the Joule effect in transformers where in general 8-15% of the energy is lost (see also cost of electricity by source). Some larger installations utilize combined cycle generation. Usually this consists of a gas turbine whose exhaust boils water for a steam turbine in a Rankine cycle. The condenser of the steam cycle provides the heat for space heating or an absorptive chiller. Combined cycle plants with cogeneration have the highest known thermal efficiencies, often exceeding 85%. In countries with high pressure gas distribution, small turbines can be used to bring the gas pressure to domestic levels whilst extracting useful energy. If the UK were to implement this countrywide an additional 2-4 GWe would become available. (Note that the energy is already being generated elsewhere to provide the high initial gas pressure – this method simply distributes the energy via a different route.)
Microgrid
A microgrid is a localized grouping of electricity generation, energy storage, and loads that normally operates connected to a traditional centralized grid (macrogrid). This single point of common coupling with the macrogrid can be disconnected. The microgrid can then function autonomously. Generation and loads in a microgrid are usually interconnected at low voltage and it can operate in DC, AC, or the combination of both.
|
https://en.wikipedia.org/wiki/Distributed_generation
|
passage: The theorem establishes Shannon's channel capacity for such a communication link, a bound on the maximum amount of error-free information per time unit that can be transmitted with a specified bandwidth in the presence of the noise interference, assuming that the signal power is bounded, and that the Gaussian noise process is characterized by a known power or power spectral density. The law is named after Claude Shannon and Ralph Hartley.
## Statement of the theorem
The Shannon–Hartley theorem states the channel capacity
$$
C
$$
, meaning the theoretical tightest upper bound on the information rate of data that can be communicated at an arbitrarily low error rate using an average received signal power
$$
S
$$
through an analog communication channel subject to additive white Gaussian noise (AWGN) of power N:
$$
C = B \log_2 \left( 1+\frac{S}{N} \right)
$$
where
-
$$
C
$$
is the channel capacity in bits per second, a theoretical upper bound on the net bit rate (information rate, sometimes denoted
$$
I
$$
) excluding error-correction codes;
-
$$
B
$$
is the bandwidth of the channel in hertz (passband bandwidth in case of a bandpass signal);
-
$$
S
$$
is the average received signal power over the bandwidth (in case of a carrier-modulated passband transmission, often denoted C), measured in watts (or volts squared);
-
$$
N
$$
is the average power of the noise and interference over the bandwidth, measured in watts (or volts squared); and
-
$$
S/N
$$
is the signal-to-noise ratio (SNR) or the carrier-to-noise ratio (CNR) of the communication signal to the noise and interference at the receiver (expressed as a linear power ratio, not as logarithmic decibels).
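A direct sketch of the formula (the bandwidth and SNR values are arbitrary examples, not from the passage):

```python
from math import log2

def shannon_capacity(bandwidth_hz, snr_linear):
    """Channel capacity C = B * log2(1 + S/N) in bits per second."""
    return bandwidth_hz * log2(1 + snr_linear)

# Example: a 3 kHz channel at a linear SNR of 1000 (30 dB).
print(shannon_capacity(3000, 1000))  # about 29,900 bit/s
```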
|
https://en.wikipedia.org/wiki/Shannon%E2%80%93Hartley_theorem
|
passage: ## Examples
$$
{0 \choose 0}_q = {1 \choose 0}_q = 1
$$
$$
{1 \choose 1}_q = \frac{1-q}{1-q}=1
$$
$$
{2 \choose 1}_q = \frac{1-q^2}{1-q}=1+q
$$
$$
{3 \choose 1}_q = \frac{1-q^3}{1-q}=1+q+q^2
$$
$$
{3 \choose 2}_q = \frac{(1-q^3)(1-q^2)}{(1-q)(1-q^2)}=1+q+q^2
$$
$$
{4 \choose 2}_q = \frac{(1-q^4)(1-q^3)}{(1-q)(1-q^2)}=(1+q^2)(1+q+q^2)=1+q+2q^2+q^3+q^4
$$
$$
{6 \choose 3}_q = \frac{(1-q^6)(1-q^5)(1-q^4)}{(1-q)(1-q^2)(1-q^3)}=(1+q^2)(1+q^3)(1+q+q^2+q^3+q^4)=1 + q + 2 q^2 + 3 q^3 + 3 q^4 + 3 q^5 + 3 q^6 + 2 q^7 + q^8 + q^9
$$
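A short sketch that reproduces these examples by expanding the Gaussian binomial coefficient as a polynomial in q from its product formula (using sympy; the function name is ad hoc):

```python
import sympy as sp

q = sp.symbols('q')

def gaussian_binomial(m, r):
    """Product formula: prod_{i=0}^{r-1} (1 - q**(m-i)) / (1 - q**(i+1))."""
    expr = sp.Integer(1)
    for i in range(r):
        expr *= (1 - q**(m - i)) / (1 - q**(i + 1))
    return sp.expand(sp.cancel(expr))

print(gaussian_binomial(4, 2))  # q**4 + q**3 + 2*q**2 + q + 1
print(gaussian_binomial(6, 3))  # matches the degree-9 expansion shown above
```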
## Combinatorial descriptions
### Inversions
One combinatorial description of Gaussian binomial coefficients involves inversions.
|
https://en.wikipedia.org/wiki/Gaussian_binomial_coefficient
|
passage: Hayward, CA: Living Control Systems Publishing. .
- Mansell, Warren (Ed.), (2020). The Interdisciplinary Handbook of Perceptual Control Theory: Living Control Systems IV. Cambridge: Academic Press. .
- Marken, Richard S. (1992) Mind readings: Experimental studies of purpose. Benchmark Publications: New Canaan, CT.
- Marken, Richard S. (2002) More mind readings: Methods and models in the study of purpose. Chapel Hill, NC: New View.
- Pfau, Richard H. (2017). Your Behavior: Understanding and Changing the Things You Do. St. Paul, MN: Paragon House.
- Plooij, F. X. (1984). The behavioral development of free-living chimpanzee babies and infants. Norwood, N.J.: Ablex.
- Plooij, F. X. (2003). "The trilogy of mind". In M. Heimann (Ed.), Regression periods in human infancy (pp. 185–205). Mahwah, NJ: Erlbaum.
- Powers, William T. (1973). Behavior: The control of perception. Chicago: Aldine de Gruyter. . [2nd exp. ed. = Powers (2005)].
- Powers, William T. (1989). Living control systems. [Selected papers 1960–1988.] New Canaan, CT: Benchmark Publications. .
- Powers, William T. (1992). Living control systems II. [Selected papers 1959–1990.]
|
https://en.wikipedia.org/wiki/Perceptual_control_theory
|
passage: ### Esophagus
#### Gastroesophageal reflux disease (GERD)
A condition that is a result of stomach contents consistently coming back up into the esophagus causing troublesome symptoms or complications. Symptoms are considered troublesome based on how disruptive they are to a patient's daily life and well-being. This definition was standardized by the Montreal Consensus in 2006. Symptoms include a painful feeling in the middle of the chest and feeling stomach contents coming back up into the mouth. Other symptoms include chest pain, nausea, difficulty swallowing, painful swallowing, coughing, and hoarseness. Risk factors include obesity, pregnancy, smoking, hiatal hernia, certain medications, and certain foods. Diagnosis is usually based on symptoms and medical history, with further testing only after treatment has been ineffective. Further diagnosis can be achieved by measuring how much acid enters the esophagus or looking into the esophagus with a scope. Treatment and management options include lifestyle modifications, medications, and surgery if there is no improvement with other interventions. Lifestyle modifications include not lying down for three hours after eating, lying down on the left side, elevating the head while lying down by raising the head of the bed or using extra pillows, losing weight, stopping smoking, and avoiding coffee, mint, alcohol, chocolate, fatty foods, acidic foods, and spicy foods. Medications include antacids, proton pump inhibitors, and H2 receptor blockers. Surgery is usually a Nissen fundoplication and is performed by a surgeon.
|
https://en.wikipedia.org/wiki/Gastroenterology
|
passage: Absolute continuity is a fundamental concept in the Lebesgue theory of integration, allowing the formulation of a generalized version of the fundamental theorem of calculus that applies to the Lebesgue integral.
### Differentiation
The notion of the derivative of a function or differentiability originates from the concept of approximating a function near a given point using the "best" linear approximation. This approximation, if it exists, is unique and is given by the line that is tangent to the function at the given point
$$
a
$$
, and the slope of the line is the derivative of the function at
$$
a
$$
.
A function
$$
f:\mathbb{R}\to\mathbb{R}
$$
is differentiable at a if the limit
$$
f'(a)=\lim_{h\to 0}\frac{f(a+h)-f(a)}{h}
$$
exists. This limit is known as the derivative of f at a, and the function
$$
f'
$$
, possibly defined on only a subset of
$$
\mathbb{R}
$$
, is the derivative (or derivative function) of f. If the derivative exists everywhere, the function is said to be differentiable.
As a simple consequence of the definition,
$$
f
$$
is continuous at a if it is differentiable there. Differentiability is therefore a stronger regularity condition (condition describing the "smoothness" of a function) than continuity, and it is possible for a function to be continuous on the entire real line but not differentiable anywhere (see Weierstrass's nowhere differentiable continuous function).
|
https://en.wikipedia.org/wiki/Real_analysis
|
passage: The interrupted customer remains in the service area until server is fixed.
Customer waiting behavior
- Balking: customers decide not to join the queue if it is too long
- Jockeying: customers switch between queues if they think they will get served faster by doing so
- Reneging: customers leave the queue if they have waited too long for service
Arriving customers not served (either due to the queue having no buffer, or due to balking or reneging by the customer) are also known as dropouts. The average rate of dropouts is a significant parameter describing a queue.
## Queueing networks
Queue networks are systems in which multiple queues are connected by customer routing. When a customer is serviced at one node, it can join another node and queue for service, or leave the network.
For networks of m nodes, the state of the system can be described by an m–dimensional vector (x1, x2, ..., xm) where xi represents the number of customers at each node.
The simplest non-trivial networks of queues are called tandem queues. The first significant results in this area were Jackson networks, for which an efficient product-form stationary distribution exists and the mean value analysis (which allows average metrics such as throughput and sojourn times) can be computed. If the total number of customers in the network remains constant, the network is called a closed network and has been shown to also have a product–form stationary distribution by the Gordon–Newell theorem.
|
https://en.wikipedia.org/wiki/Queueing_theory
|
passage: These are the elements of
$$
\operatorname{Hom}_G(F,M)
$$
, that is, functions
$$
\phi_n\colon G^n \to M
$$
that obey
$$
g\phi_n(g_1,g_2,\ldots, g_n)= \phi_n(gg_1,gg_2,\ldots, gg_n).
$$
The coboundary operator
$$
\delta\colon C^n \to C^{n+1}
$$
is now naturally defined by, for example,
$$
\delta \phi_2(g_1, g_2,g_3)= \phi_2(g_2,g_3)-\phi_2(g_1,g_3)+ \phi_2(g_1,g_2).
$$
The relation to the coboundary operator d that was defined in the previous section, and which acts on the "inhomogeneous" cochains
$$
\varphi
$$
, is given by reparameterizing so that
$$
\begin{align}
\varphi_2(g_1,g_2) &= \phi_3(1, g_1,g_1g_2) \\
\varphi_3(g_1,g_2,g_3) &= \phi_4(1, g_1,g_1g_2, g_1g_2g_3),
\end{align}
$$
and so on.
|
https://en.wikipedia.org/wiki/Group_cohomology
|
passage: (4)
$$
X
$$
is a quotient space of a weakly locally compact space.
As explained in the final topology article, condition (2) is well-defined, even though the family of continuous maps from arbitrary compact spaces is not a set but a proper class.
The equivalence between conditions (1) and (2) follows from the fact that every inclusion from a subspace is a continuous map; and on the other hand, every continuous map
$$
f:K\to X
$$
from a compact space
$$
K
$$
has a compact image
$$
f(K)
$$
and thus factors through the inclusion of the compact subspace
$$
f(K)
$$
into
$$
X.
$$
### Definition 2
Informally, a space whose topology is determined by all continuous maps from arbitrary compact Hausdorff spaces.
A topological space
$$
X
$$
is called compactly-generated or a k-space if it satisfies any of the following equivalent conditions:
(1) The topology on
$$
X
$$
coincides with the final topology with respect to the family of all continuous maps
$$
f:K\to X
$$
from all compact Hausdorff spaces
$$
K.
$$
In other words, it satisfies the condition:
a set
$$
A\subseteq X
$$
is open (resp. closed) in
$$
X
$$
exactly when
$$
f^{-1}(A)
$$
is open (resp.
|
https://en.wikipedia.org/wiki/Compactly_generated_space
|
passage: The action of the mapping class groups
$$
\operatorname{Mod}(S)
$$
on the vertices carries over to the full complex. The action is not properly discontinuous (the stabiliser of a simple closed curve is an infinite group).
This action, together with combinatorial and geometric properties of the curve complex, can be used to prove various properties of the mapping class group. In particular, it explains some of the hyperbolic properties of the mapping class group: while as mentioned in the previous section the mapping class group is not a hyperbolic group it has some properties reminiscent of those.
### Other complexes with a mapping class group action
#### Pants complex
The pants complex of a compact surface
$$
S
$$
is a complex whose vertices are the pants decompositions of
$$
S
$$
(isotopy classes of maximal systems of disjoint simple closed curves). The action of
$$
\operatorname{Mod}(S)
$$
extends to an action on this complex. This complex is quasi-isometric to Teichmüller space endowed with the Weil–Petersson metric.
#### Markings complex
The stabilisers of the mapping class group's action on the curve and pants complexes are quite large. The markings complex is a complex whose vertices are markings of
$$
S
$$
, which are acted upon by, and have trivial stabilisers in, the mapping class group
$$
\operatorname{Mod}(S)
$$
.
|
https://en.wikipedia.org/wiki/Mapping_class_group_of_a_surface
|
passage: ### Surjections as right invertible functions
The function g is said to be a right inverse of the function f if f(g(y)) = y for every y in Y (g can be undone by f). In other words, g is a right inverse of f if the composition of g and f in that order is the identity function on the domain Y of g. The function g need not be a complete inverse of f because the composition in the other order, g ∘ f, may not be the identity function on the domain X of f. In other words, f can undo or "reverse" g, but cannot necessarily be reversed by it.
Every function with a right inverse is necessarily a surjection. The proposition that every surjective function has a right inverse is equivalent to the axiom of choice.
If f is surjective and B is a subset of Y, then f(f⁻¹(B)) = B. Thus, B can be recovered from its preimage f⁻¹(B).
For example, in the first illustration in the gallery, there is some function g such that g(C) = 4. There is also some function f such that f(4) = C. It doesn't matter that g is not unique (it would also work if g(C) equals 3); it only matters that f "reverses" g.
### Surjections as epimorphisms
A function f : X → Y is surjective if and only if it is right-cancellative: given any functions g, h : Y → Z, whenever g ∘ f = h ∘ f, then g = h.
|
https://en.wikipedia.org/wiki/Surjective_function
|
passage: Upon differentiation, a small number of genes, including OCT4 and NANOG, are methylated and their promoters repressed to prevent their further expression. Consistently, DNA methylation-deficient embryonic stem cells rapidly enter apoptosis upon in vitro differentiation.
#### Nucleosome positioning
While the DNA sequence of most cells of an organism is the same, the binding patterns of transcription factors and the corresponding gene expression patterns are different. To a large extent, differences in transcription factor binding are determined by the chromatin accessibility of their binding sites through histone modification and/or pioneer factors. In particular, it is important to know whether a nucleosome is covering a given genomic binding site or not. This can be determined using a chromatin immunoprecipitation assay.
##### Histone acetylation and methylation
DNA-nucleosome interactions are characterized by two states: either tightly bound by nucleosomes and transcriptionally inactive, called heterochromatin, or loosely bound and usually, but not always, transcriptionally active, called euchromatin. The epigenetic processes of histone methylation and acetylation, and their inverses demethylation and deacetylation primarily account for these changes. The effects of acetylation and deacetylation are more predictable. An acetyl group is either added to or removed from the positively charged Lysine residues in histones by enzymes called histone acetyltransferases or histone deacetylases, respectively.
|
https://en.wikipedia.org/wiki/Cellular_differentiation
|
passage: To further prove her point, she completed another experiment with infants who have not been influenced by the environment of social norms, like the adult male getting more opportunities than women. She found no difference between infants besides size. After this research proved the original hypothesis wrong, Hollingworth was able to show there is no difference between the physiological and psychological traits of men and women, and women are not impaired during menstruation.
The first half of the 1900s was filled with new theories and it was a turning point for women's recognition within the field of psychology. In addition to the contributions made by Leta Stetter Hollingworth and Anna Freud, Mary Whiton Calkins invented the paired associates technique of studying memory and developed self-psychology. Karen Horney developed the concept of "womb envy" and neurotic needs. Psychoanalyst Melanie Klein impacted developmental psychology with her research of play therapy. These great discoveries and contributions were made during struggles of sexism, discrimination, and little recognition for their work.
#### 1950–1999
Women in the second half of the 20th century continued to do research that had large-scale impacts on the field of psychology. Mary Ainsworth's work centered around attachment theory. Building off fellow psychologist John Bowlby, Ainsworth spent years doing fieldwork to understand the development of mother-infant relationships. In doing this field research, Ainsworth developed the Strange Situation Procedure, a laboratory procedure meant to study attachment style by separating and uniting a child with their mother several different times under different circumstances.
|
https://en.wikipedia.org/wiki/Psychology
|
passage: If one or more candidates remain and overload resolution succeeds, the invocation is well-formed.
## Example
The following example illustrates a basic instance of SFINAE:
```cpp
struct Test {
    typedef int foo;
};

template <typename T>
void f(typename T::foo) {}  // Definition #1

template <typename T>
void f(T) {}  // Definition #2

int main() {
    f<Test>(10);  // Call #1.
    f<int>(10);   // Call #2. Without error (even though there is no int::foo)
                  // thanks to SFINAE.
    return 0;
}
```
Here, attempting to use a non-class type in a qualified name (`T::foo`) results in a deduction failure for `f<int>` because `int` has no nested type named `foo`, but the program is well-formed because a valid function remains in the set of candidate functions.
Although SFINAE was initially introduced to avoid creating ill-formed programs when unrelated template declarations were visible (e.g., through the inclusion of a header file), many developers later found the behavior useful for compile-time introspection. Specifically, it allows a template to determine certain properties of its template arguments at instantiation time.
|
https://en.wikipedia.org/wiki/Substitution_failure_is_not_an_error
|
passage: Architectural innovations include shared-nothing and shared-everything architectures for managing multi-server configurations.
## Strong versus eventual consistency (storage)
In the context of scale-out data storage, scalability is defined as the maximum storage cluster size which guarantees full data consistency, meaning there is only ever one valid version of stored data in the whole cluster, independently from the number of redundant physical data copies. Clusters which provide "lazy" redundancy by updating copies in an asynchronous fashion are called 'eventually consistent'. This type of scale-out design is suitable when availability and responsiveness are rated higher than consistency, which is true for many web file-hosting services or web caches (if you want the latest version, wait some seconds for it to propagate). For all classical transaction-oriented applications, this design should be avoided.
Many open-source and even commercial scale-out storage clusters, especially those built on top of standard PC hardware and networks, provide eventual consistency only, such as some NoSQL databases like CouchDB and others mentioned above. Write operations invalidate other copies, but often don't wait for their acknowledgements. Read operations typically don't check every redundant copy prior to answering, potentially missing the preceding write operation. The large amount of metadata signal traffic would require specialized hardware and short distances to be handled with acceptable performance (i.e., act like a non-clustered storage device or database).
|
https://en.wikipedia.org/wiki/Scalability
|
passage: In mathematics, a hyperbola is a type of smooth curve lying in a plane, defined by its geometric properties or by equations for which it is the solution set. A hyperbola has two pieces, called connected components or branches, that are mirror images of each other and resemble two infinite bows. The hyperbola is one of the three kinds of conic section, formed by the intersection of a plane and a double cone. (The other conic sections are the parabola and the ellipse. A circle is a special case of an ellipse.) If the plane intersects both halves of the double cone but does not pass through the apex of the cones, then the conic is a hyperbola.
Besides being a conic section, a hyperbola can arise as the locus of points whose difference of distances to two fixed foci is constant, as a curve for each point of which the rays to two fixed foci are reflections across the tangent line at that point, or as the solution of certain bivariate quadratic equations such as the reciprocal relationship
$$
xy = 1.
$$
In practical applications, a hyperbola can arise as the path followed by the shadow of the tip of a sundial's gnomon, the shape of an open orbit such as that of a celestial object exceeding the escape velocity of the nearest gravitational body, or the scattering trajectory of a subatomic particle, among others.
Each branch of the hyperbola has two arms which become straighter (lower curvature) further out from the center of the hyperbola.
|
https://en.wikipedia.org/wiki/Hyperbola
|
passage: $$
\begin{array} {l}
2f'(x_0)=
\frac{f\left(x_0 + h\right) - f(x_0)}{h}
-\frac{f\left(x_0 - h\right) - f(x_0)}{h}
-2\frac{f^{(3)}(x_0)}{3!}h^2 + \cdots
\end{array}
$$
.
$$
\begin{array} {l}
f'(x_0)=
\frac{f\left(x_0 + h\right) - f\left(x_0 - h\right)}{2h} - \frac{f^{(3)}(x_0)}{3!}h^2 + \cdots
\end{array}
$$
.
$$
\begin{array} {l}
f'(x_0)=
\frac{f\left(x_0 + h\right) - f\left(x_0 - h\right)}{2h} + O\left(h^2\right)
\end{array}
$$
.
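As a quick numerical illustration of this second-order accuracy, the following sketch (Python; the test function sin and the step sizes are arbitrary choices, not part of the original derivation) evaluates the central difference and shows the error shrinking roughly like h²:
```python
import numpy as np

def central_diff(f, x0, h):
    """Second-order central difference approximation of f'(x0)."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

x0 = 1.0
exact = np.cos(x0)                      # derivative of sin at x0
for h in (1e-1, 1e-2, 1e-3):
    err = abs(central_diff(np.sin, x0, h) - exact)
    print(f"h = {h:.0e}, error = {err:.3e}")   # error decreases roughly like h**2
```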
|
https://en.wikipedia.org/wiki/Compact_stencil
|
passage: This elegant 1926 derivation was obtained before the development of the Schrödinger equation.
A subtlety of the quantum mechanical operator for the LRL vector is that the momentum and angular momentum operators do not commute; hence, the quantum operator cross product of and must be defined carefully. Typically, the operators for the Cartesian components are defined using a symmetrized (Hermitian) product,
$$
A_s = - m k \hat{r}_s + \frac{1}{2} \sum_{i=1}^3 \sum_{j=1}^3 \varepsilon_{sij} (p_i \ell_j + \ell_j p_i),
$$
Once this is done, one can show that the quantum LRL operators satisfy commutations relations exactly analogous to the Poisson bracket relations in the previous section—just replacing the Poisson bracket with
$$
1/(i\hbar)
$$
times the commutator.
From these operators, additional ladder operators for can be defined,
$$
\begin{align}
J_0 &= A_3, \\
J_{\pm 1} &= \mp \tfrac{1}{\sqrt{2}} \left( A_1 \pm i A_2 \right).
\end{align}
$$
These further connect different eigenstates of , so different spin multiplets, among themselves.
|
https://en.wikipedia.org/wiki/Laplace%E2%80%93Runge%E2%80%93Lenz_vector
|
passage: The complete-linkage dendrogram
The dendrogram is now complete. It is ultrametric because all tips (
$$
a
$$
to
$$
e
$$
) are equidistant from
$$
r
$$
:
$$
\delta(a,r)=\delta(b,r)=\delta(e,r)=\delta(c,r)=\delta(d,r)=21.5
$$
The dendrogram is therefore rooted by
$$
r
$$
, its deepest node.
## Comparison with other linkages
Alternative linkage schemes include single linkage clustering and average linkage clustering - implementing a different linkage in the naive algorithm is simply a matter of using a different formula to calculate inter-cluster distances in the initial computation of the proximity matrix and in step 4 of the above algorithm. An optimally efficient algorithm is however not available for arbitrary linkages. The formula that should be adjusted has been highlighted using bold text.
Complete linkage clustering avoids a drawback of the alternative single linkage method - the so-called chaining phenomenon, where clusters formed via single linkage clustering may be forced together due to single elements being close to each other, even though many of the elements in each cluster may be very distant to each other. Complete linkage tends to find compact clusters of approximately equal diameters.
Comparison of dendrograms obtained under different clustering methods from the same distance matrix: single-linkage clustering, complete-linkage clustering, average linkage clustering (WPGMA), and average linkage clustering (UPGMA).
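The following sketch uses SciPy's hierarchical clustering to reproduce a complete-linkage merge sequence of the kind described above; the distance values are reconstructed here for illustration and should be treated as an assumption, and swapping method='complete' for 'single' or 'average' selects the alternative linkages:
```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

labels = ["a", "b", "c", "d", "e"]
# Symmetric distance matrix over five items (values reconstructed for illustration).
D = np.array([
    [ 0, 17, 21, 31, 23],
    [17,  0, 30, 34, 21],
    [21, 30,  0, 28, 39],
    [31, 34, 28,  0, 43],
    [23, 21, 39, 43,  0],
], dtype=float)

# linkage() takes a condensed distance vector; 'complete' merges the pair of
# clusters whose maximum pairwise distance is smallest.
Z = linkage(squareform(D), method="complete")
print(Z)  # each row: merged clusters, merge distance, size of the new cluster
# The last merge distance (43 here) is twice the tip-to-root depth of 21.5
# quoted above; scipy.cluster.hierarchy.dendrogram(Z, labels=labels) would
# draw the corresponding dendrogram (matplotlib required).
```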
|
https://en.wikipedia.org/wiki/Complete-linkage_clustering
|
passage: Older adults can manage their problems with prospective memory by using appointment books, for example.
Gene transcription profiles were determined for the human frontal cortex of individuals from age 26 to 106 years. Numerous genes were identified with reduced expression after age 40, and especially after age 70. Genes that play central roles in memory and learning were among those showing the most significant reduction with age. There was also a marked increase in DNA damage, likely oxidative damage, in the promoters of those genes with reduced expression. It was suggested that DNA damage may reduce the expression of selectively vulnerable genes involved in memory and learning.
## Disorders
Much of the current knowledge of memory has come from studying memory disorders, particularly loss of memory, known as amnesia. Amnesia can result from extensive damage to: (a) the regions of the medial temporal lobe, such as the hippocampus, dentate gyrus, subiculum, amygdala, the parahippocampal, entorhinal, and perirhinal cortices or the (b) midline diencephalic region, specifically the dorsomedial nucleus of the thalamus and the mammillary bodies of the hypothalamus. There are many sorts of amnesia, and by studying their different forms, it has become possible to observe apparent defects in individual sub-systems of the brain's memory systems, and thus hypothesize their function in the normally working brain. Other neurological disorders such as Alzheimer's disease and Parkinson's disease can also affect memory and cognition.
|
https://en.wikipedia.org/wiki/Memory
|
passage: If it vanishes, then the inverse image of
$$
1
$$
under the map is the set of double coverings giving spin structures. Now, this subset of
$$
H^1(P_E,\mathbb{Z}/2)
$$
can be identified with
$$
H^1(M,\mathbb{Z}/2)
$$
, showing this latter cohomology group classifies the various spin structures on the vector bundle
$$
E \to M
$$
. This can be done by looking at the long exact sequence of homotopy groups of the fibration and applying
$$
\text{Hom}(-,\mathbb{Z}/2)
$$
, giving the sequence of cohomology groups. Because
$$
H^1(M,\mathbb{Z}/2)
$$
is the kernel, and the inverse image of
$$
1 \in H^1(\operatorname{SO}(n),\mathbb{Z}/2)
$$
is in bijection with the kernel, we have the desired result.
#### Remarks on classification
When spin structures exist, the inequivalent spin structures on a manifold have a one-to-one correspondence (not canonical) with the elements of H_1(M, Z_2), which by the universal coefficient theorem is isomorphic to H^1(M, Z_2). More precisely, the space of the isomorphism classes of spin structures is an affine space over H^1(M, Z_2).
|
https://en.wikipedia.org/wiki/Spin_structure
|
passage: This description is called a profile.
### Calibration
Calibration is like characterization, except that it can include the adjustment of the device, as opposed to just the measurement of the device. Color management is sometimes sidestepped by calibrating devices to a common standard color space such as sRGB; when such calibration is done well enough, no color translations are needed to get all devices to handle colors consistently. This avoidance of the complexity of color management was one of the goals in the development of sRGB.
## Color profiles
### Embedding
Image formats themselves (such as TIFF, JPEG, PNG, EPS, PDF, and SVG) may contain embedded color profiles but are not required to do so by the image format. The International Color Consortium standard was created to bring various developers and manufacturers together. The ICC standard permits the exchange of output device characteristics and color spaces in the form of metadata. This allows the embedding of color profiles into images as well as storing them in a database or a profile directory.
### Working spaces
Working spaces, such as sRGB, Adobe RGB or ProPhoto are color spaces that facilitate good results while editing. For instance, pixels with equal values of R,G,B should appear neutral. Using a large (gamut) working space will lead to posterization, while using a small working space will lead to clipping. This trade-off is a consideration for the critical image editor.
## Color transformation
Color transformation, or color space conversion, is the transformation of the representation of a color from one color space to another.
|
https://en.wikipedia.org/wiki/Color_management
|
passage: The combination of good performance for sparse matrices and the ability to compute several (without computing all) eigenvalues are the main reasons for choosing to use the Lanczos algorithm.
### Application to tridiagonalization
Though the eigenproblem is often the motivation for applying the Lanczos algorithm, the operation the algorithm primarily performs is tridiagonalization of a matrix, for which numerically stable Householder transformations have been favoured since the 1950s. During the 1960s the Lanczos algorithm was disregarded. Interest in it was rejuvenated by the Kaniel–Paige convergence theory and the development of methods to prevent numerical instability, but the Lanczos algorithm remains the alternative algorithm that one tries only if Householder is not satisfactory.
Aspects in which the two algorithms differ include:
- Lanczos takes advantage of
$$
A
$$
being a sparse matrix, whereas Householder does not, and will generate fill-in.
- Lanczos works throughout with the original matrix
$$
A
$$
(and has no problem with it being known only implicitly), whereas raw Householder wants to modify the matrix during the computation (although that can be avoided).
- Each iteration of the Lanczos algorithm produces another column of the final transformation matrix
$$
V
$$
, whereas an iteration of Householder produces another factor in a unitary factorisation
$$
Q_1 Q_2 \dots Q_n
$$
of
$$
V
$$
.
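For concreteness, a minimal NumPy sketch of the Lanczos iteration is given below; it omits reorthogonalization and sparse-matrix handling, and the random symmetric test matrix is purely illustrative:
```python
import numpy as np

def lanczos(A, v, m):
    """m steps of the Lanczos iteration on a symmetric matrix A.

    Returns V (columns are the Lanczos vectors) and the tridiagonal T,
    so that V.T @ A @ V is approximately T.
    """
    n = A.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]                      # only matrix-vector products with A
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)      # breakdown (beta == 0) not handled here
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return V, T

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = (B + B.T) / 2                            # symmetric test matrix
V, T = lanczos(A, rng.standard_normal(50), 10)
# The extreme eigenvalues of the small tridiagonal T (Ritz values) already
# approximate the extreme eigenvalues of A after a few iterations.
print(np.linalg.eigvalsh(T)[-1], np.linalg.eigvalsh(A)[-1])
```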
|
https://en.wikipedia.org/wiki/Lanczos_algorithm
|
passage: A true plane wave cannot physically exist, because it would have to fill all space. Nevertheless, the plane wave model is important and widely used in physics. The waves emitted by any source with finite extent into a large homogeneous region of space can be well approximated by plane waves when viewed over any part of that region that is sufficiently small compared to its distance from the source. That is the case, for example, of the light waves from a distant star that arrive at a telescope.
### Plane standing wave
A standing wave is a field whose value can be expressed as the product of two functions, one depending only on position, the other only on time. A plane standing wave, in particular, can be expressed as
$$
F(\vec x, t) = G(\vec x \cdot \vec n) \, S(t)
$$
where
$$
G
$$
is a function of one scalar parameter (the displacement
$$
d = \vec x \cdot \vec n
$$
) with scalar or vector values, and
$$
S
$$
is a scalar function of time.
This representation is not unique, since the same field values are obtained if
$$
S
$$
and
$$
G
$$
are scaled by reciprocal factors. If
$$
\left|S(t)\right|
$$
is bounded in the time interval of interest (which is usually the case in physical contexts),
$$
S
$$
and
$$
G
$$
can be scaled so that the maximum value of
$$
\left|S(t)\right|
$$
is 1.
|
https://en.wikipedia.org/wiki/Plane_wave
|
passage: Because the target of the particle beams of early accelerators was usually the atoms of a piece of matter, with the goal being to create collisions with their nuclei in order to investigate nuclear structure, accelerators were commonly referred to as atom smashers in the 20th century. The term persists despite the fact that many modern accelerators create collisions between two subatomic particles, rather than a particle and an atomic nucleus.
## Uses
Beams of high-energy particles are useful for fundamental and applied research in the sciences and also in many technical and industrial fields unrelated to fundamental research. There are approximately 30,000 accelerators worldwide; of these, only about 1% are research machines with energies above 1 GeV, while about 44% are for radiotherapy, 41% for ion implantation, 9% for industrial processing and research, and 4% for biomedical and other low-energy research.
### Particle physics
For the most basic inquiries into the dynamics and structure of matter, space, and time, physicists seek the simplest kinds of interactions at the highest possible energies. These typically entail particle energies of many GeV, and interactions of the simplest kinds of particles: leptons (e.g. electrons and positrons) and quarks for the matter, or photons and gluons for the field quanta. Since isolated quarks are experimentally unavailable due to color confinement, the simplest available experiments involve the interactions of, first, leptons with each other, and second, of leptons with nucleons, which are composed of quarks and gluons.
|
https://en.wikipedia.org/wiki/Particle_accelerator
|
passage: Consider the common case of a floor in a game: the fill area is far wider than it is tall. In this case, none of the square maps are a good fit. The result is blurriness and/or shimmering, depending on how the fit is chosen. Anisotropic filtering corrects this by sampling the texture as a non-square shape. The goal is to sample a texture to match the pixel footprint as projected into texture space, and such a footprint is not always axis aligned to the texture. Further, when dealing with sample theory a pixel is not a little square therefore its footprint would not be a projected square. Footprint assembly in texture space samples some approximation of the computed function of a projected pixel in texture space but the details are often approximate, highly proprietary and steeped in opinions about sample theory. Conceptually though the goal is to sample a more correct anisotropic sample of appropriate orientation to avoid the conflict between aliasing on one axis vs. blurring on the other when projected size differs.
In anisotropic implementations, the filtering may incorporate the same filtering algorithms used to filter the square maps of traditional mipmapping during the construction of the intermediate or final result.
### Percentage Closer filtering
Depth-based shadow mapping can use an interesting Percentage Closer Filter (PCF) with depth-mapped textures that broadens one's perception of the kinds of texture filters that might be applied. In PCF a depth map of the scene is rendered from the light source.
|
https://en.wikipedia.org/wiki/Texture_filtering
|
passage: The Hankel transform can express a fairly arbitrary function as an integral of Bessel functions of different scales. For the spherical Bessel functions the orthogonality relation is:
$$
\int_0^\infty x^2 j_\alpha(ux) j_\alpha(vx) \,dx = \frac{\pi}{2uv} \delta(u - v)
$$
for .
Another important property of Bessel's equations, which follows from Abel's identity, involves the Wronskian of the solutions:
$$
A_\alpha(x) \frac{dB_\alpha}{dx} - \frac{dA_\alpha}{dx} B_\alpha(x) = \frac{C_\alpha}{x}
$$
where A_α and B_α are any two solutions of Bessel's equation, and C_α is a constant independent of x (which depends on α and on the particular Bessel functions considered). In particular,
$$
J_\alpha(x) \frac{dY_\alpha}{dx} - \frac{dJ_\alpha}{dx} Y_\alpha(x) = \frac{2}{\pi x}
$$
and
$$
I_\alpha(x) \frac{dK_\alpha}{dx} - \frac{dI_\alpha}{dx} K_\alpha(x) = -\frac{1}{x},
$$
for .
For , the even entire function of genus 1, , has only real zeros.
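The Wronskian identity for J_α and Y_α above is easy to spot-check numerically with SciPy (a small sketch; the order α and the sample points are arbitrary):
```python
import numpy as np
from scipy.special import jv, jvp, yv, yvp

alpha = 1.5
x = np.linspace(0.5, 10.0, 5)
# J_a(x) * Y_a'(x) - J_a'(x) * Y_a(x) should equal 2 / (pi * x).
wronskian = jv(alpha, x) * yvp(alpha, x) - jvp(alpha, x) * yv(alpha, x)
print(np.max(np.abs(wronskian - 2 / (np.pi * x))))   # close to machine precision
```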
|
https://en.wikipedia.org/wiki/Bessel_function
|
passage: The interaction of electrical and osmotic relations in plant cells appears to have arisen from an osmotic function of electrical excitability in a common unicellular ancestors of plants and animals under changing salinity conditions. Further, the present function of rapid signal transmission is seen as a newer accomplishment of metazoan cells in a more stable osmotic environment. It is likely that the familiar signaling function of action potentials in some vascular plants (e.g. Mimosa pudica) arose independently from that in metazoan excitable cells.
Unlike the rising phase and peak, the falling phase and after-hyperpolarization seem to depend primarily on cations that are not calcium. To initiate repolarization, the cell requires movement of potassium out of the cell through passive transportation on the membrane. This differs from neurons because the movement of potassium does not dominate the decrease in membrane potential. To fully repolarize, a plant cell requires energy in the form of ATP to assist in the release of hydrogen from the cell – utilizing a transporter called proton ATPase.
## Taxonomic distribution and evolutionary advantages
Action potentials are found throughout multicellular organisms, including plants, invertebrates such as insects, and vertebrates such as reptiles and mammals. Sponges seem to be the main phylum of multicellular eukaryotes that does not transmit action potentials, although some studies have suggested that these organisms have a form of electrical signaling, too.
|
https://en.wikipedia.org/wiki/Action_potential
|
passage: A generalization of the formula is known as the Lagrange–Bürmann formula:
$$
[z^n] H (g(z)) = \frac{1}{n} [w^{n-1}] (H' (w) \phi(w)^n)
$$
where H is an arbitrary analytic function.
Sometimes, the derivative H′(w) can be quite complicated. A simpler version of the formula replaces H′(w) with H(w) to get
$$
[z^n] H (g(z)) = [w^n] H(w) \phi(w)^{n-1} (\phi(w) - w \phi'(w)),
$$
which involves H(w) instead of H′(w).
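As a concrete check of the Lagrange–Bürmann formula with H(w) = w, the following SymPy sketch uses the illustrative choice φ(w) = e^w (so that g satisfies g = z·e^g, the tree function); the choice of φ and the truncation order are assumptions made only for this example:
```python
import sympy as sp

z, w = sp.symbols("z w")
phi = sp.exp(w)      # illustrative choice; g then satisfies g = z*exp(g)
N = 6

# Lagrange inversion with H(w) = w: [z^n] g(z) = (1/n) * [w^(n-1)] phi(w)^n
coeffs = [sp.Rational(1, n) * (phi**n).series(w, 0, n).removeO().coeff(w, n - 1)
          for n in range(1, N + 1)]
g = sum(c * z**(n + 1) for n, c in enumerate(coeffs))

print(coeffs)   # expected n^(n-1)/n!  ->  [1, 1, 3/2, 8/3, 125/24, 54/5]

# Consistency check: the truncated series satisfies g = z*phi(g) up to order N.
print(sp.series(g - z * sp.exp(g), z, 0, N + 1))   # only the O(z**7) remainder survives
```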
|
https://en.wikipedia.org/wiki/Lagrange_inversion_theorem
|
passage: This is the canonical factorization of .
"One-to-one" and "onto" are terms that were more common in the older English language literature; "injective", "surjective", and "bijective" were originally coined as French words in the second quarter of the 20th century by the Bourbaki group and imported into English. As a word of caution, "a one-to-one function" is one that is injective, while a "one-to-one correspondence" refers to a bijective function. Also, the statement " maps onto " differs from " maps into ", in that the former implies that is surjective, while the latter makes no assertion about the nature of . In a complicated reasoning, the one letter difference can easily be missed. Due to the confusing nature of this older terminology, these terms have declined in popularity relative to the Bourbakian terms, which have also the advantage of being more symmetrical.
### Restriction and extension
If
$$
f : X \to Y
$$
is a function and S is a subset of X, then the restriction of
$$
f
$$
to S, denoted
$$
f|_S
$$
, is the function from S to Y defined by
$$
f|_S(x) = f(x)
$$
for all x in S.
|
https://en.wikipedia.org/wiki/Function_%28mathematics%29
|
passage: Unfavorable reactions can be driven by highly favorable ones, as in the case of iron-sulfur chemistry. For example, this was probably important for carbon fixation. Carbon fixation by reaction of CO2 with H2S via iron-sulfur chemistry is favorable, and occurs at neutral pH and 100 °C. Iron-sulfur surfaces, which are abundant near hydrothermal vents, can drive the production of small amounts of amino acids and other biomolecules.
### Chemiosmosis
In 1961, Peter Mitchell proposed chemiosmosis as a cell's primary system of energy conversion. The mechanism, now ubiquitous in living cells, powers energy conversion in micro-organisms and in the mitochondria of eukaryotes, making it a likely candidate for early life. Mitochondria produce adenosine triphosphate (ATP), the energy currency of the cell used to drive cellular processes such as chemical syntheses. The mechanism of ATP synthesis involves a closed membrane in which the ATP synthase enzyme is embedded. The energy required to release strongly bound ATP has its origin in protons that move across the membrane. In modern cells, those proton movements are caused by the pumping of ions across the membrane, maintaining an electrochemical gradient. In the first organisms, the gradient could have been provided by the difference in chemical composition between the flow from a hydrothermal vent and the surrounding seawater, or perhaps meteoric quinones that were conducive to the development of chemiosmotic energy across lipid membranes if at a terrestrial origin.
|
https://en.wikipedia.org/wiki/Abiogenesis
|
passage: In string theory, a heterotic string is a closed string (or loop) which is a hybrid ('heterotic') of a superstring and a bosonic string. There are two kinds of heterotic superstring theories, the heterotic SO(32) and the heterotic E8 × E8, abbreviated to HO and HE. Apart from that there exist seven more heterotic string theories which are not supersymmetric and hence are only of secondary importance in most applications. Heterotic string theory was first developed in 1985 by David Gross, Jeffrey Harvey, Emil Martinec, and Ryan Rohm (the so-called "Princeton string quartet"), in one of the key papers that fueled the first superstring revolution.
## Overview
In string theory, the left-moving and the right-moving excitations of strings are completely decoupled for a closed string, and it is possible to construct a string theory whose left-moving (counter-clockwise) excitations are treated as a bosonic string propagating in D = 26 dimensions, while the right-moving (clockwise) excitations are treated as a superstring in D = 10 dimensions.
The mismatched 16 dimensions must be compactified on an even, self-dual lattice (a discrete subgroup of a linear space). There are two possible even self-dual lattices in 16 dimensions, and it leads to two types of the heterotic string. They differ by the gauge group in 10 dimensions. One gauge group is SO(32) (the HO string) while the other is E8 × E8 (the HE string).
|
https://en.wikipedia.org/wiki/Heterotic_string_theory
|
passage: The main part of the proof is the case
$$
X = \mathbf C^n
$$
. Likewise, on a locally Noetherian scheme
$$
X
$$
, the structure sheaf
$$
\mathcal O_X
$$
is a coherent sheaf of rings.
## Basic constructions of coherent sheaves
- An
$$
\mathcal O_X
$$
-module
$$
\mathcal F
$$
on a ringed space
$$
X
$$
is called locally free of finite rank, or a vector bundle, if every point in
$$
X
$$
has an open neighborhood
$$
U
$$
such that the restriction
$$
\mathcal F|_U
$$
is isomorphic to a finite direct sum of copies of
$$
\mathcal O_X|_U
$$
. If
$$
\mathcal F
$$
is free of the same rank
$$
n
$$
near every point of
$$
X
$$
, then the vector bundle
$$
\mathcal F
$$
is said to be of rank
$$
n
$$
.
|
https://en.wikipedia.org/wiki/Coherent_sheaf
|
passage: It is defined as
$$
\mathbf{e}_3(t) = \frac{\overline{\mathbf{e}_3}(t)} {\left\| \overline{\mathbf{e}_3}(t) \right\|}
, \quad
\overline{\mathbf{e}_3}(t) = \boldsymbol{\gamma}'''(t) - \bigl\langle \boldsymbol{\gamma}'''(t), \mathbf{e}_1(t) \bigr\rangle \, \mathbf{e}_1(t)
- \bigl\langle \boldsymbol{\gamma}'''(t), \mathbf{e}_2(t) \bigr\rangle \,\mathbf{e}_2(t)
$$
In 3-dimensional space, the equation simplifies to
$$
\mathbf{e}_3(t) = \mathbf{e}_1(t) \times \mathbf{e}_2(t)
$$
or to
$$
\mathbf{e}_3(t) = -\mathbf{e}_1(t) \times \mathbf{e}_2(t),
$$
That either sign may occur is illustrated by the examples of a right-handed helix and a left-handed helix.
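A small numerical illustration (NumPy; the right-handed helix and its hard-coded derivatives are an arbitrary test case) compares the Gram–Schmidt construction of e_3 with the cross product e_1 × e_2:
```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

t = 1.0
# Right-handed helix gamma(t) = (cos t, sin t, t) and its first three derivatives.
d1 = np.array([-np.sin(t),  np.cos(t), 1.0])   # gamma'(t)
d2 = np.array([-np.cos(t), -np.sin(t), 0.0])   # gamma''(t)
d3 = np.array([ np.sin(t), -np.cos(t), 0.0])   # gamma'''(t)

e1 = normalize(d1)
e2 = normalize(d2 - (d2 @ e1) * e1)
# Gram-Schmidt step of the definition above.
e3 = normalize(d3 - (d3 @ e1) * e1 - (d3 @ e2) * e2)

print(e3)
print(np.cross(e1, e2))   # coincides with e3 (positive sign) for this right-handed helix
```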
### Torsion
The second generalized curvature is called and measures the deviance of from being a plane curve. In other words, if the torsion is zero, the curve lies completely in the same osculating plane (there is only one osculating plane for every point ).
|
https://en.wikipedia.org/wiki/Differentiable_curve
|
passage: if and only if each
$$
x \in \mathcal{K}
$$
is mapped by
$$
g
$$
onto the support of
$$
f(x)
$$
in
$$
\mathcal{L}
$$
. If such an approximation exists, one can construct a homotopy
$$
H
$$
transforming
$$
f
$$
into
$$
g
$$
by defining it on each simplex; there it always exists, because simplices are contractible.
The simplicial approximation theorem guarantees for every continuous function
$$
f:V_K \rightarrow V_L
$$
the existence of a simplicial approximation at least after refinement of
$$
\mathcal{K}
$$
, for instance by replacing
$$
\mathcal{K}
$$
by its iterated barycentric subdivision. The theorem plays an important role for certain statements in algebraic topology, in order to reduce the behavior of continuous maps to that of simplicial maps, for instance in Lefschetz's fixed-point theorem.
#### Lefschetz's fixed-point theorem
The Lefschetz number is a useful tool to find out whether a continuous function admits fixed points. This data is computed as follows: Suppose that
$$
X
$$
and
$$
Y
$$
are topological spaces that admit finite triangulations. A continuous map
$$
f: X\rightarrow Y
$$
induces homomorphisms between its simplicial homology groups with coefficients in a field
$$
K
$$
.
|
https://en.wikipedia.org/wiki/Triangulation_%28topology%29
|
passage: ### General form
Suppose that
$$
\mathcal{A}
$$
,
$$
\mathcal{B}
$$
, and
$$
\mathcal{C}
$$
are categories, and
$$
S
$$
and
$$
T
$$
(for source and target) are functors:
We can form the comma category
$$
(S \downarrow T)
$$
as follows:
- The objects are all triples
$$
(A, B, h)
$$
with
$$
A
$$
an object in
$$
\mathcal{A}
$$
,
$$
B
$$
an object in
$$
\mathcal{B}
$$
, and
$$
h : S(A)\rightarrow T(B)
$$
a morphism in
$$
\mathcal{C}
$$
.
- The morphisms from
$$
(A, B, h)
$$
to
$$
(A', B', h')
$$
are all pairs
$$
(f, g)
$$
where
$$
f : A \rightarrow A'
$$
and
$$
g : B \rightarrow B'
$$
are morphisms in
$$
\mathcal A
$$
and
$$
\mathcal B
$$
respectively, such that the following diagram commutes:
Morphisms are composed by taking
$$
(f', g') \circ (f, g)
$$
to be
$$
(f' \circ f, g' \circ g)
$$
, whenever the latter expression is defined. The identity morphism on an object
$$
(A, B, h)
$$
is
$$
(\mathrm{id}_{A}, \mathrm{id}_{B})
$$
.
|
https://en.wikipedia.org/wiki/Comma_category
|
passage: For other likelihood families, there is (usually) no closed-form solution for the local likelihood estimate, and iterative procedures such as iteratively reweighted least squares must be used to compute the estimate.
Example (local logistic regression). All response observations are 0 or 1, and the mean function is the "success" probability,
$$
\mu(x_i) = \Pr (Y_i=1 | x_i)
$$
. Since
$$
\mu(x_i)
$$
must be between 0 and 1, a local polynomial model should not be used for
$$
\mu(x)
$$
directly.
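A minimal sketch of a local likelihood fit of this kind is shown below, using scikit-learn's weighted logistic regression as the inner solver; the synthetic data, Gaussian kernel, and bandwidth are illustrative assumptions rather than part of the original text:
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
p_true = 1 / (1 + np.exp(-2 * np.sin(x)))          # smooth "success" probability
y = rng.binomial(1, p_true)

def local_logistic(x0, x, y, bandwidth=1.5):
    """Locally weighted logistic fit: estimate mu(x0) = P(Y = 1 | x0)."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)  # Gaussian kernel weights
    model = LogisticRegression()
    model.fit((x - x0).reshape(-1, 1), y, sample_weight=w)
    # Centering at x0 makes the fitted intercept the local log-odds at x0.
    return 1 / (1 + np.exp(-model.intercept_[0]))

print([round(local_logistic(g, x, y), 3) for g in np.linspace(1, 9, 5)])
```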
|
https://en.wikipedia.org/wiki/Local_regression
|
passage: The frequency of these cells' activity is detected by cells in the dorsal striatum at the base of the forebrain. His model separated explicit timing and implicit timing. Explicit timing is used in estimating the duration of a stimulus. Implicit timing is used to gauge the amount of time separating one from an impending event that is expected to occur in the near future. These two estimations of time do not involve the same neuroanatomical areas. For example, implicit timing often occurs to achieve a motor task, involving the cerebellum, left parietal cortex, and left premotor cortex. Explicit timing often involves the supplementary motor area and the right prefrontal cortex.
Two visual stimuli inside someone's field of view can be successfully regarded as simultaneous when they are separated by up to five milliseconds.
In the popular essay "Brain Time", David Eagleman explains that different types of sensory information (auditory, tactile, visual, etc.) are processed at different speeds by different neural architectures. The brain must learn how to overcome these speed disparities if it is to create a temporally unified representation of the external world:
Experiments have shown that rats can successfully estimate a time interval of approximately 40 seconds, despite having their cortex entirely removed. This suggests that time estimation may be a low-level process.
## Ecological perspectives
In recent history, ecologists and psychologists have been interested in whether and how time is perceived by non-human animals, as well as which functional purposes are served by the ability to perceive time.
|
https://en.wikipedia.org/wiki/Time_perception
|
passage: In 3D computer graphics, 3D modeling is the process of developing a mathematical coordinate-based representation of a surface of an object (inanimate or living) in three dimensions via specialized software by manipulating edges, vertices, and polygons in a simulated 3D space.
Three-dimensional (3D) models represent a physical body using a collection of points in 3D space, connected by various geometric entities such as triangles, lines, curved surfaces, etc. Being a collection of data (points and other information), 3D models can be created manually, algorithmically (procedural modeling), or by scanning. Their surfaces may be further defined with texture mapping.
## Outline
The product is called a 3D model, while someone who works with 3D models may be referred to as a 3D artist or a 3D modeler.
A 3D model can also be displayed as a two-dimensional image through a process called 3D rendering or used in a computer simulation of physical phenomena.
3D models may be created automatically or manually. The manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting.
## 3D printing
The 3D model can be physically created using 3D printing devices that form 2D layers of the model with three-dimensional material, one layer at a time. Without a 3D model, a 3D print is not possible.
## 3D modeling software
3D modeling software is a class of 3D computer graphics software used to produce 3D models. Individual programs of this class are called modeling applications.
|
https://en.wikipedia.org/wiki/3D_modeling
|
passage: Even in cases where the image and codomain of a function are different, a new function can be uniquely defined with its codomain as the image of the original function. For example, as a function from the integers to the integers, the doubling function
$$
f(n) = 2n
$$
is not surjective because only the even integers are part of the image. However, a new function
$$
\tilde{f}(n) = 2n
$$
whose domain is the integers and whose codomain is the even integers is surjective. For
$$
\tilde{f},
$$
the word range is unambiguous.
|
https://en.wikipedia.org/wiki/Range_of_a_function
|
passage: In physics and chemistry, the law of conservation of mass or principle of mass conservation states that for any system closed to all transfers of matter the mass of the system must remain constant over time.
The law implies that mass can neither be created nor destroyed, although it may be rearranged in space, or the entities associated with it may be changed in form. For example, in chemical reactions, the mass of the chemical components before the reaction is equal to the mass of the components after the reaction. Thus, during any chemical reaction and low-energy thermodynamic processes in an isolated system, the total mass of the reactants, or starting materials, must be equal to the mass of the products.
The concept of mass conservation is widely used in many fields such as chemistry, mechanics, and fluid dynamics. Historically, mass conservation in chemical reactions was primarily demonstrated in the 17th century and finally confirmed by Antoine Lavoisier in the late 18th century. The formulation of this law was of crucial importance in the progress from alchemy to the modern natural science of chemistry.
In reality, the conservation of mass only holds approximately and is considered part of a series of assumptions in classical mechanics. The law has to be modified to comply with the laws of quantum mechanics and special relativity under the principle of mass–energy equivalence, which states that energy and mass form one conserved quantity. For very energetic systems the conservation of mass only is shown not to hold, as is the case in nuclear reactions and particle-antiparticle annihilation in particle physics.
|
https://en.wikipedia.org/wiki/Conservation_of_mass
|
passage: The line AB is the interval AB and the two rays A/B and B/A. Points on the line AB are said to be collinear.
An angle consists of a point O (the vertex) and two non-collinear rays out from O (the sides).
A triangle is given by three non-collinear points (called vertices) and their three segments AB, BC, and CA.
If three points A, B, and C are non-collinear, then a plane ABC is the set of all points collinear with pairs of points on one or two of the sides of triangle ABC.
If four points A, B, C, and D are non-coplanar, then a space (3-space) ABCD is the set of all points collinear with pairs of points selected from any of the four faces (planar regions) of the tetrahedron ABCD.
## Axioms of ordered geometry
1. There exist at least two points.
1. If A and B are distinct points, there exists a C such that [ABC].
1. If [ABC], then A and C are distinct (A ≠ C).
1. If [ABC], then [CBA] but not [CAB].
1. If C and D are distinct points on the line AB, then A is on the line CD.
1. If AB is a line, there is a point C not on the line AB.
1. (Axiom of Pasch) If ABC is a triangle and [BCD] and [CEA], then there exists a point F on the line DE for which [AFB].
|
https://en.wikipedia.org/wiki/Ordered_geometry
|
passage: Unless for very simple examples, a primary decomposition may be hard to compute and may have a very complicated output. The following example has been designed for providing such a complicated output, and, nevertheless, being accessible to hand-written computation.
Let
$$
\begin {align}
P&=a_0x^m + a_1x^{m-1}y +\cdots +a_my^m \\
Q&=b_0x^n + b_1x^{n-1}y +\cdots +b_ny^n
\end {align}
$$
be two homogeneous polynomials in , whose coefficients
$$
a_1, \ldots, a_m, b_0, \ldots, b_n
$$
are polynomials in other indeterminates
$$
z_1, \ldots, z_h
$$
over a field k. That is, P and Q belong to
$$
R=k[x,y,z_1, \ldots, z_h],
$$
and it is in this ring that a primary decomposition of the ideal
$$
I=\langle P,Q\rangle
$$
is searched. For computing the primary decomposition, we suppose first that 1 is a greatest common divisor of P and Q.
This condition implies that I has no primary component of height one. As I is generated by two elements, this implies that it is a complete intersection (more precisely, it defines an algebraic set, which is a complete intersection), and thus all primary components have height two.
|
https://en.wikipedia.org/wiki/Primary_decomposition
|
passage: Every tropical variety is the intersection of a finite number of tropical hypersurfaces. A finite set of polynomials
$$
\{f_1,\ldots,f_r\}\subseteq\mathrm{I}(X)
$$
is called a tropical basis for X if
$$
\operatorname{Trop}(X)
$$
is the intersection of the tropical hypersurfaces of
$$
\operatorname{Trop}(f_1),\ldots,\operatorname{Trop}(f_r)
$$
. In general, a generating set of
$$
\mathrm{I}(X)
$$
is not sufficient to form a tropical basis. The intersection of a finite number of a tropical hypersurfaces is called a tropical prevariety and in general is not a tropical variety.
#### Initial ideals
Choosing a vector
$$
\mathbf{w}
$$
in
$$
\R^n
$$
defines a map from the monomial terms of
$$
K[x_1^{\pm 1},\ldots,x_n^{\pm1}]
$$
to
$$
\R
$$
by sending the term m to
$$
\operatorname{Trop}(m)(\mathbf{w})
$$
. For a Laurent polynomial
$$
f=m_1+\cdots+m_s
$$
, define the initial form of f to be the sum of the terms
$$
m_i
$$
of f for which
$$
\operatorname{Trop}(m_i)(\mathbf{w})
$$
is minimal.
|
https://en.wikipedia.org/wiki/Tropical_geometry
|
passage: It is usually required that R and S must have at least one common attribute, but if this constraint is omitted, and R and S have no common attributes, then the natural join becomes exactly the Cartesian product.
The natural join can be simulated with Codd's primitives as follows. Let c1, ..., cm be the attribute names common to R and S, r1, ..., rn be the attribute names unique to R and let s1, ..., sk be the attributes unique to S. Furthermore, assume that the attribute names x1, ..., xm are neither in R nor in S. In a first step the common attribute names in S can now be renamed:
$$
T = \rho_{x_1/c_1,\ldots,x_m/c_m}(S) = \rho_{x_1/c_1}(\rho_{x_2/c_2}(\ldots\rho_{x_m/c_m}(S)\ldots))
$$
Then we take the Cartesian product and select the tuples that are to be joined:
$$
P = \sigma_{c_1=x_1,\ldots,c_m=x_m}(R \times T)
$$
Finally, the duplicated (renamed) columns are projected away:
$$
U = \pi_{r_1,\ldots,r_n,c_1,\ldots,c_m,s_1,\ldots,s_k}(P)
$$
A natural join is a type of equi-join where the join predicate arises implicitly by comparing all columns in both tables that have the same column-names in the joined tables. The resulting joined table contains only one column for each pair of equally named columns.
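For intuition only, the same effect can be sketched outside SQL with pandas, where merging on all shared column names mimics a natural join; the tables and column names below are made up for illustration:
```python
import pandas as pd

employees = pd.DataFrame({
    "dept_id": [1, 2, 2],
    "name": ["Ada", "Grace", "Edsger"],
})
departments = pd.DataFrame({
    "dept_id": [1, 2],
    "dept_name": ["Maths", "CS"],
})

# Joining on every column name the two tables share, and keeping a single copy
# of each shared column, is the behaviour a natural join prescribes.
common = sorted(set(employees.columns) & set(departments.columns))
print(employees.merge(departments, on=common))
```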
|
https://en.wikipedia.org/wiki/Join_%28SQL%29
|
passage: After ten years, Kleene and Post showed in 1954 that there are intermediate Turing degrees between those of the computable sets and the halting problem, but they failed to show that any of these degrees contains a computably enumerable set. Very soon after this, Friedberg and Muchnik independently solved Post's problem by establishing the existence of computably enumerable sets of intermediate degree. This groundbreaking result opened a wide study of the Turing degrees of the computably enumerable sets which turned out to possess a very complicated and non-trivial structure.
There are uncountably many sets that are not computably enumerable, and the investigation of the Turing degrees of all sets is as central in computability theory as the investigation of the computably enumerable Turing degrees. Many degrees with special properties were constructed: hyperimmune-free degrees where every function computable relative to that degree is majorized by a (unrelativized) computable function; high degrees relative to which one can compute a function f which dominates every computable function g in the sense that there is a constant c depending on g such that g(x) < f(x) for all x > c; random degrees containing algorithmically random sets; 1-generic degrees of 1-generic sets; and the degrees below the halting problem of limit-computable sets.
The study of arbitrary (not necessarily computably enumerable) Turing degrees involves the study of the Turing jump. Given a set A, the Turing jump of A is a set of natural numbers encoding a solution to the halting problem for oracle Turing machines running with oracle A.
|
https://en.wikipedia.org/wiki/Computability_theory
|
passage: A later document in the timeline of global initiatives for malnutrition was the 1996 Rome Declaration on World Food Security, organized by the Food and Agriculture Organization. This document reaffirmed the right to have access to safe and nutritious food by everyone, also considering that everyone gets sufficient food, and set the goals for all nations to improve their commitment to food security by halving their number of undernourished people by 2015. In 2004 the Food and Agriculture Organization adopted the Right to Food Guidelines, which offered states a framework of how to increase the right to food on a national basis.
## Special populations
Undernutrition is an important determinant of maternal and child health, accounting for more than a third of child deaths and more than 10 percent of the total global disease burden according to 2008 studies.
Children
Undernutrition adversely affects the cognitive development of children, contributing to poor earning capacity and poverty in adulthood. The development of childhood undernutrition coincides with the introduction of complementary weaning foods which are usually nutrient deficient. The World Health Organization estimated in 2008 that malnutrition accounted for 54 percent of child mortality worldwide, about 1 million children. There is a strong association between undernutrition and child mortality.
Another estimate in 2008 also by WHO stated that childhood underweight was the cause for about 35% of all deaths of children under the age of five years worldwide. Over 90% of the stunted children below five years of age live in sub-Saharan Africa and South Central Asia. Although access to adequate food and improving nutritional intake is an obvious solution to tackling undernutrition in children, the progress in reducing children undernutrition has been disappointing.
|
https://en.wikipedia.org/wiki/Malnutrition
|
passage: ### Multiplicative group of a field
If
$$
F
$$
is a field then the multiplicative group over
$$
F
$$
is the algebraic group
$$
\mathbf G_{\mathbf m}
$$
such that for any field extension
$$
E/F
$$
the
$$
E
$$
-points are isomorphic to the group
$$
E^\times
$$
. To define it properly as an algebraic group one can take the affine variety defined by the equation
$$
xy = 1
$$
in the affine plane over
$$
F
$$
with coordinates
$$
x, y
$$
. The multiplication is then given by restricting the regular rational map
$$
F^2 \times F^2 \to F^2
$$
defined by
$$
((x, y), (x',y')) \mapsto (xx', yy')
$$
and the inverse is the restriction of the regular rational map
$$
(x, y) \mapsto (y, x)
$$
.
### Definition
Let
$$
F
$$
be a field with algebraic closure
$$
\overline F
$$
. Then an F-torus is an algebraic group defined over
$$
F
$$
which is isomorphic over
$$
\overline F
$$
to a finite product of copies of the multiplicative group.
|
https://en.wikipedia.org/wiki/Algebraic_torus
|
passage: We start from the first law of thermodynamics for closed systems for an infinitesimal process:
$$
\mathrm{d}U = \delta Q - \delta W,
$$
where
δQ is a small amount of heat added to the system,
δW is a small amount of work performed by the system.
In a homogeneous system in which only reversible processes or pure heat transfer are considered, the second law of thermodynamics gives δQ = T dS, with T the absolute temperature and dS the infinitesimal change in entropy of the system. Furthermore, if only pV work is done, δW = p dV. As a result,
$$
\mathrm{d}U = T\,\mathrm{d}S - p\,\mathrm{d}V.
$$
Adding to both sides of this expression gives
$$
\mathrm{d}U + \mathrm{d}(pV) = T\,\mathrm{d}S - p\,\mathrm{d}V + \mathrm{d}(p\,V),
$$
or
$$
\mathrm{d}(U + pV) = T\,\mathrm{d}S + V\,\mathrm{d}p.
$$
So
$$
\mathrm{d}H(S, p) = T\,\mathrm{d}S + V\,\mathrm{d}p,
$$
and the coefficients of the natural variable differentials dS and dp are just the single variables T and V.
## Other expressions
The above expression of dH in terms of entropy and pressure may be unfamiliar to some readers.
|
https://en.wikipedia.org/wiki/Enthalpy
|
passage: ## Constructing a child table
The child table cldtab is composed of three n arrays, up, down and nextlIndex. The information about the edges of the corresponding suffix tree is stored and maintained by the up and down arrays. The nextlIndex array stores the links in the linked list used for node branching the suffix tree.
The up, down and nextlIndex array are defined as follows:
1. The element up[i] records the starting index of the second child interval of the longest lcp-interval that ends at index i-1.
1. The initial index of the second child interval of the longest lcp-interval, starting at index i is stored in the element down[i].
1. If and only if the interval is neither the first child nor the final child of its parent, the element nextlIndex[i] contains the first index of the next sibling interval of the longest lcp-interval, starting at index i.
By performing a bottom-up traversal of the lcp-interval of the tree, the child table can be constructed in linear time. The up/down values and the nextlIndex values can be computed separately by using two distinct algorithms.
## Constructing a suffix link table
The suffix links for an enhanced suffix array can be computed by generating the suffix link interval [1,..,r] for each [i,..j] interval during the preprocessing. The left and right elements l and r of the interval are maintained in the first index of [i,..,j].
|
https://en.wikipedia.org/wiki/Suffix_array
|
passage: (By convention, 1 is the empty product.) Testing whether the integer is prime can be done in polynomial time, for example, by the AKS primality test. If composite, however, the polynomial time tests give no insight into how to obtain the factors.
Given a general algorithm for integer factorization, any integer can be factored into its constituent prime factors by repeated application of this algorithm. The situation is more complicated with special-purpose factorization algorithms, whose benefits may not be realized as well or even at all with the factors produced during decomposition. For example, if where are very large primes, trial division will quickly produce the factors 3 and 19 but will take divisions to find the next factor. As a contrasting example, if is the product of the primes , , and , where , Fermat's factorization method will begin with which immediately yields and hence the factors and . While these are easily recognized as composite and prime respectively, Fermat's method will take much longer to factor the composite number because the starting value of for is a factor of 10 from .
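The contrast between general-purpose trial division and Fermat's method can be sketched with toy Python implementations (illustrative only, not the optimized algorithms used in practice; the test numbers are arbitrary):
```python
from math import isqrt

def trial_division(n):
    """Smallest prime factor of n > 1; fast whenever some factor is small."""
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return d
    return n   # n itself is prime

def fermat(n):
    """Fermat's method for odd n; fast when n = p*q with p and q close together."""
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

print(trial_division(3 * 19 * 104729))   # finds the small factor 3 immediately
print(fermat(10007 * 10009))             # close factors found after one step
```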
## Current state of the art
Among the -bit numbers, the most difficult to factor in practice using existing algorithms are those semiprimes whose factors are of similar size. For this reason, these are the integers used in cryptographic applications.
In 2019, a 240-digit (795-bit) number (RSA-240) was factored by a team of researchers including Paul Zimmermann, utilizing approximately 900 core-years of computing power. These researchers estimated that a 1024-bit RSA modulus would take about 500 times as long.
|
https://en.wikipedia.org/wiki/Integer_factorization
|
passage: ### Plancherel theorem and Parseval's theorem
Let f(x) and g(x) be integrable, and let f̂(ξ) and ĝ(ξ) be their Fourier transforms. If f(x) and g(x) are also square-integrable, then the Parseval formula follows:
$$
\langle f, g\rangle_{L^{2}} = \int_{-\infty}^{\infty} f(x) \overline{g(x)} \,dx = \int_{-\infty}^\infty \hat{f}(\xi) \overline{\hat{g}(\xi)} \,d\xi,
$$
where the bar denotes complex conjugation.
The Plancherel theorem, which follows from the above, states that
$$
\|f\|^2_{L^{2}} = \int_{-\infty}^\infty \left| f(x) \right|^2\,dx = \int_{-\infty}^\infty \left| \hat{f}(\xi) \right|^2\,d\xi.
$$
Plancherel's theorem makes it possible to extend the Fourier transform, by a continuity argument, to a unitary operator on L^2. On L^1 ∩ L^2, this extension agrees with the original Fourier transform defined on L^1, thus enlarging the domain of the Fourier transform to L^1 + L^2 (and consequently to L^p for 1 ≤ p ≤ 2). Plancherel's theorem has the interpretation in the sciences that the Fourier transform preserves the energy of the original quantity. The terminology of these formulas is not quite standardised.
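The discrete analogue of Plancherel's theorem is easy to verify with NumPy's FFT; this is a sketch of the discrete case only, and the 1/N factor reflects numpy.fft's normalization convention rather than anything in the continuous statement above:
```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
X = np.fft.fft(x)

# Discrete Parseval/Plancherel: sum |x[n]|^2 == (1/N) * sum |X[k]|^2
lhs = np.sum(np.abs(x) ** 2)
rhs = np.sum(np.abs(X) ** 2) / x.size
print(lhs, rhs, abs(lhs - rhs))   # the difference is at round-off level
```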
|
https://en.wikipedia.org/wiki/Fourier_transform
|
passage: Python: `subprocess.Popen(command)`, `subprocess.call(["program", "arg1", "arg2", ...])`, `os.execv(path, args)`. S-Lang: `system(command)`. Fortran: `CALL EXECUTE_COMMAND_LINE (COMMAND «, WAIT» «, EXITSTAT» «, CMDSTAT» «, CMDMSG»)`. Windows PowerShell: `[Diagnostics.Process]::Start(command)` or `«Invoke-Item »program arg1 arg2 ...`. Bash shell: `output=`command`` or `output=$(command)`, and `program arg1 arg2 ...`.
Fortran 2008 or newer.
|
https://en.wikipedia.org/wiki/Comparison_of_programming_languages_%28basic_instructions%29
|
passage: There are many other types of horizons. In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon). Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with a semiclassical radiation known as Unruh radiation.
### Singularities
Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel. But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values. Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole, or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole. The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well.
|
https://en.wikipedia.org/wiki/General_relativity
|
passage: $$
\begin{array}{rl}
 & = H_a \left(j \frac{2}{T} \cdot \frac{ \sin(\omega_d T/2) }{ \cos(\omega_d T/2) }\right) \\
 & = H_a \left(j \frac{2}{T} \cdot \tan \left( \omega_d T/2 \right) \right)
\end{array}
$$
|
https://en.wikipedia.org/wiki/Bilinear_transform
|
passage: In Version 3 Unix, the interface is extended by calling compar(III), with an interface identical to modern-day memcmp. This function may be overridden by the user's program to implement any kind of ordering, in an equivalent fashion to the `compar` argument to standard qsort (though program-global, of course).
Version 4 Unix adds a C implementation, with an interface equivalent to the standard.
It was rewritten in 1983 for the Berkeley Software Distribution. The function was standardized in ANSI C (1989).
The assembly implementation is removed in Version 6 Unix.
In 1991, Bell Labs employees observed that AT&T and BSD versions of qsort would consume quadratic time for some simple inputs. Thus Jon Bentley and Douglas McIlroy engineered a new faster and more robust implementation. McIlroy would later produce a more complex quadratic-time input, termed AntiQuicksort, in 1998. This function constructs adversarial data on-the-fly.
## Example
The following piece of C code shows how to sort a list of integers using qsort.
|
https://en.wikipedia.org/wiki/Qsort
|
passage: Elastin is the key protein of the extracellular matrix and is the main component of the elastic fibres. Elastin gives the necessary elasticity and resilience required for the persistent stretching involved in breathing, known as lung compliance. It is also responsible for the elastic recoil needed. Elastin is more concentrated in areas of high stress such as the openings of the alveoli, and alveolar junctions. The connective tissue links all the alveoli to form the lung parenchyma which has a sponge-like appearance. The alveoli have interconnecting air passages in their walls known as the pores of Kohn.
#### Respiratory epithelium
All of the lower respiratory tract, including the trachea, bronchi, and bronchioles, is lined with respiratory epithelium. This is a ciliated epithelium interspersed with goblet cells, which produce mucin, the main component of mucus; ciliated cells; basal cells; and, in the terminal bronchioles, club cells with actions similar to basal cells, as well as macrophages. The epithelial cells and the submucosal glands throughout the respiratory tract secrete airway surface liquid (ASL), the composition of which is tightly regulated and determines how well mucociliary clearance works.
Pulmonary neuroendocrine cells are found throughout the respiratory epithelium including the alveolar epithelium, though they only account for around 0.5 percent of the total epithelial population. PNECs are innervated airway epithelial cells that are particularly focused at airway junction points.
|
https://en.wikipedia.org/wiki/Lung
|
passage: As the load increases, the machine records the corresponding deformation, plotting a stress-strain curve that would look similar to the following:
The compressive strength of the material corresponds to the stress at the red point shown on the curve. In a compression test, there is a linear region where the material follows Hooke's law. Hence, for this region,
$$
\sigma = E\varepsilon,
$$
where, this time, refers to the Young's modulus for compression. In this region, the material deforms elastically and returns to its original length when the stress is removed.
This linear region terminates at what is known as the yield point. Above this point the material behaves plastically and will not return to its original length once the load is removed.
There is a difference between the engineering stress and the true stress. By its basic definition the uniaxial stress is given by:
$$
\acute\sigma = \frac{F}{A},
$$
where F is the applied load [N] and A is the area [m2].
As stated, the area of the specimen varies during compression. In reality, therefore, the area is some function of the applied load. Indeed, stress is defined as the force divided by the area at the start of the experiment. This is known as the engineering stress, and is defined by
$$
\sigma_e = \frac{F}{A_0},
$$
where A_0 is the original specimen area [m2].
|
https://en.wikipedia.org/wiki/Compressive_strength
|
passage: Another method of handling the temperature sensitivity was to enclose the magnetic core "stack" in a temperature-controlled oven. Examples of this are the heated-air core memory of the IBM 1620 (which could take up to 30 minutes to reach operating temperature, about and the heated-oil-bath core memory of the IBM 7090, early IBM 7094s, and IBM 7030. Core was heated instead of cooled because the primary requirement was a consistent temperature, and it was easier (and cheaper) to maintain a constant temperature well above room temperature than one at or below it.
#### Diagnosing
Diagnosing hardware problems in core memory required time-consuming diagnostic programs to be run. While a quick test checked if every bit could contain a one and a zero, these diagnostics tested the core memory with worst-case patterns and had to run for several hours. As most computers had just a single core-memory board, these diagnostics also moved themselves around in memory, making it possible to test every bit. An advanced test was called a "Shmoo test" in which the half-select currents were modified along with the time at which the sense line was tested ("strobed"). The data plot of this test seemed to resemble a cartoon character called "Shmoo," and the name stuck. In many occasions, errors could be resolved by gently tapping the printed circuit board with the core array on a table. This slightly changed the positions of the cores along the wires running through them, and could fix the problem. The procedure was seldom needed, as core memory proved to be very reliable compared to other computer components of the day.
|
https://en.wikipedia.org/wiki/Magnetic-core_memory
|
passage: The levitation illusion can be enhanced by optimizing the curve of the lower edge so the shadow line remains high as the disk settles. A mirror can further enhance the effect by hiding the support surface and showing separation between
moving disk surface and mirror image.
Disk imperfections, seen in shadow, that could hamper the illusion, can be hidden in a skin pattern that blurs under motion.
#### US Quarter example
A clean US Quarter (minted 1970–2022), rotating on a flat hand mirror, viewed from the side near the mirror surface, demonstrates the phenomenon for a few seconds.
Lit by a point source directly over the center of the soon-to-settle quarter, side ridges are illuminated when the rotation axis is away from the viewer, and in shadow when the rotation axis is toward the viewer. Vibration blurs the ridges, and heads or tails is too foreshortened to show rotation.
## History of research
### Moffatt
In the early 2000s, research was sparked by an article in the April 20, 2000 edition of Nature, where Keith Moffatt showed that viscous dissipation in the thin layer of air between the disk and the table would be sufficient to account for the observed abruptness of the settling process. He also showed that the motion concluded in a finite-time singularity. His first theoretical hypothesis was contradicted by subsequent research, which showed that rolling friction is actually the dominant factor.
|
https://en.wikipedia.org/wiki/Euler%27s_Disk
|
passage: ## Integral and weak forms
Conservation equations can usually also be expressed in integral form: the advantage of the latter is substantially that it requires less smoothness of the solution, which paves the way to weak form, extending the class of admissible solutions to include discontinuous solutions. By integrating in any space-time domain the current density form in 1-D space:
$$
y_t + j_x (y)= 0
$$
and by using Green's theorem, the integral form is:
$$
\int_{- \infty}^\infty y \, dx + \int_0^\infty j (y) \, dt = 0
$$
In a similar fashion, for the scalar multidimensional space, the integral form is:
$$
\oint \left[y \, d^N r + j (y) \, dt\right] = 0
$$
where the line integration is performed along the boundary of the domain, in an anticlockwise manner.
Moreover, by defining a test function φ(r,t) continuously differentiable both in time and space with compact support, the weak form can be obtained pivoting on the initial condition.
|
https://en.wikipedia.org/wiki/Conservation_law
|
passage: Straight lines, geodesics (the shortest path between the points contained within it) are modeled by either half-circles whose origin is on the x-axis, or straight vertical rays orthogonal to the x-axis.
A circle (curve equidistant from a central point) with center
$$
\langle x, y \rangle
$$
and radius
$$
r
$$
is modeled by a Euclidean circle with center ⟨x, y cosh r⟩ and radius y sinh r.
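As a quick worked example of this correspondence (using the formulas just stated), take the hyperbolic circle with center ⟨0, 1⟩ and hyperbolic radius r = ln 2. Since
$$
\cosh(\ln 2) = \tfrac{5}{4}, \qquad \sinh(\ln 2) = \tfrac{3}{4},
$$
it is modeled by the Euclidean circle with center ⟨0, 5/4⟩ and radius 3/4.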
A hypercycle (a curve equidistant from a straight line, its axis) is modeled by either a circular arc which intersects the x-axis at the same two ideal points as the half-circle which models its axis but at an acute or obtuse angle, or a straight line which intersects the x-axis at the same point as the vertical line which models its axis, but at an acute or obtuse angle.
A horocycle (a curve whose normals all converge asymptotically in the same direction, its center) is modeled by either a circle tangent to the x-axis (but excluding the ideal point of intersection, which is its center), or a line parallel to the x-axis, in which case the center is the ideal point at infinity.
|
https://en.wikipedia.org/wiki/Poincar%C3%A9_half-plane_model
|
passage: In 1908 Shuman formed the Sun Power Company with the intent of building larger solar power plants. He, along with his technical advisor A.S.E. Ackermann and British physicist Sir Charles Vernon Boys, developed an improved system using mirrors to reflect solar energy upon collector boxes, increasing heating capacity to the extent that water could now be used instead of ether. Shuman then constructed a full-scale steam engine powered by low-pressure water, enabling him to patent the entire solar engine system by 1912.
Shuman built the world's first solar thermal power station in Maadi, Egypt, between 1912 and 1913. His plant used parabolic troughs to power an engine that pumped water from the Nile River to adjacent cotton fields. Although the outbreak of World War I and the discovery of cheap oil in the 1930s discouraged the advancement of solar energy, Shuman's vision and basic design were resurrected in the 1970s with a new wave of interest in solar thermal energy. In 1916 Shuman was quoted in the media advocating the utilization of solar energy.
### Water heating
Solar hot water systems use sunlight to heat water. In middle geographical latitudes (between 40 degrees north and 40 degrees south), 60 to 70% of domestic hot water use can be provided by solar heating systems.
|
https://en.wikipedia.org/wiki/Solar_energy
|
passage: As before, we have the two constraint equations for the top row and right column:
u + a + b + c + v = 0
v + d + e + f + u* = 0.
Multiple solutions are possible. The standard procedure is to first try to determine the corner cells, after which we will try to determine the rest of the border.
There are 28 ways of choosing two numbers from the set of 8 bone numbers for the corner cells u and v. However, not all pairs are admissible. Among the 28 pairs, 16 pairs are made of an even and an odd number, 6 pairs have both as even numbers, while 6 pairs have them both as odd numbers.
We can prove that the corner cells u and v cannot have an even and an odd number. This is because if this were so, then the sums u + v and v + u* would be odd, and since 0 is an even number, the sums a + b + c and d + e + f would have to be odd as well. The only way that the sum of three integers will result in an odd number is when 1) two of them are even and one is odd, or 2) when all three are odd. Since the corner cells are assumed to be one odd and one even, neither of these two cases is compatible with the fact that we have only 3 even and 3 odd bone numbers at our disposal. This proves that u and v cannot have different parity. This eliminates 16 possibilities.
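This parity argument is easy to confirm by brute force. The sketch below is illustrative only: it assumes a hypothetical bone-number set {±1, ±2, ±3, ±4} (four even and four odd values) and the relation u* = −u, which may differ from the set used earlier in the text; it simply checks that no mixed-parity corner pair admits a valid split of the remaining numbers.

```python
from itertools import combinations

# Hypothetical bone numbers for illustration: four even and four odd values.
BONES = [-4, -3, -2, -1, 1, 2, 3, 4]

def admissible(u, v):
    """Return True if corners u, v admit triples a+b+c = -(u+v) and d+e+f = -(v+u*),
    with u* = -u, drawn from the six remaining bone numbers."""
    u_star = -u
    rest = [b for b in BONES if b not in (u, v)]
    for triple in combinations(rest, 3):
        if sum(triple) != -(u + v):
            continue
        others = [b for b in rest if b not in triple]
        if sum(others) == -(v + u_star):
            return True
    return False

mixed = [(u, v) for u, v in combinations(BONES, 2) if (u + v) % 2 != 0]
print(len(mixed), "mixed-parity corner pairs")   # 16, matching the count in the text
print(any(admissible(u, v) for u, v in mixed))   # False: none of them are admissible
```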
Using similar type reasoning we can also draw some conclusions about the sets {a, b, c} and {d, e, f}.
|
https://en.wikipedia.org/wiki/Magic_square
|
passage: By reducing g by f, one obtains a new polynomial k such that
$$
I=\langle f,k\rangle:
$$
$$
k=g - xf= xy -x.
$$
Neither f nor k is reducible by the other, but xk is reducible by f, which gives another polynomial h in I:
$$
h=xk-(y-1) f= y^2 -y.
$$
Under lexicographic ordering with
$$
x>y
$$
we have
$$
\operatorname{lt}(f)=x^2, \qquad \operatorname{lt}(k)=xy, \qquad \operatorname{lt}(h)=y^2.
$$
As f, k, and h belong to I, and none of them is reducible by the others, none of
$$
\{f,k\},
$$
$$
\{f,h\},
$$
and
$$
\{h,k\}
$$
is a Gröbner basis of I.
On the other hand, {f, k, h} is a Gröbner basis of I, since the S-polynomials
$$
\begin{align}
yf-xk&=y(x^2-y)-x(xy-x)=f-h\\
yk-xh&=y(xy-x)-x(y^2-y)=0\\
y^2f-x^2h&= y(yf-xk)+x(yk-xh)
\end{align}
$$
can be reduced to zero by f, k, and h.
The method that has been used here for finding k and h, and proving that {f, k, h} is a Gröbner basis, is a direct application of Buchberger's algorithm.
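This computation can be checked with a computer algebra system. The following sketch uses SymPy (an assumption; any system with a Gröbner basis routine would do) to recompute the basis under the same lexicographic order with x > y. Since I = ⟨f, k⟩, the generator g from earlier in the text is not needed:

```python
from sympy import symbols, groebner

x, y = symbols('x y')

f = x**2 - y     # the generator f from the text
k = x*y - x      # k = g - x*f, which together with f generates I

# Lexicographic order with x > y, as in the example.
G = groebner([f, k], x, y, order='lex')
print(G)   # the basis should contain x**2 - y, x*y - x, and y**2 - y
```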
|
https://en.wikipedia.org/wiki/Gr%C3%B6bner_basis
|
passage: For example, Adobe RGB and sRGB are two different absolute color spaces, both based on the RGB color model. When defining a color space, the usual reference standard is the CIELAB or CIEXYZ color spaces, which were specifically designed to encompass all colors the average human can see.
Since "color space" identifies a particular combination of the color model and the mapping function, the word is often used informally to identify a color model. However, even though identifying a color space automatically identifies the associated color model, this usage is incorrect in a strict sense. For example, although several specific color spaces are based on the RGB color model, there is no such thing as the singular RGB color space.
## History
In 1802, Thomas Young postulated the existence of three types of photoreceptors (now known as cone cells) in the eye, each of which was sensitive to a particular range of visible light. Hermann von Helmholtz developed the Young–Helmholtz theory further in 1850: that the three types of cone photoreceptors could be classified as short-preferring (blue), middle-preferring (green), and long-preferring (red), according to their response to the wavelengths of light striking the retina. The relative strengths of the signals detected by the three types of cones are interpreted by the brain as a visible color. But it is not clear that they thought of colors as being points in color space.
The color-space concept was likely due to Hermann Grassmann, who developed it in two stages. First, he developed the idea of vector space, which allowed the algebraic representation of geometric concepts in n-dimensional space.
|
https://en.wikipedia.org/wiki/Color_space
|
passage: While these were successfully used by Heinrich Scherk in 1830 to derive his surfaces, they were generally regarded as practically unusable. Catalan proved in 1842/43 that the helicoid is the only ruled minimal surface.
Progress had been fairly slow until the middle of the century when the Björling problem was solved using complex methods. The "first golden age" of minimal surfaces began. Schwarz found the solution of the Plateau problem for a regular quadrilateral in 1865 and for a general quadrilateral in 1867 (allowing the construction of his periodic surface families) using complex methods. Weierstrass and Enneper developed more useful representation formulas, firmly linking minimal surfaces to complex analysis and harmonic functions. Other important contributions came from Beltrami, Bonnet, Darboux, Lie, Riemann, Serret and Weingarten.
Between 1925 and 1950 minimal surface theory revived, now mainly aimed at nonparametric minimal surfaces. The complete solution of the Plateau problem by Jesse Douglas and Tibor Radó was a major milestone. Bernstein's problem and Robert Osserman's work on complete minimal surfaces of finite total curvature were also important.
Another revival began in the 1980s. One cause was the discovery in 1982 by Celso Costa of a surface that disproved the conjecture that the plane, the catenoid, and the helicoid are the only complete embedded minimal surfaces in
$$
\R^3
$$
of finite topological type.
|
https://en.wikipedia.org/wiki/Minimal_surface
|
passage: Let T : X → X be a map on a complete non-empty metric space. Then, for example, some generalizations of the Banach fixed-point theorem are:
- Assume that some iterate Tn of T is a contraction. Then T has a unique fixed point.
- Assume that for each n, there exists cn such that d(Tn(x), Tn(y)) ≤ cn d(x, y) for all x and y, and that
$$
\sum\nolimits_n c_n <\infty.
$$
Then T has a unique fixed point.
In applications, the existence and uniqueness of a fixed point often can be shown directly with the standard Banach fixed point theorem, by a suitable choice of the metric that makes the map T a contraction. Indeed, the above result by Bessaga strongly suggests looking for such a metric. See also the article on fixed point theorems in infinite-dimensional spaces for generalizations.
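As a small, self-contained illustration (the map below is not discussed in the text, but it is a standard example of a contraction): T(x) = cos x maps [0, 1] into itself with Lipschitz constant sin 1 < 1, so the theorem guarantees a unique fixed point, which plain iteration finds.

```python
import math

def fixed_point(T, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{k+1} = T(x_k) until successive iterates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge within max_iter steps")

# cos maps [0, 1] into itself and |cos'(x)| = |sin x| <= sin(1) < 1 there.
root = fixed_point(math.cos, 0.5)
print(root)                        # about 0.7390851332, the unique fixed point
print(abs(math.cos(root) - root))  # essentially zero, confirming T(root) = root
```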
In a non-empty compact metric space, any function
$$
T
$$
satisfying
$$
d(T(x),T(y))<d(x,y)
$$
for all distinct
$$
x,y
$$
, has a unique fixed point. The proof is simpler than the Banach theorem, because the function
$$
d(T(x),x)
$$
is continuous, and therefore assumes a minimum, which is easily shown to be zero.
|
https://en.wikipedia.org/wiki/Banach_fixed-point_theorem
|
passage: The logic of this testing is what allows this method of inquiry to be reasoned deductively. The formulated hypothesis is assumed to be 'true', and from that 'true' statement implications are inferred. If the following tests show the implications to be false, it follows that the hypothesis was false also. If tests show the implications to be true, new insights will be gained. It is important to be aware that a positive test here will at best strongly support, not definitively prove, the tested hypothesis: from A ⇒ B one cannot validly infer A from B; only the contrapositive ¬B ⇒ ¬A is a valid inference. Positive outcomes, however, as Hempel put it, provide "at least some support, some corroboration or confirmation for it". This is why Popper insisted that hypotheses put forward be falsifiable, as successful tests imply very little otherwise. As Gillies put it, "successful theories are those that survive elimination through falsification".
Deductive reasoning in this mode of inquiry is sometimes replaced by abductive reasoning—the search for the most plausible explanation via logical inference. This is the case, for example, in biology, where general laws are few, since valid deductions rely on solid presuppositions.
### Inductive method
The inductivist approach to deriving scientific truth first rose to prominence with Francis Bacon and particularly with Isaac Newton and those who followed him. After the establishment of the HD-method, however, it was often set aside as something of a "fishing expedition".
|
https://en.wikipedia.org/wiki/Scientific_method
|
passage: ### Orthogonals, quotients, and subspaces
If
$$
(X, Y, b)
$$
is a pairing then for any subset
$$
S
$$
of
$$
X
$$
:
and this set is -closed;
;
- Thus if is a -closed vector subspace of then
If is a family of -closed vector subspaces of then
If is a family of subsets of then
If
$$
X
$$
is a normed space then under the canonical duality,
$$
S^{\perp}
$$
is norm closed in
$$
X^{\prime}
$$
and
$$
S^{\perp\perp}
$$
is norm closed in
$$
X.
$$
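For reference, the orthogonal appearing throughout is standardly defined, for a pairing (X, Y, b) and a subset S of X, by
$$
S^{\perp} = \{\, y \in Y : b(s, y) = 0 \text{ for all } s \in S \,\},
$$
and the double orthogonal is obtained by applying the same operation to the result, taken back in X.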
|
https://en.wikipedia.org/wiki/Dual_system
|
passage: When negation or polarity indicators on both ends of a connection match, there is no logic negation in that path (effectively, bubbles "cancel"), making it easier to follow logic states from one symbol to the next. This is commonly seen in real logic diagrams – thus the reader must not get into the habit of associating the shapes exclusively as OR or AND shapes, but also take into account the bubbles at both inputs and outputs in order to determine the "true" logic function indicated.
A De Morgan symbol can show a gate's primary logical purpose more clearly, along with the polarity of its nodes that are considered in the "signaled" (active, on) state. Consider the simplified case where a two-input NAND gate is used to drive a motor when either of its inputs is brought low by a switch. The "signaled" state (motor on) occurs when either one OR the other switch is on. Unlike a regular NAND symbol, which suggests AND logic, the De Morgan version, a two negative-input OR gate, correctly shows that OR is of interest. The regular NAND symbol has a bubble at the output and none at the inputs (the opposite of the states that will turn the motor on), but the De Morgan symbol shows both inputs and output in the polarity that will drive the motor.
De Morgan's theorem is most commonly used to implement logic gates as combinations of only NAND gates, or as combinations of only NOR gates, for economic reasons.
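The equivalence behind the two symbols can be spot-checked with a short truth-table script (a generic illustration, not tied to any particular gate library):

```python
from itertools import product

def nand(a, b):
    return not (a and b)

def negated_input_or(a, b):
    # De Morgan equivalent: an OR gate with both inputs negated.
    return (not a) or (not b)

# Exhaustively verify that the two forms agree on all four input combinations.
for a, b in product([False, True], repeat=2):
    assert nand(a, b) == negated_input_or(a, b)
    print(a, b, nand(a, b))
```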
|
https://en.wikipedia.org/wiki/Logic_gate
|
passage: The set of
$$
n \times n
$$
circulant matrices forms an
$$
n
$$
-dimensional vector space with respect to addition and scalar multiplication. This space can be interpreted as the space of functions on the cyclic group of order
$$
n
$$
,
$$
C_n
$$
, or equivalently as the group ring of
$$
C_n
$$
.
- Circulant matrices form a commutative algebra, since for any two given circulant matrices
$$
A
$$
and
$$
B
$$
, the sum
$$
A + B
$$
is circulant, the product
$$
AB
$$
is circulant, and
$$
AB = BA
$$
.
- For a nonsingular circulant matrix
$$
A
$$
, its inverse
$$
A^{-1}
$$
is also circulant. For a singular circulant matrix, its Moore–Penrose pseudoinverse
$$
A^+
$$
is circulant.
- The discrete Fourier transform matrix of order
$$
n
$$
is defined by
$$
F_n = (f_{jk}) \text{ with } f_{jk} = e^{-2\pi i jk/n}, \quad \text{for } 0 \leq j,k \leq n-1.
$$
There are important connections between circulant matrices and the DFT matrices.
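These algebraic properties are easy to verify numerically. The sketch below (a minimal check, assuming NumPy and SciPy are available) confirms that the product of two random circulant matrices is again circulant, that multiplication commutes, and that the DFT matrix diagonalizes a circulant matrix:

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(0)
n = 5
A = circulant(rng.standard_normal(n))   # circulant matrix built from its first column
B = circulant(rng.standard_normal(n))

# The product of circulant matrices is circulant, and multiplication commutes.
AB = A @ B
assert np.allclose(AB, circulant(AB[:, 0]))
assert np.allclose(AB, B @ A)

# The DFT matrix diagonalizes every circulant matrix:
# F A F^{-1} is diagonal, with diagonal entries given by the DFT of the first column.
F = np.fft.fft(np.eye(n))               # DFT matrix of order n
D = F @ A @ np.linalg.inv(F)
assert np.allclose(D, np.diag(np.fft.fft(A[:, 0])))
print("circulant properties verified")
```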
|
https://en.wikipedia.org/wiki/Circulant_matrix
|