passage: Symplectic field theory provides invariants of Legendrian submanifolds called relative contact homology that can sometimes distinguish distinct Legendrian submanifolds that are topologically identical (i.e. smoothly isotopic).
Reeb vector field
If α is a contact form for a given contact structure, the Reeb vector field R can be defined as the unique element of the (one-dimensional) kernel of dα such that α(R) = 1. If a contact manifold arises as a constant-energy hypersurface inside a symplectic manifold, then the Reeb vector field is the restriction to the submanifold of the Hamiltonian vector field associated to the energy function. (The restriction yields a vector field on the contact hypersurface because the Hamiltonian vector field preserves energy levels.)
The dynamics of the Reeb field can be used to study the structure of the contact manifold or even the underlying manifold using techniques of Floer homology such as symplectic field theory and, in three dimensions, embedded contact homology. Different contact forms whose kernels give the same contact structure will yield different Reeb vector fields, whose dynamics are in general very different. The various flavors of contact homology depend a priori on the choice of a contact form, and construct algebraic structures from the closed trajectories of their Reeb vector fields; however, these algebraic structures turn out to be independent of the contact form, i.e. they are invariants of the underlying contact structure, so that in the end, the contact form may be seen as an auxiliary choice.
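As a concrete check of the definition, the following minimal sketch computes the Reeb vector field of the standard contact form α = dz − y dx on R³ with sympy; the setup and names are ours, and the expected answer is R = ∂/∂z.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)
a = sp.Matrix([-y, 0, 1])          # alpha = -y dx + 0 dy + 1 dz

# Antisymmetric matrix of d(alpha): (d alpha)_{ij} = d_i a_j - d_j a_i
dA = sp.Matrix(3, 3, lambda i, j: sp.diff(a[j], coords[i]) - sp.diff(a[i], coords[j]))

R = sp.Matrix(sp.symbols('R1 R2 R3'))
contraction = dA.T * R             # components of the interior product i_R d(alpha)
sol = sp.linsolve(list(contraction) + [a.dot(R) - 1], list(R))
print(sol)                         # {(0, 0, 1)}, i.e. R = d/dz
```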
|
https://en.wikipedia.org/wiki/Contact_geometry
|
passage: ascribe the permutations to Mathieu.
### Automorphism groups of Steiner systems
There exists up to equivalence a unique S(5,8,24) Steiner system W24 (the Witt design). The group M24 is the automorphism group of this Steiner system; that is, the set of permutations which map every block to some other block. The subgroups M23 and M22 are defined to be the stabilizers of a single point and two points respectively.
Similarly, there exists up to equivalence a unique S(5,6,12) Steiner system W12, and the group M12 is its automorphism group. The subgroup M11 is the stabilizer of a point.
W12 can be constructed from the affine geometry on the vector space GF(3) × GF(3), an S(2,3,9) system.
An alternative construction of W12 is the "Kitten" of R. T. Curtis.
An introduction to a construction of W24 via the Miracle Octad Generator of R. T. Curtis and Conway's analog for W12, the miniMOG, can be found in the book by Conway and Sloane.
### Automorphism groups on the Golay code
The group M24 is the permutation automorphism group of the extended binary Golay code W, i.e., the group of permutations on the 24 coordinates that map W to itself. All the Mathieu groups can be constructed as groups of permutations on the binary Golay code.
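To make the Golay-code description concrete, here is a minimal sketch (using a standard generator polynomial; this is an illustration, not the Miracle Octad Generator): it builds the [23,12,7] binary Golay code, extends it by a parity bit to the [24,12,8] code W, and counts the 759 weight-8 codewords, the octads of S(5,8,24).

```python
def poly_mul_gf2(a, b):
    # carry-less product of binary polynomials encoded as integers
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

G = 0b110001110101   # g(x) = x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1
codewords = [poly_mul_gf2(m, G) for m in range(1 << 12)]   # the [23,12,7] Golay code
# Extend each word with an overall parity bit to get the [24,12,8] code W.
ext = [c | (bin(c).count("1") & 1) << 23 for c in codewords]
weights = [bin(c).count("1") for c in ext]
print(min(w for w in weights if w > 0))          # 8: minimum distance
print(sum(1 for w in weights if w == 8))         # 759: the octads of S(5,8,24)
```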
M12 has index 2 in its automorphism group, and M12:2 happens to be isomorphic to a subgroup of M24.
|
https://en.wikipedia.org/wiki/Mathieu_group
|
passage: ### IGARCH
Integrated Generalized Autoregressive Conditional Heteroskedasticity (IGARCH) is a restricted version of the GARCH model, where the persistent parameters sum to one, which introduces a unit root into the GARCH process. The condition for this is
$$
\sum^p_{i=1} ~\beta_{i} +\sum_{i=1}^q~\alpha_{i} = 1
$$
.
### EGARCH
The exponential generalized autoregressive conditional heteroskedastic (EGARCH) model of Nelson (1991) is another form of the GARCH model. Formally, an EGARCH(p,q):
$$
\log\sigma_{t}^2=\omega+\sum_{k=1}^{q}\beta_{k}g(Z_{t-k})+\sum_{k=1}^{p}\alpha_{k}\log\sigma_{t-k}^{2}
$$
where
$$
g(Z_{t})=\theta Z_{t}+\lambda(|Z_{t}|-E(|Z_{t}|))
$$
,
$$
\sigma_{t}^{2}
$$
is the conditional variance,
$$
\omega
$$
,
$$
\beta
$$
,
$$
\alpha
$$
,
$$
\theta
$$
and
$$
\lambda
$$
are coefficients.
$$
Z_{t}
$$
may be a standard normal variable or come from a generalized error distribution.
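A minimal simulation sketch of an EGARCH(1,1) process under this parameterization, with standard normal Z_t and E|Z| = sqrt(2/π); the parameter values are illustrative assumptions, not estimates:

```python
import numpy as np

# log sigma_t^2 = omega + beta1 * g(Z_{t-1}) + alpha1 * log sigma_{t-1}^2,
# with g(Z) = theta*Z + lam*(|Z| - E|Z|).
rng = np.random.default_rng(42)
T = 1000
omega, beta, alpha, theta, lam = -0.2, 0.3, 0.95, -0.1, 0.25   # alpha < 1: stationary log-variance

EabsZ = np.sqrt(2.0 / np.pi)
log_sig2 = np.empty(T)
r = np.empty(T)
log_sig2[0] = omega / (1.0 - alpha)       # unconditional mean (g has mean zero)
for t in range(T):
    z = rng.standard_normal()
    r[t] = np.exp(0.5 * log_sig2[t]) * z  # return with conditional s.d. sigma_t
    if t + 1 < T:
        g = theta * z + lam * (abs(z) - EabsZ)   # asymmetric news-impact term
        log_sig2[t + 1] = omega + beta * g + alpha * log_sig2[t]

print(r.std(), np.exp(0.5 * log_sig2).mean())
```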
|
https://en.wikipedia.org/wiki/Autoregressive_conditional_heteroskedasticity
|
passage: ## Definition
For two jointly distributed real-valued random variables
$$
X
$$
and
$$
Y
$$
with finite second moments, the covariance is defined as the expected value (or mean) of the product of their deviations from their individual expected values:
$$
\operatorname{cov}(X, Y) = \operatorname{E}{\big[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\big]}
$$
where
$$
\operatorname{E}[X]
$$
is the expected value of
$$
X
$$
, also known as the mean of
$$
X
$$
. The covariance is also sometimes denoted
$$
\sigma_{XY}
$$
or
$$
\sigma(X,Y)
$$
, in analogy to variance. By using the linearity property of expectations, this can be simplified to the expected value of their product minus the product of their expected values:
$$
\begin{align}
\operatorname{cov}(X, Y)
&= \operatorname{E}\left[\left(X - \operatorname{E}\left[X\right]\right) \left(Y - \operatorname{E}\left[Y\right]\right)\right] \\
&= \operatorname{E}\left[XY - X\operatorname{E}\left[Y\right] - \operatorname{E}\left[X\right]Y + \operatorname{E}\left[X\right]\operatorname{E}\left[Y\right]\right] \\
&= \operatorname{E}\left[XY\right] - \operatorname{E}\left[X\right]\operatorname{E}\left[Y\right].
\end{align}
$$
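A quick numerical sanity check of the two equivalent expressions (the sample data and the 0.5 coefficient are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 0.5 * x + rng.normal(size=100_000)     # correlated with x by construction

cov_def = np.mean((x - x.mean()) * (y - y.mean()))   # definition
cov_alt = np.mean(x * y) - x.mean() * y.mean()       # simplified form
print(cov_def, cov_alt, np.cov(x, y, bias=True)[0, 1])   # all ≈ 0.5
```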
|
https://en.wikipedia.org/wiki/Covariance
|
passage: Thus, in contrast to classical mechanics, not only does the stationary path contribute, but actually all virtual paths between the initial and the final point also contribute.
Path integral
In terms of the wave function in the position representation, the path integral formula reads as follows:
$$
\psi(x,t)=\frac{1}{Z}\int_{\mathbf{x}(0)=x}\mathcal{D}\mathbf{x}\, e^{iS[\mathbf{x},\dot{\mathbf{x}}]}\psi_0(\mathbf{x}(t))\,
$$
where
$$
\mathcal{D}\mathbf{x}
$$
denotes integration over all paths
$$
\mathbf{x}
$$
with
$$
\mathbf{x}(0)=x
$$
and where
$$
Z
$$
is a normalization factor. Here
$$
S
$$
is the action, given by
$$
S[\mathbf{x},\dot{\mathbf{x}}]=\int dt\, L(\mathbf{x}(t),\dot{\mathbf{x}}(t))
$$
### Free particle
The path integral representation gives the quantum amplitude to go from one point to another as an integral over all paths. For a free-particle action (for simplicity let m = 1, ħ = 1)
$$
S = \int \frac{\dot{x}^2}{2}\, \mathrm{d}t,
$$
the integral can be evaluated explicitly.
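The time-slicing idea behind the path integral can be checked numerically in its Euclidean (imaginary-time) version, where the oscillatory factor becomes a heat kernel; the grid, endpoints, and slice count below are illustrative:

```python
import numpy as np

# Composing N short-time heat kernels over intermediate points is the discrete
# analogue of integrating over all paths x(0)=a -> x(T)=b (m = 1, hbar = 1).
a, b, T, N = 0.0, 1.0, 1.0, 200
dt = T / N
x = np.linspace(-8, 8, 801)          # grid of intermediate positions
dx = x[1] - x[0]

def kernel(xi, xj, t):
    return np.exp(-(xi - xj) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

K = kernel(x[:, None], x[None, :], dt)   # one time slice as a matrix
prop = kernel(x, a, dt)                  # point source at a after one slice
for _ in range(N - 1):
    prop = (K @ prop) * dx               # integrate out each intermediate slice

i_b = np.argmin(np.abs(x - b))
print(prop[i_b], kernel(b, a, T))        # sliced result ≈ exact kernel
```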
|
https://en.wikipedia.org/wiki/Path_integral_formulation
|
passage: The antinodes of the waves align in a superposition.
### Circular polarization
If the medium is linear and allows multiple independent displacement directions for the same travel direction
$$
\widehat{d}
$$
, we can choose two mutually perpendicular directions of polarization, and express any wave linearly polarized in any other direction as a linear combination (mixing) of those two waves.
By combining two waves with the same frequency, velocity, and direction of travel, but with different phases and independent displacement directions, one obtains a circularly or elliptically polarized wave. In such a wave the particles describe circular or elliptical trajectories, instead of moving back and forth.
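A tiny numerical illustration of this superposition (the amplitudes and sampling are arbitrary): two perpendicular linear polarizations, a quarter period out of phase, trace a circle.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 9)
up_down   = np.sin(t)                    # linear polarization along y
side_side = np.cos(t)                    # along z, a quarter period ahead
radius = np.hypot(up_down, side_side)    # distance of the particle from the axis
print(np.allclose(radius, 1.0))          # True: a circular trajectory
```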
It may aid understanding to revisit the thought experiment with a taut string mentioned above. Notice that you can also launch waves on the string by moving your hand to the right and left instead of up and down. This is an important point. There are two independent (orthogonal) directions that the waves can move. (This is true for any two directions at right angles; up and down and right and left are chosen for clarity.) Any waves launched by moving your hand in a straight line are linearly polarized waves.
But now imagine moving your hand in a circle. Your motion will launch a spiral wave on the string. You are moving your hand simultaneously both up and down and side to side.
|
https://en.wikipedia.org/wiki/Transverse_wave
|
passage: In the Middle Ages, combinatorics continued to be studied, largely outside of the European civilization. The Indian mathematician Mahāvīra provided formulae for the number of permutations and combinations, and these formulas may have been familiar to Indian mathematicians as early as the 6th century CE. The philosopher and astronomer Rabbi Abraham ibn Ezra established the symmetry of binomial coefficients, while a closed formula was obtained later by the talmudist and mathematician Levi ben Gerson (better known as Gersonides), in 1321.
The arithmetical triangle—a graphical diagram showing relationships among the binomial coefficients—was presented by mathematicians in treatises dating as far back as the 10th century, and would eventually become known as Pascal's triangle. Later, in Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations.
During the Renaissance, together with the rest of mathematics and the sciences, combinatorics enjoyed a rebirth. Works of Pascal, Newton, Jacob Bernoulli and Euler became foundational in the emerging field. In modern times, the works of J.J. Sylvester (late 19th century) and Percy MacMahon (early 20th century) helped lay the foundation for enumerative and algebraic combinatorics.
### Graph theory
Graph theory also enjoyed an increase of interest at the same time, especially in connection with the four color problem.
|
https://en.wikipedia.org/wiki/Combinatorics
|
passage: The covariant and contravariant types of basis vector have identical directions for orthogonal curvilinear coordinate systems, but as usual have inverted units with respect to each other.
Note the following important equality:
$$
\mathbf{b}^i\cdot\mathbf{b}_j = \delta^i_j
$$
wherein
$$
\delta^i_j
$$
denotes the generalized Kronecker delta.
A vector v can be specified in terms of either basis, i.e.,
$$
\mathbf{v} = v^1\mathbf{b}_1 + v^2\mathbf{b}_2 + v^3\mathbf{b}_3 = v_1\mathbf{b}^1 + v_2\mathbf{b}^2 + v_3\mathbf{b}^3
$$
Using the Einstein summation convention, the basis vectors relate to the components by
$$
\mathbf{v}\cdot\mathbf{b}^i = v^k\mathbf{b}_k\cdot\mathbf{b}^i = v^k\delta^i_k = v^i
$$
$$
\mathbf{v}\cdot\mathbf{b}_i = v_k\mathbf{b}^k\cdot\mathbf{b}_i = v_k\delta_i^k = v_i
$$
and
$$
\mathbf{v}\cdot\mathbf{b}_i = v^k\mathbf{b}_k\cdot\mathbf{b}_i = g_{ki}v^k
$$
$$
\mathbf{v}\cdot\mathbf{b}^i = v_k\mathbf{b}^k\cdot\mathbf{b}^i = g^{ki}v_k
$$
where g is the metric tensor (see below).
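A short sympy sketch of these relations in plane polar coordinates (a two-dimensional example chosen for brevity): it builds the covariant basis, the metric, and the dual (contravariant) basis, and verifies b^i · b_j = δ^i_j.

```python
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)
pos = sp.Matrix([r * sp.cos(phi), r * sp.sin(phi)])

b_1 = pos.diff(r)                 # covariant basis vectors b_i = d(pos)/d(q^i)
b_2 = pos.diff(phi)
B = sp.Matrix.hstack(b_1, b_2)

g = sp.simplify(B.T * B)          # metric tensor g_ij = b_i . b_j
B_dual = B * g.inv()              # columns are the contravariant b^1, b^2

print(sp.simplify(B_dual.T * B))  # identity matrix: b^i . b_j = delta^i_j
print(g)                          # Matrix([[1, 0], [0, r**2]])
```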
|
https://en.wikipedia.org/wiki/Curvilinear_coordinates
|
passage: The Fast-Folding Algorithm (FFA) is a computational method primarily utilized in the domain of astronomy for detecting periodic signals.
FFA is designed to reveal repeating or cyclical patterns by "folding" data, which involves dividing the data set into numerous segments, aligning these segments to a common phase, and summing them together to enhance the signal of periodic events. This algorithm is particularly advantageous when dealing with non-uniformly sampled data or signals with a drifting period, that is, signals whose frequency or period drifts over time rather than remaining stable and consistent.
A quintessential application of FFA is in the detection and analysis of pulsars—highly magnetized, rotating neutron stars that emit beams of electromagnetic radiation. By employing FFA, astronomers can sift through noisy data to identify the regular pulses of radiation emitted by these celestial bodies. Moreover, the Fast-Folding Algorithm is instrumental in detecting long-period signals, which is often a challenge for other algorithms like the FFT (fast Fourier transform) that operate under the assumption of a constant frequency. Through the process of folding and summing data segments, FFA provides a robust mechanism for unveiling periodicities despite noisy observational data, thereby playing a pivotal role in advancing our understanding of pulsar properties and behaviors.
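The folding idea can be illustrated with a brute-force sketch (the real FFA reuses partial sums hierarchically to avoid this cost; the signal parameters below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_period = 20000, 137                 # samples; pulse period in samples
t = np.arange(n)
signal = (t % true_period == 0) * 5.0 + rng.normal(size=n)   # sparse pulses + noise

def fold_score(x, p):
    # Fold at trial period p: average samples sharing the same phase bin,
    # then score the folded profile by its peak above the baseline.
    profile = np.zeros(p)
    counts = np.zeros(p)
    np.add.at(profile, t % p, x)
    np.add.at(counts, t % p, 1)
    profile /= counts
    return profile.max() - np.median(profile)

best = max(range(100, 200), key=lambda p: fold_score(signal, p))
print(best)   # -> 137: the true period gives the sharpest folded profile
```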
|
https://en.wikipedia.org/wiki/Fast_folding_algorithm
|
passage: However, certain combinations of particular limiting values cannot be computed in this way, and knowing the limit of each function separately does not suffice to determine the limit of the combination. In these particular situations, the limit is said to take an indeterminate form, described by one of the informal expressions
$$
\frac 00,~ \frac{\infty}{\infty},~ 0\times\infty,~ \infty - \infty,~ 0^0,~ 1^\infty, \text{ or } \infty^0,
$$
among a wide variety of uncommon others, where each expression stands for the limit of a function constructed by an arithmetical combination of two functions whose limits respectively tend to 0, 1, or ∞ as indicated.
A limit taking one of these indeterminate forms might tend to zero, might tend to any finite value, might tend to infinity, or might diverge, depending on the specific functions involved. A limit which unambiguously tends to infinity, for instance
$$
\lim_{x \to 0} 1/x^2 = \infty,
$$
is not considered indeterminate. The term was originally introduced by Cauchy's student Moigno in the middle of the 19th century.
The most common example of an indeterminate form is the quotient of two functions each of which converges to zero. This indeterminate form is denoted by
$$
0/0
$$
.
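Three 0/0 quotients with different limits, evaluated with sympy, illustrate why the form alone does not determine the limit:

```python
import sympy as sp

x = sp.symbols('x')
# Each quotient has numerator and denominator tending to 0 as x -> 0.
for f in (sp.sin(x) / x, x**2 / x, x / x**3):
    print(f, '->', sp.limit(f, x, 0))
# sin(x)/x -> 1 (finite), x**2/x -> 0, x/x**3 -> oo
```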
|
https://en.wikipedia.org/wiki/Indeterminate_form
|
passage: the symmetric part of T is the symmetric product of the factors:
$$
v_1\odot v_2\odot\cdots\odot v_r := \frac{1}{r!}\sum_{\sigma\in\mathfrak{S}_r} v_{\sigma 1}\otimes v_{\sigma 2}\otimes\cdots\otimes v_{\sigma r}.
$$
In general we can turn Sym(V) into an algebra by defining the commutative and associative product ⊙. Given two tensors
$$
T_1\in\operatorname{Sym}^{k_1}(V)
$$
and
$$
T_2\in\operatorname{Sym}^{k_2}(V)
$$
, we use the symmetrization operator to define:
$$
T_1\odot T_2 = \operatorname{Sym}(T_1\otimes T_2)\quad\left(\in\operatorname{Sym}^{k_1+k_2}(V)\right).
$$
It can be verified (as is done by Kostrikin and Manin) that the resulting product is in fact commutative and associative. In some cases the operator is omitted: T1T2 = T1 ⊙ T2.
In some cases an exponential notation is used:
$$
v^{\odot k} = \underbrace{v \odot v \odot \cdots \odot v}_{k\text{ times}}=\underbrace{v \otimes v \otimes \cdots \otimes v}_{k\text{ times}}=v^{\otimes k}.
$$
where v is a vector.
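A numpy sketch of the symmetrization operator acting on decomposable tensors (the dimension and test vectors are arbitrary); the last check confirms the identity v^⊙k = v^⊗k above:

```python
import itertools
import numpy as np

def sym(T):
    # Average over all permutations of the index positions of a rank-r tensor.
    perms = list(itertools.permutations(range(T.ndim)))
    return sum(np.transpose(T, p) for p in perms) / len(perms)

rng = np.random.default_rng(0)
u, v, w = rng.normal(size=(3, 4))            # three vectors in R^4

S = sym(np.einsum('i,j,k->ijk', u, v, w))    # symmetric product u ⊙ v ⊙ w
print(np.allclose(S, np.transpose(S, (1, 0, 2))))   # True: symmetric in its indices

P = np.einsum('i,j,k->ijk', v, v, v)         # v ⊗ v ⊗ v is already symmetric,
print(np.allclose(sym(P), P))                # so v ⊙ v ⊙ v = v ⊗ v ⊗ v (True)
```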
|
https://en.wikipedia.org/wiki/Symmetric_tensor
|
passage: - The split form, EVIII (or E8(8)), which has maximal compact subgroup Spin(16)/(Z/2Z), fundamental group of order 2 (implying that it has a double cover, which is a simply connected Lie real group but is not algebraic, see below) and has trivial outer automorphism group.
- EIX (or E8(−24)), which has maximal compact subgroup E7 × SU(2)/(−1,−1), fundamental group of order 2 (again implying a double cover, which is not algebraic) and has trivial outer automorphism group.
For a complete list of real forms of simple Lie algebras, see the list of simple Lie groups.
## E8 as an algebraic group
By means of a Chevalley basis for the Lie algebra, one can define E8 as a linear algebraic group over the integers and, consequently, over any commutative ring and in particular over any field: this defines the so-called split (sometimes also known as "untwisted") form of E8. Over an algebraically closed field, this is the only form; however, over other fields, there are often many other forms, or "twists" of E8, which are classified in the general framework of Galois cohomology (over a perfect field k) by the set H1(k,Aut(E8)), which, because the Dynkin diagram of E8 (see below) has no automorphisms, coincides with H1(k,E8).
|
https://en.wikipedia.org/wiki/E8_%28mathematics%29
|
passage:
| Crystal family (lattice system) | Point group | Bravais lattices |
| --- | --- | --- |
| Orthorhombic (o) | D2h | oP, oS, oI, oF |
| Tetragonal (t) | D4h | tP, tI |
| Hexagonal (h), rhombohedral | D3d | hR |
| Hexagonal (h), hexagonal | D6h | hP |
| Cubic (c) | Oh | cP, cI, cF |
In geometry and crystallography, a Bravais lattice is a category of translative symmetry groups (also known as lattices) in three directions.
Such symmetry groups consist of translations by vectors of the form
R = n1a1 + n2a2 + n3a3,
where n1, n2, and n3 are integers and a1, a2, and a3 are three non-coplanar vectors, called primitive vectors.
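A minimal sketch generating lattice points from this defining formula (the fcc primitive vectors and the range of the integers n_i are illustrative):

```python
import itertools
import numpy as np

# Primitive vectors of a face-centered cubic (cF) lattice, as an example.
a1 = np.array([0.0, 0.5, 0.5])
a2 = np.array([0.5, 0.0, 0.5])
a3 = np.array([0.5, 0.5, 0.0])

points = [n1 * a1 + n2 * a2 + n3 * a3
          for n1, n2, n3 in itertools.product(range(-2, 3), repeat=3)]
print(len(points))   # 125 lattice points R for n_i in {-2, ..., 2}
```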
These lattices are classified by the space group of the lattice itself, viewed as a collection of points; there are 14 Bravais lattices in three dimensions; each belongs to one lattice system only. They represent the maximum symmetry a structure with the given translational symmetry can have.
All crystalline materials (not including quasicrystals) must, by definition, fit into one of these arrangements.
For convenience a Bravais lattice is depicted by a unit cell which is a factor 1, 2, 3, or 4 larger than the primitive cell. Depending on the symmetry of a crystal or other pattern, the fundamental domain is again smaller, up to a factor 48.
The Bravais lattices were studied by Moritz Ludwig Frankenheim in 1842, who found that there were 15 Bravais lattices. This was corrected to 14 by A. Bravais in 1848.
## In other dimensions
|
https://en.wikipedia.org/wiki/Crystal_system
|
passage: There are also studies in economics that look at the positives and negatives of pollination, focused on bees, and how the process affects the pollinators themselves.
## Process of pollination
Pollen germination has three stages: hydration, activation and pollen tube emergence. The pollen grain is severely dehydrated so that its mass is reduced, enabling it to be more easily transported from flower to flower. Germination only takes place after rehydration, ensuring that premature germination does not take place in the anther. Hydration allows the plasma membrane of the pollen grain to reform into its normal bilayer organization providing an effective osmotic membrane. Activation involves the development of actin filaments throughout the cytoplasm of the cell, which eventually become concentrated at the point from which the pollen tube will emerge. Hydration and activation continue as the pollen tube begins to grow.
In conifers, the reproductive structures are borne on cones. The cones are either pollen cones (male) or ovulate cones (female), but some species are monoecious and others dioecious. A pollen cone contains hundreds of microsporangia carried on (or borne on) reproductive structures called sporophylls. Spore mother cells in the microsporangia divide by meiosis to form haploid microspores that develop further by two mitotic divisions into immature male gametophytes (pollen grains).
|
https://en.wikipedia.org/wiki/Pollination
|
passage: In the simplest case, a shear stress τ, exerted by a force parallel to the surface of the droplet, is proportional to the rate of deformation or strain rate. Such a shear stress occurs if the fluid has a velocity gradient because the fluid is moving faster on one side than another. If the speed in the x direction varies with z, the tangential force in the x direction per unit area normal to the z direction is
$$
\sigma_{zx} = -\mu\frac{\partial v_x}{\partial z}\,,
$$
where μ is the viscosity. This is also a flux, or flow per unit area, of x-momentum through the surface.
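A quick numerical check of this relation for a linear velocity profile (a water-like viscosity and an arbitrary gap size):

```python
import numpy as np

mu, U, h = 1.0e-3, 0.1, 0.01        # viscosity (Pa*s), plate speed (m/s), gap (m)
z = np.linspace(0.0, h, 101)
v_x = U * z / h                     # linear shear profile
sigma_zx = -mu * np.gradient(v_x, z)
print(sigma_zx[0])                  # -mu*U/h = -0.01 Pa everywhere
```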
Including the effect of viscosity, the momentum balance equations for the incompressible flow of a Newtonian fluid are
$$
\rho \frac{D \mathbf{v}}{D t} = -\boldsymbol{\nabla} p + \mu\nabla^2 \mathbf{v} + \rho\mathbf{g}.\,
$$
These are known as the Navier–Stokes equations.
The momentum balance equations can be extended to more general materials, including solids. For each surface with normal in direction i and force in direction j, there is a stress component σij. The nine components make up the Cauchy stress tensor σ, which includes both pressure and shear.
|
https://en.wikipedia.org/wiki/Momentum
|
passage: There are an unlimited number of different types of conceivable hypercomputers, including:
- Turing's original oracle machines, defined by Turing in 1939.
- A real computer (a sort of idealized analog computer) can perform hypercomputation if physics admits general real variables (not just computable reals), and these are in some way "harnessable" for useful (rather than random) computation. This might require quite bizarre laws of physics (for example, a measurable physical constant with an oracular value, such as Chaitin's constant), and would require the ability to measure the real-valued physical value to arbitrary precision, though standard physics makes such arbitrary-precision measurements theoretically infeasible.
- Similarly, a neural net that somehow had Chaitin's constant exactly embedded in its weight function would be able to solve the halting problem, but is subject to the same physical difficulties as other models of hypercomputation based on real computation.
- Certain fuzzy logic-based "fuzzy Turing machines" can, by definition, accidentally solve the halting problem, but only because their ability to solve the halting problem is indirectly assumed in the specification of the machine; this tends to be viewed as a "bug" in the original specification of the machines.
|
https://en.wikipedia.org/wiki/Hypercomputation
|
passage: ## Hamiltonians and flow
There are N − 1 Hamiltonians, H1, …, HN−1, generating an incompressible flow,
$$
\frac{d}{dt}f = \{f, H_1, \ldots, H_{N-1}\},
$$
The generalized phase-space velocity is divergenceless, enabling Liouville's theorem. The case N = 2 reduces to a Poisson manifold, and conventional Hamiltonian mechanics.
For larger even N, the Hamiltonians identify with the maximal number of independent invariants of motion (cf. Conserved quantity) characterizing a superintegrable system that evolves in N-dimensional phase space. Such systems are also describable by conventional Hamiltonian dynamics; but their description in the framework of Nambu mechanics is substantially more elegant and intuitive, as all invariants enjoy the same geometrical status as the Hamiltonian: the trajectory in phase space is the intersection of the hypersurfaces specified by these invariants. Thus, the flow is perpendicular to all gradients of these Hamiltonians, whence parallel to the generalized cross product specified by the respective Nambu bracket.
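The standard example is the free rigid body (Euler top) in the phase space R³ of angular momenta, with H1 the kinetic energy and H2 = |L|²/2; the sketch below integrates the Nambu flow dL/dt = ∇H2 × ∇H1, which reproduces Euler's equations (the moments of inertia and step sizes are illustrative):

```python
import numpy as np

I1, I2, I3 = 1.0, 2.0, 3.0                 # principal moments of inertia

def grad_H1(L):                            # H1 = L1^2/2I1 + L2^2/2I2 + L3^2/2I3
    return L / np.array([I1, I2, I3])      # = Omega, the angular velocity

def grad_H2(L):                            # H2 = (L1^2 + L2^2 + L3^2)/2
    return L

L = np.array([1.0, 0.2, 0.1])
dt = 1e-4
for _ in range(100_000):
    L = L + dt * np.cross(grad_H2(L), grad_H1(L))   # dL/dt = L x Omega

# Both Hamiltonians are conserved: the trajectory lies on the intersection of
# the angular-momentum sphere and the energy ellipsoid.
print(L @ L)                # |L|^2 stays ≈ 1.05, up to integrator drift
print(L @ grad_H1(L) / 2)   # kinetic energy likewise ≈ 0.5117
```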
Nambu mechanics can be extended to fluid dynamics, where the resulting Nambu brackets are non-canonical and the Hamiltonians are identified with the Casimir of the system, such as enstrophy or helicity.
## Quantization
From the viewpoint of Zariski quantization, Takhtajan et al. propose a quantization of Nambu dynamics.
Quantizing Nambu dynamics leads to intriguing structures that coincide with conventional quantization ones when superintegrable systems are involved—as they must.
|
https://en.wikipedia.org/wiki/Nambu_mechanics
|
passage: This created more value and utility due to the increase in users. Eventually, increased usage through exponential growth led to the telephone being used by almost every household, adding more value to the network for all users. Without the network effect and technological advances, the telephone would have nowhere near the amount of value or utility that it does today.
### Financial exchanges
Transactions in the financial field may feature a network effect. As the number of sellers and buyers in the exchange who have symmetric information increases, liquidity increases and transaction costs decrease. This then attracts a larger number of buyers and sellers to the exchange.
The network advantage of financial exchanges is apparent in the difficulty that startup exchanges have in dislodging a dominant exchange. For example, the Chicago Board of Trade has retained overwhelming dominance of trading in US Treasury bond futures despite the startup of Eurex US trading of identical futures contracts. Similarly, the Chicago Mercantile Exchange has maintained dominance in trading of Eurodollar interest rate futures despite a challenge from Euronext.Liffe.
### Cryptocurrencies and blockchains
Cryptocurrencies such as Bitcoin and smart contract blockchains such as Ethereum also exhibit network effects.
Smart contract blockchains can produce network effects through the social network of individuals that use a blockchain for securing their transactions. Public infrastructure networks such as Ethereum and others can facilitate entities that do not explicitly trust one another to collaborate in a meaningful way, incentivizing growth in the network. However, as of 2019, such networks grow more slowly due to missing particular requirements such as privacy and scalability.
|
https://en.wikipedia.org/wiki/Network_effect
|
passage: The Persian astronomer Al-Biruni (973–1048) proposed that the Milky Way is "a collection of countless fragments of the nature of nebulous stars". The Andalusian astronomer Avempace (d. 1138) proposed that the Milky Way was made up of many stars but appeared to be a continuous image in the Earth's atmosphere, citing his observation of a conjunction of Jupiter and Mars in 1106 or 1107 as evidence. The Persian astronomer Nasir al-Din al-Tusi (1201–1274) in his Tadhkira wrote: "The Milky Way, i.e. the Galaxy, is made up of a very large number of small, tightly clustered stars, which, on account of their concentration and smallness, seem to be cloudy patches. Because of this, it was likened to milk in color." Ibn Qayyim al-Jawziyya (1292–1350) proposed that the Milky Way is "a myriad of tiny stars packed together in the sphere of the fixed stars".
### Telescopic observations
Proof of the Milky Way consisting of many stars came in 1610 when Galileo Galilei used a telescope to study the Milky Way and discovered that it is composed of a huge number of faint stars. Galileo's observations also told against the earlier view that the Milky Way's appearance was an effect of the Earth's atmosphere. In a treatise in 1755, Immanuel Kant, drawing on earlier work by Thomas Wright, speculated (correctly) that the Milky Way might be a rotating body of a huge number of stars, held together by gravitational forces akin to the Solar System but on much larger scales.
|
https://en.wikipedia.org/wiki/Milky_Way
|
passage: ### Nullary
A constant can be treated as the output of an operation of arity 0, called a nullary operation.
Also, outside of functional programming, a function without arguments can be meaningful and not necessarily constant (due to side effects). Such functions may have some hidden input, such as global variables or the whole state of the system (time, free memory, etc.).
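A minimal illustration of a nullary yet non-constant function, whose hidden input is the system clock:

```python
import time

def now():
    # zero arguments, but reads hidden program state (the clock)
    return time.time()

print(now())
time.sleep(0.01)
print(now())   # a different value from the same zero-argument call
```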
### Unary
Examples of unary operators in mathematics and in programming include the unary minus and plus, the increment and decrement operators in C-style languages (not in logical languages), and the successor, factorial, reciprocal, floor, ceiling, fractional part, sign, absolute value, square root (the principal square root), complex conjugate (unary on a single complex number, which however has two parts at a lower level of abstraction), and norm functions in mathematics. In programming the two's complement, address reference, and the logical NOT operators are examples of unary operators.
All functions in lambda calculus and in some functional programming languages (especially those descended from ML) are technically unary, but see n-ary below.
According to Quine, the Latin distributives being singuli, bini, terni, and so forth, the term "singulary" is the correct adjective, rather than "unary". Abraham Robinson follows Quine's usage.
In philosophy, the adjective monadic is sometimes used to describe a one-place relation such as 'is square-shaped' as opposed to a two-place relation such as 'is the sister of'.
|
https://en.wikipedia.org/wiki/Arity
|
passage: It requires that call arrivals can be modeled by a Poisson process, which is not always a good match, but is valid for any statistical distribution of call holding times with a finite mean.
It applies to traffic transmission systems that do not buffer traffic.
More modern examples compared to POTS where Erlang B is still applicable, are optical burst switching (OBS) and several current approaches to optical packet switching (OPS).
Erlang B was developed as a trunk sizing tool for telephone networks with holding times in the minutes range, but being a mathematical equation it applies on any time-scale.
Extended Erlang B
Extended Erlang B differs from the classic Erlang-B assumptions by allowing for a proportion of blocked callers to try again, causing an increase in offered traffic from the initial baseline level. It is an iterative calculation rather than a formula and adds an extra parameter, the recall factor
$$
R_\text{f}
$$
, which defines the recall attempts.
The steps in the process are as follows. It starts at iteration
$$
k=0
$$
with a known initial baseline level of traffic
$$
E_{0}
$$
, which is successively adjusted to calculate a sequence of new offered traffic values
$$
E_{k+1}
$$
, each of which accounts for the recalls arising from the previously calculated offered traffic
$$
E_{k}
$$
.
1. Calculate the probability of a caller being blocked on their first attempt
$$
P_\text{b} = B(E_k,m)
$$
as above for Erlang B.
1.
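A sketch of the full iteration in Python follows; the blocking step uses the standard numerically stable Erlang B recurrence, and the retry model (recalled traffic added back onto the baseline) is our reading of the steps above, so the details are assumptions and all inputs are illustrative:

```python
def erlang_b(E, m):
    """Blocking probability B(E, m) for offered traffic E and m servers."""
    B = 1.0
    for j in range(1, m + 1):
        B = E * B / (j + E * B)      # stable recurrence for the Erlang B formula
    return B

def extended_erlang_b(E0, m, Rf, tol=1e-9):
    E = E0
    while True:
        Pb = erlang_b(E, m)              # step 1: blocking at this iteration
        E_next = E0 + Rf * E * Pb        # recalls add to the baseline traffic
        if abs(E_next - E) < tol:
            return E_next
        E = E_next

print(extended_erlang_b(E0=10.0, m=12, Rf=0.5))   # converged offered traffic
```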
|
https://en.wikipedia.org/wiki/Erlang_%28unit%29
|
passage: For instance,
$$
S^1
$$
is a circle, while
$$
S^2
$$
is the surface of an ordinary ball of radius one in 3 dimensions. Topologists consider a space X to be an n-sphere if there is a homeomorphism between them, i.e. every point in X may be assigned to exactly one point in the unit n-sphere by a continuous bijection with continuous inverse. For example, a point x on an n-sphere of radius r can be matched homeomorphically with a point on the unit n-sphere by multiplying its distance from the origin by
$$
1/r
$$
. Similarly, the surface of an (n+1)-cube of any side length is homeomorphic to an n-sphere.
In differential topology, two smooth manifolds are considered smoothly equivalent if there exists a diffeomorphism from one to the other, which is a homeomorphism between them, with the additional condition that it be smooth — that is, it should have derivatives of all orders at all its points — and its inverse homeomorphism must also be smooth. To calculate derivatives, one needs to have local coordinate systems defined consistently in X. Mathematicians (including Milnor himself) were surprised in 1956 when Milnor showed that consistent local coordinate systems could be set up on the 7-sphere in two different ways that were equivalent in the continuous sense, but not in the differentiable sense. Milnor and others set about trying to discover how many such exotic spheres could exist in each dimension and to understand how they relate to each other.
|
https://en.wikipedia.org/wiki/Exotic_sphere
|
passage: For instance, the homotopy groups of spheres are poorly understood and are not known in general, in contrast to the straightforward description given above for the homology groups.
For an
$$
n=1
$$
example, suppose
$$
X
$$
is the figure eight. As usual, its first homotopy group, or fundamental group,
$$
\pi_1(X)
$$
is the group of homotopy classes of directed loops starting and ending at a predetermined point (e.g. its center). It is isomorphic to the free group of rank 2,
$$
\pi_1(X) \cong \mathbb{Z} * \mathbb{Z}
$$
, which is not commutative: looping around the lefthand cycle and then around the righthand cycle is different from looping around the righthand cycle and then looping around the lefthand cycle. By contrast, the figure eight's first homology group
$$
H_1(X)\cong \mathbb{Z} \times \mathbb{Z}
$$
is abelian. To express this explicitly in terms of homology classes of cycles, one could take the homology class
$$
l
$$
of the lefthand cycle and the homology class
$$
r
$$
of the righthand cycle as basis elements of
$$
H_1(X)
$$
, allowing us to write
$$
H_1(X)=\{a_l l + a_r r\,|\; a_l, a_r \in \mathbb{Z}\}
$$
.
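A small computational illustration using sympy's free groups (the element names are ours): the two generators do not commute in the fundamental group, but their homology classes, the exponent-sum vectors, do.

```python
from sympy.combinatorics.free_groups import free_group

# pi_1 of the figure eight is the free group on two generators l, r.
F, l, r = free_group("l r")
print(l * r == r * l)                 # False: the loop order matters in pi_1

def abelianize(word):
    # exponent-sum vector (a_l, a_r): the homology class of the loop in H_1
    sums = {"l": 0, "r": 0}
    for sym, exp in word.array_form:
        sums[str(sym)] += exp
    return (sums["l"], sums["r"])

print(abelianize(l * r), abelianize(r * l))   # (1, 1) (1, 1): equal in H_1
```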
|
https://en.wikipedia.org/wiki/Homology_%28mathematics%29
|
passage: Indeed, such an isomorphism is obtained by observing
$$
M \otimes_B B' = M \otimes_B B \otimes_A A' \cong M \otimes_A A'.
$$
Thus, the two operations, namely forgetful functors and tensor products, commute in the sense of the above isomorphism.
The base change theorems discussed below are statements of a similar kind.
## Definition of the base change map
The base change theorems presented below all assert (for different types of sheaves, and under various assumptions on the maps involved) that the following base change map
$$
g^*(R^r f_* \mathcal{F}) \to R^r f'_*(g'^*\mathcal{F})
$$
is an isomorphism, where
$$
\begin{array}{rcl} X' & \stackrel{g'}\to & X \\
f' \downarrow & & \downarrow f \\
S' & \stackrel g \to & S\\ \end{array}
$$
are continuous maps between topological spaces that form a Cartesian square and
$$
\mathcal{F}
$$
is a sheaf on X. Here
$$
R^i f_* \mathcal F
$$
denotes the higher direct image of
$$
\mathcal F
$$
under f, i.e., the derived functor of the direct image (also known as pushforward) functor
$$
f_*
$$
.
|
https://en.wikipedia.org/wiki/Base_change_theorems
|
passage: On Earth, a body of water is considered a lake when it is inland, not part of the ocean, is larger and deeper than a pond, and is fed by a river. The only world other than Earth known to harbor lakes is Titan, Saturn's largest moon, which has lakes of ethane, most likely mixed with methane. It is not known if Titan's lakes are fed by rivers, though Titan's surface is carved by numerous river beds. Natural lakes on Earth are generally found in mountainous areas, rift zones, and areas with ongoing or recent glaciation. Other lakes are found in endorheic basins or along the courses of mature rivers. In some parts of the world, there are many lakes because of chaotic drainage patterns left over from the last ice age. All lakes are temporary over geologic time scales, as they will slowly fill in with sediments or spill out of the basin containing them.
#### Ponds
A pond is a body of standing water, either natural or human-made, that is usually smaller than a lake. A wide variety of human-made bodies of water are classified as ponds, including water gardens designed for aesthetic ornamentation, fish ponds designed for commercial fish breeding, and solar ponds designed to store thermal energy. Ponds and lakes are distinguished from streams via current speed. While currents in streams are easily observed, ponds and lakes possess thermally driven micro-currents and moderate wind driven currents.
|
https://en.wikipedia.org/wiki/Nature
|
passage: It can also be written in terms of the Hurwitz zeta function:
$$
\int_0^z \operatorname{log\Gamma}(x) \, dx = \frac{z}{2} \log(2 \pi) + \frac{z(1-z)}{2} - \zeta'(-1) + \zeta'(-1,z) .
$$
When
$$
z=1
$$
it follows that
$$
\int_0^1 \operatorname{log\Gamma}(x) \, dx = \frac 1 2 \log(2\pi),
$$
and this is a consequence of Raabe's formula as well. O. Espinosa and V. Moll derived a similar formula for the integral of the square of
$$
\operatorname{log\Gamma}
$$
:
$$
\int_{0}^{1} \log ^{2} \Gamma(x) d x=\frac{\gamma^{2}}{12}+\frac{\pi^{2}}{48}+\frac{1}{3} \gamma L_{1}+\frac{4}{3} L_{1}^{2}-\left(\gamma+2 L_{1}\right) \frac{\zeta^{\prime}(2)}{\pi^{2}}+\frac{\zeta^{\prime \prime}(2)}{2 \pi^{2}},
$$
where
$$
L_1
$$
is
$$
\frac12\log(2\pi)
$$
.
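Both displayed integrals can be verified numerically with mpmath (the precision and quadrature choices are ours):

```python
import mpmath as mp

mp.mp.dps = 30

# Raabe's z = 1 case: integral of log Gamma over [0, 1] equals log(2*pi)/2.
lhs1 = mp.quad(lambda x: mp.loggamma(x), [0, 1])
print(lhs1, mp.log(2 * mp.pi) / 2)

# Espinosa-Moll formula for the integral of log^2 Gamma over [0, 1].
g = mp.euler
L1 = mp.log(2 * mp.pi) / 2
zp2 = mp.zeta(2, derivative=1)           # zeta'(2)
zpp2 = mp.zeta(2, derivative=2)          # zeta''(2)
rhs = (g**2 / 12 + mp.pi**2 / 48 + g * L1 / 3 + 4 * L1**2 / 3
       - (g + 2 * L1) * zp2 / mp.pi**2 + zpp2 / (2 * mp.pi**2))
lhs2 = mp.quad(lambda x: mp.loggamma(x)**2, [0, 1])
print(lhs2, rhs)                         # the two sides agree
```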
|
https://en.wikipedia.org/wiki/Gamma_function
|
passage: Law on the Protection of the Professional Titles "Ingenieur" and "Ingenieurin" (Ingenieurgesetz – IngG) (BayRS 702-2-W), last amended by § 1 of the Act of 24 March 2010 (GVBl p. 138)
- In France, the title engineer is used liberally and is often attributed based on professional position rather than initial qualification. However, the title ingénieur diplomé (diploma engineer) is reserved for people having followed one of the trainings listed by the Commission des Titres d'Ingénieur (Commission for Engineer Titles). It corresponds to a highly selective master's degree level.
- In Turkey use of the title is limited by several laws to people with an engineering degree from an accredited higher education institution or university. Engineering and architecture professions and related titles are governed by Law No. 3458, which came into effect in 1938. There are also several laws for each of the engineering branches. Usage of the "mühendis" (engineer in Turkish) title by others (even those with much more work experience) is illegal and punishable by law.
- In Chile, the ingeniero (engineer) title is regulated by law, which distinguishes at least three different kinds of professional engineering titles. First, the ingeniería de ejecución, which only requires a degree in applied science and a technical degree from a university or a technical institute (usually four years total). Second, ingeniería, which requires a major degree in basic sciences plus a technical degree, both from a university (usually five years total).
|
https://en.wikipedia.org/wiki/Regulation_and_licensure_in_engineering
|
passage: Penetrating deep into the body with sonography is difficult. Some acoustic energy is lost each time an echo is formed, but most of it (approximately
$$
\textstyle 0.5 \frac{\mbox{dB}}{\mbox{cm depth}\cdot\mbox{MHz}}
$$
) is lost from acoustic absorption. (See Acoustic attenuation for further details on modeling of acoustic attenuation and absorption.)
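As a rough worked example of that figure (treating it as loss per centimetre of path per megahertz, with an illustrative depth and frequency):

```python
alpha = 0.5                    # dB per cm of path per MHz (rule of thumb above)
depth_cm, f_mhz = 10.0, 3.5    # illustrative reflector depth and probe frequency
path_cm = 2 * depth_cm         # the echo travels to the reflector and back
print(alpha * f_mhz * path_cm)   # -> 35.0 dB of round-trip loss
```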
The speed of sound varies as it travels through different materials, and is dependent on the acoustical impedance of the material. However, the sonographic instrument assumes that the acoustic velocity is constant at 1540 m/s. An effect of this assumption is that in a real body with non-uniform tissues, the beam becomes somewhat de-focused and image resolution is reduced.
To generate a 2-D image, the ultrasonic beam is swept. A transducer may be swept mechanically by rotating or swinging or a 1-D phased array transducer may be used to sweep the beam electronically. The received data is processed and used to construct the image. The image is then a 2-D representation of the slice into the body.
3-D images can be generated by acquiring a series of adjacent 2-D images. Commonly a specialized probe that mechanically scans a conventional 2-D image transducer is used. However, since the mechanical scanning is slow, it is difficult to make 3D images of moving tissues.
|
https://en.wikipedia.org/wiki/Medical_ultrasound
|
passage: Worldwide shipments of desktop and laptop computers fell by 19.5% in the third quarter of 2022 compared with the year-ago period, marking the steepest decline Gartner has documented in more than two decades of tracking the market.
After a period of volatility, the global PC market began to stabilize in 2023. According to IDC, worldwide PC shipments during the fourth quarter of 2024 grew 1.8% year-over-year, reaching 68.9 million units. Canalys reported a 3.2% annual growth in the first quarter of 2024, totaling 57.2 million units, with notebook shipments increasing by 4.2%.
In the first quarter of 2025, global PC shipments experienced a significant uptick, growing 9.4% year-over-year to 62.7 million units. This surge was partly attributed to manufacturers accelerating shipments to the U.S. ahead of newly implemented tariffs under President Donald Trump's trade policies. Lenovo maintained its lead in the global PC market, shipping 15.2 million units with an 11% growth, followed by HP with 12.8 million units (6% growth), Dell with 9.5 million units (3% growth), and Apple with 6.5 million units, marking a 22% increase.
The integration of artificial intelligence (AI) capabilities into PCs emerged as a significant trend during this period. Canalys projected that AI-capable PC shipments would reach 48 million units in 2024, representing 18% of total PC shipments, and surpass 100 million units in 2025, accounting for approximately 40% of the market. Gartner provided a slightly more optimistic forecast, estimating 54.5 million AI PC shipments in 2024 and 116 million in 2025.
|
https://en.wikipedia.org/wiki/Personal_computer
|
passage: The wheelbase contributes to the vehicle's turning radius, which is also a handling characteristic.
### Unsprung weight
Ignoring the flexing of other components, a car can be modeled as the sprung weight, carried by the springs, carried by the unsprung weight, carried by the tires, carried by the road. Unsprung weight is more properly regarded as a mass which has its own inherent inertia separate from the rest of the vehicle. When a wheel is pushed upwards by a bump in the road, the inertia of the wheel will cause it to be carried further upward above the height of the bump. If the force of the push is sufficiently large, the inertia of the wheel will cause the tire to completely lift off the road surface resulting in a loss of traction and control. Similarly when crossing into a sudden ground depression, the inertia of the wheel slows the rate at which it descends. If the wheel inertia is large enough, the wheel may be temporarily separated from the road surface before it has descended back into contact with the road surface.
This unsprung weight is cushioned from uneven road surfaces only by the compressive resilience of the tire (and wire wheels if fitted), which aids the wheel in remaining in contact with the road surface when the wheel inertia prevents close-following of the ground surface.
|
https://en.wikipedia.org/wiki/Automobile_handling
|
passage: Usability testing is most often done in web surveys and focuses on how people interact with the survey, such as navigating the survey, entering survey responses, and finding help information. Usability testing complements traditional survey pretesting methods such as cognitive pretesting (how people understand the products), pilot testing (how the survey procedures will work), and expert review by a subject matter expert in survey methodology.
In translated survey products, usability testing has shown that "cultural fitness" must be considered in the sentence and word levels and in the designs for data entry and navigation, and that presenting translation and visual cues of common functionalities (tabs, hyperlinks, drop-down menus, and URLs) help to improve the user experience.
|
https://en.wikipedia.org/wiki/Usability_testing
|
passage: It is possible to use less memory by choosing a smaller m in the first step of the algorithm. Doing so increases the running time, which then is O(n/m). Alternatively one can use Pollard's rho algorithm for logarithms, which has about the same running time as the baby-step giant-step algorithm, but only a small memory requirement.
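For reference, a compact baby-step giant-step sketch (the prime modulus and generator are illustrative), which uses O(m) memory for the baby-step table:

```python
from math import isqrt

def bsgs(g, h, p):
    """Return some x with g**x ≡ h (mod p), for p prime, or None."""
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps g^j, O(m) memory
    c = pow(g, -m, p)                            # giant stride g^(-m) (Python >= 3.8)
    gamma = h % p
    for i in range(m):                           # giant steps h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * c % p
    return None

p, g = 101, 5
h = pow(g, 17, p)
x = bsgs(g, h, p)
print(x, pow(g, x, p) == h)    # 17 True
```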
- While this algorithm is credited to Daniel Shanks, who published the 1971 paper in which it first appears, a 1994 paper by Nechaev states that it was known to Gelfond in 1962.
- There exist optimized versions of the original algorithm, such as using the collision-free truncated lookup tables of or negation maps and Montgomery's simultaneous modular inversion as proposed in.
## Further reading
- H. Cohen, A course in computational algebraic number theory, Springer, 1996.
- D. Shanks, Class number, a theory of factorization and genera. In Proc. Symp. Pure Math. 20, pages 415—440. AMS, Providence, R.I., 1971.
- A. Stein and E. Teske, Optimized baby step-giant step methods, Journal of the Ramanujan Mathematical Society 20 (2005), no. 1, 1–32.
- A. V. Sutherland, Order computations in generic groups, PhD thesis, M.I.T., 2007.
- D. C. Terr, A modification of Shanks’ baby-step giant-step algorithm, Mathematics of Computation 69 (2000), 767–773.
## References
|
https://en.wikipedia.org/wiki/Baby-step_giant-step
|
passage: The integration strips of Riemann integration are replaced with strips that are non-rectangular in shape. The method is to transform a "Cavalieri region" with a transformation
$$
h
$$
, or to use
$$
g = h^{-1}
$$
as integrand.
For a given function
$$
f(x)
$$
on an interval
$$
[a,b]
$$
, a "translational function"
$$
a(y)
$$
must intersect
$$
(x,f(x ))
$$
exactly once for any shift in the interval. A "Cavalieri region" is then bounded by
$$
f(x),a(y)
$$
, the
$$
x
$$
-axis, and
$$
b(y) = a(y) + (b-a)
$$
. The area of the region is then
$$
\int_{a(y)}^{b(y)} f(x) \, dx \ = \ \int_{a'}^{b'} f(x) \, dg(x) ,
$$
where
$$
a'
$$
and
$$
b'
$$
are the
$$
x
$$
-values where
$$
a(y)
$$
and
$$
b(y)
$$
intersect
$$
f(x)
$$
.
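Numerically, a Riemann–Stieltjes integral ∫ f dg of this kind can be approximated by sums f(t_i)(g(x_{i+1}) − g(x_i)); a small sketch with an arbitrary smooth integrator g:

```python
import numpy as np

def rs_sum(f, g, a, b, n=100_000):
    # Riemann-Stieltjes sum with midpoint tags on a uniform partition.
    x = np.linspace(a, b, n + 1)
    t = 0.5 * (x[:-1] + x[1:])
    return np.sum(f(t) * np.diff(g(x)))

# With g(x) = x^2, dg = 2x dx, so the integral of x dg over [0,1] is 2/3.
print(rs_sum(lambda x: x, lambda x: x**2, 0.0, 1.0))   # ≈ 0.6667
```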
|
https://en.wikipedia.org/wiki/Riemann%E2%80%93Stieltjes_integral
|
passage: Given a Lie group G, one can construct the vector space of continuous complex-valued functions on G, and turn it into a C*-algebra. This algebra has a natural Hopf algebra structure: given two functions
$$
\varphi, \psi\in C(G)
$$
, one defines multiplication as
$$
(\nabla(\varphi, \psi))(x)=\varphi(x)\psi(x)
$$
and comultiplication as
$$
(\Delta(\varphi))(x\otimes y)=\varphi(xy),
$$
the counit as
$$
\varepsilon(\varphi)=\varphi(e)
$$
and the antipode as
$$
(S(\varphi))(x)=\varphi(x^{-1}).
$$
Now, the Gelfand–Naimark theorem essentially states that every commutative Hopf algebra is isomorphic to the Hopf algebra of continuous functions on some compact topological group G—the theory of compact topological groups and the theory of commutative Hopf algebras are the same. For Lie groups, this implies that C(G) is isomorphically dual to
$$
U(\mathfrak{g})
$$
; more precisely, it is isomorphic to a subspace of the dual space
$$
U^*(\mathfrak{g}).
$$
These ideas can then be extended to the non-commutative case.
|
https://en.wikipedia.org/wiki/Universal_enveloping_algebra
|
passage: The time complexity of the algorithm is
$$
O\left(n (\log n)^3\right)
$$
.
## Optimizations
The optimization technique used for the world record computations is called binary splitting.
### Binary splitting
A factor of
$$
1/{640320^{3/2}}
$$
can be taken out of the sum and simplified to
$$
\frac{1}{\pi} = \frac{1}{426880 \sqrt{10005}} \sum_{k=0}^{\infty}{\frac{(-1)^k (6k)! (545140134k + 13591409)}{(3k)! (k!)^3 (640320)^{3k}}}
$$
Let
$$
f(n) = \frac{(-1)^n (6n)!}{(3n)! (n!)^3 (640320)^{3n}}
$$
, and substitute that into the sum.
$$
\frac{1}{\pi} = \frac{1}{426880 \sqrt{10005}} \sum_{k=0}^{\infty}{f(k) \cdot (545140134k + 13591409)}
$$
$$
\frac{f(n)}{f(n-1)}
$$
can be simplified to
$$
\frac{-(6n-1)(2n-1)(6n-5)}{10939058860032000 n^3}
$$
, so
$$
f(n) = f(n-1) \cdot \frac{-(6n-1)(2n-1)(6n-5)}{10939058860032000 n^3}
$$
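A direct Decimal-arithmetic check of the series and the recurrence (record-setting implementations instead use binary splitting with integer arithmetic; the precision and term count here are illustrative):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60
terms = 5                              # each term adds roughly 14 digits
f = Decimal(1)                         # f(0) = 1
s = Decimal(13591409)                  # k = 0 contribution to the sum
for n in range(1, terms):
    # f(n) = f(n-1) * (-(6n-1)(2n-1)(6n-5)) / (10939058860032000 n^3)
    f *= Decimal(-(6*n - 1) * (2*n - 1) * (6*n - 5)) \
         / Decimal(10939058860032000) / Decimal(n)**3
    s += f * (545140134*n + 13591409)

pi = Decimal(426880) * Decimal(10005).sqrt() / s   # invert 1/pi = s / (426880 sqrt(10005))
print(pi)   # 3.14159265358979..., correct to the working precision
```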
|
https://en.wikipedia.org/wiki/Chudnovsky_algorithm
|
passage: Since
$$
Q(t)
$$
is continuous and periodic it must be bounded. Thus the stability of the zero solution for
$$
y(t)
$$
and
$$
x(t)
$$
is determined by the eigenvalues of
$$
R
$$
.
The representation
$$
\phi \, (t) = P(t)e^{tB}
$$
is called a Floquet normal form for the fundamental matrix
$$
\phi \, (t)
$$
.
The eigenvalues of
$$
e^{TB}
$$
are called the characteristic multipliers of the system. They are also the eigenvalues of the (linear) Poincaré maps
$$
x(t) \to x(t+T)
$$
. A Floquet exponent (sometimes called a characteristic exponent), is a complex
$$
\mu
$$
such that
$$
e^{\mu T}
$$
is a characteristic multiplier of the system. Notice that Floquet exponents are not unique, since
$$
e^{(\mu + \frac{2 \pi i k}{T})T}=e^{\mu T}
$$
, where
$$
k
$$
is an integer. The real parts of the Floquet exponents are called Lyapunov exponents.
|
https://en.wikipedia.org/wiki/Floquet_theory
|
passage: To see this, take a generating set for the (finitely generated) normal subgroup and quotient. Then the generators for the normal subgroup, together with preimages of the generators for the quotient, generate the group.
## Examples
- The multiplicative group of integers modulo 9, (Z/9Z)×, is the group of all integers relatively prime to 9 under multiplication modulo 9. Note that 7 is not a generator of (Z/9Z)×, since
$$
\{7^i \bmod{9}\ |\ i \in \mathbb{N}\} = \{7,4,1\},
$$
while 2 is, since
$$
\{2^i \bmod{9}\ |\ i \in \mathbb{N}\} = \{2,4,8,7,5,1\}.
$$
- On the other hand, Sn, the symmetric group of degree n, is not generated by any one element (is not cyclic) when n > 2. However, in these cases Sn can always be generated by two permutations which are written in cycle notation as (1 2) and (1 2 3 ... n). For example, the 6 elements of S3 can be generated from the two generators, (1 2) and (1 2 3), as shown by the right hand side of the following equations (composition is left-to-right):
e = (1 2)(1 2)
(1 2) = (1 2)
(1 3) = (1 2)(1 2 3)
(2 3) = (1 2 3)(1 2)
(1 2 3) = (1 2 3)
(1 3 2) = (1 2)(1 2 3)(1 2)
- Infinite groups can also have finite generating sets.
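The S3 claim above can be checked directly with sympy's permutation groups (note that sympy permutations are 0-indexed, so (1 2) becomes Permutation(0, 1)):

```python
from sympy.combinatorics import Permutation, PermutationGroup

t = Permutation(0, 1, size=3)   # the transposition (1 2), 0-indexed
c = Permutation(0, 1, 2)        # the 3-cycle (1 2 3)
print(PermutationGroup([t, c]).order())   # 6: the two elements generate all of S3
print(PermutationGroup([c]).order())      # 3: one generator alone is not enough
```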
|
https://en.wikipedia.org/wiki/Generating_set_of_a_group
|
passage: In particular, expanding the KL divergence
$$
D_{KL}(\hat p\vert\vert p)
$$
around its minimum
$$
q
$$
(the
$$
I
$$
-projection of
$$
p
$$
on
$$
\Delta_{k, n}
$$
) in the constrained problem ensures by the Pythagorean theorem for
$$
I
$$
-divergence that any constant and linear term in the counts
$$
n \hat p_i
$$
vanishes from the conditional probability of multinomially sampling those counts.
Notice that
by definition, every one of
$$
\hat p_1, \hat p_2, ..., \hat p_k
$$
must be a rational number,
whereas
$$
p_1, p_2, ..., p_k
$$
may be chosen from any real number in
$$
[0, 1]
$$
and need not satisfy the Diophantine system of equations.
Only asymptotically as
$$
n\rightarrow\infty
$$
, the
$$
\hat p_i
$$
's can be regarded as probabilities over
$$
[0, 1]
$$
.
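A quick simulation of this point (the true distribution and sample sizes are arbitrary): the empirical frequencies are rationals with denominator n, approaching p as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.2, 0.3, 0.5])
for n in (10, 1000, 100000):
    counts = rng.multinomial(n, p)
    print(n, counts / n)   # rationals with denominator n, converging to p
```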
Away from empirically observed constraints
$$
b_1,\ldots,b_\ell
$$
(such as moments or prevalences) the theorem can be generalized:
|
https://en.wikipedia.org/wiki/Multinomial_distribution
|
passage: The rule makes use of more information from the patterns and weights than the generalized Hebbian rule, due to the effect of the local field.
## Spurious patterns
Patterns that the network uses for training (called retrieval states) become attractors of the system. Repeated updates would eventually lead to convergence to one of the retrieval states. However, sometimes the network will converge to spurious patterns (different from the training patterns). In fact, the number of spurious patterns can be exponential in the number of stored patterns, even if the stored patterns are orthogonal. The energy in these spurious patterns is also a local minimum. For each stored pattern x, the negation -x is also a spurious pattern.
A spurious state can also be a linear combination of an odd number of retrieval states. For example, when using 3 patterns
$$
\mu_1, \mu_2, \mu_3
$$
, one can get the following spurious state:
$$
\epsilon_{i}^{\rm{mix}} = \pm \sgn(\pm \epsilon_{i}^{\mu_{1}}
\pm \epsilon_{i}^{\mu_{2}}
\pm \epsilon_{i}^{\mu_{3}})
$$
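A small numpy illustration of such an odd mixture (the pattern count and size are arbitrary): the sum of three ±1 patterns is odd at every site, so the sign is always defined.

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2, mu3 = rng.choice([-1, 1], size=(3, 16))   # three stored +-1 patterns
mix = np.sign(mu1 + mu2 + mu3)    # sitewise majority vote; an odd sum is never zero
print(mix)
```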
Mixtures of an even number of retrieval states cannot occur as spurious patterns, since the sum of an even number of ±1 values can equal zero at some sites, leaving the sign undefined there.
## Capacity
The network capacity of the Hopfield network model is determined by the number of neurons and the connections within a given network.
|
https://en.wikipedia.org/wiki/Hopfield_network
|
passage: When the standard was written in the year 2000 the recommended minimum number of iterations was 1,000, but the parameter is intended to be increased over time as CPU speeds increase. A Kerberos standard in 2005 recommended 4,096 iterations; Apple reportedly used 2,000 for iOS 3, and 10,000 for iOS 4; while LastPass in 2011 used 5,000 iterations for JavaScript clients and 100,000 iterations for server-side hashing. In 2023, OWASP recommended to use 600,000 iterations for PBKDF2-HMAC-SHA256 and 210,000 for PBKDF2-HMAC-SHA512.
Having a salt added to the password reduces the ability to use precomputed hashes (rainbow tables) for attacks, and means that multiple passwords have to be tested individually, not all at once. The public key cryptography standard recommends a salt length of at least 64 bits. The US National Institute of Standards and Technology recommends a salt length of at least 128 bits.
## Key derivation process
PBKDF2 has five input parameters:

DK = PBKDF2(PRF, Password, Salt, c, dkLen)

where:
- PRF is a pseudorandom function of two parameters with output length hLen (e.g., a keyed HMAC)
- Password is the master password from which a derived key is generated
- Salt is a sequence of bits, known as a cryptographic salt
- c is the number of iterations desired
- dkLen is the desired bit-length of the derived key
- DK is the generated derived key
Each hLen-bit block Ti of the derived key DK is computed as Ti = F(Password, Salt, c, i), and the blocks are concatenated (with || marking string concatenation): DK = T1 || T2 || ⋯. The function F is the xor (⊕) of c iterations of chained PRFs.
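In practice the whole derivation is available in Python's standard library; the inputs below are placeholders:

```python
import hashlib

dk = hashlib.pbkdf2_hmac(
    'sha256',                          # PRF: HMAC-SHA256
    b'correct horse',                  # password (placeholder)
    b'random 128-bit salt goes here',  # salt (placeholder)
    600_000,                           # iteration count c (2023 OWASP figure for SHA-256)
    dklen=32,                          # derived key length in bytes
)
print(dk.hex())
```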
|
https://en.wikipedia.org/wiki/PBKDF2
|
passage: ### Possible multiplicity
If there are n vertices in the graph, then each spanning tree has n − 1 edges.
There may be several minimum spanning trees of the same weight; in particular, if all the edge weights of a given graph are the same, then every spanning tree of that graph is minimum.
### Uniqueness
If each edge has a distinct weight then there will be only one, unique minimum spanning tree. This is true in many realistic situations, such as the telecommunications company example above, where it's unlikely any two paths have exactly the same cost. This generalizes to spanning forests as well.
Proof:
1. Assume the contrary, that there are two different MSTs A and B.
1. Since A and B differ despite containing the same nodes, there is at least one edge that belongs to one but not the other. Among such edges, let e1 be the one with least weight; this choice is unique because the edge weights are all distinct. Without loss of generality, assume e1 is in A.
1. As B is an MST, {e1} ∪ B must contain a cycle C with e1.
1. As a tree, A contains no cycles, therefore C must have an edge e2 that is not in A.
1. Since e1 was chosen as the unique lowest-weight edge among those belonging to exactly one of A and B, the weight of e2 must be greater than the weight of e1.
1. As e1 and e2 are part of the cycle C, replacing e2 with e1 in B therefore yields a spanning tree with a smaller weight.
1. This contradicts the assumption that B is an MST.
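A brute-force check of uniqueness on the complete graph K4 with distinct edge weights (the weights are arbitrary): exactly one spanning tree attains the minimum total weight.

```python
import itertools

nodes = range(4)
edges = list(itertools.combinations(nodes, 2))               # the 6 edges of K4
weights = dict(zip(edges, [3, 7, 1, 8, 5, 2]))               # all distinct

def is_spanning_tree(subset):
    if len(subset) != 3:                                     # |V| - 1 edges
        return False
    parent = list(nodes)                                     # union-find forest
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for u, v in subset:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                                     # adding it makes a cycle
        parent[ru] = rv
    return True

trees = [s for s in itertools.combinations(edges, 3) if is_spanning_tree(s)]
best = min(sum(weights[e] for e in t) for t in trees)
print(sum(1 for t in trees if sum(weights[e] for e in t) == best))   # 1
```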
|
https://en.wikipedia.org/wiki/Minimum_spanning_tree
|
passage: If the Hessian matrices of the Lagrangian functions are positive semi-definite, the energy function is guaranteed to decrease on the dynamical trajectory. This property makes it possible to prove that the system of dynamical equations describing temporal evolution of neurons' activities will eventually reach a fixed point attractor state.
In certain situations one can assume that the dynamics of hidden neurons equilibrates at a much faster time scale compared to the feature neurons,
$$
\tau_h\ll\tau_f
$$
. In this case the steady state solution of the second equation in the system () can be used to express the currents of the hidden units through the outputs of the feature neurons. This makes it possible to reduce the general theory () to an effective theory for feature neurons only. The resulting effective update rules and the energies for various common choices of the Lagrangian functions are shown in Fig.2. In the case of log-sum-exponential Lagrangian function the update rule (if applied once) for the states of the feature neurons is the attention mechanism commonly used in many modern AI systems (see Ref. for the derivation of this result from the continuous time formulation).
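A sketch of one such update for the log-sum-exponential case: a single attention-style step ξ ← X softmax(β Xᵀ ξ) retrieves the stored pattern nearest a noisy query (the patterns, dimensions, and β are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(64, 5))     # 5 stored 64-dim patterns (columns)
beta = 8.0

xi = X[:, 2] + 0.5 * rng.normal(size=64)      # noisy query near pattern 2

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

xi_new = X @ softmax(beta * X.T @ xi)         # one feature-neuron update step
overlaps = X.T @ np.sign(xi_new) / 64
print(np.argmax(overlaps))                    # -> 2: the stored pattern is retrieved
```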
### Relationship to classical Hopfield network with continuous variables
Classical formulation of continuous Hopfield Networks can be understood as a special limiting case of the modern Hopfield networks with one hidden layer.
|
https://en.wikipedia.org/wiki/Hopfield_network
|
passage: The combination of high performance, large (16 megabytes or 2^24 bytes) memory space and fairly low cost made it the most popular CPU design of its class. The Apple Lisa and Macintosh designs made use of the 68000, as did other designs in the mid-1980s, including the Atari ST and Amiga.
The world's first single-chip fully 32-bit microprocessor, with 32-bit data paths, 32-bit buses, and 32-bit addresses, was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980, and general production in 1982. After the divestiture of AT&T in 1984, it was renamed the WE 32000 (WE for Western Electric), and had two follow-on generations, the WE 32100 and WE 32200. These microprocessors were used in the AT&T 3B5 and 3B15 minicomputers; in the 3B2, the world's first desktop super microcomputer; in the "Companion", the world's first 32-bit laptop computer; and in "Alexander", the world's first book-sized super microcomputer, featuring ROM-pack memory cartridges similar to today's gaming consoles. All these systems ran the UNIX System V operating system.
The first commercial, single chip, fully 32-bit microprocessor available on the market was the HP FOCUS.
Intel's first 32-bit microprocessor was the iAPX 432, which was introduced in 1981, but was not a commercial success.
|
https://en.wikipedia.org/wiki/Microprocessor
|
passage: These are called viscous stresses. For instance, in a fluid such as water the stresses which arise from shearing the fluid do not depend on the distance the fluid has been sheared; rather, they depend on how quickly the shearing occurs.
Viscosity is the material property which relates the viscous stresses in a material to the rate of change of a deformation (the strain rate). Although it applies to general flows, it is easy to visualize and define in a simple shearing flow, such as a planar Couette flow.
In the Couette flow, a fluid is trapped between two infinitely large plates, one fixed and one in parallel motion at constant speed
$$
u
$$
(see illustration to the right). If the speed of the top plate is low enough (to avoid turbulence), then in steady state the fluid particles move parallel to it, and their speed varies from
$$
0
$$
at the bottom to
$$
u
$$
at the top. Each layer of fluid moves faster than the one just below it, and friction between them gives rise to a force resisting their relative motion. In particular, the fluid applies on the top plate a force in the direction opposite to its motion, and an equal but opposite force on the bottom plate. An external force is therefore required in order to keep the top plate moving at constant speed.
In many fluids, the flow velocity is observed to vary linearly from zero at the bottom to
$$
u
$$
at the top.
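As a worked numerical example (all values assumed for illustration, using the defining relation τ = μ du/dy): with a linear profile the shear rate is u/h, so the force needed to keep the top plate moving is F = μ(u/h)A.

```python
# Hypothetical values: force needed to drag the top plate in planar
# Couette flow, assuming the linear velocity profile described above.
mu = 1.0e-3   # dynamic viscosity of water, Pa*s (approx., 20 C)
u  = 0.1      # top plate speed, m/s
h  = 1.0e-3   # gap between the plates, m
A  = 0.5      # plate area, m^2

shear_rate   = u / h             # du/dy for the linear profile, 1/s
shear_stress = mu * shear_rate   # tau = mu * du/dy, Pa
force = shear_stress * A         # drag force on the top plate, N
print(f"tau = {shear_stress:.3f} Pa, F = {force:.3f} N")
```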
|
https://en.wikipedia.org/wiki/Viscosity
|
passage: Connes and Kreimer explicitly identify
$$
\mathfrak{g}
$$
with a space of derivations θ of H into R, i.e. linear maps such that
$$
\theta(ab)=\varepsilon(a)\theta(b) + \theta(a)\varepsilon(b),
$$
the formal tangent space of G at the identity ε. This forms a Lie algebra with Lie bracket
$$
[\theta_1,\theta_2](t)=(\theta_1 \otimes \theta_2 -\theta_2\otimes\theta_1)\Delta(t).
$$
$$
\mathfrak{g}
$$
is generated by the derivations θt defined by
$$
\theta_t(t^\prime)=\delta_{tt^\prime},
$$
for each rooted tree t.
The infinite-dimensional Lie algebra
$$
\mathfrak{g}
$$
from and the Lie algebra L(G) of the Butcher group as an infinite-dimensional Lie group are not the same. The Lie algebra L(G) can be identified with the Lie algebra of all derivations in the dual of H (i.e. the space of all linear maps from H to R), whereas
$$
\mathfrak{g}
$$
is obtained from the graded dual. Hence
$$
\mathfrak{g}
$$
turns out to be a (strictly smaller) Lie subalgebra of L(G).
|
https://en.wikipedia.org/wiki/Butcher_group
|
passage: DRAM typically takes the form of an integrated circuit chip, which can consist of dozens to billions of DRAM memory cells. DRAM chips are widely used in digital electronics where low-cost and high-capacity computer memory is required. One of the largest applications for DRAM is the main memory (colloquially called the RAM) in modern computers and graphics cards (where the main memory is called the graphics memory). It is also used in many portable devices and video game consoles. In contrast, SRAM, which is faster and more expensive than DRAM, is typically used where speed is of greater concern than cost and size, such as the cache memories in processors.
The need to refresh DRAM demands more complicated circuitry and timing than SRAM. This complexity is offset by the structural simplicity of DRAM memory cells: only one transistor and a capacitor are required per bit, compared to four or six transistors in SRAM. This allows DRAM to reach very high densities with a simultaneous reduction in cost per bit. Refreshing the data consumes power, causing a variety of techniques to be used to manage the overall power consumption. For this reason, DRAM usually needs to operate with a memory controller; the memory controller needs to know DRAM parameters, especially memory timings, to initialize DRAMs, which may be different depending on different DRAM manufacturers and part numbers.
DRAM had a 47% increase in the price-per-bit in 2017, the largest jump in 30 years since the 45% jump in 1988, while in recent years the price has been going down.
|
https://en.wikipedia.org/wiki/Dynamic_random-access_memory
|
passage: Some Iranian producers export their products to foreign countries.
United Kingdom
Following Brexit, the UK medical device regulation was closely aligned with the EU medical device regulation, including classification. The regulation 7 of the Medical Devices Regulations 2002 (SI 2002 No 618, as amended) (UK medical devices regulations), classified general medical devices into four classes of increasing levels of risk: Class I, IIa, IIb or III in accordance with criteria in the UK medical devices regulations, Annex IX (as modified by Schedule 2A to the UK medical devices regulations).
### Validation and verification
Validation and verification of medical devices ensure that they fulfil their intended purpose. Validation or verification is generally needed when a health facility acquires a new device to perform medical tests.
The main difference between the two is that validation is focused on ensuring that the device meets the needs and requirements of its intended users and the intended use environment, whereas verification is focused on ensuring that the device meets its specified design requirements.
### Standardization and regulatory concerns
The ISO standards for medical devices are covered by ICS 11.100.20 and 11.040.01. The quality and risk management regarding the topic for regulatory purposes is convened by ISO 13485 and ISO 14971. ISO 13485:2016 is applicable to all providers and manufacturers of medical devices, components, contract services and distributors of medical devices. The standard is the basis for regulatory compliance in local markets, and most export markets. Additionally, ISO 9001:2008 sets precedence because it signifies that a company engages in the creation of new products.
|
https://en.wikipedia.org/wiki/Medical_device
|
passage: Thin coatings made of only dielectrics and conductors have very limited absorbing bandwidth, so magnetic materials are used when weight and cost permit, either in resonant RAM or as non-resonant RAM.
### Optimization methods
Thin non-resonant or broad resonance coatings can be modeled with a Leontovich impedance boundary condition (see also Electrical impedance). This is the ratio of the tangential electric field to the tangential magnetic field on the surface, and ignores fields propagating along the surface within the coating. This is particularly convenient when using boundary element method calculations. The surface impedance can be calculated and tested separately.
For an isotropic surface the ideal surface impedance is equal to the 377 ohm impedance of free space.
For non-isotropic (anisotropic) coatings, the optimal coating depends on the shape of the target and the radar direction, but duality, the symmetry of Maxwell's equations between the electric and magnetic fields, tells one that optimal coatings have η0 × η1 = 377² Ω², where η0 and η1 are perpendicular components of the anisotropic surface impedance, aligned with edges and/or the radar direction.
A perfect electric conductor has more back scatter from a leading edge for the linear polarization with the electric field parallel to the edge and more from a trailing edge with the electric field perpendicular to the edge, so the high surface impedance should be parallel to leading edges and perpendicular to trailing edges, for the greatest radar threat direction, with some sort of smooth transition between.
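A one-function sketch of the duality condition (the free-space impedance constant is standard; the function name and sample value are illustrative assumptions):

```python
# Duality condition for anisotropic absorber coatings: the two principal
# surface impedances must satisfy eta0 * eta1 = 377**2 ohm^2.
ETA_FREE_SPACE = 376.73  # ohms, impedance of free space

def dual_impedance(eta_parallel: float) -> float:
    """Given the surface impedance along one principal axis, return the
    perpendicular one required by the duality condition."""
    return ETA_FREE_SPACE**2 / eta_parallel

print(dual_impedance(1000.0))  # ~141.9 ohms
```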
|
https://en.wikipedia.org/wiki/Radar_cross_section
|
passage: Windows 8 and Windows 8.1 have been subject to some criticism, such as the removal of the Start menu.
Windows 10
On September 30, 2014, Microsoft announced Windows 10 as the successor to Windows 8.1. It was released on July 29, 2015, and addresses shortcomings in the user interface first introduced with Windows 8. Changes on PC include the return of the Start Menu, a virtual desktop system, and the ability to run Windows Store apps within windows on the desktop rather than in full-screen mode. Windows 10 was made available as an upgrade for qualified Windows 7 SP1, Windows 8.1 and Windows Phone 8.1 devices, via the Get Windows 10 application (for Windows 7 and Windows 8.1) or Windows Update (Windows 7).
In February 2017, Microsoft announced the migration of its Windows source code repository from Perforce to Git. This migration involved 3.5 million separate files in a 300-gigabyte repository. By May 2017, 90 percent of its engineering team was using Git, in about 8500 commits and 1760 Windows builds per day.
In June 2021, shortly before Microsoft's announcement of Windows 11, Microsoft updated their lifecycle policy pages for Windows 10, revealing that support for their last release of Windows 10 will end on October 14, 2025. On April 27, 2023, Microsoft announced that version 22H2 would be the last of Windows 10.
Windows 11
On June 24, 2021, Windows 11 was announced as the successor to Windows 10 during a livestream. The new operating system was designed to be more user-friendly and understandable. It was released on October 5, 2021. As of May 2022, Windows 11 is a free upgrade for Windows 10 users who meet the system requirements.
|
https://en.wikipedia.org/wiki/Microsoft_Windows
|
passage: These planarians are not biologically immortal, but rather their death rate slowly increases with age. One organism thought to be biologically immortal is Turritopsis dohrnii, also known as the "immortal jellyfish", owing to its ability to revert to its youth when it undergoes stress during adulthood. Its reproductive system has been observed to remain intact, and even the gonads of Turritopsis dohrnii persist.
Some species exhibit "negative senescence", in which reproduction capability increases or is stable, and mortality falls with age, resulting from the advantages of increased body size during aging.
## Theories of aging
More than 300 different theories have been posited to explain the nature (mechanisms) and causes (reasons for natural emergence or factors) of aging. Good theories would both explain past observations and predict the results of future experiments. Some of the theories may complement each other, overlap, contradict, or may not preclude various other theories.
Theories of aging fall into two broad categories, evolutionary theories of aging and mechanistic theories of aging. Evolutionary theories of aging primarily explain why aging happens, but do not concern themselves with the molecular mechanism(s) that drive the process. All evolutionary theories of aging rest on the basic mechanisms that the force of natural selection declines with age. Mechanistic theories of aging can be divided into theories that propose aging is programmed, and damage accumulation theories, i.e. those that propose aging to be caused by specific molecular changes occurring over time.
|
https://en.wikipedia.org/wiki/Senescence
|
passage: The Newton polynomial is sometimes called Newton's divided differences interpolation polynomial because the coefficients of the polynomial are calculated using Newton's divided differences method.
## Definition
Given a set of k + 1 data points
$$
(x_0, y_0),\ldots,(x_j, y_j),\ldots,(x_k, y_k)
$$
where no two xj are the same, the Newton interpolation polynomial is a linear combination of Newton basis polynomials
$$
N(x) := \sum_{j=0}^{k} a_{j} n_{j}(x)
$$
with the Newton basis polynomials defined as
$$
n_j(x) := \prod_{i=0}^{j-1} (x - x_i)
$$
for j > 0 and
$$
n_0(x) \equiv 1
$$
.
The coefficients are defined as
$$
a_j := [y_0,\ldots,y_j]
$$
where
$$
[y_0,\ldots,y_j]
$$
are the divided differences defined as
$$
\begin{align}
\mathopen[y_k] &:= y_k, && k \in \{ 0,\ldots,n\} \\
\mathopen[y_k,\ldots,y_{k+j}] &:= \frac{[y_{k+1},\ldots,y_{k+j}] - [y_k,\ldots,y_{k+j-1}]}{x_{k+j}-x_k}, && k\in\{0,\ldots,n-j\},\ j\in\{1,\ldots,n\}.
\end{align}
$$
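A short Python sketch of the divided-difference table and a Horner-style evaluation of the Newton form (function names are illustrative):

```python
def divided_differences(xs, ys):
    """Compute the coefficients a_j = [y_0, ..., y_j] of the Newton form."""
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        # Update in place, from the bottom of the table upward.
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate N(x) by nesting the Newton basis polynomials."""
    result = coef[-1]
    for a, x_i in zip(reversed(coef[:-1]), reversed(xs[:-1])):
        result = result * (x - x_i) + a
    return result

xs, ys = [1.0, 2.0, 3.0], [1.0, 4.0, 9.0]   # samples of y = x^2
print(newton_eval(xs, divided_differences(xs, ys), 2.5))  # 6.25
```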
|
https://en.wikipedia.org/wiki/Newton_polynomial
|
passage: For a complete graph with n vertices, Cayley's formula gives the number of spanning trees as n^(n−2).
- If G is the complete bipartite graph
$$
K_{p,q}
$$
,then
$$
t(G)=p^{q-1}q^{p-1}
$$
.
- For the n-dimensional hypercube graph
$$
Q_n
$$
, the number of spanning trees is
$$
t(G)=2^{2^n-n-1}\prod_{k=2}^n k^{\binom{n}{k}}
$$
.
### In arbitrary graphs
More generally, for any graph G, the number t(G) can be calculated in polynomial time as the determinant of a matrix derived from the graph,
using Kirchhoff's matrix-tree theorem.
Specifically, to compute t(G), one constructs the Laplacian matrix of the graph, a square matrix in which the rows and columns are both indexed by the vertices of G. The entry in row i and column j is one of three values:
- The degree of vertex i, if i = j,
- −1, if vertices i and j are adjacent, or
- 0, if vertices i and j are different from each other but not adjacent.
The resulting matrix is singular, so its determinant is zero. However, deleting the row and column for an arbitrarily chosen vertex leads to a smaller matrix whose determinant is exactly t(G).
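A minimal numpy sketch of this procedure (floating-point determinants suffice for small graphs; exact integer arithmetic would be preferable at scale):

```python
import numpy as np

def spanning_tree_count(adj):
    """Number of spanning trees via Kirchhoff's matrix-tree theorem.
    adj: symmetric 0/1 adjacency matrix of a connected graph."""
    adj = np.asarray(adj, dtype=float)
    laplacian = np.diag(adj.sum(axis=1)) - adj
    # Delete the row and column of an arbitrary vertex (here: vertex 0).
    minor = laplacian[1:, 1:]
    return round(np.linalg.det(minor))

# K4: Cayley's formula gives 4^(4-2) = 16 spanning trees.
k4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
print(spanning_tree_count(k4))  # 16
```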
|
https://en.wikipedia.org/wiki/Spanning_tree
|
passage: Arbitrary global optimization techniques may then be used to minimize this target function.
The most common global optimization method for training RNNs is genetic algorithms, especially in unstructured networks.
Initially, the genetic algorithm is encoded with the neural network weights in a predefined manner where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome. The fitness function is evaluated as follows:
- Each weight encoded in the chromosome is assigned to the respective weight link of the network.
- The training set is presented to the network which propagates the input signals forward.
- The mean-squared error is returned to the fitness function.
- This function drives the genetic selection process.
Many chromosomes make up the population; therefore, many different neural networks are evolved until a stopping criterion is satisfied. A common stopping scheme is:
- When the neural network has learned a certain percentage of the training data or
- When the minimum value of the mean-squared-error is satisfied or
- When the maximum number of training generations has been reached.
The fitness function evaluates the stopping criterion as it receives the mean-squared error reciprocal from each network during training. Therefore, the goal of the genetic algorithm is to maximize the fitness function, reducing the mean-squared error.
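A minimal sketch of this fitness evaluation (for brevity the decoded network is a small feedforward net rather than a recurrent one; all names, sizes, and data are illustrative assumptions):

```python
import numpy as np

def decode(chromosome, n_in=3, n_hidden=5, n_out=1):
    """Assign each gene to its weight link in a predefined order."""
    split = n_in * n_hidden
    w1 = chromosome[:split].reshape(n_in, n_hidden)
    w2 = chromosome[split:].reshape(n_hidden, n_out)
    return w1, w2

def fitness(chromosome, X, Y):
    """Propagate the training set forward and return the MSE reciprocal."""
    w1, w2 = decode(chromosome)
    pred = np.tanh(X @ w1) @ w2
    mse = np.mean((pred - Y) ** 2)
    return 1.0 / (mse + 1e-12)   # the GA maximizes this, minimizing MSE

# One evaluation of a random chromosome on random data (shapes must match).
rng = np.random.default_rng(0)
chrom = rng.normal(size=3 * 5 + 5 * 1)
print(fitness(chrom, rng.normal(size=(10, 3)), rng.normal(size=(10, 1))))
```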
Other global (and/or evolutionary) optimization techniques may be used to seek a good set of weights, such as simulated annealing or particle swarm optimization.
## Other architectures
|
https://en.wikipedia.org/wiki/Recurrent_neural_network
|
passage: It was unable to maintain sufficient steam pressure for long periods and was of little practical use.
The development of external combustion (steam) engines is detailed as part of the history of the car but often treated separately from the development of cars in their modern understanding. A variety of steam-powered road vehicles were used during the first part of the 19th century, including steam cars, steam buses, phaetons, and steam rollers. In the United Kingdom, sentiment against them led to the Locomotive Acts of 1865.
In 1807, Nicéphore Niépce and his brother Claude created what was probably the world's first internal combustion engine (which they called a Pyréolophore), but installed it in a boat on the river Saone in France. Coincidentally, in 1807, the Swiss inventor François Isaac de Rivaz designed his own "de Rivaz internal combustion engine", and used it to develop the world's first vehicle to be powered by such an engine. The Niépces' Pyréolophore was fuelled by a mixture of Lycopodium powder (dried spores of the Lycopodium plant), finely crushed coal dust and resin that were mixed with oil, whereas de Rivaz used a mixture of hydrogen and oxygen. Neither design was successful, as was the case with others, such as Samuel Brown, Samuel Morey, and Etienne Lenoir, who each built vehicles (usually adapted carriages or carts) powered by internal combustion engines.
|
https://en.wikipedia.org/wiki/Car
|
passage: By contrast, for a group within the tetrapods, such as birds, having four limbs is a plesiomorphy. Using these two terms allows a greater precision in the discussion of homology, in particular allowing clear expression of the hierarchical relationships among different homologous features.
It can be difficult to decide whether a character state is in fact the same and thus can be classified as a synapomorphy, which may identify a monophyletic group, or whether it only appears to be the same and is thus a homoplasy, which cannot identify such a group. There is a danger of circular reasoning: assumptions about the shape of a phylogenetic tree are used to justify decisions about character states, which are then used as evidence for the shape of the tree. Phylogenetics uses various forms of parsimony to decide such questions; the conclusions reached often depend on the dataset and the methods. Such is the nature of empirical science, and for this reason, most cladists refer to their cladograms as hypotheses of relationship. Cladograms that are supported by a large number and variety of different kinds of characters are viewed as more robust than those based on more limited evidence.
## Terminology for taxa
Mono-, para- and polyphyletic taxa can be understood based on the shape of the tree (as done above), as well as based on their character states. These are compared in the table below.
| Term | Node-based definition | Character-based definition |
|---|---|---|
| Holophyly, Monophyly | A clade, a monophyletic taxon, is a taxon that consists of the last common ancestor and all its descendants. | |
|
https://en.wikipedia.org/wiki/Cladistics
|
passage: - No externalities.
### Second Fundamental Theorem
The second fundamental theorem has more demanding conditions.
- All assumptions of the first theorem; in addition:
- The preference relation is locally non-satiated and convex for each consumer i
- The production set is convex for each firm j.
- For the step from price quasi-equilibrium to price equilibrium with transfers: The initial endowment of each agent is strictly positive.
### Common failures of the assumptions
The following provides a non-exhaustive list of common failures of the assumptions underlying the fundamental theorems.
- Price-taking behaviour: In game theoretic interactions, e.g. when firms have monopoly power, the resulting equilibrium is not Pareto-efficient.
- Externalities: In many instances, prominently pollution & climate action, this assumption is violated. In certain instances, a Pigouvian tax can restore the Pareto-efficient allocation.
- Non-satiation : While non-satiation is a very weak assumption, there exist two primary cases in which it fails to hold. Firstly, if preferences have a satiation point (e.g. Central Banks who target inflation have a satiation point at the inflation rate that they target). Secondly, if goods can only be purchased in discrete chunks, this assumption might be violated.
- Rationality: The field of Behavioral economics documents many violations of economic rationality.
- Convexity: In the presence of increasing returns to scale, convexity fails.
|
https://en.wikipedia.org/wiki/Fundamental_theorems_of_welfare_economics
|
passage: They are available in a variety of formulations to fine tune them to the specific applications (such as ESD material blends, or the addition of flame retardants).

| Material class | Example materials | Post-processing | Applications |
|---|---|---|---|
| Polymer matrix composites | GFRP, CFRP | support removal, curing | structural applications |
| Ceramic slurries and clays | Alumina, Zirconia, Kaolin | support removal, furnace drying and sintering | insulation, consumer objects, dental applications |
| Green ceramic/binder mixture | Zirconia, Calcium phosphate | support removal, debinding, sintering | structural ceramics, piezoelectric components |
| Green metal/binder mixture | Stainless steel, Titanium, Inconel | support removal, debinding, sintering | tooling, fixtures, mechanical parts |
| Green metal/ceramic/binder mixture | Stainless steel, Iron, tricalcium phosphate, yttria-stabilized zirconia (S.B. Hein, L. Reineke, V. Reinkemeyer: Fused Filament Fabrication of Biodegradable Materials for Implants, Proceedings of Euro PM 2019 Congress & Exhibition, Maastricht, 13–16 October 2019, European Powder Metallurgy Association EPMA, Shrewsbury, 2019) | support removal, debinding, sintering | mechanical parts, implants |
| Food pastes | chocolate, sugar | support removal | |
| Biological materials | bioink | | bioprinted organs and scaffolds |
| Conductive polymer composites | composites with Carbon Black, Graphene, Carbon Nanotubes or Copper Nanoparticles | annealing for lower conductivity | sensors |
| Polymer derived ceramics (PDCs) | poly lactic acid (PLA), polycarbonate (PC), nylon alloys, polypropylene (PP), polyethylene terephthalate glycol (PETG), polyethylene terephthalate (PET), and co-polyesters; and flexible materials including flexible PLA, thermoplastic elastomer and thermoplastic polyurethane filaments | to make SiOC(N), the printed polymer is first dipped in PDC, absorbed, then sintered | heat exchangers, heat sinks, scaffoldings for bone tissue growth, chemical/gas filters and custom scientific hardware |
|
https://en.wikipedia.org/wiki/Fused_filament_fabrication
|
passage: Adding 1 results in number 8, encoded in Gray as 1100. The last 3 bits do not overflow and count backwards if you further increase the original 4 bit code.
When working with sensors that output multiple, Gray-encoded values in a serial fashion, one should therefore pay attention whether the sensor produces those multiple values encoded in 1 single Gray code or as separate ones, as otherwise the values might appear to be counting backwards when an "overflow" is expected.
## Gray isometry
The bijective mapping { 0 ↔ , 1 ↔ , 2 ↔ , 3 ↔ } establishes an isometry between the metric space over the finite field
$$
\mathbb{Z}_2^2
$$
with the metric given by the Hamming distance and the metric space over the finite ring
$$
\mathbb{Z}_4
$$
(the usual modular arithmetic) with the metric given by the Lee distance. The mapping is suitably extended to an isometry of the Hamming spaces
$$
\mathbb{Z}_2^{2m}
$$
and
$$
\mathbb{Z}_4^m
$$
. Its importance lies in establishing a correspondence between various "good" but not necessarily linear codes as Gray-map images in
$$
\mathbb{Z}_2^2
$$
of ring-linear codes from
$$
\mathbb{Z}_4
$$
.
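For reference, the standard conversions between binary and reflected Gray code, which reproduce the single-bit step from 7 (Gray 0100) to 8 (Gray 1100) discussed above:

```python
def binary_to_gray(n: int) -> int:
    """Reflected binary Gray code of n."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the Gray map by a cascade of shifted XORs."""
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g

print(format(binary_to_gray(7), '04b'),
      format(binary_to_gray(8), '04b'))  # 0100 1100: one bit changes
print(gray_to_binary(0b1100))            # 8
```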
|
https://en.wikipedia.org/wiki/Gray_code
|
passage: After the first integration with respect to x, we would rigorously need to introduce a "constant" function of y. That is, If we were to differentiate this function with respect to x, any terms containing only y would vanish, leaving the original integrand. Similarly for the second integral, we would introduce a "constant" function of x, because we have integrated with respect to y. In this way, indefinite integration does not make very much sense for functions of several variables.
### Lack of commutativity
The order in which the integrals are computed is important in iterated integrals, particularly when the integrand is not continuous on the domain of integration. Examples in which the different orders lead to different results usually involve complicated functions, such as the one that follows.
Define the sequence
$$
a_0=0<a_1<a_2<\cdots
$$
such that
$$
a_n\to1
$$
. Let
$$
g_n
$$
be a sequence of continuous functions not vanishing in the interval
$$
(a_n,a_{n+1})
$$
and zero elsewhere, such that
$$
\int_0^1 g_n=1
$$
for every
$$
n
$$
. Define
$$
f(x,y)=\sum_{n=0}^\infty \left( g_n(x)-g_{n+1}(x)\right)g_n(y).
$$
In the previous sum, at each specific
$$
(x,y)
$$
, at most one term is different from zero.
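The construction above is somewhat involved; the same failure of commutativity shows up in a simpler classic example, sketched here with sympy (this example is standard but is not the one constructed in the text):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = (x - y) / (x + y)**3

# The two iteration orders disagree, so the double integral over the
# unit square cannot be absolutely convergent.
dx_first = sp.integrate(sp.integrate(f, (x, 0, 1)), (y, 0, 1))
dy_first = sp.integrate(sp.integrate(f, (y, 0, 1)), (x, 0, 1))
print(dx_first, dy_first)   # -1/2 and 1/2
```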
|
https://en.wikipedia.org/wiki/Iterated_integral
|
passage: For example, viruses have been useful in the study of genetics and helped our understanding of the basic mechanisms of molecular genetics, such as DNA replication, transcription, RNA processing, translation, protein transport, and immunology.
Geneticists often use viruses as vectors to introduce genes into cells that they are studying. This is useful for making the cell produce a foreign substance, or to study the effect of introducing a new gene into the genome. Similarly, virotherapy uses viruses as vectors to treat various diseases, as they can specifically target cells and DNA. It shows promising use in the treatment of cancer and in gene therapy. Eastern European scientists have used phage therapy as an alternative to antibiotics for some time, and interest in this approach is increasing, because of the high level of antibiotic resistance now found in some pathogenic bacteria.
The expression of heterologous proteins by viruses is the basis of several manufacturing processes that are currently being used for the production of various proteins such as vaccine antigens and antibodies. Industrial processes have been recently developed using viral vectors and several pharmaceutical proteins are currently in pre-clinical and clinical trials.
#### Virotherapy
Virotherapy involves the use of genetically modified viruses to treat diseases. Viruses have been modified by scientists to reproduce in cancer cells and destroy them but not infect healthy cells.
|
https://en.wikipedia.org/wiki/Virus
|
passage: Hence they have first order contact if and only if the 2-dimensional subspace is degenerate (signature (1,0)), which holds if and only if the span of v and w is degenerate. By Lagrange's identity, this holds if and only if (v · w)² = (v · v)(w · w) = 1, i.e., if and only if v · w = ± 1, i.e., x · y = 1 ± 1. The contact is oriented if and only if v · w = – 1, i.e., x · y = 0.
### The problem of Apollonius
The incidence of cycles in Lie sphere geometry provides a simple solution to the problem of Apollonius. This problem concerns a configuration of three distinct circles (which may be points or lines): the aim is to find every other circle (including points or lines) which is tangent to all three of the original circles. For a generic configuration of circles, there are at most eight such tangent circles.
The solution, using Lie sphere geometry, proceeds as follows. Choose an orientation for each of the three circles (there are eight ways to do this, but there are only four up to reversing the orientation of all three). This defines three points [x], [y], [z] on the Lie quadric Q. By the incidence of cycles, a solution to the Apollonian problem compatible with the chosen orientations is given by a point [q] ∈ Q such that q is orthogonal to x, y and z.
|
https://en.wikipedia.org/wiki/Lie_sphere_geometry
|
passage: The sum of these is (n + 1) + n + (n − 1) + ... + 2 + 1 = (n + 1)(n + 2) / 2 terms, each with its own coefficient. However, one of these coefficients is redundant in determining the curve, because we can always divide through the polynomial equation by any one of the coefficients, giving an equivalent equation with one coefficient fixed at 1, and thus [(n + 1)(n + 2) / 2] − 1 = n(n + 3) / 2 remaining coefficients.
For example, a fourth-degree equation has the general form
$$
x^4+c_1x^3y+c_2x^2y^2+ c_3xy^3+c_4y^4+c_5x^3+c_6x^2y+c_7xy^2+c_8y^3+c_9x^2+c_{10}xy+c_{11}y^2+c_{12}x+c_{13}y+c_{14}=0,
$$
with 4(4+3)/2 = 14 coefficients.
Determining an algebraic curve through a set of points consists of determining values for these coefficients in the algebraic equation such that each of the points satisfies the equation.
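For instance, a degree-2 curve has 2(2+3)/2 = 5 free coefficients once the leading coefficient is fixed, so five generic points determine it. The sketch below (points chosen for illustration) solves the resulting linear system:

```python
import numpy as np

# Fit x^2 + c1*xy + c2*y^2 + c3*x + c4*y + c5 = 0 through five points
# (hypothetical points that happen to lie on the unit circle).
points = [(1, 0), (0, 1), (-1, 0), (0, -1), (0.6, 0.8)]

A = np.array([[x * y, y * y, x, y, 1.0] for x, y in points])
b = np.array([-(x * x) for x, y in points])
c = np.linalg.solve(A, b)
print(c)  # ~[0, 1, 0, 0, -1]: the unit circle x^2 + y^2 - 1 = 0
```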
|
https://en.wikipedia.org/wiki/Cramer%27s_theorem_%28algebraic_curves%29
|
passage: This contrasts with mobile systems, where software is often available only through a manufacturer-supported channel and end-user program development may be discouraged by lack of support by the manufacturer.
Since the early 1990s, Microsoft operating systems (first with MS-DOS and then with Windows) and CPUs based on Intel's x86 architecture – collectively called Wintel – have dominated the personal computer market, and today the term PC normally refers to the ubiquitous Wintel platform, or to Windows PCs in general (including those running ARM chips), to the point where software for Windows is marketed as "for PC". Alternatives to Windows occupy a minority share of the market; these include the Mac platform from Apple (running the macOS operating system), and free and open-source, Unix-like operating systems, such as Linux (including the Linux-derived ChromeOS). Other notable platforms until the 1990s were the Amiga from Commodore, the Atari ST, and the PC-98 from NEC.
## Terminology
The term 'PC' is an initialism for 'personal computer'. While the IBM Personal Computer incorporated the designation into its model name, the term originally described personal computers of any brand. In some contexts, PC is used to contrast with Mac, an Apple Macintosh computer.
Since none of these Apple products were mainframes or time-sharing systems, they were all personal computers but not PC (brand) computers. In 1995, a CBS segment on the growing popularity of PC reported: "For many newcomers PC stands for Pain and Confusion."
## History
|
https://en.wikipedia.org/wiki/Personal_computer
|
passage: In genetics and molecular biology, a corepressor is a molecule that represses the expression of genes. In prokaryotes, corepressors are small molecules whereas in eukaryotes, corepressors are proteins. A corepressor does not directly bind to DNA, but instead indirectly regulates gene expression by binding to repressors.
A corepressor downregulates (or represses) the expression of genes by binding to and activating a repressor transcription factor. The repressor in turn binds to a gene's operator sequence (segment of DNA to which a transcription factor binds to regulate gene expression), thereby blocking transcription of that gene.
## Function
### Prokaryotes
In prokaryotes, the term corepressor is used to denote the activating ligand of a repressor protein. For example, the E. coli tryptophan repressor (TrpR) is only able to bind to DNA and repress transcription of the trp operon when its corepressor tryptophan is bound to it. TrpR in the absence of tryptophan is known as an aporepressor and is inactive in repressing gene transcription. Trp operon encodes enzymes responsible for the synthesis of tryptophan. Hence TrpR provides a negative feedback mechanism that regulates the biosynthesis of tryptophan.
In short, tryptophan acts as a corepressor for its own biosynthesis.
### Eukaryotes
In eukaryotes, a corepressor is a protein that binds to transcription factors.
|
https://en.wikipedia.org/wiki/Corepressor
|
passage: (A pass here is defined to be a full sequence of odd–even, or even–odd comparisons. The passes occur in order pass 1: odd–even, pass 2: even–odd, etc.)
Proof:
This proof is based loosely on one by Thomas Worsch.
Since the sorting algorithm only involves comparison-swap operations and is oblivious (the order of comparison-swap operations does not depend on the data), by Knuth's 0–1 sorting principle, it suffices to check correctness when each
$$
a_i
$$
is either 0 or 1. Assume that there are
$$
e
$$
1s.
Observe that the rightmost 1 can be either in an even or odd position, so it might not be moved by the first odd–even pass. But after the first odd–even pass, the rightmost 1 will be in an even position. It follows that it will be moved to the right by all remaining passes. Since the rightmost one starts in position greater than or equal to
$$
e
$$
, it must be moved at most
$$
n - e
$$
steps. It follows that it takes at most
$$
n - e + 1
$$
passes to move the rightmost 1 to its correct position.
Now, consider the second rightmost 1. After two passes, the 1 to its right will have moved right by at least one step. It follows that, for all remaining passes, we can view the second rightmost 1 as the rightmost 1.
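The algorithm itself is short; here is a Python sketch (pass 1 compares the 1-indexed pairs (a1,a2), (a3,a4), ..., matching the odd–even convention of the proof), in which n passes always suffice by the argument above:

```python
def odd_even_sort(a):
    """In-place odd-even transposition sort; n passes always suffice."""
    n = len(a)
    for p in range(n):
        # Alternate between odd-even and even-odd comparison passes.
        start = 0 if p % 2 == 0 else 1
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_sort([5, 3, 1, 4, 2]))  # [1, 2, 3, 4, 5]
```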
|
https://en.wikipedia.org/wiki/Odd%E2%80%93even_sort
|
passage: We obtain a candidate for each keypoint by identifying its nearest neighbor in the database of keypoints from training images. The nearest neighbors are defined as the keypoints with minimum Euclidean distance from the given descriptor vector. Lowe determined whether a given candidate should be kept or thrown out by checking the ratio between the distance to this candidate and the distance to the closest keypoint that is not of the same object class as the candidate at hand (candidate feature vector / closest different-class feature vector). The idea is that we can only be confident in candidates that are not cluttered by features from distinct object classes (not necessarily geometric clutter in the feature space, but clutter along the positive half of the real line of distances); this is a consequence of using Euclidean distance as the nearest-neighbor measure. A candidate is rejected whenever this ratio is above 0.8. This method eliminated 90% of false matches while discarding less than 5% of correct matches. To further improve efficiency, the best-bin-first search was cut off after checking the first 200 nearest-neighbor candidates. For a database of 100,000 keypoints, this provides a speedup over exact nearest-neighbor search by about 2 orders of magnitude, yet results in less than a 5% loss in the number of correct matches.
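A minimal sketch of such a ratio test (simplified to nearest versus second-nearest descriptor, as is common in practice; the criterion described above uses the closest keypoint of a different object class):

```python
import numpy as np

def ratio_test(query, descriptors, threshold=0.8):
    """Return the index of the best match, or None if the match is ambiguous."""
    dists = np.linalg.norm(descriptors - query, axis=1)
    nearest, second = np.argsort(dists)[:2]
    if dists[nearest] / dists[second] < threshold:
        return nearest
    return None

db = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
print(ratio_test(np.array([0.1, 0.0]), db))   # 0: unambiguous match
print(ratio_test(np.array([0.5, 0.5]), db))   # None: two near-equal distances
```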
### Cluster identification by Hough transform voting
Hough transform is used to cluster reliable model hypotheses to search for keys that agree upon a particular model pose.
|
https://en.wikipedia.org/wiki/Scale-invariant_feature_transform
|
passage: Alternatively, stirring a non-Newtonian fluid can cause the viscosity to decrease, so the fluid appears "thinner" (this is seen in non-drip paints). There are many types of non-Newtonian fluids, as they are defined to be something that fails to obey a particular property—for example, most fluids with long molecular chains can react in a non-Newtonian manner.
### Equations for a Newtonian fluid
The constant of proportionality between the viscous stress tensor and the velocity gradient is known as the viscosity. A simple equation to describe incompressible Newtonian fluid behavior is
$$
\tau = -\mu\frac{\mathrm{d} u}{\mathrm{d} n}
$$
where
$$
\tau
$$
is the shear stress exerted by the fluid ("drag"),
$$
\mu
$$
is the fluid viscosity—a constant of proportionality, and
$$
\frac{\mathrm{d} u}{\mathrm{d} n}
$$
is the velocity gradient perpendicular to the direction of shear.
For a Newtonian fluid, the viscosity, by definition, depends only on temperature, not on the forces acting upon it.
|
https://en.wikipedia.org/wiki/Fluid_mechanics
|
passage: It consists of multiple compressions and rarefactions. The rarefaction is the farthest distance apart in the longitudinal wave and the compression is the closest distance together. The speed of the longitudinal wave is increased in higher index of refraction, due to the closer proximity of the atoms in the medium that is being compressed. Sound is an example of a longitudinal wave.
## Surface waves
This type of wave travels along the surface or interface between two media. An example of a surface wave would be waves in a pool, or in an ocean, lake, or any other type of water body. There are two types of surface waves, namely Rayleigh waves and Love waves.
Rayleigh waves, also known as ground roll, are waves that travel as ripples with motion similar to those of waves on the surface of water. Such waves are much slower than body waves, at roughly 90% of the velocity of S waves for a typical homogeneous elastic medium. Rayleigh waves have energy losses only in two dimensions and are hence more destructive in earthquakes than conventional bulk waves, such as P-waves and S-waves, which lose energy in all three directions.
A Love wave is a surface wave having horizontal waves that are shear or transverse to the direction of propagation. They usually travel slightly faster than Rayleigh waves, at about 90% of the body wave velocity, and have the largest amplitude.
## Examples
- Seismic waves
- Sound waves
- Wind waves on seas and lakes
- Vibration
|
https://en.wikipedia.org/wiki/Mechanical_wave
|
passage: Möbius inversion then yields
$$
N_n = \frac{1}{n} \sum_{d\mid n} \mu\left(\frac{n}{d}\right) q^d,
$$
where μ is the Möbius function. (This formula was known to Gauss.) The main term occurs for d = n, and it is not difficult to bound the remaining terms. The "Riemann hypothesis" statement depends on the fact that the largest proper divisor of n can be no larger than n/2.
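As a quick check of the formula, one can count monic irreducible polynomials over a small field directly from it; the sketch below hand-rolls the Möbius function and uses sympy only for divisors and factorization:

```python
from sympy import divisors, factorint

def mobius(n: int) -> int:
    """Möbius function: 0 for non-squarefree n, else (-1)^(number of primes)."""
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def irreducible_count(q: int, n: int) -> int:
    """N_n = (1/n) * sum over d | n of mu(n/d) * q^d."""
    return sum(mobius(n // d) * q**d for d in divisors(n)) // n

print(irreducible_count(2, 4))  # 3 monic irreducible quartics over F_2
```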
|
https://en.wikipedia.org/wiki/Prime_number_theorem
|
passage: ### Approximations
Exact discretization may sometimes be intractable due to the heavy matrix exponential and integral operations involved. It is much easier to calculate an approximate discrete model, based on that for small timesteps
$$
e^{\mathbf{A}T} \approx \mathbf I + \mathbf A T
$$
. The approximate solution then becomes:
$$
\mathbf x[k+1] \approx (\mathbf I + \mathbf{A}T) \mathbf x[k] + T \mathbf{Bu}[k]
$$
This is also known as the Euler method, which is also known as the forward Euler method. Other possible approximations are
$$
e^{\mathbf{A}T} \approx (\mathbf I - \mathbf{A}T)^{-1}
$$
, otherwise known as the backward Euler method and
$$
e^{\mathbf{A}T} \approx (\mathbf I +\tfrac{1}{2} \mathbf{A}T) (\mathbf I - \tfrac{1}{2} \mathbf{A}T)^{-1}
$$
, which is known as the bilinear transform, or Tustin transform. Each of these approximations has different stability properties. The bilinear transform preserves the instability of the continuous-time system.
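A small numpy sketch of the forward Euler variant (the system matrices, timestep, and horizon below are illustrative assumptions):

```python
import numpy as np

# Forward-Euler discretization of x' = A x + B u for a hypothetical
# damped oscillator, using e^{AT} ~ I + AT for a small timestep T.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
T = 0.01

Ad = np.eye(2) + A * T   # approximate discrete state matrix
Bd = T * B               # approximate discrete input matrix

x = np.array([[1.0], [0.0]])
u = np.array([[0.0]])
for _ in range(100):     # simulate one second of the discrete model
    x = Ad @ x + Bd @ u
print(x.ravel())
```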
## Discretization of continuous features
In statistics and machine learning, discretization refers to the process of converting continuous features or variables to discretized or nominal features.
|
https://en.wikipedia.org/wiki/Discretization
|
passage: If
$$
I\subset \mathbb{R}
$$
is a non-degenerate interval, we say that
$$
f:I \to \R
$$
is continuous at
$$
p\in I
$$
if
$$
\lim_{x \to p} f(x) = f(p)
$$
. We say that
$$
f
$$
is a continuous map if
$$
f
$$
is continuous at every
$$
p\in I
$$
.
In contrast to the requirements for
$$
f
$$
to have a limit at a point
$$
p
$$
, which do not constrain the behavior of
$$
f
$$
at
$$
p
$$
itself, the following two conditions, in addition to the existence of
$$
\lim_{x\to p} f(x)
$$
, must also hold in order for
$$
f
$$
to be continuous at
$$
p
$$
: (i)
$$
f
$$
must be defined at
$$
p
$$
, i.e.,
$$
p
$$
is in the domain of
$$
f
$$
; and (ii)
$$
f(x)\to f(p)
$$
as
$$
x\to p
$$
. The definition above actually applies to any domain
$$
E
$$
that does not contain an isolated point, or equivalently,
$$
E
$$
where every
$$
p\in E
$$
is a limit point of
$$
E
$$
. A more general definition applying to
$$
f:X\to\mathbb{R}
$$
with a general domain
$$
X\subset \mathbb{R}
$$
is the following:
Definition.
|
https://en.wikipedia.org/wiki/Real_analysis
|
passage: For example, in the context of chain complexes, a boundary is any element of the image
$$
B_n := \mathrm{im}\, d_{n+1} :=\{d_{n+1}(c)\,|\; c\in C_{n+1}\}
$$
of the boundary homomorphism
$$
d_n: C_n \to C_{n-1}
$$
, for some
$$
n
$$
. In topology, the boundary of a space is technically obtained by taking the space's closure minus its interior, but it is also a notion familiar from examples, e.g., the boundary of the unit disk is the unit circle, or more topologically, the boundary of
$$
D^2
$$
is
$$
S^1
$$
.
Topologically, the boundary of the closed interval
$$
[0,1]
$$
is given by the disjoint union
$$
\{0\} \, \amalg \, \{1\}
$$
, and with respect to suitable orientation conventions, the oriented boundary of
$$
[0,1]
$$
is given by the union of a positively-oriented
$$
\{1\}
$$
with a negatively oriented
$$
\{0\}.
$$
The simplicial chain complex analog of this statement is that
$$
d_1([0,1]) = \{1\} - \{0\}
$$
.
|
https://en.wikipedia.org/wiki/Homology_%28mathematics%29
|
passage: For large m it agrees with dim
$$
H^0(X, \mathcal{F}(m))
$$
by Serre's vanishing theorem. If M is a finitely generated graded module and
$$
\tilde{M}
$$
the associated coherent sheaf the two definitions of Hilbert polynomial agree.
### Graded free resolutions
Since the category of coherent sheaves on a projective variety
$$
X
$$
is equivalent to the category of graded-modules modulo a finite number of graded-pieces, we can use the results in the previous section to construct Hilbert polynomials of coherent sheaves. For example, a complete intersection
$$
X
$$
of multi-degree
$$
(d_1,d_2)
$$
has the resolution
$$
0 \to \mathcal{O}_{\mathbb{P}^n}(-d_1-d_2) \xrightarrow{\begin{bmatrix} f_2 \\ -f_1 \end{bmatrix}} \mathcal{O}_{\mathbb{P}^n}(-d_1)\oplus\mathcal{O}_{\mathbb{P}^n}(-d_2) \xrightarrow{\begin{bmatrix}f_1 & f_2 \end{bmatrix}} \mathcal{O}_{\mathbb{P}^n} \to \mathcal{O}_X \to 0
$$
|
https://en.wikipedia.org/wiki/Hilbert_series_and_Hilbert_polynomial
|
passage: With respect to algebraic geometry codes, this means that Hermitian codes are long relative to the alphabet they are defined over.
The Riemann–Roch space of the Hermitian function field is given in the following statement. For the Hermitian function field
$$
\mathbb{F}_{q^2}(x,y)
$$
given by
$$
x^{q+1} = y^q + y
$$
and for
$$
m \in \mathbb{Z}^+
$$
, the Riemann–Roch space
$$
\mathcal{L}(mP_\infty)
$$
is
$$
\mathcal{L}(mP_\infty) = \left\langle x^a y^b : 0 \leq b \leq q-1, aq + b(q+1) \leq m \right\rangle ,
$$
where
$$
P_\infty
$$
is the point at infinity on
$$
\mathcal{H}_q(\mathbb{F}_{q^2})
$$
.
With that, the one-point Hermitian code can be defined in the following way. Let
$$
\mathcal{H}_q
$$
be the Hermitian curve defined over
$$
\mathbb{F}_{q^2}
$$
.
|
https://en.wikipedia.org/wiki/Algebraic_geometry_code
|
passage: Mathematically, this condition is also approached exponentially; in theory, it takes infinite time, but in practice, it is over, for all intents and purposes, in a much shorter period. At the end of this process with no heat sink but the internal parts of the ball (which are finite), there is no steady-state heat conduction to reach. Such a state never occurs in this situation, but rather the end of the process is when there is no heat conduction at all.
The analysis of non-steady-state conduction systems is more complex than that of steady-state systems. If the conducting body has a simple shape, then exact analytical mathematical expressions and solutions may be possible (see heat equation for the analytical approach). However, because of complicated shapes with varying thermal conductivities within the shape (i.e., most complex objects, mechanisms or machines in engineering), the application of approximate theories and/or numerical analysis by computer is often required. One popular graphical method involves the use of Heisler Charts.
Occasionally, transient conduction problems may be considerably simplified if regions of the object being heated or cooled can be identified, for which thermal conductivity is very much greater than that for heat paths leading into the region. In this case, the region with high conductivity can often be treated in the lumped capacitance model, as a "lump" of material with a simple thermal capacitance consisting of its aggregate heat capacity.
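A minimal sketch of the lumped-capacitance idea under assumed values (a small copper ball cooling in still air; all numbers are illustrative): the "lump" relaxes exponentially toward the environment with time constant τ = mc/(hA).

```python
import numpy as np

h, A = 10.0, 1.3e-3     # convection coefficient W/(m^2 K), surface area m^2
m, c = 0.03, 385.0      # mass kg, specific heat of copper J/(kg K)
T0, T_env = 90.0, 20.0  # initial and ambient temperature, deg C

tau = m * c / (h * A)   # thermal time constant, s
t = np.linspace(0, 4 * tau, 5)
T = T_env + (T0 - T_env) * np.exp(-t / tau)  # lumped-capacitance cooling
print(f"tau = {tau:.0f} s", T.round(1))
```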
|
https://en.wikipedia.org/wiki/Thermal_conduction
|
passage: Here the disguise, such as sunglasses, is removed and the face hallucination algorithm is applied to the image. Such face hallucination algorithms need to be trained on similar face images with and without disguise. To fill in the area uncovered by removing the disguise, face hallucination algorithms need to correctly map the entire state of the face, which may not be possible due to the momentary facial expression captured in the low-resolution image.
### 3-dimensional recognition
Three-dimensional face recognition techniques use 3D sensors to capture information about the shape of a face. This information is then used to identify distinctive features on the surface of a face, such as the contour of the eye sockets, nose, and chin.
One advantage of 3D face recognition is that it is not affected by changes in lighting like other techniques. It can also identify a face from a range of viewing angles, including a profile view. Three-dimensional data points from a face vastly improve the precision of face recognition. 3D face recognition research is enabled by the development of sophisticated sensors that project structured light onto the face. 3D matching techniques are sensitive to expressions, therefore researchers at Technion applied tools from metric geometry to treat expressions as isometries. A new method of capturing 3D images of faces uses three tracking cameras that point at different angles; one camera points at the front of the subject, a second to the side, and a third at an angle. All these cameras work together to track a subject's face in real time and perform face detection and recognition.
|
https://en.wikipedia.org/wiki/Facial_recognition_system
|
passage: Therefore, this distribution is infinitely divisible.
On the other hand, let Dn be the nth binary digit of Y, for n ≥ 1. Then the Dn's are independent and
$$
Y = \sum_{n=1}^\infty 2^{-n} D_n,
$$
and each term in this sum is indecomposable.
## Related concepts
At the other extreme from indecomposability is infinite divisibility.
- Cramér's theorem shows that while the normal distribution is infinitely divisible, it can only be decomposed into normal distributions.
- Cochran's theorem shows that the terms in a decomposition of a sum of squares of normal random variables into sums of squares of linear combinations of these variables always have independent chi-squared distributions.
|
https://en.wikipedia.org/wiki/Indecomposable_distribution
|
passage: It is difficult to check numerically the linear dependence or exact orthogonality. Therefore, the notion of ε-orthogonality is used. For spaces with inner product, x is ε-orthogonal to y if
$$
\left|\left\langle x,y \right\rangle\right| / \left(\left\|x\right\|\left\|y\right\|\right) < \varepsilon
$$
(that is, cosine of the angle between and is less than ).
In high dimensions, two independent random vectors are with high probability almost orthogonal, and the number of independent random vectors, which all are with given high probability pairwise almost orthogonal, grows exponentially with dimension. More precisely, consider equidistribution in n-dimensional ball. Choose N independent random vectors from a ball (they are independent and identically distributed). Let θ be a small positive number. Then for
a number N of random vectors that grows exponentially with the dimension, all N vectors are pairwise ε-orthogonal with probability 1 − θ. This N grows exponentially with dimension, and
$$
N\gg n
$$
for sufficiently big n. This property of random bases is a manifestation of the so-called measure concentration phenomenon.
The figure (right) illustrates distribution of lengths N of pairwise almost orthogonal chains of vectors that are independently randomly sampled from the n-dimensional cube as a function of dimension, n. A point is first randomly selected in the cube. The second point is randomly chosen in the same cube.
|
https://en.wikipedia.org/wiki/Basis_%28linear_algebra%29
|
passage: Processing times of the same query may have large variance, from a fraction of a second to hours, depending on the chosen method. The purpose of query optimization, which is an automated process, is to find the way to process a given query in minimum time. The large possible variance in time justifies performing query optimization, though finding the exact optimal query plan, among all possibilities, is typically very complex, time-consuming by itself, may be too costly, and often practically impossible. Thus query optimization typically tries to approximate the optimum by comparing several common-sense alternatives to provide in a reasonable time a "good enough" plan which typically does not deviate much from the best possible result.
## General considerations
There is a trade-off between the amount of time spent figuring out the best query plan and the quality of the choice; the optimizer may not choose the best answer on its own. Different qualities of database management systems have different ways of balancing these two. Cost-based query optimizers evaluate the resource footprint of various query plans and use this as the basis for plan selection. These assign an estimated "cost" to each possible query plan, and choose the plan with the smallest cost. Costs are used to estimate the runtime cost of evaluating the query, in terms of the number of I/O operations required, CPU path length, amount of disk buffer space, disk storage service time, and interconnect usage between units of parallelism, and other factors determined from the data dictionary.
|
https://en.wikipedia.org/wiki/Query_optimization
|
passage: If we apply an identical force to each, the object with a bigger mass will experience a smaller acceleration, and the object with a smaller mass will experience a bigger acceleration. We might say that the larger mass exerts a greater "resistance" to changing its state of motion in response to the force.
However, this notion of applying "identical" forces to different objects brings us back to the fact that we have not really defined what a force is. We can sidestep this difficulty with the help of Newton's third law, which states that if one object exerts a force on a second object, it will experience an equal and opposite force. To be precise, suppose we have two objects of constant inertial masses m1 and m2. We isolate the two objects from all other physical influences, so that the only forces present are the force exerted on m1 by m2, which we denote F12, and the force exerted on m2 by m1, which we denote F21. Newton's second law states that
$$
\begin{align}
\mathbf{F_{12}} & =m_1\mathbf{a}_1,\\
\mathbf{F_{21}} & =m_2\mathbf{a}_2,
\end{align}
$$
where a1 and a2 are the accelerations of m1 and m2, respectively. Suppose that these accelerations are non-zero, so that the forces between the two objects are non-zero. This occurs, for example, if the two objects are in the process of colliding with one another.
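Taking these two equations together with Newton's third law (F12 = −F21) gives m1 a1 = −m2 a2, so the mass ratio can be read off from the accelerations alone; a one-line numeric sketch with assumed values:

```python
# F12 = -F21 and Newton's second law give m1*a1 = -m2*a2, so the mass
# ratio is the inverse ratio of the acceleration magnitudes (assumed values).
a1, a2 = 4.0, -2.0               # accelerations in m/s^2 during the collision
mass_ratio = abs(a2) / abs(a1)   # m1 / m2
print(mass_ratio)                # 0.5: object 1 is half as massive as object 2
```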
|
https://en.wikipedia.org/wiki/Mass
|
passage: Since
$$
f(x) =\liminf_{n\to\infty} f_n(x) = \sup_n \inf_{k\geq n} f_k(x) = \sup_n g_n(x)
$$
,
and infima and suprema of measurable functions are measurable we see that
$$
f
$$
is measurable.
By the Monotone Convergence Theorem and property (1), the sup and integral may be interchanged:
$$
\begin{align}
\int_X f\,d\mu &= \int_X \sup_n g_n\,d\mu \\
&= \lim_{n\to\infty} \int_X g_n\,d\mu \\
&\leq \liminf_{n\to\infty} \int_X f_n\,d\mu,
\end{align}
$$
where the last step used property (2).
#### From "first principles"
To demonstrate that the monotone convergence theorem is not "hidden", the proof below does not use any properties of Lebesgue integral except those established here and the fact that the functions
$$
f
$$
and
$$
g_n
$$
are measurable.
Denote by
$$
\operatorname{SF}(f)
$$
the set of simple
$$
(\mathcal{F}, \operatorname{\mathcal B}_{\R_{\geq 0}})
$$
-measurable functions
$$
s:X\to [0,\infty)
$$
such that
$$
0\leq s\leq f
$$
on
$$
X
$$
.
Now we turn to the main theorem
The proof is complete.
|
https://en.wikipedia.org/wiki/Fatou%27s_lemma
|
passage: A semiconductor is a material with electrical conductivity between that of a conductor and an insulator. Its conductivity can be modified by adding impurities ("doping") to its crystal structure. When two regions with different doping levels are present in the same crystal, they form a semiconductor junction.
The behavior of charge carriers, which include electrons, ions, and electron holes, at these junctions is the basis of diodes, transistors, and most modern electronics. Some examples of semiconductors are silicon, germanium, gallium arsenide, and elements near the so-called "metalloid staircase" on the periodic table. After silicon, gallium arsenide is the second-most common semiconductor and is used in laser diodes, solar cells, microwave-frequency integrated circuits, and others. Silicon is a critical element for fabricating most electronic circuits.
Semiconductor devices can display a range of different useful properties, such as passing current more easily in one direction than the other, showing variable resistance, and having sensitivity to light or heat. Because the electrical properties of a semiconductor material can be modified by doping and by the application of electrical fields or light, devices made from semiconductors can be used for amplification, switching, and energy conversion. The term semiconductor is also used to describe materials used in high capacity, medium- to high-voltage cables as part of their insulation, and these materials are often plastic XLPE (cross-linked polyethylene) with carbon black.
|
https://en.wikipedia.org/wiki/Semiconductor
|
passage: A skew heap (or self-adjusting heap) is a heap data structure implemented as a binary tree. Skew heaps are advantageous because of their ability to merge more quickly than binary heaps. In contrast with binary heaps, there are no structural constraints, so there is no guarantee that the height of the tree is logarithmic. Only two conditions must be satisfied:
- The general heap order must be enforced
- Every operation (add, remove_min, merge) on two skew heaps must be done using a special skew heap merge.
A skew heap is a self-adjusting form of a leftist heap which attempts to maintain balance by unconditionally swapping all nodes in the merge path when merging two heaps. (The merge operation is also used when adding and removing values.)
With no structural constraints, it may seem that a skew heap would be horribly inefficient. However, amortized complexity analysis can be used to demonstrate that all operations on a skew heap can be done in O(log n).
In fact, with
$$
\varphi = \frac{1+\sqrt{5}}{2}
$$
denoting the golden ratio, the exact amortized complexity is known to be logφ n (approximately 1.44 log2 n).
## Definition
Skew heaps may be described with the following recursive definition:
- A heap with only one element is a skew heap.
- The result of skew merging two skew heaps is also a skew heap.
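A compact Python sketch of the skew merge described above, with insertion and remove-min built on top of it (class and helper names are illustrative):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def skew_merge(h1, h2):
    """Merge two skew heaps: keep the smaller root, merge its right
    subtree with the other heap, then unconditionally swap children."""
    if h1 is None:
        return h2
    if h2 is None:
        return h1
    if h2.key < h1.key:
        h1, h2 = h2, h1
    h1.left, h1.right = skew_merge(h1.right, h2), h1.left
    return h1

def insert(heap, key):
    return skew_merge(heap, Node(key))

def remove_min(heap):
    return heap.key, skew_merge(heap.left, heap.right)

h = None
for k in [5, 3, 8, 1]:
    h = insert(h, k)
m, h = remove_min(h)
print(m)  # 1
```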
|
https://en.wikipedia.org/wiki/Skew_heap
|
passage: In an orthonormal basis of an observer the non-zero components in geometric units are
$$
R^{\hat{r}}{}_{\hat{t}\hat{r}\hat{t}}= -R^{\hat{\theta}}{}_{\hat{\phi}\hat{\theta}\hat{\phi}} = -\frac{r_\text{s}}{r^3},
$$
$$
R^{\hat{\theta}}{}_{\hat{t}\hat{\theta}\hat{t}}= R^{\hat{\phi}}{}_{\hat{t}\hat{\phi}\hat{t}} = -R^{\hat{r}}{}_{\hat{\theta}\hat{r}\hat{\theta}} = -R^{\hat{r}}{}_{\hat{\phi}\hat{r}\hat{\phi}} = \frac{r_\text{s}}{2r^3}.
$$
Again, components which are obtainable by the symmetries of the Riemann tensor are not displayed. These results are invariant to any Lorentz boost, thus the components do not change for non-static observers. The geodesic deviation equation shows that the tidal acceleration between two observers separated by
$$
\xi^{\hat{j}}
$$
is
$$
D^2 \xi^{\hat{j}}/D\tau^2 = -R^{\hat{j}}{}_{\hat{t}\hat{k}\hat{t}} \xi^{\hat{k}}
$$
, so a body of length
$$
L
$$
is stretched in the radial direction by an apparent acceleration
$$
(r_\text{s}/r^3)c^2 L
$$
and squeezed in the perpendicular directions by
$$
-(r_\text{s}/(2r^3)) c^2 L
$$
.
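Plugging in numbers (all values below are assumptions for illustration: a 10-solar-mass black hole and a 2 m body at r = 100 km):

```python
# Tidal stretching and squeezing near a Schwarzschild black hole.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

M = 10 * M_sun             # assumed black hole mass
r_s = 2 * G * M / c**2     # Schwarzschild radius, ~29.5 km
r = 1.0e5                  # assumed observer radius, m
L = 2.0                    # assumed body length, m

radial_stretch = (r_s / r**3) * c**2 * L            # m/s^2, outward
transverse_squeeze = -(r_s / (2 * r**3)) * c**2 * L  # m/s^2, inward
print(r_s, radial_stretch, transverse_squeeze)
```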
|
https://en.wikipedia.org/wiki/Schwarzschild_metric
|
passage: If
$$
X
$$
is an hyperbolic manifold obtained as the quotient of
$$
\mathbb H^n
$$
by a group
$$
\Gamma
$$
then
$$
\pi_1(X) \cong \Gamma
$$
.
An equivalent statement is that any homotopy equivalence from
$$
M
$$
to
$$
N
$$
can be homotoped to a unique isometry. The proof actually shows that if
$$
N
$$
has greater dimension than
$$
M
$$
then there can be no homotopy equivalence between them.
### Algebraic form
The group of isometries of hyperbolic space
$$
\mathbb H^n
$$
can be identified with the Lie group
$$
\mathrm{PO}(n,1)
$$
(the projective orthogonal group of a quadratic form of signature
$$
(n,1)
$$
). Then the following statement is equivalent to the one above.
Let n ≥ 3, let Γ and Λ be two lattices in PO(n, 1), and suppose that there is a group isomorphism f : Γ → Λ. Then Γ and Λ are conjugate in PO(n, 1). That is, there exists a g ∈ PO(n, 1) such that Λ = gΓg⁻¹.
### In greater generality
Mostow rigidity holds (in its geometric formulation) more generally for fundamental groups of all complete, finite volume, non-positively curved (without Euclidean factors) locally symmetric spaces of dimension at least three, or in its algebraic formulation for all lattices in simple Lie groups not locally isomorphic to
$$
\mathrm{SL}_2(\R)
$$
.
|
https://en.wikipedia.org/wiki/Mostow_rigidity_theorem
|
passage: ### Extended monodromy data
As well as the monodromy representation described in the Fuchsian setting, deformations of irregular systems of linear ordinary differential equations are required to preserve extended monodromy data. Roughly speaking, monodromy data is now regarded as data which glues together canonical solutions near the singularities. If one takes
$$
x_i = x - \lambda_i
$$
as a local coordinate near a pole λi of order
$$
r_i+1
$$
, one can then solve term-by-term for a holomorphic gauge transformation g such that locally, the system looks like
$$
\frac{d(g_i^{-1}Z_i)}{dx_i} = \left(\sum_{j=1}^{r_i} \frac{(-j)T^{(i)}_j}{x_i^{j+1}}+\frac{M^{(i)}}{x_i}\right)(g_i^{-1}Z_i)
$$
where
$$
M^{(i)}
$$
and the
$$
T^{(i)}_j
$$
are diagonal matrices.
|
https://en.wikipedia.org/wiki/Isomonodromic_deformation
|
passage: This implies that
$$
S^2
$$
has trivial fundamental group, so as a consequence, it also has trivial first homology group.
The torus
$$
T^2
$$
has closed curves which cannot be continuously deformed into each other, for example in the diagram none of the cycles a, b or c can be deformed into one another. In particular, cycles a and b cannot be shrunk to a point whereas cycle c can.
If the torus surface is cut along both a and b, it can be opened out and flattened into a rectangle or, more conveniently, a square. One opposite pair of sides represents the cut along a, and the other opposite pair represents the cut along b.
The edges of the square may then be glued back together in different ways. The square can be twisted to allow edges to meet in the opposite direction, as shown by the arrows in the diagram. The various ways of gluing the sides yield just four topologically distinct surfaces:
$$
K^2
$$
is the Klein bottle, which is a torus with a twist in it (In the square diagram, the twist can be seen as the reversal of the bottom arrow). It is a theorem that the re-glued surface must self-intersect (when immersed in Euclidean 3-space). Like the torus, cycles a and b cannot be shrunk while c can be. But unlike the torus, following b forwards right round and back reverses left and right, because b happens to cross over the twist given to one join.
|
https://en.wikipedia.org/wiki/Homology_%28mathematics%29
|
passage: If A, B, C, and D are four points on an oriented affine line, their cross ratio is:
$$
(A,B; C,D) = \frac{AC : BC}{AD : BD},
$$
with the notation
$$
WX : YZ
$$
defined to mean the signed ratio of the displacement from W to X to the displacement from Y to Z. For collinear displacements this is a dimensionless quantity.
If the displacements themselves are taken to be signed real numbers, then the cross ratio between points can be written
$$
(A,B; C,D) = \frac{AC}{BC} \bigg/ \frac{AD}{BD} = \frac{AC\cdot BD}{BC\cdot AD}.
$$
If
$$
\widehat\R = \R \cup \{\infty\}
$$
is the projectively extended real line, the cross-ratio of four distinct numbers
$$
x_1, x_2, x_3, x_4
$$
in
$$
\widehat\R
$$
is given by
$$
(x_1, x_2; x_3, x_4)
= \frac{x_3 - x_1}{x_3 - x_2} \bigg/ \frac{x_4 - x_1}{x_4 - x_2}
= \frac{(x_3 - x_1)(x_4 - x_2)}{(x_3 - x_2)(x_4 - x_1)}.
$$
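Because the cross-ratio is a projective invariant, it is unchanged by any Möbius transformation x ↦ (ax + b)/(cx + d) with ad − bc ≠ 0. A small Python sketch checking this on sample points (the particular map and points are arbitrary illustrative choices):

```python
from fractions import Fraction

def cross_ratio(x1, x2, x3, x4):
    """(x1, x2; x3, x4) for four distinct finite numbers."""
    return ((x3 - x1) * (x4 - x2)) / ((x3 - x2) * (x4 - x1))

def mobius(x):
    return (2 * x + 1) / (x + 3)   # ad - bc = 2*3 - 1*1 = 5 != 0

pts = [Fraction(0), Fraction(1), Fraction(2), Fraction(5)]
print(cross_ratio(*pts))                        # 8/5
print(cross_ratio(*[mobius(x) for x in pts]))   # 8/5 again: invariance
```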
When one of
$$
x_1, x_2, x_3, x_4
$$
is the point at infinity this reduces to e.g.
|
https://en.wikipedia.org/wiki/Cross-ratio
|
passage: Such a case uses one of five approaches:
1. Say that 26 cannot be divided by 11; division becomes a partial function.
2. Give an approximate answer as a floating-point number. This is the approach usually taken in numerical computation.
3. Give the answer as a fraction representing a rational number, so the result of the division of 26 by 11 is
$$
\tfrac{26}{11}
$$
(or as a mixed number, so
$$
\tfrac{26}{11} = 2 \tfrac 4{11}.
$$
) Usually the resulting fraction should be simplified: the result of the division of 52 by 22 is also
$$
\tfrac{26}{11}
$$
. This simplification may be done by factoring out the greatest common divisor.
4. Give the answer as an integer quotient and a remainder, so
$$
\tfrac{26}{11} = 2 \mbox{ remainder } 4.
$$
To make the distinction with the previous case, this division, with two integers as result, is sometimes called Euclidean division, because it is the basis of the Euclidean algorithm.
5. Give the integer quotient as the answer, so
$$
\tfrac{26}{11} = 2.
$$
This is the floor function applied to case 2 or 3. It is sometimes called integer division, and denoted by "//".
Dividing integers in a computer program requires special care. Some programming languages treat integer division as in case 5 above, so the answer is an integer.
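In Python, the five conventions map directly onto built-in operations; a short illustrative sketch:

```python
from fractions import Fraction

a, b = 26, 11
# case 1: refuse -- e.g. raise an exception when b does not divide a
print(a / b)           # case 2: floating-point approximation, 2.3636...
print(Fraction(a, b))  # case 3: exact rational, 26/11 (auto-simplified)
print(divmod(a, b))    # case 4: Euclidean quotient and remainder, (2, 4)
print(a // b)          # case 5: integer (floor) division, 2
```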
|
https://en.wikipedia.org/wiki/Division_%28mathematics%29
|
passage: ### Duality
One can take the dual of the above semidefinite program and obtain the following program:
$$
\max_{y \in \mathbb{R}^{m'}} y_0 ,
$$
subject to:
$$
C - y_0 e_{\emptyset}- \sum_{i \in [m]} y_i A_i - \sum_{S\cup T = U\cup V} y_{S,T,U,V} (e_{S,T} - e_{U,V})\succeq 0.
$$
We have a variable
$$
y_0
$$
corresponding to the constraint
$$
\langle e_{\emptyset}, X\rangle = 1
$$
(where
$$
e_{\emptyset}
$$
is the matrix with all entries zero save for the entry indexed by
$$
(\varnothing,\varnothing)
$$
), a real variable
$$
y_i
$$
for each polynomial constraint
$$
\langle X, A_i \rangle = 0, \quad i \in [m],
$$
and for each group of multisets
$$
S,T,U,V \subset [n], |S|,|T|,|U|,|V| \le d, S\cup T = U \cup V
$$
, we have a dual variable
$$
y_{S,T,U,V}
$$
for the symmetry constraint
$$
\langle X, e_{S,T} - e_{U,V} \rangle = 0
$$
.
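For intuition, here is a much smaller instance of the same primal-dual pattern: checking that a univariate polynomial is a sum of squares via a Gram-matrix SDP, sketched with the cvxpy library. The polynomial and variable names are illustrative assumptions, and this toy omits the multiset-symmetry constraints discussed above:

```python
import cvxpy as cp

# Is p(x) = x^4 - 4x^3 + 6x^2 - 4x + 5 a sum of squares?  Write p = z^T Q z
# with z = [1, x, x^2] and search for a positive semidefinite Gram matrix Q.
Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,                      # Q positive semidefinite
    Q[0, 0] == 5,                # constant coefficient
    2 * Q[0, 1] == -4,           # x
    2 * Q[0, 2] + Q[1, 1] == 6,  # x^2
    2 * Q[1, 2] == -4,           # x^3
    Q[2, 2] == 1,                # x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)               # 'optimal': p is SOS, e.g. (x^2-2x+1)^2 + 2^2
print(constraints[1].dual_value) # each constraint carries its dual variable
```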
|
https://en.wikipedia.org/wiki/Sum-of-squares_optimization
|
passage: Examples include the characteristic time, characteristic length, or characteristic number (dimensionless) of a given system, or material constants (e.g., Madelung constant, electrical resistivity, and heat capacity) of a particular material or substance.
## Characteristics
Physical constants are parameters in a physical theory that cannot be explained by that theory. This may be due to the apparent fundamental nature of the constant or due to limitations in the theory. Consequently, physical constants must be measured experimentally.
The set of parameters considered physical constants changes as physical models change, and how fundamental they appear can change. For example,
$$
c
$$
, the speed of light, was originally considered a property of light, a specific system. The discovery and verification of Maxwell's equations connected the same quantity with an entire system, electromagnetism. When the theory of special relativity emerged, the quantity came to be understood as the basis of causality. The speed of light is so fundamental it now defines the international unit of length.
## Relationship to units
### Numerical values
Whereas the physical quantity indicated by a physical constant does not depend on the unit system used to express the quantity, the numerical values of dimensional physical constants do depend on choice of unit system. The term "physical constant" refers to the physical quantity, and not to the numerical value within any given system of units.
|
https://en.wikipedia.org/wiki/Physical_constant
|
passage: If the parser builds complete parse trees, the three trees for inner Products, *, and Value are combined by a new tree root for Products. Otherwise, semantic details from the inner Products and Value are output to some later compiler pass, or are combined and saved in the new Products symbol.
### LR parse steps for example A * 2 + 1
In LR parsers, the shift and reduce decisions are potentially based on the entire stack of everything that has been previously parsed, not just on a single, topmost stack symbol. If done in an unclever way, that could lead to very slow parsers that get slower and slower for longer inputs. LR parsers do this with constant speed, by summarizing all the relevant left context information into a single number called the LR(0) parser state. For each grammar and LR analysis method, there is a fixed (finite) number of such states. Besides holding the already-parsed symbols, the parse stack also remembers the state numbers reached by everything up to those points.
At every parse step, the entire input text is divided into a stack of previously parsed phrases, a current look-ahead symbol, and the remaining unscanned text. The parser's next action is determined by its current LR(0) state (rightmost on the stack) and the lookahead symbol. In the steps below, all the black details are exactly the same as in other non-LR shift-reduce parsers. LR parser stacks add the state information in purple, summarizing the black phrases to their left on the stack and what syntax possibilities to expect next.
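As a concrete illustration of state-annotated stacks, the sketch below drives a tiny table-driven parser for the grammar behind the A * 2 + 1 example (Sums → Sums + Products | Products, Products → Products * Value | Value, Value → int | id). The SLR(1) table and state numbering are an illustrative reconstruction, not the article's exact tables:

```python
# Rules: index -> (lhs, length of right-hand side)
RULES = [("S'", 1), ("Sums", 3), ("Sums", 1), ("Products", 3),
         ("Products", 1), ("Value", 1), ("Value", 1)]

ACTION = {  # (state, terminal) -> shift / reduce / accept
    (0, "id"): ("s", 5), (0, "int"): ("s", 4),
    (1, "+"): ("s", 6), (1, "$"): ("acc", None),
    (2, "*"): ("s", 7), (2, "+"): ("r", 2), (2, "$"): ("r", 2),
    (3, "*"): ("r", 4), (3, "+"): ("r", 4), (3, "$"): ("r", 4),
    (4, "*"): ("r", 5), (4, "+"): ("r", 5), (4, "$"): ("r", 5),
    (5, "*"): ("r", 6), (5, "+"): ("r", 6), (5, "$"): ("r", 6),
    (6, "id"): ("s", 5), (6, "int"): ("s", 4),
    (7, "id"): ("s", 5), (7, "int"): ("s", 4),
    (8, "*"): ("s", 7), (8, "+"): ("r", 1), (8, "$"): ("r", 1),
    (9, "*"): ("r", 3), (9, "+"): ("r", 3), (9, "$"): ("r", 3),
}
GOTO = {(0, "Sums"): 1, (0, "Products"): 2, (0, "Value"): 3,
        (6, "Products"): 8, (6, "Value"): 3, (7, "Value"): 9}

def parse(tokens):
    stack = [0]                     # stack of LR(0) state numbers
    tokens = tokens + ["$"]
    i = 0
    while True:
        act = ACTION[(stack[-1], tokens[i])]   # KeyError = syntax error
        if act[0] == "acc":
            return True
        if act[0] == "s":           # shift: consume token, push next state
            stack.append(act[1])
            i += 1
        else:                       # reduce: pop the rhs, push the GOTO state
            lhs, n = RULES[act[1]]
            del stack[len(stack) - n:]
            stack.append(GOTO[(stack[-1], lhs)])
            print("reduce to", lhs, "state stack:", stack)

print(parse(["id", "*", "int", "+", "int"]))   # A * 2 + 1 -> True
```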
|
https://en.wikipedia.org/wiki/LR_parser
|
passage: This way of emulating multi-dimensional arrays allows the creation of jagged arrays, where each row may have a different size or, in general, where the valid range of each index depends on the values of all preceding indices.
This representation for multi-dimensional arrays is quite prevalent in C and C++ software. However, C and C++ will use a linear indexing formula for multi-dimensional arrays that are declared with compile-time constant size, instead of the pointer-based emulation just described.
The C99 standard introduced Variable Length Array types that let one define array types with dimensions computed at run time. A dynamic 4D array can be constructed using a pointer to a 4D array, and its individual elements are accessed by first de-referencing the array pointer and then indexing, e.g. `(*arr)[i][j][k][l]`. Alternatively, an n-dimensional array can be declared as a pointer to its first element, which is an (n-1)-dimensional array, and accessed using the more idiomatic syntax `arr[i][j][k][l]`.
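The same addressing ideas are easy to mimic in any language; a small Python sketch (purely illustrative) contrasting a jagged array with row-major linear indexing into a flat buffer:

```python
# Jagged array: each row carries its own length.
jagged = [[1, 2, 3], [4], [5, 6]]
print(jagged[2][1])          # 6; the valid column range depends on the row

# Row-major linear indexing of a dense 3 x 4 array stored flat,
# mirroring the formula used for compile-time-sized arrays.
rows, cols = 3, 4
flat = list(range(rows * cols))

def at(i, j):
    return flat[i * cols + j]

print(at(2, 1))              # element in row 2, column 1 -> 9
```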
### Indexing notation
Most programming languages that support arrays support the store and select operations, and have special syntax for indexing.
|
https://en.wikipedia.org/wiki/Array_%28data_type%29
|
passage: Dynamics goes beyond merely describing objects' behavior and also considers the forces which explain it.
Some authors (for example, Taylor (2005) and Greenwood (1997)) include special relativity within classical dynamics.
### Forces vs. energy
Another division is based on the choice of mathematical formalism. Classical mechanics can be mathematically presented in multiple different ways. The physical content of these different formulations is the same, but they provide different insights and facilitate different types of calculations. While the term "Newtonian mechanics" is sometimes used as a synonym for non-relativistic classical physics, it can also refer to a particular formalism based on Newton's laws of motion. Newtonian mechanics in this sense emphasizes force as a vector quantity.
In contrast, analytical mechanics uses scalar properties of motion representing the system as a whole—usually its kinetic energy and potential energy. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation. Two dominant branches of analytical mechanics are Lagrangian mechanics, which uses generalized coordinates and corresponding generalized velocities in tangent bundle space (the tangent bundle of the configuration space and sometimes called "state space"), and Hamiltonian mechanics, which uses coordinates and corresponding momenta in phase space (the cotangent bundle of the configuration space). Both formulations are equivalent by a Legendre transformation on the generalized coordinates, velocities and momenta; therefore, both contain the same information for describing the dynamics of a system.
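The two formalisms can be compared concretely on a one-dimensional harmonic oscillator; a short sympy sketch (the system and symbol names are an illustrative choice):

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')(t)
xdot = sp.diff(x, t)

# Lagrangian route: L = T - V, equation of motion via Euler-Lagrange
L = m * xdot**2 / 2 - k * x**2 / 2
eom = sp.diff(sp.diff(L, xdot), t) - sp.diff(L, x)
print(sp.Eq(eom, 0))                    # m*x'' + k*x = 0

# Hamiltonian route: H = T + V in terms of the momentum p = m*xdot
q, p = sp.symbols('q p')
H = p**2 / (2 * m) + k * q**2 / 2
print(sp.diff(H, p), -sp.diff(H, q))    # Hamilton's equations: q' = p/m, p' = -k*q
```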
|
https://en.wikipedia.org/wiki/Classical_mechanics
|
passage: Examples include approaches to solving the heat equation, Schrödinger equation, wave equation, and other partial differential equations including a time evolution. The special case of exponentiating the derivative operator to a non-integer power is called the fractional derivative which, together with the fractional integral, is one of the basic operations of the fractional calculus.
### Finite fields
A field is an algebraic structure in which multiplication, addition, subtraction, and division are defined and satisfy the properties that multiplication is associative and every nonzero element has a multiplicative inverse. This implies that exponentiation with integer exponents is well-defined, except for nonpositive powers of 0. Common examples are the field of complex numbers, the real numbers and the rational numbers, considered earlier in this article, which are all infinite.
A finite field is a field with a finite number of elements. This number of elements is either a prime number or a prime power; that is, it has the form
$$
q=p^k,
$$
where p is a prime number and k is a positive integer. For every such q, there are fields with q elements.
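In the prime-field case GF(p), exponentiation is directly visible in Python, whose built-in pow supports modular exponentiation and (since Python 3.8) modular inverses; a minimal sketch with an illustrative prime:

```python
p = 13          # a prime, so the integers mod p form the finite field GF(13)
a = 2

print(pow(a, 5, p))       # a^5 in GF(p): 32 mod 13 = 6
print(pow(a, p - 1, p))   # Fermat's little theorem: 1 for any nonzero a
print(pow(a, -1, p))      # multiplicative inverse of a in GF(p): 7
```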
|
https://en.wikipedia.org/wiki/Exponentiation
|
passage: ### Ramanujan summation
Ramanujan summation is a method of assigning a value to divergent series used by Ramanujan and based on the Euler–Maclaurin summation formula. The Ramanujan sum of a series f(0) + f(1) + ... depends not only on the values of f at integers, but also on values of the function f at non-integral points, so it is not really a summation method in the sense of this article.
### Riemann summability
The series a1 + ... is called (R,k) (or Riemann) summable to s if
$$
\lim_{h\rightarrow 0} \sum_{n} a_n\left(\frac{\sin nh}{nh}\right)^k = s.
$$
The series a1 + ... is called R2 summable to s if
$$
\lim_{h\rightarrow 0} \frac{2}{\pi}\sum_n \frac{\sin^2 nh}{n^2h}(a_1+\cdots + a_n) = s.
$$
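For intuition, the (R,2) kernel can be evaluated numerically for Grandi's series 1 - 1 + 1 - ... (an illustrative sketch; the truncation point is an arbitrary numerical choice). The sums sit at 1/2, the same value Abel summation assigns:

```python
import numpy as np

# a_n = (-1)^(n-1), i.e. Grandi's series 1 - 1 + 1 - ...
# (R,2) sum: limit as h -> 0 of sum_n a_n * (sin(n*h) / (n*h))^2
for h in (0.5, 0.1, 0.01):
    n = np.arange(1, 200_000)          # terms decay like 1/n^2, so truncate
    a = (-1.0) ** (n - 1)
    kernel = (np.sin(n * h) / (n * h)) ** 2
    print(h, (a * kernel).sum())       # ~0.5 for each h
```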
|
https://en.wikipedia.org/wiki/Divergent_series
|
passage: This is seen in the condition that a 1 at some bit position implies that the vector is not in the set. For sparse sets, this condition is common, and hence many node eliminations are possible.
Minato has proved that ZDDs are especially suitable for combinatorial problems, such as the classical problems in two-level logic minimization, knight's tour problem, fault simulation, timing analysis, the N-queens problem, as well as weak division. By using ZDDs, one can reduce the size of the representation of a set of n-bit vectors in OBDDs by at most a factor of n. In practice, the optimization is statistically significant.
## Definitions
We define a Zero-Suppressed Decision Diagram (ZDD) to be any directed acyclic graph such that:
1. A terminal node is either:
1. The special ⊤ node which represents the unit family
$$
\{ \emptyset \}
$$
(i.e., a singleton set), or
1. The special ⊥ node which represents the empty family
$$
\emptyset
$$
.
2. Each nonterminal node is labeled by an integer and has two out-edges, LO and HI.
3. There is exactly one node with zero in-degree—the root node. The root node is either terminal or labelled by the smallest integer in the diagram.
4. If two nodes have the same label, then their LO or HI edges point to different nodes. In other words, there are no redundant nodes. (A minimal sketch of these rules in code follows the list.)
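Here is that sketch in Python (hand-rolled for illustration; the representation and function names are assumptions, not a library API):

```python
# Terminals are the sentinels BOT (empty family) and TOP (unit family).
BOT, TOP = "⊥", "⊤"
_unique = {}                      # hash-consing table: no duplicate nodes

def node(var, lo, hi):
    if hi is BOT:                 # zero-suppression: a HI edge to ⊥ is elided
        return lo
    key = (var, lo, hi)
    if key not in _unique:
        _unique[key] = key        # a node is just the (var, lo, hi) triple
    return _unique[key]

def family(z):
    """Enumerate the family of sets that ZDD node z represents."""
    if z is BOT:
        return []
    if z is TOP:
        return [frozenset()]
    var, lo, hi = z
    return family(lo) + [s | {var} for s in family(hi)]

# The family {{1}, {1, 2}} with variable order 1 < 2:
z = node(1, BOT, node(2, TOP, TOP))
print(family(z))                  # [frozenset({1}), frozenset({1, 2})]
# Zero-suppression in action: a HI edge to ⊥ collapses to the LO child.
assert node(3, z, BOT) is z
```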
|
https://en.wikipedia.org/wiki/Zero-suppressed_decision_diagram
|
passage: For practical purposes, when measurement errors are taken into account, often a measurement in terrestrial vacuum, or simply a calculation of C0, is sufficiently accurate.)
Using this measurement method, the dielectric constant may exhibit a resonance at certain frequencies corresponding to characteristic response frequencies (excitation energies) of contributors to the dielectric constant. These resonances are the basis for a number of experimental techniques for detecting defects. The conductance method measures absorption as a function of frequency. Alternatively, the time response of the capacitance can be used directly, as in deep-level transient spectroscopy.
Another example of frequency dependent capacitance occurs with MOS capacitors, where the slow generation of minority carriers means that at high frequencies the capacitance measures only the majority carrier response, while at low frequencies both types of carrier respond.
At optical frequencies, in semiconductors the dielectric constant exhibits structure related to the band structure of the solid. Sophisticated modulation spectroscopy measurement methods based upon modulating the crystal structure by pressure or by other stresses and observing the related changes in absorption or reflection of light have advanced our knowledge of these materials.
### Styles
The arrangement of plates and dielectric has many variations in different styles depending on the desired ratings of the capacitor. For small values of capacitance (microfarads and less), ceramic disks use metallic coatings, with wire leads bonded to the coating. Larger values can be made by multiple stacks of plates and disks.
|
https://en.wikipedia.org/wiki/Capacitor
|
passage: In a connected planar graph , every simple cycle of corresponds to a minimal cutset in the dual of , and vice versa. This can be seen as a form of the Jordan curve theorem: each simple cycle separates the faces of into the faces in the interior of the cycle and the faces of the exterior of the cycle, and the duals of the cycle edges are exactly the edges that cross from the interior to the exterior. The girth of any planar graph (the size of its smallest cycle) equals the edge connectivity of its dual graph (the size of its smallest cutset).
This duality extends from individual cutsets and cycles to vector spaces defined from them. The cycle space of a graph is defined as the family of all subgraphs that have even degree at each vertex; it can be viewed as a vector space over the two-element finite field, with the symmetric difference of two sets of edges acting as the vector addition operation in the vector space. Similarly, the cut space of a graph is defined as the family of all cutsets, with vector addition defined in the same way. Then the cycle space of any planar graph and the cut space of its dual graph are isomorphic as vector spaces. Thus, the rank of a planar graph (the dimension of its cut space) equals the cyclomatic number of its dual (the dimension of its cycle space) and vice versa.
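The final dimension statement can be spot-checked: the cube graph and the octahedron are planar duals, so the cube's cycle-space dimension (m - n + c) must equal the octahedron's cut-space dimension (n - c), and vice versa. A short sketch using the networkx library:

```python
import networkx as nx

def cycle_rank(G):   # dimension of the cycle space: m - n + c
    return (G.number_of_edges() - G.number_of_nodes()
            + nx.number_connected_components(G))

def cut_rank(G):     # dimension of the cut space: n - c
    return G.number_of_nodes() - nx.number_connected_components(G)

cube = nx.hypercube_graph(3)     # planar; its dual is the octahedron
octa = nx.octahedral_graph()

print(cycle_rank(cube), cut_rank(octa))   # 5 5
print(cycle_rank(octa), cut_rank(cube))   # 7 7
```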
|
https://en.wikipedia.org/wiki/Dual_graph
|
passage: For example, in psychological testing, one could take two well established multidimensional personality tests such as the Minnesota Multiphasic Personality Inventory (MMPI-2) and the NEO. By seeing how the MMPI-2 factors relate to the NEO factors, one could gain insight into what dimensions were common between the tests and how much variance was shared. For example, one might find that an extraversion or neuroticism dimension accounted for a substantial amount of shared variance between the two tests.
One can also use canonical-correlation analysis to produce a model equation which relates two sets of variables, for example a set of performance measures and a set of explanatory variables, or a set of outputs and a set of inputs. Constraint restrictions can be imposed on such a model to ensure it reflects theoretical requirements or intuitively obvious conditions. This type of model is known as a maximum correlation model.
Visualization of the results of canonical correlation is usually through bar plots of the coefficients of the two sets of variables for the pairs of canonical variates showing significant correlation. Some authors suggest that they are best visualized by plotting them as heliographs, a circular format with ray like bars, with each half representing the two sets of variables.
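As a quick synthetic check with scikit-learn (the data-generating setup is an arbitrary assumption), two variable sets driven by one shared latent factor yield a large first canonical correlation:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 500
latent = rng.normal(size=(n, 1))             # shared factor behind both sets
X = latent + 0.5 * rng.normal(size=(n, 3))   # first variable set
Y = latent + 0.5 * rng.normal(size=(n, 2))   # second variable set

cca = CCA(n_components=1)
Xc, Yc = cca.fit_transform(X, Y)
r = np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]
print(f"first canonical correlation = {r:.3f}")   # close to 1
```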
## Examples
Let
$$
X = x_1
$$
with zero expected value, i.e.,
$$
\operatorname{E}(X)=0
$$
.
1.
|
https://en.wikipedia.org/wiki/Canonical_correlation
|