passage: The converse holds trivially: if A can be written as LL* for some invertible L, lower triangular or otherwise, then A is Hermitian and positive definite.
When A is a real matrix (hence symmetric positive-definite), the factorization may be written
$$
\mathbf{A} = \mathbf{L L}^\mathsf{T},
$$
where L is a real lower triangular matrix with positive diagonal entries.
### Positive semidefinite matrices
If a Hermitian matrix A is only positive semidefinite, instead of positive definite, then it still has a decomposition of the form A = LL* where the diagonal entries of L are allowed to be zero.
The decomposition need not be unique, for example:
$$
\begin{bmatrix}0 & 0 \\0 & 1\end{bmatrix} = \mathbf L \mathbf L^*, \quad \quad \mathbf L=\begin{bmatrix}0 & 0\\ \cos \theta & \sin\theta\end{bmatrix},
$$
for any θ. However, if the rank of A is r, then there is a unique lower triangular L with exactly r positive diagonal elements and n − r columns containing all zeroes.
Alternatively, the decomposition can be made unique when a pivoting choice is fixed.
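As a concrete check of both the factorization and the semidefinite non-uniqueness above, the sketch below is a minimal textbook implementation in Python; the function name and the test matrices are this example's own choices, not from the passage.

```python
import math

def cholesky(a):
    """Cholesky factor L (lower triangular, A = L L^T) of a symmetric
    positive definite matrix, straight from the textbook recurrence."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

# Unique factor for a positive definite matrix.
A = [[4.0, 2.0], [2.0, 3.0]]
L = cholesky(A)   # [[2.0, 0.0], [1.0, sqrt(2)]]

# Non-uniqueness in the semidefinite case: for diag(0, 1), every
# L = [[0, 0], [cos t, sin t]] satisfies L L^T = diag(0, 1).
for t in (0.0, 0.3, 1.2):
    L2 = [[0.0, 0.0], [math.cos(t), math.sin(t)]]
    prod = [[sum(L2[i][k] * L2[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]
    assert all(abs(prod[i][j] - (1.0 if i == j == 1 else 0.0)) < 1e-12
               for i in range(2) for j in range(2))
```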
|
https://en.wikipedia.org/wiki/Cholesky_decomposition
|
passage: In vacuum systems, the units torr (millimeter of mercury), micron (micrometer of mercury), and inch of mercury (inHg) are most commonly used. Torr and micron usually indicate an absolute pressure, while inHg usually indicates a gauge pressure.
Atmospheric pressures are usually stated using hectopascal (hPa), kilopascal (kPa), millibar (mbar) or atmospheres (atm). In American and Canadian engineering, stress is often measured in kip. Stress is not a true pressure since it is not scalar. In the cgs system the unit of pressure was the barye (ba), equal to 1 dyn/cm². In the mts system, the unit of pressure was the pieze, equal to 1 sthene per square metre.
Many other hybrid units are used such as mmHg/cm² or grams-force/cm² (sometimes as kg/cm² without properly identifying the force units). Using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as a unit of force is prohibited in SI; the unit of force in SI is the newton (N).
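The unit relationships above can be captured in a small conversion helper. The factors below are the standard definitional values in pascals; the dictionary and function names are this example's own.

```python
# Pressure-unit conversion sketch: every unit is defined by its value
# in pascals, and conversion goes through Pa as a pivot.
PA_PER_UNIT = {
    "atm":  101325.0,            # standard atmosphere (exact definition)
    "bar":  100000.0,
    "mbar": 100.0,               # 1 mbar = 1 hPa
    "torr": 101325.0 / 760.0,    # about 133.322 Pa (one mmHg)
    "kPa":  1000.0,
}

def convert(value, src, dst):
    """Convert a pressure reading between the units listed above."""
    return value * PA_PER_UNIT[src] / PA_PER_UNIT[dst]

print(convert(1.0, "atm", "torr"))   # 760.0 by definition
```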
## Static and dynamic pressure
Static pressure is uniform in all directions, so pressure measurements are independent of direction in an immovable (static) fluid. Flow, however, applies additional pressure on surfaces perpendicular to the flow direction, while having little impact on surfaces parallel to the flow direction. This directional component of pressure in a moving (dynamic) fluid is called dynamic pressure.
|
https://en.wikipedia.org/wiki/Pressure_measurement
|
passage: If λ is an arbitrary integral weight, it is in fact a large unsolved problem in representation theory to describe the cohomology modules
$$
H^i( G/B, \, L_\lambda )
$$
in general. Unlike over
$$
\mathbb{C}
$$
, Mumford gave an example showing that it need not be the case for a fixed λ that these modules are all zero except in a single degree i.
Borel–Weil theorem
The Borel–Weil theorem provides a concrete model for irreducible representations of compact Lie groups and irreducible holomorphic representations of complex semisimple Lie groups. These representations are realized in the spaces of global sections of holomorphic line bundles on the flag manifold of the group. The Borel–Weil–Bott theorem is its generalization to higher cohomology spaces. The theorem dates back to the early 1950s and can be found in and .
### Statement of the theorem
The theorem can be stated either for a complex semisimple Lie group G or for its compact form K. Let G be a connected complex semisimple Lie group, B a Borel subgroup of G, and X = G/B the flag variety. In this scenario, X is a complex manifold and a nonsingular algebraic variety. The flag variety can also be described as a compact homogeneous space K/T, where T is a (compact) Cartan subgroup of K.
|
https://en.wikipedia.org/wiki/Borel%E2%80%93Weil%E2%80%93Bott_theorem
|
passage: ## Future production
Consumption in the twentieth and twenty-first centuries has been abundantly pushed by automobile sector growth. The 1985–2003 oil glut even fueled the sales of low fuel economy vehicles in OECD countries. The 2008 economic crisis seems to have had some impact on the sales of such vehicles; still, in 2008 oil consumption showed a small increase.
In 2016 Goldman Sachs predicted lower demand for oil due to emerging economies concerns, especially China. The BRICS (Brasil, Russia, India, China, South Africa) countries might also kick in, as China briefly had the largest automobile market in December 2009. In the long term, uncertainties linger; the OPEC believes that the OECD countries will push low consumption policies at some point in the future; when that happens, it will definitely curb oil sales, and both OPEC and the Energy Information Administration (EIA) kept lowering their 2020 consumption estimates during the past five years. A detailed review of International Energy Agency oil projections have revealed that revisions of world oil production, price and investments have been motivated by a combination of demand and supply factors. All together, Non-OPEC conventional projections have been fairly stable the last 15 years, while downward revisions were mainly allocated to OPEC. Upward revisions are primarily a result of US tight oil.
Production will also face an increasingly complex situation; while OPEC countries still have large reserves at low production prices, newly found reservoirs often lead to higher prices; offshore giants such as Tupi, Guara and Tiber demand high investments and ever-increasing technological abilities.
|
https://en.wikipedia.org/wiki/Petroleum
|
passage: For α = β = 1 the ratio of the mean absolute deviation to the standard deviation equals
$$
\frac{\sqrt{3}}{2}
$$
, so that from α = β = 1 to α, β → ∞ the ratio decreases by 8.5%. For α = β = 0 the standard deviation is exactly equal to the mean absolute deviation around the mean. Therefore, this ratio decreases by 15% from α = β = 0 to α = β = 1, and by 25% from α = β = 0 to α, β → ∞ . However, for skewed beta distributions such that α → 0 or β → 0, the ratio of the standard deviation to the mean absolute deviation approaches infinity (although each of them, individually, approaches zero) because the mean absolute deviation approaches zero faster than the standard deviation.
|
https://en.wikipedia.org/wiki/Beta_distribution
|
passage: Vegetables from stems are asparagus, bamboo shoots, cactus pads or nopalitos, kohlrabi, and water chestnut. The spice cinnamon is bark from a tree trunk. Gum arabic is an important food additive obtained from the trunks of Acacia senegal trees. Chicle, the main ingredient in chewing gum, is obtained from trunks of the chicle tree.
Medicines obtained from stems include quinine from the bark of cinchona trees, camphor distilled from wood of a tree in the same genus that provides cinnamon, and the muscle relaxant curare from the bark of tropical vines.
Wood is used in thousands of ways; it can be used to create buildings, furniture, boats, airplanes, wagons, car parts, musical instruments, sports equipment, railroad ties, utility poles, fence posts, pilings, toothpicks, matches, plywood, coffins, shingles, barrel staves, toys, tool handles, picture frames, veneer, charcoal and firewood. Wood pulp is widely used to make paper, paperboard, cellulose sponges, cellophane and some important plastics and textiles, such as cellulose acetate and rayon. Bamboo stems also have hundreds of uses, including in paper, buildings, furniture, boats, musical instruments, fishing poles, water pipes, plant stakes, and scaffolding. Trunks of palms and tree ferns are often used for building. Stems of reed are an important building material for use in thatching in some areas.
|
https://en.wikipedia.org/wiki/Plant_stem
|
passage: In particular, any differentiable function must be continuous at every point in its domain. The converse does not hold: a continuous function need not be differentiable. For example, a function with a bend, cusp, or vertical tangent may be continuous, but fails to be differentiable at the location of the anomaly.
Most functions that occur in practice have derivatives at all points or at almost every point. However, a result of Stefan Banach states that the set of functions that have a derivative at some point is a meagre set in the space of all continuous functions. Informally, this means that differentiable functions are very atypical among continuous functions. The first known example of a function that is continuous everywhere but differentiable nowhere is the Weierstrass function.
Differentiability classes
A function
$$
f
$$
is said to be continuously differentiable if the derivative
$$
f^{\prime}(x)
$$
exists and is itself a continuous function. Although the derivative of a differentiable function never has a jump discontinuity, it is possible for the derivative to have an essential discontinuity.
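A standard example of the last point (the classic one, supplied here since the passage does not give it) is

$$
f(x) = \begin{cases} x^2 \sin(1/x) & \text{if } x \neq 0, \\ 0 & \text{if } x = 0, \end{cases}
$$

which is differentiable everywhere, with f′(0) = 0 by the squeeze theorem, yet

$$
f'(x) = 2x\sin(1/x) - \cos(1/x) \qquad (x \neq 0)
$$

has no limit as x → 0 because of the oscillating cosine term: the derivative exists at every point but is not continuous at 0, so f is differentiable but not continuously differentiable.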
|
https://en.wikipedia.org/wiki/Differentiable_function
|
passage: In many cases, there is substantial common structure in the MPTs, and differences are slight and involve uncertainty in the placement of a few taxa. There are a number of methods for summarizing the relationships within this set, including consensus trees, which show common relationships among all the taxa, and pruned agreement subtrees, which show common structure by temporarily pruning "wildcard" taxa from every tree until they all agree. Reduced consensus takes this one step further, by showing all subtrees (and therefore all relationships) supported by the input trees.
Even if multiple MPTs are returned, parsimony analysis still basically produces a point-estimate, lacking confidence intervals of any sort. This has often been levelled as a criticism, since there is certainly error in estimating the most-parsimonious tree, and the method does not inherently include any means of establishing how sensitive its conclusions are to this error. Several methods have been used to assess support.
### Assessment of validity
Jackknifing and bootstrapping, well-known statistical resampling procedures, have been employed with parsimony analysis. The jackknife, which involves resampling without replacement ("leave-one-out") can be employed on characters or taxa; interpretation may become complicated in the latter case, because the variable of interest is the tree, and comparison of trees with different taxa is not straightforward. The bootstrap, resampling with replacement (sample x items randomly out of a sample of size x, but items can be picked multiple times), is only used on characters, because adding duplicate taxa does not change the result of a parsimony analysis.
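The character bootstrap described above can be sketched in a few lines. The toy data matrix and function name below are illustrative only, and the parsimony tree search itself is omitted.

```python
import random

random.seed(0)  # deterministic for the example

# Rows are taxa, columns are characters (here, aligned sites).
matrix = [
    "AAGT",
    "AGGT",
    "ACCT",
    "ACCA",
]
n_chars = len(matrix[0])

def bootstrap_replicate(matrix):
    """Resample characters (columns) with replacement, keeping all taxa.
    Each replicate would then be analysed with parsimony, and clade
    frequencies across replicates give bootstrap support values."""
    cols = [random.randrange(n_chars) for _ in range(n_chars)]
    return ["".join(row[c] for c in cols) for row in matrix]

replicate = bootstrap_replicate(matrix)
print(replicate)
```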
|
https://en.wikipedia.org/wiki/Maximum_parsimony
|
passage: Such models have been used to suggest that the first polypeptides were likely short and had non-enzymatic function. Game theoretic models suggested that the organization of RNA strings into cells may have been necessary to prevent "deceptive" use of the genetic code, i.e. preventing the ancient equivalent of viruses from overwhelming the RNA world.
- Stop codons: Codons for translational stops are also an interesting aspect to the problem of the origin of the genetic code. As an example for addressing stop codon evolution, it has been suggested that the stop codons are such that they are most likely to terminate translation early in the case of a frame shift error. In contrast, some stereochemical molecular models explain the origin of stop codons as "unassignable".
|
https://en.wikipedia.org/wiki/Genetic_code
|
passage: The amounts are rounded and given in Mtoe. Enerdata labels TES as Total energy consumption.
25% of worldwide primary production is used for conversion and transport, and 6% for non-energy products like lubricants, asphalt and petrochemicals. In 2019 TES was 606 EJ and final consumption was 418 EJ, 69% of TES. Most of the energy lost in conversion occurs in thermal electricity plants and in the energy industry's own use.
### Discussion about energy loss
There are different qualities of energy. Heat, especially at a relatively low temperature, is low-quality energy of random motion, whereas electricity is high-quality energy that flows smoothly through wires. It takes around 3 kWh of heat to produce 1 kWh of electricity. But by the same token, a kilowatt-hour of this high-quality electricity can be used to pump several kilowatt-hours of heat into a building using a heat pump. It turns out that the loss of useful energy incurred in thermal electricity plants is very much more than the loss due to, say, resistance in power lines, because of quality differences. Electricity can also be used in many ways in which heat cannot.
In fact, the loss in thermal plants is due to poor conversion of chemical energy of fuel to motion by combustion. Otherwise chemical energy of fuel is not inherently low-quality; for example, conversion of chemical energy to electricity in batteries can approach 100%. So energy loss in thermal plants is real loss.
|
https://en.wikipedia.org/wiki/World_energy_supply_and_consumption
|
passage: Although spin and the Pauli principle can only be derived from relativistic generalizations of quantum mechanics, the properties mentioned in the last two paragraphs belong to the basic postulates already in the non-relativistic limit. In particular, many important properties in natural science, e.g. the periodic system of chemistry, are consequences of these two properties.
## Mathematical structure of quantum mechanics
### Pictures of dynamics
### Representations
The original form of the Schrödinger equation depends on choosing a particular representation of Heisenberg's canonical commutation relations. The Stone–von Neumann theorem dictates that all irreducible representations of the finite-dimensional Heisenberg commutation relations are unitarily equivalent. A systematic understanding of its consequences has led to the phase space formulation of quantum mechanics, which works in full phase space instead of Hilbert space, and thus offers a more intuitive link to the classical limit. This picture also simplifies considerations of quantization, the deformation extension from classical to quantum mechanics.
The quantum harmonic oscillator is an exactly solvable system where the different representations are easily compared. There, apart from the Heisenberg, or Schrödinger (position or momentum), or phase-space representations, one also encounters the Fock (number) representation and the Segal–Bargmann (Fock-space or coherent state) representation (named after Irving Segal and Valentine Bargmann). All four are unitarily equivalent.
### Time as an operator
The framework presented so far singles out time as the parameter that everything depends on.
|
https://en.wikipedia.org/wiki/Mathematical_formulation_of_quantum_mechanics
|
passage: ## Complete sequences
A sequence of positive integers is called a complete sequence if every positive integer can be expressed as a sum of values in the sequence, using each value at most once.
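A finite version of this definition is easy to test with subset sums. In the sketch below, the function name and the cutoffs are this example's own choices; powers of two are the classic complete sequence, since every integer has a binary expansion.

```python
def is_complete_up_to(seq, limit):
    """Check that every positive integer <= limit can be written as a
    sum of distinct elements of seq (a finite test of completeness)."""
    reachable = {0}
    for v in seq:
        # Each element may be used at most once.
        reachable |= {r + v for r in reachable}
    return all(n in reachable for n in range(1, limit + 1))

# Powers of two represent every integer via its binary expansion.
assert is_complete_up_to([1, 2, 4, 8, 16, 32], 63)
# Powers of three do not: 2 is not a sum of distinct powers of three.
assert not is_complete_up_to([1, 3, 9, 27], 2)
```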
## Examples
Integer sequences that have their own name include:
- Abundant numbers
- Baum–Sweet sequence
- Bell numbers
- Binomial coefficients
- Carmichael numbers
- Catalan numbers
- Composite numbers
- Deficient numbers
- Euler numbers
- Even and odd numbers
- Factorial numbers
- Fibonacci numbers
- Fibonacci word
- Figurate numbers
- Golomb sequence
- Happy numbers
- Highly composite numbers
- Highly totient numbers
- Home primes
- Hyperperfect numbers
- Juggler sequence
- Kolakoski sequence
- Lucky numbers
- Lucas numbers
- Motzkin numbers
- Natural numbers
- Padovan numbers
- Partition numbers
- Perfect numbers
- Practical numbers
- Prime numbers
- Pseudoprime numbers
- Recamán's sequence
- Regular paperfolding sequence
- Rudin–Shapiro sequence
- Semiperfect numbers
- Semiprime numbers
- Superperfect numbers
- Triangular numbers
- Thue–Morse sequence
- Ulam numbers
- Weird numbers
- Wolstenholme number
|
https://en.wikipedia.org/wiki/Integer_sequence
|
passage: Both of these lists of functions extend infinitely in both directions. The Möbius inversion formula enables these lists to be traversed backwards.
As an example, the sequence starting with the Euler totient function φ is:
$$
f_n =
\begin{cases}
\underbrace{\mu * \ldots * \mu}_{-n \text{ factors}} * \varphi & \text{if } n < 0 \\[8px]
\varphi & \text{if } n = 0 \\[8px]
\varphi * \underbrace{\mathit{1}* \ldots * \mathit{1}}_{n \text{ factors}} & \text{if } n > 0
\end{cases}
$$
The generated sequences can perhaps be more easily understood by considering the corresponding Dirichlet series: each repeated application of the transform corresponds to multiplication by the Riemann zeta function.
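The ladder of convolutions can be traversed in code. The sketch below implements the Dirichlet convolution and the Möbius function directly from their definitions (function names are this example's own) and checks that convolving with μ undoes convolving with the constant function 1, and that μ * id recovers φ, one rung down the ladder above.

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def dirichlet(f, g):
    """Dirichlet convolution: (f * g)(n) = sum over d | n of f(d) g(n/d)."""
    return lambda n: sum(f(d) * g(n // d) for d in divisors(n))

one = lambda n: 1   # the constant function 1

def mobius(n):
    """Möbius function μ(n), by trial factorisation (a small sketch)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor => μ = 0
            result = -result
        p += 1
    return -result if n > 1 else result

# μ * 1 = ε (1 at n = 1, else 0): convolving with μ inverts
# convolving with 1, which is exactly Möbius inversion.
eps = dirichlet(mobius, one)
assert [eps(n) for n in range(1, 13)] == [1] + [0] * 11

# One step down the ladder: μ * id = φ, e.g. φ(12) = 4.
phi = dirichlet(mobius, lambda n: n)
assert phi(12) == 4
```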
|
https://en.wikipedia.org/wiki/M%C3%B6bius_inversion_formula
|
passage: Gottfried Wilhelm Leibniz emphasised the hierarchical organization of living machines, noting in his book Monadology (1714) that "...the machines of nature, that is living bodies, are still machines in their smallest parts, to infinity." This idea was developed further by Julien Offray de La Mettrie (1709–1750) in his book L'Homme Machine. In the 19th century the advances in cell theory in biological science encouraged this view. The evolutionary theory of Charles Darwin (1859) is a mechanistic explanation for the origin of species by means of natural selection. At the beginning of the 20th century Stéphane Leduc (1853–1939) promoted the idea that biological processes could be understood in terms of physics and chemistry, and that their growth resembled that of inorganic crystals immersed in solutions of sodium silicate. His ideas, set out in his book La biologie synthétique, were widely dismissed during his lifetime, but have seen a resurgence of interest through the work of Russell, Barge and colleagues.
### Hylomorphism
Hylomorphism is a theory first expressed by the Greek philosopher Aristotle (322 BC). The application of hylomorphism to biology was important to Aristotle, and biology is extensively covered in his extant writings. In this view, everything in the material universe has both matter and form, and the form of a living thing is its soul (Greek psyche, Latin anima).
|
https://en.wikipedia.org/wiki/Life
|
passage: As such, the cell suicide mechanism is now crucial to all of our lives.
## DNA damage and apoptosis
Repair of DNA damages and apoptosis are two enzymatic processes essential for maintaining genome integrity in humans. Cells that are deficient in DNA repair tend to accumulate DNA damages, and when such cells are also defective in apoptosis they tend to survive even with excess DNA damage. Replication of DNA in such cells leads to mutations and these mutations may cause cancer (see Figure). Several enzymatic pathways have evolved for repairing different kinds of DNA damage, and it has been found that in five well studied DNA repair pathways particular enzymes have a dual role, where one role is to participate in repair of a specific class of damages and the second role is to induce apoptosis if the level of such DNA damage is beyond the cell's repair capability. These dual role proteins tend to protect against development of cancer. Proteins that function in such a dual role for each repair process are: (1) DNA mismatch repair, MSH2, MSH6, MLH1 and PMS2; (2) base excision repair, APEX1 (REF1/APE), poly(ADP-ribose) polymerase (PARP); (3) nucleotide excision repair, XPB, XPD (ERCC2), p53, p33(ING1b); (4) non-homologous end joining, the catalytic subunit of DNA-PK; (5) homologous recombinational repair, BRCA1, ATM, ATR, WRN, BLM, Tip60, p53.
## Programmed death of entire organisms
## Clinical significance
|
https://en.wikipedia.org/wiki/Programmed_cell_death
|
passage: Thus using a three-digit log table, the logarithm of 3542 is approximated by
$$
\begin{align}
\log_{10}3542 &= \log_{10}(1000 \cdot 3.542) \\
&= 3 + \log_{10}3.542 \\
&\approx 3 + \log_{10}3.54
\end{align}
$$
Greater accuracy can be obtained by interpolation:
$$
\log_{10}3542 \approx{} 3 + \log_{10}3.54 + 0.2 (\log_{10}3.55-\log_{10}3.54)
$$
The value of 10^x can then be determined by reverse look-up in the same table, since the logarithm is a monotonic function.
### Computations
The product and quotient of two positive numbers c and d were routinely calculated as the sum and difference of their logarithms. The product or quotient came from looking up the antilogarithm of the sum or difference, via the same table:
$$
cd = 10^{\, \log_{10} c} \, 10^{\,\log_{10} d} = 10^{\,\log_{10} c \, + \, \log_{10} d}
$$
and
$$
\frac c d = c d^{-1} = 10^{\, \log_{10}c \, - \, \log_{10}d}.
$$
For manual calculations that demand any appreciable precision, performing the lookups of the two logarithms, calculating their sum or difference, and looking up the antilogarithm is much faster than performing the multiplication by earlier methods such as prosthaphaeresis, which relies on trigonometric identities.
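The table procedure can be mimicked with a rounded logarithm standing in for a printed table. Everything below (the function names and the choice of four decimal places, as in a four-figure table) is an illustrative assumption.

```python
import math

def log_table(x):
    """Base-10 logarithm rounded to 4 decimal places, mimicking a
    four-figure printed log table."""
    return round(math.log10(x), 4)

def multiply_by_logs(c, d):
    """Multiply two positive numbers by adding their table logarithms
    and taking the antilogarithm, as described above."""
    s = log_table(c) + log_table(d)
    return 10 ** s

# Close to the exact product 3542 * 2.71 = 9598.82, with the small
# error coming only from the table's four-figure rounding.
print(multiply_by_logs(3542, 2.71))
```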
|
https://en.wikipedia.org/wiki/Logarithm
|
passage: ### Solution
First we need to represent e and p in a linear form. So we are going to rewrite the equation
$$
r(\theta)
$$
as
$$
\frac{1}{r(\theta)} = \frac{1}{p} - \frac{e}{p}\cos(\theta)
$$
. Furthermore, one could fit for apsides by expanding
$$
\cos(\theta)
$$
with an extra parameter as
$$
\cos(\theta-\theta_0)=\cos(\theta)\cos(\theta_0)+\sin(\theta)\sin(\theta_0)
$$
, which is linear in both
$$
\cos(\theta)
$$
and in the extra basis function
$$
\sin(\theta)
$$
, which can be used to extract
$$
\tan\theta_0=\sin(\theta_0)/\cos(\theta_0)
$$
.
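The linearized fit can be carried out end to end with ordinary least squares. In the sketch below the orbit parameters, grid, and variable names are this example's own, and the normal equations are solved by plain Gauss–Jordan elimination.

```python
import math

# Synthetic data from the model 1/r = 1/p - (e/p) cos(theta - t0).
p_true, e_true, t0_true = 2.0, 0.5, 0.3
thetas = [2 * math.pi * k / 40 for k in range(40)]
inv_r = [1 / p_true - (e_true / p_true) * math.cos(t - t0_true)
         for t in thetas]

# Design matrix for the linear model 1/r = a + b cos(theta) + c sin(theta).
X = [[1.0, math.cos(t), math.sin(t)] for t in thetas]

# Normal equations (X^T X) beta = X^T y, solved by Gauss-Jordan elimination.
XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
Xty = [sum(row[i] * y for row, y in zip(X, inv_r)) for i in range(3)]
M = [r[:] + [v] for r, v in zip(XtX, Xty)]
for i in range(3):
    piv = M[i][i]
    M[i] = [v / piv for v in M[i]]
    for j in range(3):
        if j != i:
            M[j] = [vj - M[j][i] * vi for vj, vi in zip(M[j], M[i])]
a, b, c = (M[i][3] for i in range(3))

# Recover the orbit parameters: a = 1/p, (b, c) = -(e/p)(cos t0, sin t0).
p_fit = 1 / a
e_fit = p_fit * math.hypot(b, c)
t0_fit = math.atan2(-c, -b)
print(p_fit, e_fit, t0_fit)   # recovers 2.0, 0.5, 0.3
```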
|
https://en.wikipedia.org/wiki/Ordinary_least_squares
|
passage: It has been proposed that, in the cork layer (the phellogen), suberin acts as a barrier to microbial degradation and so protects the internal structure of the plant.
Analysis of the lignin in the bark wall during decay by the white-rot fungi Lentinula edodes (Shiitake mushroom) using 13C NMR revealed that the lignin polymers contained more Guaiacyl lignin units than Syringyl units compared to the interior of the plant. Guaiacyl units are less susceptible to degradation as, compared to syringyl, they contain fewer aryl-aryl bonds, can form a condensed lignin structure, and have a lower redox potential. This could mean that the concentration and type of lignin units could provide additional resistance to fungal decay for plants protected by bark.
## Damage and repair
Bark can sustain damage from environmental factors, such as frost crack and sun scald, as well as biological factors, such as woodpecker and boring beetle attacks. Male deer and other male members of the Cervidae (deer family) can cause extensive bark damage during the rutting season by rubbing their antlers against the tree to remove their velvet.
The bark is often damaged by being bound to stakes or wrapped with wires. In the past, this damage was called bark-galling and was treated by applying clay laid on the galled place and binding it up with hay. In modern usage, "galling" most typically refers to a type of abnormal growth on a plant caused by insects or pathogens.
|
https://en.wikipedia.org/wiki/Bark_%28botany%29
|
passage: Magnetism is the class of physical attributes that occur through a magnetic field, which allows objects to attract or repel each other. Because both electric currents and magnetic moments of elementary particles give rise to a magnetic field, magnetism is one of two aspects of electromagnetism.
The most familiar effects occur in ferromagnetic materials, which are strongly attracted by magnetic fields and can be magnetized to become permanent magnets, producing magnetic fields themselves. Demagnetizing a magnet is also possible. Only a few substances are ferromagnetic; the most common ones are iron, cobalt, nickel, and their alloys.
All substances exhibit some type of magnetism. Magnetic materials are classified according to their bulk susceptibility.
### Ferromagnetism
is responsible for most of the effects of magnetism encountered in everyday life, but there are actually several types of magnetism. Paramagnetic substances, such as aluminium and oxygen, are weakly attracted to an applied magnetic field; diamagnetic substances, such as copper and carbon, are weakly repelled; while antiferromagnetic materials, such as chromium, have a more complex relationship with a magnetic field. The force of a magnet on paramagnetic, diamagnetic, and antiferromagnetic materials is usually too weak to be felt and can be detected only by laboratory instruments, so in everyday life, these substances are often described as non-magnetic.
The strength of a magnetic field always decreases with distance from the magnetic source, though the exact mathematical relationship between strength and distance varies.
|
https://en.wikipedia.org/wiki/Magnetism
|
passage: In one period of a maximal LFSR, 2^(n−1) runs occur (in the example above, the 3-bit LFSR has 4 runs). Exactly half of these runs are one bit long, a quarter are two bits long, up to a single run of zeroes n − 1 bits long, and a single run of ones n bits long. This distribution almost equals the statistical expectation value for a truly random sequence. However, the probability of finding exactly this distribution in a sample of a truly random sequence is rather low.
- LFSR output streams are deterministic. If the present state and the positions of the XOR gates in the LFSR are known, the next state can be predicted. This is not possible with truly random events. With maximal-length LFSRs, it is much easier to compute the next state, as there is only a limited, easily enumerated number of states for each length.
- The output stream is reversible; an LFSR with mirrored taps will cycle through the output sequence in reverse order.
- The value consisting of all zeros cannot appear. Thus an LFSR of length n cannot be used to generate all 2^n values.
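These properties are easy to observe on the 3-bit example. The sketch below implements a Fibonacci-style LFSR (the function names and tap convention are this example's own) and confirms the maximal period 2^3 − 1 = 7 and the four runs mentioned above.

```python
def lfsr_sequence(taps, state, nbits):
    """Fibonacci LFSR: output the low bit, feed the XOR of the tapped
    bit positions (0-based) back into the top bit. Runs for exactly one
    period; the all-zero state is a fixed point and must be avoided."""
    start, out = state, []
    while True:
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
        if state == start:
            return out

def cyclic_runs(bits):
    """Number of runs in one period, reading the sequence cyclically."""
    return sum(bits[i] != bits[(i + 1) % len(bits)] for i in range(len(bits)))

seq = lfsr_sequence(taps=(0, 1), state=0b001, nbits=3)
assert len(seq) == 7          # maximal period 2^3 - 1
assert cyclic_runs(seq) == 4  # the 4 runs noted above
```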
Applications
LFSRs can be implemented in hardware, and this makes them useful in applications that require very fast generation of a pseudo-random sequence, such as direct-sequence spread spectrum radio. LFSRs have also been used for generating an approximation of white noise in various programmable sound generators.
|
https://en.wikipedia.org/wiki/Linear-feedback_shift_register
|
passage: This derivation assumes that the material has constant mass density and heat capacity through space as well as time.
Applying the law of conservation of energy to a small element of the medium centred at
$$
x
$$
, one concludes that the rate at which heat changes at a given point
$$
x
$$
is equal to the negative of the spatial derivative of the heat flux at that point (the difference between the heat flows on either side of the point). That is,
$$
\frac{\partial Q}{\partial t} = - \frac{\partial q}{\partial x}
$$
From the above equations it follows that
$$
\frac{\partial u}{\partial t} \;=\; - \frac{1}{c \rho} \frac{\partial q}{\partial x}
\;=\; - \frac{1}{c \rho} \frac{\partial}{\partial x} \left(-k \,\frac{\partial u}{\partial x} \right)
\;=\; \frac{k}{c \rho} \frac{\partial^2 u}{\partial x^2}
$$
which is the heat equation in one dimension, with diffusivity coefficient
$$
\alpha = \frac{k}{c\rho}
$$
This quantity is called the thermal diffusivity of the medium.
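The equation can be integrated numerically with the simplest explicit finite-difference scheme. The grid sizes, time step, and sine initial profile below are this example's own choices, with the step kept under the usual explicit stability limit dt ≤ dx²/(2α).

```python
import math

# Explicit finite differences for u_t = alpha * u_xx on [0, 1],
# with u = 0 held at both ends.
alpha, n = 1.0, 50
dx = 1.0 / n
dt = 0.4 * dx * dx / alpha          # below the stability limit 0.5 dx^2/alpha
u = [math.sin(math.pi * i * dx) for i in range(n + 1)]   # initial profile

steps = 200
for _ in range(steps):
    u = ([0.0] +
         [u[i] + alpha * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
          for i in range(1, n)] +
         [0.0])

# For this initial condition the exact solution is
# u(x, t) = exp(-alpha pi^2 t) sin(pi x); compare at the midpoint.
t = steps * dt
exact = math.exp(-alpha * math.pi ** 2 * t) * math.sin(math.pi * 0.5)
print(u[n // 2], exact)
```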
#### Accounting for radiative loss
An additional term may be introduced into the equation to account for radiative loss of heat.
|
https://en.wikipedia.org/wiki/Heat_equation
|
passage: ### Other approaches
Since all propositional formulas can be converted into an equivalent formula in conjunctive normal form, proofs often assume that all formulas are in CNF. However, in some cases this conversion to CNF can lead to an exponential explosion of the formula. For example, translating the non-CNF formula
$$
(X_1 \wedge Y_1) \vee (X_2 \wedge Y_2) \vee \ldots \vee (X_n \wedge Y_n)
$$
into CNF produces a formula with
$$
2^n
$$
clauses:
$$
(X_1 \vee X_2 \vee \ldots \vee X_n) \wedge (Y_1 \vee X_2 \vee \ldots \vee X_n) \wedge (X_1 \vee Y_2 \vee \ldots \vee X_n) \wedge (Y_1 \vee Y_2 \vee \ldots \vee X_n) \wedge \ldots \wedge (Y_1 \vee Y_2 \vee \ldots \vee Y_n).
$$
Each clause contains either
$$
X_i
$$
or
$$
Y_i
$$
for each
$$
i
$$
.
There exist transformations into CNF that avoid an exponential increase in size by preserving satisfiability rather than equivalence. These transformations are guaranteed to only linearly increase the size of the formula, but introduce new variables.
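One such satisfiability-preserving transformation introduces a fresh selector variable per disjunct, in the Tseitin style. The sketch below uses this example's own conventions (integers for literals, negative for negated) and produces 2n + 1 clauses instead of 2^n for the formula discussed above.

```python
def to_cnf_equisat(pairs):
    """Satisfiability-preserving CNF for (x1 & y1) | ... | (xn & yn).
    For each disjunct a fresh variable z_i is added with the clauses
    (-z_i | x_i) and (-z_i | y_i), i.e. z_i implies the disjunct, plus
    one clause (z_1 | ... | z_n) forcing some disjunct to hold."""
    clauses, zs = [], []
    next_var = max(abs(lit) for p in pairs for lit in p) + 1
    for x, y in pairs:
        z = next_var
        next_var += 1
        zs.append(z)
        clauses.append([-z, x])
        clauses.append([-z, y])
    clauses.append(zs)
    return clauses

# For n = 3 pairs: 2n + 1 = 7 clauses, versus 2^3 = 8 by distribution.
cnf = to_cnf_equisat([(1, 2), (3, 4), (5, 6)])
assert len(cnf) == 7
```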
|
https://en.wikipedia.org/wiki/Conjunctive_normal_form
|
passage: Then every cohomology class in
$$
H^{2k}(X, \Z) \cap H^{k,k}(X)
$$
is the cohomology class of an algebraic cycle with integral coefficients on X.
This is now known to be false. The first counterexample was constructed by . Using K-theory, they constructed an example of a torsion cohomology class—that is, a cohomology class α such that nα = 0 for some positive integer n—which is not the class of an algebraic cycle. Such a class is necessarily a Hodge class. reinterpreted their result in the framework of cobordism and found many examples of such classes.
The simplest adjustment of the integral Hodge conjecture is:
Integral Hodge conjecture modulo torsion. Let X be a projective complex manifold. Then every cohomology class in
$$
H^{2k}(X, \Z) \cap H^{k,k}(X)
$$
is the sum of a torsion class and the cohomology class of an algebraic cycle with integral coefficients on X.
Equivalently, after dividing
$$
H^{2k}(X, \Z) \cap H^{k,k}(X)
$$
by torsion classes, every class is the image of the cohomology class of an integral algebraic cycle. This is also false.
|
https://en.wikipedia.org/wiki/Hodge_conjecture
|
passage: As a result, equipment used for real launch countdown operations is engaged. Command and control computers, application software, engineering plotting and trending tools, launch countdown procedure documents, launch commit criteria documents, hardware requirement documents, and any other items used by the engineering launch countdown teams during real launch countdown operations are used during the simulation.
The Space Shuttle vehicle hardware and related GSE hardware is simulated by mathematical models (written in Shuttle Ground Operations Simulator (SGOS) modeling language) that behave and react like real hardware. During the Shuttle Final Countdown Phase Simulation, engineers command and control hardware via real application software executing in the control consoles – just as if they were commanding real vehicle hardware. However, these real software applications do not interface with real Shuttle hardware during simulations. Instead, the applications interface with mathematical model representations of the vehicle and GSE hardware. Consequently, the simulations bypass sensitive and even dangerous mechanisms while providing engineering measurements detailing how the hardware would have reacted. Since these math models interact with the command and control application software, models and simulations are also used to debug and verify the functionality of application software.
### Satellite navigation
The only true way to test GNSS receivers (commonly known as Sat-Navs in the commercial world) is by using an RF Constellation Simulator. A receiver that may, for example, be used on an aircraft, can be tested under dynamic conditions without the need to take it on a real flight. The test conditions can be repeated exactly, and there is full control over all the test parameters.
|
https://en.wikipedia.org/wiki/Simulation
|
passage: Note that the insert operation above is tail-recursive, so it can be rewritten as a while loop. Other operations are described in more detail in the original paper on Ctries.
The data-structure has been proven to be correct: Ctrie operations have been shown to have the atomicity, linearizability and lock-freedom properties. The lookup operation can be modified to guarantee wait-freedom.
## Advantages of Ctries
Ctries have been shown to be comparable in performance with concurrent skip lists, concurrent hash tables and similar data structures in terms of the lookup operation, being slightly slower than hash tables and faster than skip lists due to the lower level of indirections. However, they are far more scalable than most concurrent hash tables where the insertions are concerned. Most concurrent hash tables are bad at conserving memory - when the keys are removed from the hash table, the underlying array is not shrunk. Ctries have the property that the allocated memory is always a function of only the current number of keys in the data-structure.
Ctries have logarithmic complexity bounds of the basic operations, albeit with a low constant factor due to the high branching level (usually 32).
Ctries support a lock-free, linearizable, constant-time snapshot operation, based on the insight obtained from persistent data structures. This is a breakthrough in concurrent data-structure design, since existing concurrent data-structures do not support snapshots.
|
https://en.wikipedia.org/wiki/Ctrie
|
passage: In mathematics, a fractal is a geometric shape containing detailed structure at arbitrarily small scales, usually having a fractal dimension strictly exceeding the topological dimension. Many fractals appear similar at various scales, as illustrated in successive magnifications of the Mandelbrot set. This exhibition of similar patterns at increasingly smaller scales is called self-similarity, also known as expanding symmetry or unfolding symmetry; if this replication is exactly the same at every scale, as in the Menger sponge, the shape is called affine self-similar. Fractal geometry lies within the mathematical branch of measure theory.
One way that fractals are different from finite geometric figures is how they scale. Doubling the edge lengths of a filled polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the conventional dimension of the filled polygon). Likewise, if the radius of a filled sphere is doubled, its volume scales by eight, which is two (the ratio of the new to the old radius) to the power of three (the conventional dimension of the filled sphere). However, if a fractal's one-dimensional lengths are all doubled, the spatial content of the fractal scales by a power that is not necessarily an integer and is in general greater than its conventional dimension. This power is called the fractal dimension of the geometric object, to distinguish it from the conventional dimension (which is formally called the topological dimension).
|
https://en.wikipedia.org/wiki/Fractal
|
passage: In computer science, counting sort is an algorithm for sorting a collection of objects according to keys that are small positive integers; that is, it is an integer sorting algorithm. It operates by counting the number of objects that possess distinct key values, and applying prefix sum on those counts to determine the positions of each key value in the output sequence. Its running time is linear in the number of items and the difference between the maximum key value and the minimum key value, so it is only suitable for direct use in situations where the variation in keys is not significantly greater than the number of items. It is often used as a subroutine in radix sort, another sorting algorithm, which can handle larger keys more efficiently.
Counting sort is not a comparison sort; it uses key values as indexes into an array and the lower bound for comparison sorting will not apply. Bucket sort may be used in lieu of counting sort, and entails a similar time analysis. However, compared to counting sort, bucket sort requires linked lists, dynamic arrays, or a large amount of pre-allocated memory to hold the sets of items within each bucket, whereas counting sort stores a single number (the count of items) per bucket.
## Input and output assumptions
In the most general case, the input to counting sort consists of a collection of items, each of which has a non-negative integer key whose maximum value is at most .
In some descriptions of counting sort, the input to be sorted is assumed to be more simply a sequence of integers itself, but this simplification does not accommodate many applications of counting sort.
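The steps described above (histogram of key values, prefix sums, stable placement) can be sketched in Python; the function name and the `key` parameter are illustrative choices, not from the article:

```python
def counting_sort(items, key=lambda x: x):
    """Stable counting sort for items with small non-negative integer keys."""
    if not items:
        return []
    k = max(key(x) for x in items)           # maximum key value
    counts = [0] * (k + 1)
    for x in items:                          # count occurrences of each key
        counts[key(x)] += 1
    total = 0
    for i in range(k + 1):                   # prefix sums -> starting positions
        counts[i], total = total, total + counts[i]
    out = [None] * len(items)
    for x in items:                          # place items; stable because we
        out[counts[key(x)]] = x              # scan the input in order
        counts[key(x)] += 1
    return out
```

For n items with keys bounded by k this runs in O(n + k) time, and its stability is exactly what radix sort relies on when it uses counting sort as a subroutine.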
|
https://en.wikipedia.org/wiki/Counting_sort
|
passage: A functor F is then called a full embedding if it is a full functor and an embedding.
With the definitions of the previous paragraph, for any (full) embedding F : B → C the image of F is a (full) subcategory S of C, and F induces an isomorphism of categories between B and S. If F is not injective on objects then the image of F is equivalent to B.
In some categories, one can also speak of morphisms of the category being embeddings.
## Types of subcategories
A subcategory S of C is said to be isomorphism-closed or replete if every isomorphism k : X → Y in C such that Y is in S also belongs to S. An isomorphism-closed full subcategory is said to be strictly full.
A subcategory of C is wide or lluf (a term first posed by Peter Freyd) if it contains all the objects of C. A wide subcategory is typically not full: the only wide full subcategory of a category is that category itself.
A Serre subcategory is a non-empty full subcategory S of an abelian category C such that for all short exact sequences
$$
0\to M'\to M\to M''\to 0
$$
in C, M belongs to S if and only if both
$$
M'
$$
and
$$
M''
$$
do. This notion arises from Serre's C-theory.
|
https://en.wikipedia.org/wiki/Subcategory
|
passage: This kinetic behaviour is called slow-binding. This slow rearrangement after binding often involves a conformational change as the enzyme "clamps down" around the inhibitor molecule. Examples of slow-binding inhibitors include some important drugs, such as methotrexate, allopurinol, and the activated form of acyclovir.
### Some examples
Diisopropylfluorophosphate (DFP) is an example of an irreversible protease inhibitor (see the "DFP reaction" diagram). The enzyme hydrolyses the phosphorus–fluorine bond, but the phosphate residue remains bound to the serine in the active site, deactivating it. Similarly, DFP also reacts with the active site of acetylcholine esterase in the synapses of neurons, and consequently is a potent neurotoxin, with a lethal dose of less than 100 mg.
Suicide inhibition is an unusual type of irreversible inhibition where the enzyme converts the inhibitor into a reactive form in its active site. An example is the inhibitor of polyamine biosynthesis, α-difluoromethylornithine (DFMO), which is an analogue of the amino acid ornithine, and is used to treat African trypanosomiasis (sleeping sickness). Ornithine decarboxylase can catalyse the decarboxylation of DFMO instead of ornithine (see the "DFMO inhibitor mechanism" diagram).
|
https://en.wikipedia.org/wiki/Enzyme_inhibitor
|
passage: ## Definition
Without loss of generality, we can consider only centered profiles, which peak at zero. The Voigt profile is then
$$
V(x;\sigma,\gamma) \equiv \int_{-\infty}^\infty G(x';\sigma)L(x-x';\gamma)\, dx',
$$
where x is the shift from the line center,
$$
G(x;\sigma)
$$
is the centered Gaussian profile:
$$
G(x;\sigma) \equiv \frac{e^{-\frac{x^2}{2\sigma^2}}}{\sqrt{2\pi}\,\sigma},
$$
and
$$
L(x;\gamma)
$$
is the centered Lorentzian profile:
$$
L(x;\gamma) \equiv \frac{\gamma}{\pi(\gamma^2+x^2)}.
$$
The defining integral can be evaluated as:
$$
V(x;\sigma,\gamma)=\frac{\operatorname{Re}[w(z)]}{\sqrt{2 \pi}\,\sigma},
$$
where Re[w(z)] is the real part of the Faddeeva function evaluated for
$$
z=\frac{x+i\gamma}{\sqrt{2}\, \sigma}.
$$
In the limiting cases of
$$
\sigma=0
$$
and
$$
\gamma =0
$$
then
$$
V(x;\sigma,\gamma)
$$
simplifies to
$$
L(x;\gamma)
$$
and
$$
G(x;\sigma)
$$
, respectively.
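As a numerical sanity check, the defining convolution can be evaluated by brute force with NumPy. The grid sizes and the test values σ = 1, γ = 0.5 below are arbitrary assumptions; practical code would instead evaluate the Faddeeva function (e.g. `scipy.special.wofz`):

```python
import numpy as np

def gaussian(x, sigma):
    """Centered Gaussian profile G(x; sigma)."""
    return np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def lorentzian(x, gamma):
    """Centered Lorentzian profile L(x; gamma)."""
    return gamma / (np.pi * (gamma**2 + x**2))

def voigt_numeric(x, sigma, gamma, lim=10.0, n=4001):
    """Voigt profile by direct numerical evaluation of the defining
    convolution integral (a brute-force sketch, not the Faddeeva route)."""
    xp = np.linspace(-lim, lim, n)   # x' grid; the Gaussian factor decays fast
    dxp = xp[1] - xp[0]
    integrand = gaussian(xp, sigma) * lorentzian(np.atleast_1d(x)[:, None] - xp, gamma)
    # composite trapezoidal rule along the x' axis
    return dxp * (integrand.sum(axis=1) - 0.5 * (integrand[:, 0] + integrand[:, -1]))

x = np.linspace(-40, 40, 401)
v = voigt_numeric(x, sigma=1.0, gamma=0.5)
# convolution of two unit-area profiles has unit area (up to tail truncation)
area = (x[1] - x[0]) * (v.sum() - 0.5 * (v[0] + v[-1]))
```

The peak value can be cross-checked against the Faddeeva-function form: for σ = 1, γ = 0.5, V(0) = Re[w(iγ/√2)]/√(2π) = e^{γ²/2} erfc(γ/√2)/√(2π) ≈ 0.2789.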
|
https://en.wikipedia.org/wiki/Voigt_profile
|
passage: The likelihood of failure, however, can often be reduced through improved system design. Fault tree analysis maps the relationship between faults, subsystems, and redundant safety design elements by creating a logic diagram of the overall system.
The undesired outcome is taken as the root ('top event') of a tree of logic. For instance, the undesired outcome of a metal stamping press operation being considered might be a human appendage being stamped. Working backward from this top event it might be determined that there are two ways this could happen: during normal operation or during maintenance operation. This condition is a logical OR. Considering the branch of the hazard occurring during normal operation, perhaps it is determined that there are two ways this could happen: the press cycles and harms the operator, or the press cycles and harms another person. This is another logical OR. A design improvement can be made by requiring the operator to press two separate buttons to cycle the machine—this is a safety feature in the form of a logical AND. The button may have an intrinsic failure rate—this becomes a fault stimulus that can be analyzed.
When fault trees are labeled with actual numbers for failure probabilities, computer programs can calculate failure probabilities from fault trees. When a specific event is found to have more than one effect event, i.e. it has impact on several subsystems, it is called a common cause or common mode. Graphically speaking, it means this event will appear at several locations in the tree. Common causes introduce dependency relations between events.
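Under the usual assumption of independent input events, OR and AND gates combine probabilities as sketched below. The stamping-press numbers are invented purely for illustration:

```python
def or_gate(*probs):
    """Probability that at least one independent input event occurs."""
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

def and_gate(*probs):
    """Probability that all independent input events occur."""
    out = 1.0
    for p in probs:
        out *= p
    return out

# Stamping-press example with invented, purely illustrative probabilities.
# The two-button interlock turns a single fault into an AND of two faults:
p_cycle = and_gate(1e-3, 1e-3)            # both buttons must fail
p_normal_op = or_gate(p_cycle, p_cycle)   # harms operator OR another person
p_top = or_gate(p_normal_op, 5e-5)        # normal operation OR maintenance
```

The design improvement is visible in the numbers: the AND gate multiplies the two button fault probabilities, so the interlocked cycle fault is far rarer than either single fault.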
|
https://en.wikipedia.org/wiki/Fault_tree_analysis
|
passage: In cryptography, key stretching techniques are used to make a possibly weak key, typically a password or passphrase, more secure against a brute-force attack by increasing the resources (time and possibly space) it takes to test each possible key. Passwords or passphrases created by humans are often short or predictable enough to allow password cracking, and key stretching is intended to make such attacks more difficult by complicating a basic step of trying a single password candidate. Key stretching also improves security in some real-world applications where the key length has been constrained, by mimicking a longer key length from the perspective of a brute-force attacker.
There are several ways to perform key stretching. One way is to apply a cryptographic hash function or a block cipher repeatedly in a loop. For example, in applications where the key is used for a cipher, the key schedule in the cipher may be modified so that it takes a specific length of time to perform. Another way is to use cryptographic hash functions that have large memory requirements – these can be effective in frustrating attacks by memory-bound adversaries.
## Process
Key stretching algorithms depend on an algorithm which receives an input key and then expends considerable effort to generate a stretched cipher (called an enhanced key) mimicking randomness and longer key length. The algorithm must have no known shortcut, so the most efficient way to relate the input and cipher is to repeat the key stretching algorithm itself. This compels brute-force attackers to expend the same effort for each attempt.
|
https://en.wikipedia.org/wiki/Key_stretching
|
passage: The hypergeometric equation is the case n = 3, with group of order 24 isomorphic to the symmetric group on 4 points, as first described by
Kummer. The appearance of the symmetric group is accidental and has no analogue for more than 3 singular points, and it is sometimes better to think of the group as an extension of the symmetric group on 3 points (acting as permutations of the 3 singular points) by a Klein 4-group (whose elements change the signs of the differences of the exponents at an even number of singular points). Kummer's group of 24 transformations is generated by the three transformations taking a solution F(a,b;c;z) to one of
$$
\begin{align}
(1-z)^{-a} F \left (a,c-b;c; \tfrac{z}{z-1} \right ) \\
F(a,b;1+a+b-c;1-z) \\
(1-z)^{-b} F \left(c-a,b;c; \tfrac{z}{z-1} \right )
\end{align}
$$
which correspond to the transpositions (12), (23), and (34) under an isomorphism with the symmetric group on 4 points 1, 2, 3, 4. (The first and third of these are actually equal to F(a,b;c;z) whereas the second is an independent solution to the differential equation.)
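The first and third of these (the Pfaff transformations) can be verified numerically against a plain power-series evaluation of F(a,b;c;z), valid for |z| < 1; the parameter values below are arbitrary test points chosen so that |z/(z−1)| < 1 as well:

```python
def hyp2f1(a, b, c, z, terms=300):
    """Gauss hypergeometric series for F(a, b; c; z), |z| < 1 (a sketch)."""
    s, t = 1.0, 1.0
    for n in range(terms):
        # ratio of consecutive terms: (a+n)(b+n) z / ((c+n)(n+1))
        t *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        s += t
    return s

a, b, c, z = 0.5, 1.2, 2.3, 0.3     # arbitrary test point
rhs = hyp2f1(a, b, c, z)
lhs1 = (1 - z) ** (-a) * hyp2f1(a, c - b, c, z / (z - 1))
lhs3 = (1 - z) ** (-b) * hyp2f1(c - a, b, c, z / (z - 1))
```

Only the first and third transformations reproduce F(a,b;c;z) itself; the second, F(a,b;1+a+b−c;1−z), is an independent solution of the differential equation and is not checked this way.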
|
https://en.wikipedia.org/wiki/Hypergeometric_function
|
passage: $$
whose domains are topological spaces
$$
\left(Z_a, \zeta_a\right).
$$
If every
$$
g_a : \left(Z_a, \zeta_a\right) \to \left(X, \tau_{\mathcal{F}}\right)
$$
is continuous then adding these maps to the family
$$
\mathcal{F}
$$
will change the final topology on
$$
X;
$$
that is,
$$
\tau_{\mathcal{F} \cup \mathcal{G}} = \tau_{\mathcal{F}}.
$$
Explicitly, this means that the final topology on
$$
X
$$
induced by the "extended family"
$$
\mathcal{F} \cup \mathcal{G}
$$
is equal to the final topology
$$
\tau_{\mathcal{F}}
$$
induced by the original family
$$
\mathcal{F} = \left\{ f_i : i \in I \right\}.
$$
However, had there instead existed even just one map
$$
g_{a_0}
$$
|
https://en.wikipedia.org/wiki/Final_topology
|
passage: (Faroese) N/A (2008); 18,917
Liechtenstein: 20,000 (1967); 29,000 (1990); N/A (1994); 33,307 (2000); 35,789 (2009) ["Bevölkerungsstatistik 30. Juni 2009", Landesverwaltung Liechtenstein]; (2008); growth 15,789
South Korea: 29,207,856 (1966); 42,793,000 (1990); 44,453,000 (1994); 48,324,000 (2003); 48,875,000 (2010); (2008); growth 19,667,144
North Korea: 12,700,000 (1967); 21,773,000 (1990); 23,483,000 (1994); 22,224,195 (2002); 24,051,218 (2010); (2008); growth 11,351,218
Brunei: 107,200 (1967); 266,000 (1990); 280,000 (1994); 332,844 (2001); 401,890 (2011); 76 (2008); growth 306,609
Malaysia: 10,671,000 (1967); 17,861,000 (1990); 19,489,000 (1994); 21,793,293 (2002); 27,565,821 (2010); (2008); growth 16,894,821
Thailand: 32,680,000 (1967); 57,196,000 (1990); 59,396,000 (1994); 60,606,947 (2000); 63,878,267 (2011); (2008); growth 31,198,267
Lebanon: 2,520,000 (1967); 2,701,000 (1990); 2,915,000 (1994); 3,727,703 (2003); 4,224,000 (2009); - (2008)
Syria: 5,600,000 (1967); 12,116,000 (1990); 13,844,000 (1994); 17,585,540 (2003); 22,457,763 (2011); - (2008)
Bahrain: 182,000 (1967); 503,000 (1990); 549,000 (1994); 667,238 (2003); 1,234,596 (2010); 75 (2008)
Sri Lanka: 11,741,000 (1967); 16,993,000 (1990); 17,685,000 (1994); 19,607,519 (2002); 20,238,000 (2009); - (2008)
Switzerland: 6,050,000 (1967); 6,712,000 (1990); 6,994,000 (1994); 7,261,200 (2002); 7,866,500 (2010); - (2008)
Luxembourg: 335,000 (1967); 381,000 (1990); 401,000 (1994); 439,539 (2001); 511,840 (2011) ["Population: 511 840 habitants au 1er janvier 2011", Le Portail des statistiques: Grand-Duché de Luxembourg, 3 May 2011]
|
https://en.wikipedia.org/wiki/Population_growth
|
passage: ## Involute
The involute of the cycloid has exactly the same shape as the cycloid it originates from. This can be visualized as the path traced by the tip of a wire initially lying on a half arch of the cycloid: as it unrolls while remaining tangent to the original cycloid, it describes a new cycloid (see also cycloidal pendulum and arc length).
### Demonstration
This demonstration uses the rolling-wheel definition of cycloid, as well as the instantaneous velocity vector of a moving point, tangent to its trajectory. In the adjacent picture,
$$
P_1
$$
and
$$
P_2
$$
are two points belonging to two rolling circles, with the base of the first just above the top of the second. Initially,
$$
P_1
$$
and
$$
P_2
$$
coincide at the intersection point of the two circles. When the circles roll horizontally with the same speed,
$$
P_1
$$
and
$$
P_2
$$
traverse two cycloid curves. Considering the red line connecting
$$
P_1
$$
and
$$
P_2
$$
at a given time, one proves the line is always tangent to the lower arc at P_2 and orthogonal to the upper arc at P_1. Let
$$
Q
$$
be the point in common between the upper and lower circles at the given time.
|
https://en.wikipedia.org/wiki/Cycloid
|
passage: This greedy algorithm approximates the set cover to within the same Hn factor that Lovász proved as the integrality gap for set cover. There are strong complexity-theoretic reasons for believing that no polynomial time approximation algorithm can achieve a significantly better approximation ratio.
Similar randomized rounding techniques, and derandomized approximation algorithms, may be used in conjunction with linear programming relaxation to develop approximation algorithms for many other problems, as described by Raghavan, Tompson, and Young.
## Branch and bound for exact solutions
As well as its uses in approximation, linear programming plays an important role in branch and bound algorithms for computing the true optimum solution to hard optimization problems.
If some variables in the optimal solution have fractional values, we may start a branch and bound type process, in which we recursively solve subproblems in which some of the fractional variables have their values fixed to either zero or one. In each step of an algorithm of this type, we consider a subproblem of the original 0–1 integer program in which some of the variables have values assigned to them, either 0 or 1, and the remaining variables are still free to take on either value. In subproblem i, let Vi denote the set of remaining variables. The process begins by considering a subproblem in which no variable values have been assigned, and in which V0 is the whole set of variables of the original problem. Then, for each subproblem i, it performs the following steps.
1. Compute the optimal solution to the linear programming relaxation of the current subproblem.
|
https://en.wikipedia.org/wiki/Linear_programming_relaxation
|
passage: It can be extended to the Fourier transform of abstract harmonic analysis defined over locally compact abelian groups.
### Periodic convolution (Fourier series coefficients)
Consider
$$
P
$$
-periodic functions
$$
u_{_P}
$$
and
$$
v_{_P},
$$
which can be expressed as periodic summations:
$$
u_{_P}(x)\ \triangleq \sum_{m=-\infty}^{\infty} u(x-mP)
$$
and
$$
v_{_P}(x)\ \triangleq \sum_{m=-\infty}^{\infty} v(x-mP).
$$
In practice the non-zero portion of components
$$
u
$$
and
$$
v
$$
are often limited to duration
$$
P,
$$
but nothing in the theorem requires that.
The Fourier series coefficients are:
$$
\begin{align}
U[k] &\triangleq \mathcal{F}\{u_{_P}\}[k] = \frac{1}{P} \int_P u_{_P}(x) e^{-i 2\pi k x/P} \, dx, \quad k \in \mathbb{Z}; \quad \quad \scriptstyle \text{integration over any interval of length } P\\
V[k] &\triangleq \mathcal{F}\{v_{_P}\}[k] = \frac{1}{P} \int_P v_{_P}(x) e^{-i 2\pi k x/P} \, dx, \quad k \in \mathbb{Z}
\end{align}
$$
where
$$
\mathcal{F}
$$
denotes the Fourier series integral.
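The discrete analogue of this periodic-convolution setup is easy to check with the DFT: the transform of a circular convolution equals the pointwise product of the transforms. A NumPy sketch (the signal length and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
u = rng.standard_normal(N)
v = rng.standard_normal(N)

# circular (periodic) convolution computed directly from the definition
w = np.array([sum(u[m] * v[(n - m) % N] for m in range(N)) for n in range(N)])

# convolution theorem, discrete form: DFT(w) = DFT(u) * DFT(v)
lhs = np.fft.fft(w)
rhs = np.fft.fft(u) * np.fft.fft(v)
```

This mirrors the continuous statement for Fourier series coefficients, with the DFT playing the role of the coefficient integral.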
|
https://en.wikipedia.org/wiki/Convolution_theorem
|
passage: As we can see,
$$
{q''}
$$
and
$$
{J}
$$
are analogous,
$$
{k}
$$
and
$$
{D}
$$
are analogous, while
$$
{T}
$$
and
$$
{C}
$$
are analogous.
### Implementing the Analogy
Heat-Mass Analogy:
Because the Nu and Sh equations are derived from these analogous governing equations, one can directly swap the Nu and Sh and the Pr and Sc numbers to convert these equations between mass and heat.
In many situations, such as flow over a flat plate, the Nu and Sh numbers are functions of the Pr and Sc numbers to some coefficient
$$
n
$$
. Therefore, one can directly calculate these numbers from one another using:
$$
\frac{Nu}{Sh} = \frac{Pr^n}{Sc^n}
$$
Where n = 1/3 can be used in most cases, which comes from the analytical solution for the Nusselt Number for laminar flow over a flat plate. For best accuracy, n should be adjusted where correlations have a different exponent.
We can take this further by substituting into this equation the definitions of the heat transfer coefficient, mass transfer coefficient, and Lewis number, yielding:
$$
\frac{h}{h_m} = \frac{k}{D Le^n} =\rho C_p Le^{1-n}
$$
For fully developed turbulent flow, with n=1/3, this becomes the Chilton–Colburn J-factor analogy.
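A sketch of these conversions in Python; the property values below (an air-like Prandtl number and a Schmidt number typical of water vapour in air) are illustrative assumptions, not from the article:

```python
def sherwood_from_nusselt(nu, pr, sc, n=1/3):
    """Convert Nu to Sh via Nu/Sh = Pr**n / Sc**n."""
    return nu * (sc / pr) ** n

def mass_transfer_coeff(h, rho, cp, le, n=1/3):
    """Mass transfer coefficient h_m from h via h/h_m = rho * cp * Le**(1 - n)."""
    return h / (rho * cp * le ** (1 - n))

# Illustrative (invented) values: air-like Pr, water vapour Sc, Le = Sc/Pr
pr, sc = 0.707, 0.6
sh = sherwood_from_nusselt(nu=50.0, pr=pr, sc=sc)
hm = mass_transfer_coeff(h=100.0, rho=1.16, cp=1007.0, le=sc / pr)
```

With n = 1/3 this is the same substitution that, for fully developed turbulent flow, reduces to the Chilton–Colburn J-factor analogy.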
|
https://en.wikipedia.org/wiki/Transport_phenomena
|
passage: The primary limitation with bump mapping is that it perturbs only the surface normals without changing the underlying surface itself. Silhouettes and shadows therefore remain unaffected, which is especially noticeable for larger simulated displacements. This limitation can be overcome by techniques including displacement mapping where bumps are applied to the surface or using an isosurface.
### Methods
There are two primary methods to perform bump mapping. The first uses a height map for simulating the surface displacement yielding the modified normal. This is the method invented by Blinn and is usually what is referred to as bump mapping unless specified. The steps of this method are summarized as follows.
Before a lighting calculation is performed for each visible point (or pixel) on the object's surface:
1. Look up the height in the heightmap that corresponds to the position on the surface.
1. Calculate the surface normal of the heightmap, typically using the finite difference method.
1. Combine the surface normal from step two with the true ("geometric") surface normal so that the combined normal points in a new direction.
1. Calculate the interaction of the new "bumpy" surface with lights in the scene using, for example, the Phong reflection model.
The result is a surface that appears to have real depth. The algorithm also ensures that the surface appearance changes as lights in the scene are moved around.
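The four steps can be sketched for a height map over flat geometry, where the true geometric normal is (0, 0, 1) everywhere; the function names and the simple Lambertian diffuse term (standing in for the full Phong reflection model) are illustrative choices:

```python
import math

def bump_normals(height, scale=1.0):
    """Perturbed per-texel normals from a height map by central differences,
    assuming a flat geometric normal (0, 0, 1). Borders are clamped.
    A sketch of Blinn-style bump mapping, not a full renderer."""
    h, w = len(height), len(height[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # step 2: finite differences of the height field
            dhdx = 0.5 * (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)])
            dhdy = 0.5 * (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x])
            # step 3: combine with the geometric normal and renormalise
            nx, ny, nz = -scale * dhdx, -scale * dhdy, 1.0
            inv = 1.0 / math.sqrt(nx * nx + ny * ny + nz * nz)
            out[y][x] = (nx * inv, ny * inv, nz * inv)
    return out

def lambert(normal, light_dir):
    """Step 4, reduced to the diffuse term of a Phong-style model."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
```

Moving `light_dir` between frames changes the `lambert` term per texel, which is why the bumps appear to respond to the lights even though the geometry never moves.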
The other method is to specify a normal map which contains the modified normal for each point on the surface directly.
|
https://en.wikipedia.org/wiki/Bump_mapping
|
passage: ### Asymptotic efficiency
Let θ be an unknown random variable, and suppose that
$$
x_1,x_2,\ldots
$$
are iid samples with density
$$
f(x_i|\theta)
$$
. Let
$$
\delta_n = \delta_n(x_1,\ldots,x_n)
$$
be a sequence of Bayes estimators of θ based on an increasing number of measurements. We are interested in analyzing the asymptotic performance of this sequence of estimators, i.e., the performance of
$$
\delta_n
$$
for large n.
To this end, it is customary to regard θ as a deterministic parameter whose true value is
$$
\theta_0
$$
. Under specific conditions, for large samples (large values of n), the posterior density of θ is approximately normal. In other words, for large n, the effect of the prior probability on the posterior is negligible. Moreover, if δ is the Bayes estimator under MSE risk, then it is asymptotically unbiased and it converges in distribution to the normal distribution:
$$
\sqrt{n}(\delta_n - \theta_0) \to N\left(0 , \frac{1}{I(\theta_0)}\right),
$$
where I(θ0) is the Fisher information of θ0.
It follows that the Bayes estimator δn under MSE is asymptotically efficient.
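A quick Monte Carlo illustration with Bernoulli data and a uniform Beta(1, 1) prior, for which the posterior-mean (MSE-optimal) estimator has the closed form (s + 1)/(n + 2); the sample sizes and θ₀ = 0.3 are arbitrary choices:

```python
import math
import random

# Bernoulli(theta0) data, Beta(1, 1) prior: the Bayes estimator under MSE is
# the posterior mean delta_n = (successes + 1) / (n + 2).  Fisher information
# is I(theta) = 1 / (theta * (1 - theta)), so sqrt(n) * (delta_n - theta0)
# should be approximately N(0, theta0 * (1 - theta0)) for large n.
random.seed(0)
theta0, n, reps = 0.3, 1000, 300
scaled = []
for _ in range(reps):
    s = sum(random.random() < theta0 for _ in range(n))
    delta_n = (s + 1) / (n + 2)               # posterior mean
    scaled.append(math.sqrt(n) * (delta_n - theta0))
mean = sum(scaled) / reps
var = sum((z - mean) ** 2 for z in scaled) / reps
```

The empirical mean of the scaled errors should be near 0 (asymptotic unbiasedness) and their variance near θ₀(1 − θ₀) = 1/I(θ₀), matching the normal limit in the passage.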
|
https://en.wikipedia.org/wiki/Bayes_estimator
|
passage: $$
Therefore the total error is bounded by
$$
\text{error} = \int_a^b f(x)\,dx - \frac{b-a}{N} \left[ {f(a) + f(b) \over 2} + \sum_{k=1}^{N-1} f \left( a+k \frac{b-a}{N} \right) \right] = \frac{f''(\xi)h^3N}{12}=\frac{f''(\xi)(b-a)^3}{12N^2}.
$$
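The bound can be exercised numerically; here f = sin on [0, 1], where |f''| = |sin| ≤ 1, and the chosen subinterval counts are arbitrary:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n uniform subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

# integral of sin on [0, 1]; on this interval |f''| = |sin| <= 1
exact = 1.0 - math.cos(1.0)
errors = {n: abs(exact - trapezoid(math.sin, 0.0, 1.0, n)) for n in (4, 8, 16)}
bounds = {n: (1.0 - 0.0) ** 3 / (12 * n * n) for n in (4, 8, 16)}
```

Each error should sit below its bound, and doubling N should cut the error by roughly a factor of four, consistent with the 1/N² dependence.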
|
https://en.wikipedia.org/wiki/Trapezoidal_rule
|
passage: For the first time filters could be produced that had precisely controllable passbands and other parameters. These developments took place in the 1920s and filters produced to these designs were still in widespread use in the 1980s, only declining as the use of analogue telecommunications has declined. Their immediate application was the economically important development of frequency division multiplexing for use on intercity and international lines.
## Network synthesis filters
The mathematical bases of network synthesis were laid in the 1930s and 1940s. After World War II, network synthesis became the primary tool of filter design. Network synthesis put filter design on a firm mathematical foundation, freeing it from the mathematically sloppy techniques of image design and severing the connection with physical lines. The essence of network synthesis is that it produces a design that will (at least if implemented with ideal components) accurately reproduce the response originally specified in black box terms.
Throughout this article the letters R, L, and C are used with their usual meanings to represent resistance, inductance, and capacitance, respectively. In particular they are used in combinations, such as LC, to mean, for instance, a network consisting only of inductors and capacitors. Z is used for electrical impedance, any 2-terminal combination of RLC elements and in some sections D is used for the rarely seen quantity elastance, which is the inverse of capacitance.
## Resonance
Early filters utilised the phenomenon of resonance to filter signals.
|
https://en.wikipedia.org/wiki/Analogue_filter
|
passage: The study of absent-mindedness in everyday life provides ample documentation and categorization of such aspects of behavior. While human error is firmly entrenched in the classical approaches to accident investigation and risk assessment, it has no role in newer approaches such as resilience engineering.
## Categories
There are many ways to categorize human error (Wallace and Ross, 2006):
- exogenous versus endogenous error (i.e., originating outside versus inside the individual)
- situation assessment versus response planning and related distinctions in
- error in problem detection (also see signal detection theory)
- error in problem diagnosis (also see problem solving)
- error in action planning and execution (for example: slips or errors of execution versus mistakes or errors of intention)
- by level of analysis; for example, perceptual (e.g., optical illusions) versus cognitive versus communication versus organizational
- physical manipulation error
- 'slips' occurring when the physical action fails to achieve the immediate objective
- 'lapses' involve a failure of one's memory or recall
- active error - observable, physical action that changes equipment, system, or facility state, resulting in immediate undesired consequences
- latent human error resulting in hidden organization-related weaknesses or equipment flaws that lie dormant; such errors can go unnoticed at the time they occur, having no immediate apparent outcome
- equipment dependency error – lack of vigilance due to the assumption that hardware controls or physical safety devices will always work
- team error – lack of vigilance created by the social (interpersonal) interaction between two or more people working together
- personal dependencies error – unsafe attitudes and traps of human nature leading to complacency and overconfidence
|
https://en.wikipedia.org/wiki/Human_error
|
passage: Acrocentric
An acrocentric chromosome's centromere is situated so that one of the chromosome arms is much shorter than the other. The "acro-" in acrocentric refers to the Greek word for "peak." The human genome has six acrocentric chromosomes, including five autosomal chromosomes (13, 14, 15, 21, 22) and the Y chromosome.
Short acrocentric p-arms contain little genetic material and can be translocated without significant harm, as in a balanced Robertsonian translocation. In addition to some protein coding genes, human acrocentric p-arms also contain Nucleolus organizer regions (NORs), from which ribosomal RNA is transcribed. However, a proportion of acrocentric p-arms in cell lines and tissues from normal human donors do not contain detectable NORs. The domestic horse genome includes one metacentric chromosome that is homologous to two acrocentric chromosomes in the conspecific but undomesticated Przewalski's horse. This may reflect either fixation of a balanced Robertsonian translocation in domestic horses or, conversely, fixation of the fission of one metacentric chromosome into two acrocentric chromosomes in Przewalski's horses. A similar situation exists between the human and great ape genomes, with a reduction of two acrocentric chromosomes in the great apes to one metacentric chromosome in humans (see aneuploidy and the human chromosome 2).
|
https://en.wikipedia.org/wiki/Centromere
|
passage: To execute the algorithm effectively, Sidi's method calculates the interpolating polynomial
$$
p_{n,k} (x)
$$
in its Newton form.
## Convergence
Sidi showed that if the function
$$
f
$$
is (k + 1)-times continuously differentiable in an open interval
$$
I
$$
containing
$$
\alpha
$$
(that is,
$$
f \in C^{k+1} (I)
$$
),
$$
\alpha
$$
is a simple root of
$$
f
$$
(that is,
$$
f'(\alpha) \neq 0
$$
) and the initial approximations
$$
x_1 , \dots , x_{k+1}
$$
are chosen close enough to
$$
\alpha
$$
, then the sequence
$$
\{ x_i \}
$$
converges to
$$
\alpha
$$
, meaning that the following limit holds:
$$
\lim\limits_{n \to \infty} x_n = \alpha
$$
.
Sidi furthermore showed that
$$
\lim_{n\to\infty} \frac{x_{n +1}-\alpha}{\prod^k_{i=0}(x_{n-i}-\alpha)} = L = \frac{(-1)^{k+1}} {(k+1)!}\frac{f^{(k+1)}(\alpha)}{f'(\alpha)},
$$
and that the sequence converges to
$$
\alpha
$$
of order
$$
\psi_k
$$
|
https://en.wikipedia.org/wiki/Sidi%27s_generalized_secant_method
|
passage: There is a non-zero probability that the photon will tunnel across the gap rather than follow the refracted path.
However, it has been claimed that the Hartman effect cannot actually be used to violate relativity by transmitting signals faster than c, also because the tunnelling time "should not be linked to a velocity since evanescent waves do not propagate". The evanescent waves in the Hartman effect are due to virtual particles and a non-propagating static field, as mentioned in the sections above for gravity and electromagnetism.
#### Casimir effect
In physics, the Casimir–Polder force is a physical force exerted between separate objects due to resonance of vacuum energy in the intervening space between the objects. This is sometimes described in terms of virtual particles interacting with the objects, owing to the mathematical form of one possible way of calculating the strength of the effect. Because the strength of the force falls off rapidly with distance, it is only measurable when the distance between the objects is extremely small. Because the effect is due to virtual particles mediating a static field effect, it is subject to the comments about static fields discussed above.
#### EPR paradox
The EPR paradox refers to a famous thought experiment of Albert Einstein, Boris Podolsky and Nathan Rosen that was realized experimentally for the first time by Alain Aspect in 1981 and 1982 in the Aspect experiment. In this experiment, the two measurements of an entangled state are correlated even when the measurements are distant from the source and each other.
|
https://en.wikipedia.org/wiki/Faster-than-light
|
passage: If k = 0, stop. There is no match; the item is not in the array.
1. Compare the item against element in Fk−1.
1. If the item matches, stop.
1. If the item is less than entry Fk−1, discard the elements from positions Fk−1 + 1 to n. Set k = k − 1 and return to step 2.
1. If the item is greater than entry Fk−1, discard the elements from positions 1 to Fk−1. Renumber the remaining elements from 1 to Fk−2, set k = k − 2, and return to step 2.
Alternative implementation (from "Sorting and Searching" by Knuth):
Given a table of records R1, R2, ..., RN whose keys are in increasing order K1 < K2 < ... < KN, the algorithm searches for a given argument K. Assume N+1= Fk+1
Step 1. [Initialize] i ← Fk, p ← Fk−1, q ← Fk−2 (throughout the algorithm, p and q will be consecutive Fibonacci numbers)
Step 2. [Compare] If K < Ki, go to Step 3; if K > Ki go to Step 4; and if K = Ki, the algorithm terminates successfully.
Step 3. [Decrease i] If q=0, the algorithm terminates unsuccessfully. Otherwise set (i, p, q) ← (i − q, q, p − q) (which moves p and q one position back in the Fibonacci sequence); then return to Step 2
Step 4. [Increase i] If p=1, the algorithm terminates unsuccessfully.
|
https://en.wikipedia.org/wiki/Fibonacci_search_technique
|
passage: ## Algebraic properties
The following equalities mean: Either both sides are undefined, or both sides are defined and equal. This is true for any
$$
a, b, c \in \widehat{\mathbb{R}}.
$$
$$
\begin{align}
(a + b) + c & = a + (b + c) \\
a + b & = b + a \\
(a \cdot b) \cdot c & = a \cdot (b \cdot c) \\
a \cdot b & = b \cdot a \\
a \cdot \infty & = \frac{a}{0} \\
\end{align}
$$
The following is true whenever expressions involved are defined, for any
$$
a, b, c \in \widehat{\mathbb{R}}.
$$
$$
\begin{align}
a \cdot (b + c) & = a \cdot b + a \cdot c \\
a & = \left(\frac{a}{b}\right) \cdot b & = \,\,& \frac{(a \cdot b)}{b} \\
a & = (a + b) - b & = \,\,& (a - b) + b
\end{align}
$$
In general, all laws of arithmetic that are valid for
$$
\mathbb{R}
$$
are also valid for
$$
\widehat{\mathbb{R}}
$$
whenever all the occurring expressions are defined.
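These partial operations can be modelled directly, with undefined expressions raising an error instead of returning a value; the names below are invented for illustration:

```python
class _Infinity:
    """The single unsigned point at infinity of the projective real line."""
    def __repr__(self):
        return "inf"

INF = _Infinity()

def padd(a, b):
    """a + b on the projectively extended reals; inf + inf is undefined."""
    if a is INF and b is INF:
        raise ValueError("inf + inf is undefined")
    return INF if (a is INF or b is INF) else a + b

def pmul(a, b):
    """a * b; 0 * inf is undefined, while inf * inf = inf."""
    if a is INF or b is INF:
        other = b if a is INF else a
        if other is not INF and other == 0:
            raise ValueError("0 * inf is undefined")
        return INF
    return a * b

def pdiv(a, b):
    """a / b with a/0 = inf (a != 0) and a/inf = 0; 0/0, inf/inf undefined."""
    if a is INF and b is INF:
        raise ValueError("inf / inf is undefined")
    if b is INF:
        return 0.0                     # finite / inf = 0
    if b == 0:
        if a is not INF and a == 0:
            raise ValueError("0 / 0 is undefined")
        return INF                     # a / 0 = inf for a != 0 (and inf / 0)
    return INF if a is INF else a / b
```

In particular the identity a · ∞ = a/0 from the table holds whenever either side is defined.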
## Intervals and topology
The concept of an interval can be extended to
$$
\widehat{\mathbb{R}}
$$
.
|
https://en.wikipedia.org/wiki/Projectively_extended_real_line
|
passage: (July 2010). "Binomial averages when the mean is an integer", The Mathematical Gazette 94, 331-332.
- Any median must lie within the interval
$$
\lfloor np \rfloor\leq m \leq \lceil np \rceil
$$
.
- A median cannot lie too far away from the mean:
$$
|m-np|\leq \min\{{\ln2}, \max\{p,1-p\}\}
$$
.
- The median is unique and equal to m = round(np) when |m − np| ≤ min{p, 1 − p} (except for the case when p = 1/2 and n is odd).
- When p is a rational number (with the exception of p = 1/2 and n odd) the median is unique.
- When
$$
p= \frac{1}{2}
$$
and n is odd, any number in the interval
$$
\frac{1}{2} \bigl(n-1\bigr)\leq m \leq \frac{1}{2} \bigl(n+1\bigr)
$$
is a median of the binomial distribution. If
$$
p= \frac{1}{2}
$$
and n is even, then
$$
m= \frac{n}{2}
$$
is the unique median.
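A median can be computed directly from the cumulative distribution function and checked against the bound above. The sketch below is my own illustration (names mine); it finds the smallest m with P(X ≤ m) ≥ 1/2, which is always a median.

```python
from math import comb, floor, ceil

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def binom_median(n, p):
    """Smallest m with P(X <= m) >= 1/2, which is a median of Binomial(n, p)."""
    for m in range(n + 1):
        if binom_cdf(m, n, p) >= 0.5:
            return m
    return n
```

For n = 10, p = 0.25 this returns 2, which indeed lies in the interval [⌊np⌋, ⌈np⌉] = [2, 3]; for n = 5, p = 1/2 it returns 2, one endpoint of the interval [2, 3] of medians described above.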
### Tail bounds
For k ≤ np, upper bounds can be derived for the lower tail of the cumulative distribution function
$$
F(k;n,p) = \Pr(X \le k)
$$
, the probability that there are at most k successes.
|
https://en.wikipedia.org/wiki/Binomial_distribution
|
passage: There are also some ports of Qt that may be available, but are not supported anymore. These platforms are listed in List of platforms supported by Qt. See also there for current community support for other lesser known platforms, such as SailfishOS.
### Licensing
Qt is available under the following free software licenses: GPL 2.0, GPL 3.0, LGPL 3.0 and LGPL 2.1 (with Qt special exception). Note that some modules are available only under a GPL license, which means that applications which link to these modules need to comply with that license.
In addition, Qt has always been available under a commercial license, like the Qt Commercial License, that allows developing proprietary applications with no restrictions on licensing.
### Qt tools
Qt comes with its own set of tools to ease cross-platform development, which can otherwise be cumbersome due to the different sets of development tools on each platform.
Qt Creator is a cross-platform IDE for C++ and QML. Qt Designer's GUI layout/design functionality is integrated into the IDE, although Qt Designer can still be started as a standalone tool.
In addition to Qt Creator, Qt provides qmake, a cross-platform build script generation tool that automates the generation of Makefiles for development projects across different platforms.
There are other tools available in Qt, including the Qt Designer interface builder and the Qt Assistant help browser (which are both embedded in Qt Creator), the Qt Linguist translation tool, uic (user interface compiler), and moc (Meta-Object Compiler).
## History of Qt
|
https://en.wikipedia.org/wiki/Qt_%28software%29
|
passage: Carl von Staudt was unsatisfied with past definitions of the cross-ratio relying on algebraic manipulation of Euclidean distances rather than being based purely on synthetic projective geometry concepts. In 1847, von Staudt demonstrated that the algebraic structure is implicit in projective geometry, by creating an algebra based on construction of the projective harmonic conjugate, which he called a throw (German: Wurf): given three points on a line, the harmonic conjugate is a fourth point that makes the cross ratio equal to −1. His algebra of throws provides an approach to numerical propositions, usually taken as axioms, but proven in projective geometry.
The English term "cross-ratio" was introduced in 1878 by William Kingdon Clifford.
## Definition
If A, B, C, and D are four points on an oriented affine line, their cross ratio is:
$$
(A,B; C,D) = \frac{AC : BC}{AD : BD},
$$
with the notation
$$
WX : YZ
$$
defined to mean the signed ratio of the displacement from W to X to the displacement from Y to Z. For collinear displacements this is a dimensionless quantity.
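The definition translates directly into code. The sketch below is my own (names mine); with exact rationals it also illustrates that the cross ratio is invariant under affine changes of coordinate on the line.

```python
from fractions import Fraction

def cross_ratio(a, b, c, d):
    """(A,B; C,D) = (AC : BC) / (AD : BD) for four points on an affine line.

    Points are given by coordinates; the displacement from A to C is c - a.
    """
    return ((c - a) / (c - b)) / ((d - a) / (d - b))
```

For example, on the line A = 0, B = 2, C = 3, the harmonic conjugate of C is D = 3/2, since (A,B; C,D) = −1 there; applying the affine map x ↦ 2x + 5 to all four points leaves the value unchanged.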
|
https://en.wikipedia.org/wiki/Cross-ratio
|
passage: This
$$
\equiv O
$$
notation was utilized in definitions; for example, Cantor defined two sets as being disjoint if their intersection has an absence of points; however, it is debatable whether Cantor viewed
$$
O
$$
as an existent set on its own, or if Cantor merely used
$$
\equiv O
$$
as an emptiness predicate. Zermelo accepted
$$
O
$$
itself as a set, but considered it an "improper set".
### Axiomatic set theory
In Zermelo set theory, the existence of the empty set is assured by the axiom of empty set, and its uniqueness follows from the axiom of extensionality. However, the axiom of empty set can be shown redundant in at least two ways:
- Standard first-order logic implies, merely from the logical axioms, that something exists, and in the language of set theory, that thing must be a set. Now the existence of the empty set follows easily from the axiom of separation.
- Even using free logic (which does not logically imply that something exists), there is already an axiom implying the existence of at least one set, namely the axiom of infinity.
### Philosophical issues
While the empty set is a standard and widely accepted mathematical concept, it remains an ontological curiosity, whose meaning and usefulness are debated by philosophers and logicians.
The empty set is not the same thing as nothing; rather, it is a set with nothing inside it, and a set is always something.
|
https://en.wikipedia.org/wiki/Empty_set
|
passage: This is because
$$
h_K = 1
$$
if and only if
$$
\mathcal{O}_K
$$
is a UFD.
#### Exact sequence for ideal class groups
There is an exact sequence
$$
0 \to \mathcal{O}_K^* \to K^* \to \mathcal{I}_K \to \mathcal{C}_K \to 0
$$
associated to every number field.
### Structure theorem for fractional ideals
One of the important structure theorems for fractional ideals of a number field states that every fractional ideal
$$
I
$$
decomposes uniquely up to ordering as
$$
I = (\mathfrak{p}_1\ldots\mathfrak{p}_n)(\mathfrak{q}_1\ldots\mathfrak{q}_m)^{-1}
$$
for prime ideals
$$
\mathfrak{p}_i,\mathfrak{q}_j \in \text{Spec}(\mathcal{O}_K)
$$
.
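In the simplest case K = ℚ, where the ring of integers is ℤ, every fractional ideal is (q) for a positive rational q, and the structure theorem reduces to the unique factorization of q into prime powers with integer exponents (positive exponents giving the 𝔭ᵢ, negative ones the 𝔮ⱼ). The sketch below is my own illustration of that special case.

```python
from fractions import Fraction

def prime_factor_exponents(q):
    """Factor a positive rational q into {prime: integer exponent}.

    Toy illustration of the structure theorem for K = Q: trial division of the
    numerator (sign +1) and denominator (sign -1) of q.
    """
    q = Fraction(q)
    assert q > 0
    exps = {}
    for num, sign in ((q.numerator, 1), (q.denominator, -1)):
        p = 2
        while p * p <= num:
            while num % p == 0:
                exps[p] = exps.get(p, 0) + sign
                num //= p
            p += 1
        if num > 1:
            exps[num] = exps.get(num, 0) + sign
    return {p: e for p, e in exps.items() if e != 0}
```

For instance 12/35 decomposes as 2² · 3 · 5⁻¹ · 7⁻¹, the analogue of I = (𝔭₁𝔭₂)(𝔮₁𝔮₂)⁻¹.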
|
https://en.wikipedia.org/wiki/Fractional_ideal
|
passage: This formal language is the basis for proof systems, which allow a conclusion to be derived from premises if, and only if, it is a logical consequence of them. This section will show how this works by formalizing a simple example argument. The formal language for a propositional calculus will be fully specified in a later section, and an overview of proof systems will be given there as well.
### Propositional variables
Since propositional logic is not concerned with the structure of propositions beyond the point where they cannot be decomposed any more by logical connectives, it is typically studied by replacing such atomic (indivisible) statements with letters of the alphabet, which are interpreted as variables representing statements (propositional variables). With propositional variables, the example argument would then be symbolized as follows:
Premise 1:
$$
P \to Q
$$
Premise 2:
$$
P
$$
Conclusion:
$$
Q
$$
When P is interpreted as "it's raining" and Q as "it's cloudy" these symbolic expressions correspond exactly with the original expression in natural language. Not only that, but they will also correspond with any other inference with the same logical form.
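The claim that the form is what matters can be checked mechanically by enumerating truth assignments. The sketch below is my own (names mine): an argument is valid iff no assignment makes all premises true and the conclusion false.

```python
from itertools import product

def implies(a, b):
    """Material conditional: P -> Q is false only when P is true and Q is false."""
    return (not a) or b

def entails(premises, conclusion, n_vars):
    """Semantic entailment by brute force over all 2**n_vars truth assignments."""
    for values in product([False, True], repeat=n_vars):
        if all(prem(*values) for prem in premises) and not conclusion(*values):
            return False
    return True

# Modus ponens: from P -> Q and P, infer Q.
modus_ponens_valid = entails(
    premises=[lambda p, q: implies(p, q), lambda p, q: p],
    conclusion=lambda p, q: q,
    n_vars=2,
)
# Affirming the consequent (from P -> Q and Q, infer P) is not valid.
affirming_consequent_valid = entails(
    premises=[lambda p, q: implies(p, q), lambda p, q: q],
    conclusion=lambda p, q: p,
    n_vars=2,
)
```

Any interpretation of P and Q ("it's raining", "it's cloudy", or anything else) inherits the validity of the form.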
When a formal system is used to represent formal logic, only statement letters (usually capital roman letters such as
$$
P
$$
,
$$
Q
$$
and
$$
R
$$
) are represented directly. The natural language propositions that arise when they're interpreted are outside the scope of the system, and the relation between the formal system and its interpretation is likewise outside the formal system itself.
### Gentzen notation
|
https://en.wikipedia.org/wiki/Propositional_calculus
|
passage: The upper and lower integrals are in turn the infimum and supremum, respectively, of upper and lower (Darboux) sums which over- and underestimate, respectively, the "area under the curve." In particular, for a given partition of the interval of integration, the upper and lower sums add together the areas of rectangular slices whose heights are the supremum and infimum, respectively, of f in each subinterval of the partition. These ideas are made precise below:
Darboux sums
A partition of an interval
$$
[a,b]
$$
is a finite sequence of values
$$
x_{i}
$$
such that
$$
a = x_0 < x_1 < \cdots < x_n = b.
$$
Each interval
$$
[x_{i-1},x_i]
$$
is called a subinterval of the partition. Let
$$
f:[a,b]\to\R
$$
be a bounded function, and let
$$
P = (x_0, \ldots, x_n)
$$
be a partition of
$$
[a,b]
$$
.
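The lower and upper Darboux sums for a given partition can be computed directly. The sketch below is my own (names mine); it approximates the infimum and supremum on each subinterval by dense sampling, which is exact for monotone functions such as x ↦ x² where the extrema sit at the endpoints.

```python
def darboux_sums(f, partition, samples=1000):
    """Lower and upper Darboux sums of f over a partition [x_0, ..., x_n].

    sup/inf on each subinterval are approximated by sampling; exact when f
    attains its extrema at sample points (e.g. monotone f).
    """
    lower = upper = 0.0
    for a, b in zip(partition, partition[1:]):
        ys = [f(a + (b - a) * k / samples) for k in range(samples + 1)]
        lower += min(ys) * (b - a)
        upper += max(ys) * (b - a)
    return lower, upper
```

For f(x) = x² on [0, 1] with the uniform partition into 4 subintervals, the lower sum is 14/64 = 0.21875 and the upper sum is 30/64 = 0.46875, bracketing the integral 1/3; refining the partition squeezes the two toward each other.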
|
https://en.wikipedia.org/wiki/Darboux_integral
|
passage: This study also integrated genetic interactions and protein structures and mapped 458 interactions within 227 protein complexes.
## Normal microbiota
E. coli belongs to a group of bacteria informally known as coliforms that are found in the gastrointestinal tract of warm-blooded animals. E. coli normally colonizes an infant's gastrointestinal tract within 40 hours of birth, arriving with food or water or from the individuals handling the child. In the bowel, E. coli adheres to the mucus of the large intestine. It is the primary facultative anaerobe of the human gastrointestinal tract. (Facultative anaerobes are organisms that can grow in either the presence or absence of oxygen.) As long as these bacteria do not acquire genetic elements encoding for virulence factors, they remain benign commensals.
### Therapeutic use
Due to the low cost and speed with which it can be grown and modified in laboratory settings, E. coli is a popular expression platform for the production of recombinant proteins used in therapeutics. One advantage to using E. coli over another expression platform is that E. coli naturally does not export many proteins into the periplasm, making it easier to recover a protein of interest without cross-contamination. The E. coli K-12 strains and their derivatives (DH1, DH5α, MG1655, RV308 and W3110) are the strains most widely used by the biotechnology industry.
|
https://en.wikipedia.org/wiki/Escherichia_coli
|
passage: It also requires unique instrumentation such as the speculum. The speculum consists of two hinged blades of concave metal or plastic which are used to retract the tissues of the vagina and permit examination of the cervix, the lower part of the uterus located within the upper portion of the vagina. Gynaecologists typically do a bimanual examination (one hand on the abdomen and one or two fingers in the vagina) to palpate the cervix, uterus, ovaries and bony pelvis. It is not uncommon to do a rectovaginal examination for a complete evaluation of the pelvis, particularly if any suspicious masses are appreciated. Male gynaecologists may have a female chaperone for their examination. An abdominal or vaginal ultrasound can be used to confirm any abnormalities appreciated with the bimanual examination or when indicated by the patient's history.
|
https://en.wikipedia.org/wiki/Gynaecology
|
passage: While somewhat uncommon, entire asynchronous CPUs have been built without using a global clock signal. Two notable examples of this are the ARM compliant AMULET and the MIPS R3000 compatible MiniMIPS.
Rather than totally removing the clock signal, some CPU designs allow certain portions of the device to be asynchronous, such as using asynchronous ALUs in conjunction with superscalar pipelining to achieve some arithmetic performance gains. While it is not altogether clear whether totally asynchronous designs can perform at a comparable or better level than their synchronous counterparts, it is evident that they do at least excel in simpler math operations. This, combined with their excellent power consumption and heat dissipation properties, makes them very suitable for embedded computers.
### Voltage regulator module
Many modern CPUs have a die-integrated power managing module which regulates on-demand voltage supply to the CPU circuitry allowing it to keep balance between performance and power consumption.
### Integer range
Every CPU represents numerical values in a specific way. For example, some early digital computers represented numbers as familiar decimal (base 10) numeral system values, and others have employed more unusual representations such as bi-quinary coded decimal (base 2–5) or ternary (base 3). Nearly all modern CPUs represent numbers in binary form, with each digit being represented by some two-valued physical quantity such as a "high" or "low" voltage.
Related to numeric representation is the size and precision of integer numbers that a CPU can represent.
|
https://en.wikipedia.org/wiki/Central_processing_unit
|
passage: Shear lag theory uses the shear lag model to predict properties such as the Young's modulus for short fiber composites. The model assumes that load is transferred from the matrix to the fibers solely through the interfacial shear stresses
$$
\tau_i
$$
acting on the cylindrical interface. Shear lag theory says then that the rate of change of the axial stress in the fiber as you move along the fiber is proportional to the ratio of the interfacial shear stresses over the radius of the fibre
$$
r_0
$$
:
$$
\frac{d\sigma_f}{dx} = -\frac{2\tau_i}{r_0}
$$
This leads to the average fiber stress over the full length of the fibre being given by:
$$
\sigma_f = E_f\varepsilon_1\left(1-\frac{\tanh(ns)}{ns}\right)
$$
where
-
$$
\varepsilon_1
$$
is the macroscopic strain in the composite
-
$$
s
$$
is the fiber aspect ratio (length over diameter)
-
$$
n = \left( \frac{2E_m}{E_f(1+\nu_m)\ln(1/f)} \right)^{1/2}
$$
is a dimensionless constant
-
$$
\nu_m
$$
is the Poisson's ratio of the matrix
By assuming a uniform tensile strain, this results in:
$$
E_1 = \frac{\sigma_1}{\varepsilon_1} = fE_f \left( 1 - \frac{\tanh(ns)}{ns}\right) + (1-f) E_m
$$
As s becomes larger, this tends towards the rule of mixtures, which represents the Young's modulus parallel to continuous fibers.
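The shear lag estimate for E₁ is a direct transcription of the formulas above. The sketch below is mine (names and the sample material values are illustrative, not from the source).

```python
from math import tanh, log, sqrt

def shear_lag_modulus(E_f, E_m, nu_m, f, s):
    """Axial Young's modulus E_1 of an aligned short-fibre composite (shear lag).

    E_f, E_m: fibre and matrix moduli; nu_m: matrix Poisson ratio;
    f: fibre volume fraction; s: fibre aspect ratio.
    """
    n = sqrt(2 * E_m / (E_f * (1 + nu_m) * log(1 / f)))
    return f * E_f * (1 - tanh(n * s) / (n * s)) + (1 - f) * E_m
```

With illustrative glass/epoxy-like numbers (E_f = 70 GPa, E_m = 3 GPa, ν_m = 0.35, f = 0.3), the prediction increases with aspect ratio s and approaches the rule-of-mixtures value f·E_f + (1 − f)·E_m from below, as stated in the text.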
|
https://en.wikipedia.org/wiki/Composite_material
|
passage: In a similar way one can get a parameterization of
$$
C_{g_2,g_3}^\mathbb{C}
$$
by means of the doubly periodic
$$
\wp
$$
-function (see the section "Relation to elliptic curves"). This parameterization has the domain
$$
\mathbb{C}/\Lambda
$$
, which is topologically equivalent to a torus.
There is another analogy to the trigonometric functions. Consider the integral function
$$
a(x)=\int_0^x\frac{dy}{\sqrt{1-y^2}} .
$$
It can be simplified by substituting
$$
y=\sin t
$$
and
$$
s=\arcsin x
$$
:
$$
a(x)=\int_0^s dt = s = \arcsin x .
$$
That means
$$
a^{-1}(x) = \sin x
$$
. So the sine function is an inverse function of an integral function.
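The identity a(x) = arcsin x can be checked numerically. The sketch below is my own: a midpoint-rule quadrature of the integrand on [0, x], compared against the library arcsine.

```python
from math import asin, sqrt

def a(x, n=200_000):
    """Midpoint-rule approximation of the integral of dy / sqrt(1 - y^2) on [0, x]."""
    h = x / n
    return sum(h / sqrt(1 - ((k + 0.5) * h) ** 2) for k in range(n))
```

For x = 1/2 the integral agrees with arcsin(1/2) = π/6 to many digits, illustrating that the sine function inverts this integral.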
Elliptic functions are the inverse functions of elliptic integrals. In particular, let:
$$
u(z)=\int_z^\infin\frac{ds}{\sqrt{4s^3-g_2s-g_3}} .
$$
Then the extension of
$$
u^{-1}
$$
to the complex plane equals the
$$
\wp
$$
-function.
|
https://en.wikipedia.org/wiki/Weierstrass_elliptic_function
|
passage: The most notable method is known as the "station" method. When paying in stations, the dealer counts the number of ways or stations that the winning number hits the complete bet. In the example above, 26 hits 4 stations - 2 different corners, 1 split and 1 six-line. The dealer takes the number 4, multiplies it by 36, making 144 with the players bet down.
In some casinos, a player may bet full complete for less than the table straight-up maximum, for example, "number 17 full complete by $25" would cost $1000, that is 40 chips each at $25 value.
## Betting strategies and tactics
Over the years, many people have tried to beat the casino, and turn roulette—a game designed to turn a profit for the house—into one on which the player expects to win. Most of the time this comes down to the use of betting systems, strategies which say that the house edge can be beaten by simply employing a special pattern of bets, often relying on the "gambler's fallacy", the idea that past results are a guide to the future (for example, the belief that if a roulette wheel has come up red 10 times in a row, red is somehow more or less likely to come up on the next spin than if the last spin had been black).
All betting systems that rely on patterns, when employed on casino edge games will result, on average, in the player losing money. In practice, players employing betting systems may win, and may indeed win very large sums of money, but the losses (which, depending on the design of the betting system, may occur quite rarely) will outweigh the wins.
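The claim can be made exact for the classic martingale (double after every loss). The sketch below is my own worked example with exact rational arithmetic, for even-money bets on a single-zero wheel where a bet wins with probability 18/37.

```python
from fractions import Fraction

def martingale_ev(max_doublings):
    """Exact expected profit of a martingale on even-money European roulette bets.

    Bet 1 unit; after each loss double the stake, giving up after
    max_doublings consecutive losses. Any win in the run nets exactly +1 unit.
    """
    q = Fraction(19, 37)                  # probability a single even-money bet loses
    p_ruin = q ** max_doublings           # probability every spin in the run loses
    total_loss = 2 ** max_doublings - 1   # 1 + 2 + 4 + ... = all stakes lost
    return (1 - p_ruin) * 1 - p_ruin * total_loss
```

The expectation simplifies to 1 − (2q)^k with 2q = 38/37 > 1, so it is negative for every k and grows more negative as the player is willing to double longer: frequent small wins, rare catastrophic losses, negative expectation overall.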
|
https://en.wikipedia.org/wiki/Roulette
|
passage: # x^\textsf{T} y = \sum_{i=1}^n x_i y_i
x_1 y_1 + \cdots + x_n y_n,
$$
where
$$
x^{\operatorname{T}}
$$
is the transpose of
$$
x.
$$
A function
$$
\langle \,\cdot, \cdot\, \rangle : \R^n \times \R^n \to \R
$$
is an inner product on
$$
\R^n
$$
if and only if there exists a symmetric positive-definite matrix
$$
\mathbf{M}
$$
such that
$$
\langle x, y \rangle = x^{\operatorname{T}} \mathbf{M} y
$$
for all
$$
x, y \in \R^n.
$$
If
$$
\mathbf{M}
$$
is the identity matrix then
$$
\langle x, y \rangle = x^{\operatorname{T}} \mathbf{M} y
$$
is the dot product. For another example, if
$$
n = 2
$$
and
$$
\mathbf{M} = \begin{bmatrix} a & b \\ b & d \end{bmatrix}
$$
is positive-definite (which happens if and only if
$$
\det \mathbf{M} = a d - b^2 > 0
$$
and one/both diagonal elements are positive) then for any
$$
x := \left[x_1, x_2\right]^{\operatorname{T}}, y := \left[y_1, y_2\right]^{\operatorname{T}} \in \R^2,
$$
$$
\langle x, y \rangle
= x^{\operatorname{T}} \mathbf{M} y
= \left[x_1, x_2\right]
|
https://en.wikipedia.org/wiki/Inner_product_space
|
passage: More specifically, a tsunami can be generated when thrust faults associated with convergent or destructive plate boundaries move abruptly, resulting in water displacement, owing to the vertical component of movement involved. Movement on normal (extensional) faults can also cause displacement of the seabed, but only the largest of such events (typically related to flexure in the outer trench swell) cause enough displacement to give rise to a significant tsunami, such as the 1977 Sumba and 1933 Sanriku events.
Tsunamis have a small wave height offshore, and a very long wavelength (often hundreds of kilometres long, whereas normal ocean waves have a wavelength of only 30 or 40 metres), which is why they generally pass unnoticed at sea, forming only a slight swell usually about above the normal sea surface. They grow in height when they reach shallower water, in a wave shoaling process described below. A tsunami can occur in any tidal state and even at low tide can still inundate coastal areas.
On April 1, 1946, the 8.6 Aleutian Islands earthquake occurred with a maximum Mercalli intensity of VI (Strong). It generated a tsunami which inundated Hilo on the island of Hawaii with a surge. Between 165 and 173 were killed. The area where the earthquake occurred is where the Pacific Ocean floor is subducting (or being pushed downwards) under Alaska.
|
https://en.wikipedia.org/wiki/Tsunami
|
passage: In optimization problems, the assumption of independent and identical distribution simplifies the calculation of the likelihood function.
Due to this assumption, the likelihood function can be expressed as:
$$
l(\theta) = P(x_1, x_2, x_3,...,x_n|\theta) = P(x_1|\theta) P(x_2|\theta) P(x_3|\theta) ... P(x_n|\theta)
$$
To maximize the probability of the observed event, the log function is applied to maximize the parameter
$$
\theta
$$
. Specifically, it computes:
$$
\mathop{\rm argmax}\limits_\theta \log(l(\theta))
$$
where
$$
\log(l(\theta)) = \log(P(x_1|\theta)) + \log(P(x_2|\theta)) + \log(P(x_3|\theta)) + ... + \log(P(x_n|\theta))
$$
Computers are very efficient at performing multiple additions, but not as efficient at performing multiplications. This simplification enhances computational efficiency. The log transformation, in the process of maximizing, converts many exponential functions into linear functions.
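The log-likelihood sum can be maximized numerically. The sketch below is my own toy example (names mine): i.i.d. Bernoulli(θ) observations, with the argmax taken over a grid of candidate θ values; for Bernoulli data the maximizer is the sample mean.

```python
from math import log

def log_likelihood(theta, xs):
    """Sum of log P(x_i | theta) for i.i.d. Bernoulli(theta) observations (0/1)."""
    return sum(x * log(theta) + (1 - x) * log(1 - theta) for x in xs)

def mle_by_grid(xs, grid=999):
    """argmax of the log-likelihood over theta in {1/1000, ..., 999/1000}."""
    candidates = [(k + 1) / (grid + 1) for k in range(grid)]
    return max(candidates, key=lambda t: log_likelihood(t, xs))
```

Note the product of probabilities for even a few hundred observations underflows double precision, while the sum of logs does not — the computational reason given above for working with log(l(θ)).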
There are two main reasons why this hypothesis is practically useful with the central limit theorem (CLT):
1.
|
https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables
|
passage: While hydrophobic substances are usually lipophilic, there are exceptions, such as the silicones and fluorocarbons.
## Chemical background
The hydrophobic interaction is mostly an entropic effect originating from the disruption of the highly dynamic hydrogen bonds between molecules of liquid water by the nonpolar solute, causing the water to compensate by forming a clathrate-like cage structure around the non-polar molecules. This structure is more highly ordered than free water molecules due to the water molecules arranging themselves to interact as much as possible with themselves, and thus results in a lower entropic state at the interface. This causes non-polar molecules to clump together to reduce the surface area exposed to water and thereby increase the entropy of the system. Thus, the two immiscible phases (hydrophilic vs. hydrophobic) will change so that their corresponding interfacial area will be minimal. This effect can be visualized in the phenomenon called phase separation.
## Superhydrophobicity
Superhydrophobic surfaces, such as the leaves of the lotus plant, are those that are extremely difficult to wet. The contact angles of a water droplet exceeds 150°. This is referred to as the lotus effect, and is primarily a physical property related to interfacial tension, rather than a chemical property.
### Theory
In 1805, Thomas Young defined the contact angle θ by analyzing the forces acting on a fluid droplet resting on a solid surface surrounded by a gas.
|
https://en.wikipedia.org/wiki/Hydrophobe
|
passage: For a more sophisticated example:
$$
\begin{align}
& \phi : \R^4 \to \{ 0 \} \\
& \phi(t,x,y,z) = C tz e^{tx-yz} + A \sin(3\omega t) \left(x^2z - B y^6\right) = 0
\end{align}
$$
for non-zero real constants A, B, C, ω, this function is well-defined for all (t, x, y, z), but it cannot be solved explicitly for these variables and written in a closed form such as "z = f(t, x, y)".
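Even without a closed form, the implicitly defined value can be computed numerically. The sketch below is my own: it takes the function above with the illustrative choice A = B = C = ω = 1 and solves φ = 0 for z by bisection, assuming a sign change on the bracket.

```python
from math import exp, sin

def phi(t, x, y, z, A=1.0, B=1.0, C=1.0, omega=1.0):
    """The implicit equation from the text, with illustrative constants."""
    return C * t * z * exp(t * x - y * z) + A * sin(3 * omega * t) * (x**2 * z - B * y**6)

def solve_for_z(t, x, y, lo, hi, tol=1e-12):
    """Bisection: find z in [lo, hi] with phi(t, x, y, z) = 0 (needs a sign change)."""
    f_lo = phi(t, x, y, lo)
    assert f_lo * phi(t, x, y, hi) < 0, "need a sign change on the bracket"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f_lo * phi(t, x, y, mid) <= 0:
            hi = mid
        else:
            lo, f_lo = mid, phi(t, x, y, mid)
    return (lo + hi) / 2
```

At (t, x, y) = (1, 1, 1), φ changes sign between z = 0 and z = 1, so a root exists there even though it has no explicit formula.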
The implicit function theorem of more than two real variables deals with the continuity and differentiability of the function, as follows. Let φ be a continuous function with continuous first order partial derivatives, and let φ evaluated at a point (a, b) be zero:
$$
\phi(\boldsymbol{a}, b) = 0;
$$
and let the first partial derivative of φ with respect to y evaluated at (a, b) be non-zero:
$$
\left.\frac{\partial \phi(\boldsymbol{x},y)}{\partial y}\right|_{(\boldsymbol{x},y) = (\boldsymbol{a},b)} \neq 0 .
$$
Then, there is an interval containing b, and a region containing a, such that for every x in that region there is exactly one value of y in the interval satisfying φ(x, y) = 0, and y is a continuous function of x so that φ(x, y(x)) = 0. The total differentials of the functions are:
|
https://en.wikipedia.org/wiki/Function_of_several_real_variables
|
passage: As a real homography, points are described with projective coordinates, and the mapping is
$$
[y,\ 1] = \left[\frac {x - 1}{x +1},\ 1\right] \thicksim [x - 1, \ x + 1] = [x,\ 1]\begin{pmatrix}1 & 1 \\ -1 & 1 \end{pmatrix} .
$$
## Complex homography
On the upper half of the complex plane, the Cayley transform is:Erwin Kreyszig (1983) Advanced Engineering Mathematics, 5th edition, page 611, Wiley
$$
f(z) = \frac {z - i}{z + i} .
$$
Since
$$
\{\infty, 1, -1\}
$$
is mapped to
$$
\{1, -i, i\}
$$
, and Möbius transformations permute the generalised circles in the complex plane,
$$
f
$$
maps the real line to the unit circle. Furthermore, since
$$
f
$$
is a homeomorphism and
$$
i
$$
is taken to 0 by
$$
f
$$
, the upper half-plane is mapped to the unit disk.
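These mapping properties are easy to verify numerically. The sketch below is my own check of the transform f(z) = (z − i)/(z + i).

```python
def cayley(z):
    """The Cayley transform f(z) = (z - i) / (z + i)."""
    return (z - 1j) / (z + 1j)
```

Real inputs land on the unit circle (|x − i| = |x + i| for real x), the special values 1 ↦ −i and −1 ↦ i match the triple {∞, 1, −1} ↦ {1, −i, i} above, i goes to 0, and an upper half-plane point such as 2 + 3i lands strictly inside the unit disk.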
In terms of the models of hyperbolic geometry, this Cayley transform relates the Poincaré half-plane model to the Poincaré disk model.
In electrical engineering the Cayley transform has been used to map a reactance half-plane to the Smith chart used for impedance matching of transmission lines.
|
https://en.wikipedia.org/wiki/Cayley_transform
|
passage: Consider partitioning the probability mass function of the joint Poisson distribution for the sample into two parts: one that depends solely on the sample
$$
\mathbf{x}
$$
, called
$$
h(\mathbf{x})
$$
, and one that depends on the parameter
$$
\lambda
$$
and the sample
$$
\mathbf{x}
$$
only through the function
$$
T(\mathbf{x}).
$$
Then
$$
T(\mathbf{x})
$$
is a sufficient statistic for
$$
\lambda.
$$
$$
P(\mathbf{x})=\prod_{i=1}^n\frac{\lambda^{x_i} e^{-\lambda}}{x_i!}=\frac{1}{\prod_{i=1}^n x_i!} \times \lambda^{\sum_{i=1}^n x_i}e^{-n\lambda}
$$
The first term
$$
h(\mathbf{x})
$$
depends only on
$$
\mathbf{x}
$$
. The second term
$$
g(T(\mathbf{x})|\lambda)
$$
depends on the sample only through
$$
T(\mathbf{x})=\sum_{i=1}^n x_i.
$$
Thus,
$$
T(\mathbf{x})
$$
is sufficient.
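The factorization can be verified numerically. The sketch below is my own transcription of the two factors (names mine): the joint pmf computed term by term equals h(x) · g(T(x) | λ), and two samples with the same sum T(x) have a likelihood ratio that does not depend on λ — the operational meaning of sufficiency.

```python
from math import exp, factorial
from functools import reduce

def joint_pmf(xs, lam):
    """Product of Poisson(lam) pmfs over the sample xs."""
    p = 1.0
    for x in xs:
        p *= lam**x * exp(-lam) / factorial(x)
    return p

def h(xs):
    """Factor depending on the sample only: 1 / (x_1! ... x_n!)."""
    return 1.0 / reduce(lambda acc, x: acc * factorial(x), xs, 1)

def g(t, n, lam):
    """Factor depending on lambda and the sample only through T(x) = sum(x)."""
    return lam**t * exp(-n * lam)
```

Here [2, 0, 3, 1] and [1, 1, 1, 3] share T(x) = 6, so their likelihood ratio is h([2,0,3,1]) / h([1,1,1,3]) = (1/12)/(1/6) = 1/2 at every λ.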
|
https://en.wikipedia.org/wiki/Poisson_distribution
|
passage: In the field of representation theory in mathematics, a projective representation of a group G on a vector space V over a field F is a group homomorphism from G to the projective linear group
$$
\mathrm{PGL}(V) = \mathrm{GL}(V) / F^*,
$$
where GL(V) is the general linear group of invertible linear transformations of V over F, and F∗ is the normal subgroup consisting of nonzero scalar multiples of the identity transformation (see Scalar transformation).
In more concrete terms, a projective representation of
$$
G
$$
is a collection of operators
$$
\rho(g)\in\mathrm{GL}(V),\, g\in G
$$
satisfying the homomorphism property up to a constant:
$$
\rho(g)\rho(h) = c(g, h)\rho(gh),
$$
for some constant
$$
c(g, h)\in F
$$
. Equivalently, a projective representation of
$$
G
$$
is a collection of operators
$$
\tilde\rho(g)\subset\mathrm{GL}(V), g\in G
$$
, such that
$$
\tilde\rho(gh)=\tilde\rho(g)\tilde\rho(h)
$$
. Note that, in this notation,
$$
\tilde\rho(g)
$$
is a set of linear operators related by multiplication with some nonzero scalar.
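A standard small example uses Pauli matrices; the encoding below is my own. Mapping the Klein four-group {e, a, b, ab} to I, X, Z, XZ satisfies the homomorphism property only up to the sign c(g, h) = ±1: for instance ρ(a)ρ(b) = XZ = ρ(ab) but ρ(b)ρ(a) = ZX = −XZ = −ρ(ab), so this is a projective representation that is not an ordinary one.

```python
def matmul(A, B):
    """Product of 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]   # Pauli X
Z = [[1, 0], [0, -1]]  # Pauli Z
XZ = matmul(X, Z)

# A projective representation of the Klein four-group {e, a, b, ab}:
rho = {"e": I, "a": X, "b": Z, "ab": XZ}
```

Since X² = Z² = I, each generator squares to ρ(e) exactly, so every failure of the homomorphism property is confined to the scalar cocycle c(g, h).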
|
https://en.wikipedia.org/wiki/Projective_representation
|
passage: $$
f\left(\mathbf{X}\right)
$$
is
$$
V_{j_1 j_2 \dots j_r}=\int_0^1 \cdots \int_0^1 f_{j_1 j_2 \dots j_r}^2\left(X_{j_1},X_{j_2},\dots,X_{j_r}\right)dX_{j_1}dX_{j_2}\dots dX_{j_r}.
$$
The total variance is the sum of all conditional variances
$$
V = \sum_{j=1}^n V_j + \sum_{j=1}^{n-1} \sum_{k=j+1}^n V_{jk} + \cdots + V_{12\dots n}.
$$
The sensitivity index is defined as the normalized conditional variance as
$$
S_{j_1 j_2 \dots j_r} = \frac{V_{j_1 j_2 \dots j_r}}{V}
$$
especially the first order sensitivity
$$
S_j=\frac{V_j}{V}
$$
which indicates the main effect of the input
$$
X_j
$$
.
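For a purely additive model the indices can be computed in closed form, which makes a useful sanity check. The toy example below is my own: for f = Σⱼ cⱼXⱼ with independent Xⱼ ~ Uniform(0, 1), each conditional variance is Vⱼ = cⱼ²/12, there are no interaction terms, and the first-order indices sum to 1.

```python
def first_order_indices(coeffs):
    """First-order sensitivity indices S_j of the linear model f = sum_j c_j * X_j
    with independent X_j ~ Uniform(0, 1)."""
    var_u = 1 / 12                      # variance of Uniform(0, 1)
    v = [c * c * var_u for c in coeffs]
    total = sum(v)                      # total variance V (no interaction terms)
    return [vj / total for vj in v]
```

For c = (3, 1) this gives S₁ = 9/10 and S₂ = 1/10: the first input drives 90% of the output variance.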
|
https://en.wikipedia.org/wiki/Fourier_amplitude_sensitivity_testing
|
passage: ## Legal issues and global regulation
International legal issues of cyber attacks are complicated in nature. There is no global base of common rules to judge, and eventually punish, cybercrimes and cybercriminals - and where security firms or agencies do locate the cybercriminal behind the creation of a particular piece of malware or form of cyber attack, often the local authorities cannot take action due to lack of laws under which to prosecute. Proving attribution for cybercrimes and cyberattacks is also a major problem for all law enforcement agencies. "Computer viruses switch from one country to another, from one jurisdiction to another – moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world." The use of techniques such as dynamic DNS, fast flux and bullet proof servers add to the difficulty of investigation and enforcement.
## Role of government
The role of the government is to make regulations to force companies and organizations to protect their systems, infrastructure and information from any cyberattacks, but also to protect its own national infrastructure such as the national power-grid.
The government's regulatory role in cyberspace is complicated. For some, cyberspace was seen as a virtual space that was to remain free of government intervention, as can be seen in many of today's libertarian blockchain and bitcoin discussions.
Many government officials and experts think that the government should do more and that there is a crucial need for improved regulation, mainly due to the failure of the private sector to solve efficiently the cybersecurity problem.
|
https://en.wikipedia.org/wiki/Computer_security
|
passage: In linear algebra, a square matrix with complex entries is said to be skew-Hermitian or anti-Hermitian if its conjugate transpose is the negative of the original matrix. That is, the matrix
$$
A
$$
is skew-Hermitian if it satisfies the relation
$$
A^\textsf{H} = -A,
$$
where
$$
A^\textsf{H}
$$
denotes the conjugate transpose of the matrix
$$
A
$$
. In component form, this means that
$$
\overline{a_{ji}} = -a_{ij}
$$
for all indices
$$
i
$$
and
$$
j
$$
, where
$$
a_{ij}
$$
is the element in the
$$
i
$$
-th row and
$$
j
$$
-th column of
$$
A
$$
, and the overline denotes complex conjugation.
Skew-Hermitian matrices can be understood as the complex versions of real skew-symmetric matrices, or as the matrix analogue of the purely imaginary numbers. The set of all skew-Hermitian
$$
n \times n
$$
matrices forms the
$$
u(n)
$$
Lie algebra, which corresponds to the Lie group U(n). The concept can be generalized to include linear transformations of any complex vector space with a sesquilinear norm.
Note that the adjoint of an operator depends on the scalar product considered on the
$$
n
$$
dimensional complex or real space
$$
K^n
$$
.
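The defining relation Aᴴ = −A is easy to check entrywise. The sketch below is my own (names mine); note how the example matrix has purely imaginary diagonal entries, the skew-Hermitian analogue of the purely imaginary numbers mentioned above.

```python
def conj_transpose(A):
    """A^H: transpose with entrywise complex conjugation."""
    n = len(A)
    return [[complex(A[j][i]).conjugate() for j in range(n)] for i in range(n)]

def is_skew_hermitian(A, tol=1e-12):
    """Check A^H == -A entrywise, up to a numerical tolerance."""
    AH = conj_transpose(A)
    n = len(A)
    return all(abs(AH[i][j] + A[i][j]) <= tol for i in range(n) for j in range(n))
```

The matrix [[i, 2 + 3i], [−2 + 3i, −4i]] passes the check, while the identity matrix (which is Hermitian, not skew-Hermitian) fails it.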
|
https://en.wikipedia.org/wiki/Skew-Hermitian_matrix
|
passage: Close to half of the proof of the Feit–Thompson theorem involves intricate calculations with character values. Easier, but still essential, results that use character theory include Burnside's theorem (a purely group-theoretic proof of Burnside's theorem has since been found, but that proof came over half a century after Burnside's original proof), and a theorem of Richard Brauer and Michio Suzuki stating that a finite simple group cannot have a generalized quaternion group as its Sylow 2-subgroup.
## Definitions
Let V be a finite-dimensional vector space over a field F and let ρ : G → GL(V) be a representation of a group G on V. The character of ρ is the function χρ : G → F given by
$$
\chi_{\rho}(g) = \operatorname{Tr}(\rho(g))
$$
where Tr is the trace.
A character is called irreducible or simple if ρ is an irreducible representation. The degree of the character χ is the dimension of V; in characteristic zero this is equal to the value χ(1). A character of degree 1 is called linear. When G is finite and F has characteristic zero, the kernel of the character χρ is the normal subgroup:
$$
\ker \chi_\rho := \left \lbrace g \in G \mid \chi_{\rho}(g) = \chi_{\rho}(1) \right \rbrace,
$$
which is precisely the kernel of the representation ρ. However, the character is not a group homomorphism in general.
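A tiny concrete example of a character as a trace — my own construction, not from the source — is the permutation representation of the cyclic group C₃ acting on 3 points: the character value at each element is the number of points it fixes.

```python
def perm_matrix(perm):
    """Permutation matrix P with P[i][perm[i]] = 1."""
    n = len(perm)
    return [[1 if j == perm[i] else 0 for j in range(n)] for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# Permutation representation of C3 = {e, g, g^2} on 3 points; the character
# value at each element is the trace of its matrix, i.e. its fixed-point count.
chars = {name: trace(perm_matrix(p))
         for name, p in {"e": (0, 1, 2), "g": (1, 2, 0), "g2": (2, 0, 1)}.items()}
```

The degree of this character is χ(1) = 3, the dimension of the representation, as in the definition above; the non-identity elements fix nothing and have character value 0.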
|
https://en.wikipedia.org/wiki/Character_theory
|
passage: Wires also have some self-inductance.
## Example
Assume an electric network consisting of two voltage sources and three resistors.
According to the first law:
$$
i_1 - i_2 - i_3 = 0
$$
Applying the second law to the closed circuit , and substituting for voltage using Ohm's law gives:
$$
-R_2 i_2 + \mathcal{E}_1 - R_1 i_1 = 0
$$
The second law, again combined with Ohm's law, applied to the closed circuit gives:
$$
-R_3 i_3 - \mathcal{E}_2 - \mathcal{E}_1 + R_2 i_2 = 0
$$
This yields a system of linear equations in , , :
$$
\begin{cases}
i_1 - i_2 - i_3 & = 0 \\
-R_2 i_2 + \mathcal{E}_1 - R_1 i_1 & = 0 \\
-R_3 i_3 - \mathcal{E}_2 - \mathcal{E}_1 + R_2 i_2 & = 0
\end{cases}
$$
which is equivalent to
$$
\begin{cases}
i_1 + (- i_2) + (- i_3) & = 0 \\
R_1 i_1 + R_2 i_2 + 0 i_3 & = \mathcal{E}_1 \\
0 i_1 + R_2 i_2 - R_3 i_3 & = \mathcal{E}_1 + \mathcal{E}_2
\end{cases}
$$
Assuming
$$
\begin{align}
R_1 &= 100\Omega, & R_2 &= 200\Omega, & R_3 &= 300\Omega, \\
\mathcal{E}_1 &= 3\text{V}, & \mathcal{E}_2 &= 4\text{V}
\end{align}
$$
the solution is
$$
i_1 = \frac{1}{1100}\text{ A}, \quad i_2 = \frac{4}{275}\text{ A}, \quad i_3 = -\frac{3}{220}\text{ A}.
$$
The negative value of
$$
i_3
$$
means that the actual direction of this current is opposite to the assumed direction.
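The 3×3 system above can be solved exactly by Gaussian elimination. The sketch below is my own (names mine); it uses rational arithmetic and assumes a nonsingular system.

```python
from fractions import Fraction as F

def solve3(A, b):
    """Gauss-Jordan elimination for a nonsingular 3x3 system, exact rationals."""
    A = [[F(v) for v in row] + [F(bi)] for row, bi in zip(A, b)]
    for col in range(3):
        piv = next(r for r in range(col, 3) if A[r][col] != 0)  # partial pivot
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]              # normalize pivot row
        for r in range(3):
            if r != col and A[r][col] != 0:                     # clear the column
                A[r] = [v - A[r][col] * w for v, w in zip(A[r], A[col])]
    return [A[r][3] for r in range(3)]

R1, R2, R3 = F(100), F(200), F(300)
E1, E2 = F(3), F(4)
i1, i2, i3 = solve3(
    [[1, -1, -1], [R1, R2, 0], [0, R2, -R3]],
    [0, E1, E1 + E2],
)
```

This yields i₁ = 1/1100 A, i₂ = 4/275 A, i₃ = −3/220 A; the negative i₃ signals that the current flows opposite to the direction assumed when writing the loop equations.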
|
https://en.wikipedia.org/wiki/Kirchhoff%27s_circuit_laws
|
passage: Then the Airy pattern will be perfectly focussed at the distance given by the lens's focal length (assuming collimated light incident on the aperture) given by the above equations.
The zeros of
$$
J_1(x)
$$
are at
$$
x = ka \sin \theta \approx 3.8317, 7.0156, 10.1735, 13.3237, 16.4706\dots .
$$
From this, it follows that the first dark ring in the diffraction pattern occurs where
$$
ka \sin{\theta} = 3.8317\dots,
$$
or
$$
\sin \theta \approx \frac{3.83}{ka} = \frac{3.83 \lambda}{2 \pi a} = 1.22 \frac{\lambda}{2a} = 1.22 \frac{\lambda}{d}.
$$
If a lens is used to focus the Airy pattern at a finite distance, then the radius
$$
q_1
$$
of the first dark ring on the focal plane is solely given by the numerical aperture A (closely related to the f-number) by
$$
q_1 = R \sin \theta_1 \approx 1.22 {R} \frac{\lambda}{d} = 1.22 \frac{\lambda}{2A}
$$
where the numerical aperture A is equal to the aperture's radius d/2 divided by R', the distance from the center of the Airy pattern to the edge of the aperture.
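As a numerical illustration of the two formulas above (the wavelength, aperture diameter, and f-number below are assumed values, not taken from the text):

```python
wavelength = 550e-9   # m (assumed: green light)
d = 25e-3             # m (assumed aperture diameter)

# Angular radius of the first dark ring: sin(theta) ≈ 1.22 λ / d
sin_theta = 1.22 * wavelength / d
print(sin_theta)      # ≈ 2.68e-5 rad

# Radius of the first dark ring on the focal plane of an f/4 lens:
# q1 ≈ 1.22 λ R/d = 1.22 λ N, where N is the f-number R/d
f_number = 4.0
q1 = 1.22 * wavelength * f_number
print(q1)             # ≈ 2.68e-6 m, i.e. about 2.7 μm
```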
|
https://en.wikipedia.org/wiki/Airy_disk
|
passage: The last several examples are corollaries of a basic fact about the irreducible cyclotomic polynomials: the cosines are the real parts of the zeroes of those polynomials; the sum of the zeroes is the Möbius function evaluated at (in the very last case above) 21; only half of the zeroes are present above. The two identities preceding this last one arise in the same fashion with 21 replaced by 10 and 15, respectively.
|
https://en.wikipedia.org/wiki/List_of_trigonometric_identities
|
passage: The first-order Stark effect occurs in rotational transitions of symmetric top molecules (but not for linear and asymmetric molecules). In first approximation a molecule may be seen as a rigid rotor. A symmetric top rigid rotor has the unperturbed eigenstates
$$
|JKM \rangle = (D^J_{MK})^* \quad\text{with}\quad M,K= -J,-J+1,\dots,J
$$
with 2(2J+1)-fold degenerate energy for |K| > 0 and (2J+1)-fold degenerate energy for K=0.
Here DJMK is an element of the Wigner D-matrix. The first-order perturbation matrix on basis of the unperturbed rigid rotor function is non-zero and can be diagonalized. This gives shifts and splittings
in the rotational spectrum. Quantitative analysis of these Stark shift yields the permanent electric dipole moment of the symmetric top molecule.
#### Second order
As stated, the quadratic Stark effect is described by second-order perturbation theory. The zeroth-order eigenproblem
$$
H^{(0)} \psi^0_k = E^{(0)}_k \psi^0_k, \quad k=0,1, \ldots, \quad E^{(0)}_0 < E^{(0)}_1 \le E^{(0)}_2, \dots
$$
is assumed to be solved.
|
https://en.wikipedia.org/wiki/Stark_effect
|
passage: 1. Real-time market responsiveness is important for marketers because of the ability to shift marketing efforts and correct to current trends, which is helpful in maintaining relevance to consumers. This can supply corporations with the information necessary to predict the wants and needs of consumers in advance.
1. Data-driven market ambidexterity is heavily fueled by big data. New models and algorithms are being developed to make significant predictions about certain economic and social situations.
## Case studies
Government
#### China
- The Integrated Joint Operations Platform (IJOP, 一体化联合作战平台) is used by the government to monitor the population, particularly Uyghurs. Biometrics, including DNA samples, are gathered through a program of free physicals.
- By 2020, China plans to give all its citizens a personal "social credit" score based on how they behave. The Social Credit System, now being piloted in a number of Chinese cities, is considered a form of mass surveillance which uses big data analysis technology.
#### India
- Big data analysis was tried out for the BJP to win the 2014 Indian General Election.
- The Indian government uses numerous techniques to ascertain how the Indian electorate is responding to government action, as well as ideas for policy augmentation.
#### Israel
- Personalized diabetic treatments can be created through GlucoMe's big data solution.
|
https://en.wikipedia.org/wiki/Big_data
|
passage: Although this approach succeeds for some cases (such as the Eisenstein integers), in general such numbers do not factor uniquely. This failure of unique factorization in some cyclotomic fields led Ernst Kummer to the concept of ideal numbers and, later, Richard Dedekind to ideals.
#### Unique factorization of quadratic integers
The quadratic integer rings are helpful to illustrate Euclidean domains. Quadratic integers are generalizations of the Gaussian integers in which the imaginary unit i is replaced by a number . Thus, they have the form , where and are integers and has one of two forms, depending on a parameter . If does not equal a multiple of four plus one, then
$$
\omega = \sqrt D .
$$
If, however, $D$ does equal a multiple of four plus one, then
$$
\omega = \frac{1 + \sqrt{D}}{2} .
$$
If the function $f$ corresponds to a norm function, such as that used to order the Gaussian integers above, then the domain is known as norm-Euclidean. The norm-Euclidean rings of quadratic integers are exactly those where $D$ is one of the values −11, −7, −3, −2, −1, 2, 3, 5, 6, 7, 11, 13, 17, 19, 21, 29, 33, 37, 41, 57, or 73. The cases $D = -1$ and $D = -3$ yield the Gaussian integers and Eisenstein integers, respectively.
If $f$ is allowed to be any Euclidean function, then the list of possible values of $D$ for which the domain is Euclidean is not yet known.
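For the norm-Euclidean case D = −1 (the Gaussian integers), the Euclidean property can be sketched concretely: rounding the exact complex quotient to the nearest Gaussian integer always leaves a remainder of strictly smaller norm. A minimal sketch (the sample operands are arbitrary, not from the text):

```python
def gauss_divmod(x, y):
    """Euclidean division x = q*y + r in Z[i]: round the exact complex
    quotient to the nearest Gaussian integer, so N(r) <= N(y)/2 < N(y)."""
    z = x / y
    q = complex(round(z.real), round(z.imag))
    return q, x - q * y

def norm(z):
    """The norm N(a + bi) = a^2 + b^2, used as the Euclidean function."""
    return round(z.real) ** 2 + round(z.imag) ** 2

x, y = complex(27, 23), complex(8, 1)   # arbitrary Gaussian integers
q, r = gauss_divmod(x, y)
print(q, r, norm(r) < norm(y))          # (4+2j) (-3+3j) True
```

Iterating this division step yields a Euclidean algorithm for greatest common divisors in Z[i], just as repeated integer division does in Z.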
|
https://en.wikipedia.org/wiki/Euclidean_algorithm
|
passage: ## History
Histones were discovered in 1884 by Albrecht Kossel. The word "histone" dates from the late 19th century and is derived from the German word "Histon", a word itself of uncertain origin, perhaps from Ancient Greek ἵστημι (hístēmi, “make stand”) or ἱστός (histós, “loom”).
In the early 1960s, before the types of histones were known and before histones were known to be highly conserved across taxonomically diverse organisms, James F. Bonner and his collaborators began a study of these proteins that were known to be tightly associated with the DNA in the nucleus of higher organisms. Bonner and his postdoctoral fellow Ru Chih C. Huang showed that isolated chromatin would not support RNA transcription in the test tube, but if the histones were extracted from the chromatin, RNA could be transcribed from the remaining DNA. Their paper became a citation classic. Paul T'so and James Bonner had called together a World Congress on Histone Chemistry and Biology in 1964, in which it became clear that there was no consensus on the number of kinds of histone and that no one knew how they would compare when isolated from different organisms. Bonner and his collaborators then developed methods to separate each type of histone, purified individual histones, compared amino acid compositions in the same histone from different organisms, and compared amino acid sequences of the same histone from different organisms in collaboration with Emil Smith from UCLA. For example, they found Histone IV sequence to be highly conserved between peas and calf thymus.
|
https://en.wikipedia.org/wiki/Histone
|
passage: - An example of a fibration which is not a fiber bundle is given by the mapping $i^* \colon X^{I^k} \to X^{\partial I^k}$ induced by the inclusion $i \colon \partial I^k \to I^k$, where $k \in \N$, $X$ is a topological space and $X^{A} = \{f \colon A \to X\}$ is the space of all continuous mappings with the compact-open topology.
### Hopf fibration
- The Hopf fibration $S^1 \to S^3 \to S^2$ is a non-trivial fiber bundle and, specifically, a Serre fibration.
## Basic concepts
|
https://en.wikipedia.org/wiki/Fibration
|
passage: The law of large numbers states that the average of the sequence, i.e., $\bar{X}_{n} := \frac{1}{n}\sum_{i=1}^{n} X_{i}$, will approach the expected value almost surely; that is, the events which do not satisfy this limit have zero probability. The expected value of flipping heads, assumed to be represented by 1, is given by $p$. In fact, one has
$$
\mathbb{E}[X_i]=\mathbb{P}([X_i=1])=p,
$$
for any given random variable $X_i$ out of the infinite sequence of Bernoulli trials that compose the Bernoulli process.
One is often interested in knowing how often one will observe H in a sequence of n coin flips. This is given by simply counting:
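The limiting behaviour can be illustrated by simulation (a sketch; the seed, p, and n below are arbitrary choices, not from the text):

```python
import random

random.seed(0)                       # fixed seed for reproducibility
p = 0.3                              # assumed probability of heads
n = 100_000                          # number of Bernoulli trials
flips = [1 if random.random() < p else 0 for _ in range(n)]
x_bar = sum(flips) / n               # the sample mean, an estimate of p
print(x_bar)                         # close to p = 0.3
```

For large n the sample mean concentrates around p, with typical fluctuations of order sqrt(p(1-p)/n).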
|
https://en.wikipedia.org/wiki/Bernoulli_process
|
passage: A monochromatic wave (a wave of a single frequency) consists of successive troughs and crests, and the distance between two adjacent crests or troughs is called the wavelength. Waves of the electromagnetic spectrum vary in size, from very long radio waves longer than a continent to very short gamma rays smaller than atom nuclei. Frequency is inversely proportional to wavelength, according to the equation:
$$
\displaystyle v=f\lambda
$$
where v is the speed of the wave (c in a vacuum or less in other media), f is the frequency, and λ is the wavelength. As waves cross boundaries between different media, their speeds change but their frequencies remain constant.
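For example, the relation v = fλ can be evaluated directly (the 100 MHz signal below is an illustrative choice):

```python
c = 299_792_458.0   # speed of light in vacuum, m/s
f = 100e6           # assumed: a 100 MHz radio signal
wavelength = c / f  # λ = v / f
print(wavelength)   # ≈ 3.0 m
```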
Electromagnetic waves in free space must be solutions of Maxwell's electromagnetic wave equation. Two main classes of solutions are known, namely plane waves and spherical waves. The plane waves may be viewed as the limiting case of spherical waves at a very large (ideally infinite) distance from the source. Both types of waves can have a waveform which is an arbitrary time function (so long as it is sufficiently differentiable to conform to the wave equation). As with any time function, this can be decomposed by means of Fourier analysis into its frequency spectrum, or individual sinusoidal components, each of which contains a single frequency, amplitude, and phase. Such a component wave is said to be monochromatic.
|
https://en.wikipedia.org/wiki/Electromagnetic_radiation
|
passage: This notion is present in C and Perl, among others. Note that as in earlier languages such as Algol 60 and FORTRAN, spaces are allowed in identifiers, so that `half pi` is a single identifier (thus avoiding the underscores versus camel case versus all lower-case issues).
As another example, to express the mathematical idea of a sum of `f(i)` from i=1 to n, the following ALGOL 68 integer expression suffices:
(INT sum := 0; FOR i TO n DO sum +:= f(i) OD; sum)
Note that, being an integer expression, the former block of code can be used in any context where an integer value can be used. A block of code returns the value of the last expression it evaluated; this idea is present in Lisp, among other languages.
Compound statements are all terminated by distinctive closing brackets:
- IF choice clauses:
IF condition THEN statements [ ELSE statements ] FI
"brief" form: ( condition | statements | statements )
IF condition1 THEN statements ELIF condition2 THEN statements [ ELSE statements ] FI
"brief" form: ( condition1 | statements |: condition2 | statements | statements )
|
https://en.wikipedia.org/wiki/ALGOL_68
|
passage: ### Lua
```lua
counter = 5
factorial = 1
while counter > 0 do
factorial = factorial * counter
counter = counter - 1
end
print(factorial)
```
### MATLAB, Octave
```matlab
counter = 5;
factorial = 1;
while (counter > 0)
factorial = factorial * counter; %Multiply
counter = counter - 1; %Decrement
end
factorial
```
### Mathematica
```mathematica
Block[{counter=5,factorial=1}, (*localize counter and factorial*)
While[counter>0, (*While loop*)
factorial*=counter; (*Multiply*)
counter--; (*Decrement*)
];
factorial
]
```
### Oberon, Oberon-2, Oberon-07, Component Pascal
```oberon
MODULE Factorial;
IMPORT Out;
VAR
Counter, Factorial: INTEGER;
BEGIN
Counter := 5;
Factorial := 1;
WHILE Counter > 0 DO
Factorial := Factorial * Counter;
DEC(Counter)
END;
Out.Int(Factorial,0)
END Factorial.
```
### Maya Embedded Language
```perl
int $counter = 5;
int $factorial = 1;
int $multiplication;
while ($counter > 0) {
$multiplication = $factorial * $counter;
$counter -= 1;
print("Counter is: " + $counter + ", multiplication is: " + $multiplication + "\n");
}
```
|
https://en.wikipedia.org/wiki/While_loop
|
passage: ### Coordinate transformations
Suppose now that a different parameterization is selected, by allowing $u$ and $v$ to depend on another pair of variables $u'$ and $v'$. Then the analog of () for the new variables is
The chain rule relates $E'$, $F'$, and $G'$ to $E$, $F$, and $G$ via the matrix equation
where the superscript T denotes the matrix transpose. The matrix with the coefficients $E'$, $F'$, and $G'$ arranged in this way therefore transforms by the Jacobian matrix of the coordinate change
$$
J = \begin{bmatrix}
\dfrac{\partial u}{\partial u'} & \dfrac{\partial v}{\partial u'} \\[2ex]
\dfrac{\partial u}{\partial v'} & \dfrac{\partial v}{\partial v'}
\end{bmatrix}\,.
$$
A matrix which transforms in this way is one kind of what is called a tensor. The matrix
$$
\begin{bmatrix} E & F \\ F & G \end{bmatrix}
$$
with the transformation law () is known as the metric tensor of the surface.
### Invariance of arclength under coordinate transformations
The significance of a system of coefficients $E$, $F$, and $G$ that transformed in this way on passing from one system of coordinates to another was first observed in the classical theory of surfaces. The upshot is that the first fundamental form () is invariant under changes in the coordinate system, and that this follows exclusively from the transformation properties of $E$, $F$, and $G$.
|
https://en.wikipedia.org/wiki/Metric_tensor
|
passage: ### Merge
A merge or integration is an operation in which two sets of changes are applied to a file or set of files. Some sample scenarios are as follows:
- A user, working on a set of files, updates or syncs their working copy with changes made, and checked into the repository, by other users.
- A user tries to check in files that have been updated by others since the files were checked out, and the revision control software automatically merges the files (typically, after prompting the user if it should proceed with the automatic merge, and in some cases only doing so if the merge can be clearly and reasonably resolved).
- A branch is created, the code in the files is independently edited, and the updated branch is later incorporated into a single, unified trunk.
- A set of files is branched, a problem that existed before the branching is fixed in one branch, and the fix is then merged into the other branch. (This type of selective merge is sometimes known as a cherry pick to distinguish it from the complete merge in the previous case.)
### Promote
The act of copying file content from a less controlled location into a more controlled location. For example, from a user's workspace into a repository, or from a stream to its parent.
### Pull, push
Copy revisions from one repository into another. Pull is initiated by the receiving repository, while push is initiated by the source. Fetch is sometimes used as a synonym for pull, or to mean a pull followed by an update.
### Pull request
### Repository
|
https://en.wikipedia.org/wiki/Version_control
|
passage: Most commonly, they are used to represent a simple group of Boolean flags or an ordered sequence of Boolean values.
Bit arrays are used for priority queues, where the bit at index k is set if and only if k is in the queue; this data structure is used, for example, by the Linux kernel, and benefits strongly from a find-first-zero operation in hardware.
Bit arrays can be used for the allocation of memory pages, inodes, disk sectors, etc. In such cases, the term bitmap may be used. However, this term is frequently used to refer to raster images, which may use multiple bits per pixel.
Another application of bit arrays is the Bloom filter, a probabilistic set data structure that can store large sets in a small space in exchange for a small probability of error. It is also possible to build probabilistic hash tables based on bit arrays that accept either false positives or false negatives.
Bit arrays and the operations on them are also important for constructing succinct data structures, which use close to the minimum possible space. In this context, operations like finding the nth 1 bit or counting the number of 1 bits up to a certain position become important.
Bit arrays are also a useful abstraction for examining streams of compressed data, which often contain elements that occupy portions of bytes or are not byte-aligned. For example, the compressed Huffman coding representation of a single 8-bit character can be anywhere from 1 to 255 bits long.
In information retrieval, bit arrays are a good representation for the posting lists of very frequent terms.
|
https://en.wikipedia.org/wiki/Bit_array
|
passage: Suppose
$$
s_{p-2} \equiv 0 \pmod{M_p}.
$$
Then
$$
\omega^{2^{p-2}} + \bar{\omega}^{2^{p-2}} = k M_p
$$
for some integer k, so
$$
\omega^{2^{p-2}} = k M_p - \bar{\omega}^{2^{p-2}}.
$$
Multiplying by
$$
\omega^{2^{p - 2}}
$$
gives
$$
\left(\omega^{2^{p-2}}\right)^2 = k M_p\omega^{2^{p-2}} - (\omega \bar{\omega})^{2^{p-2}}.
$$
Thus,
$$
\omega^{2^{p-1}} = k M_p\omega^{2^{p-2}} - 1.\qquad\qquad(1)
$$
For a contradiction, suppose Mp is composite, and let q be the smallest prime factor of Mp. Mersenne numbers are odd, so q > 2. Let
$$
\mathbb{Z}_q
$$
be the integers modulo q, and let
$$
X = \left\{a + b \sqrt{3} \mid a, b \in \mathbb{Z}_q\right\}.
$$
Multiplication in
$$
X
$$
is defined as
$$
\left(a + \sqrt{3} b\right) \left(c + \sqrt{3} d\right) = [(a c + 3 b d) \,\bmod\,q] + \sqrt{3} [(a d + b c) \,\bmod\,q].
$$
Clearly, this multiplication is closed, i.e. the product of numbers from X is itself in X.
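The test that this argument supports checks whether $s_{p-2} \equiv 0 \pmod{M_p}$; a minimal sketch, using the standard starting value s₀ = 4 for the recurrence (an assumption stated here, since the excerpt begins mid-proof):

```python
def lucas_lehmer(p):
    """For an odd prime p, M_p = 2**p - 1 is prime iff s_{p-2} ≡ 0 (mod M_p),
    where s_0 = 4 and s_{i+1} = s_i**2 - 2."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# 2^11 - 1 = 2047 = 23 * 89 is the composite case among these exponents.
print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])  # [3, 5, 7, 13]
```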
|
https://en.wikipedia.org/wiki/Lucas%E2%80%93Lehmer_primality_test
|
passage: (In infinite-dimensional spaces, the property of compactness is stronger than the joint properties of being closed and being bounded.)
Edgar’s theorem implies Lindenstrauss’s theorem.
## Related notions
A closed convex subset of a topological vector space is called strictly convex if every one of its (topological) boundary points is an extreme point. The unit ball of any Hilbert space is a strictly convex set.
### k-extreme points
More generally, a point in a convex set $S$ is $k$-extreme if it lies in the interior of a $k$-dimensional convex set within $S$, but not of a $(k+1)$-dimensional convex set within $S$. Thus, an extreme point is also a $0$-extreme point. If $S$ is a polytope, then the $k$-extreme points are exactly the interior points of the $k$-dimensional faces of $S$. More generally, for any convex set $S$, the $k$-extreme points are partitioned into $k$-dimensional open faces.
The finite-dimensional Krein–Milman theorem, which is due to Minkowski, can be quickly proved using the concept of $k$-extreme points.
|
https://en.wikipedia.org/wiki/Extreme_point
|
passage: The stochastic process would simply be the canonical process $(\pi_t)_{t \in T}$, defined on $\Omega = (\mathbb{R}^n)^T$ with probability measure $P = \mu$. The reason that the original statement of the theorem does not mention inner regularity of the measures $\nu_{t_1 \dots t_k}$ is that this would automatically follow, since Borel probability measures on Polish spaces are automatically Radon.
This theorem has many far-reaching consequences; for example it can be used to prove the existence of the following, among others:
- Brownian motion, i.e., the Wiener process,
- a Markov chain taking values in a given state space with a given transition matrix,
- infinite products of (inner-regular) probability spaces.
## History
According to John Aldrich, the theorem was independently discovered by British mathematician Percy John Daniell in the slightly different setting of integration theory.
## References
## External links
- Aldrich, J. (2007) "But you have to remember P.J.Daniell of Sheffield" Electronic Journ@l for History of Probability and Statistics December 2007.
Category:Theorems about stochastic processes
|
https://en.wikipedia.org/wiki/Kolmogorov_extension_theorem
|
passage: Early British radar sets were referred to as RDF, which is often stated to have been a deception. In fact, the Chain Home systems used large RDF receivers to determine directions. Later radar systems generally used a single antenna for broadcast and reception, and determined direction from the direction the antenna was facing.
## History
### Early mechanical systems
The earliest experiments in RDF were carried out in 1888 when Heinrich Hertz discovered the directionality of an open loop of wire used as an antenna. When the antenna was aligned so it pointed at the signal it produced maximum gain, and produced zero signal when face on. This meant there was always an ambiguity in the location of the signal: it would produce the same output if the signal was in front or back of the antenna. Later experimenters also used dipole antennas, which worked in the opposite sense, reaching maximum gain at right angles and zero when aligned. RDF systems using mechanically swung loop or dipole antennas were common by the turn of the 20th century. Prominent examples were patented by John Stone Stone in 1902 (U.S. Patent 716,134) and Lee de Forest in 1904 (U.S. Patent 771,819), among many other examples.
By the early 1900s, many experimenters were looking for ways to use this concept for locating the position of a transmitter. Early radio systems generally used medium wave and longwave signals.
|
https://en.wikipedia.org/wiki/Direction_finding
|
passage: When the primes have been indicated, it is found that there are concentrations in certain vertical and diagonal lines, and amongst these the so-called Euler sequences with high concentrations of primes are discovered."
## Explanation
Diagonal, horizontal, and vertical lines in the number spiral correspond to polynomials of the form
$$
f(n) = 4 n^2 + b n + c
$$
where b and c are integer constants. When b is even, the lines are diagonal, and either all numbers are odd, or all are even, depending on the value of c. It is therefore no surprise that all primes other than 2 lie in alternate diagonals of the Ulam spiral. Some polynomials, such as
$$
4 n^2 + 8 n + 3
$$
, while producing only odd values, factorize over the integers
$$
(4 n^2 + 8 n + 3)=(2n+1)(2n+3)
$$
and are therefore never prime except possibly when one of the factors equals 1. Such examples correspond to diagonals that are devoid of primes or nearly so.
To gain insight into why some of the remaining odd diagonals may have a higher concentration of primes than others, consider
$$
4 n^2 + 6 n + 1
$$
and
$$
4 n^2 + 6 n + 5
$$
. Compute remainders upon division by 3 as n takes successive values 0, 1, 2, .... For the first of these polynomials, the sequence of remainders is 1, 2, 2, 1, 2, 2, ..., while for the second, it is 2, 0, 0, 2, 0, 0, ....
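These remainder sequences can be checked directly:

```python
# Remainders upon division by 3 for n = 0, 1, ..., 5:
r1 = [(4*n*n + 6*n + 1) % 3 for n in range(6)]  # 4n^2 + 6n + 1
r2 = [(4*n*n + 6*n + 5) % 3 for n in range(6)]  # 4n^2 + 6n + 5
print(r1)  # [1, 2, 2, 1, 2, 2]
print(r2)  # [2, 0, 0, 2, 0, 0]
```

The second polynomial is divisible by 3 for two out of every three values of n, so those values can never be prime (except 3 itself), which thins out primes along that diagonal.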
|
https://en.wikipedia.org/wiki/Ulam_spiral
|
passage: (Here the scalar field is understood to be complex, i.e. to correspond to a (smooth) function $f:\R^3 \to \Complex$.) In spherical coordinates this is:
$$
\nabla^2 f
= \frac{1}{r^2} \frac{\partial}{\partial r}\left(r^2 \frac{\partial f}{\partial r}\right)
+ \frac{1}{r^2 \sin\theta} \frac{\partial}{\partial \theta}\left(\sin\theta \frac{\partial f}{\partial \theta}\right)
+ \frac{1}{r^2 \sin^2\theta} \frac{\partial^2 f}{\partial \varphi^2} = 0.
$$
Consider the problem of finding solutions of the form $f(r, \theta, \varphi) = R(r)\,Y(\theta, \varphi)$. By separation of variables, two differential equations result by imposing Laplace's equation:
$$
\frac{1}{R}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right) = \lambda,\qquad \frac{1}{Y}\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta \frac{\partial Y}{\partial\theta}\right) + \frac{1}{Y}\frac{1}{\sin^2\theta}\frac{\partial^2Y}{\partial\varphi^2} = -\lambda.
$$
The second equation can be simplified under the assumption that $Y$ has the form $Y(\theta, \varphi) = \Theta(\theta)\,\Phi(\varphi)$.
|
https://en.wikipedia.org/wiki/Spherical_harmonics
|
passage: In the more convenient multi-index notation this can be written
$$
f(x) = \sum_{\alpha \in \N^n} a_\alpha (x - c)^\alpha.
$$
where $\N$ is the set of natural numbers, and so $\N^n$ is the set of ordered n-tuples of natural numbers.
The theory of such series is trickier than for single-variable series, with more complicated regions of convergence. For instance, the power series
$$
\sum_{n=0}^\infty x_1^n x_2^n
$$
is absolutely convergent in the set $\{ (x_1, x_2) : |x_1 x_2| < 1\}$ between two hyperbolas. (This is an example of a log-convex set, in the sense that the set of points $(\log |x_1|, \log |x_2|)$, where $(x_1, x_2)$ lies in the above region, is a convex set. More generally, one can show that when c=0, the interior of the region of absolute convergence is always a log-convex set in this sense.) On the other hand, in the interior of this region of convergence one may differentiate and integrate under the series sign, just as one may with ordinary power series.
## Order of a power series
Let be a multi-index for a power series .
|
https://en.wikipedia.org/wiki/Power_series
|
passage: The Centipede game is commonly used in introductory game theory courses and texts to highlight the concept of backward induction and the iterated elimination of dominated strategies, which show a standard way of providing a solution to the game.
## Play
One possible version of a centipede game could be played as follows:
The addition of coins is taken to be an externality, as it is not contributed by either player.
### Formal definition
The centipede game may be written as $\mathcal{G}(N, m_{0}, m_{1})$ where $N, m_{0}, m_{1} \in \mathbb{N}$ and $m_{0} > m_{1}$. Players $I$ and $II$ alternate, starting with player $I$, and may on each turn play a move from $\{\mathrm{take}, \mathrm{push}\}$, with a maximum of $N$ rounds. The game terminates when $\mathrm{take}$ is played for the first time, or otherwise upon $N$ moves if $\mathrm{take}$ is never played.
Suppose the game ends on round $t \in \{0, \ldots, N-1\}$ with player $p \in \{I, II\}$ making the final move.
https://en.wikipedia.org/wiki/Centipede_game
|
passage: Since it takes six tetrahedra to represent one hexahedron, the tet mesh size will be considerably larger and will require a lot more computing power and RAM to solve than an equivalent hexahedral mesh. The tetrahedral mesh will also require more relaxation factors to solve the simulation by effectively dampening the amplitude of the gradients. This increases the number of sub-cycle steps and drives the Courant number up. If you built a hexahedral mesh, this is where the tortoise passes the hare.
5) Post processing the results: The time required in this step is highly dependent on the size of the mesh (number of cells).
6) Making design changes: If you build a non-structured mesh this is where you go back to the beginning and start all over again. If you build a hexahedral mesh then you make the geometric change, re-smooth the mesh and restart the simulation.
7) Accuracy: This is the major difference between a non-structured mesh and a hexahedral mesh, and the main reason why it is preferred.
The "spatial twist continuum" addresses the issue of complex mesh model creation by elevating the structure of the mesh to a higher level of abstraction that assists in the creation of the all-hexahedral mesh.
## References
- Murdoch P.; Benzley S.1; Blacker T.; Mitchell S.A. "The spatial twist continuum: A connectivity based method for representing all-hexahedral finite element meshes."
|
https://en.wikipedia.org/wiki/Spatial_twist_continuum
|
passage: In computer science, an operator-precedence parser is a bottom-up parser that interprets an operator-precedence grammar. For example, most calculators use operator-precedence parsers to convert from the human-readable infix notation relying on order of operations to a format that is optimized for evaluation such as Reverse Polish notation (RPN).
Edsger Dijkstra's shunting yard algorithm is commonly used to implement operator-precedence parsers.
## Relationship to other parsers
An operator-precedence parser is a simple shift-reduce parser that is capable of parsing a subset of LR(1) grammars. More precisely, the operator-precedence parser can parse all LR(1) grammars where two consecutive nonterminals and epsilon never appear in the right-hand side of any rule.
Operator-precedence parsers are not used often in practice; however they do have some properties that make them useful within a larger design. First, they are simple enough to write by hand, which is not generally the case with more sophisticated right shift-reduce parsers. Second, they can be written to consult an operator table at run time, which makes them suitable for languages that can add to or change their operators while parsing. (An example is Haskell, which allows user-defined infix operators with custom associativity and precedence; consequently, an operator-precedence parser must be run on the program after parsing of all referenced modules.)
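One common way to implement such a parser is precedence climbing, which consults an operator table at run time as described; the grammar, token set, and table below are illustrative assumptions, not taken from the passage:

```python
import re

PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}   # consulted at run time

def tokenize(s):
    """Split an infix expression into number and operator tokens."""
    return re.findall(r"\d+|[+\-*/()]", s)

def parse_primary(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        node = parse_expr(tokens)
        tokens.pop(0)                 # discard the matching ")"
        return node
    return int(tok)

def parse_expr(tokens, min_prec=1):
    lhs = parse_primary(tokens)
    while tokens and tokens[0] in PRECEDENCE and PRECEDENCE[tokens[0]] >= min_prec:
        op = tokens.pop(0)
        # min_prec + 1 makes binary operators left-associative
        rhs = parse_expr(tokens, PRECEDENCE[op] + 1)
        lhs = (op, lhs, rhs)
    return lhs

print(parse_expr(tokenize("2+3*4")))  # ('+', 2, ('*', 3, 4))
```

Because the precedence table is an ordinary dictionary looked up during parsing, new operators can be registered between parses, which is the property that makes this style suitable for languages with user-defined operators.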
|
https://en.wikipedia.org/wiki/Operator-precedence_parser
|
passage: Each of these common eigenvectors v ∈ V defines a linear functional on the subalgebra U of End(V ) generated by the set of endomorphisms S; this functional is defined as the map which associates to each element of U its eigenvalue on the eigenvector v. This map is also multiplicative, and sends the identity to 1; thus it is an algebra homomorphism from U to the base field. This "generalized eigenvalue" is a prototype for the notion of a weight.
The notion is closely related to the idea of a multiplicative character in group theory, which is a homomorphism χ from a group G to the multiplicative group of a field F. Thus χ: G → F× satisfies χ(e) = 1 (where e is the identity element of G) and
$$
\chi(gh) = \chi(g)\chi(h)
$$
for all g, h in G.
Indeed, if G acts on a vector space V over F, each simultaneous eigenspace for every element of G, if such exists, determines a multiplicative character on G: the eigenvalue on this common eigenspace of each element of the group.
|
https://en.wikipedia.org/wiki/Weight_%28representation_theory%29
|