passage: ## Statement
The three utilities problem can be stated as follows: three houses must each be connected to the water, gas, and electricity companies by separate lines, and the question is whether all nine connections can be drawn without any two of the lines crossing.
The problem is an abstract mathematical puzzle which imposes constraints that would not exist in a practical engineering situation. Its mathematical formalization is part of the field of topological graph theory which studies the embedding of graphs on surfaces. An important part of the puzzle, but one that is often not stated explicitly in informal wordings of the puzzle, is that the houses, companies, and lines must all be placed on a two-dimensional surface with the topology of a plane, and that the lines are not allowed to pass through other buildings; sometimes this is enforced by showing a drawing of the houses and companies, and asking for the connections to be drawn as lines on the same drawing.
In more formal graph-theoretic terms, the problem asks whether the complete bipartite graph
$$
K_{3,3}
$$
is a planar graph. This graph has six vertices in two subsets of three: one vertex for each house, and one for each utility. It has nine edges, one edge for each of the pairings of a house with a utility, or more abstractly one edge for each pair of a vertex in one subset and a vertex in the other subset. Planar graphs are the graphs that can be drawn without crossings in the plane, and if such a drawing could be found, it would solve the three utilities puzzle.
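Whether such a drawing exists can also be probed computationally. A minimal sketch (assuming the third-party networkx library is available) builds the graph and asks for a planar embedding:

```python
import networkx as nx

# K_{3,3}: one vertex per house and per utility, one edge per house-utility pair.
G = nx.complete_bipartite_graph(3, 3)
is_planar, _ = nx.check_planarity(G)
print(G.number_of_nodes(), G.number_of_edges(), is_planar)   # 6 9 False: no planar drawing exists
```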
## Puzzle solutions
### Unsolvability
|
https://en.wikipedia.org/wiki/Three_utilities_problem
|
passage: The idea of the proof is to get an upper bound on
$$
C^m
$$
for
$$
m \in \mathbb{N}
$$
and show that it can only hold for all
$$
m
$$
if
$$
C \leq 1
$$
. Rewrite
$$
C^m
$$
as
$$
\begin{align}
C^m & = \left( \sum_{i=1}^n r^{-l_i} \right)^m \\
& = \sum_{i_1=1}^n \sum_{i_2=1}^n \cdots \sum_{i_m=1}^n r^{-\left(l_{i_1} + l_{i_2} + \cdots + l_{i_m} \right)} \\
\end{align}
$$
Consider all m-powers
$$
S^m
$$
, in the form of words
$$
s_{i_1}s_{i_2}\dots s_{i_m}
$$
, where
$$
i_1, i_2, \dots, i_m
$$
are indices between 1 and
$$
n
$$
. Note that, since S was assumed to be uniquely decodable,
$$
s_{i_1}s_{i_2}\dots s_{i_m}=s_{j_1}s_{j_2}\dots s_{j_m}
$$
implies
$$
i_1=j_1, i_2=j_2, \dots, i_m=j_m
$$
.
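As a quick sanity check of the quantity C defined above, the Kraft sum can be computed for a small binary prefix code (prefix codes are uniquely decodable, so McMillan's bound C ≤ 1 must hold); the codewords below are just an illustrative choice:

```python
# Kraft sum C = sum_i r^{-l_i} for a hypothetical binary (r = 2) prefix code.
code = {"a": "0", "b": "10", "c": "110", "d": "111"}
r = 2
C = sum(r ** -len(word) for word in code.values())
print(C)   # 0.5 + 0.25 + 0.125 + 0.125 = 1.0, consistent with C <= 1
```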
|
https://en.wikipedia.org/wiki/Kraft%E2%80%93McMillan_inequality
|
passage: ## Entire characteristic functions
As defined above, the argument of the characteristic function is treated as a real number: however, certain aspects of the theory of characteristic functions are advanced by extending the definition into the complex plane by analytic continuation, in cases where this is possible.
## Related concepts
Related concepts include the moment-generating function and the probability-generating function. The characteristic function exists for all probability distributions. This is not the case for the moment-generating function.
The characteristic function is closely related to the Fourier transform: the characteristic function of a probability density function p(x) is the complex conjugate of the continuous Fourier transform of p(x) (according to the usual convention; see continuous Fourier transform – other conventions).
$$
\varphi_X(t) = \langle e^{itX} \rangle = \int_{\mathbf{R}} e^{itx}p(x)\, dx = \overline{\left( \int_{\mathbf{R}} e^{-itx}p(x)\, dx \right)} = \overline{P(t)},
$$
where P(t) denotes the continuous Fourier transform of the probability density function p(x).
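As a rough numerical illustration of this relation, the defining integral can be approximated for a standard normal density, whose characteristic function is known to be exp(−t²/2) (used here only for comparison):

```python
import numpy as np

# Approximate phi_X(t) = integral of e^{itx} p(x) dx by a Riemann sum.
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard normal density
t = 1.3
phi = np.sum(np.exp(1j * t * x) * p) * dx
print(phi, np.exp(-t**2 / 2))                # both approximately 0.4296; imaginary part ~ 0
```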
|
https://en.wikipedia.org/wiki/Characteristic_function_%28probability_theory%29
|
passage: 1. The inner culling loop (the `j` loop) exactly reflects the way the algorithm is formulated, but seemingly without realizing that the indexed culling starts at exactly the index representing the square of the base odd number and that the indexing using multiplication can much more easily be expressed as a simple repeated addition of the base odd number across the range; in fact, this method of adding a constant value across the culling range is exactly how the Sieve of Eratosthenes culling is generally implemented.
The following Python code in the same style resolves the above three issues, as well converting the code to a prime-counting function that also displays the total number of composite-culling operations:
```python
from math import isqrt
def sieve_of_Sundaram(n):
"""The sieve of Sundaram is a simple deterministic algorithm for finding all the prime numbers up to a specified integer."""
if n < 3:
if n < 2:
return 0
else:
return 1
k = (n - 3) // 2 + 1
integers_list = [True for i in range(k)]
ops = 0
for i in range((isqrt(n) - 3) // 2 + 1):
1. if integers_list[i]: # adding this condition turns it into a SoE!
p = 2 * i + 3
s = (p * p - 3) // 2 # compute cull start
for j in range(s, k, p):
integers_list[j] = False
ops += 1
print("Total operations: ", ops, ";", sep=)
count = 1
for i in range(k):
if integers_list[i]:
count += 1
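A brief usage check of the reconstruction above (the final `return count` line is an addition made here so the prime-counting behavior is explicit):

```python
print(sieve_of_Sundaram(100))   # prints the culling-operation total, then 25 (primes up to 100)
```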
|
https://en.wikipedia.org/wiki/Sieve_of_Sundaram
|
passage: Each book in a library may be checked out by one patron at a time. However, a single patron may be able to check out multiple books. Therefore, the information about which books are checked out to which patrons may be represented by an associative array, in which the books are the keys and the patrons are the values. Using notation from Python or JSON, the data structure would be:
```javascript
{
"Pride and Prejudice": "Alice",
"Wuthering Heights": "Alice",
"Great Expectations": "John"
}
```
A lookup operation on the key "Great Expectations" would return "John". If John returns his book, that would cause a deletion operation, and if Pat checks out a book, that would cause an insertion operation, leading to a different state:
```javascript
{
"Pride and Prejudice": "Alice",
"The Brothers Karamazov": "Pat",
"Wuthering Heights": "Alice"
}
```
## Implementation
For dictionaries with very few mappings, it may make sense to implement the dictionary using an association list, which is a linked list of mappings. With this implementation, the time to perform the basic dictionary operations is linear in the total number of mappings. However, it is easy to implement and the constant factors in its running time are small.
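A minimal sketch of such an association list in Python (the class and method names are illustrative, not from the source); every operation walks the list, so the running time is linear in the number of mappings:

```python
class AssociationList:
    """Dictionary as a linked list of [key, value, next] cells; every operation is O(n)."""
    def __init__(self):
        self.head = None

    def get(self, key, default=None):
        cell = self.head
        while cell is not None:
            if cell[0] == key:
                return cell[1]
            cell = cell[2]
        return default

    def put(self, key, value):
        cell = self.head
        while cell is not None:
            if cell[0] == key:                 # overwrite an existing mapping
                cell[1] = value
                return
            cell = cell[2]
        self.head = [key, value, self.head]    # otherwise prepend a new cell

    def delete(self, key):
        prev, cell = None, self.head
        while cell is not None:
            if cell[0] == key:
                if prev is None:
                    self.head = cell[2]
                else:
                    prev[2] = cell[2]
                return
            prev, cell = cell, cell[2]

books = AssociationList()
books.put("Great Expectations", "John")
books.put("Pride and Prejudice", "Alice")
print(books.get("Great Expectations"))   # John
books.delete("Great Expectations")       # John returns his book
print(books.get("Great Expectations"))   # None
```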
|
https://en.wikipedia.org/wiki/Associative_array
|
passage: Pappus's area theorem describes the relationship between the areas of three parallelograms attached to three sides of an arbitrary triangle. The theorem, which can also be thought of as a generalization of the Pythagorean theorem, is named after the Greek mathematician Pappus of Alexandria (4th century AD), who discovered it.
## Theorem
Given an arbitrary triangle with two arbitrary parallelograms attached to two of its sides the theorem tells how to construct a parallelogram over the third side, such that the area of the third parallelogram equals the sum of the areas of the other two parallelograms.
Let ABC be the arbitrary triangle and ABDE and ACFG the two arbitrary parallelograms attached to the triangle sides AB and AC. The extended parallelogram sides DE and FG intersect at H. The line segment AH now "becomes" the side of the third parallelogram BCML attached to the triangle side BC, i.e., one constructs line segments BL and CM over BC, such that BL and CM are a parallel and equal in length to AH. The following identity then holds for the areas (denoted by A) of the parallelograms:
$$
\text{A}_{ABDE}+\text{A}_{ACFG}=\text{A}_{BCML}
$$
The theorem generalizes the Pythagorean theorem twofold. Firstly it works for arbitrary triangles rather than only for right angled ones and secondly it uses parallelograms rather than squares.
|
https://en.wikipedia.org/wiki/Pappus%27s_area_theorem
|
passage: After degradation lysosomal products are transported out of the lysosome through specific membrane proteins or via vesicular membrane trafficking to be recycled or to be utilized for energy.
Aside from cellular clearance and secretion, lysosomes mediate biological processes like plasma membrane repair, cell homeostasis, energy metabolism, cell signaling, and the immune response.
## Discovery
Christian de Duve, a Belgian scientist at the Laboratory of Physiological Chemistry at the Catholic University of Louvain, is credited with discovering lysosomes in the 1950s. De Duve and his team were studying the distribution of hydrolytic enzymes such as acid phosphatase within cells, using cell fractionation methods to isolate subcellular components. De Duve and his team identified an unknown organelle that was rich in acid phosphatase. This led them to propose the existence of lysosomes as membrane bound organelles containing digestive enzymes capable of breaking down a variety of biological molecules.
Using differential centrifugation and enzyme activity assays, the team confirmed the hypothesis and understood that these organelles play a crucial role in intracellular digestion processes, such as phagocytosis and autophagy. The presence of digestive enzymes was further validated using electron microscopy. De Duve’s discovery laid the foundation for new research into lysosomal functions and understanding disorders which could lead to undigested materials accumulating in the cell. De Duve was awarded the Nobel Prize in Physiology or Medicine in 1974.
|
https://en.wikipedia.org/wiki/Lysosome
|
passage: It is not necessary that all aspects of internal thermodynamic equilibrium be reached simultaneously; some can be established before others. For example, in many cases of such evolution, internal mechanical equilibrium is established much more rapidly than the other aspects of the eventual thermodynamic equilibrium. Another example is that, in many cases of such evolution, thermal equilibrium is reached much more rapidly than chemical equilibrium.
### Fluctuations within an isolated system in its own internal thermodynamic equilibrium
In an isolated system, thermodynamic equilibrium by definition persists over an indefinitely long time. In classical physics it is often convenient to ignore the effects of measurement and this is assumed in the present account.
To consider the notion of fluctuations in an isolated thermodynamic system, a convenient example is a system specified by its extensive state variables, internal energy, volume, and mass composition. By definition they are time-invariant. By definition, they combine with time-invariant nominal values of their conjugate intensive functions of state, inverse temperature, pressure divided by temperature, and the chemical potentials divided by temperature, so as to exactly obey the laws of thermodynamics. But the laws of thermodynamics, combined with the values of the specifying extensive variables of state, are not sufficient to provide knowledge of those nominal values. Further information is needed, namely, of the constitutive properties of the system.
It may be admitted that on repeated measurement of those conjugate intensive functions of state, they are found to have slightly different values from time to time.
|
https://en.wikipedia.org/wiki/Thermodynamic_equilibrium
|
passage: bounded:
$$
|a(u,v)| \le C \|u\| \|v\|\,;
$$
and
1. coercive:
$$
a(u,u) \ge c \|u\|^2\,.
$$
Then, for any bounded linear functional f in V' there is a unique solution
$$
u\in V
$$
to the equation
$$
a(u,v) = f(v) \quad \forall v \in V
$$
and it holds
$$
\|u\| \le \frac1c \|f\|_{V'}\,.
$$
### Application to example 1
Here, application of the Lax–Milgram theorem is a stronger result than is needed.
- Boundedness: all bilinear forms on
$$
\R^n
$$
are bounded. In particular, we have
$$
|a(u,v)| \le \|A\|\,\|u\|\,\|v\|
$$
- Coercivity: this actually means that the real parts of the eigenvalues of
$$
A
$$
are not smaller than
$$
c
$$
. Since this implies in particular that no eigenvalue is zero, the system is solvable.
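A small numerical sketch of this finite-dimensional case (restricted, for simplicity, to a symmetric positive-definite A, for which the coercivity constant c is the smallest eigenvalue):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)        # symmetric positive definite, hence coercive
f = rng.standard_normal(5)

c = np.linalg.eigvalsh(A).min()    # coercivity constant in the symmetric case
u = np.linalg.solve(A, f)          # unique solution of a(u, v) = f(v), i.e. A u = f
print(np.linalg.norm(u) <= np.linalg.norm(f) / c)   # True: the estimate ||u|| <= ||f||/c holds
```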
Additionally, this yields the estimate
$$
\|u\| \le \frac1c \|f\|,
$$
where
$$
c
$$
is the minimal real part of an eigenvalue of
|
https://en.wikipedia.org/wiki/Weak_formulation
|
passage: Isotopic tracers are some of the most important tools in geology because they can be used to understand complex mixing processes in earth systems. Further discussion of the application of isotopic tracers in geology is covered under the heading of isotope geochemistry.
Isotopic tracers are usually subdivided into two categories: stable isotope tracers and radiogenic isotope tracers. Stable isotope tracers involve only non-radiogenic isotopes and usually are mass-dependent. In theory, any element with two stable isotopes can be used as an isotopic tracer. However, the most commonly used stable isotope tracers involve relatively light isotopes, which readily undergo fractionation in natural systems. See also isotopic signature. A radiogenic isotope tracer involves an isotope produced by radioactive decay, which is usually in a ratio with a non-radiogenic isotope (whose abundance in the earth does not vary due to radioactive decay).
## Stable isotope labeling
Stable isotope labeling involves the use of non-radioactive isotopes that can act as tracers used to model several chemical and biochemical systems. The chosen isotope can act as a label on that compound that can be identified through nuclear magnetic resonance (NMR) and mass spectrometry (MS). Some of the most common stable isotopes are 2H, 13C, and 15N, which can further be produced into NMR solvents, amino acids, nucleic acids, lipids, common metabolites and cell growth media.
|
https://en.wikipedia.org/wiki/Isotopic_labeling
|
passage: This fact is a central one in Fourier series.
### Orthogonal polynomials
Various polynomial sequences named for mathematicians of the past are sequences of orthogonal polynomials. In particular:
- The Hermite polynomials are orthogonal with respect to the Gaussian distribution with zero mean value.
- The Legendre polynomials are orthogonal with respect to the uniform distribution on the interval
$$
[-1,1]
$$
.
- The Laguerre polynomials are orthogonal with respect to the exponential distribution. Somewhat more general Laguerre polynomial sequences are orthogonal with respect to gamma distributions.
- The Chebyshev polynomials of the first kind are orthogonal with respect to the measure
$$
\frac{1}{\sqrt{1-x^2}}.
$$
- The Chebyshev polynomials of the second kind are orthogonal with respect to the Wigner semicircle distribution.
## Combinatorics
In combinatorics, two
$$
n \times n
$$
Latin squares are said to be orthogonal if their superimposition yields all possible
$$
n^2
$$
combinations of entries.
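A small check of this definition with two hand-built 3 × 3 Latin squares (an illustrative construction, not taken from the source):

```python
# A[i][j] = (i + j) mod 3 and B[i][j] = (2i + j) mod 3 are both Latin squares.
n = 3
A = [[(i + j) % n for j in range(n)] for i in range(n)]
B = [[(2 * i + j) % n for j in range(n)] for i in range(n)]

pairs = {(A[i][j], B[i][j]) for i in range(n) for j in range(n)}
print(len(pairs) == n * n)   # True: all n^2 ordered pairs occur, so A and B are orthogonal
```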
## Completely orthogonal
Two flat planes
$$
A
$$
and
$$
B
$$
of a Euclidean four-dimensional space are called completely orthogonal if and only if every line in
$$
A
$$
is orthogonal to every line in
$$
B
$$
.
|
https://en.wikipedia.org/wiki/Orthogonality_%28mathematics%29
|
passage: ### Deployment models
Regarding cloud resources, Microsoft Azure offers two deployment models: the "classic" model and the Azure Resource Manager. In the classic model, each resource, like a virtual machine or SQL database, had to be managed separately, but in 2014, Azure introduced the Azure Resource Manager, which allows users to group related services. This update makes it easier and more efficient to deploy, manage, and monitor resources that work closely together. The classic model will eventually be phased out.
### Infrastructure development
In January 2025, Microsoft announced plans to invest $80 billion in AI and data centers as part of its fiscal year 2025 budget. This investment would enhance the scalability and performance of Azure's cloud infrastructure, which supports AI-driven applications, including services developed through Microsoft's partnership with OpenAI.
## History and timeline
In 2005, Microsoft took over Groove Networks, and Bill Gates made Groove's founder Ray Ozzie one of his 5 direct reports as one of 3 chief technology officers. Ozzie met with Amitabh Srivastava, which let Srivastava change course. They convinced Dave Cutler to postpone his retirement, and their teams developed a cloud operating system. Red Dog: Five questions with Microsoft mystery man Dave Cutler , ZDNet, 2009-02-25.
- October 2008 (PDC LA) – Announced the Windows Azure Platform.
- March 2009 – Announced SQL Azure Relational Database.
- November 2009 – Updated Windows Azure CTP, Enabled full trust, PHP, Java, CDN CTP, and more.
- February 1, 2010 – Windows Azure Platform commercially available.
- June 2010 – Windows Azure Update, .NET
|
https://en.wikipedia.org/wiki/Microsoft_Azure
|
passage: Nevertheless, with prior knowledge or assumptions about the signal, it turns out to be possible to perfectly reconstruct a signal from a series of measurements (acquiring this series of measurements is called sampling). Over time, engineers have improved their understanding of which assumptions are practical and how they can be generalized.
An early breakthrough in signal processing was the Nyquist–Shannon sampling theorem. It states that if a real signal's highest frequency is less than half of the sampling rate, then the signal can be reconstructed perfectly by means of sinc interpolation. The main idea is that with prior knowledge about constraints on the signal's frequencies, fewer samples are needed to reconstruct the signal.
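A rough numerical sketch of this reconstruction idea (a 3 Hz sine sampled at 10 Hz; the interpolation sum is truncated to a finite window of samples, so the agreement is approximate rather than perfect):

```python
import numpy as np

fs = 10.0                                   # sampling rate (Hz); 3 Hz < fs/2, so no aliasing
T = 1.0 / fs
t_samples = np.arange(0.0, 2.0, T)          # finite window of sample instants
x_samples = np.sin(2 * np.pi * 3 * t_samples)

def sinc_reconstruct(t):
    # Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc((t - nT) / T)
    return np.sum(x_samples * np.sinc((t - t_samples) / T))

t_test = 0.987
print(sinc_reconstruct(t_test), np.sin(2 * np.pi * 3 * t_test))   # close, up to truncation error
```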
Around 2004, Emmanuel Candès, Justin Romberg, Terence Tao, and David Donoho proved that given knowledge about a signal's sparsity, the signal may be reconstructed with even fewer samples than the sampling theorem requires. This idea is the basis of compressed sensing.
## History
Compressed sensing relies on techniques, which several other scientific fields have used historically. In statistics, the least squares method was complemented by the -norm, which was introduced by Laplace. Following the introduction of linear programming and Dantzig's simplex algorithm, the
$$
L^1
$$
-norm was used in computational statistics. In statistical theory, the
$$
L^1
$$
-norm was used by George W. Brown and later writers on median-unbiased estimators.
|
https://en.wikipedia.org/wiki/Compressed_sensing
|
passage: The pattern itself is seemingly random, as it represents the way in which the scene's light interfered with the original light source – but not the original light source itself. The interference pattern can be considered an encoded version of the scene, requiring a particular key – the original light source – in order to view its contents.
This missing key is provided later by shining a laser, identical to the one used to record the hologram, onto the developed film. When this beam illuminates the hologram, it is diffracted by the hologram's surface pattern. This produces a light field identical to the one originally produced by the scene and scattered onto the hologram.
### Comparison with photography
Holography may be better understood via an examination of its differences from ordinary photography:
- A hologram represents a recording of information regarding the light that came from the original scene as scattered in a range of directions rather than from only one direction, as in a photograph. This allows the scene to be viewed from a range of different angles, as if it were still present.
- A photograph can be recorded using normal light sources (sunlight or electric lighting) whereas a laser is required to record a hologram.
- A lens is required in photography to record the image, whereas in holography, the light from the object is scattered directly onto the recording medium.
- A holographic recording requires a second light beam (the reference beam) to be directed onto the recording medium.
|
https://en.wikipedia.org/wiki/Holography
|
passage: Again, an optimal policy can always be found among stationary policies.
To define optimality in a formal manner, define the state-value of a policy
$$
\pi
$$
by
$$
V^{\pi} (s) = \mathbb{E}[G\mid s,\pi],
$$
where
$$
G
$$
stands for the discounted return associated with following
$$
\pi
$$
from the initial state
$$
s
$$
. Defining
$$
V^*(s)
$$
as the maximum possible state-value of
$$
V^\pi(s)
$$
, where
$$
\pi
$$
is allowed to change,
$$
V^*(s) = \max_\pi V^\pi(s).
$$
A policy that achieves these optimal state-values in each state is called optimal. Clearly, a policy that is optimal in this sense is also optimal in the sense that it maximizes the expected discounted return, since
$$
V^*(s) = \max_\pi \mathbb{E}[G\mid s,\pi]
$$
, where
$$
s
$$
is a state randomly sampled from the distribution
$$
\mu
$$
of initial states (so
$$
\mu(s) = \Pr(S_0 = s)
$$
).
Although state-values suffice to define optimality, it is useful to define action-values.
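As a concrete (entirely made-up) illustration of computing V*, the Bellman optimality update can be iterated on a tiny two-state, two-action MDP until the state-values converge:

```python
import numpy as np

gamma = 0.9
# Hypothetical MDP: P[a, s, s'] = transition probability, R[a, s] = expected reward.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],     # action 0
              [[0.5, 0.5], [0.6, 0.4]]])    # action 1
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])

V = np.zeros(2)
for _ in range(1000):
    # V*(s) = max_a [ R(s, a) + gamma * sum_s' P(s' | s, a) V*(s') ]
    Q = R.T + gamma * np.einsum('ast,t->sa', P, V)
    V = Q.max(axis=1)
print(V)   # approximate optimal state-values V*(s)
```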
|
https://en.wikipedia.org/wiki/Reinforcement_learning
|
passage: Exceptionally, a rotation block may be diagonal, equal to ±I. Thus, negating one column if necessary, and noting that a reflection diagonalizes to a +1 and −1, any orthogonal matrix can be brought to the form
$$
P^\mathrm{T}QP = \begin{bmatrix}
\begin{matrix}R_1 & & \\ & \ddots & \\ & & R_k\end{matrix} & 0 \\
0 & \begin{matrix}\pm 1 & & \\ & \ddots & \\ & & \pm 1\end{matrix} \\
\end{bmatrix},
$$
The matrices R_1, ..., R_k give conjugate pairs of eigenvalues lying on the unit circle in the complex plane; so this decomposition confirms that all eigenvalues have absolute value 1. If n is odd, there is at least one real eigenvalue, +1 or −1; for a rotation, the eigenvector associated with +1 is the rotation axis.
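The unit-modulus property of the eigenvalues is easy to confirm numerically (a sketch using an orthogonal matrix obtained from a QR factorization of a random matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # Q is orthogonal: Q^T Q = I
eigenvalues = np.linalg.eigvals(Q)
print(np.allclose(np.abs(eigenvalues), 1.0))       # True: all eigenvalues lie on the unit circle
```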
### Lie algebra
Suppose the entries of Q are differentiable functions of t, and that t = 0 gives Q = I.
|
https://en.wikipedia.org/wiki/Orthogonal_matrix
|
passage: ### Control structures
Following the structured program theorem, all programs are seen as composed of three control structures:
- "Sequence"; ordered statements or subroutines executed in sequence.
- "Selection"; one of a number of statements is executed depending on the state of the program. This is usually expressed with keywords such as `if..then..else..endif`. The conditional statement should have at least one true condition and each condition should have one exit point at max.
- "Iteration"; a statement or block is executed until the program reaches a certain state, or operations have been applied to every element of a collection. This is usually expressed with keywords such as `while`, `repeat`, `for` or `do..until`. Often it is recommended that each loop should only have one entry point (and in the original structural programming, also only one exit point, and a few languages enforce this).
### Subroutines
Subroutines; callable units such as procedures, functions, methods, or subprograms are used to allow a sequence to be referred to by a single statement.
### Blocks
Blocks are used to enable groups of statements to be treated as if they were one statement. Block-structured languages have a syntax for enclosing structures in some formal way, such as an if-statement bracketed by `if..fi` as in ALGOL 68, or a code section bracketed by `BEGIN..
|
https://en.wikipedia.org/wiki/Structured_programming
|
passage: And the difference we want to build confidence intervals for is:
$$
p_{*1} - p_{1*} = \frac{F_{11} + F_{01}}{N} - \frac{F_{11} + F_{10}}{N} = \frac{F_{01}}{N} - \frac{F_{10}}{N} = p_{01} - p_{10}
$$
Hence, a confidence interval for the marginal positive proportions (
$$
p_{*1} - p_{1*}
$$
) is the same as building a confidence interval for the difference of the proportions from the secondary diagonal of the two-by-two contingency table (
$$
p_{01} - p_{10}
$$
).
Calculating a p-value for such a difference is known as McNemar's test. A confidence interval around it can be constructed using the methods described above for Confidence intervals for the difference of two proportions.
The Wald confidence intervals from the previous section can be applied to this setting, and appear in the literature using alternative notations. Specifically, the SE often presented is based on the contingency table frequencies instead of the sample proportions.
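A sketch of such an interval in Python (using the sample-proportion form of the SE rather than the frequency-based variant mentioned above; the table values are made up):

```python
import numpy as np

F = np.array([[30, 12],     # F00, F01
              [ 5, 53]])    # F10, F11
N = F.sum()
p01, p10 = F[0, 1] / N, F[1, 0] / N

diff = p01 - p10                                     # estimate of p_{*1} - p_{1*}
se = np.sqrt((p01 + p10 - (p01 - p10) ** 2) / N)     # multinomial-based Wald SE
z = 1.96                                             # ~95% normal quantile
print(diff, (diff - z * se, diff + z * se))
```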
|
https://en.wikipedia.org/wiki/Multinomial_distribution
|
passage: The hydrogen–oxygen–hydrogen angle is 104.45°, which is less than the 109.47° for ideal sp3 hybridization. The valence bond theory explanation is that the oxygen atom's lone pairs are physically larger and therefore take up more space than the oxygen atom's bonds to the hydrogen atoms. The molecular orbital theory explanation (Bent's rule) is that lowering the energy of the oxygen atom's nonbonding hybrid orbitals (by assigning them more s character and less p character) and correspondingly raising the energy of the oxygen atom's hybrid orbitals bonded to the hydrogen atoms (by assigning them more p character and less s character) has the net effect of lowering the energy of the occupied molecular orbitals because the energy of the oxygen atom's nonbonding hybrid orbitals contributes completely to the energy of the oxygen atom's lone pairs while the energy of the oxygen atom's other two hybrid orbitals contributes only partially to the energy of the bonding orbitals (the remainder of the contribution coming from the hydrogen atoms' 1s orbitals).
## Chemical properties
### Self-ionization
In liquid water there is some self-ionization giving hydronium ions and hydroxide ions.
2 H₂O ⇌ H₃O⁺ + OH⁻
The equilibrium constant for this reaction, known as the ionic product of water,
$$
K_{\rm w}=[{\rm{H_3O^+}}][{\rm{OH^-}}]
$$
, has a value of about 10⁻¹⁴ at 25 °C.
|
https://en.wikipedia.org/wiki/Properties_of_water
|
passage: For ordered data this translates to a performance loss compared to a predictable branch.
Predication is most effective when paths are balanced or when the longest path is the most frequently executed, but determining such a path is very difficult at compile time, even in the presence of profiling information.
## History
Predicated instructions were popular in European computer designs of the 1950s, including the Mailüfterl (1955), the Zuse Z22 (1955), the ZEBRA (1958), and the Electrologica X1 (1958). The IBM ACS-1 design of 1967 allocated a "skip" bit in its instruction formats, and the CDC Flexible Processor in 1976 allocated three conditional execution bits in its microinstruction formats.
Hewlett-Packard's PA-RISC architecture (1986) had a feature called nullification, which allowed most instructions to be predicated by the previous instruction. IBM's POWER architecture (1990) featured conditional move instructions. POWER's successor, PowerPC (1993), dropped these instructions. Digital Equipment Corporation's Alpha architecture (1992) also featured conditional move instructions. MIPS gained conditional move instructions in 1994 with the MIPS IV version; and SPARC was extended in Version 9 (1994) with conditional move instructions for both integer and floating-point registers.
In the Hewlett-Packard/Intel IA-64 architecture, most instructions are predicated. The predicates are stored in 64 special-purpose predicate registers; and one of the predicate registers is always true so that unpredicated instructions are simply instructions predicated with the value true.
|
https://en.wikipedia.org/wiki/Predication_%28computer_architecture%29
|
passage: Let
$$
f: X\to Y
$$
be any map. The power sets P(X) and P(Y) are complete Boolean algebras, and the map
$$
f^{-1}: P(Y)\to P(X)
$$
is a homomorphism of complete Boolean algebras. Suppose the spaces X and Y are topological spaces, endowed with the topology O(X) and O(Y) of open sets on X and Y. Note that O(X) and O(Y) are subframes of P(X) and P(Y). If
$$
f
$$
is a continuous function, then
$$
f^{-1}: O(Y)\to O(X)
$$
preserves finite meets and arbitrary joins of these subframes. This shows that O is a functor from the category Top of topological spaces to Loc, taking any continuous map
$$
f: X\to Y
$$
to the map
$$
O(f): O(X)\to O(Y)
$$
in Loc that is defined in Frm to be the inverse image frame homomorphism
$$
f^{-1}: O(Y)\to O(X).
$$
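The preservation of finite meets and arbitrary joins by preimages can be checked concretely on finite sets (a toy sketch, independent of any particular topology):

```python
X = {1, 2, 3, 4}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'c'}                  # a map f: X -> Y

def preimage(S):
    """f^{-1}(S) as a subset of X."""
    return {x for x in X if f[x] in S}

U, V = {'a', 'b'}, {'b', 'c'}
print(preimage(U | V) == preimage(U) | preimage(V))   # True: joins are preserved
print(preimage(U & V) == preimage(U) & preimage(V))   # True: meets are preserved
```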
Given a map of locales
$$
f: A\to B
$$
in Loc, it is common to write
$$
f^*: B\to A
$$
for the frame homomorphism that defines it in Frm.
|
https://en.wikipedia.org/wiki/Complete_Heyting_algebra
|
passage: & = \int_V \rho(\mathbf{r}) (\mathbf{r}-\mathbf{R})\cdot (\mathbf{r}-\mathbf{R})dV + 2\mathbf{d}\cdot\left(\int_V \rho(\mathbf{r}) (\mathbf{r}-\mathbf{R}) \, dV\right) + \left(\int_V \rho(\mathbf{r}) \, dV\right)\mathbf{d}\cdot\mathbf{d}.
\end{align}
$$
The first term is the moment of inertia IR, the second term is zero by definition of the center of mass, and the last term is the total mass of the body times the square magnitude of the vector d.
|
https://en.wikipedia.org/wiki/Parallel_axis_theorem
|
passage: It was adapted from French usage, and is similar to the system that was documented or invented by Chuquet.
Traditional American usage (which was also adapted from French usage but at a later date), Canadian, and modern British usage assign new names for each power of one thousand (the short scale). Thus, a billion is 1000 × 1000² = 10⁹; a trillion is 1000 × 1000³ = 10¹²; and so forth. Due to its dominance in the financial world (and by the US dollar), this was adopted for official United Nations documents.
Traditional French usage has varied; in 1948, France, which had originally popularized the short scale worldwide, reverted to the long scale.
The term milliard is unambiguous and always means 10⁹. It is seldom seen in American usage and rarely in British usage, but frequently in continental European usage. The term is sometimes attributed to French mathematician Jacques Peletier du Mans (for this reason, the long scale is also known as the Chuquet-Peletier system), but the Oxford English Dictionary states that the term derives from post-Classical Latin term milliartum, which became milliare and then milliart and finally our modern term.
Concerning names ending in -illiard for numbers 10⁶ⁿ⁺³, milliard is certainly in widespread use in languages other than English, but the degree of actual use of the larger terms is questionable. The terms "milliardo" in Italian, "Milliarde" in German, "miljard" in Dutch, "milyar" in Turkish, and "миллиард," milliard (transliterated) in Russian, are standard usage when discussing financial topics.
|
https://en.wikipedia.org/wiki/Names_of_large_numbers
|
passage: Another common automotive use is in electronic stability control systems, which use a lateral accelerometer to measure cornering forces. The widespread use of accelerometers in the automotive industry has pushed their cost down dramatically. Another automotive application is the monitoring of noise, vibration, and harshness (NVH), conditions that cause discomfort for drivers and passengers and may also be indicators of mechanical faults.
Tilting trains use accelerometers and gyroscopes to calculate the required tilt.
### Volcanology
Modern electronic accelerometers are used in remote sensing devices intended for the monitoring of active volcanoes to detect the motion of magma.
### Consumer electronics
Accelerometers are increasingly being incorporated into personal electronic devices to detect the orientation of the device, for example, a display screen.
A free-fall sensor (FFS) is an accelerometer used to detect if a system has been dropped and is falling. It can then apply safety measures such as parking the head of a hard disk to prevent a head crash and resulting data loss upon impact. This device is included in the many common computer and consumer electronic products that are produced by a variety of manufacturers. It is also used in some data loggers to monitor handling operations for shipping containers. The length of time in free fall is used to calculate the height of drop and to estimate the shock to the package.
#### Motion input
Some smartphones, digital audio players and personal digital assistants contain accelerometers for user interface control; often the accelerometer is used to present landscape or portrait views of the device's screen, based on the way the device is being held.
|
https://en.wikipedia.org/wiki/Accelerometer
|
passage: Given a positive integer S, there may be infinitely many c such that the expression n² + n + c is always coprime to S. The integer c may be negative, in which case there is a delay before primes are produced.
It is known, based on Dirichlet's theorem on arithmetic progressions, that linear polynomial functions
$$
L(n) = an + b
$$
produce infinitely many primes as long as a and b are relatively prime (though no such function will assume prime values for all values of n). Moreover, the Green–Tao theorem says that for any k there exists a pair of a and b, with the property that
$$
L(n) = an+b
$$
is prime for any n from 0 through k − 1. However, as of 2020 the best known result of such type is for k = 27:
$$
224584605939537911 + 18135696597948930n
$$
is prime for all n from 0 through 26. It is not even known whether there exists a univariate polynomial of degree at least 2, that assumes an infinite number of values that are prime; see Bunyakovsky conjecture.
## Possible formula using a recurrence relation
Another prime generator is defined by the recurrence relation
$$
a_n = a_{n-1} + \gcd(n,a_{n-1}), \quad a_1 = 7,
$$
where gcd(x, y) denotes the greatest common divisor of x and y.
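A short sketch of this generator (the recurrence is iterated and the successive differences a_n − a_{n−1} are collected; per the passage's claim, the differences larger than 1 are the primes it produces):

```python
from math import gcd

a = 7                       # a_1 = 7
diffs = []
for n in range(2, 200):
    a_next = a + gcd(n, a)
    diffs.append(a_next - a)
    a = a_next

print(sorted(set(d for d in diffs if d > 1)))   # every printed value is prime (3, 5, 11, 23, 47, ...)
```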
|
https://en.wikipedia.org/wiki/Formula_for_primes
|
passage: There are enormous numbers of examples of these, and their classification is not completely understood. The example with smallest volume is the Weeks manifold. Other examples are given by the Seifert–Weber space, or "sufficiently complicated" Dehn surgeries on links, or most Haken manifolds. The geometrization conjecture implies that a closed 3-manifold is hyperbolic if and only if it is irreducible, atoroidal, and has infinite fundamental group. This geometry can be modeled as a left invariant metric on the Bianchi group of type V or VII_{h≠0}. Under Ricci flow, manifolds with hyperbolic geometry expand.
### The geometry of S² × R
The point stabilizer is O(2, R) × Z/2Z, and the group G is O(3, R) × R × Z/2Z, with 4 components. The four finite volume manifolds with this geometry are: S² × S¹, the mapping torus of the antipode map of S², the connected sum of two copies of 3-dimensional projective space, and the product of S¹ with two-dimensional projective space. The first two are mapping tori of the identity map and antipode map of the 2-sphere, and are the only examples of 3-manifolds that are prime but not irreducible. The third is the only example of a non-trivial connected sum with a geometric structure. This is the only model geometry that cannot be realized as a left invariant metric on a 3-dimensional Lie group.
|
https://en.wikipedia.org/wiki/Geometrization_conjecture
|
passage: In computer science, a 2–3 tree is a tree data structure, where every node with children (internal node) has either two children (2-node) and one data element or three children (3-node) and two data elements. A 2–3 tree is a B-tree of order 3. Nodes on the outside of the tree (leaf nodes) have no children and one or two data elements (pp. 145–147). 2–3 trees were invented by John Hopcroft in 1970.
2–3 trees are required to be balanced, meaning that each leaf is at the same level. It follows that each right, center, and left subtree of a node contains the same or close to the same amount of data.
## Definitions
We say that an internal node is a 2-node if it has one data element and two children.
We say that an internal node is a 3-node if it has two data elements and three children.
A 4-node, with three data elements, may be temporarily created during manipulation of the tree but is never persistently stored in the tree.
We say that T is a 2–3 tree if and only if one of the following statements hold:
- T is empty. In other words, T does not have any nodes.
- T is a 2-node with data element a. If T has left child L and right child R, then
- L and R are 2–3 trees of the same height;
- a is greater than each element in L; and
- a is less than each data element in R.
- T is a 3-node with data elements a and b, where a < b.
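For illustration, here is a small search routine over a hand-built 2–3 tree (the node representation below, a sorted list of one or two keys plus a list of children, is just one convenient encoding, not prescribed by the source):

```python
class Node:
    def __init__(self, keys, children=None):
        self.keys = keys                   # one key for a 2-node, two keys for a 3-node
        self.children = children or []     # empty list for a leaf

def search(node, key):
    if node is None:
        return False
    if key in node.keys:
        return True
    if not node.children:                  # reached a leaf without finding the key
        return False
    for i, k in enumerate(node.keys):      # pick the child whose range can contain the key
        if key < k:
            return search(node.children[i], key)
    return search(node.children[-1], key)

# Root is a 3-node with keys 7 and 12; all leaves are at the same level.
tree = Node([7, 12], [Node([3, 5]), Node([9]), Node([15, 20])])
print(search(tree, 9), search(tree, 8))    # True False
```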
|
https://en.wikipedia.org/wiki/2%E2%80%933_tree
|
passage: Also, shape is determined by only the outer boundary of an object.
### Congruence and similarity
Objects that can be transformed into each other by rigid transformations and mirroring (but not scaling) are congruent. An object is therefore congruent to its mirror image (even if it is not symmetric), but not to a scaled version. Two congruent objects always have either the same shape or mirror image shapes, and have the same size.
Objects that have the same shape or mirror image shapes are called geometrically similar, whether or not they have the same size. Thus, objects that can be transformed into each other by rigid transformations, mirroring, and uniform scaling are similar. Similarity is preserved when one of the objects is uniformly scaled, while congruence is not. Thus, congruent objects are always geometrically similar, but similar objects may not be congruent, as they may have different size.
### Homeomorphism
A more flexible definition of shape takes into consideration the fact that realistic shapes are often deformable, e.g. a person in different postures, a tree bending in the wind or a hand with different finger positions.
One way of modeling non-rigid movements is by homeomorphisms. Roughly speaking, a homeomorphism is a continuous stretching and bending of an object into a new shape. Thus, a square and a circle are homeomorphic to each other, but a sphere and a donut are not.
|
https://en.wikipedia.org/wiki/Shape
|
passage: The scale along each of the three axes and the angles among them are determined separately as dictated by the angle of viewing. Trimetric perspective is seldom used in technical drawings.
## Multiview projection
In multiview projection, up to six pictures of an object are produced, called primary views, with each projection plane parallel to one of the coordinate axes of the object. The views are positioned relative to each other according to either of two schemes: first-angle or third-angle projection. In each, the appearances of views may be thought of as being projected onto planes that form a six-sided box around the object. Although six different sides can be drawn, usually three views of a drawing give enough information to make a three-dimensional object. These views are known as front view (also elevation), top view (also plan) and end view (also section). When the plane or axis of the object depicted is not parallel to the projection plane, and where multiple sides of an object are visible in the same image, it is called an auxiliary view. Thus isometric projection, dimetric projection and trimetric projection would be considered auxiliary views in multiview projection. A typical characteristic of multiview projection is that one axis of space is usually displayed as vertical.
## Cartography
An orthographic projection map is a map projection of cartography. Like the stereographic projection and gnomonic projection, orthographic projection is a perspective (or azimuthal) projection, in which the sphere is projected onto a tangent plane or secant plane.
|
https://en.wikipedia.org/wiki/Orthographic_projection
|
passage: The resolved area on which the fiber experiences the force is increased by a factor of
$$
1/\cos \theta
$$
from rotation.
$$
A_{\mbox{res}}=A_{0}/\cos\theta
$$
. Taking the effective tensile strength to be
$$
(\mbox{T.S.})_{\mbox{c}}=F_{\mbox{res}}/A_{\mbox{res}}
$$
and the aligned tensile strength
$$
\sigma^*_\parallel=F/A
$$
.
$$
(\mbox{T.S.})_{\mbox{c}}\;(\mbox{longitudinal fracture})=\frac{\sigma^*_\parallel}{\cos^2\theta}
$$
At moderate angles,
$$
\theta \approx 45^{\circ}
$$
, the material experiences shear failure. The effective force direction is reduced with respect to the aligned direction.
$$
F_{\mbox{res}}=F\cos\theta
$$
. The resolved area on which the force acts is
$$
A_{\mbox{res}}=A_m/\sin\theta
$$
.
|
https://en.wikipedia.org/wiki/Composite_material
|
passage: - Klystron tube – invented by the brothers Russell and Sigurd Varian at Stanford. Their prototype was completed and demonstrated successfully on August 30, 1937. Upon publication in 1939, news of the klystron immediately influenced the work of U.S. and UK researchers working on radar equipment.
- RISC – ARPA funded VLSI project of microprocessor design. Stanford and UC Berkeley are most associated with the popularization of this concept. The Stanford MIPS would go on to be commercialized as the successful MIPS architecture, while Berkeley RISC gave its name to the entire concept, commercialized as the SPARC. Another success from this era were IBM's efforts that eventually led to the IBM POWER instruction set architecture, PowerPC, and Power ISA. As these projects matured, a wide variety of similar designs flourished in the late 1980s and especially the early 1990s, representing a major force in the Unix workstation market as well as embedded processors in laser printers, routers and similar products.
- SUN workstation – Andy Bechtolsheim designed the SUN workstation, for the Stanford University Network communications project as a personal CAD workstation, which led to Sun Microsystems.
- MIMO - Arogyaswami Paulraj and Thomas Kailath invented multiple-input and multiple-output (MIMO) radio communications, which involves simultaneously using multiple antennas on receivers and transmitters. Invented in 1992, MIMO is an essential element in many modern wireless technologies today.
|
https://en.wikipedia.org/wiki/Stanford_University
|
passage: ## th roots and polynomial roots
The definition of a square root of
$$
x
$$
as a number
$$
y
$$
such that
$$
y^2 = x
$$
has been generalized in the following way.
A cube root of
$$
x
$$
is a number
$$
y
$$
such that
$$
y^3 = x
$$
; it is denoted
$$
\sqrt[3]x.
$$
If n is an integer greater than two, an n-th root of
$$
x
$$
is a number
$$
y
$$
such that
$$
y^n = x
$$
; it is denoted
$$
\sqrt[n]x.
$$
Given any polynomial p, a root of p is a number y such that p(y) = 0. For example, the n-th roots of x are the roots of the polynomial (in y)
$$
y^n - x.
$$
Abel–Ruffini theorem states that, in general, the roots of a polynomial of degree five or higher cannot be expressed in terms of n-th roots.
## Square roots of matrices and operators
If A is a positive-definite matrix or operator, then there exists precisely one positive definite matrix or operator B with B² = A; we then define √A = B. In general matrices may have multiple square roots or even an infinitude of them. For example, the 2 × 2 identity matrix has an infinity of square roots, though only one of them is positive definite.
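A short numerical sketch of the positive-definite case (computing B from the eigendecomposition of A and checking that B² = A and that B is itself positive definite):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                  # symmetric positive definite
w, Q = np.linalg.eigh(A)                    # A = Q diag(w) Q^T with w > 0
B = Q @ np.diag(np.sqrt(w)) @ Q.T           # the unique positive-definite square root
print(np.allclose(B @ B, A))                # True
print(np.all(np.linalg.eigvalsh(B) > 0))    # True
```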
## In integral domains, including fields
Each element of an integral domain has no more than 2 square roots. The difference of two squares identity is proved using the commutativity of multiplication.
|
https://en.wikipedia.org/wiki/Square_root
|
passage: Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a multi-disciplinary field of data analysis, including statistics and databases.
In the early days of computing, a number of terms for the practitioners of the field of computing were suggested (albeit facetiously) in the Communications of the ACM—turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist. Three months later in the same journal, comptologist was suggested, followed next year by hypologist. The term computics has also been suggested. In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics, University of Edinburgh).
|
https://en.wikipedia.org/wiki/Computer_science
|
passage: ##### Primary pathogens
Primary pathogens cause disease as a result of their presence or activity within the normal, healthy host, and their intrinsic virulence (the severity of the disease they cause) is, in part, a necessary consequence of their need to reproduce and spread. Many of the most common primary pathogens of humans only infect humans, however, many serious diseases are caused by organisms acquired from the environment or that infect non-human hosts.
##### Opportunistic pathogens
Opportunistic pathogens can cause an infectious disease in a host with depressed resistance (immunodeficiency) or if they have unusual access to the inside of the body (for example, via trauma). Opportunistic infection may be caused by microbes ordinarily in contact with the host, such as pathogenic bacteria or fungi in the gastrointestinal or the upper respiratory tract, and they may also result from (otherwise innocuous) microbes acquired from other hosts (as in Clostridioides difficile colitis) or from the environment as a result of traumatic introduction (as in surgical wound infections or compound fractures). An opportunistic disease requires impairment of host defenses, which may occur as a result of genetic defects (such as chronic granulomatous disease), exposure to antimicrobial drugs or immunosuppressive chemicals (as might occur following poisoning or cancer chemotherapy), exposure to ionizing radiation, or as a result of an infectious disease with immunosuppressive activity (such as with measles, malaria or HIV disease).
|
https://en.wikipedia.org/wiki/Infection
|
passage: In computer science, a suffix automaton is an efficient data structure for representing the substring index of a given string which allows the storage, processing, and retrieval of compressed information about all its substrings. The suffix automaton of a string
$$
S
$$
is the smallest directed acyclic graph with a dedicated initial vertex and a set of "final" vertices, such that paths from the initial vertex to final vertices represent the suffixes of the string.
In terms of automata theory, a suffix automaton is the minimal partial deterministic finite automaton that recognizes the set of suffixes of a given string
$$
S=s_1 s_2 \dots s_n
$$
. The state graph of a suffix automaton is called a directed acyclic word graph (DAWG), a term that is also sometimes used for any deterministic acyclic finite state automaton.
Suffix automata were introduced in 1983 by a group of scientists from the University of Denver and the University of Colorado Boulder. They suggested a linear time online algorithm for its construction and showed that the suffix automaton of a string
$$
S
$$
having length at least two characters has at most
$$
2|S| - 1
$$
states and at most
$$
3|S| - 4
$$
transitions. Further works have shown a close connection between suffix automata and suffix trees, and have outlined several generalizations of suffix automata, such as compacted suffix automaton obtained by compression of nodes with a single outgoing arc.
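For illustration, here is a generic linear-time online construction in Python (a textbook-style sketch, not necessarily the exact 1983 formulation), followed by a substring-membership check and the state-count bound quoted above:

```python
class State:
    def __init__(self):
        self.next = {}     # transitions: character -> state index
        self.link = -1     # suffix link
        self.len = 0       # length of the longest substring in this state's class

def build_suffix_automaton(s):
    sa = [State()]         # state 0 is the initial state
    last = 0
    for ch in s:
        cur = len(sa)
        sa.append(State())
        sa[cur].len = sa[last].len + 1
        p = last
        while p != -1 and ch not in sa[p].next:
            sa[p].next[ch] = cur
            p = sa[p].link
        if p == -1:
            sa[cur].link = 0
        else:
            q = sa[p].next[ch]
            if sa[p].len + 1 == sa[q].len:
                sa[cur].link = q
            else:                               # split: clone q and redirect transitions
                clone = len(sa)
                sa.append(State())
                sa[clone].len = sa[p].len + 1
                sa[clone].next = dict(sa[q].next)
                sa[clone].link = sa[q].link
                while p != -1 and sa[p].next.get(ch) == q:
                    sa[p].next[ch] = clone
                    p = sa[p].link
                sa[q].link = clone
                sa[cur].link = clone
        last = cur
    return sa

def is_substring(sa, t):
    """Every substring of s is spelled by some path from the initial state."""
    v = 0
    for ch in t:
        if ch not in sa[v].next:
            return False
        v = sa[v].next[ch]
    return True

sa = build_suffix_automaton("abcbc")
print(is_substring(sa, "bcb"), is_substring(sa, "cbb"))   # True False
print(len(sa) <= 2 * len("abcbc") - 1)                    # True: at most 2|S| - 1 states
```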
|
https://en.wikipedia.org/wiki/Suffix_automaton
|
passage: In Euclidean geometry, Ptolemy's theorem is a relation between the four sides and two diagonals of a cyclic quadrilateral (a quadrilateral whose vertices lie on a common circle). The theorem is named after the Greek astronomer and mathematician Ptolemy (Claudius Ptolemaeus). Ptolemy used the theorem as an aid to creating his table of chords, a trigonometric table that he applied to astronomy.
If the vertices of the cyclic quadrilateral are A, B, C, and D in order, then the theorem states that:
$$
AC\cdot BD = AB\cdot CD+BC\cdot AD
$$
This relation may be verbally expressed as follows:
If a quadrilateral is cyclic then the product of the lengths of its diagonals is equal to the sum of the products of the lengths of the pairs of opposite sides.
Moreover, the converse of Ptolemy's theorem is also true:
In a quadrilateral, if the sum of the products of the lengths of its two pairs of opposite sides is equal to the product of the lengths of its diagonals, then the quadrilateral can be inscribed in a circle i.e. it is a cyclic quadrilateral.
To appreciate the utility and general significance of Ptolemy's Theorem, it is especially useful to study its main corollaries.
## Corollaries on inscribed polygons
### Equilateral triangle
Ptolemy's Theorem yields as a corollary a theorem regarding an equilateral triangle inscribed in a circle.
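The main relation itself is easy to check numerically (a small sketch with four points taken in order around the unit circle):

```python
import numpy as np

angles = np.array([0.3, 1.1, 2.5, 4.2])                  # increasing, so ABCD is cyclic in order
A, B, C, D = (np.array([np.cos(t), np.sin(t)]) for t in angles)
dist = lambda P, Q: np.linalg.norm(P - Q)

lhs = dist(A, C) * dist(B, D)                            # product of the diagonals
rhs = dist(A, B) * dist(C, D) + dist(B, C) * dist(A, D)  # sum of products of opposite sides
print(np.isclose(lhs, rhs))                              # True
```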
|
https://en.wikipedia.org/wiki/Ptolemy%27s_theorem
|
passage: If the substances are at the same temperature and pressure, there is no net exchange of heat or work – the entropy change is entirely due to the mixing of the different substances. At a statistical mechanical level, this results due to the change in available volume per particle with mixing.
### Equivalence of definitions
Proofs of equivalence between the entropy in statistical mechanics — the Gibbs entropy formula:
$$
S = - k_\mathsf{B} \sum_i{p_i \ln{p_i}}
$$
and the entropy in classical thermodynamics:
$$
\mathrm{d} S = \frac{\delta Q_\mathsf{rev}}{T}
$$
together with the fundamental thermodynamic relation are known for the microcanonical ensemble, the canonical ensemble, the grand canonical ensemble, and the isothermal–isobaric ensemble. These proofs are based on the probability density of microstates of the generalised Boltzmann distribution and the identification of the thermodynamic internal energy as the ensemble average
$$
U = \left\langle E_i \right\rangle
$$
. Thermodynamic relations are then employed to derive the well-known Gibbs entropy formula. However, the equivalence between the Gibbs entropy formula and the thermodynamic definition of entropy is not a fundamental thermodynamic relation but rather a consequence of the form of the generalized Boltzmann distribution.
Furthermore, it has been shown that the definition of entropy in statistical mechanics is the only entropy that is equivalent to the classical thermodynamics entropy under the following postulates:
|
https://en.wikipedia.org/wiki/Entropy
|
passage: The magnitude of this noise or distortion is determined by the number of quantization levels. In binary systems this is determined by, and typically stated in terms of, the number of bits. Each additional bit adds approximately 6 dB in possible SNR (e.g. 24 × 6 = 144 dB for 24-bit and 120 dB for 20-bit quantization). The 16-bit digital system of Red Book audio CD has 2¹⁶ = 65,536 possible signal amplitudes, theoretically allowing for an SNR of 98 dB.
### Rumble
Rumble is a form of noise characteristic caused by imperfections in the bearings of turntables. The platter tends to have a slight amount of motion besides the desired rotation and the turntable surface also moves up, down and side-to-side slightly. This additional motion is added to the desired signal as noise, usually of very low frequencies, creating a rumbling sound during quiet passages. Very inexpensive turntables sometimes used ball bearings, which are very likely to generate audible amounts of rumble. More expensive turntables tend to use massive sleeve bearings, which are much less likely to generate offensive amounts of rumble. Increased turntable mass also tends to lead to reduced rumble. A good turntable should have rumble at least 60 dB below the specified output level from the pick-up. Because they have no moving parts in the signal path, digital systems are not subject to rumble.
### Wow and flutter
Wow and flutter are a change in frequency of an analog device and are the result of mechanical imperfections.
|
https://en.wikipedia.org/wiki/Comparison_of_analog_and_digital_recording
|
passage: For an updated paper on the subject, see French and Krause (2010). Krause builds on the set theory ZFU, consisting of Zermelo-Fraenkel set theory with an ontology extended to include two kinds of urelements:
- m-atoms, whose intended interpretation is elementary quantum particles;
- M-atoms, macroscopic objects to which classical logic is assumed to apply.
Quasi-sets (q-sets) are collections resulting from applying axioms, very similar to those for ZFU, to a basic domain composed of m-atoms, M-atoms, and aggregates of these. The axioms of
$$
\mathfrak{Q}
$$
include equivalents of extensionality, but in a weaker form, termed "weak extensionality axiom"; axioms asserting the existence of the empty set, unordered pair, union set, and power set; the axiom of separation; an axiom stating the image of a q-set under a q-function is also a q-set; q-set equivalents of the axioms of infinity, regularity, and choice. Q-set theories based on other set-theoretical frameworks are, of course, possible.
$$
\mathfrak{Q}
$$
has a primitive concept of quasi-cardinal, governed by eight additional axioms, intuitively standing for the quantity of objects in a collection. The quasi-cardinal of a quasi-set is not defined in the usual sense (by means of ordinals) because the m-atoms are assumed (absolutely) indistinguishable.
|
https://en.wikipedia.org/wiki/Quasi-set_theory
|
passage: In mathematics, more specifically ring theory, the Jacobson radical of a ring R is the ideal consisting of those elements in R that annihilate all simple right R-modules. It happens that substituting "left" in place of "right" in the definition yields the same ideal, and so the notion is left–right symmetric. The Jacobson radical of a ring is frequently denoted by J(R) or rad(R); the former notation will be preferred in this article, because it avoids confusion with other radicals of a ring. The Jacobson radical is named after Nathan Jacobson, who was the first to study it for arbitrary rings in .
The Jacobson radical of a ring has numerous internal characterizations, including a few definitions that successfully extend the notion to non-unital rings. The radical of a module extends the definition of the Jacobson radical to include modules. The Jacobson radical plays a prominent role in many ring- and module-theoretic results, such as Nakayama's lemma.
## Definitions
There are multiple equivalent definitions and characterizations of the Jacobson radical, but it is useful to consider the definitions based on if the ring is commutative or not.
### Commutative case
In the commutative case, the Jacobson radical of a commutative ring R is defined as the intersection of all maximal ideals
$$
\mathfrak{m}
$$
.
|
https://en.wikipedia.org/wiki/Jacobson_radical
|
passage: ### Audio example
The qualitative effects of aliasing can be heard in the following audio demonstration. Six sawtooth waves are played in succession, with the first two sawtooths having a fundamental frequency of 440 Hz (A4), the second two having fundamental frequency of 880 Hz (A5), and the final two at 1760 Hz (A6). The sawtooths alternate between bandlimited (non-aliased) sawtooths and aliased sawtooths and the sampling rate is 22050 Hz. The bandlimited sawtooths are synthesized from the sawtooth waveform's Fourier series such that no harmonics above the Nyquist frequency (11025 Hz = 22050 Hz / 2 here) are present.
The aliasing distortion in the lower frequencies is increasingly obvious with higher fundamental frequencies, and while the bandlimited sawtooth is still clear at 1760 Hz, the aliased sawtooth is degraded and harsh with a buzzing audible at frequencies lower than the fundamental.
### Direction finding
A form of spatial aliasing can also occur in antenna arrays or microphone arrays used to estimate the direction of arrival of a wave signal, as in geophysical exploration by seismic waves. Waves must be sampled more densely than two points per wavelength, or the wave arrival direction becomes ambiguous.
|
https://en.wikipedia.org/wiki/Aliasing
|
passage: Bell acknowledged that abandoning this assumption would both allow for the maintenance of determinism as well as locality. This perspective is known as superdeterminism, and is defended by some physicists such as Sabine Hossenfelder and Tim Palmer.
More advanced variations on these arguments include quantum contextuality, by Bell, Simon B. Kochen and Ernst Specker, which argues that hidden variable theories cannot be "sensible", meaning that the values of the hidden variables inherently depend on the devices used to measure them.
This debate is relevant because there are possibly specific situations in which the arrival of an electron at a screen at a certain point and time would trigger one event, whereas its arrival at another point would trigger an entirely different event (e.g. see Schrödinger's cat—a thought experiment used as part of a deeper debate).
In his 1939 address "The Relation between Mathematics and Physics", Paul Dirac pointed out that purely deterministic classical mechanics cannot explain the cosmological origins of the universe; today the early universe is modeled quantum mechanically.
Nevertheless, the question of determinism in modern physics remains debated. On one hand, Albert Einstein's theory of relativity, which represents an advancement over Newtonian mechanics, is based on a deterministic framework. On the other hand, Einstein himself resisted the indeterministic view of quantum mechanics, as evidenced by his famous debates with Niels Bohr, which continued until his death.
Moreover, chaos theory highlights that even within a deterministic framework, the ability to precisely predict the evolution of a system is often limited.
|
https://en.wikipedia.org/wiki/Determinism
|
passage: Now apply the spectral theorem for compact operators on Hilbert spaces to TK to show the existence of the orthonormal basis {ei}i of L2[a,b]
$$
\lambda_i e_i(t)= [T_K e_i](t) = \int_a^b K(t,s) e_i(s)\, ds.
$$
If λi ≠ 0, the eigenvector (eigenfunction) ei is seen to be continuous on [a,b]. Now
$$
\sum_{i=1}^\infty \lambda_i |e_i(t) e_i(s)| \leq \sup_{x \in [a,b]} |K(x,x)|,
$$
which shows that the sequence
$$
\sum_{i=1}^\infty \lambda_i e_i(t) e_i(s)
$$
converges absolutely and uniformly to a kernel K0 which is easily seen to define the same operator as the kernel K. Hence K=K0 from which Mercer's theorem follows.
Finally, to show non-negativity of the eigenvalues one can write
$$
\lambda \langle f,f \rangle= \langle f, T_{K}f \rangle
$$
and expressing the right hand side as an integral well-approximated by its Riemann sums, which are non-negative
by positive-definiteness of K, implying
$$
\lambda \langle f,f \rangle \geq 0
$$
, implying
$$
\lambda \geq 0
$$
.
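A discretized sketch of the statement (assuming the positive-definite kernel K(s,t) = min(s,t) on [0,1], which is not the article's example): the spectrum of the discretized integral operator is non-negative, and the sum of lambda_i e_i(t) e_i(s) reconstructs K.

```python
import numpy as np

N = 200
t = (np.arange(N) + 0.5) / N          # grid on [0, 1]
dt = 1.0 / N
K = np.minimum.outer(t, t)            # kernel matrix K(t_i, t_j)

evals, evecs = np.linalg.eigh(K * dt) # discretized integral operator T_K
e = evecs / np.sqrt(dt)               # approximate eigenfunctions e_i(t_j)

print(bool(evals.min() >= -1e-12))                   # True: non-negative eigenvalues
K_rebuilt = (e * evals) @ e.T                        # sum_i lambda_i e_i(t) e_i(s)
print(bool(np.allclose(K_rebuilt, K, atol=1e-8)))    # True: reconstructs the kernel
```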
|
https://en.wikipedia.org/wiki/Mercer%27s_theorem
|
passage: The function
$$
n
$$
is determined by its values at the points
$$
\mathfrak{m}_p
$$
only, so we can think of
$$
n
$$
as a kind of "regular function" on the closed points, a very special type among the arbitrary functions
$$
f
$$
with
$$
f(\mathfrak{m}_p)\in \mathbb{F}_p
$$
.
Note that the point
$$
\mathfrak{m}_p
$$
is the vanishing locus of the function
$$
n=p
$$
, the point where the value of
$$
p
$$
is equal to zero in the residue field. The field of "rational functions" on
$$
Z
$$
is the fraction field of the generic residue ring,
$$
k(\mathfrak{p}_0)=\operatorname{Frac}(\mathbb{Z}) = \mathbb{Q}
$$
. A fraction
$$
a/b
$$
has "poles" at the points
$$
\mathfrak{m}_p
$$
corresponding to prime divisors of the denominator.
This also gives a geometric interpretation of Bezout's lemma stating that if the integers
$$
n_1,\ldots, n_r
$$
have no common prime factor, then there are integers
$$
a_1,\ldots,a_r
$$
with
$$
a_1 n_1+\cdots + a_r n_r = 1
$$
.
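A sketch of producing such Bézout coefficients with the extended Euclidean algorithm, reduced pairwise across the list (function names are illustrative, not from the article).

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def bezout(ns):
    """Coefficients a_i with sum(a_i * n_i) = gcd(ns), built pairwise."""
    g, coeffs = ns[0], [1]
    for n in ns[1:]:
        g, x, y = extended_gcd(g, n)
        coeffs = [c * x for c in coeffs] + [y]
    return g, coeffs

g, coeffs = bezout([6, 10, 15])   # no common prime factor, so gcd is 1
print(g, coeffs, sum(c * n for c, n in zip(coeffs, [6, 10, 15])))   # 1 [-14, 7, 1] 1
```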
|
https://en.wikipedia.org/wiki/Scheme_%28mathematics%29
|
passage: Disseminated disease
A disseminated disease has spread to other parts; with cancer, this is usually called metastatic disease.
Systemic disease
A systemic disease is a disease that affects the entire body, such as influenza or high blood pressure.
## Classification
Diseases may be classified by cause, pathogenesis (mechanism by which the disease is caused), or by symptoms. Alternatively, diseases may be classified according to the organ system involved, though this is often complicated since many diseases affect more than one organ.
A chief difficulty in nosology is that diseases often cannot be defined and classified clearly, especially when cause or pathogenesis are unknown. Thus diagnostic terms often only reflect a symptom or set of symptoms (syndrome).
Classical classification of human disease derives from the observational correlation between pathological analysis and clinical syndromes. Today it is preferred to classify them by their cause if it is known.
The most known and used classification of diseases is the World Health Organization's ICD. This is periodically updated. Currently, the last publication is the ICD-11.
## Causes
Diseases can be caused by any number of factors and may be acquired or congenital. Microorganisms, genetics, the environment or a combination of these can contribute to a diseased state.
Only some diseases such as influenza are contagious and commonly believed infectious. The microorganisms that cause these diseases are known as pathogens and include varieties of bacteria, viruses, protozoa, and fungi.
|
https://en.wikipedia.org/wiki/Disease
|
passage: this equation holds for all values of
$$
\theta
$$
, we get that
$$
[ \hat{n}\cdot \vec{L}, \hat{H}] =0
$$
, or that every angular momentum component commutes with the Hamiltonian.
Since
$$
L_z
$$
and
$$
L^2
$$
are such mutually commuting operators that also commute with the Hamiltonian, the wavefunctions can be expressed as
$$
|\alpha;\ell,m\rangle
$$
or
$$
\psi_{\alpha;\ell,m}(r,\theta,\phi)
$$
where
$$
\alpha
$$
is used to label different wavefunctions.
Since
$$
L_\pm = L_x \pm i L_y
$$
also commutes with the Hamiltonian, the energy eigenvalues in such cases are always independent of
$$
m
$$
.
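A small numerical illustration (a sketch, with hbar set to 1, not part of the article's derivation) using the standard matrices for l = 1: the components obey [Lx, Ly] = i Lz, L^2 equals l(l+1) times the identity on this subspace, and L^2 commutes with Lz, consistent with energy eigenvalues that do not depend on m.

```python
import numpy as np

s = 1 / np.sqrt(2)
Lx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ly = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz

print(np.allclose(Lx @ Ly - Ly @ Lx, 1j * Lz))   # True: [Lx, Ly] = i Lz
print(np.allclose(L2, 2 * np.eye(3)))            # True: l(l+1) = 2 for l = 1
print(np.allclose(L2 @ Lz - Lz @ L2, 0))         # True: [L^2, Lz] = 0
```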
|
https://en.wikipedia.org/wiki/Particle_in_a_spherically_symmetric_potential
|
passage: That means that the navigator should consult the third row, q = 3, on the toleta.
Suppose the ship sailed 100 miles on the SE-by-E bearing. To check his distance from the intended eastward course, the mariner will read the corresponding entry on the alargar column and immediately see he is 55 miles away from the intended course. The avanzar column informs him that having sailed 100 miles on the current SEbE course, he has covered 83 miles of the intended E course.
The next step is to determine how to return to the intended course. Continuing the example, to get back to the intended Eastward course, our mariner has to re-orient the ship's bearing in a northeasterly direction. But there are various northeasterly angles – NbE, NNE, NE, ENE, etc. The mariner has to choose the bearing – if he returns by a sharp angle (e.g. North by east), he will return to the intended course faster than at a more gentle gradient (e.g. East by north). Whichever angle he chooses, he must deduce exactly how long he must sail on that bearing in order to reach his old course. If he sails too long, he risks overshooting it.
Calculating the return course is what the last three columns of the toleta are for. In the fourth column, the return angles are expressed as quarters from the intended course bearing (not the current course bearing).
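A small sketch of the trigonometry behind the first two columns (assuming each quarter equals 11.25 degrees, one point of the 32-point compass; the function name is illustrative): sailing q quarters off the intended course, the alargar and avanzar per 100 miles are 100·sin(q·11.25°) and 100·cos(q·11.25°).

```python
import math

def alargar_avanzar(distance, quarters):
    """Distance off course (alargar) and distance made good (avanzar)
    after sailing `distance` miles at `quarters` quarter-winds off the course."""
    angle = math.radians(quarters * 11.25)
    return distance * math.sin(angle), distance * math.cos(angle)

alargar, avanzar = alargar_avanzar(100, 3)   # SE-by-E is 3 quarters off an East course
print(alargar, avanzar)   # ~55.6 and ~83.1, which the toleta rounds to 55 and 83
```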
|
https://en.wikipedia.org/wiki/Rule_of_marteloio
|
passage: ### Discrete version and linear programming formulation
In the case where the margins
$$
\mu
$$
and
$$
\nu
$$
are discrete, let
$$
\mu_x
$$
and
$$
\nu_y
$$
be the probability masses respectively assigned to
$$
x\in \mathbf{X}
$$
and
$$
y\in \mathbf{Y}
$$
, and let
$$
\gamma_{xy}
$$
be the probability of an
$$
xy
$$
assignment. The objective function in the primal Kantorovich problem is then
$$
\sum_{x\in \mathbf{X},y\in \mathbf{Y}} \gamma_{xy}c_{xy}
$$
and the constraint
$$
\gamma \in \Gamma(\mu ,\nu)
$$
expresses as
$$
\sum_{y\in \mathbf{Y}}\gamma_{xy}=\mu_x,\forall x\in \mathbf{X}
$$
and
$$
\sum_{x\in \mathbf{X}} \gamma_{xy}=\nu_y,\forall y\in \mathbf{Y}.
$$
In order to input this in a linear programming problem, we need to vectorize the matrix
$$
\gamma_{xy}
$$
by either stacking its columns or its rows, we call
$$
\operatorname{vec}
$$
this operation.
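A sketch of the discrete primal problem as a linear program, using assumed toy margins and costs and SciPy's linprog; here the vectorization stacks the rows of the coupling matrix.

```python
import numpy as np
from scipy.optimize import linprog

mu = np.array([0.4, 0.6])            # masses mu_x on X (toy data)
nu = np.array([0.5, 0.3, 0.2])       # masses nu_y on Y (toy data)
C = np.array([[0., 1., 2.],
              [1., 0., 1.]])         # costs c_xy

n, m = C.shape
c = C.reshape(-1)                    # vec(gamma) by stacking rows

# Equality constraints: row sums equal mu_x, column sums equal nu_y.
A_eq = np.zeros((n + m, n * m))
for x in range(n):
    A_eq[x, x * m:(x + 1) * m] = 1.0
for y in range(m):
    A_eq[n + y, y::m] = 1.0
b_eq = np.concatenate([mu, nu])      # one constraint is redundant but consistent

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
gamma = res.x.reshape(n, m)
print(gamma, res.fun)                # optimal coupling and its transport cost
```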
|
https://en.wikipedia.org/wiki/Transportation_theory_%28mathematics%29
|
passage: Then the pair
$$
\langle A_i,f_{ij}\rangle
$$
is called a direct system over
$$
I
$$
.
The direct limit of the direct system
$$
\langle A_i,f_{ij}\rangle
$$
is denoted by
$$
\varinjlim A_i
$$
and is defined as follows. Its underlying set is the disjoint union of the
$$
A_i
$$
's modulo a certain equivalence relation:
$$
\varinjlim A_i = \bigsqcup_i A_i\bigg/\sim.
$$
Here, if
$$
x_i\in A_i
$$
and
$$
x_j\in A_j
$$
, then
$$
x_i\sim\, x_j
$$
if and only if there is some
$$
k\in I
$$
with
$$
i \le k
$$
and
$$
j \le k
$$
such that
$$
f_{ik}(x_i) = f_{jk}(x_j)\,
$$
.
Intuitively, two elements in the disjoint union are equivalent if and only if they "eventually become equal" in the direct system. An equivalent formulation that highlights the duality to the inverse limit is that an element is equivalent to all its images under the maps of the direct system, i.e.
$$
x_i\sim\, f_{ij}(x_i)
$$
whenever
$$
i \le j
$$
.
|
https://en.wikipedia.org/wiki/Direct_limit
|
passage: Then results of clinical and laboratory analyses are studied to reveal statistically different variables in these groups. Using these variables, discriminant functions are built to classify disease severity in future patients. Additionally, Linear Discriminant Analysis (LDA) can help select more discriminative samples for data augmentation, improving classification performance.
In biology, similar principles are used in order to classify and define groups of different biological objects, for example, to define phage types of Salmonella enteritidis based on Fourier transform infrared spectra, to detect animal source of Escherichia coli studying its virulence factors etc.
### Earth science
This method can be used to . For example, when different data from various zones are available, discriminant analysis can find the pattern within the data and classify it effectively.
## Comparison to logistic regression
Discriminant function analysis is very similar to logistic regression, and both can be used to answer the same research questions. Logistic regression does not have as many assumptions and restrictions as discriminant analysis. However, when discriminant analysis' assumptions are met, it is more powerful than logistic regression. Unlike logistic regression, discriminant analysis can be used with small sample sizes. It has been shown that when sample sizes are equal, and homogeneity of variance/covariance holds, discriminant analysis is more accurate. Despite all these advantages, logistic regression has nonetheless become the common choice, since the assumptions of discriminant analysis are rarely met.
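As a small, assumed illustration (not from the article) of fitting both methods to the same data with scikit-learn; the iris data is only a convenient stand-in.

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
for model in (LinearDiscriminantAnalysis(), LogisticRegression(max_iter=1000)):
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(type(model).__name__, round(scores.mean(), 3))
```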
|
https://en.wikipedia.org/wiki/Linear_discriminant_analysis
|
passage: Examples include Vertico SMI, near field scanning optical microscopy which uses evanescent waves, and stimulated emission depletion. In 2005, a microscope capable of detecting a single molecule was described as a teaching tool.
Despite significant progress in the last decade, techniques for surpassing the diffraction limit remain limited and specialized.
While most techniques focus on increases in lateral resolution, there are also some techniques which aim to allow analysis of extremely thin samples. For example, sarfus methods place the thin sample on a contrast-enhancing surface and thereby allow films as thin as 0.3 nanometers to be visualized directly.
On 8 October 2014, the Nobel Prize in Chemistry was awarded to Eric Betzig, William Moerner and Stefan Hell for the development of super-resolved fluorescence microscopy.
#### Structured illumination SMI
SMI (spatially modulated illumination microscopy) is a light optical process of the so-called point spread function (PSF) engineering. These are processes which modify the PSF of a microscope in a suitable manner to either increase the optical resolution, to maximize the precision of distance measurements of fluorescent objects that are small relative to the wavelength of the illuminating light, or to extract other structural parameters in the nanometer range. Cremer, Christoph; Hausmann, Michael; Bradl, Joachim and Schneider, Bernhard "Wave field microscope with detection point spread function", , priority date 10 July 1997
|
https://en.wikipedia.org/wiki/Optical_microscope
|
passage: Then, given this relation, we can work out examples. For instance, we can apply this formula to find the Pontryagin classes of a complex vector bundle on a curve and on a surface. For a curve, all of the Pontryagin classes of complex vector bundles are trivial, since the cohomology groups in which they would live vanish for dimension reasons.
In general, looking at the first two terms of the product, we can see that
$$
p_1(E) = c_1(E)^2 - 2c_2(E)
$$
. In particular, for line bundles this simplifies further since
$$
c_2(L) = 0
$$
by dimension reasons.
### Pontryagin classes on a Quartic K3 Surface
Recall that a quartic polynomial whose vanishing locus in
$$
\mathbb{CP}^3
$$
is a smooth subvariety is a K3 surface. If we use the normal sequence, we can compute the Chern classes of the surface, showing
$$
c_1(X) = 0
$$
and
$$
c_2(X) = 6[H]^2
$$
. Since
$$
[H]^2
$$
corresponds to four points by Bézout's theorem, the second Chern number is
$$
24
$$
. Since
$$
p_1(X) = -2c_2(X)
$$
in this case, we have
$$
p_1(X) = -48
$$
. This number can be used to compute the third stable homotopy group of spheres.
## Pontryagin numbers
Pontryagin numbers are certain topological invariants of a smooth manifold.
|
https://en.wikipedia.org/wiki/Pontryagin_class
|
passage: Below: the martini covering/medial lattice, same as the 2×2, 1×1 subnet for kagome-type lattices (removed).
Some other examples of generalized bow-tie lattices (a-d) and the duals of the lattices (e-h):
Lattice z Site percolation threshold Bond percolation threshold martini ()(3,92)+()(93) 3 3 0.764826..., 1 + p4 − 3p3 = 0 0.707107... = 1/bow-tie (c) 3,43 0.672929..., 1 − 2p3 − 2p4 − 2p5 − 7p6 + 18p7 + 11p8 − 35p9 + 21p10 − 4p11 = 0bow-tie (d) 3,43 0.625457..., 1 − 2p2 − 3p3 + 4p4 − p5 = 0martini-A ()(3,72)+()(3,73)3,43 1/0.625457..., 1 − 2p2 − 3p3 + 4p4 − p5 = 0bow-tie dual (e) 3,43 0.595482..., 1-pcbond (bow-tie (a)) bow-tie (b) 3,4,63 0.533213..., 1 − p − 2p3 -4p4-4p5+156+ 13p7-36p8+19p9+ p10 + p11=0 martini covering/medial ()(33,9) + ()(3,9,3,9) 4 4 0.707107...
|
https://en.wikipedia.org/wiki/Percolation_threshold
|
passage: Most diodes therefore have a negative temperature coefficient, typically −2 mV/°C for silicon diodes. The temperature coefficient is approximately constant for temperatures above about 20 kelvin. Some graphs are given for 1N400x series, and CY7 cryogenic temperature sensor.
### Current steering
Diodes will prevent currents in unintended directions. To supply power to an electrical circuit during a power failure, the circuit can draw current from a battery. An uninterruptible power supply may use diodes in this way to ensure that the current is only drawn from the battery when necessary. Likewise, small boats typically have two circuits each with their own battery/batteries: one used for engine starting; one used for domestics. Normally, both are charged from a single alternator, and a heavy-duty split-charge diode is used to prevent the higher-charge battery (typically the engine battery) from discharging through the lower-charge battery when the alternator is not running.
Diodes are also used in electronic musical keyboards. To reduce the amount of wiring needed in electronic musical keyboards, these instruments often use keyboard matrix circuits. The keyboard controller scans the rows and columns to determine which note the player has pressed. The problem with matrix circuits is that, when several notes are pressed at once, the current can flow backward through the circuit and trigger "phantom keys" that cause "ghost" notes to play. To avoid triggering unwanted notes, most keyboard matrix circuits have diodes soldered with the switch under each key of the musical keyboard.
|
https://en.wikipedia.org/wiki/Diode
|
passage: ### Modern theory
The discovery of relativity and of quantum mechanics in the first decades of the 20th century transformed the conceptual basis of physics without reducing the practical value of most of the physical theories developed up to that time. Consequently the topics of physics have come to be divided into "classical physics" and "modern physics", with the latter category including effects related to quantum mechanics and relativity.
Classical physics is generally concerned with matter and energy on the normal scale of observation, while much of modern physics is concerned with the behavior of matter and energy under extreme conditions or on a very large or very small scale. For example, atomic and nuclear physics study matter on the smallest scale at which chemical elements can be identified. The physics of elementary particles is on an even smaller scale since it is concerned with the most basic units of matter; this branch of physics is also known as high-energy physics because of the extremely high energies necessary to produce many types of particles in particle accelerators. On this scale, ordinary, commonsensical notions of space, time, matter, and energy are no longer valid.
The two chief theories of modern physics present a different picture of the concepts of space, time, and matter from that presented by classical physics. Classical mechanics approximates nature as continuous, while quantum theory is concerned with the discrete nature of many phenomena at the atomic and subatomic level and with the complementary aspects of particles and waves in the description of such phenomena.
|
https://en.wikipedia.org/wiki/Physics
|
passage: In this book Fisher also outlined the Lady tasting tea, now a famous design of a statistical randomized experiment which uses Fisher's exact test and is the original exposition of Fisher's notion of a null hypothesis. OED quote: 1935 R. A. Fisher, The Design of Experiments ii. 19, "We may speak of this hypothesis as the 'null hypothesis'...the null hypothesis is never proved or established, but is possibly disproved, in the course of experimentation. "
The same year he also published a paper on fiducial inference and applied it to the Behrens–Fisher problem, the solution to which, proposed first by Walter Behrens and a few years later by Fisher, is the Behrens–Fisher distribution.
In 1936, he introduced the Iris flower data set as an example of discriminant analysis.
In his 1937 paper The wave of advance of advantageous genes he proposed Fisher's equation in the context of population dynamics to describe the spatial spread of an advantageous allele, and explored its travelling wave solutions. Out of this also came the Fisher–Kolmogorov equation.
In 1937, he visited the Indian Statistical Institute in Calcutta, and its one part-time employee, P. C. Mahalanobis, often returning to encourage its development. He was the guest of honour at its 25th anniversary in 1957, when it had 2000 employees.
In 1938, Fisher and Frank Yates described the Fisher–Yates shuffle in their book Statistical tables for biological, agricultural and medical research. Their description of the algorithm used pencil and paper; a table of random numbers provided the randomness.
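For reference, a sketch of the shuffle in its modern in-place (Durstenfeld) form, rather than the original pencil-and-paper procedure described in 1938.

```python
import random

def fisher_yates_shuffle(items):
    """In-place uniform shuffle of a list."""
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)          # pick from the not-yet-fixed prefix
        items[i], items[j] = items[j], items[i]
    return items

print(fisher_yates_shuffle(list(range(10))))
```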
|
https://en.wikipedia.org/wiki/Ronald_Fisher
|
passage: This can be understood intuitively since the magnitude operator reduces information by one bit (if the probability distribution at its input is even). Alternatively, since a half-normal distribution is always positive, the one bit it would take to record whether a standard normal random variable were positive (say, a 1) or negative (say, a 0) is no longer necessary. Thus,
$$
h(Y) = \frac{1}{2} \log_2 \left( \frac{\pi e \sigma^2}{2} \right) = \frac{1}{2} \log_2 \left( 2\pi e \sigma^2 \right) -1.
$$
## Applications
The half-normal distribution is commonly utilized as a prior probability distribution for variance parameters in Bayesian inference applications.
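A quick numerical check of the one-bit gap between the two entropies, using only the closed-form formulas quoted above.

```python
import numpy as np

sigma = 2.0
h_normal = 0.5 * np.log2(2 * np.pi * np.e * sigma**2)   # entropy of N(0, sigma^2), bits
h_half = 0.5 * np.log2(np.pi * np.e * sigma**2 / 2)     # entropy of the half-normal, bits
print(h_normal - h_half)                                 # 1.0: exactly one bit less
```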
|
https://en.wikipedia.org/wiki/Half-normal_distribution
|
passage: Each range is divided into a multitude of channels. In the standards, channels are numbered at 5 MHz spacing within a band (except in the 60 GHz band, where they are 2.16 GHz apart), and the number refers to the centre frequency of the channel. Although channels are numbered at 5 MHz spacing, transmitters generally occupy at least 20 MHz, and standards allow for neighbouring channels to be bonded together to form a wider channel for higher throughput.
Countries apply their own regulations to the allowable channels, allowed users and maximum power levels within these frequency ranges. 802.11b/g/n can use the 2.4 GHz band, operating in the United States under FCC Part 15 rules and regulations. In this frequency band, equipment may occasionally suffer interference from microwave ovens, cordless telephones, USB 3.0 hubs, Bluetooth and other devices.
Spectrum assignments and operational limitations are not consistent worldwide: Australia and Europe allow for an additional two channels (12, 13) beyond the 11 permitted in the United States for the 2.4 GHz band, while Japan has three more (12–14).
802.11a/h/j/n/ac/ax can use the 5 GHz U-NII band, which, for much of the world, offers at least 23 non-overlapping 20 MHz channels. This is in contrast to the 2.4 GHz frequency band where the channels are only 5 MHz wide. In general, lower frequencies have longer range but have less capacity. The 5 GHz bands are absorbed to a greater degree by common building materials than the 2.4 GHz bands and usually give a shorter range.
|
https://en.wikipedia.org/wiki/Wi-Fi
|
passage: Energy is released by bond formation. This is not as a result of reduction in potential energy, because the attraction of the two electrons to the two protons is offset by the electron-electron and proton-proton repulsions. Instead, the release of energy (and hence stability of the bond) arises from the reduction in kinetic energy due to the electrons being in a more spatially distributed (i.e. longer de Broglie wavelength) orbital compared with each electron being confined closer to its respective nucleus. These bonds exist between two particular identifiable atoms and have a direction in space, allowing them to be shown as single connecting lines between atoms in drawings, or modeled as sticks between spheres in models.
In a polar covalent bond, one or more electrons are unequally shared between two nuclei.
### Covalent bond
Covalent bonds often result in the formation of small collections of better-connected atoms called molecules, which in solids and liquids are bound to other molecules by forces that are often much weaker than the covalent bonds that hold the molecules internally together. Such weak intermolecular bonds give organic molecular substances, such as waxes and oils, their soft bulk character, and their low melting points (in liquids, molecules must cease most structured or oriented contact with each other).
|
https://en.wikipedia.org/wiki/Chemical_bond
|
passage: The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea. Michael Polanyi made such creativity the centerpiece of his discussion of methodology.
William Glen observes that
In general, scientists tend to look for theories that are "elegant" or "beautiful". Scientists often use these terms to refer to a theory that is following the known facts but is nevertheless relatively simple and easy to handle. Occam's Razor serves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses.
To minimize the confirmation bias that results from entertaining a single hypothesis, strong inference emphasizes the need for entertaining multiple alternative hypotheses, and avoiding artifacts.
### Predictions from the hypothesis
James D. Watson, Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be 'x shaped'. "Watson did enough work on Tobacco mosaic virus to produce the diffraction pattern for a helix, per Crick's work on the transform of a helix.": June 1952 — Watson had succeeded in getting X-ray pictures of TMV showing a diffraction pattern consistent with the transform of a helix. This prediction followed from the work of Cochran, Crick and Vand: Cochran W, Crick FHC and Vand V. (1952) "The Structure of Synthetic Polypeptides. I.
|
https://en.wikipedia.org/wiki/Scientific_method
|
passage: Improving the bound
$$
O(\sqrt{x})
$$
in this formula is known as Dirichlet's divisor problem.
The behaviour of the sigma function is irregular. The asymptotic growth rate of the sigma function can be expressed by:
$$
\limsup_{n\rightarrow\infty}\frac{\sigma(n)}{n\,\log \log n}=e^\gamma,
$$
where lim sup is the limit superior. This result is Grönwall's theorem, published in 1913 . His proof uses Mertens' third theorem, which says that:
$$
\lim_{n\to\infty}\frac{1}{\log n}\prod_{p\le n}\frac{p}{p-1}=e^\gamma,
$$
where p denotes a prime.
In 1915, Ramanujan proved that under the assumption of the Riemann hypothesis, Robin's inequality
$$
\ \sigma(n) < e^\gamma n \log \log n
$$
(where γ is the Euler–Mascheroni constant)
holds for all sufficiently large n. The largest known value that violates the inequality is n=5040. In 1984, Guy Robin proved that the inequality is true for all n > 5040 if and only if the Riemann hypothesis is true. This is Robin's theorem, and the inequality became known after him.
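A small sketch checking Robin's inequality numerically with a divisor-sum sieve; within the range searched here, the largest violation found is n = 5040, as stated above.

```python
import math

EULER_GAMMA = 0.5772156649015329   # Euler–Mascheroni constant
N = 10000

# sig[n] = sigma(n), the sum of divisors of n, via a sieve.
sig = [0] * (N + 1)
for d in range(1, N + 1):
    for multiple in range(d, N + 1, d):
        sig[multiple] += d

violations = [n for n in range(3, N + 1)
              if sig[n] >= math.exp(EULER_GAMMA) * n * math.log(math.log(n))]
print(violations[-1], len(violations))   # largest violation in this range: 5040
```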
|
https://en.wikipedia.org/wiki/Divisor_function
|
passage: The scalar is a mathematical number representing a physical quantity. Examples of scalar fields in applications include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields (known as scalar bosons), such as the Higgs field. These fields are the subject of scalar field theory.
### Vector fields
A vector field is an assignment of a vector to each point in a space. A vector field in the plane, for instance, can be visualized as a collection of arrows with a given magnitude and direction each attached to a point in the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout space, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from point to point. This can be used, for example, to calculate work done over a line.
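A sketch of such a work computation (the field and path are assumed for illustration): the line integral of F(x, y) = (−y, x) around the unit circle, approximated by summing F · Δr along the discretized path.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 2001)
r = np.column_stack([np.cos(t), np.sin(t)])      # points on the unit circle
F = np.column_stack([-r[:, 1], r[:, 0]])         # field evaluated on the path
dr = np.diff(r, axis=0)
W = np.sum(((F[:-1] + F[1:]) / 2) * dr)          # sum of F . dr (trapezoid-style)
print(W)                                          # ~2*pi for this field and path
```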
### Vectors and pseudovectors
In more advanced treatments, one further distinguishes pseudovector fields and pseudoscalar fields, which are identical to vector fields and scalar fields, except that they change sign under an orientation-reversing map: for example, the curl of a vector field is a pseudovector field, and if one reflects a vector field, the curl points in the opposite direction. This distinction is clarified and elaborated in geometric algebra, as described below.
## Vector algebra
The algebraic (non-differential) operations in vector calculus are referred to as vector algebra, being defined for a vector space and then applied pointwise to a vector field.
|
https://en.wikipedia.org/wiki/Vector_calculus
|
passage: ## COMIT based
- COMIT
- SNOBOL
- Icon
- Unicon
- Lua (also under Modula and Scheme)
- Ring (also under C, BASIC, Ruby, Python, C#)
## DCL based
- DCL
- Windows PowerShell (also under C#, ksh, and Perl)
## ed based
- ed (programming language)
- sed
- AWK
- Perl (also under C)
## Eiffel based
- Eiffel
- Cobra (design by contract)
- Sather
- Ubercode
## Forth based
- Forth
- InterPress
- PostScript
- Joy
- Factor
- Rebol (also under Lisp)
- RPL (also under Lisp)
## Fortran based
- Fortran
- Fortran II
- BASIC (see also BASIC based)
- SAKO
- Fortran IV
- WATFOR
- WATFIV
- Fortran 66
- FORMAC
- Ratfor
- Fortran 77
- WATFOR-77
- Ratfiv
- Fortran 90
- Fortran 95
- F
- Fortran 2003
- Fortran 2008
- Fortran 2018
- ALGOL (see also ALGOL based)
## FP based
- FP (Function Programming)
- FL (Function Level)
- J (also under APL)
- FPr (also under Lisp and object-oriented programming)
## HyperTalk based
- HyperTalk
- ActionScript (also under JavaScript)
- AppleScript
- LiveCode
- SenseTalk
- SuperTalk
- Transcript
Java based
- Java (also under C)
- Ateji PX
- C#
- Ceylon
- Fantom
- Apache Groovy
- OptimJ
- Processing
- Scala
- Join Java
- J#
- Kotlin
- X10
|
https://en.wikipedia.org/wiki/Generational_list_of_programming_languages
|
passage: Then, by the construction,
the time taken for a secondary wavefront from to reach has at most a second-order dependence on the displacement , and
the time taken for a secondary wavefront to reach from has at most a second-order dependence on the displacement .
By (i), the ray path is a path of stationary traversal time from to ; and by (ii), it is a path of stationary traversal time from a point on to .
So Huygens' construction implicitly defines a ray path as a path of stationary traversal time between successive positions of a wavefront, the time being reckoned from a point-source on the earlier wavefront. This conclusion remains valid if the secondary wavefronts are reflected or refracted by surfaces of discontinuity in the properties of the medium, provided that the comparison is restricted to the affected paths and the affected portions of the wavefronts.
Fermat's principle, however, is conventionally expressed in point-to-point terms, not wavefront-to-wavefront terms. Accordingly, let us modify the example by supposing that the wavefront which becomes surface at time , and which becomes surface at the later time , is emitted from point at time . Let be a point on (as before), and a point on . And let , , , and be given, so that the problem is to find .
If satisfies Huygens' construction, so that the secondary wavefront from is tangential to at , then is a path of stationary traversal time from to .
|
https://en.wikipedia.org/wiki/Fermat%27s_principle
|
passage: In computer science, the segment tree is a data structure used for storing information about intervals or segments. It allows querying which of the stored segments contain a given point. A similar data structure is the interval tree.
A segment tree for a set of n intervals uses O(n log n) storage and can be built in O(n log n) time. Segment trees support searching for all the intervals that contain a query point in time O(log n + k), k being the number of retrieved intervals or segments.
Applications of the segment tree are in the areas of computational geometry, geographic information systems and machine learning.
The segment tree can be generalized to higher dimension spaces.
## Definition
### Description
Let be a set of intervals, or segments. Let p1, p2, ..., pm be the list of distinct interval endpoints, sorted from left to right. Consider the partitioning of the real line induced by those points. The regions of this partitioning are called elementary intervals. Thus, the elementary intervals are, from left to right:
$$
(-\infty, p_1), [p_1,p_1], (p_1, p_2), [p_2, p_2], \dots, (p_{m-1}, p_m), [p_m, p_m], (p_m, +\infty)
$$
That is, the list of elementary intervals consists of open intervals between two consecutive endpoints pi and pi+1, alternated with closed intervals consisting of a single endpoint.
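A sketch of this first construction step (the function name is illustrative): computing the elementary intervals from the sorted distinct endpoints. Tuples with equal entries stand for the closed singletons, the others for open intervals.

```python
import math

def elementary_intervals(intervals):
    endpoints = sorted({p for iv in intervals for p in iv})
    elems = [(-math.inf, endpoints[0])]             # open interval before p1
    for a, b in zip(endpoints, endpoints[1:]):
        elems.append((a, a))                        # closed singleton [a, a]
        elems.append((a, b))                        # open interval (a, b)
    elems.append((endpoints[-1], endpoints[-1]))
    elems.append((endpoints[-1], math.inf))
    return elems

print(elementary_intervals([(1, 3), (2, 5)]))
# [(-inf,1), (1,1), (1,2), (2,2), (2,3), (3,3), (3,5), (5,5), (5,inf)]
```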
|
https://en.wikipedia.org/wiki/Segment_tree
|
passage: That is,
$$
\biggl(\sum_i u_i v_i\biggr)^2 - \biggl(\sum_i {u_i^2}\biggr) \biggl(\sum_i {v_i^2}\biggr) \leq 0.
$$
### n-dimensional complex space
If
$$
\mathbf{u}, \mathbf{v} \in \Complex^n
$$
with
$$
\mathbf{u} = (u_1, \ldots, u_n)
$$
and
$$
\mathbf{v} = (v_1, \ldots, v_n)
$$
(where
$$
u_1, \ldots, u_n \in \Complex
$$
and
$$
v_1, \ldots, v_n \in \Complex
$$
) and if the inner product on the vector space
$$
\Complex^n
$$
is the canonical complex inner product (defined by
$$
\langle \mathbf{u}, \mathbf{v} \rangle := u_1 \overline{v_1} + \cdots + u_{n} \overline{v_n},
$$
where the bar notation is used for complex conjugation), then the inequality may be restated more explicitly as follows:
$$
\bigl|\langle \mathbf{u}, \mathbf{v} \rangle\bigr|^2
|
https://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality
|
passage: The category of species is equivalent to the category of symmetric sequences in finite sets.
## Definition of species
Any species consists of individual combinatorial structures built on the elements of some finite set: for example, a combinatorial graph is a structure of edges among a given set of vertices, and the species of graphs includes all graphs on all finite sets. Furthermore, a member of a species can have its underlying set relabeled by the elements of any other equinumerous set, for example relabeling the vertices of a graph gives "the same graph structure" on the new vertices, i.e. an isomorphic graph.
This leads to the formal definition of a combinatorial species. Let
$$
\mathcal{B}
$$
be the category of finite sets, with the morphisms of the category being the bijections between these sets. A species is a functor
$$
F\colon \mathcal{B} \to \mathcal{B}.
$$
For each finite set A in
$$
\mathcal{B}
$$
, the finite set F[A] is called the set of F-structures on A, or the set of structures of species F on A. Further, by the definition of a functor, if φ is a bijection between sets A and B, then F[φ] is a bijection between the sets of F-structures F[A] and F[B], called transport of F-structures along φ.
|
https://en.wikipedia.org/wiki/Combinatorial_species
|
passage: While simple, flat file systems become awkward as the number of files grows, making it difficult to organize data into related groups of files.
A recent addition to the flat file system family is Amazon's S3, a remote storage service, which is intentionally simplistic to allow users the ability to customize how their data is stored. The only constructs are buckets (imagine a disk drive of unlimited size) and objects (similar, but not identical to the standard concept of a file). Advanced file management is allowed by being able to use nearly any character (including '/') in the object's name, and the ability to select subsets of the bucket's content based on identical prefixes.
## Implementations
An operating system (OS) typically supports one or more file systems. Sometimes an OS and its file system are so tightly interwoven that it is difficult to describe them independently.
An OS typically provides file system access to the user. Often an OS provides command line interface, such as Unix shell, Windows Command Prompt and PowerShell, and OpenVMS DCL. An OS often also provides graphical user interface file browsers such as MacOS Finder and Windows File Explorer.
### Unix and Unix-like operating systems
Unix-like operating systems create a virtual file system, which makes all the files on all the devices appear to exist in a single hierarchy. This means, in those systems, there is one root directory, and every file existing on the system is located under it somewhere. Unix-like systems can use a RAM disk or network shared resource as its root directory.
|
https://en.wikipedia.org/wiki/File_system
|
passage: It is sometimes assumed that quantitative decision analysis can be applied only to factors that lend themselves easily to measurement (e.g., in natural units such as dollars). However, quantitative decision analysis and related methods, such as applied information economics, can also be applied even to seemingly intangible factors.
## Decision analysis as a prescriptive approach
Prescriptive decision-making research focuses on how to make "optimal" decisions (based on the axioms of rationality), while descriptive decision-making research aims to explain how people actually make decisions (regardless of whether their decisions are "good" or optimal). Unsurprisingly, therefore, there are numerous situations in which decisions made by individuals depart markedly from the decisions that would be recommended by decision analysis.
Some have criticized formal methods of decision analysis for allowing decision makers to avoid taking responsibility for their own decisions, and instead recommend reliance on intuition or "gut feelings". Moreover, for decisions that must be made under significant time pressure, it is not surprising that formal methods of decision analysis are of little use, with intuition and expertise becoming more important.
However, when time permits, studies have demonstrated that quantitative algorithms for decision making can yield results that are superior to "unaided intuition". In addition, despite the known biases in the types of human judgments required for decision analysis, research has shown at least a modest benefit of training and feedback in reducing bias.
Critics cite the phenomenon of paralysis by analysis as one possible consequence of over-reliance on decision analysis in organizations (the expense of decision analysis is in itself a factor in the analysis). However, strategies are available to reduce such risk.
|
https://en.wikipedia.org/wiki/Decision_analysis
|
passage: Implementations of sparse matrix structures that are efficient on modern parallel computer architectures are an object of current investigation.
List structures include the edge list, an array of pairs of vertices, and the adjacency list, which separately lists the neighbors of each vertex: Much like the edge list, each vertex has a list of which vertices it is adjacent to.
Matrix structures include the incidence matrix, a matrix of 0's and 1's whose rows represent vertices and whose columns represent edges, and the adjacency matrix, in which both the rows and columns are indexed by vertices. In both cases a 1 indicates two adjacent objects and a 0 indicates two non-adjacent objects. The degree matrix indicates the degree of vertices. The Laplacian matrix is a modified form of the adjacency matrix that incorporates information about the degrees of the vertices, and is useful in some calculations such as Kirchhoff's theorem on the number of spanning trees of a graph.
The distance matrix, like the adjacency matrix, has both its rows and columns indexed by vertices, but rather than containing a 0 or a 1 in each cell it contains the length of a shortest path between two vertices.
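A short sketch (for the triangle graph K3, an assumed example) building the adjacency list, adjacency matrix, degree matrix and Laplacian, and using a cofactor of the Laplacian to count spanning trees as in Kirchhoff's theorem.

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]     # the triangle graph K3
n = 3

adj_list = {v: [] for v in range(n)}
A = np.zeros((n, n), dtype=int)      # adjacency matrix
for u, v in edges:
    adj_list[u].append(v)
    adj_list[v].append(u)
    A[u, v] = A[v, u] = 1

D = np.diag(A.sum(axis=1))           # degree matrix
L = D - A                            # Laplacian matrix
spanning_trees = round(np.linalg.det(L[1:, 1:]))   # any cofactor of L
print(adj_list, spanning_trees)      # ... 3 spanning trees for K3
```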
## Problems
### Enumeration
There is a large literature on graphical enumeration: the problem of counting graphs meeting specified conditions. Some of this work is found in Harary and Palmer (1973).
|
https://en.wikipedia.org/wiki/Graph_theory
|
passage: &=
\biggl\langle
BLOCK1\biggr\rangle
-
\biggl\langle
BLOCK2\biggr\rangle
\\
&=
-\tfrac12 i k
\biggl\langle
BLOCK3\biggr\rangle
- \rho k
\biggl\langle
BLOCK4\biggr\rangle
\\
&=
- i \pi k \sum_{p+q=k} \hat{u}_p \hat{u}_q - 2\pi\rho{}k^2\hat{u}_k.
\end{align}
$$
Assemble the three terms for each
$$
k
$$
to obtain
$$
2 \pi \partial_t \hat{u}_k
=
- i \pi k \sum_{p+q=k} \hat{u}_p \hat{u}_q
- 2\pi\rho{}k^2\hat{u}_k
+ 2 \pi \hat{f}_k
\quad k\in\left\{ -\tfrac12N,\dots,\tfrac12N-1 \right\}, \forall t>0.
$$
Dividing through by
$$
2\pi
$$
, we finally arrive at
$$
\partial_t \hat{u}_k
=
- \frac{i k}{2} \sum_{p+q=k} \hat{u}_p \hat{u}_q
- \rho{}k^2\hat{u}_k
+ \hat{f}_k
|
https://en.wikipedia.org/wiki/Spectral_method
|
passage: Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional.
The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed below is generated by a system of three differential equations such as:
$$
\begin{align}
\frac{\mathrm{d}x}{\mathrm{d}t} &= \sigma y - \sigma x, \\
\frac{\mathrm{d}y}{\mathrm{d}t} &= \rho x - x z - y, \\
\frac{\mathrm{d}z}{\mathrm{d}t} &= x y - \beta z.
\end{align}
$$
where
$$
x
$$
,
$$
y
$$
, and
$$
z
$$
make up the system state,
$$
t
$$
is time, and
$$
\sigma
$$
,
$$
\rho
$$
,
$$
\beta
$$
are the system parameters. Five of the terms on the right hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott found a three-dimensional system with just five terms, that had only one nonlinear term, which exhibits chaos for certain parameter values.
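A sketch integrating this system with SciPy, using the classic parameter values sigma = 10, rho = 28, beta = 8/3 for which the Lorenz attractor appears.

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, state):
    x, y, z = state
    return [sigma * (y - x), rho * x - x * z - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0],
                t_eval=np.linspace(0, 40, 4000))
print(sol.y[:, -1])   # final state; nearby initial conditions diverge rapidly
```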
|
https://en.wikipedia.org/wiki/Chaos_theory
|
passage: ## History
The earliest treatise dedicated to the general study of plant and animal poisons, including their classification, recognition, and the treatment of their effects is the Kalpasthāna, one of the major sections of the Suśrutasaṃhitā, a Sanskrit work composed before ca. 300 CE and perhaps in part as early as the fourth century BCE. The Kalpasthāna was influential on many later Sanskrit medical works and was translated into Arabic and other languages, influencing South East Asia, the Middle East, Tibet and eventually Europe.
Dioscorides, a Greek physician in the court of the Roman emperor Nero, made an early attempt to classify plants according to their toxic and therapeutic effect. A work attributed to the 10th century author Ibn Wahshiyya called the Book on Poisons describes various toxic substances and poisonous recipes that can be made using magic. A 14th century Kannada poetic work attributed to the Jain prince Mangarasa, Khagendra Mani Darpana, describes several poisonous plants.
The 16th-century Swiss physician Paracelsus is considered "the father" of modern toxicology, based on his rigorous (for the time) approach to understanding the effects of substances on the body. He is credited with the classic toxicology maxim, "Alle Dinge sind Gift und nichts ist ohne Gift; allein die Dosis macht, dass ein Ding kein Gift ist." which translates as, "All things are poisonous and nothing is without poison; only the dose makes a thing not poisonous." This is often condensed to: "The dose makes the poison" or in Latin "Sola dosis facit venenum".
|
https://en.wikipedia.org/wiki/Toxicology
|
passage:
$$
c(t)
$$
is period t consumption,
$$
k(t)
$$
is period t capital per worker (with
$$
k(0) = k_{0} > 0
$$
),
$$
f(k(t))
$$
is period t production,
$$
n
$$
is the population growth rate,
$$
\delta
$$
is the capital depreciation rate, the agent discounts future utility at rate
$$
\rho
$$
, with
$$
u'>0
$$
and
$$
u''<0
$$
.
Here,
$$
k(t)
$$
is the state variable which evolves according to the above equation, and
$$
c(t)
$$
is the control variable. The Hamiltonian becomes
$$
H(k,c,\mu,t)=e^{-\rho t}u(c(t))+\mu(t)\dot{k}=e^{-\rho t}u(c(t))+\mu(t)[f(k(t)) - (n + \delta)k(t) - c(t)]
$$
The optimality conditions are
$$
\frac{\partial H}{\partial c}=0 \Rightarrow
e^{-\rho t}u'(c)=\mu(t)
$$
$$
\frac{\partial H}{\partial k}=-\frac{\partial \mu}{\partial t}=-\dot{\mu} \Rightarrow \mu(t)[f'(k)-(n+\delta)]=-\dot{\mu}
$$
in addition to the transversality condition
$$
\mu(T)k(T)=0
$$
.
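A symbolic sketch of the first-order conditions at a fixed instant, assuming (these choices are not in the passage) log utility u(c) = ln c and Cobb–Douglas production f(k) = k^alpha.

```python
import sympy as sp

t, c, k, mu, rho, n, delta, alpha = sp.symbols('t c k mu rho n delta alpha', positive=True)

# Present-value Hamiltonian with the assumed functional forms.
H = sp.exp(-rho * t) * sp.log(c) + mu * (k**alpha - (n + delta) * k - c)

dH_dc = sp.diff(H, c)    # = 0 gives  exp(-rho*t) u'(c) = mu
dH_dk = sp.diff(H, k)    # = -mu_dot gives the costate equation
print(sp.solve(sp.Eq(dH_dc, 0), mu)[0])    # exp(-rho*t)/c
print(sp.simplify(dH_dk))                   # mu*(alpha*k**(alpha - 1) - delta - n)
```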
|
https://en.wikipedia.org/wiki/Hamiltonian_%28control_theory%29
|
passage: ### Sudoku
Sudoku puzzles are noteworthy examples of exact cover problems. The n queens problem is a generalized exact cover problem.
## Formal definition
Given a collection
$$
\mathcal{S}
$$
of subsets of a set
$$
X
$$
, an exact cover of
$$
X
$$
is a subcollection
$$
\mathcal{S}^{*}
$$
of
$$
\mathcal{S}
$$
that satisfies two conditions:
- The intersection of any two distinct subsets in
$$
\mathcal{S}^{*}
$$
is empty, i.e., the subsets in
$$
\mathcal{S}^{*}
$$
are pairwise disjoint. In other words, each element in
$$
X
$$
is contained in at most one subset in
$$
\mathcal{S}^{*}
$$
.
- The union of the subsets in
$$
\mathcal{S}^{*}
$$
is
$$
X
$$
, i.e., the subsets in
$$
\mathcal{S}^{*}
$$
cover
$$
X
$$
. In other words, each element in
$$
X
$$
is contained in at least one subset in
$$
\mathcal{S}^{*}
$$
.
In short, an exact cover is exact in the sense that each element in
$$
X
$$
is contained in exactly one subset in
$$
\mathcal{S}^{*}
$$
.
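A brute-force sketch of the definition (practical only for tiny instances; Knuth's Algorithm X with dancing links is the usual efficient method): a subcollection is an exact cover exactly when its members are pairwise disjoint and their union is X, which is equivalent to their sizes summing to |X| while the union equals X.

```python
from itertools import combinations

def exact_covers(X, S):
    X = frozenset(X)
    for r in range(1, len(S) + 1):
        for combo in combinations(S, r):
            sets = [frozenset(s) for s in combo]
            # disjointness + covering <=> sizes sum to |X| and union is X
            if sum(len(s) for s in sets) == len(X) and frozenset().union(*sets) == X:
                yield combo

X = {1, 2, 3, 4}
S = [{1, 2}, {3, 4}, {2, 3}, {4}, {1}]
print(list(exact_covers(X, S)))
# ({1, 2}, {3, 4}) and ({2, 3}, {4}, {1}) are the exact covers here
```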
|
https://en.wikipedia.org/wiki/Exact_cover
|
passage: It is this approach that underpins the monitoring protocols of the Water Framework Directive in the European Union.
#### Radiological parameters
Radiation monitoring involves the measurement of radiation dose or radionuclide contamination for reasons related to the assessment or control of exposure to ionizing radiation or radioactive substances, and the interpretation of the results. The 'measurement' of dose often means the measurement of a dose equivalent quantity as a proxy (i.e. substitute) for a dose quantity that cannot be measured directly. Also, sampling may be involved as a preliminary step to measurement of the content of radionuclides in environmental media. The methodological and technical details of the design and operation of monitoring programmes and systems for different radionuclides, environmental media and types of facility are given in IAEA Safety Guide RS–G-1.8 and in IAEA Safety Report No. 64.
Radiation monitoring is often carried out using networks of fixed and deployable sensors such as the US Environmental Protection Agency's Radnet and the SPEEDI network in Japan. Airborne surveys are also made by organizations like the Nuclear Emergency Support Team.
#### Microbiological parameters
Bacteria and viruses are the most commonly monitored groups of microbiological organisms and even these are only of great relevance where water in the aquatic environment is subsequently used as drinking water or where water contact recreation such as swimming or canoeing is practised.
Although pathogens are the primary focus of attention, the principal monitoring effort is almost always directed at much more common indicator species such as Escherichia coli, supplemented by overall coliform bacteria counts.
|
https://en.wikipedia.org/wiki/Environmental_monitoring
|
passage: Then the uniform convergence implies that
$$
\oint_C f(z)\,dz = \oint_C \lim_{n\to \infty} f_n(z)\,dz =\lim_{n\to \infty} \oint_C f_n(z)\,dz = 0
$$
for every closed curve C, and therefore by Morera's theorem f must be holomorphic. This fact can be used to show that, for any open set , the set of all bounded, analytic functions is a Banach space with respect to the supremum norm.
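A quick numerical illustration (a sketch, not from the article) of the quantity appearing in Morera's criterion: the contour integral of a holomorphic function such as exp over a closed curve is essentially zero, which is exactly the hypothesis the theorem turns into holomorphy.

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 20001)
z = np.exp(1j * theta)                            # points on the unit circle
integral = np.sum(np.exp(z[:-1]) * np.diff(z))    # approximate contour integral of exp
print(abs(integral))                              # close to 0 (numerical error only)
```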
### Infinite sums and integrals
Morera's theorem can also be used in conjunction with Fubini's theorem and the Weierstrass M-test to show the analyticity of functions defined by sums or integrals, such as the Riemann zeta function
$$
\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}
$$
or the Gamma function
$$
\Gamma(\alpha) = \int_0^\infty x^{\alpha-1} e^{-x}\,dx.
$$
Specifically one shows that
$$
\oint_C \Gamma(\alpha)\,d\alpha = 0
$$
for a suitable closed curve C, by writing
$$
|
https://en.wikipedia.org/wiki/Morera%27s_theorem
|
passage: The basic version of the algorithm uses the global topology as the swarm communication structure. This topology allows all particles to communicate with all the other particles, thus the whole swarm shares the same best position g from a single particle. However, this approach might cause the swarm to become trapped in a local minimum, so different topologies have been used to control the flow of information among particles. For instance, in local topologies, particles only share information with a subset of particles. This subset can be a geometrical one – for example "the m nearest particles" – or, more often, a social one, i.e. a set of particles that does not depend on any distance. In such cases, the PSO variant is said to be local best (vs global best for the basic PSO).
A commonly used swarm topology is the ring, in which each particle has just two neighbours, but there are many others. The topology is not necessarily static. In fact, since the topology is related to the diversity of communication of the particles, some efforts have been made to create adaptive topologies (SPSO, APSO, stochastic star, TRIBES, Cyber Swarm, and C-PSO).
By using the ring topology, PSO can attain generation-level parallelism, significantly enhancing the evolutionary speed.
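A minimal sketch of a local-best PSO with a ring neighbourhood (the parameter values and function name are illustrative, typical choices rather than part of the article).

```python
import numpy as np

def pso_ring(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    for _ in range(iters):
        # local best: best personal best among each particle and its two ring neighbours
        lbest = np.empty_like(x)
        for i in range(n_particles):
            neigh = [(i - 1) % n_particles, i, (i + 1) % n_particles]
            lbest[i] = pbest[neigh[int(np.argmin(pbest_val[neigh]))]]
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (lbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
    best = int(np.argmin(pbest_val))
    return pbest[best], pbest_val[best]

print(pso_ring(lambda p: float(np.sum(p**2))))      # minimise the sphere function
```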
## Inner workings
There are several schools of thought as to why and how the PSO algorithm can perform optimization.
|
https://en.wikipedia.org/wiki/Particle_swarm_optimization
|
passage: $$
can be understood by rewriting and in terms of the fundamental recurrence formulas:
$$
\begin{align}
\boldsymbol{\Tau}_{\boldsymbol{n}}(z)& = \frac{(b_n+z)A_{n-1} + a_nA_{n-2}}{(b_n+z)B_{n-1} + a_nB_{n-2}}& \boldsymbol{\Tau}_{\boldsymbol{n}}(z)& = \frac{zA_{n-1} + A_n}{zB_{n-1} + B_n};\\[6px]
\boldsymbol{\Tau}_{\boldsymbol{n+1}}(z)& = \frac{(b_{n+1}+z)A_n + a_{n+1}A_{n-1}}{(b_{n+1}+z)B_n + a_{n+1}B_{n-1}}& \boldsymbol{\Tau}_{\boldsymbol{n+1}}(z)& = \frac{zA_n + A_{n+1}} {zB_n + B_{n+1}}.\,
\end{align}
$$
In the first of these equations the ratio tends toward as tends toward zero.
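A small sketch of the fundamental recurrences in code, computing the convergents A_n/B_n for the continued fraction of the square root of 2 (a_n = 1, b_0 = 1, b_n = 2); the function name is illustrative.

```python
from fractions import Fraction

def convergents(b0, terms):
    """Yield A_n/B_n given the partial numerators/denominators (a_n, b_n)."""
    A_prev, A = 1, b0            # A_{-1}, A_0
    B_prev, B = 0, 1             # B_{-1}, B_0
    yield Fraction(A, B)
    for a, b in terms:
        A_prev, A = A, b * A + a * A_prev   # A_n = b_n A_{n-1} + a_n A_{n-2}
        B_prev, B = B, b * B + a * B_prev   # B_n = b_n B_{n-1} + a_n B_{n-2}
        yield Fraction(A, B)

for frac in convergents(1, [(1, 2)] * 5):
    print(frac, float(frac))     # 1, 3/2, 7/5, 17/12, 41/29, 99/70 -> sqrt(2)
```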
|
https://en.wikipedia.org/wiki/Continued_fraction
|
passage: Windows, macOS, and Linux all have server and personal variants. With the exception of Microsoft Windows, the designs of each of them were inspired by or directly inherited from the Unix operating system.
Early personal computers used operating systems that supported command line interaction, using an alphanumeric display and keyboard. The user had to remember a large range of commands to, for example, open a file for editing or to move text from one place to another. Starting in the early 1960s, the advantages of a graphical user interface began to be explored, but widespread adoption required lower-cost graphical display equipment. By 1984, mass-market computer systems using graphical user interfaces were available; by the turn of the 21st century, text-mode operating systems were no longer a significant fraction of the personal computer market.
Applications
Generally, a computer user uses application software to carry out a specific task. System software supports applications and provides common services such as memory management, network connectivity and device drivers, all of which may be used by applications but are not directly of interest to the end user. A simplified analogy in the world of hardware would be the relationship of an electric light bulb (an application) to an electric power generation plant (a system): the power plant merely generates electricity, not itself of any real use until harnessed to an application like the electric light that performs a service that benefits the user.
Typical examples of software applications are word processors, spreadsheets, and media players. Multiple applications bundled together as a package are sometimes referred to as an application suite.
|
https://en.wikipedia.org/wiki/Personal_computer
|
passage: Plant thylakoid membranes have the largest lipid component of a non-bilayer forming monogalactosyl diglyceride (MGDG), and little phospholipids; despite this unique lipid composition, chloroplast thylakoid membranes have been shown to contain a dynamic lipid-bilayer matrix as revealed by magnetic resonance and electron microscope studies.
A biological membrane is a form of lamellar phase lipid bilayer. The formation of lipid bilayers is an energetically preferred process when the glycerophospholipids described above are in an aqueous environment. This is known as the hydrophobic effect. In an aqueous system, the polar heads of lipids align towards the polar, aqueous environment, while the hydrophobic tails minimize their contact with water and tend to cluster together, forming a vesicle; depending on the concentration of the lipid, this biophysical interaction may result in the formation of micelles, liposomes, or lipid bilayers. Other aggregations are also observed and form part of the polymorphism of amphiphile (lipid) behavior. Phase behavior is an area of study within biophysics. Micelles and bilayers form in the polar medium by a process known as the hydrophobic effect. When dissolving a lipophilic or amphiphilic substance in a polar environment, the polar molecules (i.e., water in an aqueous solution) become more ordered around the dissolved lipophilic substance, since the polar molecules cannot form hydrogen bonds to the lipophilic areas of the amphiphile.
|
https://en.wikipedia.org/wiki/Lipid
|
passage: Young's modulus (or the Young modulus) is a mechanical property of solid materials that measures the tensile or compressive stiffness when the force is applied lengthwise. It is the modulus of elasticity for tension or axial compression. Young's modulus is defined as the ratio of the stress (force per unit area) applied to the object and the resulting axial strain (displacement or deformation) in the linear elastic region of the material.
Although Young's modulus is named after the 19th-century British scientist Thomas Young, the concept was developed in 1727 by Leonhard Euler. The first experiments that used the concept of Young's modulus in its modern form were performed by the Italian scientist Giordano Riccati in 1782, pre-dating Young's work by 25 years. The term modulus is derived from the Latin root term modus, which means measure.
## Definition
Young's modulus,
$$
E
$$
, quantifies the relationship between tensile or compressive stress
$$
\sigma
$$
(force per unit area) and axial strain
$$
\varepsilon
$$
(proportional deformation) in the linear elastic region of a material:
$$
E = \frac{\sigma}{\varepsilon}
$$
Young's modulus is commonly measured in the International System of Units (SI) in multiples of the pascal (Pa) and common values are in the range of gigapascals (GPa).
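A brief sketch (with made-up, noisy stress–strain data) of estimating E as the slope of stress against strain in the linear elastic region.

```python
import numpy as np

rng = np.random.default_rng(0)
strain = np.linspace(0, 0.002, 20)                      # dimensionless
E_true = 200e9                                          # 200 GPa, an assumed steel-like value
stress = E_true * strain + rng.normal(0, 1e6, 20)       # Pa, with measurement noise

E_fit = np.polyfit(strain, stress, 1)[0]                # slope = Young's modulus
print(E_fit / 1e9, "GPa")
```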
## Examples
:
- Rubber (increasing pressure: large length increase, meaning low )
- Aluminium (increasing pressure: small length increase, meaning high )
|
https://en.wikipedia.org/wiki/Young%27s_modulus
|
passage: ## Base for the closed sets
Closed sets are equally adept at describing the topology of a space. There is, therefore, a dual notion of a base for the closed sets of a topological space. Given a topological space
$$
X,
$$
a family
$$
\mathcal{C}
$$
of closed sets forms a base for the closed sets if and only if for each closed set
$$
A
$$
and each point
$$
x
$$
not in
$$
A
$$
there exists an element of
$$
\mathcal{C}
$$
containing
$$
A
$$
but not containing
$$
x.
$$
A family
$$
\mathcal{C}
$$
is a base for the closed sets of
$$
X
$$
if and only if its dual in
$$
X,
$$
that is the family
$$
\{X\setminus C: C\in \mathcal{C}\}
$$
of complements of members of
$$
\mathcal{C}
$$
, is a base for the open sets of
$$
X.
$$
Let
$$
\mathcal{C}
$$
be a base for the closed sets of
$$
X.
$$
Then
1. BLOCK161.
|
https://en.wikipedia.org/wiki/Base_%28topology%29
|
passage: This and many of the following series may be obtained by applying Möbius inversion and Dirichlet convolution to known series. For example, given a Dirichlet character one has
$$
\frac 1 {L(\chi,s)}=\sum_{n=1}^\infty \frac{\mu(n)\chi(n)}{n^s}
$$
where is a Dirichlet L-function.
If the arithmetic function f has a Dirichlet inverse function
$$
f^{-1}(n)
$$
, i.e., if there exists an inverse function such that the Dirichlet convolution of f with its inverse yields the multiplicative identity
$$
\sum_{d|n} f(d) f^{-1}(n/d) = \delta_{n,1}
$$
, then the DGF of the inverse function is given by the reciprocal of F:
$$
\sum_{n \geq 1} \frac{f^{-1}(n)}{n^s} = \left(\sum_{n \geq 1} \frac{f(n)}{n^s}\right)^{-1}.
$$
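A sketch of computing a Dirichlet inverse directly from the convolution identity above; with f identically 1 the inverse reproduces the Möbius function, which gives a simple check.

```python
from functools import lru_cache

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def dirichlet_inverse(f):
    @lru_cache(maxsize=None)
    def f_inv(n):
        if n == 1:
            return 1 / f(1)
        # from sum_{d|n} f(d) f_inv(n/d) = 0 for n > 1
        return -sum(f(d) * f_inv(n // d) for d in divisors(n) if d > 1) / f(1)
    return f_inv

mu = dirichlet_inverse(lambda n: 1)
print([int(mu(n)) for n in range(1, 13)])   # 1 -1 -1 0 -1 1 -1 0 0 1 -1 0 (Moebius values)
```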
Other identities include
$$
\frac{\zeta(s-1)}{\zeta(s)}=\sum_{n=1}^{\infty} \frac{\varphi(n)}{n^s}
$$
where
$$
\varphi(n)
$$
is the totient function,
$$
|
https://en.wikipedia.org/wiki/Dirichlet_series
|
passage: In mice, gene knockouts are commonly used to study the function of specific genes in development, physiology, and cancer research.
The use of gene knockouts in mouse models has been particularly valuable in the study of human diseases. For example, gene knockouts in mice have been used to study the role of specific genes in cancer, neurological disorders, immune disorders, and metabolic disorders.
However, gene knockouts also have some limitations. For example, the loss of a single gene may not fully mimic the effects of a genetic disorder, and the knockouts may have unintended effects on other genes or pathways. Additionally, gene knockouts are not always a good model for human disease as the mouse genome is not identical to the human genome, and mouse physiology is different from human physiology.
The KO technique is essentially the opposite of a gene knock-in. Knocking out two genes simultaneously in an organism is known as a double knockout (DKO). Similarly the terms triple knockout (TKO) and quadruple knockouts (QKO) are used to describe three or four knocked out genes, respectively. However, one needs to distinguish between heterozygous and homozygous KOs. In the former, only one of two gene copies (alleles) is knocked out, in the latter both are knocked out.
## Methods
Knockouts are accomplished through a variety of techniques. Originally, naturally occurring mutations were identified and then gene loss or inactivation had to be established by DNA sequencing or other methods.
|
https://en.wikipedia.org/wiki/Gene_knockout
|
passage: - ANSI X9.17 standard (Financial Institution Key Management (wholesale)), which has been adopted as a FIPS standard as well. It takes as input a TDEA (keying option 2) key bundle k and (the initial value of) a 64-bit random seed s. Each time a random number is required, it executes the following steps:
Obviously, the technique is easily generalized to any block cipher; AES has been suggested. If the key k is leaked, the entire X9.17 stream can be predicted; this weakness is cited as a reason for creating Yarrow.
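The per-output steps omitted above are commonly described as: I = E_k(DT) for a date/time block DT, output X = E_k(I xor s), and new seed s = E_k(I xor X). Below is a hedged sketch of that construction using AES via the `cryptography` package, as the text suggests; the exact block formatting and key handling of the official standard may differ, so treat this as illustrative only.

```python
import os, struct, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class X917StylePRNG:
    def __init__(self, key: bytes, seed: bytes):
        self._enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        self._v = seed                                   # 16-byte seed block

    def _e(self, block: bytes) -> bytes:                 # E_k(block)
        return self._enc.update(block)

    def next_block(self) -> bytes:
        dt = struct.pack(">QQ", time.time_ns(), 0)       # date/time block DT
        i = self._e(dt)                                   # I = E_k(DT)
        x = self._e(bytes(a ^ b for a, b in zip(i, self._v)))    # output X = E_k(I xor V)
        self._v = self._e(bytes(a ^ b for a, b in zip(i, x)))    # new seed V = E_k(I xor X)
        return x

prng = X917StylePRNG(os.urandom(32), os.urandom(16))
print(prng.next_block().hex())
```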
All these above-mentioned schemes, save for X9.17, also mix the state of a CSPRNG with an additional source of entropy. They are therefore not "pure" pseudorandom number generators, in the sense that the output is not completely determined by their initial state. This addition aims to prevent attacks even if the initial state is compromised.
## Standards
Several CSPRNGs have been standardized. For example:
- FIPS 186-4
- NIST SP 800-90A
- NIST SP 800-90A Rev.1
- ANSI X9.17-1985 Appendix C
- ANSI X9.31-1998 Appendix A.2.4
- ANSI X9.62-1998 Annex A.4, obsoleted by ANSI X9.62-2005, Annex D (HMAC_DRBG)
A good reference is maintained by NIST.
There are also standards for statistical testing of new CSPRNG designs:
- A Statistical Test Suite for Random and Pseudorandom Number Generators, NIST Special Publication 800-22.
|
https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
|
passage: By the end of the Carboniferous, the equatorial regions of Pangaea became drier.
- During the Permian period, the landmass received seasonal rainfall in contrast to the aforementioned dryness. However, the regions lying north of the Central Pangaean Mountains received little precipitation, as they lay in the rain shadow of the mountain range, which blocked monsoon winds from the Southern Hemisphere.
- During the Triassic period, the monsoons reached their maximum extent, such that the previously dry conditions of the Colorado Plateau were alleviated and it started to receive moisture due to the changing wind directions. In contrast, the regions of present day Australia were at higher latitudes and experienced much drier and seasonal conditions around the same time.
- During the Jurassic, the megamonsoon declined and the regions of Gondwana and southern Laurasia experienced dry conditions.
### Post-breakup
When Pangaea finally broke apart by the middle Mesozoic era, the megamonsoon fell apart completely. The breakup could have contributed to an increase in polar temperatures as colder waters mixed with warmer waters, also accompanied by outgassing of large quantities of carbon dioxide from continental rifts. This produced a Mesozoic CO2 high that contributed to the very warm climate of the Early Cretaceous. The opening of the Tethys Ocean also contributed to the warming of the climate. The very active mid-ocean ridges associated with the breakup of Pangaea raised sea levels to the highest in the geological record, flooding much of the continents.
|
https://en.wikipedia.org/wiki/Pangaea
|
passage: It is commonly depicted as:
$$
\mathrm{conf}(X \Rightarrow Y) = P(Y | X) = \frac{\mathrm{supp}(X \cap Y)}{ \mathrm{supp}(X) }=\frac{\text{number of transactions containing }X\text{ and }Y}{\text{number of transactions containing }X}
$$
The equation illustrates that confidence can be computed by calculating the co-occurrence of transactions containing both X and Y within the dataset as a ratio to the transactions containing only X. This means that the number of transactions containing both X and Y is divided by the number containing just X.
For example, Table 2 shows the rule
$$
\{\mathrm{butter, bread}\} \Rightarrow \{\mathrm{milk}\}
$$
which has a confidence of
$$
\frac{1/5}{1/5}=\frac{0.2}{0.2}=1.0
$$
in the dataset, which denotes that every time a customer buys butter and bread, they also buy milk. This particular example demonstrates the rule being correct 100% of the time for transactions containing both butter and bread. The rule
$$
\{\mathrm{fruit}\} \Rightarrow \{\mathrm{eggs}\}
$$
, however, has a confidence of
$$
\frac{2/5}{3/5}=\frac{0.4}{0.6}=0.67
$$
. This suggests that eggs are bought 67% of the times that fruit is bought.
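The same computation can be expressed in a few lines of Python; the toy transaction list below is not the article's Table 2, but is chosen so the two rules reproduce the support counts quoted above:

```python
# Five toy transactions mirroring the butter/bread/milk and fruit/eggs examples.
transactions = [
    {"milk", "bread", "butter"},
    {"bread", "fruit"},
    {"fruit", "eggs"},
    {"milk", "fruit", "eggs"},
    {"bread"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """conf(X => Y) = supp(X union Y) / supp(X)."""
    return support(antecedent | consequent) / support(antecedent)

print(confidence({"butter", "bread"}, {"milk"}))  # 1.0 in this toy data
print(confidence({"fruit"}, {"eggs"}))            # 0.666... (about 67%)
```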
|
https://en.wikipedia.org/wiki/Association_rule_learning
|
passage: Each step generates the
$$
k!
$$
permutations that end with the same
$$
n-k
$$
final elements. It does this by calling itself once with the
$$
k\text{th}
$$
element unaltered and then
$$
k-1
$$
times with the (
$$
k\text{th}
$$
) element exchanged for each of the initial
$$
k-1
$$
elements. The recursive calls modify the initial
$$
k-1
$$
elements and a rule is needed at each iteration to select which will be exchanged with the last. Heap's method says that this choice can be made by the parity of the number of elements operated on at this step. If
$$
k
$$
is even, then the final element is iteratively exchanged with each element index. If
$$
k
$$
is odd, the final element is always exchanged with the first.
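A direct Python transcription of this recursive formulation (a sketch following the parity rule described above):

```python
def heap_permutations(a):
    """Generate all permutations of the list `a` with Heap's algorithm."""
    out = []

    def generate(k):
        # Base case: nothing left to vary, record the current arrangement.
        if k <= 1:
            out.append(a.copy())
            return
        for i in range(k - 1):
            generate(k - 1)
            if k % 2 == 0:
                a[i], a[k - 1] = a[k - 1], a[i]   # k even: swap i-th with last
            else:
                a[0], a[k - 1] = a[k - 1], a[0]   # k odd: swap first with last
        generate(k - 1)

    generate(len(a))
    return out

print(heap_permutations([1, 2, 3]))
# [[1, 2, 3], [2, 1, 3], [3, 1, 2], [1, 3, 2], [2, 3, 1], [3, 2, 1]]
```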
|
https://en.wikipedia.org/wiki/Heap%27s_algorithm
|
passage: Furthermore, since the zero-point energy density has other physical consequences (e.g. the Casimir effect, contributions to the Lamb shift, and the anomalous magnetic moment of the electron), it is clear that it is not just a mathematical constant or artifact that can be cancelled out.
#### Necessity of the vacuum field in QED
The vacuum state of the "free" electromagnetic field (that with no sources) is defined as the ground state, in which the occupation number of every mode is zero. The vacuum state, like all stationary states of the field, is an eigenstate of the Hamiltonian but not of the electric and magnetic field operators. In the vacuum state, therefore, the electric and magnetic fields do not have definite values. We can imagine them to be fluctuating about their mean value of zero.
In a process in which a photon is annihilated (absorbed), we can think of the photon as making a transition into the vacuum state. Similarly, when a photon is created (emitted), it is occasionally useful to imagine that the photon has made a transition out of the vacuum state. An atom, for instance, can be considered to be "dressed" by emission and reabsorption of "virtual photons" from the vacuum. The vacuum-state energy, obtained by summing the zero-point energy of every mode, is infinite.
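For context, the divergence arises because the zero-point energy
$$
\tfrac{1}{2}\hbar\omega_{k}
$$
is summed over an unbounded set of field modes (a standard result, stated here only as a reminder):
$$
\langle 0|H|0\rangle = \sum_{\mathbf{k},\lambda} \tfrac{1}{2}\hbar\omega_{k} \to \infty .
$$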
|
https://en.wikipedia.org/wiki/Zero-point_energy
|
passage: ## Mathematical modelling
In the mathematical language of dynamic systems analysis, one of the simplest bistable systems is
$$
\frac{dy}{dt} = y (1-y^2).
$$
This system describes a ball rolling down a curve with shape
$$
\frac{y^4}{4} - \frac{y^2}{2}
$$
, and has three equilibrium points:
$$
y = 1
$$
,
$$
y = 0
$$
, and
$$
y = -1
$$
. The middle point
$$
y=0
$$
is unstable: any trajectory starting arbitrarily close to
$$
y = 0
$$
(but not exactly at it) moves away from it, while the other two points are stable. The direction of change of
$$
y(t)
$$
over time depends on the initial condition
$$
y(0)
$$
. If the initial condition is positive (
$$
y(0)>0
$$
), then the solution
$$
y(t)
$$
approaches 1 over time, but if the initial condition is negative (
$$
y(0)< 0
$$
), then
$$
y(t)
$$
approaches −1 over time. Thus, the dynamics are "bistable". The final state of the system can be either
$$
y = 1
$$
or
$$
y = -1
$$
, depending on the initial conditions.
The appearance of a bistable region can be understood for the model system
$$
\frac{dy}{dt} = y (r-y^2)
$$
which undergoes a supercritical pitchfork bifurcation with bifurcation parameter
$$
r
$$
.
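A quick numerical check of this behaviour (a minimal forward-Euler sketch with illustrative parameter values, not taken from the source):

```python
def simulate(y0, r=1.0, dt=0.01, steps=2000):
    """Forward-Euler integration of dy/dt = y * (r - y**2) starting from y0."""
    y = y0
    for _ in range(steps):
        y += dt * y * (r - y**2)
    return y

# With r = 1 the final state depends only on the sign of the initial condition.
print(round(simulate(0.3), 3))    # -> about  1.0
print(round(simulate(-0.05), 3))  # -> about -1.0
print(simulate(0.0))              # stays at the unstable equilibrium y = 0
```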
|
https://en.wikipedia.org/wiki/Bistability
|
passage: 1. It lacks coherence and precision. There is widespread confusion about what exactly the V-Model is. If one boils it down to those elements that most people would agree upon it becomes a trite and unhelpful representation of software development. Disagreement about the merits of the V-Model often reflects a lack of shared understanding of its definition.
## Current state
Supporters of the V-Model argue that it has evolved and supports flexibility and agility throughout the development process. They argue that in addition to being a highly disciplined approach, it promotes meticulous design, development, and documentation necessary to build stable software products. Lately, it is being adopted by the medical device industry. "A Software Process Development, Assessment and Improvement Framework, for the Medical Device Industry "
|
https://en.wikipedia.org/wiki/V-model_%28software_development%29
|
passage: $$
k^{\epsilon}(x)
$$
and the right-hand side of the previous equation, we obtain the following expressions:
$$
k^\epsilon (x)=\begin{cases} 1, & 0 < x < 1 \\ \frac{1}{\epsilon^2}, & 1 < x < 2
\end{cases}
$$
$$
\phi^\epsilon (x)=\begin{cases} 2, & 0 < x < 1 \\ 2c_0, & 1 < x < 2
\end{cases}\quad (3)
$$
Boundary conditions:
$$
u_\epsilon(0) = 0, u_\epsilon(2) = 0
$$
Connection conditions at the point
$$
x = 1
$$
:
|
https://en.wikipedia.org/wiki/Fictitious_domain_method
|
passage: The "ultraviolet catastrophe" is the expression of the fact that the formula misbehaves at higher frequencies; it predicts infinite energy emission because
$$
B_{\nu}(T) \to \infty
$$
as
$$
\nu \to \infty
$$
.
An example, from Mason's A History of the Sciences, illustrates multi-mode vibration via a piece of string. As a natural vibrator, the string will oscillate with specific modes (the standing waves of a string in harmonic resonance), dependent on the length of the string. In classical physics, a radiator of energy will act as a natural vibrator. Since each mode will have the same energy, most of the energy in a natural vibrator will be in the smaller wavelengths and higher frequencies, where most of the modes are.
According to classical electromagnetism, the number of electromagnetic modes in a 3-dimensional cavity, per unit frequency, is proportional to the square of the frequency. This implies that the radiated power per unit frequency should be proportional to frequency squared. Thus, both the power at a given frequency and the total radiated power are unlimited as higher and higher frequencies are considered: this is unphysical, as the total radiated power of a cavity is not observed to be infinite, a point that was made independently by Einstein, Lord Rayleigh, and Sir James Jeans in 1905.
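The classical spectral law being described here is the Rayleigh–Jeans result (quoted for context from standard references, not from this passage):
$$
B_{\nu}(T) = \frac{2\nu^{2} k_{\mathrm B} T}{c^{2}},
$$
which grows without bound as
$$
\nu \to \infty ,
$$
in line with the mode-counting argument above.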
## Solution
In 1900, Max Planck derived the correct form for the intensity spectral distribution function by making some assumptions that were strange for the time.
|
https://en.wikipedia.org/wiki/Ultraviolet_catastrophe
|
passage: For example, in the context of large eddy simulation, the PDF becomes the filtered PDF. PDF methods can also be used to describe chemical reactions, and are particularly useful for simulating chemically reacting flows because the chemical source term is closed and does not require a model. The PDF is commonly tracked by using Lagrangian particle methods; when combined with large eddy simulation, this leads to a Langevin equation for subfilter particle evolution.
#### Vorticity confinement method
The vorticity confinement (VC) method is an Eulerian technique used in the simulation of turbulent wakes. It uses a solitary-wave-like approach to produce a stable solution with no numerical spreading. VC can capture the small-scale features to within as few as 2 grid cells. Within these features, a nonlinear difference equation is solved, as opposed to the finite difference equation. VC is similar to shock-capturing methods, where conservation laws are satisfied, so that the essential integral quantities are accurately computed.
#### Linear eddy model
The Linear eddy model is a technique used to simulate the convective mixing that takes place in turbulent flow. Specifically, it provides a mathematical way to describe the interactions of a scalar variable within the vector flow field. It is primarily used in one-dimensional representations of turbulent flow, since it can be applied across a wide range of length scales and Reynolds numbers. This model is generally used as a building block for more complicated flow representations, as it provides high resolution predictions that hold across a large range of flow conditions.
|
https://en.wikipedia.org/wiki/Computational_fluid_dynamics
|
passage: Then, we can say the morphism of categories fibered in groupoids
$$
p
$$
is smooth and surjective if the associated morphism of schemes is smooth and surjective.
### Deligne–Mumford stacks
Algebraic stacks, also known as Artin stacks, are by definition equipped with a smooth surjective atlas
$$
\mathcal{U} \to \mathcal{X}
$$
, where
$$
\mathcal{U}
$$
is the stack associated to some scheme
$$
U \to S
$$
. If the atlas
$$
\mathcal{U}\to \mathcal{X}
$$
is moreover étale, then
$$
\mathcal{X}
$$
is said to be a Deligne–Mumford stack. The subclass of Deligne–Mumford stacks is useful because it provides the correct setting for many natural stacks considered, such as the moduli stack of algebraic curves. In addition, they are strict enough that objects represented by points in Deligne–Mumford stacks do not have infinitesimal automorphisms. This is very important because infinitesimal automorphisms make studying the deformation theory of Artin stacks very difficult. For example, the deformation theory of the Artin stack
$$
BGL_n = [*/GL_n]
$$
, the moduli stack of rank
$$
n
$$
vector bundles, has infinitesimal automorphisms controlled partially by the Lie algebra
$$
\mathfrak{gl}_n
$$
.
|
https://en.wikipedia.org/wiki/Algebraic_stack
|
passage: In the second-order equation
$$
f(x,y)+g\left(x,y,{dy \over dx}\right){dy \over dx}+{d^2y \over dx^2} (J(x,y))=0
$$
only the term
$$
f(x,y)
$$
is a term purely of
$$
x
$$
and
$$
y
$$
Let
$$
{\partial I\over\partial x} = f(x,y) .
$$
Then
$$
f(x,y)={ dI\over dx}-{\partial I\over\partial y}{dy \over dx}
$$
Since the total derivative of
$$
I(x,y)
$$
with respect to
$$
x
$$
is equivalent to the implicit ordinary derivative
$$
{dI \over dx}
$$
, then
$$
f(x,y)+{\partial I\over\partial y}{dy \over dx}={dI \over dx}={d \over dx}(I(x,y)-h(x))+{dh(x) \over dx}
$$
So,
$$
{dh(x) \over dx}=f(x,y)+{\partial I\over\partial y}{dy \over dx}-{d \over dx}(I(x,y)-h(x))
$$
|
https://en.wikipedia.org/wiki/Exact_differential_equation
|
passage: ## Software
- ELKI includes several k-medoid variants, including a Voronoi-iteration k-medoids, the original PAM algorithm, Reynolds' improvements, and the O(n²) FastPAM and FasterPAM algorithms, CLARA, CLARANS, FastCLARA and FastCLARANS.
- Julia contains a k-medoid implementation of the k-means style algorithm (fast, but much worse result quality) in the JuliaStats/Clustering.jl package.
- KNIME includes a k-medoid implementation supporting a variety of efficient matrix distance measures, as well as a number of native (and integrated third-party) k-means implementations
- Python contains FasterPAM and other variants in the "kmedoids" package, additional implementations can be found in many other packages
- R contains PAM in the "cluster" package, including the FasterPAM improvements via the options `variant = "faster"` and `medoids = "random"`. There also exists a "fastkmedoids" package.
- RapidMiner has an operator named KMedoids, but it does not implement any of the above KMedoids algorithms. Instead, it is a k-means variant that substitutes the mean with the closest data point (which is not the medoid), combining the drawbacks of k-means (limited to coordinate data) with the additional cost of finding the nearest point to the mean.
- Rust has a "kmedoids" crate that also includes the FasterPAM variant.
- MATLAB implements PAM, CLARA, and two other algorithms to solve the k-medoid clustering problem.
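For orientation, here is a minimal NumPy sketch of the Voronoi-iteration (k-means-style) k-medoids loop mentioned above; it is not the API of any package listed here, and production implementations such as PAM or FasterPAM use more careful swap-based updates:

```python
import numpy as np

def kmedoids_voronoi(dist, k, max_iter=100, seed=0):
    """Voronoi-iteration k-medoids on a precomputed distance matrix `dist`:
    alternately assign each point to its nearest medoid, then move each medoid
    to the member point minimising the total within-cluster distance."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    labels = np.argmin(dist[:, medoids], axis=1)      # assignment step
    for _ in range(max_iter):
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if members.size:
                within = dist[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(within)]   # update step
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
        labels = np.argmin(dist[:, medoids], axis=1)
    return medoids, labels
```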
|
https://en.wikipedia.org/wiki/K-medoids
|
passage: Finding an entry in the auxiliary index would tell us which block to search in the main database; after searching the auxiliary index, we would have to search only that one block of the main database—at a cost of one more disk read.
In the above example the index would hold 10,000 entries and would take at most 14 comparisons to return a result. Like the main database, the last six or so comparisons in the auxiliary index would be on the same disk block. The index could be searched in about eight disk reads, and the desired record could be accessed in 9 disk reads.
Creating an auxiliary index can be repeated to make an auxiliary index to the auxiliary index. That would make an aux-aux index that would need only 100 entries and would fit in one disk block.
Instead of reading 14 disk blocks to find the desired record, we only need to read 3 blocks. This blocking is the core idea behind the creation of the B-tree, where the disk blocks fill out a hierarchy of levels to make up the index. Reading and searching the first (and only) block of the aux-aux index which is the root of the tree identifies the relevant block in aux-index in the level below. Reading and searching that aux-index block identifies the relevant block to read, until the final level, known as the leaf level, identifies a record in the main database. Instead of 150 milliseconds, we need only 30 milliseconds to get the record.
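The arithmetic behind these counts can be sketched in a few lines of Python (assuming, as this example appears to, about 100 index entries per disk block and a main database of roughly 1,000,000 records):

```python
import math

def flat_index_comparisons(entries):
    """Worst-case comparisons for a binary search over a flat sorted index."""
    return math.ceil(math.log2(entries))

def tree_block_reads(n_records, fanout=100):
    """Disk blocks touched when every index block holds `fanout` entries:
    one read per index level plus one read for the data block itself."""
    levels, blocks = 0, math.ceil(n_records / fanout)
    while blocks > 1:
        levels += 1
        blocks = math.ceil(blocks / fanout)
    return levels + 1

print(flat_index_comparisons(10_000))  # 14 comparisons in the single auxiliary index
print(tree_block_reads(1_000_000))     # 3 block reads: aux-aux, aux, then the data block
```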
|
https://en.wikipedia.org/wiki/B-tree
|
passage: In this case, the derivative of
$$
f(x)
$$
(or the gradient of
$$
f(x)
$$
if
$$
x
$$
is a vector) is given by
$$
\frac{\partial f}{\partial x} = \frac{\partial \phi(x,\overline{z})}{\partial x}.
$$
### Example of no directional derivative
In the statement of Danskin, it is important to conclude semi-differentiability of
$$
f
$$
and not the existence of a directional derivative, as the following simple example shows.
Set
$$
Z=\{-1,+1\},\ \phi(x,z)= zx
$$
, we get
$$
f(x)=|x|
$$
which is semi-differentiable with
$$
\partial_-f(0)=-1, \partial_+f(0)=+1
$$
but does not have a directional derivative at
$$
x=0
$$
.
### Subdifferential
|
https://en.wikipedia.org/wiki/Danskin%27s_theorem
|