passage: This integer can also be thought of as the winding number of a loop around the origin in the plane.
The identification (a group isomorphism) of the homotopy group with the integers is often written as an equality: thus π₁(S¹) = ℤ.
Mappings from a 2-sphere to a 2-sphere can be visualized as wrapping a plastic bag around a ball and then sealing it. The sealed bag is topologically equivalent to a 2-sphere, as is the surface of the ball. The bag can be wrapped more than once by twisting it and wrapping it back over the ball. (There is no requirement for the continuous map to be injective and so the bag is allowed to pass through itself.) The twist can be in one of two directions and opposite twists can cancel out by deformation. The total number of twists after cancellation is an integer, called the degree of the mapping. As in the case of mappings from the circle to the circle, this degree identifies the homotopy group with the group of integers: π₂(S²) = ℤ.
These two results generalize: for all n > 0, πₙ(Sⁿ) = ℤ (see below).
Any continuous mapping from a circle to an ordinary sphere can be continuously deformed to a one-point mapping, and so its homotopy class is trivial. One way to visualize this is to imagine a rubber-band wrapped around a frictionless ball: the band can always be slid off the ball. The homotopy group π₁(S²) is therefore a trivial group, with only one element, the identity element, and so it can be identified with the subgroup of ℤ consisting only of the number zero.
|
https://en.wikipedia.org/wiki/Homotopy_groups_of_spheres
|
passage: Moreover, the same transformation can be used to execute nested relations that are not functional. For example:
```prolog
grandparent(X) := parent(parent(X)).
parent(X) := mother(X).
parent(X) := father(X).
mother(charles) := elizabeth.
father(charles) := phillip.
mother(harry) := diana.
father(harry) := charles.
?- grandparent(X,Y).
X = harry,
Y = elizabeth.
X = harry,
Y = phillip.
```
### Relationship with relational programming
The term relational programming has been used to cover a variety of programming languages that treat functions as a special case of relations. Some of these languages, such as miniKanren and relational linear programming, are logic programming languages in the sense of this article. However, the relational language RML is an imperative programming language whose core construct is a relational expression, which is similar to an expression in first-order predicate logic.
Other relational programming languages are based on the relational calculus or relational algebra.
### Semantics of Horn clause programs
Viewed in purely logical terms, there are two approaches to the declarative semantics of Horn clause logic programs: One approach is the original logical consequence semantics, which understands solving a goal as showing that the goal is a theorem that is true in all models of the program.
|
https://en.wikipedia.org/wiki/Logic_programming
|
passage: 1. A bit with the same value as the previous digit means that the corresponding disk is stacked on top of the previous disk. That is to say: a contiguous sequence of 1s or 0s means that the corresponding disks are all on the same peg.
2. A bit with a different value than the previous one means that the corresponding disk is on another peg and not on the previous stack. Given 1) above, only 1 choice of the remaining 2 pegs is a legal placement. Note that after the placement of the first set of disks, every “placement round” starts and ends with 1 of the potential pegs having even parity, the other having odd.
For example, in 8-disk Tower of Hanoi:
- Move 0 = 00000000.
- The largest disk (leftmost) bit is 0, so it is on the starting peg (0).
- All other disks are 0 as well, so they are stacked on top of it. Hence all disks are on the starting peg, in the puzzle's initial configuration.
- Move 255 (2⁸ − 1) = 11111111.
- The largest disk bit is 1, so it is on the final peg (2).
- All other disks are 1 as well, so they are stacked on top of it. Hence all disks are on the final peg and the puzzle is solved.
- Move 216 = 11011000.
- The largest disk bit is 1, so disk 8 is on the final peg (2). Note that it sits on base number 11 (11>8).
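As a concrete cross-check of this correspondence, the optimal solution can be simulated and each disk's peg read off after a given number of moves. The Python sketch below is an illustration only (not from the article); it assumes pegs numbered 0 (starting), 1 (auxiliary) and 2 (final) and prints the configurations for moves 0, 255 and 216 of the 8-disk puzzle.
```python
def hanoi_positions(n, move):
    """Peg of each disk (largest first) after `move` steps of the optimal n-disk solution."""
    pos = {d: 0 for d in range(1, n + 1)}          # every disk starts on peg 0
    seq = []                                       # optimal move sequence: (disk, from, to)

    def solve(k, src, aux, dst):
        if k == 0:
            return
        solve(k - 1, src, dst, aux)
        seq.append((k, src, dst))
        solve(k - 1, aux, src, dst)

    solve(n, 0, 1, 2)
    for disk, _src, dst in seq[:move]:             # replay the first `move` moves
        pos[disk] = dst
    return [pos[d] for d in range(n, 0, -1)]       # largest disk first, like the bit string

n = 8
for m in (0, 255, 216):
    print(f"move {m:3d} = {m:0{n}b} -> pegs (largest..smallest): {hanoi_positions(n, m)}")
```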
|
https://en.wikipedia.org/wiki/Tower_of_Hanoi
|
passage: The generalized volatility for time horizon T in years is expressed as:
$$
\sigma_\text{T} = \sigma_\text{annually} \sqrt{T}.
$$
Therefore, if the daily logarithmic returns of a stock have a standard deviation of σ_daily and the time period of returns is P in trading days, the annualized volatility is
$$
\sigma_\text{annually} = \sigma_\text{daily} \sqrt{P}.
$$
so
$$
\sigma_\text{T} = \sigma_\text{daily} \sqrt{PT}.
$$
A common assumption is that P = 252 trading days in any given year. Then, if σ_daily = 0.01, the annualized volatility is
$$
\sigma_\text{annually} = 0.01 \sqrt{252} = 0.1587.
$$
The monthly volatility (i.e.
$$
T = \tfrac{1}{12}
$$
of a year) is
$$
\sigma_\text{monthly} = 0.01 \sqrt{\tfrac{252}{12}} = 0.0458.
$$
The formulas used above to convert returns or volatility measures from one time period to another assume a particular underlying model or process. These formulas are accurate extrapolations of a random walk, or Wiener process, whose steps have finite variance. However, more generally, for natural stochastic processes, the precise relationship between volatility measures for different time periods is more complicated.
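As a quick numeric check of the figures above (an illustrative sketch, not from the article, assuming σ_daily = 0.01 and P = 252):
```python
import math

sigma_daily = 0.01
P = 252                                           # assumed trading days per year

sigma_annually = sigma_daily * math.sqrt(P)       # ≈ 0.1587
sigma_monthly = sigma_daily * math.sqrt(P / 12)   # ≈ 0.0458 (T = 1/12 of a year)

print(round(sigma_annually, 4), round(sigma_monthly, 4))
```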
|
https://en.wikipedia.org/wiki/Volatility_%28finance%29
|
passage: NASA prevented COBE's engineers from going to other space companies to launch COBE, and eventually a redesigned COBE was placed into Sun-synchronous orbit on 18 November 1989 aboard a Delta launch vehicle.
On 23 April 1992, COBE scientists announced at the APS April Meeting in Washington, D.C. the finding of the "primordial seeds" (CMBE anisotropy) in data from the DMR instrument; until then the other instruments were "unable to see the template." The following day The New York Times ran the story on the front page, explaining the finding as "the first evidence revealing how an initially smooth cosmos evolved into today's panorama of stars, galaxies and gigantic clusters of galaxies. "
The Nobel Prize in Physics for 2006 was jointly awarded to John C. Mather, NASA Goddard Space Flight Center, and George F. Smoot III, University of California, Berkeley, "for their discovery of the blackbody form and anisotropy of the cosmic microwave background radiation".
## Spacecraft
COBE was an Explorer class satellite, with technology borrowed heavily from IRAS, but with some unique characteristics.
The need to control and measure all the sources of systematic errors required a rigorous and integrated design. COBE would have to operate for a minimum of 6 months and constrain the amount of radio interference from the ground, COBE and other satellites as well as radiative interference from the Earth, Sun and Moon.
|
https://en.wikipedia.org/wiki/Cosmic_Background_Explorer
|
passage: For a point P ∈ X, the ramification index eP is defined as follows. Let Q = ƒ(P) and let t be a local uniformizing parameter at P; that is, t is a regular function defined in a neighborhood of Q with t(Q) = 0 whose differential is nonzero. Pulling back t by ƒ defines a regular function on X. Then
$$
e_P = v_P(t\circ f)
$$
where vP is the valuation in the local ring of regular functions at P. That is, eP is the order to which
$$
t\circ f
$$
vanishes at P. If eP > 1, then ƒ is said to be ramified at P. In that case, Q is called a branch point.
|
https://en.wikipedia.org/wiki/Branch_point
|
passage: The model considers the event that the amount of money reaches 0, representing bankruptcy. The model can answer questions such as the probability that this occurs within finite time, or the mean time until it occurs.
First-hitting-time models can be applied to expected lifetimes, of patients or mechanical devices. When the process reaches an adverse threshold state for the first time, the patient dies, or the device breaks down.
The time for a particle to escape through a narrow opening in a confined space is termed the narrow escape problem, and is commonly studied in biophysics and cellular biology.
## First passage time of a 1D Brownian particle
One of the simplest and omnipresent stochastic systems is that of the Brownian particle in one dimension. This system describes the motion of a particle which moves stochastically in one dimensional space, with equal probability of moving to the left or to the right. Given that Brownian motion is used often as a tool to understand more complex phenomena, it is important to understand the probability of a first passage time of the Brownian particle of reaching some position distant from its start location. This is done through the following means.
The probability density function (PDF) for a particle in one dimension is found by solving the one-dimensional diffusion equation. (This equation states that the position probability density diffuses outward over time; it is analogous to, say, cream in a cup of coffee, if the cream were initially contained in some small region.)
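As an illustration only (not from the article), the heavy-tailed nature of first-passage times can be seen by simulating a symmetric ±1 random walk, a discrete stand-in for 1D Brownian motion, and recording when it first reaches a level 10 steps away; the function name and step cap below are our own choices.
```python
import random

def first_passage_time(d, max_steps=100_000):
    """Steps until a symmetric +/-1 walk started at 0 first reaches d, or None if capped."""
    x, t = 0, 0
    while x != d and t < max_steps:
        x += random.choice((-1, 1))
        t += 1
    return t if x == d else None

samples = [first_passage_time(10) for _ in range(500)]
hits = sorted(t for t in samples if t is not None)
print(f"reached the target in {len(hits)}/{len(samples)} runs; "
      f"median hitting time ~ {hits[len(hits) // 2]} steps")
```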
|
https://en.wikipedia.org/wiki/First-hitting-time_model
|
passage: Hence every odd move involves the smallest disk. It can also be observed that the smallest disk traverses the pegs f, t, r, f, t, r, etc. for odd height of the tower and traverses the pegs f, r, t, f, r, t, etc. for even height of the tower. This provides the following algorithm, which is easier to carry out by hand than the recursive algorithm.
In alternate moves:
- Move the smallest disk to the peg it has not recently come from.
- Move another disk legally (there will be only one possibility).
For the very first move, the smallest disk goes to peg t if h is odd and to peg r if h is even.
Also observe that:
- Disks whose ordinals have even parity move in the same sense as the smallest disk.
- Disks whose ordinals have odd parity move in opposite sense.
- If h is even, the remaining third peg during successive moves is t, r, f, t, r, f, etc.
- If h is odd, the remaining third peg during successive moves is r, t, f, r, t, f, etc.
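A direct Python transcription of this hand procedure (an illustrative sketch, not from the article; pegs are labelled 'f', 't' and 'r' as above):
```python
def iterative_hanoi(h):
    """Move h disks from peg 'f' to peg 't' by alternating the two rules above."""
    pegs = {'f': list(range(h, 0, -1)), 't': [], 'r': []}
    # The smallest disk cycles f -> t -> r for odd h and f -> r -> t for even h.
    cycle = ['f', 't', 'r'] if h % 2 == 1 else ['f', 'r', 't']
    small_at = 'f'
    moves = []
    for step in range(1, 2 ** h):                   # 2^h - 1 moves in total
        if step % 2 == 1:                           # odd move: the smallest disk
            nxt = cycle[(cycle.index(small_at) + 1) % 3]
            pegs[nxt].append(pegs[small_at].pop())
            moves.append((small_at, nxt))
            small_at = nxt
        else:                                       # even move: the only legal other move
            a, b = [p for p in pegs if p != small_at]
            top_a = pegs[a][-1] if pegs[a] else float('inf')
            top_b = pegs[b][-1] if pegs[b] else float('inf')
            src, dst = (a, b) if top_a < top_b else (b, a)
            pegs[dst].append(pegs[src].pop())
            moves.append((src, dst))
    return moves, pegs

moves, pegs = iterative_hanoi(3)
print(len(moves), pegs['t'])   # 7 [3, 2, 1]: the tower ends up on the target peg
```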
|
https://en.wikipedia.org/wiki/Tower_of_Hanoi
|
passage: In computer science, frequent subtree mining is the problem of finding all patterns in a given database whose support (a metric related to its number of occurrences in other subtrees) is over a given threshold. It is a more general form of the maximum agreement subtree problem.
## Definition
Frequent subtree mining is the problem of trying to find all of the patterns whose "support" is over a certain user-specified level, where "support" is calculated as the number of trees in a database which have at least one subtree isomorphic to a given pattern.
|
https://en.wikipedia.org/wiki/Frequent_subtree_mining
|
passage: The problem was caused by the index being recalculated thousands of times daily, and always being truncated (rounded down) to 3 decimal places, in such a way that the rounding errors accumulated. Recalculating the index for the same period using rounding to the nearest thousandth rather than truncation corrected the index value from 524.811 up to 1098.892.
For the examples below, sgn(x) refers to the sign function applied to the original number, x.
#### Rounding down
One may round down (or take the floor, or round toward negative infinity): y is the largest integer that does not exceed x.
$$
y = \mathrm{floor}(x) = \left\lfloor x \right\rfloor = -\left\lceil -x \right\rceil
$$
For example, 23.7 gets rounded to 23, and −23.2 gets rounded to −24.
#### Rounding up
One may also round up (or take the ceiling, or round toward positive infinity): y is the smallest integer that is not less than x.
$$
y = \operatorname{ceil}(x) = \left\lceil x \right\rceil = -\left\lfloor -x \right\rfloor
$$
For example, 23.2 gets rounded to 24, and −23.7 gets rounded to −23.
#### Rounding toward zero
One may also round toward zero (or truncate, or round away from infinity): y is the integer that is closest to x such that it is between 0 and x (included); i.e. y is the integer part of x, without its fraction digits.
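The three directed rounding modes just described can be checked with Python's math module (an illustrative sketch, not from the article):
```python
import math

for x in (23.7, 23.2, -23.2, -23.7):
    print(x,
          "floor:", math.floor(x),   # round down, toward negative infinity
          "ceil:", math.ceil(x),     # round up, toward positive infinity
          "trunc:", math.trunc(x))   # round toward zero: keep only the integer part
```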
|
https://en.wikipedia.org/wiki/Rounding
|
passage: ### Index
Definition: `Index(i)`: return the character at position i. Time complexity: O(log N) for a balanced rope, where N is the length of the rope.
To retrieve the i-th character, we begin a recursive search from the root node:
```java
@Override
public int indexOf(char ch, int startIndex) {
    // If the index lies beyond the weight of this node, the character is in the
    // right subtree; shift the index by the weight of the left subtree.
    if (startIndex > weight) {
        return right.indexOf(ch, startIndex - weight);
    }
    // Otherwise the character is in the left subtree.
    return left.indexOf(ch, startIndex);
}
```
For example, to find the character at i = 10 in Figure 2.1 shown on the right, start at the root node (A), find that 22 is greater than 10 and there is a left child, so go to the left child (B). 9 is less than 10, so subtract 9 from 10 (leaving 1) and go to the right child (D). Then because 6 is greater than 1 and there's a left child, go to the left child (G). 2 is greater than 1 and there's a left child, so go to the left child again (J). Finally 2 is greater than 1 but there is no left child, so the character at index 1 of the short string "na" (i.e. "n") is the answer. (1-based index)
### Concat
Definition: `Concat(S1, S2)`: concatenate two ropes, S1 and S2, into a single rope.
Time complexity: O(1) (or O(log N) time to compute the root weight).
A concatenation can be performed simply by creating a new root node with left = S1 and right = S2, which is constant time. The weight of the parent node is set to the length of the left child S1, which would take O(log N) time, if the tree is balanced.
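To illustrate this constant-time concatenation, here is a small Python sketch with an assumed node layout (it is not the article's Java code; class and function names are our own):
```python
class RopeNode:
    def __init__(self, value=None, left=None, right=None):
        self.value = value                 # leaf string, or None for internal nodes
        self.left = left
        self.right = right
        # weight = length of the left subtree (or of the leaf string itself)
        self.weight = len(value) if value is not None else rope_length(left)

def rope_length(node):
    """Total number of characters stored below `node`."""
    if node is None:
        return 0
    if node.value is not None:
        return len(node.value)
    return node.weight + rope_length(node.right)

def concat(s1, s2):
    """Concat(S1, S2): constant time apart from computing the new root's weight."""
    return RopeNode(left=s1, right=s2)

rope = concat(RopeNode(value="Hello "), RopeNode(value="world"))
print(rope.weight)   # 6, the length of the left child "Hello "
```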
|
https://en.wikipedia.org/wiki/Rope_%28data_structure%29
|
passage: For this reason, observables are identified with elements of an abstract C*-algebra A (that is one without a distinguished representation as an algebra of operators) and states are positive linear functionals on A. However, by using the GNS construction, we can recover Hilbert spaces that realize A as a subalgebra of operators.
Geometrically, a pure state on a C*-algebra A is a state that is an extreme point of the set of all states on A. By properties of the GNS construction these states correspond to irreducible representations of A.
The states of the C*-algebra of compact operators K(H) correspond exactly to the density operators, and therefore the pure states of K(H) are exactly the pure states in the sense of quantum mechanics.
The C*-algebraic formulation can be seen to include both classical and quantum systems. When the system is classical, the algebra of observables becomes an abelian C*-algebra. In that case the states become probability measures.
## History
The formalism of density operators and matrices was introduced in 1927 by John von Neumann and independently, but less systematically, by Lev Landau and later in 1946 by Felix Bloch. Von Neumann introduced the density matrix in order to develop both quantum statistical mechanics and a theory of quantum measurements. The name density matrix itself relates to its classical correspondence to a phase-space probability measure (probability distribution of position and momentum) in classical statistical mechanics, which was introduced by Eugene Wigner in 1932.
|
https://en.wikipedia.org/wiki/Density_matrix
|
passage: The American Medical Informatics Association created a board certification for medical informatics from the American Board of Preventive Medicine. The American Nurses Credentialing Center offers a board certification in Nursing Informatics. For Radiology Informatics, the CIIP (Certified Imaging Informatics Professional) certification was created by ABII (The American Board of Imaging Informatics) which was founded by SIIM (the Society for Imaging Informatics in Medicine) and ARRT (the American Registry of Radiologic Technologists) in 2005. The CIIP certification requires documented experience working in Imaging Informatics, formal testing and is a limited time credential requiring renewal every five years.
The exam tests for a combination of IT technical knowledge, clinical understanding, and project management experience thought to represent the typical workload of a PACS administrator or other radiology IT clinical support role. Certifications from PARCA (PACS Administrators Registry and Certifications Association) are also recognized. The five PARCA certifications are tiered from entry-level to architect level. The American Health Information Management Association offers credentials in medical coding, analytics, and data administration, such as Registered Health Information Administrator and Certified Coding Associate. Certifications are widely requested by employers in health informatics, and overall the demand for certified informatics workers in the United States is outstripping supply. The American Health Information Management Association reports that only 68% of applicants pass certification exams on the first try.
|
https://en.wikipedia.org/wiki/Health_informatics
|
passage: For example, the first five days might be (see the image on the right):
1. Dig a blue lake of width 1/3 passing within /3 of all dry land.
1. Dig a red lake of width 1/32 passing within /32 of all dry land.
1. Dig a green lake of width 1/33 passing within /33 of all dry land.
1. Extend the blue lake by a channel of width 1/34 passing within /34 of all dry land. (The small channel connects the thin blue lake to the thick one, near the middle of the image.)
1. Extend the red lake by a channel of width 1/35 passing within /35 of all dry land. (The tiny channel connects the thin red lake to the thick one, near the top left of the image.)
A variation of this construction can produce a countable infinite number of connected lakes with the same boundary: instead of extending the lakes in the order 1, 2, 0, 1, 2, 0, 1, 2, 0, ...., extend them in the order 0, 0, 1, 0, 1, 2, 0, 1, 2, 3, 0, 1, 2, 3, 4, ... and so on.
Wada basins
Wada basins are certain special basins of attraction studied in the mathematics of non-linear systems. A basin having the property that every neighborhood of every point on the boundary of that basin intersects at least three basins is called a Wada basin, or said to have the Wada property. Unlike the Lakes of Wada, Wada basins are often disconnected.
|
https://en.wikipedia.org/wiki/Lakes_of_Wada
|
passage: In the following we will consider the limit (L, φ) of a diagram F : J → C.
- Terminal objects. If J is the empty category there is only one diagram of shape J: the empty one (similar to the empty function in set theory). A cone to the empty diagram is essentially just an object of C. The limit of F is any object that is uniquely factored through by every other object. This is just the definition of a terminal object.
- Products. If J is a discrete category then a diagram F is essentially nothing but a family of objects of C, indexed by J. The limit L of F is called the product of these objects. The cone φ consists of a family of morphisms φX : L → F(X) called the projections of the product. In the category of sets, for instance, the products are given by Cartesian products and the projections are just the natural projections onto the various factors.
- Powers. A special case of a product is when the diagram F is a constant functor to an object X of C. The limit of this diagram is called the Jth power of X and denoted XJ.
- Equalizers. If J is a category with two objects and two parallel morphisms from one object to the other, then a diagram of shape J is a pair of parallel morphisms in C. The limit L of such a diagram is called an equalizer of those morphisms.
- Kernels. A kernel is a special case of an equalizer where one of the morphisms is a zero morphism.
- Pullbacks.
|
https://en.wikipedia.org/wiki/Limit_%28category_theory%29
|
passage: The proof of this theorem is not trivial, since it requires extending
$$
\mu_0
$$
from an algebra of sets to a potentially much bigger sigma-algebra, guaranteeing that the extension is unique (if
$$
\mu_0
$$
is
$$
\sigma
$$
-finite), and moreover that it does not fail to satisfy the sigma-additivity of the original function.
## Semi-ring and ring
### Definitions
For a given set
$$
\Omega,
$$
we call a family
$$
\mathcal{S}
$$
of subsets of
$$
\Omega
$$
a semi-ring of sets if it has the following properties:
-
$$
\varnothing \in \mathcal{S}
$$
- For all
$$
A, B \in \mathcal{S},
$$
we have
$$
A \cap B \in \mathcal{S}
$$
(closed under pairwise intersections)
- For all
$$
A, B \in \mathcal{S},
$$
there exists a finite number of disjoint sets
$$
K_i \in \mathcal{S}, i = 1, 2, \ldots, n,
$$
such that
$$
A \setminus B = \coprod_{i=1}^n K_i
$$
(relative complements can be written as finite disjoint unions).
|
https://en.wikipedia.org/wiki/Carath%C3%A9odory%27s_extension_theorem
|
passage: Any left R-module M can then be seen to be a right module over Rop, and any right module over R can be considered a left module over Rop.
- Modules over a Lie algebra are (associative algebra) modules over its universal enveloping algebra.
- If R and S are rings with a ring homomorphism f : R → S, then every S-module M is an R-module by defining rm = f(r)m. In particular, S itself is such an R-module.
## Submodules and homomorphisms
Suppose M is a left R-module and N is a subgroup of M. Then N is a submodule (or more explicitly an R-submodule) if for any n in N and any r in R, the product rn (or nr for a right R-module) is in N.
If X is any subset of an R-module M, then the submodule spanned by X is defined to be
$$
\langle X \rangle = \,\bigcap_{N\supseteq X} N
$$
where N runs over the submodules of M that contain X, or explicitly
$$
\left\{\sum_{i=1}^k r_ix_i \mid r_i \in R, x_i \in X\right\}
$$
, which is important in the definition of tensor products of modules.
|
https://en.wikipedia.org/wiki/Module_%28mathematics%29
|
passage: Exposure to PM is associated with respiratory diseases (such as aggravation of asthma, bronchitis, and rhinosinusitis) and cardiovascular effects (such as increased risk of heart attacks and arrhythmias due to systemic inflammation).
- Fine particles (PM2.5), with diameters less than 2.5 micrometers, can penetrate deep into the lungs, reaching the bronchioles and alveoli. They are associated with chronic rhinosinusitis (PM2.5 particles can deposit in the nasal passages and sinuses, leading to inflammation and chronic rhinosinusitis), respiratory diseases (exacerbation of asthma and COPD due to deep lung penetration), and cardiovascular diseases from systemic inflammation and oxidative stress.
- Ultrafine particles (PM0.1), with diameters less than 0.1 micrometers (100 nanometers), can enter the bloodstream and reach other organs, including the heart and brain. Health effects include neurological effects (potential contribution to neurodegenerative diseases such as Alzheimer's due to particles crossing the blood-brain barrier) and cardiovascular effects such as promotion of atherosclerosis and increased risk of heart attacks.
##### Mechanisms of health effects
Particles can cause health effects through several mechanisms: inflammation in the respiratory tract; oxidative stress via reactive oxygen species, leading to cellular damage; and systemic effects, such as translocation of ultrafine particles into the circulation, which affects organs beyond the lungs.
|
https://en.wikipedia.org/wiki/Particulate_matter
|
passage: The Pythagorean theorem, and hence this length, can also be derived from the law of cosines in trigonometry. In a right triangle, the cosine of an angle is the ratio of the leg adjacent to the angle to the hypotenuse. For a right angle γ (gamma), where the adjacent leg equals 0, the cosine of γ also equals 0. The law of cosines formulates that
$$
c^2 = a^2 + b^2 - 2ab\cos\theta
$$
holds for some angle θ (theta). By observing that the angle opposite the hypotenuse is right and noting that its cosine is 0, so in this case θ = γ = 90°:
$$
c^2 = a^2 + b^2 - 2ab\cos\theta = a^2 + b^2 \implies c = \sqrt{a^2 + b^2}.
$$
Many computer languages support the ISO C standard function hypot(x,y), which returns the value above. The function is designed not to fail where the straightforward calculation might overflow or underflow and can be slightly more accurate and sometimes significantly slower.
Some languages have extended the definition to higher dimensions. For example, C++17 supports
$$
\mbox{std::hypot}(x, y, z) = \sqrt{x^2 +y^2 + z^2}
$$
; this gives the length of the diagonal of a rectangular cuboid with edges x, y, and z.
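A brief check in Python (an illustrative sketch, not from the article): math.hypot mirrors the ISO C function, and since Python 3.8 it also accepts more than two arguments, like C++17's std::hypot.
```python
import math

print(math.hypot(3.0, 4.0))          # 5.0
print(math.hypot(3e200, 4e200))      # 5e+200; the naive sqrt(x*x + y*y) would overflow
print(math.hypot(1.0, 2.0, 2.0))     # 3.0, the diagonal of a 1 x 2 x 2 cuboid
```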
|
https://en.wikipedia.org/wiki/Hypotenuse
|
passage: Special alloys, either with low carbon content or with added carbon "getters" such as titanium and niobium (in types 321 and 347, respectively), can prevent this effect, but the latter require special heat treatment after welding to prevent the similar phenomenon of "knifeline attack". As its name implies, corrosion is limited to a very narrow zone adjacent to the weld, often only a few micrometers across, making it even less noticeable.
### Crevice corrosion
Crevice corrosion is a localized form of corrosion occurring in confined spaces (crevices), to which the access of the working fluid from the environment is limited. Formation of a differential aeration cell leads to corrosion inside the crevices. Examples of crevices are gaps and contact areas between parts, under gaskets or seals, inside cracks and seams, spaces filled with deposits, and under sludge piles.
Crevice corrosion is influenced by the crevice type (metal-metal, metal-non-metal), crevice geometry (size, surface finish), and metallurgical and environmental factors. The susceptibility to crevice corrosion can be evaluated with ASTM standard procedures. A critical crevice corrosion temperature is commonly used to rank a material's resistance to crevice corrosion.
### Hydrogen grooving
In the chemical industry, hydrogen grooving is the corrosion of piping at grooves created by the interaction of a corrosive agent, corroded pipe constituents, and hydrogen gas bubbles.
|
https://en.wikipedia.org/wiki/Corrosion
|
passage: This space of global 1-forms can be identified with the space of global sections of the tautological line bundle O(1) restricted to the cubic F and moreover:
Torelli-type Theorem : Let g' be the natural morphism from S to the grassmannian G(2,5) defined by the cotangent sheaf of S generated by its 5-dimensional space of global sections. Let F' be the union of the lines corresponding to g'(S). The threefold F' is isomorphic to F.
Thus knowing a Fano surface S, we can recover the threefold F.
By the Tangent Bundle Theorem, we can also understand geometrically the invariants of S:
a) Recall that the second Chern number of a rank 2 vector bundle on a surface is the number of zeroes of a generic section. For a Fano surface S, a 1-form w also defines a hyperplane section {w=0} of the cubic F in P4. The zeros of a generic w on S correspond bijectively to the lines contained in the smooth cubic surface that is the intersection of {w=0} and F; therefore we recover that the second Chern class of S equals 27.
b) Let w1, w2 be two 1-forms on S. The canonical divisor K on S associated to the canonical form w1 ∧ w2 parametrizes the lines on F that cut the plane P={w1=w2=0} into P4. Using w1 and w2 such that the intersection of P and F is the union of 3 lines, one can recover the fact that K2=45.
|
https://en.wikipedia.org/wiki/Fano_surface
|
passage: ### Antiderivative
The antiderivative (indefinite integral) of the real absolute value function is
$$
\int \left|x\right| dx = \frac{x\left|x\right|}{2} + C,
$$
where is an arbitrary constant of integration. This is not a complex antiderivative because complex antiderivatives can only exist for complex-differentiable (holomorphic) functions, which the complex absolute value function is not.
### Derivatives of compositions
The following two formulae are special cases of the chain rule:
$$
{d \over dx} f(|x|)={x \over |x|} (f'(|x|))
$$
if the absolute value is inside a function, and
$$
{d \over dx} |f(x)|={f(x) \over |f(x)|} f'(x)
$$
if another function is inside the absolute value. The derivative is always discontinuous at
$$
x=0
$$
in the first case and where
$$
f(x)=0
$$
in the second case.
Distance
The absolute value is closely related to the idea of distance. As noted above, the absolute value of a real or complex number is the distance from that number to the origin, along the real number line, for real numbers, or in the complex plane, for complex numbers, and more generally, the absolute value of the difference of two real or complex numbers is the distance between them.
|
https://en.wikipedia.org/wiki/Absolute_value
|
passage: Given a codimension
$$
\geq 2
$$
subscheme
$$
Z \subset X
$$
there is a Cartesian square
$$
\begin{matrix}
E & \longrightarrow & Bl_Z(X) \\
\downarrow & & \downarrow \\
Z & \longrightarrow & X
\end{matrix}
$$
From this there is an associated long exact sequence
$$
\cdots \to H^n(X) \to H^n(Z) \oplus H^n(Bl_Z(X)) \to H^n(E) \to H^{n+1}(X) \to \cdots
$$
If the subvariety
$$
Z
$$
is smooth, then the connecting morphisms are all trivial, hence
$$
H^n(Bl_Z(X))\oplus H^n(Z) \cong H^n(X) \oplus H^n(E)
$$
## Axioms and generalized cohomology theories
There are various ways to define cohomology for topological spaces (such as singular cohomology, Čech cohomology, Alexander–Spanier cohomology or sheaf cohomology). (Here sheaf cohomology is considered only with coefficients in a constant sheaf.) These theories give different answers for some spaces, but there is a large class of spaces on which they all agree.
|
https://en.wikipedia.org/wiki/Cohomology
|
passage: ## Global and local alignments
Global alignments, which attempt to align every residue in every sequence, are most useful when the sequences in the query set are similar and of roughly equal size. (This does not mean global alignments cannot start and/or end in gaps.) A general global alignment technique is the Needleman–Wunsch algorithm, which is based on dynamic programming. Local alignments are more useful for dissimilar sequences that are suspected to contain regions of similarity or similar sequence motifs within their larger sequence context. The Smith–Waterman algorithm is a general local alignment method based on the same dynamic programming scheme but with additional choices to start and end at any place.
Hybrid methods, known as semi-global or "glocal" (short for global-local) methods, search for the best possible partial alignment of the two sequences (in other words, a combination of one or both starts and one or both ends is stated to be aligned). This can be especially useful when the downstream part of one sequence overlaps with the upstream part of the other sequence. In this case, neither global nor local alignment is entirely appropriate: a global alignment would attempt to force the alignment to extend beyond the region of overlap, while a local alignment might not fully cover the region of overlap. Another case where semi-global alignment is useful is when one sequence is short (for example a gene sequence) and the other is very long (for example a chromosome sequence).
|
https://en.wikipedia.org/wiki/Sequence_alignment
|
passage: In the mathematical discipline of graph theory, a matching or independent edge set in an undirected graph is a set of edges without common vertices. In other words, a subset of the edges is a matching if each vertex appears in at most one edge of that matching. Finding a matching in a bipartite graph can be treated as a network flow problem.
## Definitions
Given a graph a matching M in G is a set of pairwise non-adjacent edges, none of which are loops; that is, no two edges share common vertices.
A vertex is matched (or saturated) if it is an endpoint of one of the edges in the matching. Otherwise the vertex is unmatched (or unsaturated).
A maximal matching is a matching M of a graph G that is not a subset of any other matching. A matching M of a graph G is maximal if every edge in G has a non-empty intersection with at least one edge in M. The following figure shows examples of maximal matchings (red) in three graphs.
A maximum matching (also known as maximum-cardinality matching) is a matching that contains the largest possible number of edges. There may be many maximum matchings. The matching number
$$
\nu(G)
$$
of a graph is the size of a maximum matching. Every maximum matching is maximal, but not every maximal matching is a maximum matching. The following figure shows examples of maximum matchings in the same three graphs.
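To make the maximal-versus-maximum distinction concrete, the short Python sketch below (an illustration, not from the article; the function name is our own) greedily builds a maximal matching, which depending on the edge order may be smaller than a maximum matching:
```python
def greedy_maximal_matching(edges):
    """Return a maximal matching: add each edge whose endpoints are both still unmatched."""
    matched = set()
    matching = []
    for u, v in edges:
        if u != v and u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Path graph a-b-c-d: taking the middle edge first gives a maximal matching of size 1,
# while the maximum matching {(a, b), (c, d)} has size 2.
print(greedy_maximal_matching([("b", "c"), ("a", "b"), ("c", "d")]))   # [('b', 'c')]
print(greedy_maximal_matching([("a", "b"), ("c", "d"), ("b", "c")]))   # [('a', 'b'), ('c', 'd')]
```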
|
https://en.wikipedia.org/wiki/Matching_%28graph_theory%29
|
passage: Nonlinear systems can produce highly complex behaviors including bifurcations, chaos, harmonics, and subharmonics which cannot be produced or analyzed using linear methods.
Polynomial signal processing is a type of non-linear signal processing, where polynomial systems may be interpreted as conceptually straightforward extensions of linear systems to the nonlinear case.
### Statistical
Statistical signal processing is an approach which treats signals as stochastic processes, utilizing their statistical properties to perform signal processing tasks. Statistical techniques are widely used in signal processing applications. For example, one can model the probability distribution of noise incurred when photographing an image, and construct techniques based on this model to reduce the noise in the resulting image.
### Graph
Graph signal processing generalizes signal processing tasks to signals living on non-Euclidean domains whose structure can be captured by a weighted graph. Graph signal processing presents several key aspects, such as signal sampling techniques, recovery techniques and time-varying techniques. Graph signal processing has been applied with success in the fields of image processing, computer vision and sound anomaly detection.
|
https://en.wikipedia.org/wiki/Signal_processing
|
passage: The exact value of the above (the worst-case number of comparisons during the heap construction) is known to be equal to:
$$
2 n - 2 s_2 (n) - e_2 (n),
$$
where s2(n) is the sum of all digits of the binary representation of n and e2(n) is the exponent of 2 in the prime factorization of n.
The average case is more complex to analyze, but it can be shown to asymptotically approach comparisons. Note that this paper uses Floyd's original terminology "siftup" for what is now called sifting down.
The Build-Max-Heap function that follows, converts an array A which stores a complete
binary tree with n nodes to a max-heap by repeatedly using Max-Heapify (down-heapify for a max-heap) in a bottom-up manner.
The array elements indexed by floor(n/2) + 1, floor(n/2) + 2, ..., n are all leaves for the tree (assuming that indices start at 1)—thus each is a one-element heap, and does not need to be down-heapified. Build-Max-Heap runs
Max-Heapify on each of the remaining tree nodes.
Build-Max-Heap (A):
for each index i from floor(length(A)/2) downto 1 do:
Max-Heapify(A, i)
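For reference, a runnable Python version of this bottom-up construction (a hedged sketch, not the article's code; it assumes 0-based array indices rather than the 1-based convention above):
```python
def max_heapify(a, i, n):
    """Sift a[i] down within a[0:n] so the subtree rooted at i satisfies the max-heap property."""
    while True:
        left, right, largest = 2 * i + 1, 2 * i + 2, i
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def build_max_heap(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # indices n//2 .. n-1 are leaves, already heaps
        max_heapify(a, i, n)

data = [4, 1, 3, 2, 16, 9, 10, 14, 8, 7]
build_max_heap(data)
print(data)   # [16, 14, 10, 8, 7, 9, 3, 2, 4, 1], a valid max-heap
```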
## Heap implementation
Heaps are commonly implemented with an array. Any binary tree can be stored in an array, but because a binary heap is always a complete binary tree, it can be stored compactly.
|
https://en.wikipedia.org/wiki/Binary_heap
|
passage: It is indicated in a variety of clinical situations, including: when the standard transthoracic echocardiogram is nondiagnostic; for detailed evaluation of abnormalities that are typically in the far field, such as the aorta and the left atrial appendage; evaluation of native or prosthetic heart valves; evaluation of cardiac masses; evaluation of endocarditis or valvular abscesses; or evaluation of a cardiac source of embolus. It is frequently used in the setting of atrial fibrillation or atrial flutter to facilitate the clinical decision with regard to anticoagulation, cardioversion and/or radiofrequency ablation.
Cardiac MRI utilizes special protocols to image heart structure and function with specific sequences for certain diseases such as hemochromatosis and amyloidosis.
Cardiac CT utilizes special protocols to image heart structure and function with particular emphasis on coronary arteries.
#### Interventional cardiology
Interventional cardiology is a branch of cardiology that deals specifically with the catheter based treatment of structural heart diseases. A large number of procedures can be performed on the heart by catheterization, including angiogram, angioplasty, atherectomy, and stent implantation. These procedures all involve insertion of a sheath into the femoral artery or radial artery (but, in practice, any large peripheral artery or vein) and cannulating the heart under visualization (most commonly fluoroscopy). This cannulation allows indirect access to the heart, bypassing the trauma caused by surgical opening of the chest.
|
https://en.wikipedia.org/wiki/Cardiology
|
passage: 80% of all flowering plants are hermaphroditic, meaning they contain both sexes in the same flower, while 5 percent of plant species are monoecious. The remaining 15% would therefore be dioecious (each plant unisexual). Plants that self-pollinate include several types of orchids, and sunflowers. Dandelions are capable of self-pollination as well as cross-pollination.
## Advantages
There are several advantages for self-pollinating flowers. Firstly, if a given genotype is well-suited for an environment, self-pollination helps to keep this trait stable in the species. Not being dependent on pollinating agents allows self-pollination to occur when bees and wind are nowhere to be found. Self-pollination or cross pollination can be an advantage when the number of flowers is small or they are widely spaced. During self-pollination, the pollen grains are not transmitted from one flower to another. As a result, there is less wastage of pollen. Also, self-pollinating plants do not depend on external carriers. They also cannot make changes in their characters and so the features of a species can be maintained with purity.
Self-pollination also helps to preserve parental characters as the gametes from the same flower are evolved. It is not necessary for flowers to produce nectar, scent, or to be colourful in order to attract pollinators.
## Disadvantages
The disadvantages of self-pollination come from a lack of variation that allows no adaptation to the changing environment or potential pathogen attack.
|
https://en.wikipedia.org/wiki/Self-pollination
|
passage: In its classical use, the KdV equation is applicable for wavelengths λ in excess of about five times the average water depth h, so for λ > 5 h; and for the period τ greater than
$$
\scriptstyle 7 \sqrt{h/g}
$$
with g the strength of the gravitational acceleration. To envisage the position of the KdV equation within the scope of classical wave approximations, it distinguishes itself in the following ways:
- Korteweg–de Vries equation — describes the forward propagation of weakly nonlinear and dispersive waves, for long waves with λ > 7 h.
- Shallow water equations — are also nonlinear and do have amplitude dispersion, but no frequency dispersion; they are valid for very long waves, λ > 20 h.
- Boussinesq equations — have the same range of validity as the KdV equation (in their classical form), but allow for wave propagation in arbitrary directions, so not only forward-propagating waves. The drawback is that the Boussinesq equations are often more difficult to solve than the KdV equation; and in many applications wave reflections are small and may be neglected.
- Airy wave theory — has full frequency dispersion, so valid for arbitrary depth and wavelength, but is a linear theory without amplitude dispersion, limited to low-amplitude waves.
- Stokes' wave theory — a perturbation-series approach to the description of weakly nonlinear and dispersive waves, especially successful in deeper water for relative short wavelengths, as compared to the water depth.
|
https://en.wikipedia.org/wiki/Cnoidal_wave
|
passage: A- and B-DNA are very similar, forming right-handed helices, whereas Z-DNA is a left-handed helix with a zig-zag phosphate backbone. Z-DNA is thought to play a specific role in chromatin structure and transcription because of the properties of the junction between B- and Z-DNA.
At the junction of B- and Z-DNA, one pair of bases is flipped out from normal bonding. These play a dual role as a site of recognition by many proteins and as a sink for torsional stress from RNA polymerase or nucleosome binding. DNA stores information as a code built from four chemical bases: adenine (A), guanine (G), cytosine (C), and thymine (T). The order and sequence of these bases encode the information used to build and regulate the organism. A pairs with T and C pairs with G to form the DNA base pairs. Sugar and phosphate molecules attach to these bases, and the resulting nucleotides are arranged in two long spiral strands known as the double helix. In eukaryotes, DNA is contained in the cell nucleus, and it provides strength and direction to the mechanism of heredity. Hydrogen bonds form between the nitrogenous bases of the two DNA strands.
### Nucleosomes and beads-on-a-string
The basic repeat element of chromatin is the nucleosome, interconnected by sections of linker DNA, a far shorter arrangement than pure DNA in solution.
|
https://en.wikipedia.org/wiki/Chromatin
|
passage: For example, the sapje (sap) mutant is the zebrafish orthologue of human Duchenne muscular dystrophy (DMD). Machuca-Tzili and co-workers applied zebrafish to determine the role of the alternative splicing factor, MBNL, in myotonic dystrophy type 1 (DM1) pathogenesis. More recently, Todd et al. described a new zebrafish model designed to explore the impact of CUG repeat expression during early development in DM1 disease. Zebrafish is also an excellent animal model to study congenital muscular dystrophies including CMD Type 1 A (CMD 1A) caused by mutation in the human laminin α2 (LAMA2) gene. The zebrafish, because of its advantages discussed above, and in particular the ability of zebrafish embryos to absorb chemicals, has become a model of choice in screening and testing new drugs against muscular dystrophies.
### Bone physiology and pathology
Zebrafish have been used as model organisms for bone metabolism, tissue turnover, and resorbing activity. These processes are largely evolutionary conserved. They have been used to study osteogenesis (bone formation), evaluating differentiation, matrix deposition activity, and cross-talk of skeletal cells, to create and isolate mutants modeling human bone diseases, and test new chemical compounds for the ability to revert bone defects. The larvae can be used to follow new (de novo) osteoblast formation during bone development. They start mineralising bone elements as early as 4 days post fertilisation.
|
https://en.wikipedia.org/wiki/Zebrafish
|
passage: A cover composed of subbasic sets is called a subbasic cover.
For subcollection
$$
S
$$
of the power set
$$
\wp(X),
$$
there is a unique topology having
$$
S
$$
as a subbase; it is the intersection of all topologies on
$$
X
$$
containing
$$
S
$$
. In general, however, the converse is not true, i.e. there is no unique subbasis for a given topology.
Thus, we can start with a fixed topology and find subbases for that topology, and we can also start with an arbitrary subcollection of the power set
$$
\wp(X)
$$
and form the topology generated by that subcollection. We can freely use either equivalent definition above; indeed, in many cases, one of the three conditions is more useful than the others.
### Alternative definition
Less commonly, a slightly different definition of subbase is given which requires that the subbase
$$
\mathcal{B}
$$
cover
$$
X.
$$
In this case,
$$
X
$$
is the union of all sets contained in
$$
\mathcal{B}.
$$
This means that there can be no confusion regarding the use of nullary intersections in the definition.
However, this definition is not always equivalent to the three definitions above.
|
https://en.wikipedia.org/wiki/Subbase
|
passage: The Dedekind zeta-function of K is then defined by
$$
\zeta_K(s) = \sum_a \frac{1}{(Na)^s}
$$
for every complex number s with real part > 1. The sum extends over all non-zero ideals a of OK.
The Dedekind zeta-function satisfies a functional equation and can be extended by analytic continuation to the whole complex plane. The resulting function encodes important information about the number field K. The extended Riemann hypothesis asserts that for every number field K and every complex number s with ζK(s) = 0: if the real part of s is between 0 and 1, then it is in fact 1/2.
The ordinary Riemann hypothesis follows from the extended one if one takes the number field to be
$$
\mathbb Q
$$
, with ring of integers
$$
\mathbb Z
$$
.
The ERH implies an effective version of the Chebotarev density theorem: if L/K is a finite Galois extension with Galois group G, and C a union of conjugacy classes of G, the number of unramified primes of K of norm below x with Frobenius conjugacy class in C is
$$
\frac{|C|}{|G|}\Bigl(\operatorname{Li}(x)+O\bigl(\sqrt x(n\log x+\log|\Delta|)\bigr)\Bigr),
$$
where the constant implied in the big-O notation is absolute, n is the degree of L over Q, and Δ its discriminant.
|
https://en.wikipedia.org/wiki/Generalized_Riemann_hypothesis
|
passage: The Clayton canonical vine copula allows for the occurrence of extreme downside events and has been successfully applied in portfolio optimization and risk management applications. The model is able to reduce the effects of extreme downside correlations and produces improved statistical and economic performance compared to scalable elliptical dependence copulas such as the Gaussian and Student-t copula.
Other models developed for risk management applications are panic copulas that are glued with market estimates of the marginal distributions to analyze the effects of panic regimes on the portfolio profit and loss distribution. Panic copulas are created by Monte Carlo simulation, mixed with a re-weighting of the probability of each scenario.
As regards derivatives pricing, dependence modelling with copula functions is widely used in applications of financial risk assessment and actuarial analysis – for example in the pricing of collateralized debt obligations (CDOs). Some believe the methodology of applying the Gaussian copula to credit derivatives to be one of the causes of the 2008 financial crisis; see .
Despite this perception, there are documented attempts within the financial industry, occurring before the crisis, to address the limitations of the Gaussian copula and of copula functions more generally, specifically the lack of dependence dynamics. The Gaussian copula is lacking as it only allows for an elliptical dependence structure, as dependence is only modeled using the variance-covariance matrix.
|
https://en.wikipedia.org/wiki/Copula_%28statistics%29
|
passage: Burnside's lemma, sometimes also called Burnside's counting theorem, the Cauchy–Frobenius lemma, or the orbit-counting theorem, is a result in group theory that is often useful in taking account of symmetry when counting mathematical objects. It was discovered by Augustin Louis Cauchy and Ferdinand Georg Frobenius, and became well known after William Burnside quoted it. The result enumerates orbits of a symmetry group acting on some objects: that is, it counts distinct objects, considering objects symmetric to each other as the same; or counting distinct objects up to a symmetry equivalence relation; or counting only objects in canonical form. For example, in describing possible organic compounds of certain type, one considers them up to spatial rotation symmetry: different rotated drawings of a given molecule are chemically identical. (However a mirror reflection might give a different compound.)
Formally, let
$$
G
$$
be a finite group that acts on a set
$$
X
$$
.
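Burnside's lemma states that the number of orbits equals the average, over all group elements, of the number of elements of X each one fixes. The brute-force Python sketch below (an illustration, not from the article) applies this to two-coloured necklaces of four beads counted up to rotation.
```python
def count_orbits_under_rotation(n_beads, n_colours):
    """Number of colourings of a cycle of n_beads beads, up to rotation, via Burnside's lemma."""
    total_fixed = 0
    for k in range(n_beads):                          # group element: rotation by k positions
        fixed = 0
        for c in range(n_colours ** n_beads):         # enumerate every colouring
            col = [(c // n_colours ** i) % n_colours for i in range(n_beads)]
            if col == col[k:] + col[:k]:              # unchanged by this rotation?
                fixed += 1
        total_fixed += fixed
    return total_fixed // n_beads                     # average number of fixed colourings

print(count_orbits_under_rotation(4, 2))   # 6 distinct necklaces: 0000, 0001, 0011, 0101, 0111, 1111
```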
|
https://en.wikipedia.org/wiki/Burnside%27s_lemma
|
passage: Interpolation search is an algorithm for searching for a key in an array that has been ordered by numerical values assigned to the keys (key values). It was first described by W. W. Peterson in 1957. Interpolation search resembles the method by which people search a telephone directory for a name (the key value by which the book's entries are ordered): in each step the algorithm calculates where in the remaining search space the sought item might be, based on the key values at the bounds of the search space and the value of the sought key, usually via a linear interpolation. The key value actually found at this estimated position is then compared to the key value being sought. If it is not equal, then depending on the comparison, the remaining search space is reduced to the part before or after the estimated position. This method will only work if calculations on the size of differences between key values are sensible.
By comparison, binary search always chooses the middle of the remaining search space, discarding one half or the other, depending on the comparison between the key found at the estimated position and the key sought — it does not require numerical values for the keys, just a total order on them. The remaining search space is reduced to the part before or after the estimated position. The linear search uses equality only as it compares elements one-by-one from the start, ignoring any sorting.
On average the interpolation search makes about log(log(n)) comparisons (if the elements are uniformly distributed), where n is the number of elements to be searched.
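A compact Python sketch of the idea (an illustration, not from the article; it assumes a sorted list of numeric keys):
```python
def interpolation_search(a, key):
    """Return an index of `key` in the sorted list `a`, or -1 if it is absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= key <= a[hi]:
        if a[hi] == a[lo]:                  # all remaining keys equal; avoid division by zero
            break
        # Estimate where the key should sit between the bounds (linear interpolation).
        pos = lo + (hi - lo) * (key - a[lo]) // (a[hi] - a[lo])
        if a[pos] == key:
            return pos
        if a[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return lo if lo <= hi and a[lo] == key else -1

data = list(range(0, 1000, 7))               # uniformly spaced keys favour interpolation search
print(interpolation_search(data, 343))       # 49, since 343 == 7 * 49
print(interpolation_search(data, 5))         # -1, not present
```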
|
https://en.wikipedia.org/wiki/Interpolation_search
|
passage: First, the sign of the left-hand side will be negative since either all three of the ratios are negative, the case where the line misses the triangle (see diagram), or one is negative and the other two are positive, the case where the line crosses two sides of the triangle.
To check the magnitude, construct perpendiculars from A, B, C to the line and let their lengths be a, b, c respectively. Then by similar triangles it follows that
$$
\left|\frac{\overline{AF}}{\overline{FB}}\right| = \left|\frac{a}{b}\right|, \quad \left|\frac{\overline{BD}}{\overline{DC}}\right| = \left|\frac{b}{c}\right|, \quad \left|\frac{\overline{CE}}{\overline{EA}}\right| = \left|\frac{c}{a}\right|.
$$
Therefore,
$$
\left|\frac{\overline{AF}}{\overline{FB}}\right| \times \left|\frac{\overline{BD}}{\overline{DC}}\right| \times \left|\frac{\overline{CE}}{\overline{EA}}\right| = \left| \frac{a}{b} \times \frac{b}{c} \times \frac{c}{a} \right| = 1.
$$
For a simpler, if less symmetrical way to check the magnitude, draw parallel to where meets at .
|
https://en.wikipedia.org/wiki/Menelaus%27s_theorem
|
passage: ## Polynomial evaluation and long division
Given the polynomial
$$
p(x) = \sum_{i=0}^n a_i x^i = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_n x^n,
$$
where
$$
a_0, \ldots, a_n
$$
are constant coefficients, the problem is to evaluate the polynomial at a specific value
$$
x_0
$$
of
$$
x.
$$
For this, a new sequence of constants is defined recursively as follows:
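$$
\begin{align}
b_n & := a_n \\
b_{n-1} & := a_{n-1} + b_n x_0 \\
& \;\;\vdots \\
b_0 & := a_0 + b_1 x_0 .
\end{align}
$$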
Then
$$
b_0
$$
is the value of
$$
p(x_0)
$$
.
To see why this works, the polynomial can be written in the form
$$
p(x) = a_0 + x \bigg(a_1 + x \Big(a_2 + x \big(a_3 + \cdots + x(a_{n-1} + x \, a_n) \cdots \big) \Big) \bigg) \ .
$$
Thus, by iteratively substituting the
$$
b_i
$$
into the expression,
$$
\begin{align}
p(x_0) & = a_0 + x_0\Big(a_1 + x_0\big(a_2 + \cdots + x_0(a_{n-1} + b_n x_0) \cdots \big)\Big) \\
& = a_0 + x_0\Big(a_1 + x_0\big(a_2 + \cdots + x_0 b_{n-1}\big)\Big) \\
& ~~
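Written as a loop, the nested evaluation above becomes (a minimal Python sketch, not from the article):
```python
def horner(coeffs, x0):
    """Evaluate p(x0) for coeffs = [a_0, a_1, ..., a_n] using Horner's rule."""
    b = 0
    for a in reversed(coeffs):    # b_n = a_n, then b_i = a_i + b_{i+1} * x0
        b = a + b * x0
    return b                      # b_0 = p(x0)

# Example: p(x) = 2 + 3x + x^2 at x0 = 4 gives 2 + 12 + 16 = 30.
print(horner([2, 3, 1], 4))       # 30
```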
|
https://en.wikipedia.org/wiki/Horner%27s_method
|
passage: The path of observer C is given by
$$
(T, \, r\cos(\omega T), \, r\sin(\omega T), \, 0)
$$
, where
$$
T
$$
is the current coordinate time. When r and
$$
\omega
$$
are constant,
$$
dx = -r \omega \sin(\omega T) \, dT
$$
and
$$
dy = r \omega \cos(\omega T) \, dT
$$
. The incremental proper time formula then becomes
$$
d\tau
= \sqrt{dT^2 - \left(\frac{r \omega}{c}\right)^2 \sin^2(\omega T)\; dT^2 - \left(\frac{r \omega}{c}\right)^2 \cos^2(\omega T) \; dT^2}
= dT\sqrt{1 - \left ( \frac{r\omega}{c} \right )^2}.
$$
So for an observer rotating at a constant distance of r from a given point in spacetime at a constant angular rate of ω between coordinate times
$$
T_1
$$
and
$$
T_2
$$
, the proper time experienced will be
$$
\int_{T_1}^{T_2} d\tau
= (T_2 - T_1) \sqrt{ 1 - \left ( \frac{r\omega}{c} \right )^2}
= \Delta T \sqrt{1 - v^2/c^2},
$$
as
$$
v = r \omega
$$
for a rotating observer.
|
https://en.wikipedia.org/wiki/Proper_time
|
passage: Bioinorganic chemistry is a field that examines the role of metals in biology. Bioinorganic chemistry includes the study of both natural phenomena such as the behavior of metalloproteins as well as artificially introduced metals, including those that are non-essential, in medicine and toxicology. Many biological processes such as respiration depend upon molecules that fall within the realm of inorganic chemistry. The discipline also includes the study of inorganic models or mimics that imitate the behaviour of metalloproteins.
As a mix of biochemistry and inorganic chemistry, bioinorganic chemistry is important in elucidating the implications of electron-transfer proteins, substrate bindings and activation, atom and group transfer chemistry as well as metal properties in biological chemistry. The successful development of truly interdisciplinary work is necessary to advance bioinorganic chemistry.
## Composition of living organisms
About 99% of a mammal's mass consists of the elements carbon, nitrogen, calcium, sodium, chlorine, potassium, hydrogen, phosphorus, oxygen and sulfur. The organic compounds (proteins, lipids and carbohydrates) contain the majority of the carbon and nitrogen, and most of the oxygen and hydrogen is present as water. The entire collection of metal-containing biomolecules in a cell is called the metallome.
|
https://en.wikipedia.org/wiki/Bioinorganic_chemistry
|
passage: However, the abstract notion of a compact Riemann surface is always algebraizable (The Riemann's existence theorem, Kodaira embedding theorem.), but it is not easy to verify which compact complex analytic spaces are algebraizable. In fact, Hopf found a class of compact complex manifolds without nonconstant meromorphic functions. However, there is a Siegel result that gives the necessary conditions for compact complex manifolds to be algebraic. The generalization of the Riemann-Roch theorem to several complex variables was first extended to compact analytic surfaces by Kodaira, Kodaira also extended the theorem to three-dimensional, and n-dimensional Kähler varieties. Serre formulated the Riemann–Roch theorem as a problem of dimension of coherent sheaf cohomology, and also Serre proved Serre duality. Cartan and Serre proved the following property: the cohomology group is finite-dimensional for a coherent sheaf on a compact complex manifold M. Riemann–Roch on a Riemann surface for a vector bundle was proved by Weil in 1938.
Hirzebruch generalized the theorem to compact complex manifolds in 1954, and Grothendieck generalized it to a relative version (relative statements about morphisms). The next step is the generalization, to higher dimensions, of the result that compact Riemann surfaces are projective. In particular, consider the conditions under which a compact complex submanifold X can be embedded into the complex projective space
$$
\mathbb{CP}^n
$$
.
|
https://en.wikipedia.org/wiki/Function_of_several_complex_variables
|
passage: The sense of the black/white bit is then inverted (for example, 0=black, 1=white). Everything becomes white. This momentarily breaks the invariant that reachable objects are black, but a full marking phase follows immediately, to mark them black again. Once this is done, all unreachable memory is white. No "sweep" phase is necessary.
The mark and don't sweep strategy requires cooperation between the allocator and collector, but is incredibly space efficient since it only requires one bit per allocated pointer (which most allocation algorithms require anyway). However, this upside is somewhat mitigated, since most of the time large portions of memory are wrongfully marked black (used), making it hard to give resources back to the system (for use by other allocators, threads, or processes) in times of low memory usage.
The mark and don't sweep strategy can therefore be seen as a compromise between the upsides and downsides of the mark and sweep and the stop and copy strategies.
### Generational GC (ephemeral GC)
It has been empirically observed that in many programs, the most recently created objects are also those most likely to become unreachable quickly (known as infant mortality or the generational hypothesis). A generational GC (also known as ephemeral GC) divides objects into generations and, on most cycles, will place only the objects of a subset of generations into the initial white (condemned) set.
|
https://en.wikipedia.org/wiki/Tracing_garbage_collection
|
passage: The SVG format does not have a compression scheme of its own, but due to the textual nature of XML, an SVG graphic can be compressed using a program such as gzip. Because of its scripting potential, SVG is a key component in web applications: interactive web pages that look and act like applications.
#### Other 2D vector formats
- AFDesign (Affinity Designer document)
- AI (Adobe Illustrator Artwork)— proprietary file format developed by Adobe Systems
- CDR—proprietary format for CorelDRAW vector graphics editor
- !DRAW—a native vector graphic format (in several backward compatible versions) for the RISC-OS computer system begun by Acorn in the mid-1980s and still present on that platform today
- DrawingML—used in Office Open XML documents
- GEM—metafiles interpreted and written by the Graphics Environment Manager VDI subsystem
- GLE (Graphics Layout Engine)—graphics scripting language
- HP-GL (Hewlett-Packard Graphics Language)—introduced on Hewlett-Packard plotters, but generalized into a printer language
- HVIF (Haiku Vector Icon Format)
- Lottie—format for vector graphics animation
- MathML (Mathematical Markup Language)—an application of XML for describing mathematical notations
- NAPLPS (North American Presentation Layer Protocol Syntax)
- ODG (OpenDocument Graphics)
- PGML (Precision Graphics Markup Language)—a
|
https://en.wikipedia.org/wiki/Image_file_format
|
passage: $$
Equivalently, with the interaction Lagrangian , it is
$$
S=\sum_{n=0}^{\infty}\frac{i^n}{n!} \left(\prod_{j=1}^n \int d^4 x_j\right) \mathcal{T}\left\{\prod_{j=1}^n \mathcal{L}_V\left(x_j\right)\right\} \equiv\sum_{n=0}^{\infty}S^{(n)}\;.
$$
A Feynman diagram is a graphical representation of a single summand in the Wick's expansion of the time-ordered product in the th-order term of the Dyson series of the -matrix,
$$
\mathcal{T}\prod_{j=1}^n\mathcal{L}_V\left(x_j\right)=\sum_{\text{A}}(\pm)\mathcal{N}\prod_{j=1}^n\mathcal{L}_V\left(x_j\right)\;,
$$
where signifies the normal-ordered product of the operators and (±) takes care of the possible sign change when commuting the fermionic operators to bring them together for a contraction (a propagator) and represents all possible contractions.
|
https://en.wikipedia.org/wiki/Feynman_diagram
|
passage: Here
$$
\otimes
$$
denotes the tensor product of vector bundles. It follows that the dimensions of the two cohomology groups are equal:
$$
h^i(X,E)=h^{n-i}(X,K_X\otimes E^{\ast}).
$$
As in Poincaré duality, the isomorphism in Serre duality comes from the cup product in sheaf cohomology. Namely, the composition of the cup product with a natural trace map on
$$
H^n(X,K_X)
$$
is a perfect pairing:
$$
H^i(X,E)\times H^{n-i}(X,K_X\otimes E^{\ast})\to H^n(X,K_X)\to k.
$$
The trace map is the analog for coherent sheaf cohomology of integration in de Rham cohomology.
### Differential-geometric theorem
Serre also proved the same duality statement for X a compact complex manifold and E a holomorphic vector bundle.
Here, the Serre duality theorem is a consequence of Hodge theory. Namely, on a compact complex manifold
$$
X
$$
equipped with a Riemannian metric, there is a Hodge star operator:
$$
\star: \Omega^p(X) \to \Omega^{2n-p}(X),
$$
where
$$
\dim_{\mathbb{C}} X = n
$$
.
|
https://en.wikipedia.org/wiki/Serre_duality
|
passage: Usually the first strategy for identifying an unknown compound is to compare its experimental mass spectrum against a library of mass spectra. If no matches result from the search, then manual interpretation or software assisted interpretation of mass spectra must be performed. Computer simulation of ionization and fragmentation processes occurring in mass spectrometer is the primary tool for assigning structure or peptide sequence to a molecule. An a priori structural information is fragmented in silico and the resulting pattern is compared with observed spectrum. Such simulation is often supported by a fragmentation library that contains published patterns of known decomposition reactions. Software taking advantage of this idea has been developed for both small molecules and proteins.
Analysis of mass spectra can also be spectra with accurate mass. A mass-to-charge ratio value (m/z) with only integer precision can represent an immense number of theoretically possible ion structures; however, more precise mass figures significantly reduce the number of candidate molecular formulas. A computer algorithm called formula generator calculates all molecular formulas that theoretically fit a given mass with specified tolerance.
A recent technique for structure elucidation in mass spectrometry, called precursor ion fingerprinting, identifies individual pieces of structural information by conducting a search of the tandem spectra of the molecule under investigation against a library of the product-ion spectra of structurally characterized precursor ions.
## Applications
Mass spectrometry has both qualitative and quantitative uses.
|
https://en.wikipedia.org/wiki/Mass_spectrometry
|
passage: This is the most commonly employed sensing technology for general purpose pressure measurement.
- Capacitive: Uses a diaphragm and pressure cavity to create a variable capacitor to detect strain due to applied pressure, capacitance decreasing as pressure deforms the diaphragm. Common technologies use metal, ceramic, and silicon diaphragms. Capacitive pressure sensors are being integrated into CMOS technology and it is being explored if thin 2D materials can be used as diaphragm material.
- Electromagnetic: Measures the displacement of a diaphragm by means of changes in inductance (reluctance), linear variable differential transformer (LVDT), Hall effect, or by eddy current principle.
- Piezoelectric: Uses the piezoelectric effect in certain materials such as quartz to measure the strain upon the sensing mechanism due to pressure. This technology is commonly employed for the measurement of highly dynamic pressures. As the basic principle is dynamic, no static pressures can be measured with piezoelectric sensors.
- Strain-Gauge: Strain gauge based pressure sensors also use a pressure sensitive element where metal strain gauges are glued on or thin-film gauges are applied on by sputtering. This measuring element can either be a diaphragm or for metal foil gauges measuring bodies in can-type can also be used. The big advantages of this monolithic can-type design are an improved rigidity and the capability to measure highest pressures of up to 15,000 bar.
|
https://en.wikipedia.org/wiki/Pressure_measurement
|
passage: On the other hand, if
$$
G
$$
is compact, then every finite-dimensional representation
$$
\Pi
$$
of
$$
G
$$
admits an inner product with respect to which
$$
\Pi
$$
is unitary, showing that
$$
\Pi
$$
decomposes as a sum of irreducibles. Similarly, if
$$
\mathfrak{g}
$$
is a complex semisimple Lie algebra, every finite-dimensional representation of
$$
\mathfrak{g}
$$
is a sum of irreducibles. Weyl's original proof of this used the unitarian trick: Every such
$$
\mathfrak{g}
$$
is the complexification of the Lie algebra of a simply connected compact Lie group
$$
K
$$
. Since
$$
K
$$
is simply connected, there is a one-to-one correspondence between the finite-dimensional representations of
$$
K
$$
and of
$$
\mathfrak{g}
$$
. Thus, the just-mentioned result about representations of compact groups applies. It is also possible to prove semisimplicity of representations of
$$
\mathfrak{g}
$$
directly by algebraic means, as in Section 10.3 of Hall's book.
See also: Fusion category (which are semisimple).
|
https://en.wikipedia.org/wiki/Semi-simplicity
|
passage: Often one wishes to know the image quality in pixels per inch (PPI) that would be suitable for a given output device. If the choice is too low, then the quality will be below what the device is capable of—loss of quality—and if the choice is too high then pixels will be stored unnecessarily—wasted disk space. The ideal pixel density (PPI) depends on the output format, output device, the intended use and artistic choice. For inkjet printers measured in DPI it is generally good practice to use half or less than the DPI to determine the PPI. For example, an image intended for a printer capable of 600 dpi could be created at 300 ppi. When using other technologies such as AM or FM screen printing, there are often published screening charts that indicate the ideal PPI for a printing method.
Using the DPI or LPI of a printer remains useful to determine PPI until one reaches larger formats, such as 36" or higher, as the factor of visual acuity then becomes more important to consider. If a print can be viewed close up, then one may choose the printer device limits. However, if a poster, banner or billboard will be viewed from far away then it is possible to use a much lower PPI.
## Computer displays
The PPI/PPCM of a computer display is related to the size of the display in inches/centimetres and the total number of pixels in the horizontal and vertical directions. This measurement is often referred to as dots per inch, though that measurement more accurately refers to the resolution of a computer printer.
|
https://en.wikipedia.org/wiki/Pixel_density
|
passage: A geometric description of irreducible representations of such groups, including the above-mentioned cuspidal representations, is obtained by Deligne-Lusztig theory, which constructs such representation in the l-adic cohomology of Deligne-Lusztig varieties.
The similarity of the representation theory of
$$
S_n
$$
and
$$
GL_n(\mathbf F_q)
$$
goes beyond finite groups. The philosophy of cusp forms highlights the kinship of representation theoretic aspects of these types of groups with general linear groups of local fields such as Qp and of the ring of adeles, see .
## Outlook—Representations of compact groups
The theory of representations of compact groups may be, to some degree, extended to locally compact groups. The representation theory unfolds in this context great importance for harmonic analysis and the study of automorphic forms. For proofs, further information and for a more detailed insight which is beyond the scope of this chapter please consult [4] and [5].
### Definition and properties
A topological group is a group together with a topology with respect to which the group composition and the inversion are continuous.
Such a group is called compact, if any cover of
$$
G,
$$
which is open in the topology, has a finite subcover. Closed subgroups of a compact group are compact again.
Let
$$
G
$$
be a compact group and let
$$
V
$$
be a finite-dimensional
$$
\Complex
$$
–vector space.
|
https://en.wikipedia.org/wiki/Representation_theory_of_finite_groups
|
passage: In mathematics, a von Neumann algebra or W*-algebra is a -algebra of bounded operators on a Hilbert space that is closed in the weak operator topology and contains the identity operator. It is a special type of C*-algebra.
Von Neumann algebras were originally introduced by John von Neumann, motivated by his study of single operators, group representations, ergodic theory and quantum mechanics. His double commutant theorem shows that the analytic definition is equivalent to a purely algebraic definition as an algebra of symmetries.
Two basic examples of von Neumann algebras are as follows:
- The ring
$$
L^\infty(\mathbb R)
$$
of essentially bounded measurable functions on the real line is a commutative von Neumann algebra, whose elements act as multiplication operators by pointwise multiplication on the Hilbert space
$$
L^2(\mathbb R)
$$
of square-integrable functions.
- The algebra
$$
\mathcal B(\mathcal H)
$$
of all bounded operators on a Hilbert space
$$
\mathcal H
$$
is a von Neumann algebra, non-commutative if the Hilbert space has dimension at least
$$
2
$$
.
Von Neumann algebras were first studied by in 1929; he and Francis Murray developed the basic theory, under the original name of rings of operators, in a series of papers written in the 1930s and 1940s (; ), reprinted in the collected works of .
|
https://en.wikipedia.org/wiki/Von_Neumann_algebra
|
passage: One version is: if X is proper over a quasi-compact scheme Y and X has only finitely many irreducible components (which is automatic for Y noetherian), then there is a projective surjective morphism g: W → X such that W is projective over Y. Moreover, one can arrange that g is an isomorphism over a dense open subset U of X, and that g−1(U) is dense in W. One can also arrange that W is integral if X is integral.
- Nagata's compactification theorem, as generalized by Deligne, says that a separated morphism of finite type between quasi-compact and quasi-separated schemes factors as an open immersion followed by a proper morphism.
- Proper morphisms between locally noetherian schemes preserve coherent sheaves, in the sense that the higher direct images Rif∗(F) (in particular the direct image f∗(F)) of a coherent sheaf F are coherent (EGA III, 3.2.1). (Analogously, for a proper map between complex analytic spaces, Grauert and Remmert showed that the higher direct images preserve coherent analytic sheaves.) As a very special case: the ring of regular functions on a proper scheme X over a field k has finite dimension as a k-vector space. By contrast, the ring of regular functions on the affine line over k is the polynomial ring k[x], which does not have finite dimension as a k-vector space.
|
https://en.wikipedia.org/wiki/Proper_morphism
|
passage: 1. Resolution enhancement in spectroscopy. Bands in the second derivative of a spectroscopic curve are narrower than the bands in the spectrum: they have reduced half-width. This allows partially overlapping bands to be "resolved" into separate (negative) peaks. The diagram illustrates how this may be used also for chemical analysis, using measurement of "peak-to-valley" distances. In this case the valleys are a property of the 2nd derivative of a Lorentzian. (x-axis position is relative to the position of the peak maximum on a scale of half width at half height).
1. Resolution enhancement with 4th derivative (positive peaks). The minima are a property of the 4th derivative of a Lorentzian.
### Moving average
The "moving average filter" is a trivial example of a Savitzky–Golay filter that is commonly used with time series data to smooth out short-term fluctuations and highlight longer-term trends or cycles.
Each subset of the data set is fit with a straight horizontal line as opposed to a higher order polynomial. An unweighted moving average filter is the simplest convolution filter.
The moving average is often used for a quick technical analysis of financial data, like stock prices, returns or trading volumes. It is also used in economics to examine gross domestic product, employment or other macroeconomic time series.
|
https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter
|
passage: For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above.
In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as thermal energy and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states, in the universe (such as an expansion of matter, or a randomization in a crystal).
As the universe evolves with time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or as other kinds of increases in disorder). This has led to the hypothesis of the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), continues to decrease.
Conservation of energy
The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out as work or heat, and that no energy is lost in transfer.
|
https://en.wikipedia.org/wiki/Energy
|
passage: If we define
$$
a_n=1+p_n
$$
, the bounds
$$
1+\sum_{n=1}^{N} p_n \le \prod_{n=1}^{N} \left( 1 + p_n \right) \le \exp \left( \sum_{n=1}^{N}p_n \right)
$$
show that the infinite product of an converges if the infinite sum of the pn converges. This relies on the Monotone convergence theorem. We can show the converse by observing that, if
$$
p_n \to 0
$$
, then
$$
\lim_{n \to \infty} \frac{\log(1+p_n)}{p_n} = \lim_{x\to 0} \frac{\log(1+x)}{x} = 1,
$$
and by the limit comparison test it follows that the two series
$$
\sum_{n=1}^\infty \log(1+p_n) \quad \text{and} \quad \sum_{n=1}^\infty p_n,
$$
are equivalent meaning that either they both converge or they both diverge.
If the series
$$
\sum_{n=1}^{\infty} \log(a_n)
$$
diverges to
$$
-\infty
$$
, then the sequence of partial products of the an converges to zero. The infinite product is said to diverge to zero.
|
https://en.wikipedia.org/wiki/Infinite_product
|
passage: The left-most digit is the last quotient. In general, the th digit from the right is the remainder of the division by
$$
b_2
$$
of the th quotient.
For example: converting A10BHex to decimal (41227):
0xA10B/10 = Q: 0x101A R: 7 (ones place)
0x101A/10 = Q: 0x19C R: 2 (tens place)
0x19C/10 = Q: 0x29 R: 2 (hundreds place)
0x29/10 = Q: 0x4 R: 1 ...
4
When converting to a larger base (such as from binary to decimal), the remainder represents
$$
b_2
$$
as a single digit, using digits from
$$
b_1
$$
. For example: converting 0b11111001 (binary) to 249 (decimal):
0b11111001/10 = Q: 0b11000 R: 0b1001 (0b1001 = "9" for ones place)
0b11000/10 = Q: 0b10 R: 0b100 (0b100 = "4" for tens)
0b10/10 = Q: 0b0 R: 0b10 (0b10 = "2" for hundreds)
For the fractional part, conversion can be done by taking digits after the radix point (the numerator), and dividing it by the implied denominator in the target radix. Approximation may be needed due to a possibility of non-terminating digits if the reduced fraction's denominator has a prime factor other than any of the base's prime factor(s) to convert to.
|
https://en.wikipedia.org/wiki/Positional_notation
|
passage: It is not, however, positive-definite, so the representation is not unitary.
### Measurement of spin along the , , or axes
Each of the (Hermitian) Pauli matrices of spin- particles has two eigenvalues, +1 and −1. The corresponding normalized eigenvectors are
$$
\begin{array}{lclc}
\psi_{x+} = \left|\frac{1}{2}, \frac{+1}{2}\right\rangle_x =
BLOCK0 \psi_{x-} = \left|\frac{1}{2}, \frac{-1}{2}\right\rangle_x =
BLOCK1 \psi_{y+} = \left|\frac{1}{2}, \frac{+1}{2}\right\rangle_y =
BLOCK2 \psi_{y-} = \left|\frac{1}{2}, \frac{-1}{2}\right\rangle_y =
BLOCK3 \psi_{z+} = \left|\frac{1}{2}, \frac{+1}{2}\right\rangle_z = &
BLOCK4 \psi_{z-} = \left|\frac{1}{2}, \frac{-1}{2}\right\rangle_z = &
BLOCK5\end{array}
$$
(Because any eigenvector multiplied by a constant is still an eigenvector, there is ambiguity about the overall sign.
|
https://en.wikipedia.org/wiki/Spin_%28physics%29
|
passage: Other animals reproduce sexually with external fertilization, including many basal vertebrates. Vertebrates reproduce with internal fertilization through cloacal copulation (in reptiles, some fish, and most birds) or penile-vaginal penetration and ejaculation of semen (in mammals).
In domesticated animals, there are various type of mating methods being employed to mate animals like pen mating (when female is moved to the desired male into a pen) or paddock mating (where one male is let loose in the paddock with several females).
## Plants and fungi
Like in animals, mating in other Eukaryotes, such as plants and fungi, denotes . However, in vascular plants this is mostly achieved without physical contact between mating individuals (see pollination), and in some cases, e.g., in fungi no distinguishable male or female organs exist (see isogamy); however, mating types in some fungal species are somewhat analogous to sexual dimorphism in animals, and determine whether or not two individual isolates can mate. Yeasts are eukaryotic microorganisms classified in the kingdom Fungi, with 1,500 species currently described. In general, under high stress conditions like nutrient starvation, haploid cells will die; under the same conditions, however, diploid cells of Saccharomyces cerevisiae can undergo sporulation, entering sexual reproduction (meiosis) and produce a variety of haploid spores, which can go on to mate (conjugate) and reform the diploid.
|
https://en.wikipedia.org/wiki/Mating
|
passage: The question of what 'best' means is a common question in social choice theory. The following rules are most common:
- Utilitarian rule – sometimes called the max-sum rule or Benthamite welfare – aims to maximize the sum of utilities.
- Egalitarian rule – sometimes called the max-min rule or Rawlsian welfare – aims to maximize the smallest utility.
## Social choice functions
A social choice function, sometimes called a voting system in the context of politics, is a rule that takes an individual's complete and transitive preferences over a set of outcomes and returns a single chosen outcome (or a set of tied outcomes). We can think of this subset as the winners of an election, and compare different social choice functions based on which axioms or mathematical properties they fulfill.
Arrow's impossibility theorem is what often comes to mind when one thinks about impossibility theorems in voting. There are several famous theorems concerning social choice functions. The Gibbard–Satterthwaite theorem implies that the only rule satisfying non-imposition (every alternative can be chosen) and strategyproofness when there are more than two candidates is the dictatorship mechanism. That is, a voter may be able to cast a ballot that misrepresents their preferences to obtain a result that is more favorable to them under their sincere preferences. May's theorem shows that when there are only two candidates and only rankings of options are available, the simple majority vote is the unique neutral, anonymous, and positively-responsive voting rule.
|
https://en.wikipedia.org/wiki/Social_choice_theory
|
passage: In addition to being centrally extended, the symmetry algebra of a conformally invariant quantum theory has to be complexified, resulting in two copies of the Virasoro algebra.
In Euclidean CFT, these copies are called holomorphic and antiholomorphic. In Lorentzian CFT, they are called left-moving and right moving. Both copies have the same central charge.
The space of states of a theory is a representation of the product of the two Virasoro algebras. This space is a Hilbert space if the theory is unitary.
This space may contain a vacuum state, or in statistical mechanics, a thermal state. Unless the central charge vanishes, there cannot exist a state that leaves the entire infinite dimensional conformal symmetry unbroken. The best we can have is a state that is invariant under the generators
$$
L_{n\geq -1}
$$
of the Virasoro algebra, whose basis is . This contains the generators
$$
L_{-1},L_0,L_1
$$
of the global conformal transformations. The rest of the conformal group is spontaneously broken.
Conformal symmetry
### Definition and Jacobian
For a given spacetime and metric, a conformal transformation is a transformation that preserves angles. We will focus on conformal transformations of the flat
$$
d
$$
-dimensional Euclidean space
$$
\mathbb{R}^d
$$
or of the Minkowski space .
|
https://en.wikipedia.org/wiki/Conformal_field_theory
|
passage: The Metropolis–Hastings algorithm is a classic Monte Carlo method which was initially used to sample the canonical ensemble.
- Path integral Monte Carlo, also used to sample the canonical ensemble.
Other
- For rarefied non-ideal gases, approaches such as the cluster expansion use perturbation theory to include the effect of weak interactions, leading to a virial expansion.
- For dense fluids, another approximate approach is based on reduced distribution functions, in particular the radial distribution function.
- Molecular dynamics computer simulations can be used to calculate microcanonical ensemble averages, in ergodic systems. With the inclusion of a connection to a stochastic heat bath, they can also model canonical and grand canonical conditions.
- Mixed methods involving non-equilibrium statistical mechanical results (see below) may be useful.
Non-equilibrium statistical mechanics
Many physical phenomena involve quasi-thermodynamic processes out of equilibrium, for example:
- heat transport by the internal motions in a material, driven by a temperature imbalance,
- electric currents carried by the motion of charges in a conductor, driven by a voltage imbalance,
- spontaneous chemical reactions driven by a decrease in free energy,
- friction, dissipation, quantum decoherence,
- systems being pumped by external forces (optical pumping, etc.),
- and irreversible processes in general.
All of these processes occur over time with characteristic rates. These rates are important in engineering. The field of non-equilibrium statistical mechanics is concerned with understanding these non-equilibrium processes at the microscopic level.
|
https://en.wikipedia.org/wiki/Statistical_mechanics
|
passage: Note that this latter form is the functional equation for the Riemann zeta function, as originally given by Riemann. The distinction based on z being an integer or not accounts for the fact that the Jacobi theta function converges to the periodic delta function, or Dirac comb in z as
$$
t\rightarrow 0
$$
.
## Relation to Dirichlet L-functions
At rational arguments the Hurwitz zeta function may be expressed as a linear combination of Dirichlet L-functions and vice versa: The Hurwitz zeta function coincides with Riemann's zeta function ζ(s) when a = 1, when a = 1/2 it is equal to (2s−1)ζ(s), and if a = n/k with k > 2, (n,k) > 1 and 0 < n < k, then
$$
\zeta(s,n/k)=\frac{k^s}{\varphi(k)}\sum_\chi\overline{\chi}(n)L(s,\chi),
$$
the sum running over all Dirichlet characters mod k.
|
https://en.wikipedia.org/wiki/Hurwitz_zeta_function
|
passage: &= \sum_{k=-\infty}^{\infty} x\left[\bigl\lfloor\tfrac{m}{L}\bigr\rfloor - k\right]\cdot h\left[m - \bigl\lfloor\tfrac{m}{L}\bigr\rfloor L + kL\right]\quad
\stackrel{m\ \triangleq\ j + nL}{\longrightarrow}\quad y[j+nL] = \sum_{k=0}^K x[n-k]\cdot h[j+kL],\ \ j = 0,1,\ldots,L-1
\end{align}
$$
In the case
$$
L=2,
$$
function
$$
h
$$
can be designed as a half-band filter, where almost half of the coefficients are zero and need not be included in the dot products.
|
https://en.wikipedia.org/wiki/Upsampling
|
passage: The reduced group C*-algebra (see the reduced group C*-algebra Cr*(G)) is nuclear.
- The reduced group C*-algebra is quasidiagonal (J. Rosenberg, A. Tikuisis, S. White, W. Winter).
- The von Neumann group algebra (see von Neumann algebras associated to groups) of Γ is hyperfinite (A. Connes).
Note that A. Connes also proved that the von Neumann group algebra of any connected locally compact group is hyperfinite, so the last condition no longer applies in the case of connected groups.
Amenability is related to spectral theory of certain operators. For instance, the fundamental group of a closed Riemannian manifold is amenable if and only if the bottom of the spectrum of the Laplacian on the L2-space of the universal cover of the manifold is 0.
## Properties
- Every (closed) subgroup of an amenable group is amenable.
- Every quotient of an amenable group is amenable.
- A group extension of an amenable group by an amenable group is again amenable. In particular, finite direct product of amenable groups are amenable, although infinite products need not be.
- Direct limits of amenable groups are amenable. In particular, if a group can be written as a directed union of amenable subgroups, then it is amenable.
- Amenable groups are unitarizable; the converse is an open problem.
- Countable discrete amenable groups obey the Ornstein isomorphism theorem.
## Examples
- Finite groups are amenable.
|
https://en.wikipedia.org/wiki/Amenable_group
|
passage: A solution is given by
$$
x = a_1m_2n_2+a_2m_1n_1.
$$
Indeed,
$$
\begin{align}
x&=a_1m_2n_2+a_2m_1n_1\\
&=a_1(1 - m_1n_1) + a_2m_1n_1 \\
&=a_1 + (a_2 - a_1)m_1n_1,
\end{align}
$$
implying that
$$
x \equiv a_1 \pmod {n_1}.
$$
The second congruence is proved similarly, by exchanging the subscripts 1 and 2.
#### General case
Consider a sequence of congruence equations:
$$
\begin{align}
x &\equiv a_1 \pmod{n_1} \\
&\vdots \\
x &\equiv a_k \pmod{n_k},
\end{align}
$$
where the
$$
n_i
$$
are pairwise coprime. The two first equations have a solution
$$
a_{1,2}
$$
provided by the method of the previous section. The set of the solutions of these two first equations is the set of all solutions of the equation
$$
x \equiv a_{1,2} \pmod{n_1n_2}.
$$
As the other
$$
n_i
$$
are coprime with
$$
n_1n_2,
$$
this reduces solving the initial problem of equations to a similar problem with
$$
k-1
$$
equations. Iterating the process, one gets eventually the solutions of the initial problem.
|
https://en.wikipedia.org/wiki/Chinese_remainder_theorem
|
passage: Since it is self evident that compilations of species occurrence records cannot cover with any completeness, areas that have received either limited or no sampling, a number of methods have been developed to produce arguably more complete "predictive" or "modelled" distributions for species based on their associated environmental or other preferences (such as availability of food or other habitat requirements); this approach is known as either Environmental niche modelling (ENM) or Species distribution modelling (SDM). Depending on the reliability of the source data and the nature of the models employed (including the scales for which data are available), maps generated from such models may then provide better representations of the "real" biogeographic distributions of either individual species, groups of species, or biodiversity as a whole, however it should also be borne in mind that historic or recent human activities (such as hunting of great whales, or other human-induced exterminations) may have altered present-day species distributions from their potential "full" ecological footprint. Examples of predictive maps produced by niche modelling methods based on either GBIF (terrestrial) or OBIS (marine, plus some freshwater) data are the former Lifemapper project at the University of Kansas (now continued as a part of BiotaPhy) and AquaMaps, which as at 2023 contain modelled distributions for around 200,000 terrestrial, and 33,000 species of teleosts, marine mammals and invertebrates, respectively.
|
https://en.wikipedia.org/wiki/Biogeography
|
passage: Rearranging results in the system's transfer function:
$$
H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{q=0}^{M}z^{-q}\beta_{q}}{\sum_{p=0}^{N}z^{-p}\alpha_{p}} = \frac{\beta_0 + z^{-1} \beta_1 + z^{-2} \beta_2 + \cdots + z^{-M} \beta_M}{\alpha_0 + z^{-1} \alpha_1 + z^{-2} \alpha_2 + \cdots + z^{-N} \alpha_N}.
$$
### Zeros and poles
From the fundamental theorem of algebra the numerator has
$$
M
$$
roots (corresponding to zeros of ) and the denominator has
$$
N
$$
roots (corresponding to poles). Rewriting the transfer function in terms of zeros and poles
$$
H(z) = \frac{(1 - q_1 z^{-1})(1 - q_2 z^{-1})\cdots(1 - q_M z^{-1}) } { (1 - p_1 z^{-1})(1 - p_2 z^{-1})\cdots(1 - p_N z^{-1})} ,
$$
where
$$
q_k
$$
is the
$$
k^\text{th}
$$
zero and
$$
p_k
$$
is the
$$
k^\text{th}
$$
pole.
|
https://en.wikipedia.org/wiki/Z-transform
|
passage: In 19th-century typesetting, compositors used the term "string" to denote a length of type printed on paper; the string would be measured to determine the compositor's pay.
Use of the word "string" to mean "a sequence of symbols or linguistic elements in a definite order" emerged from mathematics, symbolic logic, and linguistic theory to speak about the formal behavior of symbolic systems, setting aside the symbols' meaning.
For example, logician C. I. Lewis wrote in 1918:
A mathematical system is any set of strings of recognisable marks in which some of the strings are taken initially and the remainder derived from these by operations performed according to rules which are independent of any meaning assigned to the marks. That a system should consist of 'marks' instead of sounds or odours is immaterial.
According to Jean E. Sammet, "the first realistic string handling and pattern matching language" for computers was COMIT in the 1950s, followed by the SNOBOL language of the early 1960s.
## String datatypes
A string datatype is a datatype modeled on the idea of a formal string. Strings are such an important and useful datatype that they are implemented in nearly every programming language. In some languages they are available as primitive types and in others as composite types. The syntax of most high-level programming languages allows for a string, usually quoted in some way, to represent an instance of a string datatype; such a meta-string is called a literal or string literal.
### String length
|
https://en.wikipedia.org/wiki/String_%28computer_science%29
|
passage: Whereas the original problem may be stated in a finite-dimensional space, it often happens that the sets to discriminate are not linearly separable in that space. For this reason, it was proposed that the original finite-dimensional space be mapped into a much higher-dimensional space, presumably making the separation easier in that space. To keep the computational load reasonable, the mappings used by SVM schemes are designed to ensure that dot products of pairs of input data vectors may be computed easily in terms of the variables in the original space, by defining them in terms of a kernel function
$$
k(x, y)
$$
selected to suit the problem. The hyperplanes in the higher-dimensional space are defined as the set of points whose dot product with a vector in that space is constant, where such a set of vectors is an orthogonal (and thus minimal) set of vectors that defines a hyperplane. The vectors defining the hyperplanes can be chosen to be linear combinations with parameters
$$
\alpha_i
$$
of images of feature vectors
$$
x_i
$$
that occur in the data base.
|
https://en.wikipedia.org/wiki/Support_vector_machine
|
passage: Since signals can travel unobstructed over a body of water far larger than the Detroit River, and cool water temperatures also cause inversions in surface air, this "fringe roaming" sometimes occurs across the Great Lakes, and between islands in the Caribbean. Signals can skip from the Dominican Republic to a mountainside in Puerto Rico and vice versa, or between the U.S. and British Virgin Islands, among others. While unintended cross-border roaming is often automatically removed by mobile phone company billing systems, inter-island roaming is typically not.
## Empirical models
A radio propagation model, also known as the radio wave propagation model or the radio frequency propagation model, is an empirical mathematical formulation for the characterization of radio wave propagation as a function of frequency, distance and other conditions. A single model is usually developed to predict the behavior of propagation for all similar links under similar constraints. Created with the goal of formalizing the way radio waves are propagated from one place to another, such models typically predict the path loss along a link or the effective coverage area of a transmitter.
The inventor of radio communication, Guglielmo Marconi, before 1900 formulated the first crude empirical rule of radio propagation: the maximum transmission distance varied as the square of the height of the antenna.
As the path loss encountered along any radio link serves as the dominant factor for characterization of propagation for the link, radio propagation models typically focus on realization of the path loss with the auxiliary task of predicting the area of coverage for a transmitter or modeling the distribution of signals over different regions.
|
https://en.wikipedia.org/wiki/Radio_propagation
|
passage: Then, the transition probabilities are
$$
p_{xy} = \begin{cases}
\frac{1}{d}\frac{g(y)}{\sum_{z \in \Theta: z \sim_j x} g(z) } & x \sim_j y \\
0 & \text{otherwise}
\end{cases}
$$
So
$$
g(x) p_{xy} = \frac{1}{d}\frac{ g(x) g(y)}{\sum_{z \in \Theta: z \sim_j x} g(z) }
= \frac{1}{d}\frac{ g(y) g(x)}{\sum_{z \in \Theta: z \sim_j y} g(z) }
= g(y) p_{yx}
$$
since
$$
x \sim_j y
$$
is an equivalence relation. Thus the detailed balance equations are satisfied, implying the chain is reversible and it has invariant distribution
$$
\left.g\right.
$$
.
In practice, the index
$$
\left.j\right.
$$
is not chosen at random, and the chain cycles through the indexes in order. In general this gives a non-stationary Markov process, but each individual step will still be reversible, and the overall process will still have the desired stationary distribution (as long as the chain can access all states under the fixed ordering).
|
https://en.wikipedia.org/wiki/Gibbs_sampling
|
passage: The arc of a quadrant (a circular arc) can also be termed a quadrant.
## Area
The total area of a circle is . The area of the sector can be obtained by multiplying the circle's area by the ratio of the angle (expressed in radians) and (because the area of the sector is directly proportional to its angle, and is the angle for the whole circle, in radians):
$$
A = \pi r^2\, \frac{\theta}{2 \pi} = \frac{r^2 \theta}{2}
$$
The area of a sector in terms of can be obtained by multiplying the total area by the ratio of to the total perimeter .
$$
A = \pi r^2\, \frac{L}{2\pi r} = \frac{rL}{2}
$$
Another approach is to consider this area as the result of the following integral:
$$
A = \int_0^\theta\int_0^r dS = \int_0^\theta\int_0^r \tilde{r}\, d\tilde{r}\, d\tilde{\theta} = \int_0^\theta \frac 1 2 r^2\, d\tilde{\theta} = \frac{r^2 \theta}{2}
$$
Converting the central angle into degrees gives
$$
A = \pi r^2 \frac{\theta^\circ}{360^\circ}
$$
##
|
https://en.wikipedia.org/wiki/Circular_sector
|
passage: An offer was made to the Princeton team to be redeployed there. "Like a bunch of professional soldiers," Wilson later recalled, "we signed up, en masse, to go to Los Alamos." Oppenheimer recruited many young physicists, including Feynman, who he telephoned long distance from Chicago to inform that he had found a Presbyterian sanatorium in Albuquerque, New Mexico for Arline. They were among the first to depart for New Mexico, leaving on a train on March 28, 1943. The railroad supplied Arline with a wheelchair, and Feynman paid extra for a private room for her. There they spent their wedding anniversary.
At Los Alamos, Feynman was assigned to Hans Bethe's Theoretical (T) Division, and impressed Bethe enough to be made a group leader. He and Bethe developed the Bethe–Feynman formula for calculating the yield of a fission bomb, which built upon previous work by Robert Serber. As a junior physicist, he was not central to the project. He administered the computation group of human computers in the theoretical division. With Stanley Frankel and Nicholas Metropolis, he assisted in establishing a system for using IBM punched cards for computation. He invented a new method of computing logarithms that he later used on the Connection Machine. An avid drummer, Feynman figured out how to get the machine to click in musical rhythms.
|
https://en.wikipedia.org/wiki/Richard_Feynman
|
passage: Namely, whether γ∞(G) equals the clique covering number for all planar graphs G and whether γ∞(G) can bounded below by the Lovász number, also known as the Lovász theta function.
A number of other open questions are stated in the survey paper , including many questions on the variations of eternal dominating sets mentioned above.
## References
- .
- .
- .
- .
- .
- .
- .
- .
- .
-
- .
- .
- .
- .
- .
- .
- .
- .
- .
- .
- .
- .
- .
Category:Graph theory objects
|
https://en.wikipedia.org/wiki/Eternal_dominating_set
|
passage: In rough analogy, a POVM is to a PVM what a mixed state is to a pure state. Mixed states are needed to specify the state of a subsystem of a larger system (see Schrödinger–HJW theorem); analogously, POVMs are necessary to describe the effect on a subsystem of a projective measurement performed on a larger system. POVMs are the most general kind of measurement in quantum mechanics, and can also be used in quantum field theory. They are extensively used in the field of quantum information.
In the simplest case, of a POVM with a finite number of elements acting on a finite-dimensional Hilbert space, a POVM is a set of positive semi-definite matrices
$$
\{F_i\}
$$
on a Hilbert space
$$
\mathcal{H}
$$
that sum to the identity matrix,
$$
\sum_{i=1}^n F_i = \operatorname{I}.
$$
In quantum mechanics, the POVM element
$$
F_i
$$
is associated with the measurement outcome
$$
i
$$
, such that the probability of obtaining it when making a measurement on the quantum state
$$
\rho
$$
is given by
$$
\text{Prob}(i) = \operatorname{tr}(\rho F_i)
$$
,
where
$$
\operatorname{tr}
$$
is the trace operator.
|
https://en.wikipedia.org/wiki/Measurement_in_quantum_mechanics
|
passage: Islamic speculative theology in general approached issues in physics from an atomistic framework.
#### Mu'tazilite atomism
Atomism in Mu'tazilism as an early Islamic theology is a cosmological concept that emphasizes that the universe consists of discrete (juz’ lā yatajazzā) or undivided parts created by God. This concept is also the basis for Mu'tazila's rejection of determinism. With an atomized nature, humans are considered capable of creating actions independently (mubasharah), so they deserve rewards or punishments according to their actions. This is in line with the principle that good and bad are rational and inherent in the essence of the action itself, not just the result of God's decision. The Mu'tazilah theologians and philosophers who are famous for their atomism concepts are Abu al-Hudhayl Al-'Allaf and Al-Jubba'i. While there are also Mu'tazilah theologians who are skeptical of atomism such as Ibrahim al-Nazzam.
#### Al-Ghazali and Asharite atomism
The most successful form of Islamic atomism was in the Asharite school of Islamic theology, most notably in the work of the theologian al-Ghazali (1058–1111). In Asharite atomism, atoms are the only perpetual, material things in existence, and all else in the world is "accidental" meaning something that lasts for only an instant. Nothing accidental can be the cause of anything else, except perception, as it exists for a moment.
|
https://en.wikipedia.org/wiki/Atomism
|
passage: ### Linear discriminant analysis (LDA)
Linear discriminant analysis (LDA) is a generalization of Fisher's linear discriminant, a method used in statistics, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events.
### Generalized discriminant analysis (GDA)
GDA deals with nonlinear discriminant analysis using kernel function operator. The underlying theory is close to the support-vector machines (SVM) insofar as the GDA method provides a mapping of the input vectors into high-dimensional feature space. Similar to LDA, the objective of GDA is to find a projection for the features into a lower dimensional space by maximizing the ratio of between-class scatter to within-class scatter.
### Autoencoder
Autoencoders can be used to learn nonlinear dimension reduction functions and codings together with an inverse function from the coding to the original representation.
t-SNE
T-distributed Stochastic Neighbor Embedding (t-SNE) is a nonlinear dimensionality reduction technique useful for the visualization of high-dimensional datasets. It is not recommended for use in analysis such as clustering or outlier detection since it does not necessarily preserve densities or distances well.
### UMAP
Uniform manifold approximation and projection (UMAP) is a nonlinear dimensionality reduction technique.
|
https://en.wikipedia.org/wiki/Dimensionality_reduction
|
passage: Examples
For consistency with statistical usage, "CDF" (i.e. Cumulative distribution function) should be replaced by "cumulative histogram", especially since the article links to cumulative distribution function which is derived by dividing values in the cumulative histogram by the overall amount of pixels. The equalized CDF is defined in terms of rank as
$$
rank/pixelcount
$$
.
### Small image
The 8-bit grayscale image shown has the following values:
52 55 61 59 79 61 76 61 62 59 55 104 94 85 59 71 63 65 66 113 144 104 63 72 64 70 70 126 154 109 71 69 67 73 68 106 122 88 68 68 68 79 60 70 77 66 58 75 69 85 64 58 55 61 65 83 70 87 69 68 65 73 78 90
The histogram for this image is shown in the following table. Pixel values that have a zero count are excluded for the sake of brevity.
{| class="wikitable"
|-
! Value !! Count
! Value !! Count
! Value !! Count
! Value !! Count
! Value !!
|
https://en.wikipedia.org/wiki/Histogram_equalization
|
passage: $$
by expanding the first and second term, these expressions read
$$
\Delta f = \frac{\partial^2 f}{\partial r^2} + \frac{2}{r}\frac{\partial f}{\partial r}+\frac{1}{r^2 \sin \theta} \left(\cos \theta \frac{\partial f}{\partial \theta} + \sin \theta \frac{\partial^2 f}{\partial \theta^2} \right) + \frac{1}{r^2 \sin^2 \theta} \frac{\partial^2 f}{\partial \varphi^2},
$$
where represents the azimuthal angle and the zenith angle or co-latitude.
|
https://en.wikipedia.org/wiki/Laplace_operator
|
passage: This was perhaps the earliest result in the computational complexity of quantum computers, proving that they were capable of performing some well-defined computation more efficiently than any classical computer.
- Ethan Bernstein and Umesh Vazirani propose the Bernstein–Vazirani algorithm. It is a restricted version of the Deutsch–Jozsa algorithm where instead of distinguishing between two different classes of functions, it tries to learn a string encoded in a function. The Bernstein–Vazirani algorithm was designed to prove an oracle separation between complexity classes BQP and BPP.
- Research groups at Max Planck Institute of Quantum Optics (Garching) and shortly after at NIST (Boulder) experimentally realize the first crystallized strings of laser-cooled ions. Linear ion crystals constitute the qubit basis for most quantum computing and simulation experiments with trapped ions.
### 1993
Daniel R. Simon, at Université de Montréal, Quebec, Canada, invent an oracle problem, Simon's problem, for which a quantum computer would be exponentially faster than a conventional computer. This algorithm introduces the main ideas which were then developed in Peter Shor's factorization algorithm.
### 1994
- Peter Shor, at AT&T's Bell Labs in New Jersey, publishes Shor's algorithm. It would allow a quantum computer to factor large integers quickly. It solves both the factoring problem and the discrete log problem. The algorithm can theoretically break many of the cryptosystems in use today. Its invention sparked tremendous interest in quantum computers.
-
|
https://en.wikipedia.org/wiki/Timeline_of_quantum_computing_and_communication
|
passage: Clearly,
$$
X
$$
always distinguishes points of
$$
N
$$
, so the canonical pairing is a dual system if and only if
$$
N
$$
separates points of
$$
X.
$$
The following notation is now nearly ubiquitous in duality theory.
The evaluation map will be denoted by
$$
\left\langle x, x^{\prime} \right\rangle = x^{\prime}(x)
$$
(rather than by
$$
c
$$
) and
$$
\langle X, N \rangle
$$
will be written rather than
$$
(X, N, c).
$$
Assumption: As is common practice, if
$$
X
$$
is a vector space and
$$
N
$$
is a vector space of linear functionals on
$$
X,
$$
then unless stated otherwise, it will be assumed that they are associated with the canonical pairing
$$
\langle X, N \rangle.
$$
If
$$
N
$$
is a vector subspace of
$$
X^{\#}
$$
then
$$
X
$$
distinguishes points of
$$
N
$$
(or equivalently,
$$
(X, N, c)
$$
is a duality) if and only if
$$
N
$$
distinguishes points of
$$
X,
$$
or equivalently if
$$
N
$$
is total (that is,
$$
n(x) = 0
$$
for all
$$
n \in N
$$
implies
$$
x = 0
$$
).
### Canonical duality on a topological vector space
Suppose
$$
X
$$
is a topological vector space (TVS) with continuous dual space
|
https://en.wikipedia.org/wiki/Dual_system
|
passage: By contrast, in interactions that are symmetric under parity, such as electromagnetism in atomic and molecular physics, parity serves as a powerful controlling principle underlying quantum transitions.
A matrix representation of P (in any number of dimensions) has determinant equal to −1, and hence is distinct from a rotation, which has a determinant equal to 1. In a two-dimensional plane, a simultaneous flip of all coordinates in sign is not a parity transformation; it is the same as a 180° rotation.
In quantum mechanics, wave functions that are unchanged by a parity transformation are described as even functions, while those that change sign under a parity transformation are odd functions.
## Simple symmetry relations
Under rotations, classical geometrical objects can be classified into scalars, vectors, and tensors of higher rank. In classical physics, physical configurations need to transform under representations of every symmetry group.
Quantum theory predicts that states in a Hilbert space do not need to transform under representations of the group of rotations, but only under projective representations. The word projective refers to the fact that if one projects out the phase of each state, where we recall that the overall phase of a quantum state is not observable, then a projective representation reduces to an ordinary representation. All representations are also projective representations, but the converse is not true, therefore the projective representation condition on quantum states is weaker than the representation condition on classical states.
The projective representations of any group are isomorphic to the ordinary representations of a central extension of the group.
|
https://en.wikipedia.org/wiki/Parity_%28physics%29
|
passage: In particular, it is a set of operators with both algebraic and topological closure properties. In some disciplines such properties are axiomatized and algebras with certain topological structure become the subject of the research.
Though algebras of operators are studied in various contexts (for example, algebras of pseudo-differential operators acting on spaces of distributions), the term operator algebra is usually used in reference to algebras of bounded operators on a Banach space or, even more specially in reference to algebras of operators on a separable Hilbert space, endowed with the operator norm topology.
In the case of operators on a Hilbert space, the Hermitian adjoint map on operators gives a natural involution, which provides an additional algebraic structure that can be imposed on the algebra. In this context, the best studied examples are self-adjoint operator algebras, meaning that they are closed under taking adjoints. These include C*-algebras, von Neumann algebras, and AW*-algebras. C*-algebras can be easily characterized abstractly by a condition relating the norm, involution and multiplication. Such abstractly defined C*-algebras can be identified to a certain closed subalgebra of the algebra of the continuous linear operators on a suitable Hilbert space. A similar result holds for von Neumann algebras.
|
https://en.wikipedia.org/wiki/Operator_algebra
|
passage: It has minimal faithful degree , which is realized by the action on the 24-cell. The group has ID (1152,157478) in the small groups library.
### Cartan matrix
$$
\left[ \begin{array}{rrrr}
2&-1&0&0\\
-1&2&-2&0\\
0&-1&2&-1\\
0&0&-1&2
\end{array} \right]
$$
### F4 lattice
The F4 lattice is a four-dimensional body-centered cubic lattice (i.e. the union of two hypercubic lattices, each lying in the center of the other). They form a ring called the Hurwitz quaternion ring. The 24 Hurwitz quaternions of norm 1 form the vertices of a 24-cell centered at the origin.
### Roots of F4
The 48 root vectors of F4 can be found as the vertices of the 24-cell in two dual configurations, representing the vertices of a disphenoidal 288-cell if the edge lengths of the 24-cells are equal:
24-cell vertices:
- 24 roots by (±1, ±1, 0, 0), permuting coordinate positions
Dual 24-cell vertices:
- 8 roots by (±1, 0, 0, 0), permuting coordinate positions
- 16 roots by (±1/2, ±1/2, ±1/2, ±1/2).
|
https://en.wikipedia.org/wiki/F4_%28mathematics%29
|
passage: The airway can also become blocked by a foreign object. To dislodge the object and solve the choking case, the first aider may use anti-choking methods (such as 'back slaps, 'chest thrusts' or 'abdominal thrusts').
Once the airway has been opened, the first aider would reassess the patient's breathing. If there is no breathing, or the patient is not breathing normally (e.g., agonal breathing), the first aider would initiate CPR, which attempts to restart the patient's breathing by forcing air into the lungs. They may also manually massage the heart to promote blood flow around the body.
If the choking person is an infant, the first aider may use anti-choking methods for babies. During that procedure, series of five strong blows are delivered on the infant's upper back after placing the infant's face in the aider's forearm. If the infant is able to cough or cry, no breathing assistance should be given. Chest thrusts can also be applied with two fingers on the lower half of the middle of the chest. Coughing and crying indicate the airway is open and the foreign object will likely to come out from the force the coughing or crying produces.
A first responder should know how to use an Automatic External Defibrillator (AED) in the case of a person having a sudden cardiac arrest. The survival rate of those who have a cardiac arrest outside of the hospital is low.
|
https://en.wikipedia.org/wiki/First_aid
|
passage: They are also related to wavelet compression. Mipmap textures are used in 3D scenes to decrease the time required to render a scene. They also improve image quality by reducing aliasing and Moiré patterns that occur at large viewing distances, at the cost of 33% more memory per texture.
## Overview
Mipmaps are used for:
- Level of detail (LOD)
- Improving image quality. Rendering from large textures where only small, discontiguous subsets of texels are used can easily produce Moiré patterns;
- Speeding up rendering times, either by reducing the number of texels sampled to render each pixel, or increasing the memory locality of the samples taken;
- Reducing stress on the GPU or CPU.
- Water surface reflections
## Origin
Mipmapping was invented by Lance Williams in 1983 and is described in his paper Pyramidal parametrics. From the abstract: "This paper advances a 'pyramidal parametric' prefiltering and sampling geometry which minimizes aliasing effects and assures continuity within and between target images." The referenced pyramid can be imagined as the set of mipmaps stacked in front of each other.
The first patent issued on Mipmap and texture generation was in 1983 by Johnson Yan, Nicholas Szabo, and Lish-Yann Chen of Link Flight Simulation (Singer). Using their approach, texture could be generated and superimposed on surfaces (curvilinear and planar) of any orientation and could be done in real-time.
|
https://en.wikipedia.org/wiki/Mipmap
|
passage: $$
$$
\theta_{3}'(x) = \frac{\mathrm{d}}{\mathrm{d}x} \,\theta_{3}(x) = \theta_{3}(x)\bigl[\theta_{3}(x)^2 + \theta_{4}(x)^2\bigr]\biggl\{\frac{1}{2\pi x}E\biggl[\frac{\theta_{3}(x)^2 - \theta_{4}(x)^2}{\theta_{3}(x)^2 + \theta_{4}(x)^2}\biggr] - \frac{\theta_{4}(x)^2}{4\,x}\biggr\}
$$
$$
|
https://en.wikipedia.org/wiki/Theta_function
|
passage: Hayes (1973) developed an equational language, Golux, in which different procedures could be obtained by altering the behavior of the theorem prover.
Meanwhile, Alain Colmerauer in Marseille was working on natural-language understanding, using logic to represent semantics and resolution for question-answering. During the summer of 1971, Colmerauer invited Kowalski to Marseille, and together they discovered that the clausal form of logic could be used to represent formal grammars and that resolution theorem provers could be used for parsing. They observed that some theorem provers, like hyper-resolution, behave as bottom-up parsers, while others, like SL resolution (1971), behave as top-down parsers.
It was in the following summer of 1972 that Kowalski, again working with Colmerauer, developed the procedural interpretation of implications in clausal form. It also became clear that such clauses could be restricted to definite clauses or Horn clauses, and that SL resolution could be restricted (and generalised) to SLD resolution. Kowalski's procedural interpretation and SLD were described in a 1973 memo, published in 1974.
Colmerauer, with Philippe Roussel, used the procedural interpretation as the basis of Prolog, which was implemented in the summer and autumn of 1972. The first Prolog program, also written in 1972 and implemented in Marseille, was a French question-answering system. The use of Prolog as a practical programming language was given great momentum by the development of a compiler by David H. D. Warren in Edinburgh in 1977.
|
https://en.wikipedia.org/wiki/Logic_programming
|
passage: In mitochondria and chloroplasts, these cytochromes are often combined in electron transport and related metabolic pathways:
| Cytochromes | Combination |
| --- | --- |
| a and a3 | Cytochrome c oxidase ("Complex IV"), with electrons delivered to the complex by soluble cytochrome c (hence the name) |
| b and c1 | Coenzyme Q – cytochrome c reductase ("Complex III") |
| b6 and f | Plastoquinol–plastocyanin reductase |
A distinct family of cytochromes is the cytochrome P450 family, so named for the characteristic Soret peak formed by absorbance of light at wavelengths near 450 nm when the heme iron is reduced (with sodium dithionite) and complexed to carbon monoxide. These enzymes are primarily involved in steroidogenesis and detoxification.
## References
## External links
- Scripps Database of Metalloproteins
|
https://en.wikipedia.org/wiki/Cytochrome
|
passage: The rule of weakening becomes admissible if the axiom (I) is changed to derive any sequent of the form
$$
\Gamma , A \vdash A , \Delta
$$
. Any weakening that appears in a derivation can then be moved to the beginning of the proof. This may be a convenient change when constructing proofs bottom-up.
One may also change whether rules with more than one premise share the same context for each of those premises or split their contexts between them. For example,
$$
({\lor}L)
$$
may instead be formulated as
$$
\cfrac{\Gamma, A \vdash \Delta \qquad \Sigma, B \vdash \Pi}{\Gamma, \Sigma, A \lor B \vdash \Delta, \Pi}.
$$
Contraction and weakening make this version of the rule interderivable with the version above, although in their absence, as in linear logic, these rules define different connectives.
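For comparison, the context-sharing form that the passage refers to as "the version above" is not quoted in this excerpt; its standard textbook statement (supplied here, not taken from the excerpt) is the additive rule in which both premises carry the same context:
$$
\cfrac{\Gamma, A \vdash \Delta \qquad \Gamma, B \vdash \Delta}{\Gamma, A \lor B \vdash \Delta}
$$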
|
https://en.wikipedia.org/wiki/Sequent_calculus
|
passage: This connection would ultimately lead to the first proof of Fermat's Last Theorem in number theory through algebraic geometry techniques of modularity lifting developed by Andrew Wiles in 1995.
In the 1960s, Goro Shimura introduced Shimura varieties as generalizations of modular curves. Since 1979, Shimura varieties have played a crucial role in the Langlands program as a natural realm of examples for testing conjectures.
In papers in 1977 and 1978, Barry Mazur proved the torsion conjecture giving a complete list of the possible torsion subgroups of elliptic curves over the rational numbers. Mazur's first proof of this theorem depended upon a complete analysis of the rational points on certain modular curves. In 1996, the proof of the torsion conjecture was extended to all number fields by Loïc Merel.
In 1983, Gerd Faltings proved the Mordell conjecture, demonstrating that a curve of genus greater than 1 has only finitely many rational points (where the Mordell–Weil theorem only demonstrates finite generation of the set of rational points as opposed to finiteness).
In 2001, the proof of the local Langlands conjectures for GLn was based on the geometry of certain Shimura varieties.
In the 2010s, Peter Scholze developed perfectoid spaces and new cohomology theories in arithmetic geometry over p-adic fields with application to Galois representations and certain cases of the weight-monodromy conjecture.
|
https://en.wikipedia.org/wiki/Arithmetic_geometry
|
passage: If we take the two sentences "M. Smith likes fishing. But he doesn't like biking", it would be beneficial to detect that "he" is referring to the previously detected person "M. Smith".
- Relationship extraction: identification of relations between entities, such as:
- PERSON works for ORGANIZATION (extracted from the sentence "Bill works for IBM.")
- PERSON located in LOCATION (extracted from the sentence "Bill is in France."). A minimal sketch of this kind of pattern-based extraction appears after this list.
- Semi-structured information extraction which may refer to any IE that tries to restore some kind of information structure that has been lost through publication, such as:
- Table extraction: finding and extracting tables from documents.
- Table information extraction: extracting information in a structured manner from tables. This task is more complex than table extraction, as table extraction is only the first step; understanding the roles of the cells, rows, and columns, linking the information inside the table, and understanding the information presented in the table are additional tasks necessary for table information extraction.
- Comments extraction: extracting comments from the actual content of articles in order to restore the link between each sentence and its author.
- Language and vocabulary analysis
- Terminology extraction: finding the relevant terms for a given corpus
- Audio extraction
- Template-based music extraction: finding relevant characteristics in an audio signal taken from a given repertoire; for instance, time indexes of occurrences of percussive sounds can be extracted in order to represent the essential rhythmic component of a music piece.
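As a minimal illustration of the relationship-extraction examples listed above, the following Python sketch matches hand-written surface patterns against a toy entity dictionary. All names here (ENTITIES, PATTERNS, extract_relations) are illustrative; real systems use trained named-entity recognizers and relation classifiers rather than rules like these.

```python
import re

# Toy entity lexicon; a real system would use a trained NER model.
ENTITIES = {
    "Bill": "PERSON",
    "IBM": "ORGANIZATION",
    "France": "LOCATION",
}

# Surface patterns for the two relation types from the examples above.
PATTERNS = [
    (re.compile(r"(\w+) works for (\w+)"), "works_for", ("PERSON", "ORGANIZATION")),
    (re.compile(r"(\w+) is in (\w+)"), "located_in", ("PERSON", "LOCATION")),
]

def extract_relations(sentence):
    relations = []
    for pattern, name, (t1, t2) in PATTERNS:
        for m in pattern.finditer(sentence):
            a, b = m.group(1), m.group(2)
            # Accept the match only if both arguments have the expected entity types.
            if ENTITIES.get(a) == t1 and ENTITIES.get(b) == t2:
                relations.append((name, a, b))
    return relations

print(extract_relations("Bill works for IBM."))   # [('works_for', 'Bill', 'IBM')]
print(extract_relations("Bill is in France."))    # [('located_in', 'Bill', 'France')]
```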
|
https://en.wikipedia.org/wiki/Information_extraction
|
passage: The version number (e.g. v5.0 in Windows 2000) is based on the operating system version; it should not be confused with the NTFS version number (v3.1 since Windows XP).
Although subsequent versions of Windows added new file system-related features, they did not change NTFS itself. For example, Windows Vista implemented NTFS symbolic links, Transactional NTFS, partition shrinking, and self-healing. NTFS symbolic links are a new feature in the file system; all the others are new operating system features that make use of NTFS features already in place.
## Scalability
NTFS is optimized for 4 KB clusters, but supports a maximum cluster size of 2 MB (earlier implementations support up to 64 KB). The maximum NTFS volume size that the specification can support is 2^64 − 1 clusters, but not all implementations achieve this theoretical maximum, as discussed below.
The maximum NTFS volume size implemented in Windows XP Professional is 2^32 − 1 clusters, partly due to partition table limitations. For example, using 64 KB clusters, the maximum size of a Windows XP NTFS volume is 256 TB minus 64 KB. Using the default cluster size of 4 KB, the maximum NTFS volume size is 16 TB minus 4 KB. Both of these are vastly higher than the 128 GB limit in Windows XP SP1.
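The volume-size ceilings above are simply the cluster-count limit multiplied by the cluster size. A small Python sketch of that arithmetic (the helper name and the 2^32 − 1 default are illustrative, taken from the XP-era limit quoted in this passage, not read from any real volume):

```python
KB, TB = 2**10, 2**40

def max_volume_bytes(cluster_size, cluster_count=2**32 - 1):
    """Maximum volume size = number of addressable clusters x cluster size."""
    return cluster_count * cluster_size

for size in (4 * KB, 64 * KB):
    limit = max_volume_bytes(size)
    shortfall = size * 2**32 - limit          # exactly one cluster short
    print(f"{size // KB} KB clusters: {limit} bytes "
          f"= {size * 2**32 // TB} TB minus {shortfall // KB} KB")
# 4 KB clusters:  16 TB minus 4 KB
# 64 KB clusters: 256 TB minus 64 KB
```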
|
https://en.wikipedia.org/wiki/NTFS
|
passage: Active NF-κB induces the expression of anti-apoptotic genes such as Bcl-2, resulting in inhibition of apoptosis. NF-κB has been found to play both an antiapoptotic role and a proapoptotic role depending on the stimuli utilized and the cell type.
### HIV progression
The progression of the human immunodeficiency virus infection into AIDS is due primarily to the depletion of CD4+ T-helper lymphocytes in a manner that is too rapid for the body's bone marrow to replenish the cells, leading to a compromised immune system. One of the mechanisms by which T-helper cells are depleted is apoptosis, which results from a series of biochemical pathways:
1. HIV enzymes deactivate anti-apoptotic Bcl-2. This does not directly cause cell death but primes the cell for apoptosis should the appropriate signal be received. In parallel, these enzymes activate proapoptotic procaspase-8, which does directly activate the mitochondrial events of apoptosis.
1. HIV may increase the level of cellular proteins that prompt Fas-mediated apoptosis.
1. HIV proteins decrease the amount of CD4 glycoprotein marker present on the cell membrane.
1. Released viral particles and proteins present in extracellular fluid are able to induce apoptosis in nearby "bystander" T helper cells.
1.
|
https://en.wikipedia.org/wiki/Apoptosis
|
passage: At high optical powers, scattering can also be caused by nonlinear optical processes in the fiber.
### UV-Vis-IR absorption
In addition to light scattering, attenuation or signal loss can also occur due to selective absorption of specific wavelengths. Primary material considerations include both electrons and molecules as follows:
- At the electronic level, it depends on whether the electron orbitals are spaced (or "quantized") such that they can absorb a quantum of light (or photon) of a specific wavelength or frequency in the ultraviolet (UV) or visible ranges. This is what gives rise to color.
- At the atomic or molecular level, it depends on the frequencies of atomic or molecular vibrations or chemical bonds, how closely packed its atoms or molecules are, and whether or not the atoms or molecules exhibit long-range order. These factors will determine the capacity of the material to transmit longer wavelengths in the infrared (IR), far IR, radio, and microwave ranges.
The design of any optically transparent device requires the selection of materials based upon knowledge of its properties and limitations. The crystal structure absorption characteristics observed at the lower frequency regions (mid- to far-IR wavelength range) define the long-wavelength transparency limit of the material. They are the result of the interactive coupling between the motions of thermally induced vibrations of the constituent atoms and molecules of the solid lattice and the incident light wave radiation.
|
https://en.wikipedia.org/wiki/Optical_fiber
|
passage: Plants are the eukaryotes that form the kingdom Plantae; they are predominantly photosynthetic. This means that they obtain their energy from sunlight, using chloroplasts derived from endosymbiosis with cyanobacteria to produce sugars from carbon dioxide and water, using the green pigment chlorophyll. Exceptions are parasitic plants that have lost the genes for chlorophyll and photosynthesis, and obtain their energy from other plants or fungi. Most plants are multicellular, except for some green algae.
Historically, as in Aristotle's biology, the plant kingdom encompassed all living things that were not animals, and included algae and fungi.
## Definition
Definitions have narrowed since then; current definitions exclude fungi and some of the algae. By the definition used in this article, plants form the clade Viridiplantae (green plants), which consists of the green algae and the embryophytes or land plants (hornworts, liverworts, mosses, lycophytes, ferns, conifers and other gymnosperms, and flowering plants). A definition based on genomes includes the Viridiplantae, along with the red algae and the glaucophytes, in the clade Archaeplastida.
There are about 380,000 known species of plants, of which the majority, some 260,000, produce seeds. They range in size from single cells to the tallest trees.
|
https://en.wikipedia.org/wiki/Plant
|
passage: Modern Unix-derivatives are generally based on module-loading monolithic kernels. Examples of this are the Linux kernel in the many distributions of GNU, IBM AIX, as well as the Berkeley Software Distribution variant kernels such as FreeBSD, DragonFly BSD, OpenBSD, NetBSD, and macOS. Apart from these alternatives, amateur developers maintain an active operating system development community, populated by self-written hobby kernels which mostly end up sharing many features with Linux, FreeBSD, DragonflyBSD, OpenBSD or NetBSD kernels and/or being compatible with them.
### Classic Mac OS and macOS
Apple first launched its classic Mac OS in 1984, bundled with its Macintosh personal computer. Apple moved to a nanokernel design in Mac OS 8.6. Against this, the modern macOS (originally named Mac OS X) is based on Darwin, which uses a hybrid kernel called XNU, which was created by combining the 4.3BSD kernel and the Mach kernel.
### Microsoft Windows
Microsoft Windows was first released in 1985 as an add-on to MS-DOS. Because of its dependence on another operating system, initial releases of Windows, prior to Windows 95, were considered an operating environment (not to be confused with an operating system). This product line continued to evolve through the 1980s and 1990s, with the Windows 9x series adding 32-bit addressing and pre-emptive multitasking; but ended with the release of Windows Me in 2000.
Microsoft also developed Windows NT, an operating system with a very similar interface, but intended for high-end and business users.
|
https://en.wikipedia.org/wiki/Kernel_%28operating_system%29
|
passage: ## Transceivers versus transponders
Transceivers: Since communication over a single wavelength is one-way (simplex communication), and most practical communication systems require two-way (duplex) communication, two wavelengths will be required if both directions share the same fiber; if separate fibers are used in a so-called fiber pair, then the same wavelength is normally used and it is not WDM. As a result, at each end both a transmitter and a receiver will be required. A combination of a transmitter and a receiver is called a transceiver; it converts an electrical signal to and from an optical signal. WDM transceivers made for single-strand operation require the opposing transmitters to use different wavelengths. WDM transceivers additionally require an optical splitter/combiner to couple the transmitter and receiver paths onto the one fiber strand.
Transponder: In practice, the signal inputs and outputs will not be electrical but optical instead (typically at 1550 nm). This means that in effect wavelength converters are needed instead, which is exactly what a transponder is. A transponder can be made up of two transceivers placed back to back: the first transceiver converting the 1550 nm optical signal to/from an electrical signal, and the second transceiver converting the electrical signal to/from an optical signal at the required wavelength. Transponders that don't use an intermediate electrical signal (all-optical transponders) are in development.
|
https://en.wikipedia.org/wiki/Wavelength-division_multiplexing
|
passage: In 3D computer graphics, solid objects are usually modeled by polyhedra. A face of a polyhedron is a planar polygon bounded by straight line segments, called edges. Curved surfaces are usually approximated by a polygon mesh. Computer programs for line drawings of opaque objects must be able to decide which edges or which parts of the edges are hidden by an object itself or by other objects, so that those edges can be clipped during rendering. This problem is known as hidden-line removal.
The first known solution to the hidden-line problem was devised by L. G. Roberts in 1963. However, it severely restricts the model: it requires that all objects be convex. Ruth A. Weiss of Bell Labs documented her 1964 solution to this problem in a 1965 paper.
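For a single convex object of the kind Roberts' restriction allows, a much simpler observation than his full method already decides edge visibility: an edge is drawn exactly when at least one of its two adjacent faces points toward the viewer. A minimal Python sketch of that test, assuming outward face normals and an orthographic view direction; this is only the single-object special case, not Roberts' algorithm:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def visible_edges(edges, face_normals, view_dir):
    """edges: pairs (face_i, face_j) of the two faces sharing each edge.
    For one convex polyhedron with outward normals, an edge is drawn iff
    at least one adjacent face is front-facing (normal opposes view_dir).
    Faces exactly edge-on to the viewer are treated as back-facing here."""
    front = [dot(n, view_dir) < 0 for n in face_normals]
    return [e for e in edges if front[e[0]] or front[e[1]]]

# Example: unit cube faces indexed 0..5 with outward normals.
normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
# Each of the 12 cube edges listed as the pair of faces that share it.
edges = [(0, 2), (0, 3), (0, 4), (0, 5), (1, 2), (1, 3),
         (1, 4), (1, 5), (2, 4), (2, 5), (3, 4), (3, 5)]
# Viewer looks along -z, so the +z face (index 4) is the only front-facing one.
print(visible_edges(edges, normals, view_dir=(0, 0, -1)))
# -> [(0, 4), (1, 4), (2, 4), (3, 4)]
```

Occlusion between different objects, which Roberts' method also handled, requires testing edges against the faces of other objects, and that is what makes the general problem computationally demanding.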
In 1966 Ivan E. Sutherland listed 10 unsolved problems in computer graphics. Problem number seven was "hidden-line removal". In terms of computational complexity, this problem was solved by Frank Devai in 1986.
Models, e.g. in computer-aided design, can have thousands or millions of edges. Therefore, a computational-complexity approach expressing resource requirements (such as time and memory) as a function of the problem sizes is crucial. Time requirements are particularly important in interactive systems.
Problem sizes for hidden-line removal are the total number of the edges of the model and the total number of the visible segments of the edges. Visibility can change at the intersection points of the images of the edges.
|
https://en.wikipedia.org/wiki/Hidden-line_removal
|