passage: By construction, the values in a quotient filter are stored in sorted order. Each run is associated with a specific quotient value, which provides the most significant portion of the fingerprint; the runs are stored in order, and each slot in a run provides the least significant portion of the fingerprint. So, by working from left to right, one can reconstruct all the fingerprints, and the resulting list of integers will be in sorted order. Merging two quotient filters is then a simple matter of converting each quotient filter into such a list, merging the two lists and using the result to populate a new, larger quotient filter. Similarly, we can halve or double the size of a quotient filter without rehashing the keys, since the fingerprints can be recomputed using just the quotients and remainders.
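As a rough illustration (a minimal Python sketch with hypothetical helper names, not the actual slot layout of a quotient filter): merging operates on the sorted fingerprint lists, and doubling the table just moves one bit from the remainder into the quotient, so no key is ever rehashed.

```python
import heapq

def split_fingerprint(fingerprint, remainder_bits):
    """Split a fingerprint into (quotient, remainder) for a table whose
    remainders are remainder_bits wide."""
    return fingerprint >> remainder_bits, fingerprint & ((1 << remainder_bits) - 1)

def merge_fingerprints(sorted_a, sorted_b):
    """Merge two already-sorted fingerprint lists, preserving sorted order."""
    return list(heapq.merge(sorted_a, sorted_b))

# Doubling the filter: one bit migrates from remainder to quotient; the
# fingerprints (and therefore the keys) are untouched.
old_remainder_bits = 8
new_remainder_bits = old_remainder_bits - 1
fingerprints = merge_fingerprints([0x1A2B, 0x3C4D], [0x2B3C])
new_slots = [split_fingerprint(f, new_remainder_bits) for f in fingerprints]
```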
https://en.wikipedia.org/wiki/Quotient_filter
passage: Thus we get the diffusion map from the original data to a k-dimensional space which is embedded in the original space. It can be proved that $$ D_t(x_i,x_j)^2 \approx||\Psi_t(x_i)-\Psi_t(x_j)||^2 \, $$ so the Euclidean distance in the diffusion coordinates approximates the diffusion distance. ## Algorithm The basic algorithm framework of diffusion map is as follows: Step 1. Given the similarity matrix L. Step 2. Normalize the matrix according to parameter $$ \alpha $$ : $$ L^{(\alpha)} = D^{-\alpha} L D^{-\alpha} $$ . Step 3. Form the normalized matrix $$ M=({D}^{(\alpha)})^{-1}L^{(\alpha)} $$ . Step 4. Compute the k largest eigenvalues of $$ M^t $$ and the corresponding eigenvectors. Step 5. Use the diffusion map to get the embedding $$ \Psi_t $$ . ## Application In the paper, Nadler et al. showed how to design a kernel that reproduces the diffusion induced by a Fokker–Planck equation. They also explained that, when the data approximate a manifold, one can recover the geometry of this manifold by computing an approximation of the Laplace–Beltrami operator.
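A minimal NumPy sketch of these five steps, assuming a dense symmetric similarity matrix L (parameter values and variable names are illustrative):

```python
import numpy as np

def diffusion_map(L, alpha=0.5, t=1, k=3):
    """Sketch of the five steps above for a dense symmetric similarity matrix L."""
    d = L.sum(axis=1)                                       # degrees of L
    L_a = np.diag(d ** -alpha) @ L @ np.diag(d ** -alpha)   # Step 2: L^(alpha)
    d_a = L_a.sum(axis=1)                                   # degrees of L^(alpha)
    M = np.diag(1.0 / d_a) @ L_a                            # Step 3: row-normalized M
    evals, evecs = np.linalg.eig(M)                         # Step 4 (spectrum of M is real here)
    order = np.argsort(-evals.real)[:k]
    # Step 5: Psi_t(x_i) = (lambda_1^t psi_1(x_i), ..., lambda_k^t psi_k(x_i));
    # in practice the trivial eigenvector with eigenvalue 1 is often discarded.
    return evecs[:, order].real * (evals[order].real ** t)
```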
https://en.wikipedia.org/wiki/Diffusion_map
passage: These devices are called optical crossconnectors (OXCs). Various categories of OXCs include electronic ("opaque"), optical ("transparent"), and wavelength-selective devices. ## Enhanced WDM Cisco's Enhanced WDM system is a network architecture that combines two different types of multiplexing technologies to transmit data over optical fibers. EWDM combines Coarse Wave Division Multiplexing (CWDM) connections using SFPs and GBICs with Dense Wave Division Multiplexing (DWDM) connections using XENPAK, X2 or XFP DWDM modules. The Enhanced WDM system can use either passive or boosted DWDM connections to allow a longer range for the connection. In addition to this, C form-factor pluggable modules deliver Ethernet suitable for high-speed Internet backbone connections. ## Shortwave WDM Shortwave WDM uses vertical-cavity surface-emitting laser (VCSEL) transceivers with four wavelengths in the 846 to 953 nm range over single OM5 fiber, or two-fiber connectivity for OM3/OM4 fiber. ## Transceivers versus transponders Transceivers Since communication over a single wavelength is one-way (simplex communication), and most practical communication systems require two-way (duplex) communication, two wavelengths will be required if both directions share the same fiber; if separate fibers are used in a so-called fiber pair, then the same wavelength is normally used and it is not WDM. As a result, at each end both a transmitter and a receiver will be required.
https://en.wikipedia.org/wiki/Wavelength-division_multiplexing
passage: ## Busemann functions on a Hadamard space In a Hadamard space, where any two points are joined by a unique geodesic segment, the function $$ F = F_t $$ is convex, i.e. convex on geodesic segments $$ [x,y] $$ . Explicitly this means that if $$ z(s) $$ is the point which divides $$ [x,y] $$ in the ratio , then $$ F(z(s)) \leq s F(x) + (1 - s) F(y) $$ . For fixed $$ a $$ the function $$ d(x,a) $$ is convex and hence so are its translates; in particular, if $$ \gamma $$ is a geodesic ray in $$ X $$ , then $$ F_t $$ is convex. Since the Busemann function $$ B_\gamma $$ is the pointwise limit of $$ F_t $$ , - Busemann functions are convex on Hadamard spaces. - On a Hadamard space, the functions $$ F_t $$ converge to $$ B_\gamma $$ uniformly on any bounded subset of $$ X $$ . Let . Since $$ \gamma (t) $$ is parametrised by arclength, Alexandrov's first comparison theorem for Hadamard spaces implies that the function is convex.
https://en.wikipedia.org/wiki/Busemann_function
passage: ### Martingale sequences with respect to another sequence More generally, a sequence Y1, Y2, Y3 ... is said to be a martingale with respect to another sequence X1, X2, X3 ... if for all n $$ \mathbf{E} ( \vert Y_n \vert )< \infty $$ $$ \mathbf{E} (Y_{n+1}\mid X_1,\ldots,X_n)=Y_n. $$ Similarly, a continuous-time martingale with respect to the stochastic process Xt is a stochastic process Yt such that for all t $$ \mathbf{E} ( \vert Y_t \vert )<\infty $$ $$ \mathbf{E} ( Y_{t} \mid \{ X_{\tau}, \tau \leq s \} ) = Y_s\quad \forall s \le t. $$ This expresses the property that the conditional expectation of an observation at time t, given all the observations up to time $$ s $$ , is equal to the observation at time s (of course, provided that s ≤ t). The second property implies that $$ Y_n $$ is measurable with respect to $$ X_1 \dots X_n $$ .
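As a standard illustration (not from the source), the fair ±1 random walk Y_n = X_1 + ... + X_n is a martingale with respect to the X_i; the sketch below checks the defining property empirically by conditioning on the value of Y_5:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 100_000, 10
X = rng.choice([-1, 1], size=(n_paths, n_steps))   # fair coin steps
Y = X.cumsum(axis=1)                               # Y_n = X_1 + ... + X_n

# Empirical check of E[Y_6 | X_1, ..., X_5] = Y_5: condition on the value of
# Y_5 (a function of the history) and compare the conditional mean of Y_6.
for y in np.unique(Y[:, 4]):
    mask = Y[:, 4] == y
    print(y, Y[mask, 5].mean())    # conditional mean of Y_6 is approximately y
```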
https://en.wikipedia.org/wiki/Martingale_%28probability_theory%29
passage: With the aid of $$ k $$ , we can form a set of samples which has the distribution $$ g(\alpha_{t+1},k|Y_{t+1}) $$ . Then, we draw from this sample set $$ g(\alpha_{t+1},k|Y_{t+1}) $$ instead of directly from $$ \widehat{f}(\alpha_{t+1}|Y_{t+1}) $$ . In other words, the samples are drawn from $$ \widehat{f}(\alpha_{t+1}|Y_{t+1}) $$ with different probabilities. The samples are ultimately utilized to approximate $$ f(\alpha_{t+1}|Y_{t+1}) $$ . Take the SIR method for example: - The particle filters draw $$ R $$ samples from $$ g(\alpha_{t+1},k|Y_{t+1}) $$ . - Assign each sample the weight $$ \pi_j=\frac{\omega_j}{\sum_{i=1}^R\omega_i}, \omega_j=\frac{f(y_{t+1}|\alpha_{t+1}^j)f(\alpha_{t+1}^j|\alpha_t^k)}{g(\alpha_{t+1}^j,k^j|Y_{t+1})} $$ . -
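A minimal sketch of this weighting step in Python, assuming the likelihood, transition and proposal densities are supplied as functions (all names here are illustrative placeholders, not a particular library API):

```python
import numpy as np

def apf_reweight(particles_prev, k_idx, particles_new, y_next,
                 lik, trans, proposal):
    """Weighting step of the SIR example above.
    lik(y, a):        f(y_{t+1} | alpha_{t+1})
    trans(a, a_prev): f(alpha_{t+1} | alpha_t)
    proposal(a, k):   g(alpha_{t+1}, k | Y_{t+1})"""
    w = np.array([
        lik(y_next, a) * trans(a, particles_prev[k]) / proposal(a, k)
        for a, k in zip(particles_new, k_idx)
    ])
    return w / w.sum()          # normalized weights pi_j
```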
https://en.wikipedia.org/wiki/Auxiliary_particle_filter
passage: Splitting and recombination of strings correspond to particle emission and absorption, giving rise to the interactions between particles. There are notable differences between the world described by string theory and the everyday world. In everyday life, there are three familiar dimensions of space (up/down, left/right, and forward/backward), and there is one dimension of time (later/earlier). Thus, in the language of modern physics, one says that spacetime is four-dimensional. One of the peculiar features of string theory is that it requires extra dimensions of spacetime for its mathematical consistency. In superstring theory, the version of the theory that incorporates a theoretical idea called supersymmetry, there are six extra dimensions of spacetime in addition to the four that are familiar from everyday experience. One of the goals of current research in string theory is to develop models in which the strings represent particles observed in high energy physics experiments. For such a model to be consistent with observations, its spacetime must be four-dimensional at the relevant distance scales, so one must look for ways to restrict the extra dimensions to smaller scales. In most realistic models of physics based on string theory, this is accomplished by a process called compactification, in which the extra dimensions are assumed to "close up" on themselves to form circles. In the limit where these curled up dimensions become very small, one obtains a theory in which spacetime has effectively a lower number of dimensions. A standard analogy for this is to consider a multidimensional object such as a garden hose.
https://en.wikipedia.org/wiki/Mirror_symmetry_%28string_theory%29
passage: Suppose that the goal is to estimate $$ P(E) $$ for $$ E $$ on the tail of $$ P(E) $$ . Formally, $$ P(E) $$ can be written as $$ P(E) = \int_\Omega P(E\mid x) P(x) \,dx = \int_\Omega \delta\big(E - E(x)\big) P(x) \,dx = E \big(P(E\mid X)\big) $$ and, thus, estimating $$ P(E) $$ can be accomplished by estimating the expected value of the indicator function $$ A_E(x) \equiv \mathbf{1}_E(x) $$ , which is 1 when $$ E(x) \in [E, E + \Delta E] $$ and zero otherwise. Because $$ E $$ is on the tail of $$ P(E) $$ , the probability to draw a state $$ x $$ with $$ E(x) $$ on the tail of $$ P(E) $$ is proportional to $$ P(E) $$ , which is small by definition. The Metropolis–Hastings algorithm can be used here to sample (rare) states more likely and thus increase the number of samples used to estimate $$ P(E) $$ on the tails.
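For concreteness, here is a generic random-walk Metropolis sketch in Python showing the basic accept/reject mechanics; the target, step size and tail event are illustrative assumptions, not part of the source:

```python
import numpy as np

def metropolis_hastings(log_p, x0, n_steps, step=1.0, rng=None):
    """Random-walk Metropolis for a 1-D target with unnormalized log density log_p."""
    rng = rng or np.random.default_rng(0)
    x, lp, samples = x0, log_p(x0), []
    for _ in range(n_steps):
        x_new = x + step * rng.normal()            # symmetric proposal
        lp_new = log_p(x_new)
        if np.log(rng.random()) < lp_new - lp:     # accept with prob min(1, p_new/p_old)
            x, lp = x_new, lp_new
        samples.append(x)
    return np.array(samples)

# e.g. sample a standard normal target and crudely estimate a tail indicator's mean
samples = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_steps=50_000)
print((samples > 3.0).mean())
# For genuinely rare tails one would bias the target toward the tail and
# reweight the samples, as described in the passage.
```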
https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm
passage: This relation between the angles remains approximately true when the vectors don't lie in the same plane, especially when the angles are small. The angle between N and H is therefore sometimes called the halfway angle. Considering that the angle between the halfway vector and the surface normal is likely to be smaller than the angle between R and V used in Phong's model (unless the surface is viewed from a very steep angle for which it is likely to be larger), and since Phong is using $$ \left( R \cdot V \right)^{\alpha}, $$ an exponent can be set $$ \alpha^\prime > \alpha $$ such that $$ \left(N \cdot H \right)^{\alpha^\prime} $$ is closer to the former expression. For front-lit surfaces (specular reflections on surfaces facing the viewer), $$ \alpha^\prime = 4\,\alpha $$ will result in specular highlights that very closely match the corresponding Phong reflections. However, while the Phong reflections are always round for a flat surface, the Blinn–Phong reflections become elliptical when the surface is viewed from a steep angle. This can be compared to the case where the sun is reflected in the sea close to the horizon, or where a far away street light is reflected in wet pavement, where the reflection will always be much more extended vertically than horizontally.
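A small NumPy sketch comparing the two specular terms for a front-lit configuration (the vectors and exponent are arbitrary illustrative choices):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

N = normalize(np.array([0.0, 0.0, 1.0]))       # surface normal
L = normalize(np.array([0.3, 0.0, 1.0]))       # direction to light
V = normalize(np.array([-0.3, 0.0, 1.0]))      # direction to viewer

H = normalize(L + V)                           # halfway vector
R = normalize(2 * np.dot(N, L) * N - L)        # Phong reflection vector

alpha = 32
phong = max(np.dot(R, V), 0.0) ** alpha
blinn = max(np.dot(N, H), 0.0) ** (4 * alpha)  # alpha' = 4*alpha heuristic
print(phong, blinn)                            # close for this front-lit setup
```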
https://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_reflection_model
passage: For each , this construction is identical to a quantum harmonic oscillator. The quantum field is an infinite array of quantum oscillators. The quantum Hamiltonian then amounts to $$ H = \sum_{k=-\infty}^{\infty} \hbar\omega_k a_k^\dagger a_k = \sum_{k=-\infty}^{\infty} \hbar\omega_k N_k , $$ where may be interpreted as the number operator giving the number of particles in a state with momentum . This Hamiltonian differs from the previous expression by the subtraction of the zero-point energy of each harmonic oscillator. This satisfies the condition that must annihilate the vacuum, without affecting the time-evolution of operators via the above exponentiation operation. This subtraction of the zero-point energy may be considered to be a resolution of the quantum operator ordering ambiguity, since it is equivalent to requiring that all creation operators appear to the left of annihilation operators in the expansion of the Hamiltonian. This procedure is known as Wick ordering or normal ordering. #### Other fields All other fields can be quantized by a generalization of this procedure. Vector or tensor fields simply have more components, and independent creation and destruction operators must be introduced for each independent component. If a field has any internal symmetry, then creation and destruction operators must be introduced for each component of the field related to this symmetry as well.
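To make the subtraction explicit (a standard one-line computation, using the commutation relation $$ [a_k, a_k^\dagger] = 1 $$ ), the symmetrically ordered oscillator Hamiltonian and its normal-ordered form differ by the sum of zero-point energies:

$$ \sum_{k} \frac{\hbar\omega_k}{2}\left(a_k^\dagger a_k + a_k a_k^\dagger\right) = \sum_{k} \hbar\omega_k\left(a_k^\dagger a_k + \tfrac{1}{2}\right) = H + \sum_{k} \tfrac{1}{2}\hbar\omega_k . $$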
https://en.wikipedia.org/wiki/Canonical_quantization
passage: Pharmacokinetics is the movement of the drug in the body; it is usually described as 'what the body does to the drug'. The physico-chemical properties of a drug affect the rate and extent of absorption, the extent of distribution, metabolism and elimination. The drug needs to have the appropriate molecular weight, polarity etc. in order to be absorbed. The fraction of a drug that reaches the systemic circulation is termed bioavailability; this is simply the ratio of the peak plasma drug level after oral administration to the drug concentration after an IV administration (in which the first-pass effect is avoided and therefore no drug is lost). A drug must be lipophilic (lipid soluble) in order to pass through biological membranes because biological membranes are made up of a lipid bilayer (phospholipids etc.). Once the drug reaches the blood circulation it is distributed throughout the body, becoming more concentrated in highly perfused organs. ### Gene expression modulation and epigenetics Apart from classical pharmacological targets, drugs may exert effects through direct or indirect gene expression modulation, or even introduce persistent state changes through epigenetic reprogramming. Therefore, drugs should be screened for off-target activity by gene expression profiling, in addition to conventional ligand binding, enzyme assays, etc. ## Administration, drug policy and safety ### Drug policy In the United States, the Food and Drug Administration (FDA) is responsible for creating guidelines for the approval and use of drugs. The FDA requires that all approved drugs fulfill two requirements: 1.
https://en.wikipedia.org/wiki/Pharmacology
passage: These observations make it difficult to determine whether female or resource dispersion primarily influences male aggregation, especially in light of the apparent difficulty that males may have defending resources and females in such densely populated areas. Because the reason for male aggregation into leks is unclear, five hypotheses have been proposed. These postulates propose the following as reasons for male lekking: hotspot, predation reduction, increased female attraction, hotshot males, facilitation of female choice. Bradbury, J. E. and Gibson, R. M. (1983) Leks and mate choice. In: Mate Choice (ed. P. Bateson). pp. 109–138. Cambridge University Press, Cambridge. With all of the mating behaviors discussed, the primary factors influencing differences within and between species are ecology, social conflicts, and life history differences. In some other instances, neither direct nor indirect competition is seen. Instead, in species like the Edith's checkerspot butterfly, males' efforts are directed at acquisition of females and they exhibit indiscriminate mate location behavior, where, given the low cost of mistakes, they blindly attempt to mate both correctly with females and incorrectly with other objects. ### Mating systems with male parental care #### Monogamy Monogamy is the mating system in 90% of birds, possibly because each male and female has a greater number of offspring if they share in raising a brood. In obligate monogamy, males feed females on the nest, or share in incubation and chick-feeding.
https://en.wikipedia.org/wiki/Behavioral_ecology
passage: Rather than simply choosing a single sequence $$ \{{\color{OliveGreen}c_t}\} $$ , the consumer now must choose a sequence $$ \{{\color{OliveGreen}c_t}\} $$ for each possible realization of $$ \{r_t\} $$ in such a way that their lifetime expected utility is maximized: $$ \max_{ \left \{ c_{t} \right \}_{t=0}^{\infty} } \mathbb{E}\bigg( \sum_{t=0} ^{\infty} \beta^t u ({\color{OliveGreen}c_t}) \bigg). $$ The expectation $$ \mathbb{E} $$ is taken with respect to the appropriate probability measure given by Q on the sequences of rs. Because r is governed by a Markov process, dynamic programming simplifies the problem significantly. Then the Bellman equation is simply: $$ V(a, r) = \max_{ 0 \leq c \leq a } \{ u(c) + \beta \int V((1+r) (a - c), r') Q(r, d\mu_r) \} . $$ Under some reasonable assumptions, the resulting optimal policy function g(a,r) is measurable.
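A minimal value-iteration sketch for this problem on a discretized asset grid (the log utility, grid, and Markov chain for r are illustrative assumptions):

```python
import numpy as np

beta = 0.95
r_vals = np.array([0.02, 0.05])            # interest-rate states
Q = np.array([[0.9, 0.1], [0.2, 0.8]])     # Markov transition matrix for r
a_grid = np.linspace(0.1, 10.0, 200)       # asset grid; a' is chosen on the grid

V = np.zeros((a_grid.size, r_vals.size))
for _ in range(500):                       # iterate the Bellman operator
    V_new = np.empty_like(V)
    for j, r in enumerate(r_vals):
        EV = V @ Q[j]                      # E[V(a', r') | r] for each candidate a'
        for i, a in enumerate(a_grid):
            c = a - a_grid / (1 + r)       # consumption implied by each a' = (1+r)(a-c)
            vals = np.where(c > 0,
                            np.log(np.maximum(c, 1e-12)) + beta * EV,
                            -np.inf)
            V_new[i, j] = vals.max()
    V = V_new
# the greedy policy g(a, r) is the maximizing a' for each state (a, r)
```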
https://en.wikipedia.org/wiki/Bellman_equation
passage: Its points are marked by the radius-vector $$ \displaystyle\mathbf r $$ . The space whose points are marked by the pair of vectors $$ \displaystyle(\mathbf r,\mathbf v) $$ is called the phase space of the dynamical system (). ## Euclidean structure The configuration space and the phase space of the dynamical system () both are Euclidean spaces, i. e. they are equipped with a Euclidean structure. The Euclidean structure of them is defined so that the kinetic energy of the single multidimensional particle with the unit mass $$ \displaystyle m=1 $$ is equal to the sum of kinetic energies of the three-dimensional particles with the masses $$ \displaystyle m_1,\,\ldots,\,m_N $$ : ## Constraints and internal coordinates In some cases the motion of the particles with the masses $$ \displaystyle m_1,\,\ldots,\,m_N $$ can be constrained. Typical constraints look like scalar equations of the form Constraints of the form () are called holonomic and scleronomic. In terms of the radius-vector $$ \displaystyle\mathbf r $$ of the Newtonian dynamical system () they are written as Each such constraint reduces by one the number of degrees of freedom of the Newtonian dynamical system (). Therefore, the constrained system has $$ \displaystyle n=3\,N-K $$ degrees of freedom. Definition.
https://en.wikipedia.org/wiki/Newtonian_dynamics
passage: The space of all natural cubic splines, for instance, is a subspace of the space of all cubic C2 splines. The literature of splines is replete with names for special types of splines. These names have been associated with: - The choices made for representing the spline, for example: - using basis functions for the entire spline (giving us the name B-splines) - using Bernstein polynomials as employed by Pierre Bézier to represent each polynomial piece (giving us the name Bézier splines) - The choices made in forming the extended knot vector, for example: - using single knots for continuity and spacing these knots evenly on (giving us uniform splines) - using knots with no restriction on spacing (giving us nonuniform splines) - Any special conditions imposed on the spline, for example: - enforcing zero second derivatives at and (giving us natural splines) - requiring that given data values be on the spline (giving us interpolating splines) Often a special name was chosen for a type of spline satisfying two or more of the main items above. For example, the Hermite spline is a spline that is expressed using Hermite polynomials to represent each of the individual polynomial pieces. These are most often used with ; that is, as Cubic Hermite splines. In this degree they may additionally be chosen to be only tangent-continuous (); which implies that all interior knots are double.
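For instance, SciPy's `CubicSpline` can build a natural interpolating cubic spline directly; the short sketch below (illustrative data) also checks the zero second derivatives at the end knots:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.sin(x)

# A natural cubic spline: an interpolating C2 cubic spline with zero
# second derivative at both end knots.
natural = CubicSpline(x, y, bc_type='natural')

xs = np.linspace(0, 4, 9)
print(natural(xs))                              # interpolated values
print(natural(x[0], 2), natural(x[-1], 2))      # second derivatives ~ 0 at the ends
```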
https://en.wikipedia.org/wiki/Spline_%28mathematics%29
passage: Thus, when X = N and μ is counting measure on N, then any sequence {Hk} of separable Hilbert spaces can be considered as a measurable family. Moreover, $$ \int^\oplus_X H_x \, \mathrm{d} \mu(x) \cong \bigoplus_{k \in \mathbb{N}} H_k $$ ## Decomposable operators For the example of a discrete measure on a countable set, any bounded linear operator T on $$ H = \bigoplus_{k \in \mathbb{N}} H_k $$ is given by an infinite matrix $$ \begin{bmatrix} T_{1 1} & T_{1 2} & \cdots & T_{1 n} & \cdots \\ T_{2 1} & T_{2 2} & \cdots & T_{2 n} & \cdots \\ \vdots & \vdots & \ddots & \vdots & \cdots \\ T_{n 1} & T_{n 2} & \cdots & T_{n n} & \cdots \\ \vdots & \vdots & \cdots & \vdots & \ddots \end{bmatrix}. $$ For this example, of a discrete measure on a countable set, decomposable operators are defined as the operators that are block diagonal, having zero for all non-diagonal entries.
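A finite-dimensional toy illustration of the discrete case (not the general measurable-family construction): a decomposable operator on a finite direct sum is simply a block-diagonal matrix.

```python
import numpy as np
from scipy.linalg import block_diag

# "Decomposable" operator on H = H_1 (+) H_2 (+) H_3: each block acts on
# its own summand, and all off-diagonal blocks are zero.
T1 = np.array([[1.0, 2.0], [0.0, 1.0]])   # acts on H_1
T2 = np.array([[3.0]])                    # acts on H_2
T3 = np.eye(3)                            # acts on H_3
T = block_diag(T1, T2, T3)
print(T)
```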
https://en.wikipedia.org/wiki/Direct_integral
passage: (Equivalently, by the state-field correspondence, the sum runs over all states in the space of states.) Some fields may actually be absent, in particular due to constraints from symmetry: conformal symmetry, or extra symmetries. If all fields are primary or descendant, the sum over fields can be reduced to a sum over primaries, by rewriting the contributions of any descendant in terms of the contribution of the corresponding primary: $$ O_1(x_1)O_2(x_2) = \sum_p C_{12p}P_p(x_1-x_2,\partial_{x_2}) O_p(x_2), $$ where the fields $$ O_p $$ are all primary, and $$ C_{12p} $$ is the three-point structure constant (which for this reason is also called OPE coefficient). The differential operator $$ P_p(x_1-x_2,\partial_{x_2}) $$ is an infinite series in derivatives, which is determined by conformal symmetry and therefore in principle known. Viewing the OPE as a relation between correlation functions shows that the OPE must be associative. Furthermore, if the space is Euclidean, the OPE must be commutative, because correlation functions do not depend on the order of the fields, i.e. . The existence of the operator product expansion is a fundamental axiom of the conformal bootstrap. However, it is generally not necessary to compute operator product expansions and in particular the differential operators .
https://en.wikipedia.org/wiki/Conformal_field_theory
passage: A shielded rectangular conductor can also be used and this has certain manufacturing advantages over coax and can be seen as the forerunner of the planar technologies (stripline and microstrip). However, planar technologies really started to take off when printed circuits were introduced. These methods are significantly cheaper than waveguide and have largely taken its place in most bands. However, waveguide is still favoured in the higher microwave bands from around Ku band upwards. ## Properties ### Propagation modes and cutoff frequencies A propagation mode in a waveguide is one solution of the wave equations, or, in other words, the form of the wave. Due to the constraints of the boundary conditions, there are only limited frequencies and forms for the wave function which can propagate in the waveguide. The lowest frequency in which a certain mode can propagate is the cutoff frequency of that mode. The mode with the lowest cutoff frequency is the fundamental mode of the waveguide, and its cutoff frequency is the waveguide cutoff frequency. Propagation modes are computed by solving the Helmholtz equation alongside a set of boundary conditions depending on the geometrical shape and materials bounding the region. The usual assumption for infinitely long uniform waveguides allows us to assume a propagating form for the wave, i.e. stating that every field component has a known dependency on the propagation direction (i.e. $$ z $$ ).
https://en.wikipedia.org/wiki/Waveguide
passage: $$ As a corollary, the awkward terms can now be written in terms of a divergence by comparison with the vector Green equation, $$ \mathbf{P}\cdot \left[ \nabla \left(\nabla \cdot \mathbf{Q} \right) \right] - \mathbf{Q} \cdot \left[ \nabla \left( \nabla \cdot \mathbf{P} \right) \right] = \nabla \cdot\left[\mathbf{P}\left(\nabla\cdot\mathbf{Q}\right)-\mathbf{Q} \left( \nabla \cdot \mathbf{P} \right) \right]. $$ This result can be verified by expanding the divergence of a scalar times a vector on the RHS.
https://en.wikipedia.org/wiki/Green%27s_identities
passage: Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated. Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled; e.g., the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics. Theoretical advances may consist in setting aside old, incorrect paradigms (e.g., aether theory of light propagation, caloric theory of heat, burning consisting of evolving phlogiston, or astronomical bodies revolving around the Earth) or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result. Enc. Britannica (1994), pg 844. Sometimes though, advances may proceed along different paths. For example, an essentially correct theory may need some conceptual or factual revisions; atomic theory, first postulated millennia ago (by several thinkers in Greece and India) and the two-fluid theory of electricity are two cases in point. However, an exception to all the above is the wave–particle duality, a theory combining aspects of different, opposing models via the Bohr complementarity principle.
https://en.wikipedia.org/wiki/Theoretical_physics
passage: Each individual component of the gradient, $$ \partial C/\partial w^l_{jk}, $$ can be computed by the chain rule; but doing this separately for each weight is inefficient. Backpropagation efficiently computes the gradient by avoiding duplicate calculations and not computing unnecessary intermediate values, by computing the gradient of each layer – specifically the gradient of the weighted input of each layer, denoted by $$ \delta^l $$ – from back to front. Informally, the key point is that since the only way a weight in $$ W^l $$ affects the loss is through its effect on the next layer, and it does so linearly, $$ \delta^l $$ are the only data you need to compute the gradients of the weights at layer $$ l $$ , and then the gradients of weights of previous layer can be computed by $$ \delta^{l-1} $$ and repeated recursively. This avoids inefficiency in two ways. First, it avoids duplication because when computing the gradient at layer $$ l $$ , it is unnecessary to recompute all derivatives on later layers $$ l+1, l+2, \ldots $$ each time.
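A compact NumPy sketch of this back-to-front recursion for a fully connected network with sigmoid activations and squared-error loss (these particular choices are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop(weights, x, y):
    """Return dC/dW^l for each layer, computing delta^l from back to front."""
    # forward pass, caching weighted inputs z^l and activations a^l
    a, zs, acts = x, [], [x]
    for W in weights:
        z = W @ a
        a = sigmoid(z)
        zs.append(z)
        acts.append(a)
    # delta^L for the output layer (squared-error loss, sigmoid output)
    delta = (acts[-1] - y) * a * (1 - a)
    grads = [None] * len(weights)
    for l in range(len(weights) - 1, -1, -1):
        grads[l] = np.outer(delta, acts[l])               # dC/dW^l = delta outer a^{l-1}
        if l > 0:
            s = sigmoid(zs[l - 1])
            delta = (weights[l].T @ delta) * s * (1 - s)  # delta for the previous layer
    return grads

# usage: grads = backprop([W1, W2], x, y) with compatible shapes
```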
https://en.wikipedia.org/wiki/Backpropagation
passage: Molecules may be found in many environments, however, from stellar atmospheres to those of planetary satellites. Most of these locations are relatively cool, and molecular emission is most easily studied via photons emitted when the molecules make transitions between low rotational energy states. One molecule, composed of the abundant carbon and oxygen atoms, and very stable against dissociation into atoms, is carbon monoxide (CO). The wavelength of the photon emitted when the CO molecule falls from its lowest excited state to its zero energy, or ground, state is 2.6 mm, corresponding to a frequency of 115 gigahertz. This frequency is a thousand times higher than typical FM radio frequencies. At these high frequencies, molecules in the Earth's atmosphere can block transmissions from space, and telescopes must be located in dry (water is an important atmospheric blocker), high sites. Radio telescopes must have very accurate surfaces to produce high fidelity images. On February 21, 2014, NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.
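The quoted numbers are mutually consistent; a quick wavelength-to-frequency check (not from the source):

```python
c = 299_792_458              # speed of light, m/s
wavelength = 2.6e-3          # CO J=1-0 line, metres
print(c / wavelength / 1e9)  # ~115 GHz
```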
https://en.wikipedia.org/wiki/Atomic_and_molecular_astrophysics
passage: If R is not a member of itself, then its definition entails that it is a member of itself; yet, if it is a member of itself, then it is not a member of itself, since it is the set of all sets that are not members of themselves. The resulting contradiction is Russell's paradox. In symbols: $$ \text{Let } R = \{ x \mid x \not \in x \} \text{, then } R \in R \iff R \not \in R $$ This came around a time of several paradoxes or counter-intuitive results. For example, that the parallel postulate cannot be proved, the existence of mathematical objects that cannot be computed or explicitly described, and the existence of theorems of arithmetic that cannot be proved with Peano arithmetic. The result was a foundational crisis of mathematics. ## Basic concepts and notation Set theory begins with a fundamental binary relation between an object and a set . If is a member (or element) of , the notation is used. A set is described by listing elements separated by commas, or by a characterizing property of its elements, within braces { }. Since sets are objects, the membership relation can relate sets as well, i.e., sets themselves can be members of other sets. A derived binary relation between two sets is the subset relation, also called set inclusion. If all the members of set are also members of set , then is a subset of , denoted .
https://en.wikipedia.org/wiki/Set_theory
passage: ### Exactness If $$ 0 \to U \to V \to W \to 0 $$ is a short exact sequence of vector spaces, then $$ 0 \to {\textstyle\bigwedge}^{\!1}(U) \wedge {\textstyle\bigwedge}(V) \to {\textstyle\bigwedge}(V) \to {\textstyle\bigwedge}(W) \to 0 $$ is an exact sequence of graded vector spaces, as is $$ 0 \to {\textstyle\bigwedge}(U) \to {\textstyle\bigwedge}(V). $$ ### Direct sums In particular, the exterior algebra of a direct sum is isomorphic to the tensor product of the exterior algebras:
https://en.wikipedia.org/wiki/Exterior_algebra
passage: ### Chu–Vandermonde identity The identity generalizes to non-integer arguments. In this case, it is known as the Chu–Vandermonde identity (see Askey 1975, pp. 59–60) and takes the form $$ {s+t \choose n}=\sum_{k=0}^n {s \choose k}{t \choose n-k} $$ for general complex-valued s and t and any non-negative integer n. It can be proved along the lines of the algebraic proof above by multiplying the binomial series for $$ (1+x)^s $$ and $$ (1+x)^t $$ and comparing terms with the binomial series for $$ (1+x)^{s+t} $$ . This identity may be rewritten in terms of the falling Pochhammer symbols as $$ (s+t)_n = \sum_{k=0}^n {n \choose k} (s)_k (t)_{n-k} $$ in which form it is clearly recognizable as an umbral variant of the binomial theorem (for more on umbral variants of the binomial theorem, see binomial type).
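A quick exact check of the identity for non-integer arguments, using generalized binomial coefficients over the rationals (the particular values of s, t, n are arbitrary):

```python
from fractions import Fraction
from math import factorial

def gbinom(s, k):
    """Generalized binomial coefficient '.s choose k' for rational s."""
    num = Fraction(1)
    for i in range(k):
        num *= Fraction(s) - i
    return num / factorial(k)

s, t, n = Fraction(5, 2), Fraction(-1, 3), 6
lhs = gbinom(s + t, n)
rhs = sum(gbinom(s, k) * gbinom(t, n - k) for k in range(n + 1))
print(lhs == rhs)   # True: exact equality over the rationals
```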
https://en.wikipedia.org/wiki/Vandermonde%27s_identity
passage: Usually the resulting logfile is stored on so-called "stable storage", that is a storage medium that is assumed to survive crashes and hardware failures. To gather the necessary information for the logs, two data structures have to be maintained: the dirty page table (DPT) and the transaction table (TT). The dirty page table keeps record of all the pages that have been modified, and not yet written to disk, and the first Sequence Number that caused that page to become dirty. The transaction table contains all currently running transactions and the Sequence Number of the last log entry they created. We create log records of the form (Sequence Number, Transaction ID, Page ID, Redo, Undo, Previous Sequence Number). The Redo and Undo fields keep information about the changes this log record saves and how to undo them. The Previous Sequence Number is a reference to the previous log record that was created for this transaction. In the case of an aborted transaction, it's possible to traverse the log file in reverse order using the Previous Sequence Numbers, undoing all actions taken within the specific transaction. Every transaction implicitly begins with the first "Update" type of entry for the given Transaction ID, and is committed with "End Of Log" (EOL) entry for the transaction. During a recovery, or while undoing the actions of an aborted transaction, a special kind of log record is written, the Compensation Log Record (CLR), to record that the action has already been undone.
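A toy sketch of these structures in Python; the field names follow the passage, but this is only an illustrative skeleton, not the full ARIES protocol:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LogRecord:
    lsn: int                 # Sequence Number
    txn_id: int
    page_id: Optional[int]
    redo: Optional[str]      # how to redo the change
    undo: Optional[str]      # how to undo it
    prev_lsn: Optional[int]  # previous record of the same transaction

log: list[LogRecord] = []
dirty_page_table: dict[int, int] = {}   # page_id -> first LSN that dirtied the page
transaction_table: dict[int, int] = {}  # txn_id  -> LSN of its last log entry

def write_update(txn_id, page_id, redo, undo):
    lsn = len(log)
    rec = LogRecord(lsn, txn_id, page_id, redo, undo,
                    transaction_table.get(txn_id))
    log.append(rec)
    transaction_table[txn_id] = lsn
    dirty_page_table.setdefault(page_id, lsn)
    return lsn
```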
https://en.wikipedia.org/wiki/Algorithms_for_Recovery_and_Isolation_Exploiting_Semantics
passage: $$ With this representation for and , it is evident that $$ \begin{align} E(x, y; u) &= \sum_{n=0}^\infty \frac{u^n}{2^n n! \sqrt{\pi}} \, H_n(x) H_n(y) e^{-\frac{x^2+y^2}{2}} \\ &= \frac{e^{\frac{x^2+y^2}{2}}}{4\pi\sqrt{\pi}}\iint\left( \sum_{n=0}^\infty \frac{1}{2^n n!} (-ust)^n \right ) e^{isx+ity - \frac{s^2}{4} - \frac{t^2}{4}}\, ds\,dt \\ & =\frac{e^{\frac{x^2+y^2}{2}}}{4\pi\sqrt{\pi}}\iint e^{-\frac{ust}{2}} \, e^{isx+ity - \frac{s^2}{4} - \frac{t^2}{4}}\, ds\,dt, \end{align} $$ and
https://en.wikipedia.org/wiki/Hermite_polynomials
passage: If this were valid, it would be extremely useful, because then (at least locally), one has decoupled the system into n scalar differential equations which one can easily solve to find that (locally): $$ Z_i = g_i \exp\left(M^{(i)} \log(x_i)+\sum_{j=1}^{r_i}\frac{T^{(i)}_j}{x_i^{j}}\right). $$ However, this does not work - because the power series solved term-for-term for g will not, in general, converge. Jimbo, Miwa and Ueno showed that this approach nevertheless provides canonical solutions near the singularities, and can therefore be used to define extended monodromy data. This is due to a theorem of George Birkhoff which states that given such a formal series, there is a unique convergent function Gi such that in any sufficiently large sector around the pole, Gi is asymptotic to gi, and $$ Y = G_i \exp\left(M^{(i)} \log(x_i)+\sum_{j=1}^{r_i}\frac{T^{(i)}_j}{x_i^{j}}\right). $$ is a true solution of the differential equation. A canonical solution therefore appears in each such sector near each pole.
https://en.wikipedia.org/wiki/Isomonodromic_deformation
passage: This precursor to dopamine can penetrate through the blood–brain barrier, whereas the neurotransmitter dopamine cannot. There has been extensive research to determine whether L-dopa is a better treatment for Parkinson's disease than other dopamine agonists. Some believe that the long-term use of L-dopa will compromise neuroprotection and, thus, eventually lead to dopaminergic cell death. Though there has been no proof, in-vivo or in-vitro, some still believe that the long-term use of dopamine agonists is better for the patient. ### Alzheimer's disease While a variety of hypotheses have been proposed for the cause of Alzheimer's disease, knowledge of this disease is far from complete, making it difficult to develop methods for treatment. In the brain of Alzheimer's patients, both neuronal nicotinic acetylcholine (nACh) receptors and NMDA receptors are known to be down-regulated. Thus, four anticholinesterases, such as Donepezil and Rivastigmine, have been developed and approved by the U.S. Food and Drug Administration (FDA) for treatment in the U.S. However, these are not ideal drugs, considering their side-effects and limited effectiveness. The excessive stimulation of muscarinic and nicotinic receptors by acetylcholine may contribute to the side effects that anticholinesterases have. One promising drug, nefiracetam, is being developed for the treatment of Alzheimer's and other patients with dementia, and has unique actions in potentiating the activity of both nACh receptors and NMDA receptors.
https://en.wikipedia.org/wiki/Neuropharmacology
passage: Often, the added information is that one of the powers of a certain variable is even or odd. As an example, for the projectile problem, using orientational symbols, , being in the xy-plane will thus have dimension and the range of the projectile will be of the form: $$ R = g^a\,v^b\,\theta^c\text{ which means }\mathsf{L}\,1_\mathrm{x} \sim \left(\frac{\mathsf{L}\,1_\text{y}}{\mathsf{T}^2}\right)^a \left(\frac{\mathsf{L}}{\mathsf{T}}\right)^b\,1_\mathsf{z}^c.\, $$ Dimensional homogeneity will now correctly yield and , and orientational homogeneity requires that . In other words, that must be an odd integer. In fact, the required function of theta will be which is a series consisting of odd powers of . It is seen that the Taylor series of and are orientationally homogeneous using the above multiplication table, while expressions like and are not, and are (correctly) deemed unphysical. Siano's orientational analysis is compatible with the conventional conception of angular quantities as being dimensionless, and within orientational analysis, the radian may still be considered a dimensionless unit. The orientational analysis of a quantity equation is carried out separately from the ordinary dimensional analysis, yielding information that supplements the dimensional analysis.
https://en.wikipedia.org/wiki/Dimensional_analysis
passage: Let B equal $$ S^2 $$ and E equal $$ S^3. $$ Let p be the Hopf fibration, which has fiber $$ S^1. $$ From the long exact sequence $$ \cdots \to \pi_n(S^1) \to \pi_n(S^3) \to \pi_n(S^2) \to \pi_{n-1} (S^1) \to \cdots $$ and the fact that $$ \pi_n(S^1) = 0 $$ for $$ n \geq 2, $$ we find that $$ \pi_n(S^3) = \pi_n(S^2) $$ for $$ n \geq 3. $$ In particular, $$ \pi_3(S^2) = \pi_3(S^3) = \Z. $$ In the case of a cover space, when the fiber is discrete, we have that $$ \pi_n(E) $$ is isomorphic to $$ \pi_n(B) $$ for $$ n > 1, $$ that $$ \pi_n(E) $$ embeds injectively into $$ \pi_n(B) $$ for all positive $$ n, $$ and that the subgroup of $$ \pi_1(B) $$ that corresponds to the embedding of $$ \pi_1(E) $$ has cosets in bijection with the elements of the fiber. When the fibration is the mapping fibre, or dually, the cofibration is the mapping cone, then the resulting exact (or dually, coexact) sequence is given by the Puppe sequence.
https://en.wikipedia.org/wiki/Homotopy_group
passage: DropConnect is similar to dropout as it introduces dynamic sparsity within the model, but differs in that the sparsity is on the weights, rather than the output vectors of a layer. In other words, the fully connected layer with DropConnect becomes a sparsely connected layer in which the connections are chosen at random during the training stage. #### Stochastic pooling A major drawback to dropout is that it does not have the same benefits for convolutional layers, where the neurons are not fully connected. Even before dropout, in 2013 a technique called stochastic pooling was introduced, in which the conventional deterministic pooling operations were replaced with a stochastic procedure, where the activation within each pooling region is picked randomly according to a multinomial distribution, given by the activities within the pooling region. This approach is free of hyperparameters and can be combined with other regularization approaches, such as dropout and data augmentation. An alternate view of stochastic pooling is that it is equivalent to standard max pooling but with many copies of an input image, each having small local deformations. This is similar to explicit elastic deformations of the input images, which delivers excellent performance on the MNIST data set. Using stochastic pooling in a multilayer model gives an exponential number of deformations since the selections in higher layers are independent of those below. #### Artificial data Because the degree of model overfitting is determined by both its power and the amount of training it receives, providing a convolutional network with more training examples can reduce overfitting.
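A minimal sketch of the stochastic pooling rule for a single pooling region (assuming non-negative activations, as after a ReLU; not a particular framework's implementation):

```python
import numpy as np

def stochastic_pool(region, rng):
    """Pick one activation from a pooling region with probability
    proportional to its (non-negative) activity."""
    a = region.ravel()
    p = a / a.sum() if a.sum() > 0 else np.full(a.size, 1.0 / a.size)
    return rng.choice(a, p=p)

rng = np.random.default_rng(0)
region = np.array([[0.0, 1.0], [2.0, 1.0]])   # one 2x2 pooling region
print(stochastic_pool(region, rng))
```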
https://en.wikipedia.org/wiki/Convolutional_neural_network
passage: While reachability seems to be a good tool to find erroneous states, for practical problems the constructed graph usually has far too many states to calculate. To alleviate this problem, linear temporal logic is usually used in conjunction with the tableau method to prove that such states cannot be reached. Linear temporal logic uses the semi-decision technique to find if indeed a state can be reached, by finding a set of necessary conditions for the state to be reached then proving that those conditions cannot be satisfied. ### Liveness Petri nets can be described as having different degrees of liveness $$ L_1 - L_4 $$ . A Petri net $$ (N, M_0) $$ is called $$ L_k $$ -live if and only if all of its transitions are $$ L_k $$ -live, where a transition is - dead, if it can never fire, i.e. it is not in any firing sequence in $$ L(N,M_0) $$ - $$ L_1 $$ -live (potentially fireable), if and only if it may fire, i.e. it is in some firing sequence in $$ L(N,M_0) $$ - $$ L_2 $$ -live if it can fire arbitrarily often, i.e. if for every positive integer , it occurs at least times in some firing sequence in $$ L(N,M_0) $$ -
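A toy sketch of the firing rule and a depth-bounded probe for potential fireability; note this bounded search is only an approximation of "occurs in some firing sequence", and the net encoding (unit arc weights, dict markings) is a simplification:

```python
def enabled(marking, transition):
    inputs, _ = transition
    return all(marking[p] >= 1 for p in inputs)

def fire(marking, transition):
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] += 1
    return m

def potentially_fireable(m0, transitions, target, depth=6):
    """Depth-bounded check that `target` becomes enabled in some firing sequence."""
    frontier = [m0]
    for _ in range(depth):
        nxt = []
        for m in frontier:
            if enabled(m, target):
                return True
            nxt += [fire(m, t) for t in transitions if enabled(m, t)]
        frontier = nxt
    return False

# places 'a', 'b'; t1 moves a token from a to b, t2 needs a token on b
m0 = {'a': 1, 'b': 0}
t1 = (['a'], ['b'])
t2 = (['b'], ['a'])
print(potentially_fireable(m0, [t1, t2], t2))   # True: t2 is L1-live from m0
```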
https://en.wikipedia.org/wiki/Petri_net
passage: ### Formal proof For the formal proof, algorithms are presumed to define partial functions over strings and are themselves represented by strings. The partial function computed by the algorithm represented by a string a is denoted Fa. This proof proceeds by reductio ad absurdum: we assume that there is a non-trivial property that is decided by an algorithm, and then show that it follows that we can decide the halting problem, which is not possible, and therefore a contradiction. Let us now assume that P(a) is an algorithm that decides some non-trivial property of Fa. Without loss of generality we may assume that P(no-halt) = "no", with no-halt being the representation of an algorithm that never halts. If this is not true, then this holds for the algorithm that computes the negation of the property P. Now, since P decides a non-trivial property, it follows that there is a string b that represents an algorithm Fb and P(b) = "yes". We can then define an algorithm H(a, i) as follows: 1. construct a string t that represents an algorithm T(j) such that T first simulates the computation of Fa(i), then T simulates the computation of Fb(j) and returns its result. 2. return P(t). We can now show that H decides the halting problem: - Assume that the algorithm represented by a halts on input i.
https://en.wikipedia.org/wiki/Rice%27s_theorem
passage: ### Y-load to Δ-load transformation equations Let $$ R_\text{T} = R_\text{a} + R_\text{b} + R_\text{c} $$ . We can write the Δ to Y equations as $$ R_1 = \frac{R_\text{b}R_\text{c}}{R_\text{T}} $$   (1) $$ R_2 = \frac{R_\text{a}R_\text{c}}{R_\text{T}} $$   (2) $$ R_3 = \frac{R_\text{a}R_\text{b}}{R_\text{T}}. $$   (3) Multiplying the pairs of equations yields $$ R_1 R_2 = \frac{R_\text{a}R_\text{b}R_\text{c}^2 }{R_\text{T}^2} $$   (4) $$ R_1 R_3 = \frac{R_\text{a}R_\text{b}^2 R_\text{c}}{R_\text{T}^2} $$   (5) $$ R_2 R_3 = \frac{R_\text{a}^2 R_\text{b}R_\text{c}}{R_\text{T}^2} $$   (6) and the sum of these equations is $$ R_1 R_2 + R_1 R_3 + R_2 R_3 = \frac{R_\text{a}R_\text{b}R_\text{c}\left(R_\text{a} + R_\text{b} + R_\text{c}\right)}{R_\text{T}^2} $$   (7) Factor $$
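Both directions of the transformation are easy to express in code. A small sketch using equations (1)-(3) and the Y-to-Δ result they imply (from (7), R1R2 + R1R3 + R2R3 = RaRbRc/RT, so Ra = (R1R2 + R1R3 + R2R3)/R1, and similarly for Rb, Rc):

```python
def delta_to_y(Ra, Rb, Rc):
    """Delta-load to Y-load, equations (1)-(3)."""
    Rt = Ra + Rb + Rc
    return Rb * Rc / Rt, Ra * Rc / Rt, Ra * Rb / Rt   # R1, R2, R3

def y_to_delta(R1, R2, R3):
    """Y-load to Delta-load: Ra = (R1R2 + R1R3 + R2R3) / R1, etc."""
    S = R1 * R2 + R1 * R3 + R2 * R3
    return S / R1, S / R2, S / R3                     # Ra, Rb, Rc

Ra, Rb, Rc = 3.0, 6.0, 9.0
print(y_to_delta(*delta_to_y(Ra, Rb, Rc)))            # round-trips to (3.0, 6.0, 9.0)
```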
https://en.wikipedia.org/wiki/Y-%CE%94_transform
passage: The utility of Feynman diagrams is that other types of physical phenomena that are part of the general picture of fundamental interactions but are conceptually separate from forces can also be described using the same rules. For example, a Feynman diagram can describe in succinct detail how a neutron decays into an electron, proton, and antineutrino, an interaction mediated by the same gauge boson that is responsible for the weak nuclear force. ## Fundamental interactions All of the known forces of the universe are classified into four fundamental interactions. The strong and the weak forces act only at very short distances, and are responsible for the interactions between subatomic particles, including nucleons and compound nuclei. The electromagnetic force acts between electric charges, and the gravitational force acts between masses. All other forces in nature derive from these four fundamental interactions operating within quantum mechanics, including the constraints introduced by the Schrödinger equation and the Pauli exclusion principle. For example, friction is a manifestation of the electromagnetic force acting between atoms of two surfaces. The forces in springs, modeled by Hooke's law, are also the result of electromagnetic forces. Centrifugal forces are acceleration forces that arise simply from the acceleration of rotating frames of reference. The fundamental theories for forces developed from the unification of different ideas. For example, Newton's universal theory of gravitation showed that the force responsible for objects falling near the surface of the Earth is also the force responsible for the falling of celestial bodies about the Earth (the Moon) and around the Sun (the planets).
https://en.wikipedia.org/wiki/Force
passage: For $$ N $$ interacting particles, i.e. particles which interact mutually and constitute a many-body situation, the potential energy function $$ V $$ is not simply a sum of the separate potentials (and certainly not a product, as this is dimensionally incorrect). The potential energy function can only be written as above: a function of all the spatial positions of each particle. For non-interacting particles, i.e. particles which do not interact mutually and move independently, the potential of the system is the sum of the separate potential energy for each particle, that is $$ V = \sum_{i=1}^N V(\mathbf{r}_i,t) = V(\mathbf{r}_1,t) + V(\mathbf{r}_2,t) + \cdots + V(\mathbf{r}_N,t) $$ The general form of the Hamiltonian in this case is: $$ \begin{align} \hat{H} & = -\frac{\hbar^2}{2}\sum_{i=1}^N \frac{1}{m_i}\nabla_i^2 + \sum_{i=1}^N V_i \\[6pt] & = \sum_{i=1}^N \left(-\frac{\hbar^2}{2m_i}\nabla_i^2 + V_i \right) \\[6pt] & = \sum_{i=1}^N \hat{H}_i \end{align} $$ where the sum is taken over all particles and their corresponding potentials; the result is that the Hamiltonian of the system is the sum of the separate Hamiltonians for each particle.
https://en.wikipedia.org/wiki/Hamiltonian_%28quantum_mechanics%29
passage: In the Riemannian case, $$ t=1 $$ . Since Hodge star takes an orthonormal basis to an orthonormal basis, it is an isometry on the exterior algebra $$ \bigwedge V $$ . ## Geometric explanation The Hodge star is motivated by the correspondence between a subspace of and its orthogonal subspace (with respect to the scalar product), where each space is endowed with an orientation and a numerical scaling factor. Specifically, a non-zero decomposable -vector $$ w_1\wedge\cdots\wedge w_k\in \textstyle\bigwedge^{\!k} V $$ corresponds by the Plücker embedding to the subspace $$ W $$ with oriented basis $$ w_1,\ldots,w_k $$ , endowed with a scaling factor equal to the -dimensional volume of the parallelepiped spanned by this basis (equal to the Gramian, the determinant of the matrix of scalar products $$ \langle w_i, w_j \rangle $$ ).
https://en.wikipedia.org/wiki/Hodge_star_operator
passage: ## Astronomy Earth-based receivers can detect radio signals emanating from distant stars or regions of ionized gas. Receivers in radio telescopes can detect the general direction of such naturally-occurring radio sources, sometimes correlating their location with objects visible with optical telescopes. Accurate measurement of the arrival time of radio impulses by two radio telescopes at different places on Earth, or the same telescope at different times in Earth's orbit around the Sun, may also allow estimation of the distance to a radio object. ### Sport Events hosted by groups and organizations that involve the use of radio direction finding skills to locate transmitters at unknown locations have been popular since the end of World War II. Many of these events were first promoted in order to practice the use of radio direction finding techniques for disaster response and civil defense purposes, or to practice locating the source of radio frequency interference. The most popular form of the sport, worldwide, is known as Amateur Radio Direction Finding or by its international abbreviation ARDF. Another form of the activity, known as "transmitter hunting", "mobile T-hunting" or "fox hunting" takes place in a larger geographic area, such as the metropolitan area of a large city, and most participants travel in motor vehicles while attempting to locate one or more radio transmitters with radio direction-finding techniques. ## Direction finding at microwave frequencies DF techniques for microwave frequencies were developed in the 1940s, in response to the growing numbers of transmitters operating at these higher frequencies. This required the design of new antennas and receivers for the DF systems.
https://en.wikipedia.org/wiki/Direction_finding
passage: Over time, other iterations of this map type arose; most notable are the sinusoidal projection and the Bonne projection. The Werner projection places its standard parallel at the North Pole; a sinusoidal projection places its standard parallel at the equator; and the Bonne projection is intermediate between the two. In 1569, mapmaker Gerardus Mercator first published a map based on his Mercator projection, which uses equally-spaced parallel vertical lines of longitude and parallel latitude lines spaced farther apart as they get farther away from the equator. By this construction, courses of constant bearing are conveniently represented as straight lines for navigation. The same property limits its value as a general-purpose world map because regions are shown as increasingly larger than they actually are the further from the equator they are. Mercator is also credited as the first to use the word "atlas" to describe a collection of maps. In the later years of his life, Mercator resolved to create his Atlas, a book filled with many maps of different regions of the world, as well as a chronological history of the world from the Earth's creation by God until 1568. He was unable to complete it to his satisfaction before he died. Still, some additions were made to the Atlas after his death, and new editions were published after his death. In 1570, the Brabantian cartographer Abraham Ortelius, strongly encouraged by Gillis Hooftman, created the first true modern atlas, Theatrum Orbis Terrarum.
https://en.wikipedia.org/wiki/Cartography
passage: 1. Iterate the following until $$ |\mathcal{C}| = n $$ : 1. Find the current cluster with 2 or more objects that has the largest diameter: $$ C_* = \arg\max_{C\in \mathcal{C}} \max_{i_1,i_2\in C} \delta(i_1,i_2) $$ 1. Find the object in this cluster with the most dissimilarity to the rest of the cluster: $$ i^* = \arg\max_{i\in C_*} \frac{1}{|C_*|-1}\sum_{j\in C_*\setminus\{i\}} \delta(i,j) $$ 1. Pop $$ i^* $$ from its old cluster $$ C_* $$ and put it into a new splinter group $$ C_\textrm{new} = \{i^*\} $$ . 1. As long as $$ C_* $$ isn't empty, keep migrating objects from $$ C_* $$ to add them to $$ C_\textrm{new} $$ .
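A NumPy sketch of one splinter step; the migration criterion used here (move an object when it is, on average, closer to the splinter group than to the remainder) is the usual divisive-clustering rule, which the passage only summarizes:

```python
import numpy as np

def diana_split(delta, cluster):
    """One splinter step: delta is a full dissimilarity matrix,
    cluster a list of object indices. Returns (old_cluster, splinter_group)."""
    # object with the largest average dissimilarity to the rest of the cluster
    avg = [np.mean([delta[i][j] for j in cluster if j != i]) for i in cluster]
    i_star = cluster[int(np.argmax(avg))]
    old, new = [i for i in cluster if i != i_star], [i_star]
    # keep migrating objects that are on average closer to the splinter group
    moved = True
    while moved and len(old) > 1:
        moved = False
        for i in list(old):
            if len(old) <= 1:
                break
            d_old = np.mean([delta[i][j] for j in old if j != i])
            d_new = np.mean([delta[i][j] for j in new])
            if d_new < d_old:
                old.remove(i)
                new.append(i)
                moved = True
    return old, new
```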
https://en.wikipedia.org/wiki/Hierarchical_clustering
passage: Frequencies above the corner frequency are attenuated: the higher the frequency, the higher the attenuation. ### Phase plot The phase Bode plot is obtained by plotting the phase angle of the transfer function given by $$ \arg H_{\text{lp}}(\mathrm{j} \omega) = -\tan^{-1}\frac{\omega}{\omega_\text{c}} $$ versus $$ \omega $$ , where $$ \omega $$ and $$ \omega_\text{c} $$ are the input and cutoff angular frequencies respectively. For input frequencies much lower than corner, the ratio $$ \omega/\omega_\text{c} $$ is small, and therefore the phase angle is close to zero. As the ratio increases, the absolute value of the phase increases and becomes −45° when $$ \omega = \omega_\text{c} $$ . As the ratio increases for input frequencies much greater than the corner frequency, the phase angle asymptotically approaches −90°. The frequency scale for the phase plot is logarithmic. ### Normalized plot The horizontal frequency axis, in both the magnitude and phase plots, can be replaced by the normalized (nondimensional) frequency ratio $$ \omega/\omega_\text{c} $$ .
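A short NumPy check of these limiting phase values for a first-order low-pass (the corner frequency is chosen arbitrarily):

```python
import numpy as np

wc = 100.0                                    # corner frequency, rad/s (assumed)
w = np.logspace(0, 4, 200)                    # 1 to 10^4 rad/s, log-spaced
phase_deg = -np.degrees(np.arctan(w / wc))

print(phase_deg[0])                           # ~0 degrees well below the corner
print(phase_deg[np.argmin(np.abs(w - wc))])   # ~ -45 degrees at w = wc
print(phase_deg[-1])                          # approaching -90 degrees far above it
```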
https://en.wikipedia.org/wiki/Bode_plot
passage: At the terminal the action potential provokes the release of neurotransmitters across the synapse, which bind to receptors on the post-synaptic cell such as another neuron, myocyte or secretory cell. Myelin is made by specialized non-neuronal glial cells that provide insulation, as well as nutritional and homeostatic support, along the length of the axon. In the central nervous system, myelin is formed by glial cells called oligodendrocytes, each of which sends out cellular extensions known as foot processes to myelinate multiple nearby axons. In the peripheral nervous system, myelin is formed by Schwann cells, which myelinate only a section of an axon. In the CNS, axons carry electrical signals from one nerve cell body to another. The "insulating" function for myelin is essential for efficient motor function (i.e. movement such as walking), sensory function (e.g. sight, hearing, smell, the feeling of touch or pain) and cognition (e.g. acquiring and recalling knowledge), as demonstrated by the consequence of disorders that affect myelination, such as the genetically determined leukodystrophies; the acquired inflammatory demyelinating disease, multiple sclerosis; and the inflammatory demyelinating peripheral neuropathies. Due to its high prevalence, multiple sclerosis, which specifically affects the central nervous system, is the best known demyelinating disorder.
https://en.wikipedia.org/wiki/Myelin
passage:
| | | Orthoclase, Hornblende, Augite, Biotite | Little or no Quartz: Plagioclase, Hornblende, Augite, Biotite | No Quartz: Plagioclase, Augite, Olivine | No Felspar: Augite, Hornblende, Olivine |
| --- | --- | --- | --- | --- | --- |
| Plutonic or Abyssal type | Granite | Syenite | Diorite | Gabbro | Peridotite |
| Intrusive or Hypabyssal type | Quartz-porphyry | Orthoclase-porphyry | Porphyrite | Dolerite | Picrite |
| Lavas or Effusive type | Rhyolite, Obsidian | Trachyte | Andesite | Basalt | Komatiite |

Rocks that contain leucite or nepheline, either partly or wholly replacing felspar, are not included in this table. They are essentially of intermediate or of mafic character. We might in consequence regard them as varieties of syenite, diorite, gabbro, etc., in which feldspathoid minerals occur, and indeed there are many transitions between syenites of ordinary type and nepheline — or leucite — syenite, and between gabbro or dolerite and theralite or essexite. But, as many minerals develop in these "alkali" rocks that are uncommon elsewhere, it is convenient in a purely formal classification like that outlined here to treat the whole assemblage as a distinct series.
https://en.wikipedia.org/wiki/Geochemistry
passage: It is related to Nvidia's Cg, but is only supported by DirectX and Xbox. HLSL programs are compiled into a bytecode equivalent of the DirectX shader assembly language. HLSL was introduced as an optional alternative to the shader assembly language in Direct3D 9, but became a requirement in Direct3D 10 and higher, where the shader assembly language is deprecated. ### Adobe Pixel Bender and Adobe Graphics Assembly Language Adobe Systems added Pixel Bender as part of the Adobe Flash 10 API. Pixel Bender could only process pixel data, but not 3D vertex data. Flash 11 introduced an entirely new 3D API called Stage3D, which uses its own shading language called Adobe Graphics Assembly Language (AGAL), which offers full 3D acceleration support. GPU acceleration for Pixel Bender was removed in Flash 11.8. AGAL is a low-level but platform-independent shading language, which can be compiled, for example, to GLSL. ### PlayStation Shader Language Sony announced PlayStation Shader Language (PSSL) as a shading language similar to Cg/HLSL, but specific to the PlayStation 4. PSSL is said to be largely compatible with the HLSL shader language from DirectX 12, but with additional features for the PS4 and PS5 platforms. ### Metal Shading Language Apple has created a low-level graphics API, called Metal, which runs on most Macs made since 2012, iPhones since the 5S, and iPads since the iPad Air. Metal has its own shading language called Metal Shading Language (MSL), which is based on C++14 and implemented using clang and LLVM.
https://en.wikipedia.org/wiki/Shading_language
passage: The time it takes to unzip or zip is proportional to one plus the number of nodes on the path(s) being unzipped/zipped. The expected depth of any node in a zip tree is at most 1.5 log n, making the expected running time of the insert, delete, and search operations all O(log n). ### Insertion When inserting a node x into a zip tree, first generate a new rank from a geometric distribution with a probability of success of 1/2. Let x.key be the key of the node x, and let x.rank be the rank of the node x. Then, follow the search path for x in the tree until finding a node u such that u.rank < x.rank, or u.rank = x.rank and u.key > x.key. Continue along the search path for x, "unzipping" every node v passed by placing them either in path P if v.key < x.key, or path Q if v.key > x.key. Keys must be unique so if at any point v.key = x.key, the search stops and no new node is inserted. Once the search for x is complete, x is inserted in place of the node u. The top node of the path P becomes the left child of x and the top node of Q becomes the right child. The parent and child pointers between u and u.parent will be updated accordingly with x, and if u was previously the root node, x becomes the new root. ### Deletion When deleting a node x, first search the tree to find it.
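A compact Python sketch of this insertion (it assumes the key is not already present; the node fields and helper names are illustrative, not a reference implementation):

```python
import random

class Node:
    __slots__ = ("key", "rank", "left", "right")
    def __init__(self, key, rank):
        self.key, self.rank = key, rank
        self.left = self.right = None

def random_rank():
    """Geometric(1/2): number of successes before the first failure."""
    rank = 0
    while random.random() < 0.5:
        rank += 1
    return rank

def unzip(cur, key):
    """Split the search path below cur into P (keys < key) and Q (keys > key)."""
    p = q = p_tail = q_tail = None
    while cur is not None:
        if cur.key < key:
            if p is None: p = cur
            else: p_tail.right = cur
            p_tail, cur = cur, cur.right
            p_tail.right = None
        else:
            if q is None: q = cur
            else: q_tail.left = cur
            q_tail, cur = cur, cur.left
            q_tail.left = None
    return p, q

def insert(root, key):
    x = Node(key, random_rank())
    # descend while the current node should stay above x in rank order
    cur, parent = root, None
    while cur is not None and (cur.rank > x.rank or
                               (cur.rank == x.rank and cur.key < x.key)):
        parent = cur
        cur = cur.left if key < cur.key else cur.right
    # cur is the node u that x replaces; unzip u and the path below it
    x.left, x.right = unzip(cur, key)
    if parent is None:
        return x                      # x becomes the new root
    if key < parent.key:
        parent.left = x
    else:
        parent.right = x
    return root
```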
https://en.wikipedia.org/wiki/Zip_tree
passage: To communicate, two peer entities at a given layer use a protocol specific to that layer which is implemented by using services of the layer below. For each layer, there are two types of standards: protocol standards defining how peer entities at a given layer communicate, and service standards defining how a given layer communicates with the layer above it. In the OSI model, the layers and their functionality are (from highest to lowest layer): - The Application layer may provide the following services to the application processes: identification of the intended communication partners, establishment of the necessary authority to communicate, determination of availability and authentication of the partners, agreement on privacy mechanisms for the communication, agreement on responsibility for error recovery and procedures for ensuring data integrity, synchronization between cooperating application processes, identification of any constraints on syntax (e.g. character sets and data structures), determination of cost and acceptable quality of service, selection of the dialogue discipline, including required logon and logoff procedures. - The presentation layer may provide the following services to the application layer: a request for the establishment of a session, data transfer, negotiation of the syntax to be used between the application layers, any necessary syntax transformations, formatting and special purpose transformations (e.g., data compression and data encryption). -
https://en.wikipedia.org/wiki/Communication_protocol
passage: In mathematics, a fractal dimension is a ratio providing a statistical index of complexity, comparing how the detail in a pattern changes with the scale at which it is measured. It is also a measure of the space-filling capacity of a pattern and tells how a fractal scales differently, in a fractal (non-integer) dimension. The main idea of "fractured" dimensions has a long history in mathematics, but the term itself was brought to the fore by Benoit Mandelbrot based on his 1967 paper on self-similarity in which he discussed fractional dimensions. In that paper, Mandelbrot cited previous work by Lewis Fry Richardson describing the counter-intuitive notion that a coastline's measured length changes with the length of the measuring stick used. In terms of that notion, the fractal dimension of a coastline quantifies how the number of scaled measuring sticks required to measure the coastline changes with the scale applied to the stick. There are several formal mathematical definitions of fractal dimension that build on this basic concept of change in detail with change in scale; see below. Ultimately, the term fractal dimension became the phrase with which Mandelbrot himself became most comfortable with respect to encapsulating the meaning of the word fractal, a term he created.
https://en.wikipedia.org/wiki/Fractal_dimension
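As a rough illustration of how such a dimension is estimated in practice, the box-counting approach counts how many grid boxes of side eps are touched by the pattern and fits the scaling exponent. The function below is a minimal sketch under that assumption, not a reference implementation; the test with points on a straight segment is an arbitrary example that should give an estimate close to 1.

```python
import numpy as np

def box_counting_dimension(points, epsilons):
    """Estimate the box-counting dimension of a 2-D point set.

    points   : array of shape (n, 2)
    epsilons : iterable of box sizes (the 'measuring sticks')
    """
    counts = []
    for eps in epsilons:
        # Count the distinct eps-sized grid boxes touched by the points.
        boxes = {tuple(idx) for idx in np.floor(points / eps).astype(int)}
        counts.append(len(boxes))
    # Slope of log N(eps) against log(1/eps) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)), np.log(counts), 1)
    return slope

# Points along a straight segment: the estimate should be close to 1.
pts = np.column_stack([np.linspace(0, 1, 10000), np.linspace(0, 1, 10000)])
print(box_counting_dimension(pts, epsilons=[0.1, 0.05, 0.02, 0.01]))
```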
passage: For example, if time is measured in seconds, then frequency is in hertz. The Fourier transform can also be written in terms of angular frequency, $$ \omega = 2\pi \xi, $$ whose units are radians per second. The substitution $$ \xi = \tfrac{\omega}{2 \pi} $$ into the earlier definition produces this convention, where function $$ \widehat f $$ is relabeled $$ \widehat {f_1}: $$ $$ \begin{align} \widehat {f_3}(\omega) &\triangleq \int_{-\infty}^{\infty} f(x)\cdot e^{-i\omega x}\, dx = \widehat{f_1}\left(\tfrac{\omega}{2\pi}\right),\\ f(x) &= \frac{1}{2\pi} \int_{-\infty}^{\infty} \widehat{f_3}(\omega)\cdot e^{i\omega x}\, d\omega. \end{align} $$ Unlike the original definition, the Fourier transform is no longer a unitary transformation, and there is less symmetry between the formulas for the transform and its inverse.
https://en.wikipedia.org/wiki/Fourier_transform
passage: Boiling points of some cryogenic gases: Oxygen 90.18 K (−182.97 °C); Methane 111.7 K (−161.45 °C); Krypton 119.93 K (−153.415 °C). ## Industrial applications Liquefied gases, such as liquid nitrogen and liquid helium, are used in many cryogenic applications. Liquid nitrogen is the most commonly used element in cryogenics and is legally purchasable around the world. Liquid helium is also commonly used and allows for the lowest attainable temperatures to be reached. These liquids may be stored in Dewar flasks, which are double-walled containers with a high vacuum between the walls to reduce heat transfer into the liquid. Typical laboratory Dewar flasks are spherical, made of glass and protected in a metal outer container. Dewar flasks for extremely cold liquids such as liquid helium have another double-walled container filled with liquid nitrogen. Dewar flasks are named after their inventor, James Dewar, the man who first liquefied hydrogen. Thermos bottles are smaller vacuum flasks fitted in a protective casing. Cryogenic barcode labels are used to mark Dewar flasks containing these liquids, and will not frost over down to −195 degrees Celsius. Cryogenic transfer pumps are the pumps used on LNG piers to transfer liquefied natural gas from LNG carriers to LNG storage tanks, as are cryogenic valves. ### Cryogenic processing The field of cryogenics advanced during World War II when scientists found that metals frozen to low temperatures showed more resistance to wear.
https://en.wikipedia.org/wiki/Cryogenics
passage: On February 28, 1944, this was endorsed by Robert Bacher, also from Cornell, and one of the most senior scientists at Los Alamos. This led to an offer being made in August 1944, which Feynman accepted. Oppenheimer had also hoped to recruit Feynman to the University of California, but the head of the physics department, Raymond T. Birge, was reluctant. He made Feynman an offer in May 1945, but Feynman turned it down. Cornell matched its salary offer of $3,900 per annum. Feynman became one of the first of the Los Alamos Laboratory's group leaders to depart, leaving for Ithaca, New York, in October 1945. Because Feynman was no longer working at the Los Alamos Laboratory, he was no longer exempt from the draft. At his induction physical, Army psychiatrists diagnosed Feynman as suffering from a mental illness and the Army gave him a 4-F exemption on mental grounds. His father died suddenly on October 8, 1946, and Feynman suffered from depression. On October 17, 1946, he wrote a letter to Arline, expressing his deep love and heartbreak. The letter was sealed and only opened after his death. "Please excuse my not mailing this," the letter concluded, "but I don't know your new address." Unable to focus on research problems, Feynman began tackling physics problems, not for utility, but for self-satisfaction.
https://en.wikipedia.org/wiki/Richard_Feynman
passage: This is equal to 1 if the proposal density is symmetric. Then the new state $$ x_{t+1} $$ is chosen according to the following rules. If $$ a \geq 1{:} $$ $$ x_{t+1} = x', $$ else: $$ x_{t+1} = \begin{cases} x' & \text{with probability } a, \\ x_t & \text{with probability } 1-a. \end{cases} $$ The Markov chain is started from an arbitrary initial value $$ x_0 $$ , and the algorithm is run for many iterations until this initial state is "forgotten". These samples, which are discarded, are known as burn-in. The remaining set of accepted values of $$ x $$ represent a sample from the distribution $$ P(x) $$ . The algorithm works best if the proposal density matches the shape of the target distribution $$ P(x) $$ , from which direct sampling is difficult, that is $$ g(x' \mid x_t) \approx P(x') $$ . If a Gaussian proposal density $$ g $$ is used, the variance parameter $$ \sigma^2 $$ has to be tuned during the burn-in period. This is usually done by calculating the acceptance rate, which is the fraction of proposed samples that is accepted in a window of the last $$ N $$ samples.
https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm
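A minimal random-walk Metropolis sampler following the acceptance rule above might look like this in Python. The symmetric Gaussian proposal makes the correction factor equal to 1; the burn-in length, the proposal width and the example target are arbitrary choices of the sketch.

```python
import math
import random

def metropolis_hastings(log_p, x0, n_samples, sigma=1.0, burn_in=1000):
    """Random-walk Metropolis sampler with a symmetric Gaussian proposal.

    log_p : log of the (unnormalised) target density P(x)
    x0    : arbitrary starting state
    sigma : proposal standard deviation, tuned e.g. via the acceptance rate
    """
    x = x0
    samples = []
    for i in range(burn_in + n_samples):
        x_prop = random.gauss(x, sigma)       # symmetric proposal g(x'|x)
        log_a = log_p(x_prop) - log_p(x)      # proposal-density ratio is 1 here
        if log_a >= 0 or random.random() < math.exp(log_a):
            x = x_prop                        # accept, otherwise keep x_t
        if i >= burn_in:                      # discard the burn-in samples
            samples.append(x)
    return samples

# Example: sample from a standard normal target (log density up to a constant).
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=5.0, n_samples=10000)
```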
passage: However, in MEMMs, estimating the parameters of the maximum-entropy distributions used for the transition probabilities can be done for each transition distribution in isolation. A drawback of MEMMs is that they potentially suffer from the "label bias problem," where states with low-entropy transition distributions "effectively ignore their observations." Conditional random fields were designed to overcome this weakness, which had already been recognised in the context of neural network-based Markov models in the early 1990s. Another source of label bias is that training is always done with respect to known previous tags, so the model struggles at test time when there is uncertainty in the previous tag.
https://en.wikipedia.org/wiki/Maximum-entropy_Markov_model
passage: SRA strictly implies this one-point second-order interpolation by a simple rational function. Note that even the third-order method is a variation of Newton's method: the Newton steps are multiplied by some factors. These factors are called the convergence factors of the variations, which are useful for analyzing the rate of convergence. See Gander (1978).
https://en.wikipedia.org/wiki/Simple_rational_approximation
passage: More generally, if the hill height function is differentiable, then the gradient of dotted with a unit vector gives the slope of the hill in the direction of the vector, the directional derivative of along the unit vector. ## Notation The gradient of a function $$ f $$ at point $$ a $$ is usually written as $$ \nabla f (a) $$ . It may also be denoted by any of the following: - $$ \vec{\nabla} f (a) $$ : to emphasize the vector nature of the result. - $$ \operatorname{grad} f $$ - $$ \partial_i f $$ and $$ f_{i} $$ : Written with Einstein notation, where repeated indices () are summed over. ## Definition The gradient (or gradient vector field) of a scalar function is denoted or where (nabla) denotes the vector differential operator, del. The notation is also commonly used to represent the gradient. The gradient of is defined as the unique vector field whose dot product with any vector at each point is the directional derivative of along . That is, $$ \big(\nabla f(x)\big)\cdot \mathbf{v} = D_{\mathbf v}f(x) $$ where the right-hand side is the directional derivative and there are many ways to represent it. Formally, the derivative is dual to the gradient; see relationship with derivative.
https://en.wikipedia.org/wiki/Gradient
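The defining identity, that the dot product of the gradient with a unit vector equals the directional derivative along that vector, can be checked numerically. The snippet below is a small illustration with an arbitrarily chosen scalar field and unit vector, comparing a central finite difference to the analytic gradient.

```python
import numpy as np

def f(p):
    x, y = p
    return x**2 * y + 3 * y                    # example scalar field

def grad_f(p):
    x, y = p
    return np.array([2 * x * y, x**2 + 3])     # analytic gradient of f

p = np.array([1.0, 2.0])
v = np.array([3.0, 4.0]) / 5.0                 # unit direction vector

# Directional derivative by a small central finite-difference step ...
h = 1e-6
numeric = (f(p + h * v) - f(p - h * v)) / (2 * h)

# ... agrees with the dot product of the gradient with v (both are 5.6 here).
print(numeric, grad_f(p) @ v)
```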
passage: When there are only a few addressing modes, the particular addressing mode required is usually encoded within the instruction code (e.g. IBM System/360 and successors, most RISC). But when there are many addressing modes, a specific field is often set aside in the instruction to specify the addressing mode. The DEC VAX allowed multiple memory operands for almost all instructions, and so reserved the first few bits of each operand specifier to indicate the addressing mode for that particular operand. Keeping the addressing mode specifier bits separate from the opcode operation bits produces an orthogonal instruction set. Even on a computer with many addressing modes, measurements of actual programs indicate that the simple addressing modes listed below account for some 90% or more of all addressing modes used. Since most such measurements are based on code generated from high-level languages by compilers, this reflects to some extent the limitations of the compilers being used. ## Important use case Some instruction set architectures, such as Intel x86, IBM/360 and its successors, have a load effective address instruction. This calculates the effective operand address and loads it into a register, without accessing the memory it refers to. This can be useful when passing the address of an array element to a subroutine. It may also be a clever way of doing more calculations than normal in one instruction; for example, using such an instruction with the addressing mode "base+index+offset" (detailed below) allows one to add two registers and a constant together in one instruction and store the result in a third register.
https://en.wikipedia.org/wiki/Addressing_mode
passage: Spamvertising
Spamvertising is advertising through the medium of spam.
Opt-in, confirmed opt-in, double opt-in, opt-out
Opt-in, confirmed opt-in, double opt-in, opt-out refers to whether the people on a mailing list are given the option to be put in, or taken out, of the list. Confirmation (and "double", in marketing speak) refers to an email address transmitted e.g. through a web form being confirmed to actually request joining a mailing list, instead of being added to the list without verification.
Final, Ultimate Solution for the Spam Problem (FUSSP)
An ironic reference to naïve developers who believe they have invented the perfect spam filter, which will stop all spam from reaching users' inboxes while deleting no legitimate email accidentally.
## History
https://en.wikipedia.org/wiki/Email_spam
passage: It is important in communication where it can be used to maximize the amount of information shared between sent and received signals. The mutual information of X relative to Y is given by: $$ I(X;Y) = \mathbb{E}_{X,Y} [SI(x,y)] = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\, p(y)} $$ where SI(x,y) (specific mutual information) is the pointwise mutual information. A basic property of the mutual information is that $$ I(X;Y) = H(X) - H(X|Y).\, $$ That is, knowing Y, we can save an average of I(X;Y) bits in encoding X compared to not knowing Y. Mutual information is symmetric: $$ I(X;Y) = I(Y;X) = H(X) + H(Y) - H(X,Y).\, $$ Mutual information can be expressed as the average ### Kullback–Leibler divergence (information gain) between the posterior probability distribution of X given the value of Y and the prior distribution on X: $$ I(X;Y) = \mathbb E_{p(y)} [D_{\mathrm{KL}}( p(X|Y=y) \| p(X) )]. $$ In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y.
https://en.wikipedia.org/wiki/Information_theory
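A direct way to compute I(X;Y) from a finite joint distribution is to apply the defining sum. The function below is a small sketch (base-2 logarithm, so the result is in bits); the two-by-two joint table is a toy example in which X and Y always agree, so the mutual information equals H(X) = 1 bit.

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in bits from a joint probability table p_xy[i, j] = p(x_i, y_j)."""
    p_x = p_xy.sum(axis=1, keepdims=True)    # marginal p(x)
    p_y = p_xy.sum(axis=0, keepdims=True)    # marginal p(y)
    mask = p_xy > 0                          # terms with p(x, y) = 0 contribute 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])))

# A noiseless binary channel: X and Y always agree.
joint = np.array([[0.5, 0.0],
                  [0.0, 0.5]])
print(mutual_information(joint))   # ≈ 1.0
```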
passage: It is possible to create additional linked lists of elements that use internal storage by using external storage, and having the cells of the additional linked lists store references to the nodes of the linked list containing the data. In general, if a set of data structures needs to be included in linked lists, external storage is the best approach. If a set of data structures need to be included in only one linked list, then internal storage is slightly better, unless a generic linked list package using external storage is available. Likewise, if different sets of data that can be stored in the same data structure are to be included in a single linked list, then internal storage would be fine. Another approach that can be used with some languages involves having different data structures, but all have the initial fields, including the next (and prev if double linked list) references in the same location. After defining separate structures for each type of data, a generic structure can be defined that contains the minimum amount of data shared by all the other structures and contained at the top (beginning) of the structures. Then generic routines can be created that use the minimal structure to perform linked list type operations, but separate routines can then handle the specific data. This approach is often used in message parsing routines, where several types of messages are received, but all start with the same set of fields, usually including a field for message type. The generic routines are used to add new messages to a queue when they are received, and remove them from the queue in order to process the message. The message type field is then used to call the correct routine to process the specific type of message.
https://en.wikipedia.org/wiki/Linked_list
passage: The special linear group $$ \operatorname{SL}(n,\R) $$ can be characterized as the group of volume and orientation-preserving linear transformations of $$ \R^n $$ . The group $$ \operatorname{SL}(n,\C) $$ is simply connected, while $$ \operatorname{SL}(n,\R) $$ is not. $$ \operatorname{SL}(n,\R) $$ has the same fundamental group as $$ \operatorname{GL}^+(n,\R) $$ , that is, $$ \Z $$ for $$ n=2 $$ and $$ \Z_2 $$ for $$ n>2 $$ . ## Other subgroups ### Diagonal subgroups The set of all invertible diagonal matrices forms a subgroup of $$ \operatorname{GL}(n,F) $$ isomorphic to $$ (F^\times)^n $$ . In fields like $$ \R $$ and $$ \C $$ , these correspond to rescaling the space; the so-called dilations and contractions. A scalar matrix is a diagonal matrix which is a constant times the identity matrix. The set of all nonzero scalar matrices forms a subgroup of $$ \operatorname{GL}(n,F) $$ isomorphic to $$ F^\times $$ . This group is the center of $$ \operatorname{GL}(n,F) $$ .
https://en.wikipedia.org/wiki/General_linear_group
passage: If J is induced by a complex structure, then it is induced by a unique complex structure. Given any linear map A on each tangent space of M; i.e., A is a tensor field of rank (1, 1), then the Nijenhuis tensor is a tensor field of rank (1,2) given by $$ N_A(X,Y) = -A^2[X,Y]+A([AX,Y]+[X,AY]) -[AX,AY]. \, $$ or, for the usual case of an almost complex structure A=J such that $$ J^2=-Id $$ , $$ N_J(X,Y) = [X,Y]+J([JX,Y]+[X,JY])-[JX,JY]. \, $$ The individual expressions on the right depend on the choice of the smooth vector fields X and Y, but the left side actually depends only on the pointwise values of X and Y, which is why NA is a tensor. This is also clear from the component formula $$ -(N_A)_{ij}^k=A_i^m\partial_m A^k_j -A_j^m\partial_mA^k_i-A^k_m(\partial_iA^m_j-\partial_jA^m_i). $$ In terms of the Frölicher–Nijenhuis bracket, which generalizes the Lie bracket of vector fields, the Nijenhuis tensor NA is just one-half of [A, A].
https://en.wikipedia.org/wiki/Almost_complex_manifold
passage: It is the translocation between chromosomes 14 and 18. This overactivity can result in the development of follicular lymphoma. ### Autophagy Macroautophagy, often referred to as autophagy, is a catabolic process that results in the autophagosomic-lysosomal degradation of bulk cytoplasmic contents, abnormal protein aggregates, and excess or damaged organelles. Autophagy is generally activated by conditions of nutrient deprivation but has also been associated with physiological as well as pathological processes such as development, differentiation, neurodegenerative diseases, stress, infection and cancer. #### Mechanism A critical regulator of autophagy induction is the kinase mTOR, which, when activated, suppresses autophagy and when not activated promotes it. Three related serine/threonine kinases, UNC-51-like kinase -1, -2, and -3 (ULK1, ULK2, ULK3), which play a similar role as the yeast Atg1, act downstream of the mTOR complex. ULK1 and ULK2 form a large complex with the mammalian homolog of an autophagy-related (Atg) gene product (mAtg13) and the scaffold protein FIP200. Class III PI3K complex, containing hVps34, Beclin-1, p150 and Atg14-like protein or ultraviolet irradiation resistance-associated gene (UVRAG), is required for the induction of autophagy.
https://en.wikipedia.org/wiki/Programmed_cell_death
passage: ## Incenter The incenter of a tangential quadrilateral lies on its Newton line (which connects the midpoints of the diagonals). The ratio of two opposite sides in a tangential quadrilateral can be expressed in terms of the distances between the incenter I and the vertices according to $$ \frac{AB}{CD}=\frac{IA\cdot IB}{IC\cdot ID},\quad\quad \frac{BC}{DA}=\frac{IB\cdot IC}{ID\cdot IA}. $$ The product of two adjacent sides in a tangential quadrilateral ABCD with incenter I satisfies $$ AB\cdot BC=IB^2+\frac{IA\cdot IB\cdot IC}{ID}. $$ If I is the incenter of a tangential quadrilateral ABCD, then $$ IA\cdot IC+IB\cdot ID=\sqrt{AB\cdot BC\cdot CD\cdot DA}. $$ The incenter I in a tangential quadrilateral ABCD coincides with the "vertex centroid" of the quadrilateral if and only if $$ IA\cdot IC=IB\cdot ID. $$ If Mp and Mq are the midpoints of the diagonals AC and BD respectively in a tangential quadrilateral ABCD with incenter I, then $$ \frac{IM_p}{IM_q}=\frac{IA\cdot IC}{IB\cdot ID}=\frac{e+g}{f+h} $$ where e, f, g and h are the tangent lengths at A, B, C and D respectively.
https://en.wikipedia.org/wiki/Tangential_quadrilateral
passage: $$ $$ \begin{align} c_q(n) &=\frac{\mu\left(\frac{q}{\gcd(q, n)}\right)}{\phi\left(\frac{q}{\gcd(q, n)}\right)}\phi(q)\\ &=\sum_{\delta\mid \gcd(q,n)}\mu\left(\frac{q}{\delta}\right)\delta. \end{align} $$         Note that   $$ \phi(q) = \sum_{\delta\mid q}\mu\left(\frac{q}{\delta}\right)\delta. $$ $$ c_q(1) = \mu(q). $$ $$ c_q(q) = \phi(q). $$ $$ \sum_{\delta\mid n}d^{3}(\delta) = \left(\sum_{\delta\mid n}d(\delta)\right)^2. $$       Compare this with $$ d(uv) = \sum_{\delta\mid \gcd(u,v)}\mu(\delta)d\left(\frac{u}{\delta}\right)d\left(\frac{v}{\delta}\right). $$ $$
https://en.wikipedia.org/wiki/Arithmetic_function
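The divisor-sum formula for the Ramanujan sum c_q(n) quoted above translates directly into code. The sketch below implements the Möbius function from scratch rather than relying on a library, and checks the special cases c_q(1) = μ(q) and c_q(q) = φ(q) for q = 12.

```python
from math import gcd

def mobius(n):
    """Möbius function mu(n) by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:          # a squared prime factor makes mu(n) = 0
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def ramanujan_sum(q, n):
    """c_q(n) = sum over divisors d of gcd(q, n) of mu(q/d) * d."""
    g = gcd(q, n)
    return sum(mobius(q // d) * d for d in range(1, g + 1) if g % d == 0)

# Special cases noted in the text: c_q(1) = mu(q) and c_q(q) = phi(q).
print(ramanujan_sum(12, 1))    # mu(12) = 0
print(ramanujan_sum(12, 12))   # phi(12) = 4
```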
passage: For this reason, code which needs to run particularly quickly and efficiently may require the use of a lower-level language, even if a higher-level language would make the coding easier. In many cases, critical portions of a program mostly in a high-level language can be hand-coded in assembly language, leading to a much faster, more efficient, or simply reliably functioning optimised program. However, with the growing complexity of modern microprocessor architectures, well-designed compilers for high-level languages frequently produce code comparable in efficiency to what most low-level programmers can produce by hand, and the higher abstraction may allow for more powerful techniques providing better overall results than their low-level counterparts in particular settings. High-level languages are designed independent of a specific computing system architecture. This facilitates executing a program written in such a language on any computing system with compatible support for the Interpreted or JIT program. High-level languages can be improved as their designers develop improvements. In other cases, new high-level languages evolve from one or more others with the goal of aggregating the most popular constructs with new or improved features. An example of this is Scala which maintains backward compatibility with Java, meaning that programs and libraries written in Java will continue to be usable even if a programming shop switches to Scala; this makes the transition easier and the lifespan of such high-level coding indefinite. In contrast, low-level programs rarely survive beyond the system architecture which they were written for without major revision. This is the engineering 'trade-off' for the 'Abstraction Penalty'.
https://en.wikipedia.org/wiki/High-level_programming_language
passage: Nasal vowels and nasal consonants are produced in the process of nasalisation. The hollow cavities of the paranasal sinuses act as sound chambers that modify and amplify speech and other vocal sounds. There are several plastic surgery procedures that can be done on the nose, known as rhinoplasties available to correct various structural defects or to change the shape of the nose. Defects may be congenital, or result from nasal disorders or from trauma. These procedures are a type of reconstructive surgery. Elective procedures to change a nose shape are a type of cosmetic surgery. ## Structure Several bones and cartilages make up the bony-cartilaginous framework of the nose, and the internal structure. The nose is also made up of types of soft tissue such as skin, epithelia, mucous membrane, muscles, nerves, and blood vessels. In the skin there are sebaceous glands, and in the mucous membrane there are nasal glands. The bones and cartilages provide strong protection for the internal structures of the nose. There are several muscles that are involved in movements of the nose. The arrangement of the cartilages allows flexibility through muscle control to enable airflow to be modified. ### Bones The bony structure of the nose is provided by the maxilla, frontal bone, and a number of smaller bones.
https://en.wikipedia.org/wiki/Human_nose
passage: In the mathematical field of descriptive set theory, a subset of a Polish space $$ X $$ is an analytic set if it is a continuous image of a Polish space. These sets were first defined by and his student . ## Definition There are several equivalent definitions of analytic set. The following conditions on a subspace A of a Polish space X are equivalent: - A is analytic. - A is empty or a continuous image of the Baire space ωω. - A is a Suslin space, in other words A is the image of a Polish space under a continuous mapping. - A is the continuous image of a Borel set in a Polish space. - A is a Suslin set, the image of the Suslin operation. - There is a Polish space $$ Y $$ and a Borel set $$ B\subseteq X\times Y $$ such that $$ A $$ is the projection of $$ B $$ onto $$ X $$ ; that is, $$ A=\{x\in X|(\exists y\in Y)\langle x,y \rangle\in B\}. $$ - A is the projection of a closed set in the cartesian product of X with the Baire space. - A is the projection of a Gδ set in the cartesian product of X with the Cantor space 2ω. An alternative characterization, in the specific, important, case that $$ X $$ is Baire space ωω, is that the analytic sets are precisely the projections of trees on $$ \omega\times\omega $$ .
https://en.wikipedia.org/wiki/Analytic_set
passage: Given that in general for a closed system with generalized coordinates $$ q_i $$ and canonical momenta $$ p_i $$ , $$ p_i = \frac{\partial S}{\partial q_i} = \frac{\partial S}{\partial x_i}, \quad E = -\frac{\partial S}{\partial t} = - c \cdot \frac{\partial S}{\partial x^{0}}, $$ it is immediate (recalling the identifications of the coordinates and momenta in the present metric convention) that $$ p_\mu =\frac{\partial S}{\partial x^\mu} = \left(-{E \over c}, \mathbf p\right) $$ is a covariant four-vector with the three-vector part being the canonical momentum. Consider initially a system of one degree of freedom $$ q $$ . In the derivation of the equations of motion from the action using Hamilton's principle, one finds (generally) in an intermediate stage for the variation of the action, $$ \delta S = \left. \left[ \frac{\partial L}{\partial \dot q}\delta q\right]\right|_{t_1}^{t_2} + \int_{t_1}^{t_2} \left( \frac{\partial L}{\partial q} - \frac{d}{dt} \frac{\partial L}{\partial \dot q}\right)\delta q dt. $$ The assumption is then that the varied paths satisfy $$ \delta q(t_1) = \delta q(t_2) = 0 $$ , from which Lagrange's equations follow at once.
https://en.wikipedia.org/wiki/Four-momentum
passage: Each new iteration of Newton's method will be denoted by `x1`. We will check during the computation whether the denominator (`yprime`) becomes too small (smaller than `epsilon`), which would be the case if , since otherwise a large amount of error could be introduced. ```python3 def f(x): return x**2 - 2 # f(x) = x^2 - 2 def f_prime(x): return 2*x # f'(x) = 2x def newtons_method(x0, f, f_prime, tolerance, epsilon, max_iterations): """Newton's method Args: x0: The initial guess f: The function whose root we are trying to find f_prime: The derivative of the function tolerance: Stop when iterations change by less than this epsilon: Do not divide by a number smaller than this max_iterations: The maximum number of iterations to compute """ for _ in range(max_iterations): y = f(x0) yprime = f_prime(x0) if abs(yprime) < epsilon: # Give up if the denominator is too small break x1 = x0 - y / yprime # Do Newton's computation if abs(x1 - x0) <= tolerance: # Stop when the result is within the desired tolerance return x1 # x1 is a solution within tolerance and maximum number of iterations x0 = x1 # Update x0 to start the process again return None # Newton's method did not converge ```
https://en.wikipedia.org/wiki/Newton%27s_method
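For completeness, a short usage sketch of the routine defined in the passage above; it assumes the definitions of f, f_prime and newtons_method given there, and the initial guess and tolerance values are arbitrary.

```python
# Approximate sqrt(2) with the routine above.
root = newtons_method(x0=1.5, f=f, f_prime=f_prime,
                      tolerance=1e-10, epsilon=1e-14, max_iterations=50)
print(root)  # ≈ 1.41421356...
```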
passage: Addition is defined coordinate-wise; that is, $$ (x_1,y_1) + (x_2,y_2) = (x_1+x_2, y_1 + y_2) $$ , which is the same as vector addition. Given two structures $$ A $$ and $$ B $$ , their direct sum is written as $$ A\oplus B $$ . Given an indexed family of structures $$ A_i $$ , indexed with $$ i \in I $$ , the direct sum may be written $$ A=\bigoplus_{i\in I}A_i $$ . Each Ai is called a direct summand of A. If the index set is finite, the direct sum is the same as the direct product. In the case of groups, if the group operation is written as $$ + $$ the phrase "direct sum" is used, while if the group operation is written multiplicatively the phrase "direct product" is used. When the index set is infinite, the direct sum is not the same as the direct product since the direct sum has the extra requirement that all but finitely many coordinates must be zero. ### Internal and external direct sums A distinction is made between internal and external direct sums though both are isomorphic. If the summands are defined first, and the direct sum is then defined in terms of the summands, there is an external direct sum.
https://en.wikipedia.org/wiki/Direct_sum
passage: This product occurs naturally in the study of Dirichlet series such as the Riemann zeta function. It describes the multiplication of two Dirichlet series in terms of their coefficients: $$ \left(\sum_{n\geq 1}\frac{f(n)}{n^s}\right) \left(\sum_{n\geq 1}\frac{g(n)}{n^s}\right) \ = \ \left(\sum_{n\geq 1}\frac{(f*g)(n)}{n^s}\right). $$ ## Properties The set of arithmetic functions forms a commutative ring, the Dirichlet ring, with addition given by pointwise addition and multiplication by Dirichlet convolution. The multiplicative identity is the unit function $$ \varepsilon $$ defined by $$ \varepsilon(n)=1 $$ if $$ n=1 $$ and $$ 0 $$ otherwise. The units (invertible elements) of this ring are the arithmetic functions $$ f $$ with $$ f(1) \neq 0 $$ . Specifically, Dirichlet convolution is associative, $$ (f * g) * h = f * (g * h), $$ distributive over addition $$ f * (g + h) = f * g + f * h $$ , commutative, $$ f * g = g * f $$ , and has an identity element, $$ f * \varepsilon = \varepsilon * f = f $$ .
https://en.wikipedia.org/wiki/Dirichlet_convolution
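The definition of the Dirichlet convolution and the role of the unit function ε as identity can be illustrated in a few lines of Python. The divisor-sum function σ used as the example arithmetic function is an arbitrary choice of this sketch.

```python
def dirichlet_convolution(f, g, n):
    """(f * g)(n) = sum over divisors d of n of f(d) * g(n // d)."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

# The unit function epsilon is the identity of the ring of arithmetic functions.
epsilon = lambda n: 1 if n == 1 else 0
sigma = lambda n: sum(d for d in range(1, n + 1) if n % d == 0)  # divisor sum

print(dirichlet_convolution(sigma, epsilon, 12), sigma(12))  # both print 28
```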
passage: Lie algebroids were introduced in 1967 by Jean Pradines. ## Definition and basic concepts A Lie algebroid is a triple $$ (A, [\cdot,\cdot], \rho) $$ consisting of - a vector bundle $$ A $$ over a manifold $$ M $$ - a Lie bracket $$ [\cdot,\cdot] $$ on its space of sections $$ \Gamma (A) $$ - a morphism of vector bundles $$ \rho: A\rightarrow TM $$ , called the anchor, where $$ TM $$ is the tangent bundle of $$ M $$ such that the anchor and the bracket satisfy the following Leibniz rule: $$ [X,fY]=\rho(X)f\cdot Y + f[X,Y] $$ where $$ X,Y \in \Gamma(A), f\in C^\infty(M) $$ . Here $$ \rho(X)f $$ is the image of $$ f $$ via the derivation $$ \rho(X) $$ , i.e. the Lie derivative of $$ f $$ along the vector field $$ \rho(X) $$ . The notation $$ \rho(X)f \cdot Y $$ denotes the (point-wise) product between the function $$ \rho(X)f $$ and the vector field $$ Y $$ .
https://en.wikipedia.org/wiki/Lie_algebroid
passage: The quality of the training data is essential for the evolution of good solutions. A good training set should be representative of the problem at hand and also well-balanced, otherwise the algorithm might get stuck at some local optimum. In addition, it is also important to avoid using unnecessarily large datasets for training as this will slow things down unnecessarily. A good rule of thumb is to choose enough records for training to enable a good generalization in the validation data and leave the remaining records for validation and testing. Fitness functions Broadly speaking, there are essentially three different kinds of problems based on the kind of prediction being made: 1. Problems involving numeric (continuous) predictions; 1. Problems involving categorical or nominal predictions, both binomial and multinomial; 1. Problems involving binary or Boolean predictions. The first type of problem goes by the name of regression; the second is known as classification, with logistic regression as a special case where, besides the crisp classifications like "Yes" or "No", a probability is also attached to each outcome; and the last one is related to Boolean algebra and logic synthesis. ##### Fitness functions for regression In regression, the response or dependent variable is numeric (usually continuous) and therefore the output of a regression model is also continuous. So it's quite straightforward to evaluate the fitness of the evolving models by comparing the output of the model to the value of the response in the training data.
https://en.wikipedia.org/wiki/Gene_expression_programming
passage: It is provably impossible to create an algorithm that can losslessly compress any data. While there have been many claims through the years of companies achieving "perfect compression" where an arbitrary number N of random bits can always be compressed to N − 1 bits, these kinds of claims can be safely discarded without even looking at any further details regarding the purported compression scheme. Such an algorithm contradicts fundamental laws of mathematics because, if it existed, it could be applied repeatedly to losslessly reduce any file to length 1. On the other hand, it has also been proven that there is no algorithm to determine whether a file is incompressible in the sense of Kolmogorov complexity. Hence it is possible that any particular file, even if it appears random, may be significantly compressed, even including the size of the decompressor. An example is the digits of the mathematical constant pi, which appear random but can be generated by a very small program. However, even though it cannot be determined whether a particular file is incompressible, a simple theorem about incompressible strings shows that over 99% of files of any given length cannot be compressed by more than one byte (including the size of the decompressor). ### Mathematical background Abstractly, a compression algorithm can be viewed as a function on sequences (normally of octets). Compression is successful if the resulting sequence is shorter than the original sequence (and the instructions for the decompression map).
https://en.wikipedia.org/wiki/Lossless_compression
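The counting argument above can also be seen empirically: compressing uniformly random data with a general-purpose codec essentially never shrinks it. The snippet below uses zlib purely as an arbitrary example codec; the block size and trial count are likewise arbitrary.

```python
import os
import zlib

# Incompressible inputs are the norm: deflating random bytes almost always
# produces output at least as long as the input, consistent with the
# pigeonhole-style counting argument above.
expanded = 0
for _ in range(1000):
    block = os.urandom(256)
    if len(zlib.compress(block)) >= len(block):
        expanded += 1
print(f"{expanded}/1000 random 256-byte blocks did not shrink")
```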
passage: In the more general definition, the Hilbert space fibers Hx are allowed to vary from point to point without having a local triviality requirement (local in a measure-theoretic sense). One of the main theorems of the von Neumann theory is to show that in fact the more general definition is equivalent to the simpler one given here. Note that the direct integral of a measurable family of Hilbert spaces depends only on the measure class of the measure μ; more precisely: Theorem. Suppose μ, ν are σ-finite countably additive measures on X that have the same sets of measure 0. Then the mapping $$ s \mapsto \left(\frac{\mathrm{d} \mu}{\mathrm{d} \nu}\right)^{1/2} s $$ is a unitary operator $$ \int^\oplus_X H_x \, \mathrm{d} \mu(x) \rightarrow \int^\oplus_X H_x \, \mathrm{d} \nu(x). $$ ### Example The simplest example occurs when X is a countable set and μ is a discrete measure. Thus, when X = N and μ is counting measure on N, then any sequence {Hk} of separable Hilbert spaces can be considered as a measurable family.
https://en.wikipedia.org/wiki/Direct_integral
passage: Visual computing is a generic term for all computer science disciplines dealing with images and 3D models, such as computer graphics, image processing, visualization, computer vision, virtual and augmented reality, video processing, and computational visualistics. Visual computing also includes aspects of pattern recognition, human computer interaction, machine learning and digital libraries. The core challenges are the acquisition, processing, analysis and rendering of visual information (mainly images and video). Application areas include industrial quality control, medical image processing and visualization, surveying, robotics, multimedia systems, virtual heritage, special effects in movies and television, and Ludology. This includes Digital Arts and Digital Media Studies. ## History and overview Visual computing is a fairly new term, which got its current meaning around 2005, when the International Symposium on Visual Computing first convened. Areas of computer technology concerning images, such as image formats, filtering methods, color models, and image metrics, have in common many mathematical methods and algorithms. When computer scientists working in computer science disciplines that involve images, such as computer graphics, image processing, and computer vision, noticed that their methods and applications increasingly overlapped, they began using the term "visual computing" to describe these fields collectively. And also the programming methods on graphics hardware, the manipulation tricks to handle huge data, textbooks and conferences, the scientific communities of these disciplines and working groups at companies intermixed more and more. Furthermore, applications increasingly needed techniques from more than one of these fields concurrently.
https://en.wikipedia.org/wiki/Visual_computing
passage: The rough set The tuple $$ \langle{\underline P}X,{\overline P}X\rangle $$ composed of the lower and upper approximation is called a rough set; thus, a rough set is composed of two crisp sets, one representing a lower boundary of the target set $$ X $$ , and the other representing an upper boundary of the target set $$ X $$ . The accuracy of the rough-set representation of the set $$ X $$ can be given by the following: $$ \alpha_{P}(X) = \frac{\left | {\underline P}X \right |} {\left | {\overline P}X \right |} $$ That is, the accuracy of the rough set representation of $$ X $$ , $$ \alpha_{P}(X) $$ , $$ 0 \leq \alpha_{P}(X) \leq 1 $$ , is the ratio of the number of objects which can positively be placed in $$ X $$ to the number of objects that can possibly be placed in $$ X $$ – this provides a measure of how closely the rough set is approximating the target set. Clearly, when the upper and lower approximations are equal (i.e., boundary region empty), then $$ \alpha_{P}(X) = 1 $$ , and the approximation is perfect; at the other extreme, whenever the lower approximation is empty, the accuracy is zero (regardless of the size of the upper approximation).
https://en.wikipedia.org/wiki/Rough_set
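The lower and upper approximations, and the accuracy measure defined above, can be computed directly from the partition induced by the indiscernibility relation. The function below is a minimal sketch; the three-block partition and target set are invented for the example.

```python
def rough_set(equivalence_classes, target):
    """Lower/upper approximations and accuracy of `target` w.r.t. a partition."""
    target = set(target)
    lower, upper = set(), set()
    for block in equivalence_classes:      # the classes [x]_P of the relation P
        block = set(block)
        if block <= target:                # wholly contained: certainly in X
            lower |= block
        if block & target:                 # overlaps X: possibly in X
            upper |= block
    accuracy = len(lower) / len(upper) if upper else 1.0
    return lower, upper, accuracy

classes = [{1, 2}, {3, 4}, {5}]
print(rough_set(classes, target={1, 2, 3}))
# lower = {1, 2}, upper = {1, 2, 3, 4}, accuracy = 0.5
```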
passage: 25 15 17 15 0017 R > CEAKBMRYUVDNFLTXW(G)ZOIJQPHS
CDVA 25 15 18 16 0018 T > TLJRVQHGUCXBZYSWFDO(A)IEPKNM
CDVB 25 15 18 17 0019 B > Y(H)LPGTEBKWICSVUDRQMFONJZAX
CDVC 25 15 18 18 0020 E > KRUL(G)JEWNFADVIPOYBXZCMHSQT
CDVD 25 15 18 19 0021 K > RCBPQMVZXY(U)OFSLDEANWKGTIJH
CDVE 25 15 18 20 0022 A > (F)CBJQAWTVDYNXLUSEZPHOIGMKR
CDVF 25 15 18 21 0023 N > VFTQSBPORUZWY(X)HGDIECJALNMK
CDVG 25 15 18 22 0024 N > JSRHFENDUAZYQ(G)XTMCBPIWVOLK
CDVH 25 15 18 23 0025 T > RCBUTXVZJINQPKWMLAY(E)DGOFSH
CDVI 25 15 18 24 0026 Z > URFXNCMYLVPIGESKTBOQAJZDH(W)
CDVJ 25 15 18 25 0027 U > JIOZFEWMBAUSHPCNRQLV(K)TGYXD
CDVK 25 15 18 26 0028 G > ZGVRKO(B)XLNEIWJFUSDQYPCMHTA
CDVL 25 15 18 01 0029 E > RMJV(L)YQZKCIEBONUGAWXPDSTFH
CDVM 25 15 18 02 0030 B > G(K)QRFEANZPBMLHVJCDUXSOYTWI
CDWN 25 15 19 03 0031 E > YMZT(G)VEKQOHPBSJLIUNDRFXWAC
CDWO 25 15 19 04 0032 N >
https://en.wikipedia.org/wiki/Enigma_machine
passage: yc = f(c) else: # yc > yd to find the maximum a, c = c, d yc = yd d = a + invphi * h yd = f(d) return (a, d) if yc < yd else (c, b)
https://en.wikipedia.org/wiki/Golden-section_search
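Only the tail of the golden-section routine survives in the fragment above. A self-contained sketch of the same evaluation-reusing variant is given below; the constants, the iteration count, and the surrounding control flow are reconstructed here and are not guaranteed to match the source listing line for line.

```python
import math

invphi = (math.sqrt(5) - 1) / 2    # 1 / phi
invphi2 = (3 - math.sqrt(5)) / 2   # 1 / phi^2

def gss(f, a, b, tolerance=1e-5):
    """Golden-section search for the minimum of a unimodal f on [a, b],
    reusing one function evaluation per iteration."""
    a, b = min(a, b), max(a, b)
    h = b - a
    if h <= tolerance:
        return (a, b)
    c, d = a + invphi2 * h, a + invphi * h
    yc, yd = f(c), f(d)
    # Number of interval shrinks needed to reach the tolerance.
    n = math.ceil(math.log(tolerance / h) / math.log(invphi))
    for _ in range(n - 1):
        h *= invphi
        if yc < yd:                # the minimum lies in [a, d]
            b, d = d, c
            yd = yc
            c = a + invphi2 * h
            yc = f(c)
        else:                      # yc > yd; flip this test to find a maximum
            a, c = c, d
            yc = yd
            d = a + invphi * h
            yd = f(d)
    return (a, d) if yc < yd else (c, b)

print(gss(lambda x: (x - 2) ** 2, 1, 5))  # brackets the minimum near x = 2
```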
passage: ## History The precursor of the transforms were the Fourier series to express functions in finite intervals. Later the Fourier transform was developed to remove the requirement of finite intervals. Using the Fourier series, just about any practical function of time (the voltage across the terminals of an electronic device for example) can be represented as a sum of sines and cosines, each suitably scaled (multiplied by a constant factor), shifted (advanced or retarded in time) and "squeezed" or "stretched" (increasing or decreasing the frequency). The sines and cosines in the Fourier series are an example of an orthonormal basis. ## Usage example As an example of an application of integral transforms, consider the Laplace transform. This is a technique that maps differential or integro-differential equations in the "time" domain into polynomial equations in what is termed the "complex frequency" domain. (Complex frequency is similar to actual, physical frequency but rather more general. Specifically, the imaginary component ω of the complex frequency s = −σ + iω corresponds to the usual concept of frequency, viz., the rate at which a sinusoid cycles, whereas the real component σ of the complex frequency corresponds to the degree of "damping", i.e. an exponential decrease of the amplitude.)
https://en.wikipedia.org/wiki/Integral_transform
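As a small worked illustration of this mapping from a differential equation to an algebraic one (an example chosen purely for illustration, using only the standard transform of a first derivative): applying the one-sided Laplace transform $$ F(s) = \int_0^\infty f(t)\, e^{-st}\, dt $$ to the initial-value problem $$ f'(t) + a f(t) = 0, \qquad f(0) = f_0, $$ together with the identity $$ \mathcal{L}\{f'\}(s) = s F(s) - f(0), $$ turns the differential equation into the algebraic equation $$ s F(s) - f_0 + a F(s) = 0 \quad\Longrightarrow\quad F(s) = \frac{f_0}{s + a}, $$ and inverting the transform recovers the expected solution $$ f(t) = f_0\, e^{-a t} . $$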
passage: ### Controlled gate to construct the Bell state Controlled gates act on 2 or more qubits, where one or more qubits act as a control for some specified operation. In particular, the controlled NOT gate (CNOT or CX) acts on 2 qubits, and performs the NOT operation on the second qubit only when the first qubit is $$ |1\rangle $$ , and otherwise leaves it unchanged. With respect to the unentangled product basis $$ \{|00\rangle $$ , $$ |01\rangle $$ , $$ |10\rangle $$ , $$ |11\rangle\} $$ , it maps the basis states as follows: $$ | 0 0 \rangle \mapsto | 0 0 \rangle $$ $$ | 0 1 \rangle \mapsto | 0 1 \rangle $$ $$ | 1 0 \rangle \mapsto | 1 1 \rangle $$ $$ | 1 1 \rangle \mapsto | 1 0 \rangle $$ . A common application of the CNOT gate is to maximally entangle two qubits into the $$ |\Phi^+\rangle $$ Bell state.
https://en.wikipedia.org/wiki/Qubit
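The mapping of basis states listed above can be reproduced with a small numpy calculation. Ordering the basis as |00>, |01>, |10>, |11> is an assumption of this sketch, as is the use of a Hadamard gate to prepare the control qubit before entangling.

```python
import numpy as np

# CNOT in the |00>, |01>, |10>, |11> basis: flips the target when the control is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard on one qubit
I2 = np.eye(2)

ket00 = np.array([1, 0, 0, 0])

# Put the first qubit into (|0> + |1>)/sqrt(2), then entangle with CNOT.
bell = CNOT @ (np.kron(H, I2) @ ket00)
print(bell)   # [0.7071, 0, 0, 0.7071] = (|00> + |11>)/sqrt(2), the |Phi+> state
```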
passage: ## Definitions Because this is a linear differential equation, solutions can be scaled to any amplitude. The amplitudes chosen for the functions originate from the early work in which the functions appeared as solutions to definite integrals rather than solutions to differential equations. Because the differential equation is second-order, there must be two linearly independent solutions. Depending upon the circumstances, however, various formulations of these solutions are convenient. Different variations are summarized in the table below and described in the following sections. The subscript n is typically used in place of $$ \alpha $$ when $$ \alpha $$ is known to be an integer.
Type: first kind / second kind
Bessel functions: $$ J_\alpha $$ / $$ Y_\alpha $$
Modified Bessel functions: $$ I_\alpha $$ / $$ K_\alpha $$
Hankel functions: $$ H_\alpha^{(1)} $$ / $$ H_\alpha^{(2)} $$
Spherical Bessel functions: $$ j_n $$ / $$ y_n $$
Modified spherical Bessel functions: $$ i_n $$ / $$ k_n $$
Spherical Hankel functions: $$ h_n^{(1)} $$ / $$ h_n^{(2)} $$
Bessel functions of the second kind and the spherical Bessel functions of the second kind are sometimes denoted by $$ N_n $$ and $$ n_n $$ , respectively, rather than $$ Y_n $$ and $$ y_n $$ . ### Bessel functions of the first kind: Jα Bessel functions of the first kind, denoted as $$ J_\alpha(x) $$ , are solutions of Bessel's differential equation. For integer or positive $$ \alpha $$ , Bessel functions of the first kind are finite at the origin ( $$ x = 0 $$ ); while for negative non-integer $$ \alpha $$ , Bessel functions of the first kind diverge as $$ x $$ approaches zero.
https://en.wikipedia.org/wiki/Bessel_function
passage: This discovery prompted the letter from Szilárd and signed by Albert Einstein to President Franklin D. Roosevelt, warning of the possibility that Nazi Germany might be attempting to build an atomic bomb. On December 2, 1942, a team led by Fermi (and including Szilárd) produced the first artificial self-sustaining nuclear chain reaction with the Chicago Pile-1 experimental reactor in a racquets court below the bleachers of Stagg Field at the University of Chicago. Fermi's experiments at the University of Chicago were part of Arthur H. Compton's Metallurgical Laboratory of the Manhattan Project; the lab was renamed Argonne National Laboratory and tasked with conducting research in harnessing fission for nuclear energy. In 1956, Paul Kuroda of the University of Arkansas postulated that a natural fission reactor may have once existed. Since nuclear chain reactions may only require natural materials (such as water and uranium, if the uranium has sufficient amounts of 235U), it was possible to have these chain reactions occur in the distant past when uranium-235 concentrations were higher than today, and where there was the right combination of materials within the Earth's crust. Uranium-235 made up a larger share of uranium on Earth in the geological past because of the different half-lives of the isotopes and , the former decaying almost an order of magnitude faster than the latter. Kuroda's prediction was verified with the discovery of evidence of natural self-sustaining nuclear chain reactions in the past at Oklo in Gabon in September 1972.
https://en.wikipedia.org/wiki/Nuclear_chain_reaction
passage: This makes classical logic a special fragment of CoL. Thus CoL is a conservative extension of classical logic. Computability logic is more expressive, constructive and computationally meaningful than classical logic. Besides classical logic, independence-friendly (IF) logic and certain proper extensions of linear logic and intuitionistic logic also turn out to be natural fragments of CoL.G. Japaridze, The intuitionistic fragment of computability logic at the propositional level. Annals of Pure and Applied Logic 147 (2007), pages 187–227. Hence meaningful concepts of "intuitionistic truth", "linear-logic truth" and "IF-logic truth" can be derived from the semantics of CoL. CoL systematically answers the fundamental question of what can be computed and how; thus CoL has many applications, such as constructive applied theories, knowledge base systems, systems for planning and action. Out of these, only applications in constructive applied theories have been extensively explored so far: a series of CoL-based number theories, termed "clarithmetics", have been constructedG. Japaridze, Build your own clarithmetic I: Setup and completeness. Logical Methods in Computer Science 12 (2016), Issue 3, paper 8, pp. 1–59. as computationally and complexity-theoretically meaningful alternatives to the classical-logic-based first-order Peano arithmetic and its variations such as systems of bounded arithmetic.
https://en.wikipedia.org/wiki/Computability_logic
passage: if and only if, for all sentences $$ \phi $$ in the language of the theory $$ \mathcal{T} $$ , if $$ \mathcal{T} \vdash \phi $$ , then $$ \phi \in \mathcal{T} $$ ; or, equivalently, if $$ \mathcal{T}' $$ is a finite subset of $$ \mathcal{T} $$ (possibly the set of axioms of $$ \mathcal{T} $$ in the case of finitely axiomatizable theories) and $$ \mathcal{T}' \vdash \phi $$ , then $$ \phi \in \mathcal{T}' $$ , and therefore $$ \phi \in \mathcal{T} $$ . ### Consistency and completeness A syntactically consistent theory is a theory from which not every sentence in the underlying language can be proven (with respect to some deductive system, which is usually clear from context). In a deductive system (such as first-order logic) that satisfies the principle of explosion, this is equivalent to requiring that there is no sentence φ such that both φ and its negation can be proven from the theory. A satisfiable theory is a theory that has a model. This means there is a structure M that satisfies every sentence in the theory. Any satisfiable theory is syntactically consistent, because the structure satisfying the theory will satisfy exactly one of φ and the negation of φ, for each sentence φ.
https://en.wikipedia.org/wiki/Theory_%28mathematical_logic%29
passage: ## Theory ### Structure theorem for commutative Noetherian rings Over a commutative Noetherian ring $$ R $$ , every injective module is a direct sum of indecomposable injective modules and every indecomposable injective module is the injective hull of the residue field at a prime $$ \mathfrak{p} $$ . That is, for an injective $$ I \in \text{Mod}(R) $$ , there is an isomorphism $$ I \cong \bigoplus_{i} E(R/\mathfrak{p}_i) $$ where $$ E(R/\mathfrak{p}_i) $$ are the injective hulls of the modules $$ R/\mathfrak{p}_i $$ . In addition, if $$ I $$ is the injective hull of some module $$ M $$ then the $$ \mathfrak{p}_i $$ are the associated primes of $$ M $$ . ### Submodules, quotients, products, and sums, Bass-Papp Theorem Any product of (even infinitely many) injective modules is injective; conversely, if a direct product of modules is injective, then each module is injective. Every direct sum of finitely many injective modules is injective. In general, submodules, factor modules, or infinite direct sums of injective modules need not be injective. Every submodule of every injective module is injective if and only if the ring is Artinian semisimple; every factor module of every injective module is injective if and only if the ring is hereditary.
https://en.wikipedia.org/wiki/Injective_module
passage: In this discussion, an instance of a feature representation is referred to as a feature descriptor, or simply descriptor. ### Certainty or confidence Two examples of image features are local edge orientation and local velocity in an image sequence. In the case of orientation, the value of this feature may be more or less undefined if more than one edge is present in the corresponding neighborhood. Local velocity is undefined if the corresponding image region does not contain any spatial variation. As a consequence of this observation, it may be relevant to use a feature representation that includes a measure of certainty or confidence related to the statement about the feature value. Otherwise, it is a typical situation that the same descriptor is used to represent feature values of low certainty and feature values close to zero, with a resulting ambiguity in the interpretation of this descriptor. Depending on the application, such an ambiguity may or may not be acceptable. In particular, if a featured image will be used in subsequent processing, it may be a good idea to employ a feature representation that includes information about certainty or confidence. This enables a new feature descriptor to be computed from several descriptors, for example, computed at the same image point but at different scales, or from different but neighboring points, in terms of a weighted average where the weights are derived from the corresponding certainties. In the simplest case, the corresponding computation can be implemented as a low-pass filtering of the featured image. The resulting feature image will, in general, be more stable to noise. ### Averageability
https://en.wikipedia.org/wiki/Feature_%28computer_vision%29
passage: In the case of a normed vector space, the statement is: $$ \big|\|u\|-\|v\|\big| \leq \|u-v\|, $$ or for metric spaces, $$ |d(A, C) - d(B, C)| \leq d(A, B) $$ . This implies that the norm $$ \|\cdot\| $$ as well as the distance-from- $$ z $$ function $$ d(z ,\cdot) $$ are Lipschitz continuous with Lipschitz constant , and therefore are in particular uniformly continuous. The proof of the reverse triangle inequality from the usual one uses $$ \|v-u\| = \|{-}1(u-v)\| = |{-}1|\cdot\|u-v\| = \|u-v\| $$ to find: $$ \|u\| = \|(u-v) + v\| \leq \|u-v\| + \|v\| \Rightarrow \|u\| - \|v\| \leq \|u-v\|, $$ $$ \|v\| = \|(v-u) + u\| \leq \|v-u\| + \|u\| \Rightarrow \|u\| - \|v\| \geq -\|u-v\|, $$ Combining these two statements gives: $$ -\|u-v\| \leq \|u\|-\|v\| \leq \|u-v\| \Rightarrow \big|\|u\|-\|v\|\big| \leq \|u-v\|. $$ In
https://en.wikipedia.org/wiki/Triangle_inequality
passage: With an engineered combination of two birefringent materials, an achromatic waveplate can be manufactured such that the spectral response of its phase retardance can be nearly flat. A common use of waveplates—particularly the sensitive-tint (full-wave) and quarter-wave plates—is in optical mineralogy. Addition of plates between the polarizers of a petrographic microscope makes the optical identification of minerals in thin sections of rocks easier, in particular by allowing deduction of the shape and orientation of the optical indicatrices within the visible crystal sections. This alignment can allow discrimination between minerals which otherwise appear very similar in plane polarized and cross polarized light. ## Principles of operation A waveplate works by shifting the phase between two perpendicular polarization components of the light wave. A typical waveplate is simply a birefringent crystal with a carefully chosen orientation and thickness. The crystal is cut into a plate, with the orientation of the cut chosen so that the optic axis of the crystal is parallel to the surfaces of the plate. This results in two axes in the plane of the cut: the ordinary axis, with index of refraction no, and the extraordinary axis, with index of refraction ne. The ordinary axis is perpendicular to the optic axis. The extraordinary axis is parallel to the optic axis.
https://en.wikipedia.org/wiki/Waveplate
passage: The current (January 2021) OWL version of COSMO has over 24000 types (OWL classes), over 1350 relations, and over 21000 restrictions. The COSMO itself (COSMO.owl) and other related and explanatory files can be obtained at the link for COSMO in the External Links section below. Cyc Cyc is a proprietary system, under development since 1986, consisting of a foundation ontology and several domain-specific ontologies (called microtheories). A subset of those ontologies was released for free under the name OpenCyc in 2002 and was available until circa 2016. A subset of Cyc called ResearchCyc was made available for free non-commercial research use in 2006. DOLCE Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE) is a foundational ontology designed in 2002 in the context of the WonderWeb EU project, developed by Nicola Guarino and his associates at the Laboratory for Applied Ontology (LOA). As implied by its acronym, DOLCE is oriented toward capturing the ontological categories underlying natural language and human common sense. DOLCE, however, does not commit to a strictly referentialist metaphysics related to the intrinsic nature of the world. Rather, the categories it introduces are thought of as cognitive artifacts, which are ultimately depending on human perception, cultural imprints, and social conventions. In this sense, they intend to be just descriptive (vs prescriptive) notions, which support the formal specification of domain conceptualizations.
https://en.wikipedia.org/wiki/Upper_ontology
passage: They argue that the course of the universe is absolutely determined, but that humans are screened from knowledge of the determinative factors. So, they say, it only appears that things proceed in a probabilistic way. John S. Bell analyzed Einstein's work in his famous Bell's theorem, which demonstrates that quantum mechanics makes statistical predictions that would be violated if local hidden variables really existed. Many experiments have verified the quantum predictions. ### Other interpretations Bell's theorem only applies to local hidden variables. Quantum mechanics can be formulated with non-local hidden variables to achieve a deterministic theory that is in agreement with experiment. An example is the Bohm interpretation of quantum mechanics. Bohm's interpretation, though, violates special relativity and it is highly controversial whether or not it can be reconciled without giving up on determinism. The many-worlds interpretation focuses on the deterministic nature of the Schrödinger equation. For any closed system, including the entire universe, the wavefunction solutions to this equation evolve deterministically. The apparent randomness of observations corresponds to branching of the wavefunction, with one world for each possible outcome. Another foundational assumption to quantum mechanics is that of free will, which has been argued to be foundational to the scientific method as a whole. Bell acknowledged that abandoning this assumption would allow for the maintenance of both determinism and locality. This perspective is known as superdeterminism, and is defended by some physicists such as Sabine Hossenfelder and Tim Palmer.
https://en.wikipedia.org/wiki/Determinism
passage: ### The Lie quadric The Lie quadric of the plane is defined as follows. Let R3,2 denote the space R5 of 5-tuples of real numbers, equipped with the signature (3,2) symmetric bilinear form defined by $$ (x_0,x_1,x_2,x_3,x_4)\cdot (y_0,y_1,y_2,y_3,y_4) = - x_0 y_0 - x_1 y_1 + x_2 y_2 + x_3 y_4 + x_4 y_3. $$ The projective space RP4 is the space of lines through the origin in R5 and is the space of nonzero vectors x in R5 up to scale, where x= (x0,x1,x2,x3,x4). The planar Lie quadric Q consists of the points [x] in projective space represented by vectors x with x · x = 0. To relate this to planar geometry it is necessary to fix an oriented timelike line. The chosen coordinates suggest using the point [1,0,0,0,0] ∈ RP4. Any point in the Lie quadric Q can then be represented by a vector x = λ(1,0,0,0,0) + v, where v is orthogonal to (1,0,0,0,0). Since [x] ∈ Q, v · v = λ2 ≥ 0. The orthogonal space to (1,0,0,0,0), intersected with the Lie quadric, is the two dimensional celestial sphere S in Minkowski space-time.
https://en.wikipedia.org/wiki/Lie_sphere_geometry
passage: ## Generalizations Beyond its classical formulation for real numbers and convex functions, Jensen’s inequality has been extended to the realm of operator theory. In this non‐commutative setting the inequality is expressed in terms of operator convex functions—that is, functions defined on an interval I that satisfy $$ f\bigl(\lambda x + (1-\lambda)y\bigr)\le\lambda f(x)+(1-\lambda)f(y) $$ for every pair of self‐adjoint operators x and y (with spectra in I) and every scalar $$ \lambda\in[0,1] $$ . Hansen and Pedersen established a definitive version of this inequality by considering genuine non‐commutative convex combinations.
https://en.wikipedia.org/wiki/Jensen%27s_inequality
passage: This is known as the "group completion of a semigroup" or "group of fractions of a semigroup". ### Properties In the language of category theory, any universal construction gives rise to a functor; one thus obtains a functor from the category of commutative monoids to the category of abelian groups which sends the commutative monoid M to its Grothendieck group K. This functor is left adjoint to the forgetful functor from the category of abelian groups to the category of commutative monoids. For a commutative monoid M, the map i : M → K is injective if and only if M has the cancellation property, and it is bijective if and only if M is already a group. ### Example: the integers The easiest example of a Grothendieck group is the construction of the integers $$ \Z $$ from the (additive) natural numbers $$ \N $$ . First one observes that the natural numbers (including 0) together with the usual addition indeed form a commutative monoid $$ (\N, +). $$ Now when one uses the Grothendieck group construction one obtains the formal differences between natural numbers as elements n − m and one has the equivalence relation $$ n - m \sim n' - m' \iff n + m' + k = n'+ m + k $$ for some $$ k \iff n + m' = n' + m $$ .
https://en.wikipedia.org/wiki/Grothendieck_group
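The construction of the integers sketched above can be mimicked directly in code. The following is a small illustrative sketch (the helper names are my own): pairs (n, m) stand for the formal difference n − m, the equivalence is n + m′ = n′ + m, and addition is componentwise.

```python
# Grothendieck group of (N, +): pairs (n, m) read as the formal difference n - m.

def equivalent(a, b):
    # (n, m) ~ (n', m')  iff  n + m' = n' + m
    n, m = a
    n2, m2 = b
    return n + m2 == n2 + m

def add(a, b):
    # addition of formal differences is componentwise
    return (a[0] + b[0], a[1] + b[1])

def to_int(a):
    # the resulting group is isomorphic to Z via (n, m) -> n - m
    return a[0] - a[1]

print(equivalent((5, 2), (8, 5)))      # True: both pairs represent 3
print(to_int(add((1, 4), (2, 0))))     # -1, i.e. (1 - 4) + (2 - 0)
```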
passage: #### Amplitude part Squaring both equations and adding them together gives $$ \left. \begin{aligned} A^2 (1-\omega^2)^2 &= \cos^2\varphi \\ (2 \zeta \omega A)^2 &= \sin^2\varphi \end{aligned} \right\} \Rightarrow A^2[(1 - \omega^2)^2 + (2 \zeta \omega)^2] = 1. $$ Therefore, $$ A = A(\zeta, \omega) = \sgn \left( \frac{-\sin\varphi}{2 \zeta \omega} \right) \frac{1}{\sqrt{(1 - \omega^2)^2 + (2 \zeta \omega)^2}}. $$ Compare this result with the theory section on resonance, as well as the "magnitude part" of the RLC circuit. This amplitude function is particularly important in the analysis and understanding of the frequency response of second-order systems. #### Phase part To solve for $$ \varphi $$ , divide both equations to get $$ \tan\varphi = -\frac{2 \zeta \omega}{1 - \omega^2} = \frac{2 \zeta \omega}{\omega^2 - 1}~~ \implies ~~ \varphi \equiv \varphi(\zeta, \omega) = \arctan \left( \frac{2 \zeta \omega}{\omega^2 - 1} \right ) + n\pi. $$ This phase function is particularly important in the analysis and understanding of the frequency response of second-order systems.
https://en.wikipedia.org/wiki/Harmonic_oscillator
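For a quick numerical feel for these formulas, the sketch below (not from the source) evaluates the amplitude A(ζ, ω) and a phase on the branch where the response lags the drive, with ω measured in units of the natural frequency. The choice of branch and the sample values of ζ and ω are assumptions for illustration.

```python
import numpy as np

def amplitude(zeta, omega):
    # |A| = 1 / sqrt((1 - w^2)^2 + (2*zeta*w)^2), the magnitude part above
    return 1.0 / np.sqrt((1 - omega**2) ** 2 + (2 * zeta * omega) ** 2)

def phase(zeta, omega):
    # arctan2 selects the branch on which the phase runs from 0 to -pi,
    # so the steady-state response lags the driving force
    return -np.arctan2(2 * zeta * omega, 1 - omega**2)

zeta = 0.2
for omega in (0.5, 1.0, 2.0):
    print(f"w={omega}:  A={amplitude(zeta, omega):.3f}  phi={phase(zeta, omega):.3f} rad")
# at w = 1 the amplitude is 1/(2*zeta) = 2.5 and the phase is -pi/2
```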
passage: Nobel Prize winner Elizabeth Blackburn, who was co-founder of one company, promoted the clinical utility of telomere length measures. ## In wildlife During the last two decades, eco-evolutionary studies have investigated the influence of life-history traits and environmental conditions on the telomeres of wildlife. Most of these studies have been conducted in endotherms, i.e. birds and mammals. They have provided evidence for the inheritance of telomere length; however, heritability estimates vary greatly within and among species. Age and telomere length often negatively correlate in vertebrates, but this decline is variable among taxa and linked to the method used for estimating telomere length. In contrast, the available information shows no sex differences in telomere length across vertebrates. Phylogeny and life history traits such as body size or the pace of life can also affect telomere dynamics. Such effects have been described, for example, across species of birds and mammals. In 2019, a meta-analysis confirmed that the exposure to stressors (e.g. pathogen infection, competition, reproductive effort and high activity level) was associated with shorter telomeres across different animal taxa. Studies on ectotherms, and other non-mammalian organisms, show that there is no single universal model of telomere erosion; rather, there is wide variation in relevant dynamics across Metazoa, and even within smaller taxonomic groups these patterns appear diverse.
https://en.wikipedia.org/wiki/Telomere
passage: As Robert Styer puts it in his paper calculating this series: "Amazingly, the same value of N that begins the least sequence of six consecutive happy numbers also begins the least sequence of seven consecutive happy numbers." The number of 10-happy numbers up to $$ 10^n $$ for 1 ≤ n ≤ 20 is 3, 20, 143, 1442, 14377, 143071, 1418854, 14255667, 145674808, 1492609148, 15091199357, 149121303586, 1443278000870, 13770853279685, 130660965862333, 1245219117260664, 12024696404768025, 118226055080025491, 1183229962059381238, 12005034444292997294. ## Happy primes A $$ b $$ -happy prime is a number that is both $$ b $$ -happy and prime. Unlike happy numbers, rearranging the digits of a $$ b $$ -happy prime will not necessarily create another happy prime. For instance, while 19 is a 10-happy prime, 91 = 13 × 7 is not prime (but is still 10-happy). All prime numbers are 2-happy and 4-happy primes, as base 2 and base 4 are happy bases. ### 6-happy primes In base 6, the 6-happy primes below 1296 = $$ 6^4 $$ are 211, 1021, 1335, 2011, 2425, 2555, 3351, 4225, 4441, 5255, 5525.
https://en.wikipedia.org/wiki/Happy_number
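The definitions above translate directly into a short program. This is an illustrative sketch (the helper names are my own): non-happy numbers are detected by generic cycle detection rather than by checking against the known cycle for a given base, and the script lists the 10-happy primes below 100 and confirms the first entry of the base-6 list.

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def digit_square_sum(n, base=10):
    s = 0
    while n:
        n, d = divmod(n, base)
        s += d * d
    return s

def is_happy(n, base=10):
    seen = set()
    while n != 1 and n not in seen:   # stop at 1 (happy) or on entering a cycle
        seen.add(n)
        n = digit_square_sum(n, base)
    return n == 1

# 10-happy primes below 100
print([p for p in range(2, 100) if is_prime(p) and is_happy(p)])
# -> [7, 13, 19, 23, 31, 79, 97]

# 79 is written 211 in base 6, the first 6-happy prime listed above
print(is_happy(79, base=6) and is_prime(79))   # True
```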
passage: However, a function does not need to be differentiable for its Jacobian matrix to be defined, since only its first-order partial derivatives are required to exist. If is differentiable at a point in , then its differential is represented by . In this case, the linear transformation represented by is the best linear approximation of near the point , in the sense that $$ \mathbf f(\mathbf x) - \mathbf f(\mathbf p) = \mathbf J_{\mathbf f}(\mathbf p)(\mathbf x - \mathbf p) + o(\|\mathbf x - \mathbf p\|) \quad (\text{as } \mathbf{x} \to \mathbf{p}), $$ where is a quantity that approaches zero much faster than the distance between and does as approaches . This approximation specializes to the approximation of a scalar function of a single variable by its Taylor polynomial of degree one, namely $$ f(x) - f(p) = f'(p) (x - p) + o(x - p) \quad (\text{as } x \to p). $$ In this sense, the Jacobian may be regarded as a kind of "first-order derivative" of a vector-valued function of several variables. In particular, this means that the gradient of a scalar-valued function of several variables may too be regarded as its "first-order derivative".
https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant
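The "best linear approximation" property is easy to check numerically. Below is a small sketch (the map f, the base point, and the step sizes are illustrative, not from the source) that builds a forward-difference Jacobian and compares f(x) − f(p) with J(p)(x − p) for a point x near p.

```python
import numpy as np

def f(x):
    # an example map f: R^2 -> R^2
    return np.array([x[0] ** 2 * x[1], np.sin(x[1]) + x[0]])

def numerical_jacobian(f, p, h=1e-6):
    p = np.asarray(p, dtype=float)
    fp = f(p)
    J = np.zeros((fp.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = h
        J[:, j] = (f(p + dp) - fp) / h   # forward difference in coordinate j
    return J

p = np.array([1.0, 2.0])
J = numerical_jacobian(f, p)

x = p + np.array([1e-3, -2e-3])   # a point close to p
lhs = f(x) - f(p)
rhs = J @ (x - p)
print(np.linalg.norm(lhs - rhs))  # tiny compared to ||x - p||, as expected
```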
passage: For fermions in the half-integer spin representation, it is shown that there are only these two types of SU(2) anomalies and the linear combinations of these two anomalies; these classify all global SU(2) anomalies. This new SU(2) anomaly also plays an important role in confirming the consistency of SO(10) grand unified theory, with a Spin(10) gauge group and chiral fermions in the 16-dimensional spinor representations, defined on non-spin manifolds. ### Higher anomalies involving higher global symmetries: Pure Yang–Mills gauge theory as an example The concept of global symmetries can be generalized to higher global symmetries, such that the charged object for the ordinary 0-form symmetry is a particle, while the charged object for the n-form symmetry is an n-dimensional extended operator. It is found that the 4-dimensional pure Yang–Mills theory with only SU(2) gauge fields with a topological theta term $$ \theta=\pi, $$ can have a mixed higher 't Hooft anomaly between the 0-form time-reversal symmetry and 1-form Z2 center symmetry. The 't Hooft anomaly of 4-dimensional pure Yang–Mills theory can be precisely written as a 5-dimensional invertible topological field theory or, mathematically, a 5-dimensional bordism invariant, generalizing the anomaly inflow picture to this Z2 class of global anomaly involving higher symmetries.
https://en.wikipedia.org/wiki/Anomaly_%28physics%29
passage: - Interpolation of the fields from the mesh to the particle locations. Models which include interactions of particles only through the average fields are called PM (particle-mesh). Those which include direct binary interactions are PP (particle-particle). Models with both types of interactions are called PP-PM or P3M. Since the early days, it has been recognized that the PIC method is susceptible to error from so-called discrete particle noise. This error is statistical in nature, and today it remains less well understood than for traditional fixed-grid methods, such as Eulerian or semi-Lagrangian schemes. Modern geometric PIC algorithms are based on a very different theoretical framework. These algorithms use tools of discrete manifolds, interpolating differential forms, and canonical or non-canonical symplectic integrators to guarantee gauge invariance and conservation of charge, energy-momentum, and, more importantly, the infinite-dimensional symplectic structure of the particle-field system. These desired features are attributed to the fact that geometric PIC algorithms are built on the more fundamental field-theoretical framework and are directly linked to the perfect form, i.e., the variational principle of physics. ## Basics of the PIC plasma simulation technique Inside the plasma research community, systems of different species (electrons, ions, neutrals, molecules, dust particles, etc.) are investigated.
https://en.wikipedia.org/wiki/Particle-in-cell
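As a minimal illustration of the particle–mesh interpolation step mentioned above, here is a 1D linear-weighting (cloud-in-cell) sketch: charge deposition from particles onto a periodic grid, and gathering of a mesh field back to the particle positions. The grid size, particle positions, and the mock field are arbitrary choices, not from the source.

```python
import numpy as np

nx, dx = 16, 1.0   # number of cells and cell size of a periodic 1D mesh

def deposit(positions, charges):
    """Particle-to-mesh: linear (cloud-in-cell) charge deposition."""
    rho = np.zeros(nx)
    for x, q in zip(positions, charges):
        cell = int(np.floor(x / dx))
        w = x / dx - cell                  # fractional offset inside the cell
        rho[cell % nx] += q * (1 - w) / dx
        rho[(cell + 1) % nx] += q * w / dx
    return rho

def gather(positions, field):
    """Mesh-to-particle: interpolate a grid field to particle locations."""
    out = np.empty(len(positions))
    for k, x in enumerate(positions):
        cell = int(np.floor(x / dx))
        w = x / dx - cell
        out[k] = (1 - w) * field[cell % nx] + w * field[(cell + 1) % nx]
    return out

pos = np.array([2.25, 7.5, 10.9])
rho = deposit(pos, charges=np.ones_like(pos))
E = np.sin(2 * np.pi * np.arange(nx) / nx)   # a mock electric field on the mesh
print(rho.sum() * dx)                        # total deposited charge: 3.0
print(gather(pos, E))                        # field seen by each particle
```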