text: string (82 to 2.62k characters)
source: string (31 to 108 characters)
passage: The closure or span $$ \operatorname{cl}(A) $$ of a subset $$ A $$ of $$ E $$ is the set $$ \operatorname{cl}(A) = \Bigl\{\ x \in E \mid r(A) = r\bigl( A \cup \{x\} \bigr) \Bigr\} $$ . This defines a closure operator $$ \operatorname{cl}: \mathcal{P}(E) \to \mathcal{P}(E) $$ where $$ \mathcal{P} $$ denotes the power set, with the following properties: - (C1) For all subsets $$ X $$ of $$ E $$ , $$ X \subseteq \operatorname{cl}(X) $$ . - (C2) For all subsets $$ X $$ of $$ E $$ , $$ \operatorname{cl}(X)= \operatorname{cl}\left( \operatorname{cl}\left( X \right) \right) $$ . - (C3) For all subsets $$ X $$ and $$ Y $$ of $$ E $$ with $$ X\subseteq Y $$ , $$ \operatorname{cl}(X)\subseteq \operatorname{cl}(Y) $$ . - (C4)
https://en.wikipedia.org/wiki/Matroid
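A minimal sketch of this closure operator, assuming the matroid is supplied as a rank function on subsets of E (the uniform matroid U_{2,4} below is just an illustrative example):

```python
def closure(rank, E, A):
    """cl(A) = { x in E : r(A) = r(A ∪ {x}) } for a matroid given by its rank function."""
    A = frozenset(A)
    r_A = rank(A)
    return {x for x in E if rank(A | {x}) == r_A}

# Uniform matroid U_{2,4} on E = {0, 1, 2, 3}: rank(X) = min(|X|, 2).
# Any 2-element subset already has maximal rank, so its closure is all of E.
E = frozenset(range(4))
rank = lambda X: min(len(X), 2)
print(closure(rank, E, {0, 1}))   # {0, 1, 2, 3}
```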
passage: At the conference on Functional Programming Languages and Computer Architecture (FPCA '87) in Portland, Oregon, there was a strong consensus that a committee be formed to define an open standard for such languages. The committee's purpose was to consolidate existing functional languages into a common one to serve as a basis for future research in functional-language design. ### Haskell 1.0 to 1.4 Haskell was developed by a committee, attempting to bring together off-the-shelf solutions where possible. Type classes, which enable type-safe operator overloading, were first proposed by Philip Wadler and Stephen Blott to address the ad-hoc handling of equality types and arithmetic overloading in languages at the time. In early versions of Haskell up to and including version 1.2, user interaction and input/output (IO) were handled by both stream-based and continuation-based mechanisms, which were widely considered unsatisfactory. In version 1.3, monadic IO was introduced, along with the generalisation of type classes to higher kinds (type constructors). Along with "do notation", which provides syntactic sugar for the Monad type class, this gave Haskell an effect system that maintained referential transparency and was convenient. Another notable change in early versions was the treatment of the 'seq' function, which creates a data dependency between values and is used in lazy languages to avoid excessive memory consumption; it moved from a type class to a standard function to make refactoring more practical. The first version of Haskell ("Haskell 1.0") was defined in 1990.
https://en.wikipedia.org/wiki/Haskell
passage: Many of the claims regarding the efficacy of alternative medicines are controversial, since research on them is frequently of low quality and methodologically flawed. Selective publication bias, marked differences in product quality and standardisation, and some companies making unsubstantiated claims call into question the claims of efficacy of isolated examples where there is evidence for alternative therapies. The Scientific Review of Alternative Medicine points to confusions in the general population – a person may attribute symptomatic relief to an otherwise-ineffective therapy just because they are taking something (the placebo effect); the natural recovery from or the cyclical nature of an illness (the regression fallacy) gets misattributed to an alternative medicine being taken; and a person diagnosed with an alternative disease category rather than by science-based medicine may never have had a true illness to begin with. Edzard Ernst, the first university professor of Complementary and Alternative Medicine, characterized the evidence for many alternative techniques as weak, nonexistent, or negative and in 2011 published his estimate that about 7.4% were based on "sound evidence", although he believes that may be an overestimate. Ernst has concluded that 95% of the alternative therapies he and his team studied, including acupuncture, herbal medicine, homeopathy, and reflexology, are "statistically indistinguishable from placebo treatments", but he also believes there is something that conventional doctors can usefully learn from the chiropractors and homeopaths: this is the therapeutic value of the placebo effect, one of the strangest phenomena in medicine.
https://en.wikipedia.org/wiki/Alternative_medicine
passage: The PGZ decoder does not determine ν directly but rather searches for it by trying successive values. The decoder first assumes the largest value for a trial ν and sets up the linear system for that value. If the equations can be solved (i.e., the matrix determinant is nonzero), then that trial value is the number of errors. If the linear system cannot be solved, then the trial ν is reduced by one and the next smaller system is examined. #### Find the roots of the error locator polynomial Use the coefficients Λi found in the last step to build the error location polynomial. The roots of the error location polynomial can be found by exhaustive search. The error locators Xk are the reciprocals of those roots. The order of coefficients of the error location polynomial can be reversed, in which case the roots of that reversed polynomial are the error locators $$ X_k $$ (not their reciprocals $$ X_k^{-1} $$ ). Chien search is an efficient implementation of this step. #### Calculate the error values Once the error locators Xk are known, the error values can be determined. This can be done by direct solution for Yk in the error equations matrix given above, or using the Forney algorithm. #### Calculate the error locations Calculate ik by taking the log base $$ \alpha $$ of Xk. This is generally done using a precomputed lookup table. ####
https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction
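A minimal sketch of the exhaustive (Chien-style) root search and the log-based location step, assuming the common GF(2^8) arithmetic with primitive polynomial 0x11D and an error locator polynomial given with its highest-degree coefficient first (all names and conventions here are illustrative, not tied to a specific RS code):

```python
# Build GF(2^8) exp/log tables (primitive polynomial 0x11D assumed for illustration).
GF_EXP, GF_LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    GF_EXP[i], GF_LOG[x] = x, i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
for i in range(255, 512):
    GF_EXP[i] = GF_EXP[i - 255]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else GF_EXP[GF_LOG[a] + GF_LOG[b]]

def poly_eval(poly, x):
    # Horner evaluation; poly holds coefficients from highest degree down to the constant term.
    y = poly[0]
    for c in poly[1:]:
        y = gf_mul(y, x) ^ c
    return y

def error_positions(err_loc_poly, n):
    """A root alpha^(-j) of the error locator polynomial means position j is in error:
    the error locator X_j = alpha^j is the reciprocal of that root, and j = log_alpha(X_j)."""
    return [j for j in range(n) if poly_eval(err_loc_poly, GF_EXP[(255 - j) % 255]) == 0]
```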
passage: DNA viruses The genome replication of most DNA viruses takes place in the cell's nucleus. If the cell has the appropriate receptor on its surface, these viruses enter the cell either by direct fusion with the cell membrane (e.g., herpesviruses) or—more usually—by receptor-mediated endocytosis. Most DNA viruses are entirely dependent on the host cell's DNA and RNA synthesising machinery and RNA processing machinery. Viruses with larger genomes may encode much of this machinery themselves. In eukaryotes, the viral genome must cross the cell's nuclear membrane to access this machinery, while in bacteria it need only enter the cell. RNA viruses Replication of RNA viruses usually takes place in the cytoplasm. RNA viruses can be placed into four different groups depending on their modes of replication. The polarity (whether or not it can be used directly by ribosomes to make proteins) of single-stranded RNA viruses largely determines the replicative mechanism; the other major criterion is whether the genetic material is single-stranded or double-stranded. All RNA viruses use their own RNA replicase enzymes to create copies of their genomes. Reverse transcribing viruses Reverse transcribing viruses have ssRNA (Retroviridae, Metaviridae, Pseudoviridae) or dsDNA (Caulimoviridae, and Hepadnaviridae) in their particles.
https://en.wikipedia.org/wiki/Virus
passage: Conversely, the choice of a point called the origin and an orthonormal basis of the space of translations is equivalent with defining an isomorphism between a Euclidean space of dimension n and $$ \R^n $$ viewed as a Euclidean space. It follows that everything that can be said about a Euclidean space can also be said about $$ \R^n. $$ Therefore, many authors, especially at elementary level, call $$ \R^n $$ the standard Euclidean space of dimension n, or simply the Euclidean space of dimension n. A reason for introducing such an abstract definition of Euclidean spaces, and for working with $$ \mathbb{E}^n $$ instead of $$ \R^n $$ is that it is often preferable to work in a coordinate-free and origin-free manner (that is, without choosing a preferred basis and a preferred origin). Another reason is that there is no standard origin nor any standard basis in the physical world. ### Technical definition A Euclidean vector space is a finite-dimensional inner product space over the real numbers. A Euclidean space is an affine space over the reals such that the associated vector space is a Euclidean vector space. Euclidean spaces are sometimes called Euclidean affine spaces to distinguish them from Euclidean vector spaces. If E is a Euclidean space, its associated vector space (Euclidean vector space) is often denoted $$ \overrightarrow E. $$ The dimension of a Euclidean space is the dimension of its associated vector space.
https://en.wikipedia.org/wiki/Euclidean_space
passage: The US has five of the top 10; Italy two, Japan, Finland, Switzerland have one each. In June 2018, all combined supercomputers on the TOP500 list broke the 1 exaFLOPS mark. ## History In 1960, UNIVAC built the Livermore Atomic Research Computer (LARC), today considered among the first supercomputers, for the US Navy Research and Development Center. It still used high-speed drum memory, rather than the newly emerging disk drive technology. Also, among the first supercomputers was the IBM 7030 Stretch. The IBM 7030 was built by IBM for the Los Alamos National Laboratory, which then in 1955 had requested a computer 100 times faster than any existing computer. The IBM 7030 used transistors, magnetic core memory, pipelined instructions, prefetched data through a memory controller and included pioneering random access disk drives. The IBM 7030 was completed in 1961 and despite not meeting the challenge of a hundredfold increase in performance, it was purchased by the Los Alamos National Laboratory. Customers in England and France also bought the computer, and it became the basis for the IBM 7950 Harvest, a supercomputer built for cryptanalysis. The third pioneering supercomputer project in the early 1960s was the Atlas at the University of Manchester, built by a team led by Tom Kilburn. He designed the Atlas to have memory space for up to a million words of 48 bits, but because magnetic storage with such a capacity was unaffordable, the actual core memory of the Atlas was only 16,000 words, with a drum providing memory for a further 96,000 words.
https://en.wikipedia.org/wiki/Supercomputer
passage: Convergent evolution is an alternative explanation for why coral reef fish have come to resemble each other; the same applies to benthic marine invertebrates such as sponges and nudibranchs. ### Living and non-living models In its broadest definition, mimicry can include non-living models. The specific terms masquerade and mimesis are sometimes used when the models are inanimate, and the mimicry's purpose is crypsis. For example, animals such as flower mantises, planthoppers, comma and geometer moth caterpillars resemble twigs, bark, leaves, bird droppings or flowers. In addition, predators may make use of resemblance to harmless objects in aggressive masquerade, to enable them to approach prey. This wolf in sheep's clothing strategy differs from the more specific resemblance to the prey in aggressive mimicry, where the prey is both model and dupe. Many animals bear eyespots, which are hypothesized to resemble the eyes of larger animals. They may not resemble any specific organism's eyes, and whether or not animals respond to them as eyes is also unclear. The model is usually another species, except in automimicry, where members of the species mimic other members, or other parts of their own bodies, and in inter-sexual mimicry, where members of one sex mimic members of the other. ### Types Many types of mimicry have been described. An overview of each follows, highlighting the similarities and differences between the various forms.
https://en.wikipedia.org/wiki/Mimicry
passage: ## Notation and terminology The notation $$ \chi_A $$ is also used to denote the characteristic function in convex analysis, which is defined as if using the reciprocal of the standard definition of the indicator function. A related concept in statistics is that of a dummy variable. (This must not be confused with "dummy variables" as that term is usually used in mathematics, also called a bound variable.) The term "characteristic function" has an unrelated meaning in classic probability theory. For this reason, traditional probabilists use the term indicator function for the function defined here almost exclusively, while mathematicians in other fields are more likely to use the term characteristic function to describe the function that indicates membership in a set. In fuzzy logic and modern many-valued logic, predicates are the characteristic functions of a probability distribution. That is, the strict true/false valuation of the predicate is replaced by a quantity interpreted as the degree of truth. ## Basic properties The indicator or characteristic function of a subset $$ A $$ of some set $$ X $$ maps elements of $$ X $$ to the codomain $$ \{0,\, 1\}. $$ This mapping is surjective only when $$ A $$ is a non-empty proper subset of $$ X $$ .
https://en.wikipedia.org/wiki/Indicator_function
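A small illustrative sketch of the set-membership indicator function (the example set is arbitrary):

```python
def indicator(A):
    """Return the indicator (characteristic) function 1_A of a set A, mapping into {0, 1}."""
    return lambda x: 1 if x in A else 0

chi = indicator({2, 3, 5, 7})
print([chi(n) for n in range(10)])   # [0, 0, 1, 1, 0, 1, 0, 1, 0, 0]
```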
passage: However, if the time that is spent suspending a thread and then restoring it can be proven to be always more than the time that must be waited for a thread to become ready to run after being blocked in a particular situation, then spinlocks are an acceptable solution (for that situation only). ## Bound on the mutual exclusion problem One binary test&set register is sufficient to provide the deadlock-free solution to the mutual exclusion problem. But a solution built with a test&set register can possibly lead to the starvation of some processes which become caught in the trying section. In fact, $$ \Omega(\sqrt{n}) $$ distinct memory states are required to avoid lockout. To avoid unbounded waiting, n distinct memory states are required. ## Recoverable mutual exclusion Most algorithms for mutual exclusion are designed with the assumption that no failure occurs while a process is running inside the critical section. However, in reality such failures may be commonplace. For example, a sudden loss of power or faulty interconnect might cause a process in a critical section to experience an unrecoverable error or otherwise be unable to continue. If such a failure occurs, conventional, non-failure-tolerant mutual exclusion algorithms may deadlock or otherwise fail key liveness properties. To deal with this problem, several solutions using crash-recovery mechanisms have been proposed.
https://en.wikipedia.org/wiki/Mutual_exclusion
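A minimal sketch of a spinlock built on a single test-and-set style primitive; here Python's Lock.acquire(blocking=False) stands in for the binary test&set register (an illustrative assumption, not how production spinlocks are implemented):

```python
import threading
import time

class SpinLock:
    def __init__(self):
        self._flag = threading.Lock()   # models the binary test&set register

    def acquire(self):
        # Spin until the test&set succeeds (only the winning caller gets True).
        while not self._flag.acquire(blocking=False):
            time.sleep(0)   # yield so other threads can make progress

    def release(self):
        self._flag.release()
```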
passage: Constant symbols could include the natural number $$ 0 $$ , the Boolean value $$ \mathrm{true} $$ , and functions such as the successor function $$ \mathrm{S} $$ and conditional operator $$ \mathrm{if} $$ . Thus some terms could be $$ 0 $$ , $$ (\mathrm{S}\,0) $$ , $$ (\mathrm{S}\,(\mathrm{S}\,0)) $$ , and $$ (\mathrm{if}\,\mathrm{true}\,0\,(\mathrm{S}\,0)) $$ . ### Judgments Most type theories have 4 judgments: - " $$ T $$ is a type" - " $$ t $$ is a term of type $$ T $$ " - "Type $$ T_1 $$ is equal to type $$ T_2 $$ " - "Terms $$ t_1 $$ and $$ t_2 $$ both of type $$ T $$ are equal" Judgments may follow from assumptions. For example, one might say "assuming $$ x $$ is a term of type $$ \mathsf{bool} $$ and $$ y $$ is a term of type $$ \mathsf{nat} $$ , it follows that $$ (\mathrm{if}\,x\,y\,y) $$ is a term of type $$ \mathsf{nat} $$ ".
https://en.wikipedia.org/wiki/Type_theory
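A toy sketch of checking the judgment "t is a term of type T" for the constants above; the tuple encoding of terms is an arbitrary illustrative choice:

```python
def type_of(term):
    """Return the type of a term built from true, 0, S and if, or raise if ill-formed."""
    if term is True:
        return "bool"
    if term == 0:
        return "nat"
    if isinstance(term, tuple) and term[0] == "S":
        assert type_of(term[1]) == "nat"
        return "nat"
    if isinstance(term, tuple) and term[0] == "if":
        _, cond, then_t, else_t = term
        assert type_of(cond) == "bool"
        t1, t2 = type_of(then_t), type_of(else_t)
        assert t1 == t2              # both branches must have the same type
        return t1
    raise ValueError("not a well-formed term")

print(type_of(("if", True, 0, ("S", 0))))   # nat
```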
passage: ## List of mathematical spaces - Affine space - Algebraic space - Baire space - Banach space - Base space - Bergman space - Berkovich space - Besov space - Borel space - Calabi-Yau space - Cantor space - Cauchy space - Cellular space - Chu space - Closure space - Conformal space - Complex analytic space - Drinfeld's symmetric space - Eilenberg–Mac Lane space - Euclidean space - Fiber space - Finsler space - First-countable space - Fréchet space - Function space - G-space - Geometric space - Green space (topological space) - Hardy space - Hausdorff space - Heisenberg space - Hilbert space - Homogeneous space - Inner product space - Kolmogorov space - Lp-space - Lens space - Liouville space - Locally finite space - Loop space - Lorentz space - Mapping space - Measure space - Metric space - Minkowski space - Müntz space - Normed space - Paracompact space - Perfectoid space - Planar space - Polish space - Probability space - Projective space - Proximity space - Quadratic space - Quotient space (disambiguation) - Riemann's Moduli space - Sample space - Sequence space - Sierpiński space - Sobolev space - Standard space - State space - Stone space - Symplectic space (disambiguation) - T2 space - Teichmüller space - Tensor space - Topological space - Topological vector space - Total space - Uniform space - Vector space
https://en.wikipedia.org/wiki/Space_%28mathematics%29
passage: In mathematics, the closed graph theorem may refer to one of several basic results characterizing continuous functions in terms of their graphs. Each gives conditions when functions with closed graphs are necessarily continuous. A blog post by T. Tao lists several closed graph theorems throughout mathematics. ## Graphs and maps with closed graphs If $$ f : X \to Y $$ is a map between topological spaces then the graph of $$ f $$ is the set $$ \Gamma_f := \{ (x, f(x)) : x \in X \} $$ or equivalently, $$ \Gamma_f := \{ (x, y) \in X \times Y : y = f(x) \} $$ It is said that the graph of $$ f $$ is closed if $$ \Gamma_f $$ is a closed subset of $$ X \times Y $$ (with the product topology). Any continuous function into a Hausdorff space has a closed graph. If $$ L : X \to Y $$ is a linear map between two topological vector spaces whose topologies are (Cauchy) complete with respect to translation-invariant metrics, and if in addition (1a) $$ L $$ is sequentially continuous in the sense of the product topology, then the map $$ L $$ is continuous and its graph $$ \Gamma_L $$ is necessarily closed.
https://en.wikipedia.org/wiki/Closed_graph_theorem
passage: Polymers in this region would need a time-temperature superposition analysis to get more detailed information before cautiously deciding how to use the material. For instance, if the material is used for short interaction times, it presents as a 'hard' material, while for long interaction times it acts as a 'soft' material. - Region V: Viscous polymer flows easily in this region, marking another significant drop in stiffness. Extreme cold temperatures can cause viscoelastic materials to change to the glass phase and become brittle. For example, exposure of pressure sensitive adhesives to extreme cold (dry ice, freeze spray, etc.) causes them to lose their tack, resulting in debonding. ## Viscoelastic creep When subjected to a step constant stress, viscoelastic materials experience a time-dependent increase in strain. This phenomenon is known as viscoelastic creep. At time $$ t_0 $$ , a viscoelastic material is loaded with a constant stress that is maintained for a sufficiently long time period. The material responds to the stress with a strain that increases until the material ultimately fails, if it is a viscoelastic liquid. If, on the other hand, it is a viscoelastic solid, it may or may not fail depending on the applied stress versus the material's ultimate resistance.
https://en.wikipedia.org/wiki/Viscoelasticity
passage: If the ratio of radii falls beyond these limiting cases, the circles cannot satisfy the problem's area constraint. In the case of two circles of equal size, these equations can be simplified somewhat. The rhombus formed by the two circle centers and the two crossing points, with side lengths equal to the radius, has an angle $$ \theta\approx 2.605 $$ radians at the circle centers, found by solving the equation $$ \theta-\sin\theta=\frac{2\pi}{3}, $$ from which it follows that the ratio of the distance between their centers to their radius is $$ 2\cos\tfrac\theta2\approx0.529864 $$ .
https://en.wikipedia.org/wiki/Mrs._Miniver%27s_problem
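A quick numerical check of these values; bisection works because the left-hand side of the equation is increasing on [0, π]:

```python
from math import sin, cos, pi

f = lambda t: t - sin(t) - 2 * pi / 3   # solve theta - sin(theta) = 2*pi/3
lo, hi = 0.0, pi
for _ in range(100):                     # plain bisection
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)

theta = (lo + hi) / 2
print(theta)               # ~2.605
print(2 * cos(theta / 2))  # ratio of center distance to radius, ~0.529864
```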
passage: The discrete unit sample function is more simply defined as: $$ \delta[n] = \begin{cases} 1 & n = 0 \\ 0 & n \text{ is another integer}\end{cases} $$ In comparison, in continuous-time systems the Dirac delta function is often confused with both the Kronecker delta function and the unit sample function. The Dirac delta is defined as: $$ \begin{cases} \int_{-\varepsilon}^{+\varepsilon}\delta(t)dt = 1 & \forall \varepsilon > 0 \\ \delta(t) = 0 & \forall t \neq 0\end{cases} $$ Unlike the Kronecker delta function $$ \delta_{ij} $$ and the unit sample function $$ \delta[n] $$ , the Dirac delta function $$ \delta(t) $$ does not have an integer index; it has a single continuous non-integer value $$ t $$ . In continuous-time systems, the term "unit impulse function" is used to refer to the Dirac delta function $$ \delta(t) $$ or, in discrete-time systems, the Kronecker delta function $$ \delta[n] $$ .
https://en.wikipedia.org/wiki/Kronecker_delta
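A one-line sketch of the discrete unit sample function:

```python
def unit_sample(n: int) -> int:
    """delta[n]: 1 at n == 0 and 0 for every other integer."""
    return 1 if n == 0 else 0

print([unit_sample(n) for n in range(-3, 4)])   # [0, 0, 0, 1, 0, 0, 0]
```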
passage: When $$ m>1 $$ , the branch cuts of $$ \operatorname{am}(u,m) $$ in the $$ u $$ -plane cross the real line at $$ 2(2s+1)K(1/m)/\sqrt{m} $$ for $$ s\in\mathbb{Z} $$ ; therefore for $$ m>1 $$ , $$ \operatorname{am}(u,m) $$ is not continuous in $$ u $$ on the real line and jumps by $$ 2\pi $$ on the discontinuities. But defining $$ \operatorname{am}(u,m) $$ this way gives rise to very complicated branch cuts in the $$ m $$ -plane (not the $$ u $$ -plane); they have not been fully described as of yet. Let $$ E(\varphi,m)=\int_0^{\varphi}\sqrt{1-m\sin^2\theta}\,\mathrm d\theta $$ be the incomplete elliptic integral of the second kind with parameter $$ m $$ . Then the Jacobi epsilon function can be defined as $$ \mathcal{E}(u,m)=E(\operatorname{am}(u,m),m) $$ for $$ u\in\mathbb{R} $$ and $$ 0<m<1 $$ and by analytic continuation in each of the variables otherwise: the Jacobi epsilon function is meromorphic in the whole complex plane (in both $$ u $$ and $$ m $$ ).
https://en.wikipedia.org/wiki/Jacobi_elliptic_functions
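A minimal numerical sketch of this definition for real u and 0 < m < 1, assuming SciPy's ellipj (whose fourth return value is the amplitude am(u, m)) and ellipeinc (the incomplete integral E(φ, m)):

```python
from scipy.special import ellipj, ellipeinc

def jacobi_epsilon(u: float, m: float) -> float:
    """E(am(u, m), m) for real u and parameter 0 < m < 1."""
    sn, cn, dn, am = ellipj(u, m)   # am is the Jacobi amplitude
    return ellipeinc(am, m)         # incomplete elliptic integral of the second kind

print(jacobi_epsilon(1.2, 0.5))
```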
passage: Aliasing occurs when adjacent copies of X(f) overlap. The purpose of the anti-aliasing filter is to ensure that the reduced periodicity does not create overlap. The condition that ensures the copies of X(f) do not overlap each other is: $$ B < \tfrac{0.5}{T} \cdot \tfrac{1}{M}, $$ so $$ \tfrac{0.5}{T} \cdot \tfrac{1}{M} $$ is the maximum cutoff frequency of an ideal anti-aliasing filter. ## By a rational factor Let M/L denote the decimation factor, where: 1. Increase (resample) the sequence by a factor of L. This is called upsampling, or interpolation. 2. Decimate by a factor of M. Step 1 requires a lowpass filter after increasing (expanding) the data rate, and step 2 requires a lowpass filter before decimation. Therefore, both operations can be accomplished by a single filter with the lower of the two cutoff frequencies. For the M > L case, the anti-aliasing filter cutoff,  $$ \tfrac{0.5}{M} $$ cycles per intermediate sample, is the lower frequency.
https://en.wikipedia.org/wiki/Downsampling_%28signal_processing%29
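A short sketch of resampling by a rational factor L/M with a single polyphase filter, using SciPy's resample_poly; the 48 kHz to 44.1 kHz factors below are just an example:

```python
import numpy as np
from scipy.signal import resample_poly

x = np.random.randn(4800)                 # hypothetical signal at the original rate
y = resample_poly(x, up=147, down=160)    # upsample by L=147, filter once, decimate by M=160
print(len(x), len(y))                     # 4800 -> 4410 samples
```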
passage: Binary classification is the task of classifying the elements of a set into one of two groups (each called class). Typical binary classification problems include: - Medical testing to determine if a patient has a certain disease or not; - Quality control in industry, deciding whether a specification has been met; - In information retrieval, deciding whether a page should be in the result set of a search or not - In administration, deciding whether someone should be issued with a driving licence or not - In cognition, deciding whether an object is food or not food. When measuring the accuracy of a binary classifier, the simplest way is to count the errors. But in the real world one of the two classes is often more important, so that the numbers of the two different types of errors are of separate interest. For example, in medical testing, detecting a disease when it is not present (a false positive) is considered differently from not detecting a disease when it is present (a false negative). ## Four outcomes Given a classification of a specific data set, there are four basic combinations of actual data category and assigned category: true positives TP (correct positive assignments), true negatives TN (correct negative assignments), false positives FP (incorrect positive assignments), and false negatives FN (incorrect negative assignments).

|  | Test outcome positive | Test outcome negative |
| --- | --- | --- |
| Condition positive | True positive | False negative |
| Condition negative | False positive | True negative |

These can be arranged into a 2×2 contingency table, with rows corresponding to actual value – condition positive or condition negative – and columns corresponding to classification value – test outcome positive or test outcome negative.
https://en.wikipedia.org/wiki/Binary_classification
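A small sketch counting the four outcomes for hypothetical labels and predictions:

```python
actual    = [1, 1, 0, 0, 1, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))  # true positives
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # false negatives
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # false positives
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))  # true negatives
print(tp, fn, fp, tn)   # 3 1 1 3
```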
passage: $$ (ax)(ya) = a(xy)a $$ hold in any alternative algebra. In a unital alternative algebra, multiplicative inverses are unique whenever they exist. Moreover, for any invertible element $$ x $$ and all $$ y $$ one has $$ y = x^{-1}(xy). $$ This is equivalent to saying the associator $$ [x^{-1},x,y] $$ vanishes for all such $$ x $$ and $$ y $$ . If $$ x $$ and $$ y $$ are invertible then $$ xy $$ is also invertible with inverse $$ (xy)^{-1} = y^{-1}x^{-1} $$ . The set of all invertible elements is therefore closed under multiplication and forms a Moufang loop. This loop of units in an alternative ring or algebra is analogous to the group of units in an associative ring or algebra. Kleinfeld's theorem states that any simple non-associative alternative ring is a generalized octonion algebra over its center. The structure theory of alternative rings is presented in the book Rings That Are Nearly Associative by Zhevlakov, Slin'ko, Shestakov, and Shirshov. ## Occurrence The projective plane over any alternative division ring is a Moufang plane. Every composition algebra is an alternative algebra, as shown by Guy Roos in 2008:
https://en.wikipedia.org/wiki/Alternative_algebra
passage: The conversion between them is , since the units are , where H is the henry – the SI unit of inductance. Maxwell's equations then take the following forms (using the same notation above): + Maxwell's equations and Lorentz force equation with magnetic monopoles: SI units Name Without magnetic monopoles With magnetic monopoles Weber convention Ampere-meter convention Gauss's law Ampère's law (with Maxwell's extension) Gauss's law for magnetism Faraday's law of induction Lorentz force equation ### Potential formulation Maxwell's equations can also be expressed in terms of potentials as follows: Name Gaussian units SI units (Wb) SI units (A⋅m) Maxwell's equations (assuming Lorenz gauge) Lorenz gauge condition Relation to fields where $$ \Box = \nabla^2 - \frac{1}{c^2}\frac{\partial^2}{{\partial t}^2} $$ ### Tensor formulation Maxwell's equations in the language of tensors makes Lorentz covariance clear. We introduce electromagnetic tensors and preliminary four-vectors in this article as follows: Name Notation Gaussian units SI units (Wb or A⋅m) Electromagnetic tensor Dual electromagnetic tensor Four-current Four-potential Four-force where: - The signature of the Minkowski metric is .
https://en.wikipedia.org/wiki/Magnetic_monopole
passage: Of these the first two, but not the last three, are right triangles. - There exist integer triangles with three rational medians. The smallest has sides (68, 85, 87). Others include (127, 131, 158), (113, 243, 290), (145, 207, 328) and (327, 386, 409). - There are no isosceles Pythagorean triangles. - The only primitive Pythagorean triangles for which the square of the perimeter equals an integer multiple of the area are (3, 4, 5) with perimeter 12 and area 6 and with the ratio of perimeter squared to area being 24; (5, 12, 13) with perimeter 30 and area 30 and with the ratio of perimeter squared to area being 30; and (9, 40, 41) with perimeter 90 and area 180 and with the ratio of perimeter squared to area being 45. - There exists a unique (up to similitude) pair of a rational right triangle and a rational isosceles triangle which have the same perimeter and the same area. The unique pair consists of the (377, 135, 352) triangle and the (366, 366, 132) triangle. There is no pair of such triangles if the triangles are also required to be primitive integral triangles. The authors stress the striking fact that the second assertion can be proved by an elementary argumentation (they do so in their appendix A), whilst the first assertion needs modern highly non-trivial mathematics.
https://en.wikipedia.org/wiki/Integer_triangle
passage: ### Arrays Since an array is a collection of many distinct values, symbolic executors must either treat the entire array as one value or treat each array element as a separate location. The problem with treating each array element separately is that a reference such as "A[i]" can only be specified dynamically, when the value for i has a concrete value. ### Environment interactions Programs interact with their environment by performing system calls, receiving signals, etc. Consistency problems may arise when execution reaches components that are not under control of the symbolic execution tool (e.g., kernel or libraries). Consider the following example:

```c
int main()
{
    FILE *fp = fopen("doc.txt", "w+");
    ...
    if (condition) {
        fputs("some data", fp);
    } else {
        fputs("some other data", fp);
    }
    ...
    data = fgets(..., fp);
}
```

This program opens a file and, based on some condition, writes different kinds of data to the file. It then later reads back the written data. In theory, symbolic execution would fork two paths at line 5 and each path from there on would have its own copy of the file. The statement at line 11 would therefore return data that is consistent with the value of "condition" at line 5. In practice, file operations are implemented as system calls in the kernel, and are outside the control of the symbolic execution tool. The main approaches to address this challenge are: Executing calls to the environment directly. The advantage of this approach is that it is simple to implement.
https://en.wikipedia.org/wiki/Symbolic_execution
passage: Uniformly choose an index i from the range $$ \{1,\dots,P\} $$ 3) Generate a test $$ \hat{x} $$ from the distribution $$ p(x_k|x_{k-1}) $$ with $$ x_{k-1}=x_{k-1|k-1}^{(i)} $$ 4) Generate the probability of $$ \hat{y} $$ using $$ \hat{x} $$ from $$ p(y_k|x_k),~\mbox{with}~x_k=\hat{x} $$ where $$ y_k $$ is the measured value 5) Generate another uniform u from $$ [0, m_k] $$ where $$ m_k = \sup_{x_k} p(y_k|x_k) $$ 6) Compare u and $$ p(\hat{y}) $$ 6a) If u is larger then repeat from step 2 6b) If u is smaller then save $$ \hat{x} $$ as $$ x_{k|k}^{(i)} $$ and increment n 7) If n == N then quit The goal is to generate P "particles" at k using only the particles from $$ k-1 $$ . This requires that a Markov equation can be written (and computed) to generate a $$ x_k $$ based only upon $$ x_{k-1} $$ . This algorithm uses the composition of the P particles from $$ k-1 $$ to generate a particle at k and repeats (steps 2–6) until P particles are generated at k. This can be more easily visualized if x is viewed as a two-dimensional array.
https://en.wikipedia.org/wiki/Particle_filter
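A minimal sketch of this accept/reject propagation step; the transition sampler, the likelihood, and the bound m_k are problem-specific callables assumed to be supplied by the caller (all names are illustrative):

```python
import numpy as np

def propagate(prev_particles, sample_transition, likelihood, y_k, m_k, P, rng):
    """Generate P particles at time k from the particles at time k-1."""
    new_particles = []
    while len(new_particles) < P:
        i = rng.integers(len(prev_particles))              # uniformly choose an index i
        x_hat = sample_transition(prev_particles[i], rng)  # draw from p(x_k | x_{k-1})
        p_y = likelihood(y_k, x_hat)                       # p(y_k | x_k = x_hat)
        u = rng.uniform(0.0, m_k)                          # uniform on [0, m_k]
        if u < p_y:                                        # accept; otherwise retry from step 2
            new_particles.append(x_hat)
    return np.array(new_particles)
```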
passage: - The ratio of the lengths of two line segments on a line or on two parallel lines stays unchanged. As a special case, midpoints are mapped on midpoints. - The centroid of a set of points in space is mapped to the centroid of the image of those points - The length of a line segment parallel to the projection plane remains unchanged. The length of any line segment is not increased if the projection is orthographic. - Any circle that lies in a plane parallel to the projection plane is mapped onto a circle with the same radius. Any other circle is mapped onto an ellipse or a line segment (if direction $$ \vec v $$ is parallel to the circle's plane). - Angles in general are not preserved. But right angles with one line parallel to the projection plane remain unchanged. - Any rectangle is mapped onto a parallelogram or a line segment (if $$ \vec v $$ is parallel to the rectangle's plane). - Any figure in a plane that is parallel to the image plane is congruent to its image. ## Types ### Orthographic projection Orthographic projection is derived from the principles of descriptive geometry, and is a type of parallel projection where the projection rays are perpendicular to the projection plane. It is the projection type of choice for working drawings.
https://en.wikipedia.org/wiki/Parallel_projection
passage: ## Review of some numerical methods which are GDM All the methods below satisfy the first four core properties of GDM (coercivity, GD-consistency, limit-conformity, compactness), and in some cases the fifth one (piecewise constant reconstruction). ### Galerkin methods and conforming finite element methods Let $$ V_h\subset H^1_0(\Omega) $$ be spanned by the finite basis $$ (\psi_i)_{i\in I} $$ . The Galerkin method in $$ V_h $$ is identical to the GDM where one defines - $$ X_{D,0} = \{ u = (u_i)_{i\in I} \} = \mathbb{R}^I, $$ - $$ \Pi_D u = \sum_{i\in I} u_i \psi_i $$ - $$ \nabla_D u = \sum_{i\in I} u_i \nabla\psi_i. $$ In this case, $$ C_D $$ is the constant involved in the continuous Poincaré inequality, and, for all $$ \varphi\in H_\operatorname{div}(\Omega) $$ , $$ W_{D}(\varphi) = 0 $$ (defined by ()). Then () and () are implied by Céa's lemma.
https://en.wikipedia.org/wiki/Gradient_discretisation_method
passage: Taking the quotient by precisely imposes the Leibniz rule. ### Examples and basic facts For any commutative ring $$ R $$ , the Kähler differentials of the polynomial ring $$ S=R[t_1, \dots, t_n] $$ are a free $$ S $$ -module of rank n generated by the differentials of the variables: $$ \Omega^1_{R[t_1, \dots, t_n]/R} = \bigoplus_{i=1}^n R[t_1, \dots t_n] \, dt_i. $$ Kähler differentials are compatible with extension of scalars, in the sense that for a second $$ R $$ -algebra $$ R' $$ and $$ S' = S \otimes_R R' $$ , there is an isomorphism $$ \Omega_{S/R} \otimes_S S' \cong \Omega_{ S'/R'}. $$ As a particular case of this, Kähler differentials are compatible with localizations, meaning that if $$ W $$ is a multiplicative set in $$ S $$ , then there is an isomorphism $$ W^{-1}\Omega_{S/R} \cong \Omega_{W^{-1}S/R}. $$ Given two ring homomorphisms $$ R \to S \to T $$ , there is a short exact sequence of $$ T $$ -modules $$ \Omega_{S/R} \otimes_S T \to \Omega_{T/R} \to \Omega_{T/S} \to 0. $$
https://en.wikipedia.org/wiki/K%C3%A4hler_differential
passage: Once in orbit, their speed keeps them in orbit above the atmosphere. If, e.g., an elliptical orbit dips into dense air, the object will lose speed and re-enter (i.e. fall). Occasionally a spacecraft will intentionally intercept the atmosphere, in an act commonly referred to as an aerobraking maneuver. ### Illustration As an illustration of an orbit around a planet, the Newton's cannonball model may prove useful (see image below). This is a 'thought experiment', in which a cannon on top of a tall mountain is able to fire a cannonball horizontally at any chosen muzzle speed. The effects of air friction on the cannonball are ignored (or perhaps the mountain is high enough that the cannon is above the Earth's atmosphere, which is the same thing). If the cannon fires its ball with a low initial speed, the trajectory of the ball curves downward and hits the ground (A). As the firing speed is increased, the cannonball hits the ground farther (B) away from the cannon, because while the ball is still falling towards the ground, the ground is increasingly curving away from it (see first point, above). All these motions are actually "orbits" in a technical sense—they are describing a portion of an elliptical path around the center of gravity—but the orbits are interrupted by striking the Earth. If the cannonball is fired with sufficient speed, the ground curves away from the ball at least as much as the ball falls—so the ball never strikes the ground.
https://en.wikipedia.org/wiki/Orbit
passage: Notwithstanding this prior work, Paxos offered a particularly elegant formalism, and included one of the earliest proofs of safety for a fault-tolerant distributed consensus protocol. Reconfigurable state machines have strong ties to prior work on reliable group multicast protocols that support dynamic group membership, for example Birman's work in 1985 and 1987 on the virtually synchronous gbcast protocol. However, gbcast is unusual in supporting durability and addressing partitioning failures. Most reliable multicast protocols lack these properties, which are required for implementations of the state machine replication model. This point is elaborated in a paper by Lamport, Malkhi and Zhou. Paxos protocols are members of a theoretical class of solutions to a problem formalized as uniform agreement with crash failures. Lower bounds for this problem have been proved by Keidar and Shraer. Derecho, a C++ software library for cloud-scale state machine replication, offers a Paxos protocol that has been integrated with self-managed virtually synchronous membership. This protocol matches the Keidar and Shraer optimality bounds, and maps efficiently to modern remote DMA (RDMA) datacenter hardware (but uses TCP if RDMA is not available). ## Assumptions In order to simplify the presentation of Paxos, the following assumptions and definitions are made explicit. Techniques to broaden the applicability are known in the literature, and are not covered in this article. ### Processors - Processors operate at arbitrary speed. - Processors may experience failures.
https://en.wikipedia.org/wiki/Paxos_%28computer_science%29
passage: ## Outside a shell A solid, spherically symmetric body can be modeled as an infinite number of concentric, infinitesimally thin spherical shells. If one of these shells can be treated as a point mass, then a system of shells (i.e. the sphere) can also be treated as a point mass. Consider one such shell (the diagram shows a cross-section): (Note: the $$ d\theta $$ in the diagram refers to the small angle, not the arc length. The arc length is the shell's radius times $$ d\theta $$ .) Applying Newton's Universal Law of Gravitation, the sum of the forces due to the mass elements in the shaded band is $$ dF = \frac{Gm}{s^2} dM. $$ However, since there is partial cancellation due to the vector nature of the force in conjunction with the circular band's symmetry, the leftover component (in the direction pointing towards the centre of the shell) is given by $$ dF_r = \frac{Gm}{s^2} \cos(\varphi) \, dM $$ The total force on $$ m $$ , then, is simply the sum of the forces exerted by all the bands.
https://en.wikipedia.org/wiki/Shell_theorem
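A quick numerical check of this band-by-band construction, integrating dF_r over the bands and comparing with the point-mass result GmM/r²; the masses, radius, and distance below are arbitrary illustrative values:

```python
import numpy as np
from scipy.integrate import quad

G = 6.674e-11
m, M, R, r = 1.0, 5.0, 2.0, 10.0   # test mass, shell mass, shell radius, distance from centre

def dF_r(theta):
    # Band at polar angle theta: mass dM = (M / 2) sin(theta) dtheta,
    # at squared distance s^2 from m, keeping only the component along the centre line.
    s2 = r**2 + R**2 - 2 * r * R * np.cos(theta)
    cos_phi = (r - R * np.cos(theta)) / np.sqrt(s2)
    return G * m * (M / 2) * np.sin(theta) / s2 * cos_phi

total, _ = quad(dF_r, 0.0, np.pi)
print(total, G * m * M / r**2)   # the two values agree: the shell acts like a point mass
```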
passage: The pair $$ (W,S) $$ where $$ W $$ is a Coxeter group with generators $$ S=\{r_1, \dots , r_n\} $$ is called a Coxeter system. Note that in general $$ S $$ is not uniquely determined by $$ W $$ . For example, the Coxeter groups of type $$ B_3 $$ and $$ A_1\times A_3 $$ are isomorphic but the Coxeter systems are not equivalent, since the former has 3 generators and the latter has 1 + 3 = 4 generators (see below for an explanation of this notation). A number of conclusions can be drawn immediately from the above definition. - The relation $$ m_{ii} = 1 $$ means that $$ (r_ir_i)^1 = (r_i)^2 = 1 $$ for all $$ i $$  ; as such the generators are involutions. - If $$ m_{ij} = 2 $$ , then the generators $$ r_i $$ and $$ r_j $$ commute. This follows by observing that $$ xx = yy = 1 $$ , together with $$ xyxy = 1 $$ implies that $$ xy = x(xyxy)y = (xx)yx(yy) = yx $$ .
https://en.wikipedia.org/wiki/Coxeter_group
passage: Then the secant $$ P_1 P_2 $$ is parallel to the line $$ Q_1 Q_2 $$ . (The lines $$ x = x_1 $$ and $$ x = x_2 $$ are parallel to the axis of the parabola.) Proof: a straightforward calculation for the unit parabola $$ y = x^2 $$ . Application: The 2-points–2-tangents property can be used for the construction of the tangent of a parabola at point $$ P_2 $$ , if $$ P_1, P_2 $$ and the tangent at $$ P_1 $$ are given. Remark 1: The 2-points–2-tangents property of a parabola is an affine version of the 3-point degeneration of Pascal's theorem. Remark 2: The 2-points–2-tangents property should not be confused with the following property of a parabola, which also deals with 2 points and 2 tangents, but is not related to Pascal's theorem. ### Axis direction The statements above presume the knowledge of the axis direction of the parabola, in order to construct the points $$ Q_1, Q_2 $$ . The following property determines the points $$ Q_1, Q_2 $$ by two given points and their tangents only, and the result is that the line $$ Q_1 Q_2 $$ is parallel to the axis of the parabola. Let 1.
https://en.wikipedia.org/wiki/Parabola
passage: The fundamental similarities between Relational and Object databases are the start and the commit or rollback. After starting a transaction, database records or objects are locked, either read-only or read-write. Reads and writes can then occur. Once the transaction is fully defined, changes are committed or rolled back atomically, such that at the end of the transaction there is no inconsistency. ## Distributed transactions Database systems implement distributed transactions as transactions accessing data over multiple nodes. A distributed transaction enforces the ACID properties over multiple nodes, and might include systems such as databases, storage managers, file systems, messaging systems, and other data managers. In a distributed transaction there is typically an entity coordinating the whole process to ensure that all parts of the transaction are applied to all relevant systems. Moreover, the integration of Storage as a Service (StaaS) within these environments is crucial, as it offers a virtually infinite pool of storage resources, accommodating a range of cloud-based data store classes with varying availability, scalability, and ACID properties. This integration is essential for achieving higher availability, lower response time, and cost efficiency in data-intensive applications deployed across cloud-based data stores. ## Transactional filesystems The Namesys Reiser4 filesystem for Linux supports transactions, and as of Microsoft Windows Vista, the Microsoft NTFS filesystem supports distributed transactions across networks. There is ongoing research into more data-coherent filesystems, such as the Warp Transactional Filesystem (WTF).
https://en.wikipedia.org/wiki/Database_transaction
passage: The claw graph and the path graph on 4 vertices both have the same chromatic polynomial, for example. ## Examples Properties - Connected graphs - Bipartite graphs - Planar graphs - Triangle-free graphs - Perfect graphs - Eulerian graphs - Hamiltonian graphs ### Integer invariants - Order, the number of vertices - Size, the number of edges - Number of connected components - Circuit rank, a linear combination of the numbers of edges, vertices, and components - diameter, the longest of the shortest path lengths between pairs of vertices - girth, the length of the shortest cycle - Vertex connectivity, the smallest number of vertices whose removal disconnects the graph - Edge connectivity, the smallest number of edges whose removal disconnects the graph - Chromatic number, the smallest number of colors for the vertices in a proper coloring - Chromatic index, the smallest number of colors for the edges in a proper edge coloring - Choosability (or list chromatic number), the least number k such that G is k-choosable - Independence number, the largest size of an independent set of vertices - Clique number, the largest order of a complete subgraph - Arboricity - Graph genus - Pagenumber - Hosoya index - Wiener index - Colin de Verdière graph invariant - Boxicity ### Real number invariants - Clustering coefficient - Betweenness centrality - Fractional chromatic number - Algebraic connectivity - Isoperimetric number - Estrada index - Strength
https://en.wikipedia.org/wiki/Graph_property
passage: The result is a signal with considerably less content, one that would fit within existing 6 MHz black-and-white signals as a phase modulated differential signal. The average TV displays the equivalent of 350 pixels on a line, but the TV signal contains enough information for only about 50 pixels of blue and perhaps 150 of red. This is not apparent to the viewer in most cases, as the eye makes little use of the "missing" information anyway. ### PAL and SECAM The PAL and SECAM systems use nearly identical or very similar methods to transmit colour. In any case both systems are subsampled. ## Digital The term is much more commonly used in digital media and digital signal processing. The most widely used transform coding technique in this regard is the discrete cosine transform (DCT), proposed by Nasir Ahmed in 1972, and presented by Ahmed with T. Natarajan and K. R. Rao in 1974. This DCT, in the context of the family of discrete cosine transforms, is the DCT-II. It is the basis for the common JPEG image compression standard, which examines small blocks of the image and transforms them to the frequency domain for more efficient quantization (lossy) and data compression. In video coding, the H.26x and MPEG standards modify this DCT image compression technique across frames in a motion image using motion compensation, further reducing the size compared to a series of JPEGs. In audio coding, MPEG audio compression analyzes the transformed data according to a psychoacoustic model that describes the human ear's sensitivity to parts of the signal, similar to the TV model.
https://en.wikipedia.org/wiki/Transform_coding
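A small JPEG-style sketch on one hypothetical 8×8 block, using SciPy's DCT-II (dctn) with a single flat quantisation step rather than a real quantisation table:

```python
import numpy as np
from scipy.fft import dctn, idctn

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # level-shifted pixel block
coeffs = dctn(block, norm='ortho')        # 2-D DCT-II: transform to the frequency domain
q = 16.0                                  # flat quantisation step (illustrative only)
quantised = np.round(coeffs / q)          # lossy step: many high-frequency terms become 0
recovered = idctn(quantised * q, norm='ortho') + 128
```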
passage: ### Semiconductor Semiconductor memory uses semiconductor-based integrated circuit (IC) chips to store information. Data are typically stored in metal–oxide–semiconductor (MOS) memory cells. A semiconductor memory chip may contain millions of memory cells, consisting of tiny MOS field-effect transistors (MOSFETs) and/or MOS capacitors. Both volatile and non-volatile forms of semiconductor memory exist, the former using standard MOSFETs and the latter using floating-gate MOSFETs. In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor random-access memory (RAM), particularly dynamic random-access memory (DRAM). Since the turn of the century, a type of non-volatile floating-gate semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory is also used for secondary storage in various advanced electronic devices and specialized computers that are designed for them. As early as 2006, notebook and desktop computer manufacturers started using flash-based solid-state drives (SSDs) as default configuration options for the secondary storage either in addition to or instead of the more traditional HDD. ### Magnetic Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile. The information is accessed using one or more read/write heads which may contain one or more recording transducers.
https://en.wikipedia.org/wiki/Computer_data_storage
passage: ### Example 1 If $$ a,b,c>0 $$ , then the AM-GM inequality tells us that $$ (1+a)(1+b)(1+c)\ge 2\sqrt{1\cdot{a}} \cdot 2\sqrt{1\cdot{b}} \cdot 2\sqrt{1\cdot{c}} = 8\sqrt{abc} $$ ### Example 2 A simple upper bound for $$ n! $$ can be found. AM-GM tells us $$ 1+2+\dots+n \ge n\sqrt[n]{n!} $$ $$ \frac{n(n+1)}{2} \ge n\sqrt[n]{n!} $$ and so $$ \left(\frac{n+1}{2}\right)^n \ge n! $$ with equality at $$ n=1 $$ . Equivalently, $$ (n+1)^n \ge 2^nn! $$ ### Example 3 Consider the function $$ f(x,y,z) = \frac{x}{y} + \sqrt{\frac{y}{z}} + \sqrt[3]{\frac{z}{x}} $$ for all positive real numbers , and . Suppose we wish to find the minimal value of this function.
https://en.wikipedia.org/wiki/AM%E2%80%93GM_inequality
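A quick numerical check of the bound from Example 2, ((n+1)/2)^n ≥ n!, with equality at n = 1:

```python
from math import factorial

for n in range(1, 8):
    bound = ((n + 1) / 2) ** n
    print(n, bound, factorial(n), bound >= factorial(n))
```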
passage: In this case, the desired effect in applying a preconditioner is to make the quadratic form of the preconditioned operator $$ P^{-1}A $$ with respect to the $$ P $$ -based scalar product to be nearly spherical. ### Variable and non-linear preconditioning Denoting $$ T = P^{-1} $$ , we highlight that preconditioning is practically implemented as multiplying some vector $$ r $$ by $$ T $$ , i.e., computing the product $$ Tr. $$ In many applications, $$ T $$ is not given as a matrix, but rather as an operator $$ T(r) $$ acting on the vector $$ r $$ . Some popular preconditioners, however, change with $$ r $$ and the dependence on $$ r $$ may not be linear. Typical examples involve using non-linear iterative methods, e.g., the conjugate gradient method, as a part of the preconditioner construction. Such preconditioners may be practically very efficient, however, their behavior is hard to predict theoretically. ### Random preconditioning One interesting particular case of variable preconditioning is random preconditioning, e.g., multigrid preconditioning on random coarse grids. If used in gradient descent methods, random preconditioning can be viewed as an implementation of stochastic gradient descent and can lead to faster convergence, compared to fixed preconditioning, since it breaks the asymptotic "zig-zag" pattern of the gradient descent.
https://en.wikipedia.org/wiki/Preconditioner
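A minimal sketch of applying a preconditioner as an operator r ↦ Tr inside a Krylov solver; here T is a simple Jacobi (diagonal) preconditioner and the 1-D Laplacian matrix is just an illustrative test problem:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg

n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')  # SPD test matrix
b = np.ones(n)
d = A.diagonal()

T = LinearOperator((n, n), matvec=lambda r: r / d)   # T = P^{-1} applied to a vector r
x, info = cg(A, b, M=T)                              # preconditioned conjugate gradient
print(info, np.linalg.norm(A @ x - b))
```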
passage: The generalization of the theorem to Riemann surfaces is the famous uniformization theorem, which was proved in the 19th century by Henri Poincaré and Felix Klein. Here, too, rigorous proofs were first given after the development of richer mathematical tools (in this case, topology). For the proof of the existence of functions on Riemann surfaces, he used a minimality condition, which he called the Dirichlet principle. Karl Weierstrass found a gap in the proof: Riemann had not noticed that his working assumption (that the minimum existed) might not work; the function space might not be complete, and therefore the existence of a minimum was not guaranteed. Through the work of David Hilbert in the Calculus of Variations, the Dirichlet principle was finally established. Otherwise, Weierstrass was very impressed with Riemann, especially with his theory of abelian functions. When Riemann's work appeared, Weierstrass withdrew his paper from Crelle's Journal and did not publish it. They had a good understanding when Riemann visited him in Berlin in 1859. Weierstrass encouraged his student Hermann Amandus Schwarz to find alternatives to the Dirichlet principle in complex analysis, in which he was successful. An anecdote from Arnold Sommerfeld shows the difficulties which contemporary mathematicians had with Riemann's new ideas. In 1870, Weierstrass had taken Riemann's dissertation with him on a holiday to Rigi and complained that it was hard to understand.
https://en.wikipedia.org/wiki/Bernhard_Riemann
passage: More precisely, in its modern form, Donsker's invariance principle states that: As random variables taking values in the Skorokhod space $$ \mathcal{D}[0,1] $$ , the random function $$ W^{(n)} $$ converges in distribution to a standard Brownian motion $$ W:=(W(t))_{t\in [0,1]} $$ as $$ n\to \infty. $$ ## Formal statement Let Fn be the empirical distribution function of the sequence of i.i.d. random variables $$ X_1, X_2, X_3, \ldots $$ with distribution function F. Define the centered and scaled version of Fn by $$ G_n(x)= \sqrt n ( F_n(x) - F(x) ) $$ indexed by x ∈ R. By the classical central limit theorem, for fixed x, the random variable Gn(x) converges in distribution to a Gaussian (normal) random variable G(x) with zero mean and variance F(x)(1 − F(x)) as the sample size n grows. Theorem (Donsker, Skorokhod, Kolmogorov)
https://en.wikipedia.org/wiki/Donsker%27s_theorem
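A small simulation sketch of the pointwise statement: for uniform samples (so F(x) = x on [0, 1]) and a fixed x, the empirical variance of G_n(x) should be close to F(x)(1 − F(x)):

```python
import numpy as np

rng = np.random.default_rng(0)
n, x, reps = 2000, 0.3, 5000
samples = rng.uniform(size=(reps, n))
G_n = np.sqrt(n) * ((samples <= x).mean(axis=1) - x)   # sqrt(n) (F_n(x) - F(x)) per replication
print(G_n.var(), x * (1 - x))                          # both close to 0.21
```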
passage: The case $$ |N| \geq q = R(r, s-1) $$ is treated similarly. ### Case of more colours Lemma 2. If $$ c > 2 $$ , then $$ R(n_1, \dots, n_c) \leq R(n_1, \dots, n_{c-2}, R(n_{c-1}, n_c)). $$ Proof. Consider a complete graph of $$ R(n_1, \dots, n_{c-2}, R(n_{c-1}, n_c)) $$ vertices and colour its edges with $$ c $$ colours. Now 'go colour-blind' and pretend that colours $$ c-1 $$ and $$ c $$ are the same colour. Thus the graph is now $$ (c-1) $$ -coloured. Due to the definition of $$ R(n_1, \dots, n_{c-2}, R(n_{c-1}, n_c)), $$ such a graph contains either a $$ K_{n_i} $$ mono-chromatically coloured with colour $$ i $$ for some $$ 1 \leq i \leq c-2 $$ or a $$ K_{R(n_{c-1}, n_c)} $$ coloured in the 'blurred colour'. In the former case we are finished. In the latter case, we recover our sight again and see from the definition of $$ R(n_{c-1}, n_c) $$ that we must have either a $$ (c-1) $$ -monochrome $$ K_{n_{c-1}} $$ or a $$ c $$ -monochrome $$ K_{n_c} $$ . In either case the proof is complete. Lemma 1 implies that any $$ R(r, s) $$ is finite. The right hand side of the inequality in Lemma 2 expresses a Ramsey number for $$ c $$ colours in terms of Ramsey numbers for fewer colours. Therefore, any $$ R(n_1, \dots, n_c) $$ is finite for any number of colours. This proves the theorem.
https://en.wikipedia.org/wiki/Ramsey%27s_theorem
passage: None of the other sets of modes in a drum membrane have a central antinode, and in all of them the center of the drum does not move. These correspond to a node at the nucleus for all non-s orbitals in an atom. These orbitals all have some angular momentum, and in the planetary model, they correspond to particles in orbit with eccentricity less than 1.0, so that they do not pass straight through the center of the primary body, but keep somewhat away from it. In addition, the drum modes analogous to p and d modes in an atom show spatial irregularity along the different radial directions from the center of the drum, whereas all of the modes analogous to s modes are perfectly symmetrical in radial direction. The non-radial-symmetry properties of non-s orbitals are necessary to localize a particle with angular momentum and a wave nature in an orbital where it must tend to stay away from the central attraction force, since any particle localized at the point of central attraction could have no angular momentum. For these modes, waves in the drum head tend to avoid the central point. Such features again emphasize that the shapes of atomic orbitals are a direct consequence of the wave nature of electrons. ## Orbital energy In atoms with one electron (hydrogen-like atom), the energy of an orbital (and, consequently, any electron in the orbital) is determined mainly by $$ n $$ . The $$ n=1 $$ orbital has the lowest possible energy in the atom.
https://en.wikipedia.org/wiki/Atomic_orbital
passage: Finally, Allais's prominence was further promoted when he received the Nobel Prize in Economic Sciences in 1988 for "his pioneering contributions to the theory of markets and efficient utilization of resources", thus bolstering the recognition of the paradox. ## Criticisms Whilst the Allais paradox is considered a counterexample to expected utility theory, Luc Wathieu, Professor of Marketing at Georgetown University, argued that the Allais paradox demonstrates the need for a modified utility function, and is not paradoxical in nature. In A Critique of the Allais Paradox (1993), Wathieu contends that the paradox "does not constitute a valid test of the independence axiom" that is required in expected utility theory. This is because the paradox involves the comparison of preferences between two separate cases, rather than the preferences in one choice set. ## Applications The mismatch between human behaviour and classical economics that is highlighted by the Allais paradox indicates the need for a remodelled expected utility function to account for the violation of the independence axiom. Yoshimura et al. (2013) modified the standard utility function proposed by expected utility theory, coined the "dynamic utility function", by including a variable that is dependent on the state of an individual. The findings of this experiment suggested that the switching of preferences apparent in the Allais paradox are due to the state of the individual, which include bankruptcy and wealth. List & Haigh (2005) tests the appearance of the Allais paradox in the behaviours of professional traders through an experiment and compares the results with those of university students.
https://en.wikipedia.org/wiki/Allais_paradox
passage: Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures. A subdiscipline of computer vision, it employs mathematical algorithms to interpret gestures. Gesture recognition offers a path for computers to begin to better understand and interpret human body language, previously not possible through text or unenhanced graphical user interfaces (GUIs). Gestures can originate from any bodily motion or state, but commonly originate from the face or hand. One area of the field is emotion recognition derived from facial expressions and hand gestures. Users can make simple gestures to control or interact with devices without physically touching them. Many approaches have been made using cameras and computer vision algorithms to interpret sign language, however, the identification and recognition of posture, gait, proxemics, and human behaviors is also the subject of gesture recognition techniques. ## Overview Gesture recognition has application in such areas as: - Automobiles - Consumer electronics - Transit - Gaming - Handheld devices - Defense - Home automation - Automated sign language translation Gesture recognition can be conducted with techniques from computer vision and image processing. The literature includes ongoing work in the computer vision field on capturing gestures or more general human pose and movements by cameras connected to a computer.
https://en.wikipedia.org/wiki/Gesture_recognition
passage: Some estimates have placed the start of the second phase in the Deccan Traps eruptions within 50,000 years after the Chicxulub impact. Combined with mathematical modelling of the seismic waves that would have been generated by the impact, this has led to the suggestion that the Chicxulub impact may have triggered these eruptions by increasing the permeability of the mantle plume underlying the Deccan Traps. Whether the Deccan Traps were a major cause of the extinction, on par with the Chicxulub impact, remains uncertain. Proponents consider the climatic impact of the sulfur dioxide released to have been on par with the Chicxulub impact, and also note the role of flood basalt volcanism in other mass extinctions like the Permian-Triassic extinction event. They consider the Chicxulub impact to have worsened the ongoing climate change caused by the eruptions. Meanwhile, detractors point out the sudden nature of the extinction and that other pulses in Deccan Traps activity of comparable magnitude did not appear to have caused extinctions. They also contend that the causes of different mass extinctions should be assessed separately. In 2020, Alfio Chiarenza and colleagues suggested that the Deccan Traps may even have had the opposite effect: they suggested that the long-term warming caused by its carbon dioxide emissions may have dampened the impact winter from the Chicxulub impact. ### Possible Paleocene survivors Non-avian dinosaur remains have occasionally been found above the K-Pg boundary.
https://en.wikipedia.org/wiki/Dinosaur
passage: 1. Encourages teamwork It is an ideal tool for group brainstorming sessions. It allows team members to contribute different perspectives, enriching the analysis and improving the identification of causes. 1. Organizes causes in a logical way It groups causes into categories (such as the 5Ms or 4Ss), allowing the problem to be analyzed from different angles. This structure helps quickly identify critical areas within the process. ## Root causes Root-cause analysis is intended to reveal key relationships among various variables, and the possible causes provide additional insight into process behavior. It shows high-level causes that lead to the problem encountered by providing a snapshot of the current situation. There can be confusion about the relationships between problems, causes, symptoms and effects. Smith highlights this and the common question “Is that a problem or a symptom?” which mistakenly presumes that problems and symptoms are mutually exclusive categories. A problem is a situation that bears improvement; a symptom is the effect of a cause: a situation can be both a problem and a symptom. At a practical level, a cause is whatever is responsible for, or explains, an effect - a factor "whose presence makes a critical difference to the occurrence of an outcome". The causes emerge by analysis, often through brainstorming sessions, and are grouped into categories on the main branches off the fishbone. To help structure the approach, the categories are often selected from one of the common models shown below, but may emerge as something unique to the application in a specific case. Each potential cause is traced back to find the root cause, often using the 5 Whys technique. Typical categories include:
https://en.wikipedia.org/wiki/Ishikawa_diagram
passage: $$ The curl of a cross product can be written as $$ \nabla\times\left(\mathbf{P}\times\mathbf{Q}\right)=\left(\mathbf{Q}\cdot\nabla\right)\mathbf{P}-\left(\mathbf{P}\cdot\nabla\right)\mathbf{Q}+\mathbf{P}\left(\nabla\cdot\mathbf{Q}\right)-\mathbf{Q}\left(\nabla\cdot\mathbf{P}\right); $$ Green's vector identity can then be rewritten as $$ \mathbf{P}\cdot\Delta \mathbf{Q}-\mathbf{Q}\cdot\Delta \mathbf{P}= \nabla \cdot \left[\mathbf{P} \left(\nabla\cdot\mathbf{Q}\right)-\mathbf{Q} \left( \nabla \cdot \mathbf{P}\right)-\nabla \times \left( \mathbf{P} \times \mathbf{Q} \right) +\mathbf{P}\times\left(\nabla\times\mathbf{Q}\right) - \mathbf{Q}\times \left(\nabla\times\mathbf{P}\right)\right]. $$ Since the divergence of a curl is zero, the third term vanishes to yield Green's second vector identity: $$
https://en.wikipedia.org/wiki/Green%27s_identities
passage: ### Binomial theorem One can prove Bernoulli's inequality for x ≥ 0 using the binomial theorem. It is true trivially for r = 0, so suppose r is a positive integer. Then $$ (1+x)^r = 1 + rx + \tbinom r2 x^2 + ... + \tbinom rr x^r. $$ Clearly $$ \tbinom r2 x^2 + ... + \tbinom rr x^r \ge 0, $$ and hence $$ (1+x)^r \ge 1+rx $$ as required. ### Using convexity For $$ 0\neq x> -1 $$ the function $$ h(\alpha)=(1+x)^\alpha $$ is strictly convex. Therefore, for $$ 0<\alpha<1 $$ holds $$ (1+x)^\alpha=h(\alpha)=h((1-\alpha)\cdot 0+\alpha\cdot 1)<(1-\alpha) h(0)+\alpha h(1)=1+\alpha x $$ and the reversed inequality is valid for $$ \alpha<0 $$ and $$ \alpha>1 $$ . Another way of using convexity is to re-cast the desired inequality to $$ \log (1 + x) \geq \frac{1}{r}\log( 1 + rx) $$ for real $$ r\geq 1 $$ and real $$ x > -1/r $$ .
https://en.wikipedia.org/wiki/Bernoulli%27s_inequality
passage: Material parameters: chemical potential: $$ \mu $$ (J) particle number: $$ N $$   (particles or mole) For a system with different types $$ i $$ of particles, a small change in the internal energy is given by: $$ \mathrm{d}U = T\,\mathrm{d}S - p\,\mathrm{d}V + \sum_i \mu_i \,\mathrm{d}N_i\,, $$ where $$ U $$ is internal energy, $$ T $$ is temperature, $$ S $$ is entropy, $$ p $$ is pressure, $$ V $$ is volume, $$ \mu_i $$ is the chemical potential of the $$ i $$ -th particle type, and $$ N_i $$ is the number of $$ i $$ -type particles in the system. Here, the temperature, pressure, and chemical potential are the generalized forces, which drive the generalized changes in entropy, volume, and particle number respectively. These parameters all affect the internal energy of a thermodynamic system. A small change $$ \mathrm{d}U $$ in the internal energy of the system is given by the sum of the flow of energy across the boundaries of the system due to the corresponding conjugate pair. These concepts will be expanded upon in the following sections. While dealing with processes in which systems exchange matter or energy, classical thermodynamics is not concerned with the rate at which such processes take place, termed kinetics.
https://en.wikipedia.org/wiki/Conjugate_variables_%28thermodynamics%29
passage: #### Velocity The total specific orbital energy is $$ \epsilon = \epsilon_k+\epsilon_p = \frac{v^2}{2} - \frac{G M}{r} \, $$ Since energy is conserved, $$ \epsilon $$ cannot depend on the distance, $$ r $$ , from the center of the central body to the space vehicle in question, i.e. v must vary with r to keep the specific orbital energy constant. Therefore, the object can reach infinite $$ r $$ only if this quantity is nonnegative, which implies $$ v\geq\sqrt{\frac{2 G M}{r}}. $$ The escape velocity from the Earth's surface is about 11 km/s, but that is insufficient to send the body an infinite distance because of the gravitational pull of the Sun. To escape the Solar System from a location at a distance from the Sun equal to the distance Sun–Earth, but not close to the Earth, requires around 42 km/s velocity, but there will be "partial credit" for the Earth's orbital velocity for spacecraft launched from Earth, if their further acceleration (due to the propulsion system) carries them in the same direction as Earth travels in its orbit.
https://en.wikipedia.org/wiki/Orbital_mechanics
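The escape-velocity bound quoted above is easy to check numerically. Below is a minimal Python sketch; the values of G and of Earth's mass and mean radius are standard reference numbers assumed for the example, not taken from the passage.

```python
# Sketch: evaluating v >= sqrt(2GM/r) at Earth's surface with assumed standard constants.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24     # kg
R_earth = 6.371e6      # m (mean radius)

v_escape = math.sqrt(2 * G * M_earth / R_earth)
print(f"Escape velocity from Earth's surface: {v_escape/1000:.2f} km/s")  # ~11.2 km/s
```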
passage: For a completely multiplicative function ƒ(n), and assuming the series converges for Re(s) > σ0, then one has that $$ \frac {F^\prime(s)}{F(s)} = - \sum_{n=1}^\infty \frac{f(n)\Lambda(n)}{n^s} $$ converges for Re(s) > σ0. Here, Λ(n) is the von Mangoldt function. ## Products Suppose $$ F(s)= \sum_{n=1}^\infty f(n)n^{-s} $$ and $$ G(s)= \sum_{n=1}^\infty g(n)n^{-s}. $$ If both F(s) and G(s) are absolutely convergent for s > a and s > b then we have $$ \frac 1 {2T}\int_{-T}^T \,F(a+it)G(b-it)\,dt= \sum_{n=1}^\infty f(n)g(n)n^{-a-b} \text{ as }T \sim \infty. $$ If a = b and ƒ(n) = g(n) we have $$ \frac 1 {2T}\int_{-T}^T |F(a+it)|^2 \, dt= \sum_{n=1}^\infty [f(n)]^2 n^{-2a} \text{ as } T \sim \infty. $$
https://en.wikipedia.org/wiki/Dirichlet_series
passage: ## Including thermodynamic interactions From the formulae of the previous section it appears that the time component of the four-force is the power expended, $$ \mathbf{f}\cdot\mathbf{u} $$ , apart from relativistic corrections $$ \gamma/c $$ . This is only true in purely mechanical situations, when heat exchanges vanish or can be neglected. In the full thermo-mechanical case, not only work, but also heat contributes to the change in energy, which is the time component of the energy–momentum covector. The time component of the four-force includes in this case a heating rate $$ h $$ , besides the power $$ \mathbf{f}\cdot\mathbf{u} $$ . Note that work and heat cannot be meaningfully separated, though, as they both carry inertia. This fact extends also to contact forces, that is, to the stress–energy–momentum tensor. Therefore, in thermo-mechanical situations the time component of the four-force is not proportional to the power $$ \mathbf{f}\cdot\mathbf{u} $$ but has a more generic expression, to be given case by case, which represents the supply of internal energy from the combination of work and heat, and which in the Newtonian limit becomes $$ h + \mathbf{f} \cdot \mathbf{u} $$ .
https://en.wikipedia.org/wiki/Four-force
passage: The minimal model and abundance conjectures would imply that every variety of Kodaira dimension $$ -\infty $$ is uniruled, and it is known that every uniruled variety in characteristic zero is birational to a Fano fiber space. The minimal model and abundance conjectures would imply that every variety of Kodaira dimension 0 is birational to a Calabi-Yau variety with terminal singularities. The Iitaka conjecture states that the Kodaira dimension of a fibration is at least the sum of the Kodaira dimension of the base and the Kodaira dimension of a general fiber; see for a survey. The Iitaka conjecture helped to inspire the development of minimal model theory in the 1970s and 1980s. It is now known in many cases, and would follow in general from the minimal model and abundance conjectures. ## The relationship to Moishezon manifolds Nakamura and Ueno proved the following additivity formula for complex manifolds (). Although the base space is not required to be algebraic, the assumption that all the fibers are isomorphic is very special. Even with this assumption, the formula can fail when the fiber is not Moishezon. Let π: V → W be an analytic fiber bundle of compact complex manifolds, meaning that π is locally a product (and so all fibers are isomorphic as complex manifolds). Suppose that the fiber F is a Moishezon manifold. Then $$ \kappa(V)=\kappa(F)+\kappa(W). $$
https://en.wikipedia.org/wiki/Kodaira_dimension
passage: That is: $$ \operatorname{erf}(x) = \frac 1 {\sqrt\pi} \int_{-x}^x e^{-t^2} \, dt = \frac 2 {\sqrt\pi} \int_0^x e^{-t^2} \, dt\,. $$ These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions. However, many numerical approximations are known; see below for more. The two functions are closely related, namely $$ \Phi(x) = \frac{1}{2} \left[1 + \operatorname{erf}\left( \frac x {\sqrt 2} \right) \right]\,. $$ For a generic normal distribution with density $$ f $$ , mean $$ \mu $$ and variance $$ \sigma^2 $$ , the cumulative distribution function is $$ F(x) = \Phi{\left(\frac{x-\mu} \sigma \right)} = \frac{1}{2} \left[1 + \operatorname{erf}\left(\frac{x-\mu}{\sigma \sqrt 2 }\right)\right]\,. $$ The complement of the standard normal cumulative distribution function, $$ Q(x) = 1 - \Phi(x) $$ , is often called the Q-function, especially in engineering texts.
https://en.wikipedia.org/wiki/Normal_distribution
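The relation between the standard normal CDF and the error function stated above can be verified numerically. A minimal sketch, comparing the erf-based formula against a crude direct integration of the standard normal density; the test points are arbitrary.

```python
# Sketch: checking Phi(x) = (1/2)[1 + erf(x/sqrt(2))] against trapezoidal quadrature.
import math

def phi_via_erf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_via_quadrature(x, n=20000, lo=-10.0):
    # crude trapezoidal integration of the standard normal pdf from lo to x
    pdf = lambda t: math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)
    h = (x - lo) / n
    s = 0.5 * (pdf(lo) + pdf(x)) + sum(pdf(lo + i * h) for i in range(1, n))
    return s * h

for x in (-1.0, 0.0, 1.96):
    print(x, phi_via_erf(x), phi_via_quadrature(x))   # the two columns should agree closely
```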
passage: Some human rights organizations strongly criticize individual nation-states for their immigration policies and practices. Treatment of migrants in host countries, both by governments, employers, and original population, is a topic of continual debate and criticism, and the violation of migrant human rights is ongoing. The United Nations Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families has been ratified by 48 states, most of which are heavy exporters of cheap labor. Major migrant-receiving countries and regions, including Western Europe, North America, Pacific Asia, Australia, and the Gulf States, have not ratified the convention, even though they are host to the majority of international migrant workers. Although freedom of movement is often recognized as a civil right in many documents such as the Universal Declaration of Human Rights (1948) and the International Covenant on Civil and Political Rights (1966), the freedom only applies to movement within national borders and the ability to return to one's home state. Some argue that free migration is a right, and that the restrictive immigration policies, typical of nation-states, violate this right. Such arguments are common among libertarian perspectives on immigration. Open borders activist Jacob Appel has written, "Treating human beings differently, simply because they were born on the opposite side of a national boundary, is hard to justify under any mainstream philosophical, religious or ethical theory." Where immigration is permitted, it is typically selective. As of 2003, family reunification accounted for approximately two-thirds of legal immigration to the US every year.
https://en.wikipedia.org/wiki/Immigration
passage: ## Converting from infix notation Edsger W. Dijkstra invented the shunting-yard algorithm to convert infix expressions to postfix expressions (reverse Polish notation), so named because its operation resembles that of a railroad shunting yard. There are other ways of producing postfix expressions from infix expressions. Most operator-precedence parsers can be modified to produce postfix expressions; in particular, once an abstract syntax tree has been constructed, the corresponding postfix expression is given by a simple post-order traversal of that tree. ## Implementations ### Hardware calculators #### Early history The first computer implementing a form of reverse Polish notation (but without the name and also without a stack), was Konrad Zuse's Z3, which he started to construct in 1938 and demonstrated publicly on 12 May 1941. In dialog mode, it allowed operators to enter two operands followed by the desired operation. It was destroyed on 21 December 1943 in a bombing raid. With Zuse's help a first replica was built in 1961. The 1945 Z4 also added a 2-level stack. Other early computers to implement architectures enabling reverse Polish notation were the English Electric Company's KDF9 machine, which was announced in 1960 and commercially available in 1963, and the Burroughs B5000, announced in 1961 and also delivered in 1963: Presumably, the KDF9 designers drew ideas from Hamblin's GEORGE (General Order Generator), an autocode programming system written for a DEUCE computer installed at the University of Sydney, Australia, in 1957.
https://en.wikipedia.org/wiki/Reverse_Polish_notation
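As a sketch of the infix-to-postfix conversion described above, here is a minimal shunting-yard routine for the four usual left-associative binary operators and parentheses. The token format and the small operator set are assumptions made for the example, not part of the original text.

```python
# Sketch: Dijkstra-style shunting-yard conversion from infix tokens to postfix (RPN).
def to_postfix(tokens):
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    output, ops = [], []
    for tok in tokens:
        if tok in prec:
            # pop operators of greater or equal precedence (all four are left-associative)
            while ops and ops[-1] in prec and prec[ops[-1]] >= prec[tok]:
                output.append(ops.pop())
            ops.append(tok)
        elif tok == '(':
            ops.append(tok)
        elif tok == ')':
            while ops and ops[-1] != '(':
                output.append(ops.pop())
            ops.pop()                      # discard the '('
        else:                              # operand
            output.append(tok)
    while ops:
        output.append(ops.pop())
    return output

# "3 + 4 * (2 - 1)"  ->  "3 4 2 1 - * +"
print(' '.join(to_postfix(['3', '+', '4', '*', '(', '2', '-', '1', ')'])))
```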
passage: Similarly, if $$ f(x)=\begin{cases} 0 & x < 1, \\ x^b & x > 1, \end{cases} $$ then $$ \mathcal M f (s)= \int_1^\infty x^{s-1}x^bdx = \int_1^\infty x^{s+b-1}dx = - \frac 1 {s+b}. $$ Thus $$ \mathcal M f (s) $$ has a simple pole at $$ s=-b $$ and is thus defined for $$ \Re (s)<-b $$ . ### Exponential functions For $$ p > 0 $$ , let $$ f(x)=e^{-px} $$ . Then $$ \mathcal M f (s) = \int_0^\infty x^{s} e^{-px}\frac{dx}{x} = \int_0^\infty \left(\frac{u}{p} \right)^{s}e^{-u} \frac{du}{u} = \frac{1}{p^s}\int_0^\infty u^{s}e^{-u} \frac{du}{u} = \frac{1}{p^{s}}\Gamma(s). $$ ### Zeta function It is possible to use the Mellin transform to produce one of the fundamental formulas for the Riemann zeta function, $$ \zeta(s) $$ .
https://en.wikipedia.org/wiki/Mellin_transform
passage: One can use linear regression to obtain an estimate $$ \hat{a} $$ of the true underlying trend slope $$ a $$ and an estimate $$ \hat{b} $$ of the underlying intercept term b; if the estimate $$ \hat{a} $$ is significantly different from zero, this is sufficient to show with high confidence that the variable Y is non-stationary. The residuals from this regression are given by $$ \hat{e}_t = Y_t - \hat{a} \cdot t - \hat{b}. $$ If these estimated residuals can be statistically shown to be stationary (more precisely, if one can reject the hypothesis that the true underlying errors are non-stationary), then the residuals are referred to as the detrended data, and the original series {Yt} is said to be trend-stationary even though it is not stationary. ## Stationarity around other types of trend ### Exponential growth trend Many economic time series are characterized by exponential growth. For example, suppose that one hypothesizes that gross domestic product is characterized by stationary deviations from a trend involving a constant growth rate. Then it could be modeled as $$ \text{GDP}_t = Be^{at}U_t $$ with Ut being hypothesized to be a stationary error process.
https://en.wikipedia.org/wiki/Trend-stationary_process
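The detrending step described above can be illustrated in a few lines. The sketch below simulates a hypothetical trend-stationary series, fits the linear trend by least squares, and forms the residuals; NumPy is assumed to be available and all numbers are illustrative.

```python
# Sketch: OLS detrending of a simulated trend-stationary series Y_t = a*t + b + noise.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
y = 0.5 * t + 3.0 + rng.normal(scale=2.0, size=t.size)   # a*t + b + stationary noise

a_hat, b_hat = np.polyfit(t, y, 1)                        # least-squares slope and intercept
residuals = y - (a_hat * t + b_hat)                       # the "detrended" series

print(f"estimated slope a = {a_hat:.3f}, intercept b = {b_hat:.3f}")
print(f"residual mean = {residuals.mean():.3f}, residual std = {residuals.std():.3f}")
```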
passage: $$ \begin{align} \mathcal P^T &= - \mathcal P \\ |\mathcal P| &= \frac{1}{|M|^2}\\ \mathcal P^{-1}(\varepsilon)&= -(M^{-1})^T J M^{-1} = - \mathcal L (\varepsilon) \end{align} $$ where $$ \mathcal L(\varepsilon) $$ is known as the Lagrange matrix, whose elements correspond to Lagrange brackets.
https://en.wikipedia.org/wiki/Poisson_bracket
passage: This determination is tested by transferring the tissue from a growth regulator-supplemented medium to a basal medium containing essential minerals, vitamins, and a carbon source but no plant growth regulators. At this stage, the tissue completes the induction process and becomes fully determined to its developmental fate. A key concept in this process is canalization, which refers to the ability of a developmental pathway to consistently produce a standard phenotype despite potential genetic or environmental variations. If explants are removed from a shoot-inducing medium before full canalization occurs, shoot formation is significantly reduced, and root development becomes the dominant outcome. This phenomenon highlights the morphogenic plasticity of plant tissues in vitro, demonstrating their ability to adjust to external conditions and developmental cues. ### Differentiation During this phase, the process of morphological differentiation begins, leading to the formation and development of the nascent organ. The initiation of organogenesis is characterized by a distinct shift in polarity, followed by the establishment of radial symmetry and subsequent growth along the newly defined axis, ultimately forming the structural bulge that marks organ initiation. The sequential development of organogenesis can be observed in species such as Pinus oocarpa Schiede, where shoot buds are regenerated directly from cotyledons through direct organogenesis. However, the specific developmental patterns may vary across different plant species grown in vitro.
https://en.wikipedia.org/wiki/Plant_development
passage: The subset of values of $$ s $$ for which the Laplace transform converges absolutely is called the region of absolute convergence, or the domain of absolute convergence. In the two-sided case, it is sometimes called the strip of absolute convergence. The Laplace transform is analytic in the region of absolute convergence: this is a consequence of Fubini's theorem and Morera's theorem. Similarly, the set of values for which $$ F(s) $$ converges (conditionally or absolutely) is known as the region of conditional convergence, or simply the region of convergence (ROC). If the Laplace transform converges (conditionally) at $$ s = s_0 $$ , then it automatically converges for all $$ s $$ with $$ \operatorname{Re}(s) > \operatorname{Re}(s_0) $$ . Therefore, the region of convergence is a half-plane of the form $$ \operatorname{Re}(s) > a $$ , possibly including some points of the boundary line $$ \operatorname{Re}(s) = a $$ . In the region of convergence $$ \operatorname{Re}(s) > \operatorname{Re}(s_0) $$ , the Laplace transform of $$ f $$ can be expressed by integrating by parts as the integral $$ F(s) = (s-s_0)\int_0^\infty e^{-(s-s_0)t}\beta(t)\,dt, \quad \beta(u) = \int_0^u e^{-s_0t}f(t)\,dt. $$ That is, $$ F(s) $$ can effectively be expressed, in the region of convergence, as the absolutely convergent Laplace transform of some other function. In particular, it is analytic. There are several Paley–Wiener theorems concerning the relationship between the decay properties of $$ f $$ , and the properties of the Laplace transform within the region of convergence.
https://en.wikipedia.org/wiki/Laplace_transform
passage: This allows the red term to expand all the way down and, thus, removes the green term completely. This yields the new minimum equation: $$ f(A,B,C,D) = A + BC\overline{D} $$ Note that the first term is just $$ A $$ , not the longer product term required without the don't care. In this case, the don't care has dropped a term (the green rectangle); simplified another (the red one); and removed the race hazard (removing the yellow term as shown in the following section on race hazards). The inverse case is simplified as follows: $$ \overline{f(A,B,C,D)} = \overline{A}\,\overline{B} + \overline{A}\,\overline{C} + \overline{A}D $$ Through the use of De Morgan's laws, the product of sums can be determined: $$ \begin{align} f(A,B,C,D) &= \overline{\overline{f(A,B,C,D)}} \\ &= \overline{\overline{A}\,\overline{B} + \overline{A}\,\overline{C} + \overline{A}D} \\ &= \left(A + B\right)\left(A + C\right)\left(A + \overline{D}\right) \end{align} $$ ## Race hazards ### Elimination Karnaugh maps are useful for detecting and eliminating race conditions. Race hazards are very easy to spot using a Karnaugh map, because a race condition may exist when moving between any pair of adjacent, but disjoint, regions circumscribed on the map.
https://en.wikipedia.org/wiki/Karnaugh_map
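Because the sum-of-products form, its complement, and the product-of-sums form given above are small Boolean expressions, they can be checked exhaustively. The sketch below is a brute-force truth-table comparison; the equivalence it tests is the De Morgan rewriting quoted above.

```python
# Sketch: truth-table check that A + BC~D, its complement ~A~B + ~A~C + ~A D,
# and the product-of-sums form (A+B)(A+C)(A+~D) are mutually consistent.
from itertools import product

for A, B, C, D in product((False, True), repeat=4):
    f_sop = A or (B and C and not D)                                   # A + BC~D
    f_comp = (not A and not B) or (not A and not C) or (not A and D)   # stated complement
    f_pos = (A or B) and (A or C) and (A or not D)                     # (A+B)(A+C)(A+~D)
    assert f_comp == (not f_sop)
    assert f_pos == f_sop
print("all 16 input combinations consistent")
```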
passage: $$ V'\equiv V(i+1) $$ , the expansion coefficients are $$ \begin{alignat}{2} Q_\mathbf{x} &= \ell_\mathbf{x}+ \mathbf{f}_\mathbf{x}^\mathsf{T} V'_\mathbf{x} \\ Q_\mathbf{u} &= \ell_\mathbf{u}+ \mathbf{f}_\mathbf{u}^\mathsf{T} V'_\mathbf{x} \\ Q_{\mathbf{x}\mathbf{x}} &= \ell_{\mathbf{x}\mathbf{x}} + \mathbf{f}_\mathbf{x}^\mathsf{T} V'_{\mathbf{x}\mathbf{x}}\mathbf{f}_\mathbf{x}+V_\mathbf{x}'\cdot\mathbf{f}_{\mathbf{x}\mathbf{x}}\\ Q_{\mathbf{u}\mathbf{u}} &= \ell_{\mathbf{u}\mathbf{u}} + \mathbf{f}_\mathbf{u}^\mathsf{T} V'_{\mathbf{x}\mathbf{x}}\mathbf{f}_\mathbf{u}+{V'_\mathbf{x}} \cdot\mathbf{f}_{\mathbf{u} \mathbf{u}}\\
https://en.wikipedia.org/wiki/Differential_dynamic_programming
passage: Some plot the data on the vertical axis; others plot the data on the horizontal axis. Different sources use slightly different approximations for rankits. The formula used by the "qqnorm" function in the basic "stats" package in R (programming language) is as follows: $$ z_i = \Phi^{-1}\left( \frac{i-a}{n+1-2a} \right), $$ for $$ i = 1, 2, \dots, n $$ , where $$ a = 3/8 $$ if $$ n \leq 10 $$ and $$ a = 0.5 $$ for $$ n > 10 $$ , and $$ \Phi^{-1} $$ is the standard normal quantile function. If the data are consistent with a sample from a normal distribution, the points should lie close to a straight line. As a reference, a straight line can be fit to the points. The further the points vary from this line, the greater the indication of departure from normality. If the sample has mean 0, standard deviation 1 then a line through 0 with slope 1 could be used. With more points, random deviations from a line will be less pronounced. Normal plots are often used with as few as 7 points, e.g., with plotting the effects in a saturated model from a 2-level fractional factorial experiment. With fewer points, it becomes harder to distinguish between random variability and a substantive deviation from normality. ## Other distributions Probability plots for distributions other than the normal are computed in exactly the same way. The normal quantile function is simply replaced by the quantile function of the desired distribution. In this way, a probability plot can easily be generated for any distribution for which one has the quantile function.
https://en.wikipedia.org/wiki/Normal_probability_plot
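A minimal sketch of the plotting positions, using the qqnorm-style offset described above (a = 3/8 for n ≤ 10, 0.5 otherwise). The sample data are hypothetical, and Python 3.8+'s statistics.NormalDist is assumed to be available.

```python
# Sketch: computing rankit-style plotting positions z_i for a normal probability plot.
from statistics import NormalDist

def rankits(n):
    a = 0.375 if n <= 10 else 0.5
    return [NormalDist().inv_cdf((i - a) / (n + 1 - 2 * a)) for i in range(1, n + 1)]

sample = sorted([4.9, 5.2, 4.7, 5.5, 5.0, 5.1, 4.8])   # hypothetical data, n = 7
for z, x in zip(rankits(len(sample)), sample):
    print(f"{z:+.3f}  {x}")
# plotting x against z should give roughly a straight line if the data are normal
```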
passage: ### History of units of mass ## Definitions In physical science, one may distinguish conceptually between at least seven different aspects of mass, or seven physical notions that involve the concept of mass. Every experiment to date has shown these seven values to be proportional, and in some cases equal, and this proportionality gives rise to the abstract concept of mass. There are a number of ways mass can be measured or operationally defined: - Inertial mass is a measure of an object's resistance to acceleration when a force is applied. It is determined by applying a force to an object and measuring the acceleration that results from that force. An object with small inertial mass will accelerate more than an object with large inertial mass when acted upon by the same force. One says the body of greater mass has greater inertia. - Active gravitational mass is a measure of the strength of an object's gravitational flux (gravitational flux is equal to the surface integral of gravitational field over an enclosing surface). Gravitational field can be measured by allowing a small "test object" to fall freely and measuring its free-fall acceleration. For example, an object in free-fall near the Moon is subject to a smaller gravitational field, and hence accelerates more slowly, than the same object would if it were in free-fall near the Earth. The gravitational field near the Moon is weaker because the Moon has less active gravitational mass. - Passive gravitational mass is a measure of the strength of an object's interaction with a gravitational field. Passive gravitational mass is determined by dividing an object's weight by its free-fall acceleration.
https://en.wikipedia.org/wiki/Mass
passage: In this particular scenario, the absolute error is precisely 0.1 (calculated as |50 − 49.9|), and the relative error is calculated as the absolute error 0.1 divided by the true value 50, which equals 0.002. This relative error can also be expressed as 0.2%. In a more practical setting, such as when measuring the volume of liquid in a 6 mL beaker, if the instrument reading indicates 5 mL while the true volume is actually 6 mL, the percent error for this particular measurement situation is, when rounded to one decimal place, approximately 16.7% (calculated as |(6 mL − 5 mL) / 6 mL| × 100%). The utility of relative error becomes particularly evident when it is employed to compare the quality of approximations for numbers that possess widely differing magnitudes; for example, approximating the number 1,000 with an absolute error of 3 results in a relative error of 0.003 (or 0.3%). This is, within the context of most scientific or engineering applications, considered a significantly less accurate approximation than approximating the much larger number 1,000,000 with an identical absolute error of 3. In the latter case, the relative error is a mere 0.000003 (or 0.0003%). In the first case, the relative error is 0.003, whereas in the second, more favorable scenario, it is a substantially smaller value of only 0.000003. This comparison clearly highlights how relative error provides a more meaningful and contextually appropriate assessment of precision, especially when dealing with values across different orders of magnitude. There are two crucial features or caveats associated with the interpretation and application of relative error that should always be kept in mind.
https://en.wikipedia.org/wiki/Approximation_error
passage: Positive results: - There is an algorithm A that computes the uniformizing map in the following sense. Let $$ \Omega $$ be a bounded simply-connected domain, and $$ w_0\in\Omega $$ . $$ \partial\Omega $$ is provided to A by an oracle representing it in a pixelated sense (i.e., if the screen is divided to $$ 2^n \times 2^n $$ pixels, the oracle can say whether each pixel belongs to the boundary or not). Then A computes the absolute values of the uniformizing map $$ \phi:(\Omega, w_0) \to (D, 0) $$ with precision $$ 2^{-n} $$ in space bounded by $$ Cn^2 $$ and time $$ 2^{O(n)} $$ , where $$ C $$ depends only on the diameter of $$ \Omega $$ and $$ d(w_0, \partial\Omega). $$ Furthermore, the algorithm computes the value of $$ \phi(w) $$ with precision $$ 2^{-n} $$ as long as $$ |\phi(w)| < 1-2^{-n}. $$ Moreover, A queries $$ \partial\Omega $$ with precision of at most $$ 2^{-O(n)}. $$
https://en.wikipedia.org/wiki/Riemann_mapping_theorem
passage: A major breakthrough happened with the introduction of Reinforcement Learning from Human Feedback (RLHF), a method in which human feedback is used to train a reward model that guides the RL agent. Unlike traditional rule-based or supervised systems, RLHF allows models to align their behavior with human judgments on complex and subjective tasks. This technique was initially used in the development of InstructGPT, an effective language model trained to follow human instructions, and later in ChatGPT, which incorporates RLHF for improving output responses and ensuring safety. More recently, researchers have explored the use of offline RL in NLP to improve dialogue systems without the need for live human interaction. These methods optimize for user engagement, coherence, and diversity based on past conversation logs and pre-trained reward models. ## Statistical comparison of reinforcement learning algorithms Efficient comparison of RL algorithms is essential for research, deployment and monitoring of RL systems. To compare different algorithms on a given environment, an agent can be trained for each algorithm. Since the performance is sensitive to implementation details, all algorithms should be implemented as closely as possible to each other. After the training is finished, the agents can be run on a sample of test episodes, and their scores (returns) can be compared. Since episodes are typically assumed to be i.i.d., standard statistical tools can be used for hypothesis testing, such as the t-test and the permutation test. This requires accumulating all the rewards within an episode into a single number, the episodic return.
https://en.wikipedia.org/wiki/Reinforcement_learning
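As an illustration of the comparison procedure described above, the sketch below runs Welch's two-sample t-test on two sets of simulated episodic returns. The return distributions are stand-ins for real evaluation data, and SciPy is assumed to be available.

```python
# Sketch: comparing two agents' episodic returns with a two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
returns_a = rng.normal(loc=100.0, scale=15.0, size=50)   # 50 test episodes, agent A
returns_b = rng.normal(loc=108.0, scale=15.0, size=50)   # 50 test episodes, agent B

t_stat, p_value = stats.ttest_ind(returns_a, returns_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# a small p-value suggests the difference in mean episodic return is unlikely to be chance
```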
passage: Therefore, $$ f(z) = az $$ , as desired. ## Schwarz–Pick theorem A variant of the Schwarz lemma, known as the Schwarz–Pick theorem (after Georg Pick), characterizes the analytic automorphisms of the unit disc, i.e. bijective holomorphic mappings of the unit disc to itself: Let $$ f: \mathbf{D}\to\mathbf{D} $$ be holomorphic. Then, for all $$ z_1,z_2\in\mathbf{D} $$ , $$ \left|\frac{f(z_1)-f(z_2)}{1-\overline{f(z_1)}f(z_2)}\right| \le \left|\frac{z_1-z_2}{1-\overline{z_1}z_2}\right| $$ and, for all $$ z\in\mathbf{D} $$ , $$ \frac{\left|f'(z)\right|}{1-\left|f(z)\right|^2} \le \frac{1}{1-\left|z\right|^2}. $$ The expression $$ d(z_1,z_2)=\tanh^{-1} \left|\frac{z_1-z_2}{1-\overline{z_1}z_2}\right| $$ is the distance of the points $$ z_1 $$ , $$ z_2 $$ in the Poincaré metric, i.e. the metric in the Poincaré disk model for hyperbolic geometry in dimension two.
https://en.wikipedia.org/wiki/Schwarz_lemma
passage: When adding a control point, the shape of the curve should stay the same, forming the starting point for further adjustments. A number of these operations are discussed below. ### Knot insertion As the term suggests, knot insertion inserts a knot into the knot vector. If the degree of the curve is $$ n $$ , then $$ n-1 $$ control points are replaced by $$ n $$ new ones. The shape of the curve stays the same. A knot can be inserted multiple times, up to the maximum multiplicity of the knot. This is sometimes referred to as knot refinement and can be achieved by an algorithm that is more efficient than repeated knot insertion. ### Knot removal Knot removal is the reverse of knot insertion. Its purpose is to remove knots and the associated control points in order to get a more compact representation. Obviously, this is not always possible while retaining the exact shape of the curve. In practice, a tolerance in the accuracy is used to determine whether a knot can be removed. The process is used to clean up after an interactive session in which control points may have been added manually, or after importing a curve from a different representation, where a straightforward conversion process leads to redundant control points. ### Degree elevation A NURBS curve of a particular degree can always be represented by a NURBS curve of higher degree.
https://en.wikipedia.org/wiki/Non-uniform_rational_B-spline
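A small sketch of knot insertion leaving the curve unchanged, using SciPy's FITPACK wrappers on a plain (non-rational) cubic B-spline as a stand-in for the NURBS case. The data, degree, and inserted knot location are arbitrary choices for the example, and scipy is assumed to be available.

```python
# Sketch: insert a knot into a cubic B-spline and verify the evaluated curve is unchanged.
import numpy as np
from scipy import interpolate

x = np.linspace(0, 2 * np.pi, 12)
y = np.sin(x)
tck = interpolate.splrep(x, y, k=3)                 # degree-3 spline as (t, c, k)

tck_refined = interpolate.insert(np.pi / 3, tck)    # insert one knot at u = pi/3

u = np.linspace(0, 2 * np.pi, 200)
before = interpolate.splev(u, tck)
after = interpolate.splev(u, tck_refined)
print("max deviation after knot insertion:", np.max(np.abs(before - after)))  # ~machine precision
```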
passage: That is, $$ \begin{align} P_0 + P_1 x_1 + P_2 x_1^2 + P_3 x_1^3 + \dots + P_N x_1^N - f(x_1) &= +\varepsilon \\ P_0 + P_1 x_2 + P_2 x_2^2 + P_3 x_2^3 + \dots + P_N x_2^N - f(x_2) &= -\varepsilon \\ &\;\;\vdots \\ P_0 + P_1 x_{N+2} + P_2 x_{N+2}^2 + P_3 x_{N+2}^3 + \dots + P_N x_{N+2}^N - f(x_{N+2}) &= \pm\varepsilon \end{align} $$ Since $$ x_1 $$ , ..., $$ x_{N+2} $$ were given, all of their powers are known, and $$ f(x_1) $$ , ..., $$ f(x_{N+2}) $$ are also known. That means that the above equations are just N+2 linear equations in the N+2 variables $$ P_0 $$ , $$ P_1 $$ , ..., $$ P_N $$ , and $$ \varepsilon $$ . Given the test points $$ x_1 $$ , ..., $$ x_{N+2} $$ , one can solve this system to get the polynomial P and the number $$ \varepsilon $$ . The graph below shows an example of this, producing a fourth-degree polynomial approximating $$ e^x $$ over [−1, 1]. The test points were set at −1, −0.7, −0.1, +0.4, +0.9, and 1. Those values are shown in green.
https://en.wikipedia.org/wiki/Approximation_theory
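The linear system described above can be set up and solved directly. The sketch below does this for the passage's example (degree 4, f(x) = e^x, test points −1, −0.7, −0.1, 0.4, 0.9, 1), assuming NumPy is available; the printed coefficients and levelled error are just whatever the solve produces for these particular test points.

```python
# Sketch: solve the (N+2)x(N+2) system for P_0..P_N and epsilon with alternating signs.
import numpy as np

N = 4
x = np.array([-1.0, -0.7, -0.1, 0.4, 0.9, 1.0])          # the N + 2 test points
f = np.exp(x)

# unknowns: P_0 ... P_N and epsilon; 0-based row i encodes
#   P_0 + P_1 x_i + ... + P_N x_i^N - f(x_i) = (-1)^i * epsilon
A = np.zeros((N + 2, N + 2))
A[:, :N + 1] = np.vander(x, N + 1, increasing=True)      # powers x^0 ... x^N
A[:, N + 1] = -((-1.0) ** np.arange(N + 2))              # moves the epsilon term to the left
sol = np.linalg.solve(A, f)

P, eps = sol[:N + 1], sol[N + 1]
print("coefficients P_0..P_4:", np.round(P, 6))
print("levelled error epsilon:", eps)
```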
passage: [recall this is 1/(2N) ] and carry-over f. Therefore, $$ f_2 = \left( 1 \right) \Delta f + \left( 1 - \Delta f \right) f_1 $$ , which is similar to Equation (1) in the previous sub-section. In general, after rearrangement, $$ \begin{align} f_t & = \Delta f + \left( 1 - \Delta f \right) f_{t-1} \\ & = \Delta f \left( 1 - f_{t-1} \right) + f_{t-1} \end{align} $$ The graphs to the left show levels of inbreeding over twenty generations arising from genetic drift for various actual gamodeme sizes (2N). Still further rearrangements of this general equation reveal some interesting relationships. (A) After some simplification, $$ \left( f_t - f_{t-1} \right) = \Delta f \left( 1 - f_{t-1} \right) = \delta f_t $$ . The left-hand side is the difference between the current and previous levels of inbreeding: the change in inbreeding (δft). Notice, that this change in inbreeding (δft) is equal to the de novo inbreeding (Δf) only for the first cycle—when ft-1 is zero. (B) An item of note is the (1-ft-1), which is an "index of non-inbreeding".
https://en.wikipedia.org/wiki/Quantitative_genetics
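The recurrence above is straightforward to iterate directly. The sketch below reproduces the kind of twenty-generation inbreeding curves mentioned in the text for a few illustrative gamodeme sizes (the sizes themselves are arbitrary examples).

```python
# Sketch: iterate f_t = Delta_f + (1 - Delta_f) * f_{t-1}, with Delta_f = 1/(2N).
def inbreeding_series(N, generations=20, f0=0.0):
    delta_f = 1.0 / (2 * N)
    f, series = f0, []
    for _ in range(generations):
        f = delta_f + (1 - delta_f) * f
        series.append(f)
    return series

for N in (10, 50, 250):
    print(f"2N = {2*N:4d}   f after 20 generations = {inbreeding_series(N)[-1]:.3f}")
```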
passage: Research within psychiatry is conducted by psychiatrists on an interdisciplinary basis with other professionals, including clinical psychologists, epidemiologists, nurses, social workers, and occupational therapists. Psychiatry is a controversial field, with critics arguing its practices violate ethics, human rights, and science. ## Etymology The term psychiatry was first coined by the German physician Johann Christian Reil in 1808 and literally means the 'medical treatment of the soul' (ψυχή psych- 'soul' from Ancient Greek psykhē 'soul'; -iatry 'medical treatment' from Gk. ιατρικός iātrikos 'medical' from ιάσθαι iāsthai 'to heal'). A medical doctor specializing in psychiatry is a psychiatrist (for a historical overview, see: Timeline of psychiatry). ## Theory and focus Psychiatry refers to a field of medicine focused specifically on the mind, aiming to study, prevent, and treat mental disorders in humans. It has been described as an intermediary between the world from a social context and the world from the perspective of those who are mentally ill. People who specialize in psychiatry often differ from most other mental health professionals and physicians in that they must be familiar with both the social and biological sciences. The discipline studies the operations of different organs and body systems as classified by the patient's subjective experiences and the objective physiology of the patient. Psychiatry treats mental disorders, which are conventionally divided into three general categories: mental illnesses, severe learning disabilities, and personality disorders.
https://en.wikipedia.org/wiki/Psychiatry
passage: The collective choices of the players leads to a payoff profile, i.e. to a payoff for each of the players. The mapping from collective choices to payoff profiles is known to the players, and each player aims to maximize their payoff. If the collective choice is denoted by x, the payoff that player i receives, also known as player i's utility, will be denoted by $$ u_i(x) $$ . We then consider a repetition of this stage game, finitely or infinitely many times. In each repetition, each player chooses one of their stage game options, and when making that choice, they may take into account the choices of the other players in the prior iterations. In this repeated game, a strategy for one of the players is a deterministic rule that specifies the player's choice in each iteration of the stage game, based on all other player's choices in the prior iterations. A choice of strategy for each of the players is a strategy profile, and it leads to a payout profile for the repeated game. There are a number of different ways such a strategy profile can be translated into a payout profile, outlined below. Any Nash equilibrium payoff profile of a repeated game must satisfy two properties: 1. Individual rationality: the payoff must weakly dominate the minmax payoff profile of the constituent stage game. That is, the equilibrium payoff of each player must be at least as large as the minmax payoff of that player.
https://en.wikipedia.org/wiki/Folk_theorem_%28game_theory%29
passage: In 1901, U.S. President William McKinley was shot twice in an assassination attempt while attending the Pan American Exposition in Buffalo, New York. While one bullet only grazed his sternum, another had lodged somewhere deep inside his abdomen and could not be found. A worried McKinley aide sent word to inventor Thomas Edison to rush an X-ray machine to Buffalo to find the stray bullet. It arrived but was not used. While the shooting itself had not been lethal, gangrene had developed along the path of the bullet, and McKinley died of septic shock due to bacterial infection six days later. ### Hazards discovered With the widespread experimentation with X‑rays after their discovery in 1895 by scientists, physicians, and inventors came many stories of burns, hair loss, and worse in technical journals of the time. In February 1896, Professor John Daniel and William Lofland Dudley of Vanderbilt University reported hair loss after Dudley was X-rayed. A child who had been shot in the head was brought to the Vanderbilt laboratory in 1896. Before trying to find the bullet, an experiment was attempted, for which Dudley "with his characteristic devotion to science" volunteered. Daniel reported that 21 days after taking a picture of Dudley's skull (with an exposure time of one hour), he noticed a bald spot in diameter on the part of his head nearest the X-ray tube: "A plate holder with the plates towards the side of the skull was fastened and a coin placed between the skull and the head.
https://en.wikipedia.org/wiki/X-ray
passage: The study of the sets of zeros of polynomials is the object of algebraic geometry. For a set of polynomial equations with several unknowns, there are algorithms to decide whether they have a finite number of complex solutions, and, if this number is finite, for computing the solutions. See System of polynomial equations. The special case where all the polynomials are of degree one is called a system of linear equations, for which another range of different solution methods exist, including the classical Gaussian elimination. A polynomial equation for which one is interested only in the solutions which are integers is called a Diophantine equation. Solving Diophantine equations is generally a very hard task. It has been proved that there cannot be any general algorithm for solving them, or even for deciding whether the set of solutions is empty (see Hilbert's tenth problem). Some of the most famous problems that have been solved during the last fifty years are related to Diophantine equations, such as Fermat's Last Theorem. ## Polynomial expressions Polynomials where indeterminates are substituted for some other mathematical objects are often considered, and sometimes have a special name. ### Trigonometric polynomials A trigonometric polynomial is a finite linear combination of functions sin(nx) and cos(nx) with n taking on the values of one or more natural numbers. The coefficients may be taken as real numbers, for real-valued functions.
https://en.wikipedia.org/wiki/Polynomial
passage: As a result, plants can expend less energy on growing as tall as possible and have more resources for growing seeds and expanding their root systems. This could have many practical benefits: for example, grass blades that would grow more slowly than regular grass would not require mowing as frequently, or crop plants might transfer more energy to the grain instead of growing taller. In 2002, the light-induced interaction between a plant phytochrome and phytochrome-interacting factor (PIF) was used to control gene transcription in yeast. This was the first example of using photoproteins from another organism for controlling a biochemical pathway.
https://en.wikipedia.org/wiki/Phytochrome
passage: The Arnoldi process also constructs $$ \tilde{H}_n $$ , an ( $$ n+1 $$ )-by- $$ n $$ upper Hessenberg matrix which satisfies $$ AQ_n = Q_{n+1} \tilde{H}_n \, $$ an equality which is used to simplify the calculation of $$ y_n $$ (see ). Note that, for symmetric matrices, a symmetric tri-diagonal matrix is actually achieved, resulting in the MINRES method. Because columns of $$ Q_n $$ are orthonormal, we have $$ \begin{align} \left\| r_n \right\| &= \left\| b - A x_n \right\| \\ &= \left\| b - A(x_0 + Q_n y_n) \right\| \\ &= \left\| r_0 - A Q_n y_n \right\| \\ &= \left\| \beta q_1 - A Q_n y_n \right\| \\ &= \left\| \beta q_1 - Q_{n+1} \tilde{H}_n y_n \right\| \\ &= \left\| Q_{n+1} (\beta e_1 - \tilde{H}_n y_n) \right\| \\ &= \left\| \beta e_1 - \tilde{H}_n y_n \right\| \end{align} $$ where $$ e_1 = (1,0,0,\ldots,0)^T \, $$ is the first vector in the standard basis of $$ \mathbb{R}^{n+1} $$ , and $$ \beta = \|r_0\| \, , $$ $$ r_0 $$ being the first trial residual vector (usually $$ b $$ ).
https://en.wikipedia.org/wiki/Generalized_minimal_residual_method
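A minimal Arnoldi iteration can be used to check the relation A Qₙ = Qₙ₊₁ H̃ₙ numerically. The sketch below uses a random dense matrix as a stand-in, does not handle breakdown (a zero subdiagonal entry), and assumes NumPy is available.

```python
# Sketch: plain Arnoldi iteration (modified Gram-Schmidt) and a residual check of A Q_n = Q_(n+1) H~_n.
import numpy as np

def arnoldi(A, r0, n):
    m = A.shape[0]
    Q = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(n):
        w = A @ Q[:, j]
        for i in range(j + 1):                 # orthogonalize against previous columns
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)        # assumes no breakdown (nonzero norm)
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 30))
r0 = rng.normal(size=30)
n = 8
Q, H = arnoldi(A, r0, n)
print("|| A Q_n - Q_(n+1) H~_n || =", np.linalg.norm(A @ Q[:, :n] - Q @ H))  # ~ round-off level
```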
passage: The sum is taken over all possible outcomes of $$ Y $$ . As above, the expression is undefined if $$ P(Y=y) = 0 $$ . Conditioning on a discrete random variable is the same as conditioning on the corresponding event: $$ \operatorname{E} (X \mid Y=y) = \operatorname{E} (X \mid A) $$ where $$ A $$ is the set $$ \{ Y = y \} $$ .
https://en.wikipedia.org/wiki/Conditional_expectation
passage: This provided an order of magnitude more capacity—for the same price—with only a slightly reduced combined performance. #### First TLB implementations The first documented uses of a TLB were on the GE 645 and the IBM 360/67, both of which used an associative memory as a TLB. #### First instruction cache The first documented use of an instruction cache was on the CDC 6600. #### First data cache The first documented use of a data cache was on the IBM System/360 Model 85. #### In 68k microprocessors The 68010, released in 1982, has a "loop mode" which can be considered a tiny and special-case instruction cache that accelerates loops that consist of only two instructions. The 68020, released in 1984, replaced that with a typical instruction cache of 256 bytes, being the first 68k series processor to feature true on-chip cache memory. The 68030, released in 1987, is basically a 68020 core with an additional 256-byte data cache, an on-chip memory management unit (MMU), a process shrink, and added burst mode for the caches. The 68040, released in 1990, has split instruction and data caches of four kilobytes each. The 68060, released in 1994, has the following: 8 KiB data cache (four-way associative), 8 KiB instruction cache (four-way associative), 96-byte FIFO instruction buffer, 256-entry branch cache, and 64-entry address translation cache MMU buffer (four-way associative). #### In x86 microprocessors
https://en.wikipedia.org/wiki/CPU_cache
passage: The complexifications of JB algebras are called Jordan C*-algebras or JB*-algebras. They have been used extensively in complex geometry to extend Koecher's Jordan algebraic treatment of bounded symmetric domains to infinite dimensions. Not all JB algebras can be realized as Jordan algebras of self-adjoint operators on a Hilbert space, exactly as in finite dimensions. The exceptional Albert algebra is the common obstruction. The Jordan algebra analogue of von Neumann algebras is played by JBW algebras. These turn out to be JB algebras which, as Banach spaces, are the dual spaces of Banach spaces. Much of the structure theory of von Neumann algebras can be carried over to JBW algebras. In particular the JBW factors—those with center reduced to R—are completely understood in terms of von Neumann algebras. Apart from the exceptional Albert algebra, all JBW factors can be realised as Jordan algebras of self-adjoint operators on a Hilbert space closed in the weak operator topology. Of these the spin factors can be constructed very simply from real Hilbert spaces. All other JBW factors are either the self-adjoint part of a von Neumann factor or its fixed point subalgebra under a period 2 *-antiautomorphism of the von Neumann factor. ### Jordan rings A Jordan ring is a generalization of Jordan algebras, requiring only that the Jordan ring be over a general ring rather than a field.
https://en.wikipedia.org/wiki/Jordan_algebra
passage: This can be generalized to an arbitrary finite dimension: In Euclidean spaceEvery continuous function from a closed ball of a Euclidean space into itself has a fixed point. A slightly more general version is as follows: Convex compact setEvery continuous function from a nonempty convex compact subset K of a Euclidean space to K itself has a fixed point. An even more general form is better known under a different name: Schauder fixed point theoremEvery continuous function from a nonempty convex compact subset K of a Banach space to K itself has a fixed point. ## Importance of the pre-conditions The theorem holds only for functions that are endomorphisms (functions that have the same set as the domain and codomain) and for nonempty sets that are compact (thus, in particular, bounded and closed) and convex (or homeomorphic to convex). The following examples show why the pre-conditions are important. ### The function f as an endomorphism Consider the function $$ f(x) = x+1 $$ with domain [-1,1]. The range of the function is [0,2]. Thus, f is not an endomorphism. ### Boundedness Consider the function $$ f(x) = x+1, $$ which is a continuous function from $$ \mathbb{R} $$ to itself. As it shifts every point to the right, it cannot have a fixed point. The space $$ \mathbb{R} $$ is convex and closed, but not bounded.
https://en.wikipedia.org/wiki/Brouwer_fixed-point_theorem
passage: The embeddings of the maximal subgroups of E8 up to dimension 248 are shown to the right. ## Applications The E8 Lie group has applications in theoretical physics and especially in string theory and supergravity. E8×E8 is the gauge group of one of the two types of heterotic string and is one of two anomaly-free gauge groups that can be coupled to the N = 1 supergravity in ten dimensions. E8 is the U-duality group of supergravity on an eight-torus (in its split form). One way to incorporate the standard model of particle physics into heterotic string theory is the symmetry breaking of E8 to its maximal subalgebra SU(3)×E6. In 1982, Michael Freedman used the E8 lattice to construct an example of a topological 4-manifold, the E8 manifold, which has no smooth structure. Antony Garrett Lisi's incomplete "An Exceptionally Simple Theory of Everything" attempts to describe all known fundamental interactions in physics as part of the E8 Lie algebra. reported an experiment where the electron spins of a cobalt-niobium crystal exhibited, under certain conditions, two of the eight peaks related to E8 that were predicted by .Did a 1-dimensional magnet detect a 248-dimensional Lie algebra?, Notices of the American Mathematical Society, September 2011. ## History discovered the complex Lie algebra E8 during his classification of simple compact Lie algebras, though he did not prove its existence, which was first shown by Élie Cartan.
https://en.wikipedia.org/wiki/E8_%28mathematics%29
passage: Then the integrand above is sharply peaked at $$ \omega = \omega_0 $$ and: $$ \begin{align} \left[z(t),p_z(t)\right]&\approx \frac{2i\hbar e^2}{3\pi mc^3}\omega^3_0 \int^\infty_{-\infty} \frac{dx}{x^2 + \tau^2\omega^6_0} \\ &= \left (\frac{2i\hbar e^2 \omega^3_0}{3\pi mc^3} \right )\left (\frac{\pi}{\tau\omega^3_0} \right ) \\ &=i\hbar \end{align} $$ The necessity of the vacuum field can also be appreciated by making the small damping approximation in $$ \begin{align} &\mathbf{\ddot{x}} + \omega^2_0\mathbf{x}-\tau \mathbf{\overset{...}{x}}=\frac{e}{m}\mathbf{E}_0(t) \\ &\mathbf{\ddot{x}}\approx-\omega^2_0\mathbf{x}(t) && \mathbf{\overset{...}{x}}\approx-\omega^2_0\mathbf{\dot{x}} \end{align} $$ and $$ \mathbf{\ddot{x}}+\tau\omega^2_0\mathbf{\dot{x}}+\omega^2_0\mathbf{x}\approx\frac{e}{m}\mathbf{E}_0(t) $$ Without the free field in this equation the operator $$ \mathbf{x}(t) $$ would be exponentially damped, and commutators like $$ \left[z(t),p_z(t)\right] $$ would approach zero for large times.
https://en.wikipedia.org/wiki/Zero-point_energy
passage: In order to colour the Fatou domain, we have chosen a small number ε and set the sequences of iteration $$ z_k (k = 0, 1, 2, \dots, z_0 = z) $$ to stop when $$ |z_k - z^*| < \epsilon $$ , and we colour the point z according to the number k (or the real iteration number, if we prefer a smooth colouring). If we choose a direction from $$ z^* $$ given by an angle θ, the field line issuing from $$ z^* $$ in this direction consists of the points z such that the argument ψ of the number $$ z_k - z^* $$ satisfies the condition that $$ \psi - k\beta = \theta \mod \pi. \, $$ For if we pass an iteration band in the direction of the field lines (and away from the cycle), the iteration number k is increased by 1 and the number ψ is increased by β, therefore the number $$ \psi - k\beta \mod \pi $$ is constant along the field line. A colouring of the field lines of the Fatou domain means that we colour the spaces between pairs of field lines: we choose a number of regularly situated directions issuing from $$ z^* $$ , and in each of these directions we choose two directions around this direction. As it can happen that the two field lines of a pair do not end in the same point of the Julia set, our coloured field lines can ramify (endlessly) in their way towards the Julia set.
https://en.wikipedia.org/wiki/Julia_set
passage: If it is prime, the two symbols agree. It obeys the same rules of manipulation as the Legendre symbol. In particular $$ \begin{align} \left(\frac{-1}{n}\right) = (-1)^{\frac{n-1}{2}} &= \begin{cases} 1 & n \equiv 1 \bmod{4}\\ -1 & n \equiv 3 \bmod{4}\end{cases} \\ \left( \frac{2}{n}\right) = (-1)^{\frac{n^2-1}{8}} &= \begin{cases} 1 & n \equiv 1, 7 \bmod{8}\\ -1 & n \equiv 3, 5\bmod{8}\end{cases} \\ \left( \frac{-2}{n}\right) = (-1)^{\frac{n^2+4n-5}{8}} &= \begin{cases} 1 & n \equiv 1, 3 \bmod{8}\\ -1 & n \equiv 5, 7\bmod{8}\end{cases} \end{align} $$ and if both numbers are positive and odd (this is sometimes called "Jacobi's reciprocity law"): $$ \left(\frac{m}{n}\right) = (-1)^{\frac{(m-1)(n-1)}{4}}\left(\frac{n}{m}\right). $$ However, if the Jacobi symbol is 1 but the denominator is not a prime, it does not necessarily follow that the numerator is a quadratic residue of the denominator.
https://en.wikipedia.org/wiki/Quadratic_reciprocity
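The manipulation rules quoted above translate directly into the standard algorithm for computing the Jacobi symbol: factor out powers of 2 with the (2/n) rule, swap numerator and denominator with the reciprocity rule, and reduce modulo the denominator. The sketch below is one such implementation, with a textbook-style evaluation as a check.

```python
# Sketch: Jacobi symbol (m/n) for odd positive n via the supplementary and reciprocity laws.
def jacobi(m, n):
    assert n > 0 and n % 2 == 1
    m %= n
    result = 1
    while m != 0:
        while m % 2 == 0:                  # (2/n) = -1 exactly when n = 3, 5 (mod 8)
            m //= 2
            if n % 8 in (3, 5):
                result = -result
        m, n = n, m                        # reciprocity: sign flips iff both are 3 (mod 4)
        if m % 4 == 3 and n % 4 == 3:
            result = -result
        m %= n
    return result if n == 1 else 0         # gcd > 1 gives symbol 0

print(jacobi(1001, 9907))   # expected -1 (9907 is prime, so this is also the Legendre symbol)
```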
passage: In mathematics, hyperbolic space of dimension n is the unique simply connected, n-dimensional Riemannian manifold of constant sectional curvature equal to −1. It is homogeneous, and satisfies the stronger property of being a symmetric space. There are many ways to construct it as an open subset of $$ \mathbb R^n $$ with an explicitly written Riemannian metric; such constructions are referred to as models. Hyperbolic 2-space, H2, which was the first instance studied, is also called the hyperbolic plane. It is also sometimes referred to as Lobachevsky space or Bolyai–Lobachevsky space after the names of the author who first published on the topic of hyperbolic geometry. Sometimes the qualificative "real" is added to distinguish it from complex hyperbolic spaces. Hyperbolic space serves as the prototype of a Gromov hyperbolic space, which is a far-reaching notion including differential-geometric as well as more combinatorial spaces via a synthetic approach to negative curvature. Another generalisation is the notion of a CAT(−1) space. ## Formal definition and models ### Definition The $$ n $$ -dimensional hyperbolic space or hyperbolic -space, usually denoted $$ \mathbb H^n $$ , is the unique simply connected, $$ n $$ -dimensional complete Riemannian manifold with a constant negative sectional curvature equal to −1. The unicity means that any two Riemannian manifolds that satisfy these properties are isometric to each other.
https://en.wikipedia.org/wiki/Hyperbolic_space
passage: Dynamic relaxation is a numerical method, which, among other things, can be used to do "form-finding" for cable and fabric structures. The aim is to find a geometry where all forces are in equilibrium. In the past this was done by direct modelling, using hanging chains and weights (see Gaudi), or by using soap films, which have the property of adjusting to find a "minimal surface". The dynamic relaxation method is based on discretizing the continuum under consideration by lumping the mass at nodes and defining the relationship between nodes in terms of stiffness (see also the finite element method). The system oscillates about the equilibrium position under the influence of loads. An iterative process is followed by simulating a pseudo-dynamic process in time, with each iteration based on an update of the geometry, similar to leapfrog integration and related to velocity Verlet integration. ## Main equations used Considering Newton's second law of motion (force is mass multiplied by acceleration) in the $$ x $$ direction at the $$ i $$ th node at time $$ t $$ : $$ R_{ix}(t)=M_{i}A_{ix}(t) $$ Where: $$ R $$ is the residual force $$ M $$ is the nodal mass $$ A $$ is the nodal acceleration Note that fictitious nodal masses may be chosen to speed up the process of form-finding.
https://en.wikipedia.org/wiki/Dynamic_relaxation
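A toy version of the procedure can be written in a few lines. The sketch below relaxes a chain of point masses joined by stiff springs under gravity, with simple viscous damping, as an illustrative stand-in for cable form-finding; the stiffness, mass, time step, and damping factor are all assumptions of the example rather than values from the text.

```python
# Sketch: pseudo-dynamic relaxation of a hanging chain; ends fixed, free nodes iterated to equilibrium.
import math

n = 11                                   # nodes 0..10, both end nodes fixed
k, L0 = 5000.0, 0.1                      # spring stiffness and natural length
mass, g, dt, damping = 1.0, 9.81, 0.002, 0.98

x = [[i * 0.1, 0.0] for i in range(n)]   # start on a straight horizontal line
v = [[0.0, 0.0] for _ in range(n)]

for step in range(20000):
    R = [[0.0, -mass * g] for _ in range(n)]        # residual force: applied load first
    for i in range(n - 1):                           # then add spring (member) forces
        dx = x[i + 1][0] - x[i][0]
        dy = x[i + 1][1] - x[i][1]
        L = math.hypot(dx, dy)
        f = k * (L - L0)                             # tension, positive when stretched
        fx, fy = f * dx / L, f * dy / L
        R[i][0] += fx; R[i][1] += fy
        R[i + 1][0] -= fx; R[i + 1][1] -= fy
    for i in range(1, n - 1):                        # update free nodes only
        for d in range(2):
            v[i][d] = damping * (v[i][d] + R[i][d] / mass * dt)
            x[i][d] += v[i][d] * dt

print("mid-span sag at equilibrium:", round(-x[n // 2][1], 4))
```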
passage: Examples include a virtual X-ray view based on prior tomography or on real-time images from ultrasound and confocal microscopy probes, visualizing the position of a tumor in the video of an endoscope, or radiation exposure risks from X-ray imaging devices. AR can enhance viewing a fetus inside a mother's womb. Siemens, Karl Storz and IRCAD have developed a system for laparoscopic liver surgery that uses AR to view sub-surface tumors and vessels. AR has been used for cockroach phobia treatment and to reduce the fear of spiders. Patients wearing augmented reality glasses can be reminded to take medications. Augmented reality can be very helpful in the medical field. It could be used to provide crucial information to a doctor or surgeon without having them take their eyes off the patient. On 30 April 2015, Microsoft announced the Microsoft HoloLens, their first attempt at augmented reality. The HoloLens is capable of displaying images for image-guided surgery. As augmented reality advances, it finds increasing applications in healthcare. Augmented reality and similar computer based-utilities are being used to train medical professionals. In healthcare, AR can be used to provide guidance during diagnostic and therapeutic interventions e.g. during surgery. Magee et al., for instance, describe the use of augmented reality for medical training in simulating ultrasound-guided needle placement. Recently, augmented reality began seeing adoption in neurosurgery, a field that requires heavy amounts of imaging before procedures.
https://en.wikipedia.org/wiki/Augmented_reality
passage: If the coefficients of the quadratic terms have the same sign, this is the equation of an elliptic cylinder. Further simplification can be obtained by translation of axes and scalar multiplication. If $$ \rho $$ has the same sign as the coefficients of the quadratic terms, then the equation of an elliptic cylinder may be rewritten in Cartesian coordinates as: $$ \left(\frac{x}{a}\right)^2+ \left(\frac{y}{b}\right)^2 = 1. $$ This equation of an elliptic cylinder is a generalization of the equation of the ordinary, circular cylinder (the case $$ a = b $$ ). Elliptic cylinders are also known as cylindroids, but that name is ambiguous, as it can also refer to the Plücker conoid. If $$ \rho $$ has a different sign than the coefficients, we obtain the imaginary elliptic cylinders: $$ \left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 = -1, $$ which have no real points on them. ( $$ \rho = 0 $$ gives a single real point.) ### Hyperbolic cylinder
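For illustration (a standard parametrization, not part of the source text), the real elliptic cylinder above can be written parametrically as $$ (x, y, z) = (a\cos t,\; b\sin t,\; z), \qquad t \in [0, 2\pi),\ z \in \mathbb{R}, $$ which shows that every cross-section perpendicular to the axis is the same ellipse with semi-axes $$ a $$ and $$ b $$ .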
https://en.wikipedia.org/wiki/Cylinder
passage: This equality is based on the trace identity $$ \textrm{pf}(A)\,\textrm{pf}(B) = \exp\left(\tfrac{1}{2}\mathrm{tr}\log(A^\text{T}B)\right) $$ and on the observation that $$ \textrm{pf}(\sigma_y\otimes I_n)=(-i)^{n^2} $$ . Since calculating the logarithm of a matrix is a computationally demanding task, one can instead compute all eigenvalues of $$ ((\sigma_y\otimes I_n)^\mathrm{T}\cdot A) $$ , take the log of all of these and sum them up. This procedure merely exploits the property $$ \operatorname{tr}{\log{(AB)}} = \operatorname{tr}{\log{(A)}} + \operatorname{tr}{\log{(B)}} $$ . This can be implemented in Mathematica with a single statement: `Pf[x_] := Module[{n = Dimensions[x][[1]] / 2}, I^(n^2) Exp[ 1/2 Total[ Log[Eigenvalues[ Dot[Transpose[KroneckerProduct[PauliMatrix[2], IdentityMatrix[n]]], x] ]]]]]` However, this algorithm is unstable when the Pfaffian is large.
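A rough NumPy translation of the same eigenvalue-based procedure is sketched below. The function name and the plain use of the principal branch of the complex logarithm are assumptions made for illustration; branch-cut choices can affect the result, and, as noted above, the method is numerically unstable when the Pfaffian is large.

```python
import numpy as np

def pfaffian_eig(B):
    """Pfaffian of a 2n x 2n skew-symmetric matrix B via eigenvalues of (sigma_y ⊗ I_n)^T B."""
    n = B.shape[0] // 2
    sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    A = np.kron(sigma_y, np.eye(n))                 # sigma_y ⊗ I_n
    eigvals = np.linalg.eigvals(A.T @ B)            # eigenvalues of (sigma_y ⊗ I_n)^T B
    return 1j ** (n ** 2) * np.exp(0.5 * np.sum(np.log(eigvals)))

# Example: for B = [[0, a], [-a, 0]] the Pfaffian is a.
print(pfaffian_eig(np.array([[0.0, 2.0], [-2.0, 0.0]])))   # ~ (2+0j)
```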
https://en.wikipedia.org/wiki/Pfaffian
passage: In modern terms, the "reduced Fermi constant", that is, the constant in natural units is $$ G_{\rm F}^0=\frac{G_{\rm F}}{(\hbar c)^3}=\frac{\sqrt{2}}{8}\frac{g^{2}}{M_{\rm W}^{2} c^4}=1.1663787(6)\times10^{-5} \; \textrm{GeV}^{-2} \approx 4.5437957\times10^{14} \; \textrm{J}^{-2}\ . $$ Here, $$ g $$ is the coupling constant of the weak interaction, and $$ M_{\rm W} $$ is the mass of the W boson, which mediates the decay in question. In the Standard Model, the Fermi constant is related to the Higgs vacuum expectation value $$ v = \left(\sqrt{2} \, G_{\rm F}^0\right)^{-1/2} \simeq 246.22 \; \textrm{GeV} $$ . More directly, approximately (tree level for the standard model), $$ G_{\rm F}^0\simeq \frac {\pi \alpha}{\sqrt{2}~ M_{\rm W}^2 (1- M^2_{\rm W}/M^2_{\rm Z} )}. $$ This can be further simplified in terms of the Weinberg angle using the relation between the W and Z bosons with $$ M_\text{Z}=\frac{M_\text{W}}{\cos\theta_\text{W}} $$ , so that $$ G_{\rm F}^0\simeq \frac {\pi \alpha}{\sqrt{2}~ M_{\rm Z}^{2}\cos^{2}\theta_{\rm W}\sin^{2}\theta_{\rm W}}. $$
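A quick numerical check of the relation between $$ G_{\rm F}^0 $$ and the vacuum expectation value quoted above (a simple sketch; the constant is copied from the passage):

```python
import math

G_F = 1.1663787e-5                    # reduced Fermi constant, GeV^-2
v = (math.sqrt(2) * G_F) ** -0.5      # v = (sqrt(2) * G_F)^(-1/2)
print(f"v = {v:.2f} GeV")             # ~246.22 GeV, matching the value quoted above
```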
https://en.wikipedia.org/wiki/Fermi%27s_interaction
passage: The field does exert a torque on the magnetic dipole tending to align it with the field. However, torque is proportional to rate of change of angular momentum, so precession occurs: the direction of spin changes. This behavior is described by the Landau–Lifshitz–Gilbert equation: $$ \frac{1}{\gamma} \frac{\mathrm d\mathbf{m}}{\mathrm dt} = \mathbf{m} \times \mathbf{H}_\text{eff} - \frac{\lambda}{\gamma m} \mathbf{m} \times \frac{\mathrm d\mathbf{m}}{\mathrm dt} $$ where $$ \gamma $$ is the gyromagnetic ratio, $$ \mathbf{m} $$ is the magnetic moment, $$ \lambda $$ is the damping coefficient and $$ \mathbf{H}_\text{eff} $$ is the effective magnetic field (the external field plus any self-induced field). The first term describes precession of the moment about the effective field, while the second is a damping term related to dissipation of energy caused by interaction with the surroundings. ### Magnetic moment of an electron Electrons and many elementary particles also have intrinsic magnetic moments, an explanation of which requires a quantum mechanical treatment and relates to the intrinsic angular momentum of the particles as discussed in the article Electron magnetic moment. It is these intrinsic magnetic moments that give rise to the macroscopic effects of magnetism, and other phenomena, such as electron paramagnetic resonance.
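The implicit equation above can be solved for $$ \mathrm d\mathbf{m}/\mathrm dt $$ to give an explicit (Landau–Lifshitz) form, $$ \frac{\mathrm d\mathbf{m}}{\mathrm dt} = \frac{\gamma}{1+\lambda^2}\left[\mathbf{m}\times\mathbf{H}_\text{eff} - \frac{\lambda}{|\mathbf{m}|}\,\mathbf{m}\times(\mathbf{m}\times\mathbf{H}_\text{eff})\right], $$ which is convenient for numerical work. The Python sketch below integrates this form with a plain Euler step; the field values, step size, and re-normalization of $$ |\mathbf{m}| $$ are illustrative choices, not a recommended scheme.

```python
import numpy as np

def llg_step(m, H_eff, gamma, lam, dt):
    """One explicit Euler step of the Landau-Lifshitz form of the LLG equation."""
    mxH = np.cross(m, H_eff)
    mxmxH = np.cross(m, mxH)
    dmdt = gamma / (1.0 + lam ** 2) * (mxH - lam / np.linalg.norm(m) * mxmxH)
    m_new = m + dt * dmdt
    return m_new * np.linalg.norm(m) / np.linalg.norm(m_new)   # keep |m| fixed

m = np.array([1.0, 0.0, 0.1])     # initial moment, nearly perpendicular to the field
H = np.array([0.0, 0.0, 1.0])     # static effective field along z (arbitrary units)
for _ in range(1000):
    m = llg_step(m, H, gamma=1.0, lam=0.1, dt=0.01)
# m precesses about z while the damping term gradually pulls it toward H
```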
https://en.wikipedia.org/wiki/Magnetic_moment
passage: In general relativity, a geodesic generalizes the notion of a "straight line" to curved spacetime. Importantly, the world line of a particle free from all external, non-gravitational forces is a particular type of geodesic. In other words, a freely moving or falling particle always moves along a geodesic. In general relativity, gravity can be regarded as not a force but a consequence of a curved spacetime geometry where the source of curvature is the stress–energy tensor (representing matter, for instance). Thus, for example, the path of a planet orbiting a star is the projection of a geodesic of the curved four-dimensional (4-D) spacetime geometry around the star onto three-dimensional (3-D) space. ## Mathematical expression The full geodesic equation is $$ {d^2 x^\mu \over ds^2}+\Gamma^\mu {}_{\alpha \beta}{d x^\alpha \over ds}{d x^\beta \over ds}=0\ $$ where s is a scalar parameter of motion (e.g. the proper time), and $$ \Gamma^\mu {}_{\alpha \beta} $$ are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients) symmetric in the two lower indices.
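As a sketch of how this equation is used numerically, the snippet below integrates the geodesic equation with a fixed-step fourth-order Runge–Kutta scheme, given a user-supplied function returning the Christoffel symbols $$ \Gamma^\mu{}_{\alpha\beta} $$ at a point. The function names and the array convention `Gamma[mu, alpha, beta]` are assumptions made for illustration.

```python
import numpy as np

def geodesic(x0, u0, christoffel, ds=1e-3, steps=10000):
    """Integrate d^2x/ds^2 = -Gamma^mu_{ab} (dx^a/ds)(dx^b/ds) by fourth-order Runge-Kutta.

    x0, u0: initial position and velocity dx/ds; christoffel(x) -> array Gamma[mu, a, b].
    """
    dim = len(x0)

    def rhs(state):
        x, u = state[:dim], state[dim:]
        Gamma = christoffel(x)
        du = -np.einsum('mab,a,b->m', Gamma, u, u)   # right-hand side of the geodesic equation
        return np.concatenate([u, du])

    state = np.concatenate([x0, u0])
    path = [state.copy()]
    for _ in range(steps):
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * ds * k1)
        k3 = rhs(state + 0.5 * ds * k2)
        k4 = rhs(state + ds * k3)
        state = state + ds / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(state.copy())
    return np.array(path)
```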
https://en.wikipedia.org/wiki/Geodesics_in_general_relativity
passage: This value is always greater than or equal to 0 (for minimization problems). The duality gap is zero if and only if strong duality holds. Otherwise the gap is strictly positive and weak duality holds. In computational optimization, another "duality gap" is often reported, which is the difference in value between any dual solution and the value of a feasible but suboptimal iterate for the primal problem. This alternative "duality gap" quantifies the discrepancy between the value of a current feasible but suboptimal iterate for the primal problem and the value of the dual problem; the value of the dual problem is, under regularity conditions, equal to the value of the convex relaxation of the primal problem. The convex relaxation is the problem arising from replacing a non-convex feasible set with its closed convex hull and from replacing a non-convex function with its convex closure, that is, the function whose epigraph is the closed convex hull of the epigraph of the original primal objective function. ## Linear case Linear programming problems are optimization problems in which the objective function and the constraints are all linear. In the primal problem, the objective function is a linear combination of n variables. There are m constraints, each of which places an upper bound on a linear combination of the n variables. The goal is to maximize the value of the objective function subject to the constraints. A solution is a vector (a list) of n values that achieves the maximum value for the objective function.
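A small numerical illustration of the zero duality gap in the linear case (the particular LP and the use of SciPy's `linprog` are illustrative choices, not drawn from the source):

```python
import numpy as np
from scipy.optimize import linprog

# Primal: maximize 3*x1 + 5*x2  subject to  x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x >= 0.
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)     # linprog minimizes, so negate c
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)    # min b^T y  s.t.  A^T y >= c, y >= 0

print("primal optimum:", -primal.fun)   # 36.0
print("dual optimum:  ", dual.fun)      # 36.0 -> the duality gap is zero (strong duality)
```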
https://en.wikipedia.org/wiki/Duality_%28optimization%29
passage: ### Activation The proto-oncogene can become an oncogene by a relatively small modification of its original function. There are three basic methods of activation: 1. A mutation within a proto-oncogene, or within a regulatory region (for example the promoter region), can cause a change in the protein structure, causing (a) an increase in protein (enzyme) activity or (b) a loss of regulation. 2. An increase in the amount of a certain protein (protein concentration), caused by (a) an increase of protein expression (through misregulation), (b) an increase of protein (mRNA) stability, prolonging its existence and thus its activity in the cell, or (c) gene duplication (one type of chromosome abnormality), resulting in an increased amount of protein in the cell. 3. A chromosomal translocation (another type of chromosome abnormality). There are 2 different types of chromosomal translocations that can occur:
https://en.wikipedia.org/wiki/Oncogene
passage: ### Cartesian coordinates In three-dimensional Cartesian coordinates, the divergence of a continuously differentiable vector field $$ \mathbf{F} = F_x\mathbf{i} + F_y\mathbf{j} + F_z\mathbf{k} $$ is defined as the scalar-valued function: $$ \operatorname{div} \mathbf{F} = \nabla\cdot\mathbf{F} = \left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z} \right) \cdot (F_x,F_y,F_z) = \frac{\partial F_x}{\partial x}+\frac{\partial F_y}{\partial y}+\frac{\partial F_z}{\partial z}. $$ Although expressed in terms of coordinates, the result is invariant under rotations, as the physical interpretation suggests. This is because the trace of the Jacobian matrix of an n-dimensional vector field in n-dimensional space is invariant under any invertible linear transformation. The common notation for the divergence $$ \nabla\cdot\mathbf{F} $$ is a convenient mnemonic, where the dot denotes an operation reminiscent of the dot product: take the components of the $$ \nabla $$ operator (see del), apply them to the corresponding components of $$ \mathbf{F} $$ , and sum the results. Because applying an operator is different from multiplying the components, this is considered an abuse of notation.
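A small symbolic check of the Cartesian formula, using SymPy with an arbitrarily chosen example field (the field itself is not from the source):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
Fx, Fy, Fz = x * y, z * sp.sin(y), z ** 2        # example field F = (x*y, z*sin(y), z^2)
div_F = sp.diff(Fx, x) + sp.diff(Fy, y) + sp.diff(Fz, z)
print(div_F)                                      # y + z*cos(y) + 2*z
```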
https://en.wikipedia.org/wiki/Divergence
passage: $$ d=\frac{\lambda}{2n \sin\alpha} \approx \frac{\lambda}{2\,\textrm{NA}} $$ where n is the index of refraction of the medium in which the lens is working and α is the maximum half-angle of the cone of light that can enter the lens (see numerical aperture). Early twentieth century scientists theorized ways of getting around the limitations of the relatively large wavelength of visible light (wavelengths of 400–700 nanometres) by using electrons. Like all matter, electrons have both wave and particle properties (matter wave), and their wave-like properties mean that a beam of electrons can be focused and diffracted much like light can. The wavelength of electrons is related to their kinetic energy via the de Broglie equation, which says that the wavelength is inversely proportional to the momentum. Taking into account relativistic effects (as in a TEM an electron's velocity is a substantial fraction of the speed of light, c) the wavelength is $$ \lambda_e = \frac{h}{\sqrt{2m_0E\left(1+\frac{E}{2m_0c^2}\right)}} $$ where h is the Planck constant, m0 is the rest mass of an electron and E is the kinetic energy of the accelerated electron.
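As a numerical illustration of the relativistic de Broglie formula above (the 200 keV accelerating energy is an example value typical of many TEMs, not taken from the source):

```python
import math

h = 6.62607015e-34        # Planck constant, J s
m0 = 9.1093837015e-31     # electron rest mass, kg
c = 2.99792458e8          # speed of light, m/s
e = 1.602176634e-19       # elementary charge, C

E = 200e3 * e             # kinetic energy of a 200 keV electron, in joules
lam = h / math.sqrt(2 * m0 * E * (1 + E / (2 * m0 * c ** 2)))
print(f"electron wavelength: {lam * 1e12:.2f} pm")   # ~2.51 pm, far below optical wavelengths
```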
https://en.wikipedia.org/wiki/Transmission_electron_microscopy
passage: If $$ \mathfrak{g} $$ is a finite-dimensional semisimple Lie algebra over a field of characteristic zero and V is finite-dimensional, then V is semisimple; this is Weyl's complete reducibility theorem. Thus, for semisimple Lie algebras, a classification of irreducible (i.e. simple) representations leads immediately to a classification of all representations. For other Lie algebras, which do not have this special property, classifying the irreducible representations may not help much in classifying general representations. A Lie algebra is said to be reductive if the adjoint representation is semisimple. Certainly, every (finite-dimensional) semisimple Lie algebra $$ \mathfrak g $$ is reductive, since every representation of $$ \mathfrak g $$ is completely reducible, as we have just noted. In the other direction, the definition of a reductive Lie algebra means that it decomposes as a direct sum of ideals (i.e., invariant subspaces for the adjoint representation) that have no nontrivial sub-ideals. Some of these ideals will be one-dimensional and the rest are simple Lie algebras. Thus, a reductive Lie algebra is a direct sum of a commutative algebra and a semisimple algebra.
https://en.wikipedia.org/wiki/Lie_algebra_representation
passage: ## Popular arachnology In the 1970s, arachnids – particularly tarantulas – started to become popular as exotic pets. Many tarantulas consequently became more widely known by their common names, such as Mexican redknee tarantula for Brachypelma hamorii. Various societies now focus on the husbandry, care, study, and captive breeding of tarantulas and other arachnids. They also typically produce journals or newsletters with articles and advice on these subjects. - British Tarantula Society (BTS) website - Deutsche Arachnologische Gesellschaft (DeArGe) website - The American Tarantula Society (ATS) website
https://en.wikipedia.org/wiki/Arachnology