https://en.wikipedia.org/wiki/Prime%20ring
In abstract algebra, a nonzero ring R is a prime ring if for any two elements a and b of R, arb = 0 for all r in R implies that either a = 0 or b = 0. This definition can be regarded as a simultaneous generalization of both integral domains and simple rings.

Although this article discusses the above definition, prime ring may also refer to the minimal nonzero subring of a field, which is generated by its identity element 1 and determined by its characteristic. For a characteristic 0 field, the prime ring is the integers, and for a characteristic p field (with p a prime number) the prime ring is the finite field of order p (cf. Prime field).

Equivalent definitions

A ring R is prime if and only if the zero ideal {0} is a prime ideal in the noncommutative sense. This being the case, the equivalent conditions for prime ideals yield the following equivalent conditions for R to be a prime ring:

For any two ideals A and B of R, AB = {0} implies A = {0} or B = {0}.
For any two right ideals A and B of R, AB = {0} implies A = {0} or B = {0}.
For any two left ideals A and B of R, AB = {0} implies A = {0} or B = {0}.

Using these conditions it can be checked that the following are equivalent to R being a prime ring:

All nonzero right ideals are faithful as right R-modules.
All nonzero left ideals are faithful as left R-modules.

Examples

Any domain is a prime ring. Any simple ring is a prime ring, and more generally: every left or right primitive ring is a prime ring. Any matrix ring over an integral domain is a prime ring. In particular, the ring of 2 × 2 integer matrices is a prime ring.

Properties

A commutative ring is a prime ring if and only if it is an integral domain. A ring is prime if and only if its zero ideal is a prime ideal. A nonzero ring is prime if and only if the monoid of its ideals lacks zero divisors. The ring of matrices over a prime ring is again a prime ring.
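The defining condition can be checked by brute force in a small finite ring. The sketch below (the helper name is ours, chosen for illustration) tests whether Z/nZ is a prime ring; since Z/nZ is commutative, the check succeeds exactly when n is prime.

```python
def is_prime_ring_zn(n):
    """Brute-force test of the prime-ring condition in Z/nZ:
    for all nonzero a, b there must exist r with a*r*b != 0 (mod n)."""
    for a in range(1, n):
        for b in range(1, n):
            if all((a * r * b) % n == 0 for r in range(n)):
                return False  # arb = 0 for every r, yet a and b are nonzero
    return True

# Z/5Z is an integral domain, hence a prime ring; Z/6Z is not (2*r*3 = 0 mod 6).
assert is_prime_ring_zn(5)
assert not is_prime_ring_zn(6)
```

For a commutative ring this brute-force condition reduces to the integral-domain condition, matching the first property listed above.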
https://en.wikipedia.org/wiki/Matrix%20ring
In abstract algebra, a matrix ring is a set of matrices with entries in a ring R that form a ring under matrix addition and matrix multiplication. The set of all n × n matrices with entries in R is a matrix ring denoted Mn(R) (alternative notation: Matn(R)). Some sets of infinite matrices form infinite matrix rings. Any subring of a matrix ring is a matrix ring. Over a rng, one can form matrix rngs.

When R is a commutative ring, the matrix ring Mn(R) is an associative algebra over R, and may be called a matrix algebra. In this setting, if M is a matrix and r is in R, then the matrix rM is the matrix M with each of its entries multiplied by r.

Examples

The set of all square n × n matrices over R, denoted Mn(R). This is sometimes called the "full ring of n-by-n matrices".
The set of all upper triangular matrices over R.
The set of all lower triangular matrices over R.
The set of all diagonal matrices over R. This subalgebra of Mn(R) is isomorphic to the direct product of n copies of R.
For any index set I, the ring of endomorphisms of the free right R-module ⊕i∈I R is isomorphic to the ring of column-finite matrices whose entries are indexed by I × I and whose columns each contain only finitely many nonzero entries. The ring of endomorphisms of the same module considered as a left R-module is isomorphic to the ring of row-finite matrices.
If R is a Banach algebra, then the condition of row or column finiteness in the previous point can be relaxed. With the norm in place, absolutely convergent series can be used instead of finite sums. For example, the matrices whose column sums are absolutely convergent series form a ring. Analogously, the matrices whose row sums are absolutely convergent series also form a ring. This idea can be used to represent operators on Hilbert spaces, for example.
The intersection of the row-finite and column-finite matrix rings also forms a ring.
If R is commutative, then Mn(R) has a structure of a *-algebra over R, where the involution * on Mn(R) is matrix transposition.

If A is a C*-algebra, then Mn(A) is another C*-algebra. If A is non-unital, then Mn(A) is also non-unital. By the Gelfand–Naimark theorem, there exists a Hilbert space H and an isometric *-isomorphism from A to a norm-closed subalgebra of the algebra B(H) of continuous operators; this identifies Mn(A) with a subalgebra of B(H⊕n). For simplicity, if we further suppose that H is separable and A ⊆ B(H) is a unital C*-algebra, we can break up A into a matrix ring over a smaller C*-algebra. One can do so by fixing a projection p and hence its orthogonal projection 1 − p; one can identify A with the 2 × 2 block algebra whose entries are pAp, pA(1 − p), (1 − p)Ap and (1 − p)A(1 − p), where matrix multiplication works as intended because of the orthogonality of the projections. In order to identify A with a matrix ring over a C*-algebra, we require that p and 1 − p have the same "rank"; more precisely, we need that p and 1 − p are Murray–von Neumann equivalent, i.e., there exists a partial isometry u such that u*u = p and uu* = 1 − p. One can easily generalize this
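The claim above that the diagonal matrices form a subalgebra isomorphic to the direct product of n copies of R can be verified directly: multiplying diagonal matrices with the full matrix-product formula gives exactly the componentwise product of their diagonals. A small sketch over the integers (helper names are ours):

```python
def matmul(A, B):
    """Ordinary matrix product over the integers."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def diag(v):
    """Embed a tuple of ring elements as a diagonal matrix."""
    n = len(v)
    return [[v[i] if i == j else 0 for j in range(n)] for i in range(n)]

# diag(v) * diag(w) equals the diagonal matrix of the componentwise product,
# so diagonal matrices multiply exactly like the direct product R x ... x R.
v, w = [2, 3, 5], [7, 11, 13]
assert matmul(diag(v), diag(w)) == diag([a * b for a, b in zip(v, w)])
```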
https://en.wikipedia.org/wiki/Domain%20%28ring%20theory%29
In algebra, a domain is a nonzero ring in which ab = 0 implies a = 0 or b = 0. (Sometimes such a ring is said to "have the zero-product property".) Equivalently, a domain is a nonzero ring in which 0 is the only left zero divisor (or equivalently, the only right zero divisor). A commutative domain is called an integral domain. Mathematical literature contains multiple variants of the definition of "domain".

Examples and non-examples

The ring Z/6Z is not a domain, because the images of 2 and 3 in this ring are nonzero elements with product 0. More generally, for a positive integer n, the ring Z/nZ is a domain if and only if n is prime.
A finite domain is automatically a finite field, by Wedderburn's little theorem.
The quaternions form a noncommutative domain. More generally, any division algebra is a domain, since all its nonzero elements are invertible.
The set of all integral quaternions is a noncommutative ring which is a subring of the quaternions, hence a noncommutative domain.
A matrix ring Mn(R) for n ≥ 2 is never a domain: if R is nonzero, such a matrix ring has nonzero zero divisors and even nilpotent elements other than 0. For example, the square of the matrix unit E12 is 0.
The tensor algebra of a vector space, or equivalently, the algebra of polynomials in noncommuting variables over a field, is a domain. This may be proved using an ordering on the noncommutative monomials.
If R is a domain and S is an Ore extension of R, then S is a domain.
The Weyl algebra is a noncommutative domain.
The universal enveloping algebra of any Lie algebra over a field is a domain. The proof uses the standard filtration on the universal enveloping algebra and the Poincaré–Birkhoff–Witt theorem.

Group rings and the zero divisor problem

Suppose that G is a group and K is a field. Is the group ring R = K[G] a domain? If an element g has finite order k > 1, the identity (1 − g)(1 + g + ⋯ + g^(k−1)) = 1 − g^k = 0 shows that 1 − g is a zero divisor in R.
The zero divisor problem asks whether this is the only obstruction; in other words: given a field K and a torsion-free group G, is it true that K[G] contains no zero divisors? No counterexamples are known, but the problem remains open in general (as of 2017).

For many special classes of groups, the answer is affirmative. Farkas and Snider proved in 1976 that if G is a torsion-free polycyclic-by-finite group and char K = 0, then the group ring K[G] is a domain. Later (1980) Cliff removed the restriction on the characteristic of the field. In 1988, Kropholler, Linnell and Moody generalized these results to the case of torsion-free solvable and solvable-by-finite groups. Earlier (1965) work of Michel Lazard, whose importance was not appreciated by the specialists in the field for about 20 years, had dealt with the case where K is the ring of p-adic integers and G is the pth congruence subgroup of GL(n, Zp).

Spectrum of an integral domain

Zero divisors have a topological interpretation, at least in the case of commutative rings: a ring R is an integral domain if and only if it is reduced and its spec
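The nilpotency of the matrix unit E12 claimed above is a one-line computation; the sketch below squares E12 in the ring of 2 × 2 integer matrices, exhibiting a nonzero nilpotent and hence a zero divisor, so matrix rings with n ≥ 2 are never domains.

```python
def matmul2(A, B):
    """2 x 2 integer matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

E12 = [[0, 1], [0, 0]]  # the matrix unit with a single 1 in position (1, 2)

# E12 is nonzero, yet E12 * E12 = 0: a nonzero nilpotent, hence a zero divisor.
assert matmul2(E12, E12) == [[0, 0], [0, 0]]
```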
https://en.wikipedia.org/wiki/Directional%20derivative
A directional derivative is a concept in multivariable calculus that measures the rate at which a function changes in a particular direction at a given point. The directional derivative of a multivariable differentiable (scalar) function along a given vector v at a given point x intuitively represents the instantaneous rate of change of the function, moving through x with a velocity specified by v.

The directional derivative of a scalar function f with respect to a vector v at a point (e.g., position) x may be denoted by any of the following: ∇v f(x), Dv f(x), Df(x)(v), or ∂f/∂v (x). It therefore generalizes the notion of a partial derivative, in which the rate of change is taken along one of the curvilinear coordinate curves, all other coordinates being constant. The directional derivative is a special case of the Gateaux derivative.

Definition

The directional derivative of a scalar function f along a vector v is the function ∇v f defined by the limit

∇v f(x) = lim(h → 0) [f(x + hv) − f(x)] / h.

This definition is valid in a broad range of contexts, for example where the norm of a vector (and hence a unit vector) is undefined.

For differentiable functions

If the function f is differentiable at x, then the directional derivative exists along any unit vector v at x, and one has

∇v f(x) = ∇f(x) · v,

where the ∇ on the right denotes the gradient, · is the dot product and v is a unit vector. This follows from defining a path h(t) = x + tv and using the definition of the derivative as a limit, which can be calculated along this path to give the formula above. Intuitively, the directional derivative of f at a point x represents the rate of change of f, in the direction of v with respect to time, when moving past x.

Using only direction of vector

In a Euclidean space, some authors define the directional derivative to be with respect to an arbitrary nonzero vector v after normalization, thus being independent of its magnitude and depending only on its direction. This definition gives the rate of increase of f per unit of distance moved in the direction given by v.
In this case, one has

∇v f(x) = lim(h → 0) [f(x + hv) − f(x)] / (h |v|),

or, in case f is differentiable at x,

∇v f(x) = ∇f(x) · v / |v|.

Restriction to a unit vector

In the context of a function on a Euclidean space, some texts restrict the vector v to being a unit vector. With this restriction, both the above definitions are equivalent.

Properties

Many of the familiar properties of the ordinary derivative hold for the directional derivative. These include, for any functions f and g defined in a neighborhood of, and differentiable at, p:

sum rule: ∇v(f + g) = ∇v f + ∇v g
constant factor rule: for any constant c, ∇v(cf) = c ∇v f
product rule (or Leibniz's rule): ∇v(fg) = g ∇v f + f ∇v g
chain rule: if g is differentiable at p and h is differentiable at g(p), then ∇v(h ∘ g)(p) = h′(g(p)) ∇v g(p)

In differential geometry

Let M be a differentiable manifold and p a point of M. Suppose that f is a function defined in a neighborhood of p, and differentiable at p. If v is a tangent vector to M at p, then the directional derivative of f along v, denoted variously as df(v) (see Exterior derivative), ∇v f(p) (see Covariant derivative), Lv f(p) (see Lie derivative), or vp(f) (see Tangent space), can be defined as follows. Let γ be a
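The identity ∇v f(x) = ∇f(x) · v can be checked numerically. The sketch below (the function and point are chosen arbitrarily for illustration) compares a finite-difference quotient along a unit vector with the dot product of the analytic gradient and that vector.

```python
def f(x, y):
    return x**2 * y + y**3

def grad_f(x, y):
    return (2 * x * y, x**2 + 3 * y**2)  # analytic gradient of f

x0, y0 = 1.0, 2.0
v = (3 / 5, 4 / 5)  # a unit vector

h = 1e-6
numeric = (f(x0 + h * v[0], y0 + h * v[1]) - f(x0, y0)) / h
analytic = grad_f(x0, y0)[0] * v[0] + grad_f(x0, y0)[1] * v[1]

# The difference quotient converges to the gradient dotted with v.
assert abs(numeric - analytic) < 1e-4
```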
https://en.wikipedia.org/wiki/Axiom%20%28computer%20algebra%20system%29
Axiom is a free, general-purpose computer algebra system. It consists of an interpreter environment, a compiler and a library, which defines a strongly typed hierarchy.

History

Two computer algebra systems named Scratchpad were developed by IBM. The first one was started in 1965 by James Griesmer at the request of Ralph Gomory, and written in Fortran. The development of this software was stopped before any public release. The second Scratchpad, originally named Scratchpad II, was developed from 1977 on, at Thomas J. Watson Research Center, under the direction of Richard Dimick Jenks.

The design is principally due to Richard D. Jenks (IBM Research), James H. Davenport (University of Bath), Barry M. Trager (IBM Research), David Y. Y. Yun (Southern Methodist University) and Victor S. Miller (IBM Research). Early consultants on the project were David Barton (University of California, Berkeley) and James W. Thatcher (IBM Research). Implementation included Robert Sutor (IBM Research), Scott C. Morrison (University of California, Berkeley), Christine J. Sundaresan (IBM Research), Timothy Daly (IBM Research), Patrizia Gianni (University of Pisa), Albrecht Fortenbacher (Universitaet Karlsruhe), Stephen M. Watt (IBM Research and University of Waterloo), Josh Cohen (Yale University), Michael Rothstein (Kent State University), Manuel Bronstein (IBM Research), Michael Monagan (Simon Fraser University), Jonathan Steinbach (IBM Research), William Burge (IBM Research), Jim Wen (IBM Research), William Sit (City College of New York), and Clifton Williamson (IBM Research).

Scratchpad II was renamed Axiom when IBM decided, circa 1990, to make it a commercial product. A few years later, it was sold to NAG. In 2001, it was withdrawn from the market and re-released under the Modified BSD License. Since then, the project's lead developer has been Tim Daly.
In 2007, Axiom was forked twice, originating two different open-source projects: OpenAxiom and FriCAS, following "serious disagreement about project goals". The Axiom project continued to be developed by Tim Daly. The current research direction is "Proving Axiom Sane", that is, logical, rational, judicious, and sound.

Documentation

Axiom is a literate program. The source code is becoming available in a set of volumes, which are available on the axiom-developer.org website. These volumes contain the actual source code of the system. The currently available documents are:

Combined Table of Contents
Volume 0: Axiom Jenks and Sutor—The main textbook
Volume 1: Axiom Tutorial—A simple introduction
Volume 2: Axiom Users Guide—Detailed examples of domain use (incomplete)
Volume 3: Axiom Programmers Guide—Guided examples of program writing (incomplete)
Volume 4: Axiom Developers Guide—Short essays on developer-specific topics (incomplete)
Volume 5: Axiom Interpreter—Source code for Axiom interpreter (incomplete)
Volume 6: Axiom Command—Source code for system commands and scripts (incomplete)
Volume 7: Axiom Hy
https://en.wikipedia.org/wiki/Joseph%20Bertrand
Joseph Louis François Bertrand (11 March 1822 – 5 April 1900) was a French mathematician who worked in the fields of number theory, differential geometry, probability theory, economics and thermodynamics.

Biography

Joseph Bertrand was the son of physician Alexandre Jacques François Bertrand and the brother of archaeologist Alexandre Bertrand. His father died when Joseph was only nine years old, but by that age Joseph had already grasped algebraic and elementary geometric concepts and could speak Latin fluently. At eleven years old he attended the courses of the École Polytechnique as an auditor (open courses). Between the ages of eleven and seventeen he obtained two bachelor's degrees, a licence and a PhD, with a thesis on the mathematical theory of electricity, and was admitted first in the 1839 entrance examination of the École Polytechnique.

Bertrand was a professor at the École Polytechnique and the Collège de France, and was a member of the Paris Academy of Sciences, where he was its permanent secretary for twenty-six years. He conjectured, in 1845, that there is at least one prime between n and 2n − 2 for every n > 3. Chebyshev proved this conjecture, now called Bertrand's postulate, in 1850. Bertrand was also famous for a paradox in the field of probability, now known as Bertrand's paradox. There is another paradox in game theory that is named after him, called the Bertrand paradox. In 1849, he was the first to define real numbers using what is now called a Dedekind cut.

Bertrand translated into French Carl Friedrich Gauss's work on the theory of errors and the method of least squares. In the field of economics, he reviewed work on oligopoly theory, specifically the Cournot competition model (1838) of French mathematician Antoine Augustin Cournot.
His Bertrand competition model (1883) argued that Cournot had reached a very misleading conclusion, and Bertrand reworked the model using prices rather than quantities as the strategic variables, thus showing that the equilibrium price was simply the competitive price. His book Thermodynamique points out, in Chapter XII, that thermodynamic entropy and temperature are only defined for reversible processes; he was one of the first people to point this out. In 1858 he was elected a foreign member of the Royal Swedish Academy of Sciences.

Works by Bertrand

Traité de calcul différentiel et de calcul intégral (Paris: Gauthier-Villars, 1864–1870) (two-volume treatise on calculus)
Rapport sur les progrès les plus récents de l'analyse mathématique (Paris: Imprimerie Impériale, 1867) (report on recent progress in mathematical analysis)
Traité d'arithmétique (L. Hachette, 1849) (arithmetic)
Thermodynamique (Paris: Gauthier-Villars, 1887)
Méthode des moindres carrés (Mallet-Bachelier, 1855) (translation of Gauss's work on least squares)
Leçons sur la théorie mathématique de l'électricité / professées au Collège de France (Paris: Gauthier-Villars et fils,
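Bertrand's conjecture mentioned above (a prime strictly between n and 2n − 2 for every n > 3) is easy to verify for small n. A brute-force sketch:

```python
def is_prime(k):
    """Trial-division primality test, adequate for small k."""
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k**0.5) + 1))

# Bertrand's postulate: for every n > 3 there is a prime p with n < p < 2n - 2.
for n in range(4, 2000):
    assert any(is_prime(p) for p in range(n + 1, 2 * n - 2)), n
```

Chebyshev's 1850 proof covers all n at once; the loop above only spot-checks the statement on an initial range.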
https://en.wikipedia.org/wiki/Jacobson%20density%20theorem
In mathematics, more specifically non-commutative ring theory, modern algebra, and module theory, the Jacobson density theorem is a theorem concerning simple modules over a ring R. The theorem can be applied to show that any primitive ring can be viewed as a "dense" subring of the ring of linear transformations of a vector space. This theorem first appeared in the literature in 1945, in the famous paper "Structure Theory of Simple Rings Without Finiteness Assumptions" by Nathan Jacobson. It can be viewed as a kind of generalization of the Artin–Wedderburn theorem's conclusion about the structure of simple Artinian rings.

Motivation and formal statement

Let R be a ring and let U be a simple right R-module. If u is a non-zero element of U, then uR = U (where uR is the cyclic submodule of U generated by u). Therefore, if u, v are non-zero elements of U, there is an element of R that induces an endomorphism of U transforming u to v. The natural question now is whether this can be generalized to arbitrary (finite) tuples of elements. More precisely, find necessary and sufficient conditions on the tuples (u1, ..., un) and (v1, ..., vn) separately, so that there is an element r of R with the property that ui·r = vi for all i. If D is the set of all R-module endomorphisms of U, then Schur's lemma asserts that D is a division ring, and the Jacobson density theorem answers the question on tuples in the affirmative, provided that the ui are linearly independent over D.

With the above in mind, the theorem may be stated this way:

The Jacobson density theorem. Let U be a simple right R-module, D = End(U_R), and X ⊆ U a finite and D-linearly independent set. If A is a D-linear transformation on U, then there exists r in R such that A(x) = x·r for all x in X.

Proof

In the Jacobson density theorem, the right R-module U is simultaneously viewed as a left D-module where D = End(U_R), in the natural way: d·u = d(u). It can be verified that this is indeed a left module structure on U. As noted before, Schur's lemma proves D is a division ring if U is simple, and so U is a vector space over D. The proof also relies on the following theorem proven in p.
185:

Theorem. Let U be a simple right R-module, D = End(U_R), and X ⊆ U a finite set. Write I(X) for the annihilator of X in R. Let u be in U with u·I(X) = 0. Then u is in XD, the D-span of X.

Proof of the Jacobson density theorem

We use induction on |X|. If X is empty, then the theorem is vacuously true and the base case for induction is verified.

Assume X is non-empty, let x be an element of X and write Y = X \ {x}. If A is any D-linear transformation on U, by the induction hypothesis there exists s in R such that A(y) = y·s for all y in Y. Write I = I(Y). It is easily seen that x·I is a submodule of U. If x·I = 0, then the previous theorem implies that x would be in the D-span of Y, contradicting the D-linear independence of X; therefore x·I ≠ 0. Since U is simple, we have x·I = U. Since A(x) − x·s is in U = x·I, there exists i in I such that x·i = A(x) − x·s.

Define r = s + i and observe that for all y in Y we have:

y·r = y·s + y·i = y·s = A(y), since i annihilates Y.

Now we do the same calculation for x:

x·r = x·s + x·i = x·s + A(x) − x·s = A(x).

Therefore, A(z) = z·r for all z in X, as desired. This completes the inductive step of the proof. It follows now from mathematical induction that the theorem is true for finite s
https://en.wikipedia.org/wiki/Fisher%20information%20metric
In information geometry, the Fisher information metric is a particular Riemannian metric which can be defined on a smooth statistical manifold, i.e., a smooth manifold whose points are probability measures defined on a common probability space. It can be used to calculate the informational difference between measurements.

The metric is interesting in several aspects. By Chentsov's theorem, the Fisher information metric on statistical models is the only Riemannian metric (up to rescaling) that is invariant under sufficient statistics. It can also be understood to be the infinitesimal form of the relative entropy (i.e., the Kullback–Leibler divergence); specifically, it is the Hessian of the divergence. Alternately, it can be understood as the metric induced by the flat-space Euclidean metric, after appropriate changes of variable. When extended to complex projective Hilbert space, it becomes the Fubini–Study metric; when written in terms of mixed states, it is the quantum Bures metric.

Considered purely as a matrix, it is known as the Fisher information matrix. Considered as a measurement technique, where it is used to estimate hidden parameters in terms of observed random variables, it is known as the observed information.

Definition

Given a statistical manifold with coordinates θ = (θ1, θ2, ..., θn), one writes p(x, θ) for the probability density as a function of θ. Here x is drawn from the value space R for a (discrete or continuous) random variable X. The probability is normalized by

∫_R p(x, θ) dx = 1.

The Fisher information metric then takes the form:

g_jk(θ) = ∫_R (∂ log p(x, θ) / ∂θj) (∂ log p(x, θ) / ∂θk) p(x, θ) dx.

The integral is performed over all values x in R. The variable θ is now a coordinate on a Riemann manifold. The labels j and k index the local coordinate axes on the manifold.
When the probability is derived from the Gibbs measure, as it would be for any Markovian process, then the coordinates θ can also be understood to be Lagrange multipliers; Lagrange multipliers are used to enforce constraints, such as holding the expectation value of some quantity constant. If there are n constraints holding n different expectation values constant, then the dimension of the manifold is n dimensions smaller than the original space. In this case, the metric can be explicitly derived from the partition function; a derivation and discussion is presented there.

Substituting i(x, θ) = −log p(x, θ) from information theory, an equivalent form of the above definition is:

g_jk(θ) = ∫_R (∂² i(x, θ) / ∂θj ∂θk) p(x, θ) dx = −∫_R (∂² log p(x, θ) / ∂θj ∂θk) p(x, θ) dx.

To show that the equivalent form equals the above definition, note that applying ∂/∂θk to the normalization condition ∫_R p(x, θ) dx = 1 gives ∫_R (∂ log p / ∂θk) p dx = 0, and applying ∂/∂θj on both sides then yields ∫_R (∂² log p / ∂θj ∂θk) p dx + ∫_R (∂ log p / ∂θj)(∂ log p / ∂θk) p dx = 0.

Relation to the Kullback–Leibler divergence

Alternatively, the metric can be obtained as the second derivative of the relative entropy or Kullback–Leibler divergence. To obtain this, one considers two probability distributions p(x, θ) and p(x, θ′), which are infinitesimally close to one another, so that θ′ = θ + Δθ, with Δθ an infinitesimally small change of θ in the j direction. Then, since the Kullback–Leibler divergence has an absolute minimum of 0 when p = q, one has an expansion up to second order in o
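As a concrete check of the definition, for a Bernoulli variable with P(X = 1) = θ the Fisher metric has the well-known closed form g(θ) = 1/(θ(1 − θ)). The sketch below (helper name is ours) evaluates the defining expectation with finite-difference derivatives of log p and compares it with the closed form.

```python
import math

def fisher_bernoulli(theta, h=1e-6):
    """Evaluate g(theta) = sum_x (d log p / d theta)^2 * p(x, theta)
    for the Bernoulli family p(1, t) = t, p(0, t) = 1 - t,
    using a central finite difference for d log p / d theta."""
    def p(x, t):
        return t if x == 1 else 1 - t

    total = 0.0
    for x in (0, 1):
        dlogp = (math.log(p(x, theta + h)) - math.log(p(x, theta - h))) / (2 * h)
        total += dlogp**2 * p(x, theta)
    return total

theta = 0.3
# Closed form: 1 / (theta * (1 - theta))
assert abs(fisher_bernoulli(theta) - 1 / (theta * (1 - theta))) < 1e-4
```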
https://en.wikipedia.org/wiki/Eckmann%E2%80%93Hilton%20argument
In mathematics, the Eckmann–Hilton argument (or Eckmann–Hilton principle or Eckmann–Hilton theorem) is an argument about two unital magma structures on a set where one is a homomorphism for the other. Given this, the structures are the same, and the resulting magma is a commutative monoid. This can then be used to prove the commutativity of the higher homotopy groups. The principle is named after Beno Eckmann and Peter Hilton, who used it in a 1962 paper.

The Eckmann–Hilton result

Let X be a set equipped with two binary operations, which we will write ∘ and ⊗, and suppose:

∘ and ⊗ are both unital, meaning that there are identity elements 1∘ and 1⊗ of X such that 1∘ ∘ a = a = a ∘ 1∘ and 1⊗ ⊗ a = a = a ⊗ 1⊗, for all a ∈ X.
(a ⊗ b) ∘ (c ⊗ d) = (a ∘ c) ⊗ (b ∘ d) for all a, b, c, d ∈ X.

Then ∘ and ⊗ are the same and in fact commutative and associative.

Remarks

The operations ∘ and ⊗ are often referred to as monoid structures or multiplications, but this suggests they are assumed to be associative, a property that is not required for the proof. In fact, associativity follows. Likewise, we do not have to require that the two operations have the same neutral element; this is a consequence.

Proof

First, observe that the units of the two operations coincide:

1∘ = 1∘ ∘ 1∘ = (1⊗ ⊗ 1∘) ∘ (1∘ ⊗ 1⊗) = (1⊗ ∘ 1∘) ⊗ (1∘ ∘ 1⊗) = 1⊗ ⊗ 1⊗ = 1⊗.

Write 1 for this common unit. Now, let a, b ∈ X. Then

a ∘ b = (1 ⊗ a) ∘ (b ⊗ 1) = (1 ∘ b) ⊗ (a ∘ 1) = b ⊗ a,

and likewise

a ∘ b = (a ⊗ 1) ∘ (1 ⊗ b) = (a ∘ 1) ⊗ (1 ∘ b) = a ⊗ b.

This establishes that the two operations coincide and are commutative. For associativity,

(a ⊗ b) ⊗ c = (a ⊗ b) ∘ (1 ⊗ c) = (a ∘ 1) ⊗ (b ∘ c) = a ⊗ (b ⊗ c),

where we use the fact, just proved, that ∘ and ⊗ coincide.

Two-dimensional proof

The above proof also has a "two-dimensional" presentation that better illustrates the application to higher homotopy groups. For this version of the proof, we write the two operations as vertical and horizontal juxtaposition. The interchange property can then be expressed as follows: for all a, b, c, d, composing a 2 × 2 array of elements first horizontally and then vertically gives the same result as composing first vertically and then horizontally, so the composite of such an array can be written without ambiguity. Let 1v and 1h be the units for vertical and horizontal composition respectively. Placing the two units on one diagonal of a 2 × 2 array and evaluating the array in both orders shows that both units are equal. Now, for all a, b, placing a and b on one diagonal of a 2 × 2 array, with units in the remaining positions, and evaluating in both orders shows that horizontal composition is the same as vertical composition and that both operations are commutative. Finally, a similar 2 × 2 arrangement of a, b, c and a unit shows that composition is associative.
Remarks

If the operations are associative, each one defines the structure of a monoid on X, and the conditions above are equivalent to the more abstract condition that ⊗ is a monoid homomorphism with respect to ∘ (or vice versa). An even more abstract way of stating the theorem is: if X is a monoid object in the category of monoids, then X is in fact a commutative monoid.

It is important that a similar argument does NOT give such a triviality result in the case of monoid objects in the categories of small categories or of groupoids. Instead the notion of group object in the category of groupoids turns out to be equivalent to the notion of crossed module. This leads to the idea of using multiple groupoid objects in homotopy theory.

More generally, the Eckmann–Hilton argument is a special case of the use of the interchange law in the theory of (strict) double and multiple categories. A (strict) double category is a set, or class, equipped with two category structures, each of which is a morphism for the other structure. If the compositions in the two category structur
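The theorem can be confirmed exhaustively on a tiny carrier set. The sketch below (helper code is ours, written for illustration) enumerates every pair of unital binary operations on a two-element set and checks that whenever the interchange law holds, the units agree and the two operations coincide and are commutative.

```python
from itertools import product

def all_unital_ops(n):
    """All unital binary operations on {0, ..., n-1}, as (table, unit) pairs."""
    elems = range(n)
    result = []
    for vals in product(elems, repeat=n * n):
        op = {(a, b): vals[a * n + b] for a in elems for b in elems}
        units = [e for e in elems
                 if all(op[(e, x)] == x == op[(x, e)] for x in elems)]
        if units:
            result.append((op, units[0]))  # a unital magma has a unique unit
    return result

def check_eckmann_hilton(n=2):
    elems = range(n)
    ops = all_unital_ops(n)
    for (circ, e1), (tens, e2) in product(ops, repeat=2):
        # interchange law: (a (x) b) o (c (x) d) == (a o c) (x) (b o d)
        interchange = all(
            circ[(tens[(a, b)], tens[(c, d)])] == tens[(circ[(a, c)], circ[(b, d)])]
            for a, b, c, d in product(elems, repeat=4))
        if interchange:
            assert e1 == e2  # units coincide
            for a, b in product(elems, repeat=2):
                # operations coincide and are commutative
                assert circ[(a, b)] == tens[(a, b)] == circ[(b, a)]
    return True

assert check_eckmann_hilton(2)
```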
https://en.wikipedia.org/wiki/Artinian%20ring
In mathematics, specifically abstract algebra, an Artinian ring (sometimes Artin ring) is a ring that satisfies the descending chain condition on (one-sided) ideals; that is, there is no infinite descending sequence of ideals. Artinian rings are named after Emil Artin, who first discovered that the descending chain condition for ideals simultaneously generalizes finite rings and rings that are finite-dimensional vector spaces over fields. The definition of Artinian rings may be restated by interchanging the descending chain condition with an equivalent notion: the minimum condition.

Precisely, a ring is left Artinian if it satisfies the descending chain condition on left ideals, right Artinian if it satisfies the descending chain condition on right ideals, and Artinian or two-sided Artinian if it is both left and right Artinian. For commutative rings the left and right definitions coincide, but in general they are distinct from each other.

The Wedderburn–Artin theorem characterizes every simple Artinian ring as a ring of matrices over a division ring. This implies that a simple ring is left Artinian if and only if it is right Artinian. The same definition and terminology can be applied to modules, with ideals replaced by submodules.

Although the descending chain condition appears dual to the ascending chain condition, in rings it is in fact the stronger condition. Specifically, a consequence of the Akizuki–Hopkins–Levitzki theorem is that a left (resp. right) Artinian ring is automatically a left (resp. right) Noetherian ring. This is not true for general modules; that is, an Artinian module need not be a Noetherian module.

Examples and counterexamples

An integral domain is Artinian if and only if it is a field.
A ring with finitely many, say left, ideals is left Artinian. In particular, a finite ring (e.g., Z/nZ) is left and right Artinian.
Let k be a field. Then k[t]/(t^n) is Artinian for every positive integer n. Similarly, k[x, y]/(x^2, y^3, xy^2) is an Artinian ring with maximal ideal (x, y).
Let E be an endomorphism of a finite-dimensional vector space V. Then the subalgebra generated by E is a commutative Artinian ring.
If I is a nonzero ideal of a Dedekind domain A, then A/I is a principal Artinian ring.
For each n ≥ 1, the full matrix ring Mn(R) over a left Artinian (resp. left Noetherian) ring R is left Artinian (resp. left Noetherian).

The following two are examples of non-Artinian rings.

If R is any nonzero ring, then the polynomial ring R[x] is not Artinian, since the ideal generated by x^(n+1) is (properly) contained in the ideal generated by x^n for all natural numbers n. In contrast, if R is Noetherian so is R[x], by the Hilbert basis theorem.
The ring of integers Z is a Noetherian ring but is not Artinian.

Modules over Artinian rings

Let M be a left module over a left Artinian ring. Then the following are equivalent (Hopkins' theorem): (i) M is finitely generated, (ii) M has finite length (i.e., has a composition series), (iii) M is Noetherian, (iv) M is Artinian.

Commutative Arti
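The contrast between Z and a finite quotient can be made concrete. In Z the chain of ideals (2) ⊃ (4) ⊃ (8) ⊃ … descends strictly forever, while in the finite (hence Artinian) ring Z/64Z the corresponding chain must stabilize. The sketch below lists the sizes of the ideals generated by 2^k in Z/64Z:

```python
n = 64  # working in the finite ring Z/64Z, which is Artinian

# The ideal generated by 2**k in Z/nZ, computed as the set of its elements.
sizes = [len({(2**k * t) % n for t in range(n)}) for k in range(8)]

# The chain (2) > (4) > (8) > ... shrinks until it hits the zero ideal
# and then stabilizes; no infinite strictly descending chain is possible.
assert sizes == [64, 32, 16, 8, 4, 2, 1, 1]
```

In Z itself the ideals 2^k Z are pairwise distinct, so the analogous chain never stabilizes: Z is Noetherian but not Artinian.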
https://en.wikipedia.org/wiki/Schl%C3%A4fli%20symbol
In geometry, the Schläfli symbol is a notation of the form {p, q, r, ...} that defines regular polytopes and tessellations. The Schläfli symbol is named after the 19th-century Swiss mathematician Ludwig Schläfli, who generalized Euclidean geometry to more than three dimensions and discovered all their convex regular polytopes, including the six that occur in four dimensions.

Definition

The Schläfli symbol is a recursive description, starting with {p} for a p-sided regular polygon that is convex. For example, {3} is an equilateral triangle, {4} is a square, {5} a convex regular pentagon, etc.

Regular star polygons are not convex, and their Schläfli symbols {p/q} contain irreducible fractions p/q, where p is the number of vertices, and q is their turning number. Equivalently, {p/q} is created from the vertices of {p}, connected every q. For example, {5/2} is a pentagram; {5/1} is a pentagon.

A regular polyhedron that has q regular p-sided polygon faces around each vertex is represented by {p,q}. For example, the cube has 3 squares around each vertex and is represented by {4,3}. A regular 4-dimensional polytope, with r {p,q} regular polyhedral cells around each edge, is represented by {p,q,r}. For example, a tesseract, {4,3,3}, has 3 cubes, {4,3}, around an edge. In general, a regular polytope {p,q,r,...,y,z} has z {p,q,r,...,y} facets around every peak, where a peak is a vertex in a polyhedron, an edge in a 4-polytope, a face in a 5-polytope, and an (n−3)-face in an n-polytope.

Properties

A regular polytope has a regular vertex figure. The vertex figure of a regular polytope {p,q,r,...,y,z} is {q,r,...,y,z}. Regular polytopes can have star polygon elements, like the pentagram, with symbol {5/2}, represented by the vertices of a pentagon but connected alternately. The Schläfli symbol can represent a finite convex polyhedron, an infinite tessellation of Euclidean space, or an infinite tessellation of hyperbolic space, depending on the angle defect of the construction.
A positive angle defect allows the vertex figure to fold into a higher dimension, so that the polytope loops back into itself and closes up as a finite polytope. A zero angle defect tessellates space of the same dimension as the facets. A negative angle defect cannot exist in ordinary space, but can be constructed in hyperbolic space. Usually, a facet or a vertex figure is assumed to be a finite polytope, but it can sometimes itself be considered a tessellation.

A regular polytope also has a dual polytope, represented by the Schläfli symbol elements in reverse order. A self-dual regular polytope will have a symmetric Schläfli symbol. In addition to describing Euclidean polytopes, Schläfli symbols can be used to describe spherical polytopes or spherical honeycombs.

History and variations

Schläfli's work was almost unknown in his lifetime, and his notation for describing polytopes was rediscovered independently by several others. In particular, Thorold Gosset rediscovered the Schläfli symbol, which he wrote as | p | q | r | ... | z | rather than
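The angle-defect criterion for {p, q} can be checked directly: a convex polyhedron requires the q regular p-gon faces at each vertex to have interior angles summing to less than 360°. A short search (in Python, for illustration) recovers exactly the five Platonic solids:

```python
# Interior angle of a regular p-gon is 180 - 360/p degrees.
# {p, q} closes up as a convex polyhedron iff q such angles leave a
# positive angle defect at each vertex: q * (180 - 360/p) < 360.
platonic = [(p, q)
            for p in range(3, 11)
            for q in range(3, 11)
            if q * (180 - 360 / p) < 360]

# Exactly the tetrahedron {3,3}, octahedron {3,4}, icosahedron {3,5},
# cube {4,3} and dodecahedron {5,3}.
assert platonic == [(3, 3), (3, 4), (3, 5), (4, 3), (5, 3)]
```

A zero defect, such as 3 × 120° for {6,3}, gives a planar tiling instead, and a negative defect corresponds to a hyperbolic tessellation.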
https://en.wikipedia.org/wiki/Curvature%20form
In differential geometry, the curvature form describes curvature of a connection on a principal bundle. The Riemann curvature tensor in Riemannian geometry can be considered as a special case. Definition Let G be a Lie group with Lie algebra , and P → B be a principal G-bundle. Let ω be an Ehresmann connection on P (which is a -valued one-form on P). Then the curvature form is the -valued 2-form on P defined by (In another convention, 1/2 does not appear.) Here stands for exterior derivative, is defined in the article "Lie algebra-valued form" and D denotes the exterior covariant derivative. In other terms, where X, Y are tangent vectors to P. There is also another expression for Ω: if X, Y are horizontal vector fields on P, then where hZ means the horizontal component of Z, on the right we identified a vertical vector field and a Lie algebra element generating it (fundamental vector field), and is the inverse of the normalization factor used by convention in the formula for the exterior derivative. A connection is said to be flat if its curvature vanishes: Ω = 0. Equivalently, a connection is flat if the structure group can be reduced to the same underlying group but with the discrete topology. Curvature form in a vector bundle If E → B is a vector bundle, then one can also think of ω as a matrix of 1-forms and the above formula becomes the structure equation of E. Cartan: where is the wedge product. More precisely, if and denote components of ω and Ω correspondingly, (so each is a usual 1-form and each is a usual 2-form) then For example, for the tangent bundle of a Riemannian manifold, the structure group is O(n) and Ω is a 2-form with values in the Lie algebra of O(n), i.e. the antisymmetric matrices. In this case the form Ω is an alternative description of the curvature tensor, i.e. using the standard notation for the Riemannian curvature tensor. 
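The displayed formulas elided in the definition above can be restated in symbols. With the convention that includes the factor 1/2, the curvature form, its evaluation on tangent vectors, and Cartan's structure equation for a vector bundle (where ω is a matrix of 1-forms) read:

```latex
\Omega = d\omega + \tfrac{1}{2}[\omega \wedge \omega] = D\omega,
\qquad
\Omega(X, Y) = d\omega(X, Y) + \tfrac{1}{2}\bigl[\omega(X), \omega(Y)\bigr]
% Structure equation in a vector bundle, in components:
\Omega^i_{\;j} = d\omega^i_{\;j} + \sum_k \omega^i_{\;k} \wedge \omega^k_{\;j}
```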
Bianchi identities If is the canonical vector-valued 1-form on the frame bundle, the torsion of the connection form is the vector-valued 2-form defined by the structure equation where as above D denotes the exterior covariant derivative. The first Bianchi identity takes the form The second Bianchi identity takes the form and is valid more generally for any connection in a principal bundle. The Bianchi identities can be written in tensor notation as: The contracted Bianchi identities are used to derive the Einstein tensor in the Einstein field equations, the bulk of general theory of relativity. Notes References Shoshichi Kobayashi and Katsumi Nomizu (1963) Foundations of Differential Geometry, Vol.I, Chapter 2.5 Curvature form and structure equation, p 75, Wiley Interscience. See also Connection (principal bundle) Basic introduction to the mathematics of curved spacetime Contracted Bianchi identities Einstein tensor Einstein field equations General theory of relativity Chern-Simons form Curvature of Riemannian manifolds Gauge theory Curvature tensors D
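In symbols, the torsion structure equation and the two Bianchi identities quoted above (with θ the canonical vector-valued 1-form on the frame bundle) are:

```latex
% Torsion (first structure equation):
\Theta = d\theta + \omega \wedge \theta = D\theta
% First Bianchi identity:
D\Theta = \Omega \wedge \theta
% Second Bianchi identity (valid for any principal connection):
D\Omega = 0
% Second Bianchi identity in tensor notation, and its contraction,
% which yields the divergence-free Einstein tensor:
R_{ab[cd;e]} = 0, \qquad
\nabla^{\mu}\Bigl(R_{\mu\nu} - \tfrac{1}{2} g_{\mu\nu} R\Bigr) = 0
```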
https://en.wikipedia.org/wiki/Focal%20point
Focal point may refer to: Focus (optics) Focus (geometry) Conjugate points, also called focal points Focal point (game theory) Unicom Focal Point, a portfolio management software tool Focal point review, a human resources process for employee evaluation Focal Point (album), a 1976 studio album by McCoy Tyner "Focal Point: Mark of the Leaf", a Naruto episode See also Foca Point, Signy Island, South Orkney Islands Focal (disambiguation) Focus (disambiguation)
https://en.wikipedia.org/wiki/Hasse%20principle
In mathematics, Helmut Hasse's local–global principle, also known as the Hasse principle, is the idea that one can find an integer solution to an equation by using the Chinese remainder theorem to piece together solutions modulo powers of each different prime number. This is handled by examining the equation in the completions of the rational numbers: the real numbers and the p-adic numbers. A more formal version of the Hasse principle states that certain types of equations have a rational solution if and only if they have a solution in the real numbers and in the p-adic numbers for each prime p. Intuition Given a polynomial equation with rational coefficients, if it has a rational solution, then this also yields a real solution and a p-adic solution, as the rationals embed in the reals and p-adics: a global solution yields local solutions at each prime. The Hasse principle asks when the reverse can be done, or rather, asks what the obstruction is: when can you patch together solutions over the reals and p-adics to yield a solution over the rationals: when can local solutions be joined to form a global solution? One can ask this for other rings or fields: integers, for instance, or number fields. For number fields, rather than reals and p-adics, one uses complex embeddings and -adics, for prime ideals . Forms representing 0 Quadratic forms The Hasse–Minkowski theorem states that the local–global principle holds for the problem of representing 0 by quadratic forms over the rational numbers (which is Minkowski's result); and more generally over any number field (as proved by Hasse), when one uses all the appropriate local field necessary conditions. Hasse's theorem on cyclic extensions states that the local–global principle applies to the condition of being a relative norm for a cyclic extension of number fields. Cubic forms A counterexample by Ernst S. 
Selmer shows that the Hasse–Minkowski theorem cannot be extended to forms of degree 3: The cubic equation 3x3 + 4y3 + 5z3 = 0 has a solution in real numbers, and in all p-adic fields, but it has no nontrivial solution in which x, y, and z are all rational numbers. Roger Heath-Brown showed that every cubic form over the integers in at least 14 variables represents 0, improving on earlier results of Davenport. Since every cubic form over the p-adic numbers with at least ten variables represents 0, the local–global principle holds trivially for cubic forms over the rationals in at least 14 variables. Restricting to non-singular forms, one can do better than this: Heath-Brown proved that every non-singular cubic form over the rational numbers in at least 10 variables represents 0, thus trivially establishing the Hasse principle for this class of forms. It is known that Heath-Brown's result is best possible in the sense that there exist non-singular cubic forms over the rationals in 9 variables that do not represent zero. However, Hooley showed that the Hasse principle holds for the representation
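Selmer's counterexample can be probed by brute force. The sketch below is illustrative only: solvability modulo p for a few small primes is weaker than genuine p-adic solvability (which needs Hensel's lemma), and a finite box search cannot prove there is no rational solution, but both computations point the right way.

```python
def has_nontrivial_zero_mod(p, coeffs=(3, 4, 5)):
    """Is there (x, y, z) != (0, 0, 0) mod p with a*x^3 + b*y^3 + c*z^3 == 0 mod p?"""
    a, b, c = coeffs
    return any(
        (a * x**3 + b * y**3 + c * z**3) % p == 0
        for x in range(p) for y in range(p) for z in range(p)
        if (x, y, z) != (0, 0, 0)
    )

def integer_zeros_in_box(N, coeffs=(3, 4, 5)):
    """Nontrivial integer solutions with all coordinates in [-N, N]."""
    a, b, c = coeffs
    return [
        (x, y, z)
        for x in range(-N, N + 1) for y in range(-N, N + 1) for z in range(-N, N + 1)
        if (x, y, z) != (0, 0, 0) and a * x**3 + b * y**3 + c * z**3 == 0
    ]

# Locally solvable at every small prime we try...
assert all(has_nontrivial_zero_mod(p) for p in (2, 3, 5, 7, 11, 13))
# ...but no nontrivial integer (equivalently, rational) solution in a small box:
assert integer_zeros_in_box(12) == []
```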
https://en.wikipedia.org/wiki/Jacobian%20variety
In mathematics, the Jacobian variety J(C) of a non-singular algebraic curve C of genus g is the moduli space of degree 0 line bundles. It is the connected component of the identity in the Picard group of C, hence an abelian variety. Introduction The Jacobian variety is named after Carl Gustav Jacobi, who proved the complete version of the Abel–Jacobi theorem, making the injectivity statement of Niels Abel into an isomorphism. It is a principally polarized abelian variety, of dimension g, and hence, over the complex numbers, it is a complex torus. If p is a point of C, then the curve C can be mapped to a subvariety of J with the given point p mapping to the identity of J, and C generates J as a group. Construction for complex curves Over the complex numbers, the Jacobian variety can be realized as the quotient space V/L, where V is the dual of the vector space of all global holomorphic differentials on C and L is the lattice of all elements of V of the form ω ↦ ∫_γ ω, where γ is a closed path in C. In other words, L is the image of H_1(C, Z), embedded in V via the above map. This can be done explicitly with the use of theta functions. The Jacobian of a curve over an arbitrary field was constructed by André Weil as part of his proof of the Riemann hypothesis for curves over a finite field. The Abel–Jacobi theorem states that the torus thus built is a variety, the classical Jacobian of a curve, that indeed parametrizes the degree 0 line bundles, that is, it can be identified with its Picard variety of degree 0 divisors modulo linear equivalence. Algebraic structure As a group, the Jacobian variety of a curve is isomorphic to the quotient of the group of divisors of degree zero by the subgroup of principal divisors, i.e., divisors of rational functions. This holds for fields that are not algebraically closed, provided one considers divisors and functions defined over that field. Further notions Torelli's theorem states that a complex curve is determined by its Jacobian (with its polarization). 
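In symbols, the complex-analytic construction sketched above (a reconstruction of the elided formulas) reads:

```latex
J(C) = V/L, \qquad V = H^0(C, \Omega^1_C)^{\ast}, \qquad
L = \left\{\, \omega \mapsto \int_\gamma \omega \;:\; [\gamma] \in H_1(C, \mathbb{Z}) \,\right\}
```

Here V has complex dimension g and L is a lattice of rank 2g, so J(C) = V/L is a complex torus of dimension g.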
The Schottky problem asks which principally polarized abelian varieties are the Jacobians of curves. The Picard variety, the Albanese variety, generalized Jacobian, and intermediate Jacobians are generalizations of the Jacobian for higher-dimensional varieties. For varieties of higher dimension the construction of the Jacobian variety as a quotient of the space of holomorphic 1-forms generalizes to give the Albanese variety, but in general this need not be isomorphic to the Picard variety. See also Period matrix – period matrices are a useful technique for computing the Jacobian of a curve Hodge structure – these are generalizations of Jacobians Honda–Tate theorem – classifies abelian varieties over finite fields up to isogeny Intermediate Jacobian References Computation techniques – techniques for constructing Jacobians Isogeny classes Abelian varieties isogenous to no Jacobian Cryptography Curves, Jacobians, and Cryptography General Abelian varieties Algebraic
https://en.wikipedia.org/wiki/Connected%20sum
In mathematics, specifically in topology, the operation of connected sum is a geometric modification on manifolds. Its effect is to join two given manifolds together near a chosen point on each. This construction plays a key role in the classification of closed surfaces. More generally, one can also join manifolds together along identical submanifolds; this generalization is often called the fiber sum. There is also a closely related notion of a connected sum on knots, called the knot sum or composition of knots. Connected sum at a point A connected sum of two m-dimensional manifolds is a manifold formed by deleting a ball inside each manifold and gluing together the resulting boundary spheres. If both manifolds are oriented, there is a unique connected sum defined by having the gluing map reverse orientation. Although the construction uses the choice of the balls, the result is unique up to homeomorphism. One can also make this operation work in the smooth category, and then the result is unique up to diffeomorphism. There are subtle problems in the smooth case: not every diffeomorphism between the boundaries of the spheres gives the same composite manifold, even if the orientations are chosen correctly. For example, Milnor showed that two 7-cells can be glued along their boundary so that the result is an exotic sphere homeomorphic but not diffeomorphic to a 7-sphere. However, there is a canonical way to choose the gluing of and which gives a unique well-defined connected sum. Choose embeddings and so that preserves orientation and reverses orientation. Now obtain from the disjoint sum by identifying with for each unit vector and each . Choose the orientation for which is compatible with and . The fact that this construction is well-defined depends crucially on the disc theorem, which is not at all obvious. For further details, see. The operation of connected sum is denoted by . 
The operation of connected sum has the sphere as an identity; that is, is homeomorphic (or diffeomorphic) to . The classification of closed surfaces, a foundational and historically significant result in topology, states that any closed surface can be expressed as the connected sum of a sphere with some number of tori and some number of real projective planes. Connected sum along a submanifold Let and be two smooth, oriented manifolds of equal dimension and a smooth, closed, oriented manifold, embedded as a submanifold into both and Suppose furthermore that there exists an isomorphism of normal bundles that reverses the orientation on each fiber. Then induces an orientation-preserving diffeomorphism where each normal bundle is diffeomorphically identified with a neighborhood of in , and the map is the orientation-reversing diffeomorphic involution on normal vectors. The connected sum of and along is then the space obtained by gluing the deleted neighborhoods together by the orientation-preserving diffeomorphism. The sum is of
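The classification statement can be made quantitative with the Euler characteristic, which behaves additively under connected sum up to a correction for the two removed balls: χ(M₁ # M₂) = χ(M₁) + χ(M₂) − χ(Sⁿ). A minimal sketch (the helper name is our own):

```python
def chi_connected_sum(chi_a, chi_b, n=2):
    """Euler characteristic of an n-dimensional connected sum:
    remove a ball from each piece and glue along S^(n-1)."""
    chi_sphere = 2 if n % 2 == 0 else 0   # chi(S^n) = 1 + (-1)^n
    return chi_a + chi_b - chi_sphere

CHI_SPHERE, CHI_TORUS, CHI_RP2 = 2, 0, 1

# The sphere is the identity for connected sum (at the level of chi):
assert chi_connected_sum(CHI_TORUS, CHI_SPHERE) == CHI_TORUS

# Connected sum of g tori: chi = 2 - 2g
chi = CHI_SPHERE
for _ in range(3):
    chi = chi_connected_sum(chi, CHI_TORUS)
assert chi == 2 - 2 * 3

# Connected sum of k real projective planes: chi = 2 - k
chi = CHI_SPHERE
for _ in range(5):
    chi = chi_connected_sum(chi, CHI_RP2)
assert chi == 2 - 5
```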
https://en.wikipedia.org/wiki/Homological%20conjectures%20in%20commutative%20algebra
In mathematics, homological conjectures have been a focus of research activity in commutative algebra since the early 1960s. They concern a number of interrelated (sometimes surprisingly so) conjectures relating various homological properties of a commutative ring to its internal ring structure, particularly its Krull dimension and depth. The following list given by Melvin Hochster is considered definitive for this area. In the sequel, , and refer to Noetherian commutative rings; will be a local ring with maximal ideal , and and are finitely generated -modules. The Zero Divisor Theorem. If has finite projective dimension and is not a zero divisor on , then is not a zero divisor on . Bass's Question. If has a finite injective resolution then is a Cohen–Macaulay ring. The Intersection Theorem. If has finite length, then the Krull dimension of N (i.e., the dimension of R modulo the annihilator of N) is at most the projective dimension of M. The New Intersection Theorem. Let denote a finite complex of free R-modules such that has finite length but is not 0. Then the (Krull dimension) . The Improved New Intersection Conjecture. Let denote a finite complex of free R-modules such that has finite length for and has a minimal generator that is killed by a power of the maximal ideal of R. Then . The Direct Summand Conjecture. If is a module-finite ring extension with R regular (here, R need not be local but the problem reduces at once to the local case), then R is a direct summand of S as an R-module. The conjecture was proven by Yves André using a theory of perfectoid spaces. The Canonical Element Conjecture. Let be a system of parameters for R, let be a free R-resolution of the residue field of R with , and let denote the Koszul complex of R with respect to . Lift the identity map to a map of complexes. Then no matter what the choice of system of parameters or lifting, the last map from is not 0. Existence of Balanced Big Cohen–Macaulay Modules Conjecture. 
There exists a (not necessarily finitely generated) R-module W such that mRW ≠ W and every system of parameters for R is a regular sequence on W. Cohen-Macaulayness of Direct Summands Conjecture. If R is a direct summand of a regular ring S as an R-module, then R is Cohen–Macaulay (R need not be local, but the result reduces at once to the case where R is local). The Vanishing Conjecture for Maps of Tor. Let be homomorphisms where R is not necessarily local (one can reduce to that case however), with A, S regular and R finitely generated as an A-module. Let W be any A-module. Then the map is zero for all . The Strong Direct Summand Conjecture. Let be a map of complete local domains, and let Q be a height one prime ideal of S lying over , where R and are both regular. Then is a direct summand of Q considered as R-modules. Existence of Weakly Functorial Big Cohen-Macaulay Algebras Conjecture. Let be a local homomorphism of complete local domains. Then there exist
https://en.wikipedia.org/wiki/Basel%20problem
The Basel problem is a problem in mathematical analysis with relevance to number theory, concerning an infinite sum of inverse squares. It was first posed by Pietro Mengoli in 1650 and solved by Leonhard Euler in 1734, and read on 5 December 1735 in The Saint Petersburg Academy of Sciences. Since the problem had withstood the attacks of the leading mathematicians of the day, Euler's solution brought him immediate fame when he was twenty-eight. Euler generalised the problem considerably, and his ideas were taken up more than a century later by Bernhard Riemann in his seminal 1859 paper "On the Number of Primes Less Than a Given Magnitude", in which he defined his zeta function and proved its basic properties. The problem is named after Basel, hometown of Euler as well as of the Bernoulli family who unsuccessfully attacked the problem. The Basel problem asks for the precise summation of the reciprocals of the squares of the natural numbers, i.e. the precise sum of the infinite series: The sum of the series is approximately equal to 1.644934. The Basel problem asks for the exact sum of this series (in closed form), as well as a proof that this sum is correct. Euler found the exact sum to be and announced this discovery in 1735. His arguments were based on manipulations that were not justified at the time, although he was later proven correct. He produced an accepted proof in 1741. The solution to this problem can be used to estimate the probability that two large random numbers are coprime. Two random integers in the range from 1 to , in the limit as goes to infinity, are relatively prime with a probability that approaches , the reciprocal of the solution to the Basel problem. Euler's approach Euler's original derivation of the value essentially extended observations about finite polynomials and assumed that these same properties hold true for infinite series. 
Of course, Euler's original reasoning requires justification (100 years later, Karl Weierstrass proved that Euler's representation of the sine function as an infinite product is valid, by the Weierstrass factorization theorem), but even without justification, by simply obtaining the correct value, he was able to verify it numerically against partial sums of the series. The agreement he observed gave him sufficient confidence to announce his result to the mathematical community. To follow Euler's argument, recall the Taylor series expansion of the sine function Dividing through by gives The Weierstrass factorization theorem shows that the left-hand side is the product of linear factors given by its roots, just as for finite polynomials. Euler assumed this as a heuristic for expanding an infinite degree polynomial in terms of its roots, but in fact is not always true for general . This factorization expands the equation into: If we formally multiply out this product and collect all the terms (we are allowed to do so because of Newton's identities), we see by induction that the coe
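Euler's numerical check is easy to reproduce. The sketch below compares partial sums with π²/6 (by the integral test the tail after n terms is squeezed between 1/(n+1) and 1/n, so convergence is slow) and also verifies the coprimality probability 6/π² by exhaustive count:

```python
import math
from math import gcd

target = math.pi**2 / 6   # Euler's value for the sum

def partial_sum(n):
    return sum(1.0 / k**2 for k in range(1, n + 1))

# Integral test bounds on the tail: 1/(n+1) < sum_{k>n} 1/k^2 < 1/n.
for n in (10, 100, 1000):
    err = target - partial_sum(n)
    assert 1.0 / (n + 1) < err < 1.0 / n

# Two random integers are coprime with probability 6/pi^2 ~ 0.6079;
# an exhaustive count over 1..N approximates it well already for N = 1000.
N = 1000
coprime = sum(1 for a in range(1, N + 1) for b in range(1, N + 1) if gcd(a, b) == 1)
assert abs(coprime / N**2 - 6 / math.pi**2) < 0.005
```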
https://en.wikipedia.org/wiki/121%20%28number%29
121 (one hundred [and] twenty-one) is the natural number following 120 and preceding 122. In mathematics One hundred [and] twenty-one is a square (11 times 11) the sum of the powers of 3 from 0 to 4 (1 + 3 + 9 + 27 + 81 = 121), so a repunit in ternary. Furthermore, 121 is the only square of the form 1 + p + p^2 + p^3 + p^4, where p is prime (3, in this case). the sum of three consecutive prime numbers (37 + 41 + 43). As 5! + 1 = 11^2, it provides a solution to Brocard's problem. There are only two other squares known to be of the form n! + 1. Another example of 121 being one of the few numbers supporting a conjecture is that Fermat conjectured that 4 and 121 are the only perfect squares of the form x^3 − 4 (with x being 2 and 5, respectively). It is also a star number, a centered tetrahedral number, and a centered octagonal number. In decimal, it is a Smith number since its digits add up to the same value as its factorization (which uses the same digits) and as a consequence of that it is a Friedman number (121 = 11^2). But it cannot be expressed as the sum of any other number plus that number's digit sum, making 121 a self number. In other fields 121 is also: The electricity emergency telephone number in Egypt The number for voicemail for mobile phones on the Vodafone network The undiscovered chemical element unbiunium has the atomic number 121 The official end score for cribbage The pennant number of RTS Moskva, the Russian Navy's Black Sea flagship, which was damaged beyond repair on April 13, 2022. See also List of highways numbered 121 United States House of Representatives House Resolution 121 United Nations Security Council Resolution 121 References Integers
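Most of the listed properties are one-liners to verify (here we take the elided Brocard identity to be 5! + 1 = 121 and the Fermat form to be x^3 − 4, the standard readings of these claims); the self-number check is the naive search over all smaller candidates:

```python
import math

def digit_sum(n):
    return sum(int(d) for d in str(n))

n = 121
assert n == 11**2                          # a perfect square
assert n == sum(3**k for k in range(5))    # 1 + 3 + 9 + 27 + 81: repunit in ternary
assert int("11111", 3) == n
assert n == 37 + 41 + 43                   # sum of three consecutive primes
assert n == math.factorial(5) + 1          # Brocard: 5! + 1 = 11^2
assert n == 5**3 - 4                       # Fermat's form x^3 - 4, with x = 5
# Smith number: digit sum equals the digit sum of its prime factorization 11 * 11
assert digit_sum(n) == digit_sum(11) + digit_sum(11)
# Self number: no m < 121 satisfies m + digit_sum(m) == 121
assert all(m + digit_sum(m) != n for m in range(1, n))
```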
https://en.wikipedia.org/wiki/Cancellation%20property
In mathematics, the notion of cancellativity (or cancellability) is a generalization of the notion of invertibility. An element a in a magma has the left cancellation property (or is left-cancellative) if for all b and c in M, always implies that . An element a in a magma has the right cancellation property (or is right-cancellative) if for all b and c in M, always implies that . An element a in a magma has the two-sided cancellation property (or is cancellative) if it is both left- and right-cancellative. A magma has the left cancellation property (or is left-cancellative) if all a in the magma are left cancellative, and similar definitions apply for the right cancellative or two-sided cancellative properties. A left-invertible element is left-cancellative, and analogously for right and two-sided. If a⁻¹ is the inverse of a, then a ∗ b = a ∗ c implies a⁻¹ ∗ a ∗ b = a⁻¹ ∗ a ∗ c which implies b = c. For example, every quasigroup, and thus every group, is cancellative. Interpretation To say that an element a in a magma is left-cancellative, is to say that the function is injective. That the function g is injective implies that given some equality of the form a ∗ x = b, where the only unknown is x, there is only one possible value of x satisfying the equality. More precisely, we are able to define some function f, the inverse of g, such that for all x . Put another way, for all x and y in M, if a * x = a * y, then x = y. Similarly, to say that the element a is right-cancellative, is to say that the function is injective and that for all x and y in M, if x * a = y * a, then x = y. Examples of cancellative monoids and semigroups The positive (equally non-negative) integers form a cancellative semigroup under addition. The non-negative integers form a cancellative monoid under addition. Each of these is an example of a cancellative magma that is not a quasigroup. 
In fact, any free semigroup or monoid obeys the cancellative law, and in general, any semigroup or monoid embedding into a group (as the above examples clearly do) will obey the cancellative law. In a different vein, (a subsemigroup of) the multiplicative semigroup of elements of a ring that are not zero divisors (which is just the set of all nonzero elements if the ring in question is a domain, like the integers) has the cancellation property. Note that this remains valid even if the ring in question is noncommutative and/or nonunital. Non-cancellative algebraic structures Although the cancellation law holds for addition, subtraction, multiplication and division of real and complex numbers (with the single exception of multiplication by zero and division of zero by another number), there are a number of algebraic structures where the cancellation law is not valid. The cross product of two vectors does not obey the cancellation law. If , then it does not follow that even if (take for example) Matrix multiplication also does not necessarily obey the cancellation law
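Both failures mentioned above are easy to exhibit concretely. A minimal sketch using plain tuples for 3-vectors and 2×2 matrices (no libraries):

```python
def cross(u, v):
    """Cross product of 3-vectors given as tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def matmul(X, Y):
    """Product of 2x2 matrices given as tuples of rows."""
    return tuple(
        tuple(sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

# Cross product: u x v == u x w does not force v == w
# (v - w may be any vector parallel to u).
u, v = (1, 0, 0), (0, 1, 0)
w = (1, 1, 0)                      # w = v + u
assert cross(u, v) == cross(u, w) and v != w

# Matrix multiplication: A*B == A*C with B != C when A is a zero divisor.
A = ((1, 0), (0, 0))
B = ((0, 0), (0, 1))
C = ((0, 0), (1, 1))
assert matmul(A, B) == matmul(A, C) and B != C
```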
https://en.wikipedia.org/wiki/Glossary%20of%20Riemannian%20and%20metric%20geometry
This is a glossary of some terms used in Riemannian geometry and metric geometry — it doesn't cover the terminology of differential topology. The following articles may also be useful; they either contain specialised vocabulary or provide more detailed expositions of the definitions given below. Connection Curvature Metric space Riemannian manifold See also: Glossary of general topology Glossary of differential geometry and topology List of differential geometry topics Unless stated otherwise, letters X, Y, Z below denote metric spaces, M, N denote Riemannian manifolds, |xy| or denotes the distance between points x and y in X. Italic word denotes a self-reference to this glossary. A caveat: many terms in Riemannian and metric geometry, such as convex function, convex set and others, do not have exactly the same meaning as in general mathematical usage. A Alexandrov space a generalization of Riemannian manifolds with upper, lower or integral curvature bounds (the last one works only in dimension 2) Almost flat manifold Arc-wise isometry the same as path isometry. Autoparallel the same as totally geodesic B Barycenter, see center of mass. bi-Lipschitz map. A map is called bi-Lipschitz if there are positive constants c and C such that for any x and y in X Busemann function given a ray, γ : [0, ∞)→X, the Busemann function is defined by C Cartan–Hadamard theorem is the statement that a connected, simply connected complete Riemannian manifold with non-positive sectional curvature is diffeomorphic to Rn via the exponential map; for metric spaces, the statement that a connected, simply connected complete geodesic metric space with non-positive curvature in the sense of Alexandrov is a (globally) CAT(0) space. Cartan extended Einstein's General relativity to Einstein–Cartan theory, using Riemannian-Cartan geometry instead of Riemannian geometry. 
This extension provides affine torsion, which allows for non-symmetric curvature tensors and the incorporation of spin–orbit coupling. Center of mass. A point q ∈ M is called the center of mass of the points if it is a point of global minimum of the function Such a point is unique if all distances are less than radius of convexity. Christoffel symbol Collapsing manifold Complete space Completion Conformal map is a map which preserves angles. Conformally flat a manifold M is conformally flat if it is locally conformally equivalent to a Euclidean space, for example standard sphere is conformally flat. Conjugate points two points p and q on a geodesic are called conjugate if there is a Jacobi field on which has a zero at p and q. Convex function. A function f on a Riemannian manifold is a convex if for any geodesic the function is convex. A function f is called -convex if for any geodesic with natural parameter , the function is convex. Convex A subset K of a Riemannian manifold M is called convex if for any two points in K there is a shortest path connecting them whic
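The bi-Lipschitz condition in the glossary entry above, c|xy| ≤ |f(x)f(y)| ≤ C|xy|, can be illustrated numerically on a finite sample of points. A sketch for maps of the real line only; the helper name is our own:

```python
import math

def is_bilipschitz_on_sample(f, points, c, C):
    """Check c*|x - y| <= |f(x) - f(y)| <= C*|x - y| on a finite sample (R -> R)."""
    for i, x in enumerate(points):
        for y in points[i + 1:]:
            d, df = abs(x - y), abs(f(x) - f(y))
            if not (c * d <= df <= C * d):
                return False
    return True

pts = [k / 10 for k in range(-20, 21)]

# f(x) = 2x + sin(x) has derivative in [1, 3], so it is (1, 3)-bi-Lipschitz:
assert is_bilipschitz_on_sample(lambda x: 2 * x + math.sin(x), pts, 1.0, 3.0)

# x -> x^2 is not bi-Lipschitz here: it is not even injective on [-2, 2]:
assert not is_bilipschitz_on_sample(lambda x: x * x, pts, 0.1, 10.0)
```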
https://en.wikipedia.org/wiki/Sturm%E2%80%93Liouville%20theory
In mathematics and its applications, a Sturm–Liouville problem is a second-order linear ordinary differential equation of the form: for given functions , and , together with some boundary conditions at extreme values of . The goals of a given Sturm–Liouville problem are: To find the for which there exists a non-trivial solution to the problem. Such values are called the eigenvalues of the problem. For each eigenvalue , to find the corresponding solution of the problem. Such functions are called the eigenfunctions associated to each . Sturm–Liouville theory is the general study of Sturm–Liouville problems. In particular, for a "regular" Sturm–Liouville problem, it can be shown that there are an infinite number of eigenvalues each with a unique eigenfunction, and that these eigenfunctions form an orthonormal basis of a certain Hilbert space of functions. This theory is important in applied mathematics, where Sturm–Liouville problems occur very frequently, particularly when dealing with separable linear partial differential equations. For example, in quantum mechanics, the one-dimensional time-independent Schrödinger equation is a Sturm–Liouville problem. Sturm–Liouville theory is named after Jacques Charles François Sturm (1803–1855) and Joseph Liouville (1809–1882) who developed the theory. Main results The main results in Sturm–Liouville theory apply to a Sturm–Liouville problem on a finite interval that is "regular". The problem is said to be regular if: the coefficient functions and the derivative are all continuous on ; and for all ; the problem has separated boundary conditions of the form: The function , sometimes denoted , is called the weight or density function. The goals of a Sturm–Liouville problem are: to find the eigenvalues: those for which there exists a non-trivial solution; for each eigenvalue , to find the corresponding eigenfunction . 
For a regular Sturm–Liouville problem, a function is called a solution if it is continuously differentiable and satisfies the equation () at every . In the case of more general , the solutions must be understood in a weak sense. The terms eigenvalue and eigenvector are used because the solutions correspond to the eigenvalues and eigenfunctions of a Hermitian differential operator in an appropriate Hilbert space of functions with inner product defined using the weight function. Sturm–Liouville theory studies the existence and asymptotic behavior of the eigenvalues, the corresponding qualitative theory of the eigenfunctions and their completeness in the function space. The main result of Sturm–Liouville theory states that, for any regular Sturm–Liouville problem: The eigenvalues are real and can be numbered so that Corresponding to each eigenvalue is a unique (up to constant multiple) eigenfunction with exactly zeros in , called the th fundamental solution. The normalized eigenfunctions form an orthonormal basis under the w-weighted inner product in the Hil
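These results can be seen concretely on the simplest regular problem, −y″ = λy on [0, π] with y(0) = y(π) = 0 (p = w = 1, q = 0), whose eigenpairs are λₙ = n² with yₙ(x) = sin(nx). The sketch below checks the boundary conditions, the equation itself via a central difference, and the oscillation count (the n-th eigenfunction has exactly n − 1 interior zeros):

```python
import math

def residual(lam, y, x, h=1e-5):
    """Central-difference check of -y'' = lam * y at the point x."""
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return -ypp - lam * y(x)

for n in (1, 2, 3):
    y = lambda x, n=n: math.sin(n * x)
    # Separated boundary conditions:
    assert abs(y(0)) < 1e-12 and abs(y(math.pi)) < 1e-12
    # Differential equation with eigenvalue lam = n^2, sampled at interior points:
    for x in (0.3, 1.0, 2.5):
        assert abs(residual(n**2, y, x)) < 1e-4
    # Oscillation: the n-th eigenfunction has n - 1 interior zeros.
    samples = [y(k * math.pi / 1000) for k in range(1, 1000)]
    sign_changes = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    assert sign_changes == n - 1
```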
https://en.wikipedia.org/wiki/Weyl%20algebra
In abstract algebra, the Weyl algebra is the ring of differential operators with polynomial coefficients (in one variable), namely expressions of the form More precisely, let F be the underlying field, and let F[X] be the ring of polynomials in one variable, X, with coefficients in F. Then each fi lies in F[X]. ∂X is the derivative with respect to X. The algebra is generated by X and ∂X. The Weyl algebra is an example of a simple ring that is not a matrix ring over a division ring. It is also a noncommutative example of a domain, and an example of an Ore extension. The Weyl algebra is isomorphic to the quotient of the free algebra on two generators, X and Y, by the ideal generated by the element The Weyl algebra is the first in an infinite family of algebras, also known as Weyl algebras. The n-th Weyl algebra, An, is the ring of differential operators with polynomial coefficients in n variables. It is generated by Xi and ∂Xi, . Weyl algebras are named after Hermann Weyl, who introduced them to study the Heisenberg uncertainty principle in quantum mechanics. It is a quotient of the universal enveloping algebra of the Heisenberg algebra, the Lie algebra of the Heisenberg group, by setting the central element of the Heisenberg algebra (namely [X,Y]) equal to the unit of the universal enveloping algebra (called 1 above). The Weyl algebra is also referred to as the symplectic Clifford algebra. Weyl algebras represent the same structure for symplectic bilinear forms that Clifford algebras represent for non-degenerate symmetric bilinear forms. Generators and relations One may give an abstract construction of the algebras An in terms of generators and relations. Start with an abstract vector space V (of dimension 2n) equipped with a symplectic form ω. Define the Weyl algebra W(V) to be where T(V) is the tensor algebra on V, and the notation means "the ideal generated by". In other words, W(V) is the algebra generated by V subject only to the relation . 
Then, W(V) is isomorphic to An via the choice of a Darboux basis for . Quantization The algebra W(V) is a quantization of the symmetric algebra Sym(V). If V is over a field of characteristic zero, then W(V) is naturally isomorphic to the underlying vector space of the symmetric algebra Sym(V) equipped with a deformed product – called the Groenewold–Moyal product (considering the symmetric algebra to be polynomial functions on V∗, where the variables span the vector space V, and replacing iħ in the Moyal product formula with 1). The isomorphism is given by the symmetrization map from Sym(V) to W(V) If one prefers to have the iħ and work over the complex numbers, one could have instead defined the Weyl algebra above as generated by Xi and iħ∂Xi (as per quantum mechanics usage). Thus, the Weyl algebra is a quantization of the symmetric algebra, which is essentially the same as the Moyal quantization (if for the latter one restricts to polynomial functions), but the former is in term
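The defining relation of A₁, [∂_X, X] = 1, can be checked in its natural representation on the polynomial ring F[X]. A minimal sketch using coefficient lists (p[k] is the coefficient of X^k):

```python
def apply_X(p):
    """The operator 'multiply by X': shift all coefficients up one degree."""
    return [0] + p

def apply_D(p):
    """The operator d/dX: send p[k] * X^k to k * p[k] * X^(k-1)."""
    return [k * c for k, c in enumerate(p)][1:] or [0]

def sub(p, q):
    """Coefficient-wise difference, padding to a common length."""
    m = max(len(p), len(q))
    p = p + [0] * (m - len(p))
    q = q + [0] * (m - len(q))
    return [a - b for a, b in zip(p, q)]

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

# The Weyl algebra relation [D, X] = D X - X D = 1 holds as an operator
# identity on every polynomial:
for p in ([1], [0, 1], [2, 0, 5], [1, -3, 0, 7]):
    commutator = sub(apply_D(apply_X(p)), apply_X(apply_D(p)))
    assert trim(commutator) == trim(p)
```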
https://en.wikipedia.org/wiki/Variational%20principle
In science and especially in mathematical studies, a variational principle is one that enables a problem to be solved using calculus of variations, which concerns finding functions that optimize the values of quantities that depend on those functions. For example, the problem of determining the shape of a hanging chain suspended at both ends—a catenary—can be solved using variational calculus, and in this case, the variational principle is the following: The solution is a function that minimizes the gravitational potential energy of the chain. Overview Any physical law which can be expressed as a variational principle describes a self-adjoint operator. These expressions are also called Hermitian. Such an expression describes an invariant under a Hermitian transformation. History Felix Klein's Erlangen program attempted to identify such invariants under a group of transformations. In what is referred to in physics as Noether's theorem, the Poincaré group of transformations (what is now called a gauge group) for general relativity defines symmetries under a group of transformations which depend on a variational principle, or action principle. Examples In mathematics The Rayleigh–Ritz method for solving boundary-value problems approximately Ekeland's variational principle in mathematical optimization The finite element method The variational principle relating topological entropy and Kolmogorov–Sinai entropy. In physics Fermat's principle in geometrical optics Maupertuis' principle in classical mechanics The principle of least action in mechanics, electromagnetic theory, and quantum mechanics The variational method in quantum mechanics Gauss's principle of least constraint and Hertz's principle of least curvature Hilbert's action principle in general relativity, leading to the Einstein field equations. Palatini variation Gibbons–Hawking–York boundary term References External links The Feynman Lectures on Physics Vol. II Ch.
19: The Principle of Least Action
S T Epstein 1974 "The Variation Method in Quantum Chemistry". (New York: Academic)
C Lanczos, The Variational Principles of Mechanics (Dover Publications)
R K Nesbet 2003 "Variational Principles and Methods in Theoretical Physics and Chemistry". (New York: Cambridge U.P.)
S K Adhikari 1998 "Variational Principles for the Numerical Solution of Scattering Problems". (New York: Wiley)
C G Gray, G Karl and V A Novikov 1996, Ann. Phys. 251 1.
C G Gray, G Karl and V A Novikov, "Progress in Classical and Quantum Variational Principles". 11 December 2003. physics/0312071 Classical Physics.
John Venables, "The Variational Principle and some applications". Dept of Physics and Astronomy, Arizona State University, Tempe, Arizona (Graduate Course: Quantum Physics)
Andrew James Williamson, "The Variational Principle – Quantum Monte Carlo calculations of electronic excitations". Robinson College, Cambridge, Theory of Condensed Matter Group, Cavendish Laboratory. September
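As a small numerical illustration of the variational idea running through the examples above (our own sketch, not drawn from the references): for a symmetric matrix, the Rayleigh quotient of any trial vector bounds the smallest eigenvalue from above, which is the discrete form of the Rayleigh–Ritz method.

```python
import numpy as np

# Discretize -u'' = lam * u on (0, 1) with u(0) = u(1) = 0.  The matrix A is
# symmetric, and the variational principle says that for ANY trial vector x,
# the Rayleigh quotient x.A.x / x.x is an upper bound on the smallest
# eigenvalue (the discrete Rayleigh-Ritz idea).
n = 100
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n)
off = -np.ones(n - 1)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

grid = np.linspace(h, 1.0 - h, n)
lam_min = np.linalg.eigvalsh(A)[0]          # close to pi**2 = 9.8696...

for trial in (np.sin(np.pi * grid),         # the exact lowest mode
              grid * (1.0 - grid)):         # a parabolic guess
    rq = trial @ A @ trial / (trial @ trial)
    assert rq >= lam_min - 1e-9             # every trial gives an upper bound
```

The parabolic trial gives a Rayleigh quotient near 10, strictly above the true minimum near π²; the sine trial is the exact discrete eigenvector, so its bound is tight.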
https://en.wikipedia.org/wiki/Envelope%20%28disambiguation%29
An envelope is the paper container used to hold a letter being sent by post. Envelope may also refer to: Mathematics Envelope (mathematics), a curve, surface, or higher-dimensional object defined as being tangent to a given family of lines or curves (or surfaces, or higher-dimensional objects, respectively) Envelope (category theory) Science Viral envelope, the membranal covering surrounding the capsid of a virus Cell envelope of a bacterium, consisting of the cell membrane, cell wall and outer membrane Envelope (aeronautics), the fabric skin covering an airship Building envelope, the exterior layer of a building that protects it from the elements Envelope (motion), a solid representing all positions that an object may occupy during its normal range of motion Envelope (music), the variation of a sound over time, as is used in sound synthesis Envelope (radar), the volume of space where a radar system is required to reliably detect an object Envelope (waves), a curve outlining the peak values of an oscillating waveform or signal Envelope detector, an electronic circuit used to measure the envelope of a waveform Flight envelope, the limits within which an aircraft can operate Entertainment Envelopes (band), an indie/pop band from Sweden and France, based in the UK Envelope (film), a 2012 film "Envelopes", a song by Frank Zappa from his 1982 album Ship Arriving Too Late to Save a Drowning Witch Other uses Envelope (military), attacking one or both of the enemy's flanks to encircle the enemy The envelope of an internet email, its SMTP routing information Envelope system, a method of personal budgeting where money is allocated for specific purposes Gaza envelope, a region in southwestern Israel adjacent to the Gaza Strip See also Two envelopes problem, a paradox Stellar envelope (disambiguation), for astrophysics uses
https://en.wikipedia.org/wiki/Pierre%20Fran%C3%A7ois%20Verhulst
Pierre François Verhulst (28 October 1804, Brussels – 15 February 1849, Brussels) was a Belgian mathematician who received a doctorate in number theory from the University of Ghent in 1825. He is best known for the logistic growth model. Logistic equation Verhulst developed the logistic function in a series of three papers between 1838 and 1847, based on research on modeling population growth that he conducted in the mid-1830s under the guidance of Adolphe Quetelet. Verhulst published the equation dN/dt = rN − αN², where N(t) represents the number of individuals at time t, r is the intrinsic growth rate, and α is the density-dependent crowding effect (also known as intraspecific competition). In this equation, the population equilibrium (sometimes referred to as the carrying capacity, K) is N = r/α. He later named the solution the logistic curve. Later, Raymond Pearl and Lowell Reed popularized the equation, but with a presumed equilibrium, K, as dN/dt = rN(1 − N/K), where K sometimes represents the maximum number of individuals that the environment can support. In relation to the density-dependent crowding effect, α = r/K. The Pearl–Reed logistic equation can be integrated exactly, and has solution N(t) = 1/(1/K + C e^(−rt)), where C = 1/N(0) − 1/K is determined by the initial condition N(0). The solution can also be written as a weighted harmonic mean of the initial condition and the carrying capacity: 1/N(t) = (1 − e^(−rt))/K + e^(−rt)/N(0). Although the continuous-time logistic equation is often compared to the logistic map because of similarity of form, it is actually more closely related to the Beverton–Holt model of fisheries recruitment. The concept of R/K selection theory derives its name from the competing dynamics of exponential growth and carrying capacity introduced by the equations above. See also Population dynamics Logistic map Logistic distribution Works References External links 1804 births 1849 deaths Belgian mathematicians 19th-century male writers
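A short numerical check (our own sketch; the parameter values are arbitrary) that the closed-form solution N(t) = 1/(1/K + C e^(−rt)) with C = 1/N(0) − 1/K satisfies the Pearl–Reed equation dN/dt = rN(1 − N/K) and approaches the carrying capacity:

```python
import math

r, K, N0 = 0.5, 100.0, 5.0
C = 1.0 / N0 - 1.0 / K           # determined by the initial condition

def N(t):
    # 1/N(t) = 1/K + C e^{-rt}, the weighted-harmonic-mean form
    return 1.0 / (1.0 / K + C * math.exp(-r * t))

# dN/dt should equal r*N*(1 - N/K); check with a central difference.
t, h = 3.0, 1e-6
dNdt = (N(t + h) - N(t - h)) / (2 * h)
assert abs(dNdt - r * N(t) * (1 - N(t) / K)) < 1e-4

assert abs(N(0.0) - N0) < 1e-9   # starts at the initial population
assert abs(N(60.0) - K) < 1e-6   # N(t) -> K as t grows
```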
https://en.wikipedia.org/wiki/Thue%E2%80%93Morse%20sequence
In mathematics, the Thue–Morse sequence or Prouhet–Thue–Morse sequence or parity sequence is the binary sequence (an infinite sequence of 0s and 1s) obtained by starting with 0 and successively appending the Boolean complement of the sequence obtained thus far. The first few steps of this procedure yield the strings 0 then 01, 0110, 01101001, 0110100110010110, and so on, which are prefixes of the Thue–Morse sequence. The full sequence begins: 01101001100101101001011001101001.... The sequence is named after Axel Thue and Marston Morse. Definition There are several equivalent ways of defining the Thue–Morse sequence. Direct definition To compute the nth element tn, write the number n in binary. If the number of ones in this binary expansion is odd then tn = 1, if even then tn = 0. That is, tn is the even parity bit for n. John H. Conway et al. called numbers n satisfying tn = 1 odious (for odd) numbers and numbers for which tn = 0 evil (for even) numbers. In other words, tn = 0 if n is an evil number and tn = 1 if n is an odious number. Fast sequence generation This definition leads to a fast method for computing the Thue–Morse sequence: start with t0 = 0, and then, for each n, find the highest-order bit in the binary representation of n that is different from the same bit in the representation of n − 1. If this bit is at an even index, tn differs from tn−1, and otherwise it is the same as tn−1. In Python form:

    def generate_sequence(seq_length: int):
        """Yield the first seq_length terms of the Thue–Morse sequence."""
        value = 0
        for n in range(seq_length):
            if n > 0:
                # The highest bit in which n differs from n - 1 is the lowest
                # set bit of n; its index is the number of trailing zeros of n.
                x = (n ^ (n - 1)).bit_length() - 1
                if x & 1 == 0:
                    value = 1 - value   # bit index is even, so toggle value
            yield value

The resulting algorithm takes constant time to generate each sequence element, using only a logarithmic number of bits (constant number of words) of memory.
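The prefix-doubling construction described in the first paragraph can be cross-checked against the parity definition in a few lines (our own sketch):

```python
def thue_morse_prefix(doublings: int):
    """Build the first 2**doublings terms by repeatedly appending
    the Boolean complement of the prefix built so far."""
    s = [0]
    for _ in range(doublings):
        s += [1 - b for b in s]   # append the complement of the prefix
    return s

prefix = thue_morse_prefix(5)                  # 32 terms
assert prefix[:8] == [0, 1, 1, 0, 1, 0, 0, 1]  # 01101001

# Agrees with the parity-of-binary-ones definition:
assert all(b == bin(n).count("1") % 2 for n, b in enumerate(prefix))
```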
Recurrence relation The Thue–Morse sequence is the sequence tn satisfying the recurrence relation t(0) = 0, t(2n) = t(n), and t(2n + 1) = 1 − t(n) for all non-negative integers n. L-system The Thue–Morse sequence is a morphic word: it is the output of the following Lindenmayer system: Characterization using bitwise negation The Thue–Morse sequence in the form given above, as a sequence of bits, can be defined recursively using the operation of bitwise negation. So, the first element is 0. Then once the first 2^n elements have been specified, forming a string s, then the next 2^n elements must form the bitwise negation of s. Now we have defined the first 2^(n+1) elements, and we recurse. Spelling out the first few steps in detail: We start with 0. The bitwise negation of 0 is 1. Combining these, the first 2 elements are 01. The bitwise negation of 01 is 10. Combining these, the first 4 elements are 0110. The bitwise negation of 0110 is 1001. Combining these, the first 8 ele
https://en.wikipedia.org/wiki/New%20Zealand%20census
The New Zealand Census of Population and Dwellings is a national population and housing census conducted by government department Statistics New Zealand every five years. There have been 34 censuses since 1851. In addition to providing detailed information about national demographics, the results of the census play an important part in the calculation of resource allocation to local service providers. The most recent census, the 2023 census, took place on 7 March 2023. Census date Since 1926, the census has always been held on a Tuesday and since 1966, the census always occurs in March. These are statistically the month and weekday on which New Zealanders are least likely to be travelling. The census forms have to be returned by midnight on census day for them to be valid. Conducting the census Until 2018, census forms were hand-delivered by census workers during the lead-in to the census, with one form per person and a special form with questions about the dwelling. In addition, teams of census workers attempted to cover all hospitals, camp grounds, workplaces and transport systems where people might be found at midnight. In 2018, the process was different. The majority of households received an access code in the post and were encouraged to complete their census online. If preferred, households could request paper census forms. The smallest geographic unit used in the census for population data is the mesh block, of which there are 53,589, with an average of 88 people in each. The 2023 census can be completed online or on paper forms. Forms with an access code were mailed out to householders from 20 February, but paper forms can be requested online or by telephone (free call 0800 CENSUS (0800 236–787)).
Data collected The 2018 census collected data on the following topics: Population structure Location Culture and Identity Education and training Work Income Families and households Housing Transport Health and disability * Required to be included under the Statistics Act 1975 or the Electoral Act 1993 History The first full census in New Zealand was conducted in 1851, and the census was triennial until 1881, at which time it became five-yearly. The 1931 census was cancelled due to the effects of the Great Depression, as was the 1941 census due to World War II. The 1946 census was brought forward to Tuesday 25 September 1945, so that the results could be used for an electoral redistribution (the first for ten years) before the next general election. 1951 was the first year in which Māori and European New Zealanders were treated equally, with European New Zealanders having had a different census form in previous years and separate censuses in the nineteenth century. Results for those censuses before 1966 have been destroyed with a few exceptions and those since will not be available before 2066. The 2006 census was held on Tuesday, 7 March. For the first time, respondents had the option of completing their census form online rather than by a printed f
https://en.wikipedia.org/wiki/Inclusion%20map
In mathematics, if A is a subset of B, then the inclusion map (also inclusion function, insertion, or canonical injection) is the function ι that sends each element x of A to x, treated as an element of B. A "hooked arrow" (↪) is sometimes used in place of the function arrow to denote an inclusion map. (However, some authors use this hooked arrow for any embedding.) This and other analogous injective functions from substructures are sometimes called natural injections. Given any morphism f between objects X and Y, if there is an inclusion map ι into the domain X, then one can form the restriction of f. In many instances, one can also construct a canonical inclusion into the codomain, known as the range of f. Applications of inclusion maps Inclusion maps tend to be homomorphisms of algebraic structures; thus, such inclusion maps are embeddings. More precisely, given a substructure closed under some operations, the inclusion map will be an embedding for tautological reasons. For example, for some binary operation ⋆, to require that ι(x ⋆ y) = ι(x) ⋆ ι(y) is simply to say that ⋆ is consistently computed in the sub-structure and the large structure. The case of a unary operation is similar; but one should also look at nullary operations, which pick out a constant element. Here the point is that closure means such constants must already be given in the substructure. Inclusion maps are seen in algebraic topology where if A is a strong deformation retract of a space X, the inclusion map yields an isomorphism between all homotopy groups (that is, it is a homotopy equivalence). Inclusion maps in geometry come in different kinds: for example embeddings of submanifolds. Contravariant objects (which is to say, objects that have pullbacks; these are called covariant in an older and unrelated terminology) such as differential forms restrict to submanifolds, giving a mapping in the other direction.
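In symbols (a standard formulation; the particular letters are our choice), the inclusion of a subset and the restriction of a map along it read:

```latex
\iota \colon A \hookrightarrow B, \qquad \iota(x) = x \ \text{ for all } x \in A,
\]
\[
f \colon B \to C \quad\Longrightarrow\quad f\big|_{A} \;=\; f \circ \iota \colon A \to C .
```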
Another example, more sophisticated, is that of affine schemes, for which the inclusions and may be different morphisms, where is a commutative ring and is an ideal of See also References Basic concepts in set theory Functions and mappings
https://en.wikipedia.org/wiki/Bimodule
In abstract algebra, a bimodule is an abelian group that is both a left and a right module, such that the left and right multiplications are compatible. Besides appearing naturally in many parts of mathematics, bimodules play a clarifying role, in the sense that many of the relationships between left and right modules become simpler when they are expressed in terms of bimodules. Definition If R and S are two rings, then an R-S-bimodule is an abelian group M such that: M is a left R-module and a right S-module. For all r in R, s in S and m in M: (r·m)·s = r·(m·s). An R-R-bimodule is also known as an R-bimodule. Examples For positive integers n and m, the set Mn,m(R) of n × m matrices of real numbers is an R-S-bimodule, where R is the ring Mn(R) of n × n matrices, and S is the ring Mm(R) of m × m matrices. Addition and multiplication are carried out using the usual rules of matrix addition and matrix multiplication; the heights and widths of the matrices have been chosen so that multiplication is defined. Note that Mn,m(R) itself is not a ring (unless n = m), because multiplying an n × m matrix by another n × m matrix is not defined. The crucial bimodule property, that (r·x)·s = r·(x·s), is the statement that multiplication of matrices is associative (which, in the case of a matrix ring, corresponds to associativity). Any algebra A over a ring R has the natural structure of an R-bimodule, with left and right multiplication defined by r·a = φ(r)a and a·r = aφ(r) respectively, where φ is the canonical embedding of R into A. If R is a ring, then R itself can be considered to be an R-R-bimodule by taking the left and right actions to be multiplication—the actions commute by associativity. This can be extended to Rn (the n-fold direct product of R). Any two-sided ideal of a ring R is an R-R-bimodule, with the ring multiplication both as the left and as the right multiplication. Any module over a commutative ring R has the natural structure of a bimodule.
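The compatibility axiom in the matrix example is plain associativity, which can be spot-checked numerically (a sketch; the particular matrices are arbitrary):

```python
import numpy as np

r = np.array([[1., 2.], [3., 4.]])                        # r in M2, left action
m = np.array([[1., 0., 2.], [0., 1., 1.]])                # m in M_{2,3}, the bimodule element
s = np.array([[1., 1., 0.], [0., 2., 1.], [1., 0., 1.]])  # s in M3, right action

# Bimodule axiom (r m) s = r (m s): associativity of the matrix product.
assert np.allclose((r @ m) @ s, r @ (m @ s))
```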
For example, if M is a left module, we can define multiplication on the right to be the same as multiplication on the left. (However, not all R-bimodules arise this way: other compatible right multiplications may exist.) If M is a left R-module, then M is an R-Z-bimodule, where Z is the ring of integers. Similarly, right R-modules may be interpreted as Z-R-bimodules. Any abelian group may be treated as a Z-Z-bimodule. If M is a right R-module, then the set of R-module endomorphisms is a ring with the multiplication given by composition. The endomorphism ring acts on M by left multiplication defined by f·m = f(m). The bimodule property, that (f·m)·r = f·(m·r), restates that f is an R-module homomorphism from M to itself. Therefore any right R-module M is an End(M)-R-bimodule. Similarly any left R-module N is an R-End(N)-bimodule. If R is a subring of S, then S is an R-R-bimodule. It is also an R-S- and an S-R-bimodule. If M is an S-R-bimodule and N is an R-T-bimodule, then their tensor product over R is an S-T-bimodule. Further notions and facts If M and N are R-S-bimodules, then a map between them is a bimodule homomorphism if it is both a homomorp
https://en.wikipedia.org/wiki/Flat%20module
In algebra, flat modules include free modules, projective modules, and, over a principal ideal domain, torsion-free modules. Formally, a module M over a ring R is flat if taking the tensor product over R with M preserves exact sequences. A module is faithfully flat if taking the tensor product with a sequence produces an exact sequence if and only if the original sequence is exact. Flatness was introduced by Jean-Pierre Serre in his paper Géométrie Algébrique et Géométrie Analytique. Definition A left module over a ring is flat if the following condition is satisfied: for every injective linear map of right -modules, the map is also injective, where is the map induced by For this definition, it is enough to restrict the injections to the inclusions of finitely generated ideals into . Equivalently, an -module is flat if the tensor product with is an exact functor; that is, if, for every short exact sequence of -modules the sequence is also exact. (This is an equivalent definition since the tensor product is a right exact functor.) These definitions apply also if is a non-commutative ring, and is a left -module; in this case, , and must be right -modules, and the tensor products are not -modules in general, but only abelian groups. Characterizations Flatness can also be characterized by the following equational condition, which means that -linear relations in stem from linear relations in . A left -module is flat if and only if, for every linear relation with and , there exist elements and such that for and for It is equivalent to define elements of a module, and a linear map from to this module, which maps the standard basis of to the elements. This allows rewriting the previous characterization in terms of homomorphisms, as follows.
An -module is flat if and only if the following condition holds: for every map where is a finitely generated free -module, and for every finitely generated -submodule of the map factors through a map to a free -module such that Relations to other module properties Flatness is related to various other module properties, such as being free, projective, or torsion-free. In particular, every flat module is torsion-free, every projective module is flat, and every free module is projective. There are finitely generated modules that are flat and not projective. However, finitely generated flat modules are all projective over the rings that are most commonly considered. This is partly summarized in the following graphic. Torsion-free modules Every flat module is torsion-free. This results from the above characterization in terms of relations by taking . The converse holds over the integers, and more generally over principal ideal domains and Dedekind rings. An integral domain over which every torsion-free module is flat is called a Prüfer domain. Free and projective modules A module is projective if and only if there is a free module and two linear maps and such tha
https://en.wikipedia.org/wiki/Differentiable%20curve
Differential geometry of curves is the branch of geometry that deals with smooth curves in the plane and in Euclidean space by methods of differential and integral calculus. Many specific curves have been thoroughly investigated using the synthetic approach. Differential geometry takes another path: curves are represented in a parametrized form, and their geometric properties and various quantities associated with them, such as the curvature and the arc length, are expressed via derivatives and integrals using vector calculus. One of the most important tools used to analyze a curve is the Frenet frame, a moving frame that provides a coordinate system at each point of the curve that is "best adapted" to the curve near that point. The theory of curves is much simpler and narrower in scope than the theory of surfaces and its higher-dimensional generalizations because a regular curve in a Euclidean space has no intrinsic geometry. Any regular curve may be parametrized by the arc length (the natural parametrization). From the point of view of a theoretical point particle on the curve that does not know anything about the ambient space, all curves would appear the same. Different space curves are only distinguished by how they bend and twist. Quantitatively, this is measured by the differential-geometric invariants called the curvature and the torsion of a curve. The fundamental theorem of curves asserts that the knowledge of these invariants completely determines the curve. Definitions A parametric -curve or a -parametrization is a vector-valued function that is -times continuously differentiable (that is, the component functions of are continuously differentiable), where , , and is a non-empty interval of real numbers. The image of the parametric curve is the set of values taken by the function. The parametric curve and its image must be distinguished because a given subset of can be the image of many distinct parametric curves.
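As a concrete instance of the curvature and torsion invariants mentioned above, they can be computed symbolically for a circular helix (a sketch; the helix r(t) = (a cos t, a sin t, bt) with a = 2, b = 1 is our choice, and the standard formulas give curvature a/(a² + b²) = 2/5 and torsion b/(a² + b²) = 1/5):

```python
import sympy as sp

t = sp.symbols('t', real=True)
a, b = 2, 1
r = sp.Matrix([a * sp.cos(t), a * sp.sin(t), b * t])   # circular helix

r1, r2, r3 = r.diff(t), r.diff(t, 2), r.diff(t, 3)
cross = r1.cross(r2)

# kappa = |r' x r''| / |r'|^3   and   tau = (r' x r'') . r''' / |r' x r''|^2
kappa = sp.simplify(sp.sqrt(cross.dot(cross)) / sp.sqrt(r1.dot(r1)) ** 3)
tau = sp.simplify(cross.dot(r3) / cross.dot(cross))

# Both invariants are constant along the helix: 2/5 and 1/5.
assert abs(float(kappa.subs(t, 1)) - 0.4) < 1e-12
assert abs(float(tau.subs(t, 1)) - 0.2) < 1e-12
```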
The parameter can be thought of as representing time, and the curve as the trajectory of a moving point in space. When the domain is a closed interval [a, b], the value of the curve at a is called the starting point and the value at b is the endpoint. If the starting and the end points coincide, then the curve is a closed curve or a loop. To be a C^k-loop, the function must be k-times continuously differentiable and its derivatives up to order k must agree at the two endpoints. The parametric curve is simple if it is injective. It is analytic if each component function is an analytic function, that is, of class C^ω. The curve is regular of order m (where m ≤ k) if, for every parameter value, the first m derivatives form a linearly independent set. In particular, a parametric C^1-curve is regular if and only if its first derivative vanishes nowhere. Re-parametrization and equivalence relation Given the image of a parametric curve, there are several different parametrizations of the parametric curve. Differential geometry aims to describe the properties of parametric curves that are invariant under certain reparametrizations. A suitable equivalence relation on the set of all parametric curves must be defined. The differential-geometric prope
https://en.wikipedia.org/wiki/Order%20%28group%20theory%29
In mathematics, the order of a finite group is the number of its elements. If a group is not finite, one says that its order is infinite. The order of an element of a group (also called period length or period) is the order of the subgroup generated by the element. If the group operation is denoted as a multiplication, the order of an element a of a group is thus the smallest positive integer m such that a^m = e, where e denotes the identity element of the group, and a^m denotes the product of m copies of a. If no such m exists, the order of a is infinite. The order of a group G is denoted by ord(G) or |G|, and the order of an element a is denoted by ord(a) or |a|, instead of ord(⟨a⟩), where the brackets denote the generated group. Lagrange's theorem states that for any subgroup H of a finite group G, the order of the subgroup divides the order of the group; that is, |H| is a divisor of |G|. In particular, the order |a| of any element is a divisor of |G|. Example The symmetric group S3 has the following multiplication table.

{| class="wikitable"
|-
! •
! e || s || t || u || v || w
|-
! e
| e || s || t || u || v || w
|-
! s
| s || e || v || w || t || u
|-
! t
| t || u || e || s || w || v
|-
! u
| u || t || w || v || e || s
|-
! v
| v || w || s || e || u || t
|-
! w
| w || v || u || t || s || e
|}

This group has six elements, so ord(S3) = 6. By definition, the order of the identity, e, is one, since e^1 = e. Each of s, t, and w squares to e, so these group elements have order two: |s| = |t| = |w| = 2. Finally, u and v have order 3, since u^2 = v, v^2 = u, and u^3 = v^3 = e. Order and structure The order of a group G and the orders of its elements give much information about the structure of the group. Roughly speaking, the more complicated the factorization of |G|, the more complicated the structure of G. For |G| = 1, the group is trivial. In any group, only the identity element a = e has ord(a) = 1. If every non-identity element in G is equal to its inverse (so that a^2 = e), then ord(a) = 2; this implies G is abelian since ab = (ab)^−1 = b^−1 a^−1 = ba.
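The multiplication table above can be read into code to verify the element orders and the divisibility asserted by Lagrange's theorem (a sketch; e, s, t, u, v, w are encoded as 0–5):

```python
# The S3 multiplication table, with e, s, t, u, v, w encoded as 0..5.
TABLE = [
    [0, 1, 2, 3, 4, 5],   # e row
    [1, 0, 4, 5, 2, 3],   # s row
    [2, 3, 0, 1, 5, 4],   # t row
    [3, 2, 5, 4, 0, 1],   # u row
    [4, 5, 1, 0, 3, 2],   # v row
    [5, 4, 3, 2, 1, 0],   # w row
]

def order(g):
    """Smallest k >= 1 with g^k = e (element 0 is the identity e)."""
    k, x = 1, g
    while x != 0:
        x = TABLE[x][g]   # multiply by g once more
        k += 1
    return k

orders = [order(g) for g in range(6)]
assert orders == [1, 2, 2, 3, 3, 2]        # e; s, t have order 2, u, v order 3, w order 2
assert all(6 % k == 0 for k in orders)     # Lagrange: each order divides |G| = 6
```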
The converse is not true; for example, the (additive) cyclic group Z6 of integers modulo 6 is abelian, but the number 2 has order 3: 2 + 2 + 2 = 6 ≡ 0 (mod 6). The relationship between the two concepts of order is the following: if we write ⟨a⟩ for the subgroup generated by a, then ord(a) = ord(⟨a⟩). For any integer k, we have a^k = e   if and only if   ord(a) divides k. In general, the order of any subgroup of G divides the order of G. More precisely: if H is a subgroup of G, then ord(G) / ord(H) = [G : H], where [G : H] is called the index of H in G, an integer. This is Lagrange's theorem. (This is, however, only true when G has finite order. If ord(G) = ∞, the quotient ord(G) / ord(H) does not make sense.) As an immediate consequence of the above, we see that the order of every element of a group divides the order of the group. For example, in the symmetric group shown above, where ord(S3) = 6, the possible orders of the elements are 1, 2, 3 or 6. The following partial converse is true for finite groups: if d divides the order of a group G and d is a prime number, then there exists an
https://en.wikipedia.org/wiki/Deltahedron
In geometry, a deltahedron (plural deltahedra) is a polyhedron whose faces are all equilateral triangles. The name is taken from the Greek upper case delta (Δ), which has the shape of an equilateral triangle. There are infinitely many deltahedra, all having an even number of faces by the handshaking lemma. Of these only eight are convex, having 4, 6, 8, 10, 12, 14, 16 and 20 faces. The number of faces, edges, and vertices is listed below for each of the eight convex deltahedra. The eight convex deltahedra There are only eight strictly-convex deltahedra: three are regular polyhedra, and five are Johnson solids. The three regular convex polyhedra are indeed Platonic solids. In the 6-faced deltahedron, some vertices have degree 3 and some degree 4. In the 10-, 12-, 14-, and 16-faced deltahedra, some vertices have degree 4 and some degree 5. These five irregular deltahedra belong to the class of Johnson solids: convex polyhedra with regular polygons for faces. Deltahedra retain their shape even if the edges are free to rotate around their vertices so that the angles between edges are fluid. Not all polyhedra have this property: for example, if some of the angles of a cube are relaxed, the cube can be deformed into a non-right square prism. There is no 18-faced convex deltahedron. However, the edge-contracted icosahedron gives an example of an octadecahedron that can either be made convex with 18 irregular triangular faces, or made with equilateral triangles that include two coplanar sets of three triangles. Non-strictly convex cases There are infinitely many cases with coplanar triangles, allowing for sections of the infinite triangular tilings. If the sets of coplanar triangles are considered a single face, a smaller set of faces, edges, and vertices can be counted. The coplanar triangular faces can be merged into rhombic, trapezoidal, hexagonal, or other equilateral polygon faces. Each face must be a convex polyiamond such as , , , , , , and , ... 
Some smaller examples include: Non-convex forms There are an infinite number of nonconvex forms. Some examples of face-intersecting deltahedra: Great icosahedron - a Kepler-Poinsot solid, with 20 intersecting triangles Other nonconvex deltahedra can be generated by adding equilateral pyramids to the faces of all 5 Platonic solids: Other augmentations of the tetrahedron include: Also by adding inverted pyramids to faces: Excavated dodecahedron See also Simplicial polytope - polytopes with all simplex facets References Further reading . . . . pp. 35–36 External links The eight convex deltahedra Deltahedron Deltahedron Polyhedra
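Because every face is an equilateral triangle, a deltahedron with F faces has E = 3F/2 edges (each edge is shared by two faces) and, by Euler's formula V − E + F = 2, it has V = F/2 + 2 vertices. A quick check of these counts for the eight convex deltahedra (our own sketch):

```python
convex_faces = [4, 6, 8, 10, 12, 14, 16, 20]    # the eight convex deltahedra

edges = [3 * F // 2 for F in convex_faces]       # 3 edge-slots per face, 2 faces per edge
vertices = [E - F + 2 for F, E in zip(convex_faces, edges)]  # Euler: V - E + F = 2

for F in convex_faces:
    assert F % 2 == 0        # face counts are even, as the handshaking lemma requires

# Regular cases: tetrahedron (4 faces), octahedron (8), icosahedron (20).
assert vertices[convex_faces.index(4)] == 4
assert vertices[convex_faces.index(8)] == 6
assert vertices[convex_faces.index(20)] == 12
```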
https://en.wikipedia.org/wiki/Alternating%20series
In mathematics, an alternating series is an infinite series of the form or with for all . The signs of the general terms alternate between positive and negative. Like any series, an alternating series converges if and only if the associated sequence of partial sums converges. Examples The geometric series 1/2 − 1/4 + 1/8 − 1/16 + ⋯ sums to 1/3. The alternating harmonic series has a finite sum but the harmonic series does not. The Mercator series provides an analytic expression of the natural logarithm: The functions sine and cosine used in trigonometry can be defined as alternating series in calculus even though they are introduced in elementary algebra as the ratio of sides of a right triangle. In fact, and When the alternating factor is removed from these series one obtains the hyperbolic functions sinh and cosh used in calculus. For integer or positive index α the Bessel function of the first kind may be defined with the alternating series where is the gamma function. If is a complex number, the Dirichlet eta function is formed as an alternating series that is used in analytic number theory. Alternating series test The theorem known as "Leibniz Test" or the alternating series test tells us that an alternating series will converge if the terms converge to 0 monotonically. Proof: Suppose the sequence converges to zero and is monotone decreasing. If is odd and , we obtain the estimate via the following calculation: Since is monotonically decreasing, the terms are negative. Thus, we have the final inequality: . Similarly, it can be shown that . Since converges to , our partial sums form a Cauchy sequence (i.e., the series satisfies the Cauchy criterion) and therefore converge. The argument for even is similar. Approximating sums The estimate above does not depend on . 
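The Leibniz bound can be checked numerically on the alternating harmonic series, whose sum is ln 2 (a sketch): the error of a partial sum never exceeds the first omitted term.

```python
import math

def partial_sum(n):
    """First n terms of 1 - 1/2 + 1/3 - ... (the alternating harmonic series)."""
    return sum((-1) ** k / (k + 1) for k in range(n))

total = math.log(2)   # the exact sum
for n in (1, 10, 100, 1000):
    error = abs(total - partial_sum(n))
    # Leibniz bound: the error is at most the first omitted term, 1/(n+1).
    assert error <= 1 / (n + 1)
```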
So, if is approaching 0 monotonically, the estimate provides an error bound for approximating infinite sums by partial sums: That does not mean that this estimate always finds the very first element after which error is less than the modulus of the next term in the series. Indeed if you take and try to find the term after which error is at most 0.00005, the inequality above shows that the partial sum up through is enough, but in fact this is twice as many terms as needed. Indeed, the error after summing first 9999 elements is 0.0000500025, and so taking the partial sum up through is sufficient. This series happens to have the property that constructing a new series with also gives an alternating series where the Leibniz test applies and thus makes this simple error bound not optimal. This was improved by the Calabrese bound, discovered in 1962, that says that this property allows for a result 2 times less than with the Leibniz error bound. In fact this is also not optimal for series where this property applies 2 or more times, which is described by Johnsonbaugh error bound. If one can apply the property an infinite number of times, Euler's transfo
https://en.wikipedia.org/wiki/Kagawa%20District%2C%20Kagawa
is a district located in Kagawa Prefecture, Japan. As of the January 10, 2006 Takamatsu merger (but with 2003 population statistics), the district consists of the single town of Naoshima and has an estimated population of 3,583 and a density of 251.97 persons per km2. The total area is 14.22 km2. Towns and villages Naoshima Mergers On September 26, 2005 the town of Shionoe merged into the expanded city of Takamatsu. On January 10, 2006 the towns of Kagawa and Kōnan, along with the towns of Aji and Mure, both from Kita District, and the town of Kokubunji, from Ayauta District, merged into the expanded city of Takamatsu. Districts in Kagawa Prefecture
https://en.wikipedia.org/wiki/Free%20algebra
In mathematics, especially in the area of abstract algebra known as ring theory, a free algebra is the noncommutative analogue of a polynomial ring since its elements may be described as "polynomials" with non-commuting variables. Likewise, the polynomial ring may be regarded as a free commutative algebra. Definition For R a commutative ring, the free (associative, unital) algebra on n indeterminates {X1,...,Xn} is the free R-module with a basis consisting of all words over the alphabet {X1,...,Xn} (including the empty word, which is the unit of the free algebra). This R-module becomes an R-algebra by defining a multiplication as follows: the product of two basis elements is the concatenation of the corresponding words: and the product of two arbitrary R-module elements is thus uniquely determined (because the multiplication in an R-algebra must be R-bilinear). This R-algebra is denoted R⟨X1,...,Xn⟩. This construction can easily be generalized to an arbitrary set X of indeterminates. In short, for an arbitrary set , the free (associative, unital) R-algebra on X is with the R-bilinear multiplication that is concatenation on words, where X* denotes the free monoid on X (i.e. words on the letters Xi), denotes the external direct sum, and Rw denotes the free R-module on 1 element, the word w. For example, in R⟨X1,X2,X3,X4⟩, for scalars α, β, γ, δ ∈ R, a concrete example of a product of two elements is . The non-commutative polynomial ring may be identified with the monoid ring over R of the free monoid of all finite words in the Xi. Contrast with polynomials Since the words over the alphabet {X1, ...,Xn} form a basis of R⟨X1,...,Xn⟩, it is clear that any element of R⟨X1, ...,Xn⟩ can be written uniquely in the form: where are elements of R and all but finitely many of these elements are zero. 
This explains why the elements of R⟨X1,...,Xn⟩ are often denoted as "non-commutative polynomials" in the "variables" (or "indeterminates") X1,...,Xn; the elements are said to be "coefficients" of these polynomials, and the R-algebra R⟨X1,...,Xn⟩ is called the "non-commutative polynomial algebra over R in n indeterminates". Note that unlike in an actual polynomial ring, the variables do not commute. For example, X1X2 does not equal X2X1. More generally, one can construct the free algebra R⟨E⟩ on any set E of generators. Since rings may be regarded as Z-algebras, a free ring on E can be defined as the free algebra Z⟨E⟩. Over a field, the free algebra on n indeterminates can be constructed as the tensor algebra on an n-dimensional vector space. For a more general coefficient ring, the same construction works if we take the free module on n generators. The construction of the free algebra on E is functorial in nature and satisfies an appropriate universal property. The free algebra functor is left adjoint to the forgetful functor from the category of R-algebras to the category of sets. Free algebras over division rings are free ideal rings. See al
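As an illustrative sketch (not part of the article), the free algebra's multiplication-by-concatenation can be modeled with dictionaries keyed by words:

```python
from collections import defaultdict

def multiply(p, q):
    """Multiply two non-commutative polynomials over R (here: Python ints).

    A polynomial is a dict mapping a word -- a tuple of indeterminate
    names -- to its coefficient; the empty word () is the unit 1.
    Basis elements multiply by concatenating words, extended bilinearly.
    """
    result = defaultdict(int)
    for word1, coeff1 in p.items():
        for word2, coeff2 in q.items():
            result[word1 + word2] += coeff1 * coeff2
    return dict(result)

x1 = {("X1",): 1}
x2 = {("X2",): 1}

assert multiply(x1, x2) == {("X1", "X2"): 1}
assert multiply(x1, x2) != multiply(x2, x1)   # the variables do not commute

# (2 + X1) * (3*X2) = 6*X2 + 3*X1X2
p = {(): 2, ("X1",): 1}
q = {("X2",): 3}
assert multiply(p, q) == {("X2",): 6, ("X1", "X2"): 3}
```

Because keys are ordered tuples rather than multisets, X1X2 and X2X1 are distinct basis words, which is exactly the point of the construction.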
https://en.wikipedia.org/wiki/Petrus%20Apianus
Petrus Apianus (April 16, 1495 – April 21, 1552), also known as Peter Apian, Peter Bennewitz, and Peter Bienewitz, was a German humanist, known for his works in mathematics, astronomy and cartography. His work on "cosmography", the field that dealt with the earth and its position in the universe, was presented in his most famous publications, Astronomicum Caesareum (1540) and Cosmographicus liber (1524). His books were extremely influential in his time, with the numerous editions in multiple languages being published until 1609. The lunar crater Apianus and asteroid 19139 Apian are named in his honour. Life and work Apianus was born as Peter Bienewitz (or Bennewitz) in Leisnig in Saxony; his father, Martin, was a shoemaker. The family was relatively well off, belonging to the middle-class citizenry of Leisnig. Apianus was educated at the Latin school in Rochlitz. From 1516 to 1519 he studied at the University of Leipzig; during this time, he Latinized his name to Apianus (lat. apis means "bee"; "Biene" is the German word for bee). In 1519, Apianus moved to Vienna and continued his studies at the University of Vienna, which was considered one of the leading universities in geography and mathematics at the time and where Georg Tannstetter taught. When the plague broke out in Vienna in 1521, he completed his studies with a BA and moved to Regensburg and then to Landshut. At Landshut, he produced his Cosmographicus liber (1524), a highly respected work on astronomy and navigation which was to see more than 40 reprints in four languages (Latin; French, 1544; Dutch, 1545; Spanish, 1548) and that remained popular until the end of the 16th century. Later editions were produced by Gemma Frisius. In 1527, Peter Apianus was called to the University of Ingolstadt as a mathematician and printer. His print shop started small. Among the first books he printed were the writings of Johann Eck, Martin Luther's antagonist. 
This print shop was active between 1540 and 1543 and became well known for its high-quality editions of geographic and cartographic works. It is thought that he used stereotype printing techniques on woodblocks. The printer's logo included the motto Industria superat vires in Greek, Hebrew, and Latin around the figure of a boy. Through his work, Apianus became a favourite of emperor Charles V, who had praised Cosmographicus liber at the Imperial Diet of 1530 and granted him a printing monopoly in 1532 and 1534. In 1535, the emperor made Apianus an armiger, i.e. granted him the right to display a coat of arms. In 1540, Apianus printed the Astronomicum Caesareum, dedicated to Charles V. Charles promised him a truly royal sum (3,000 golden guilders), appointed him his court mathematician, and made him a Reichsritter (a Free Imperial Knight) and in 1544 even an Imperial Count Palatine. All this furthered Apianus's reputation as an eminent scientist. Astronomicum Caesareum is noted for its visual appeal. Printed and bound decoratively, with abou
https://en.wikipedia.org/wiki/Pullback%20%28differential%20geometry%29
Let be a smooth map between smooth manifolds and . Then there is an associated linear map from the space of 1-forms on (the linear space of sections of the cotangent bundle) to the space of 1-forms on . This linear map is known as the pullback (by ), and is frequently denoted by . More generally, any covariant tensor field – in particular any differential form – on may be pulled back to using . When the map is a diffeomorphism, then the pullback, together with the pushforward, can be used to transform any tensor field from to or vice versa. In particular, if is a diffeomorphism between open subsets of and , viewed as a change of coordinates (perhaps between different charts on a manifold ), then the pullback and pushforward describe the transformation properties of covariant and contravariant tensors used in more traditional (coordinate dependent) approaches to the subject. The idea behind the pullback is essentially the notion of precomposition of one function with another. However, by combining this idea in several different contexts, quite elaborate pullback operations can be constructed. This article begins with the simplest operations, then uses them to construct more sophisticated ones. Roughly speaking, the pullback mechanism (using precomposition) turns several constructions in differential geometry into contravariant functors. Pullback of smooth functions and smooth maps Let be a smooth map between (smooth) manifolds and , and suppose is a smooth function on . Then the pullback of by is the smooth function on defined by . Similarly, if is a smooth function on an open set in , then the same formula defines a smooth function on the open set in . (In the language of sheaves, pullback defines a morphism from the sheaf of smooth functions on to the direct image by of the sheaf of smooth functions on .) More generally, if is a smooth map from to any other manifold , then is a smooth map from to . 
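Precomposition is easy to sketch in code. The following toy Python example (plain real-valued functions, no manifold structure) shows the defining formula and the contravariance of the pullback:

```python
def pullback(phi, f):
    """Pullback of f by phi: (phi* f)(x) = f(phi(x))."""
    return lambda x: f(phi(x))

phi = lambda x: x ** 2          # a map phi : R -> R
f = lambda y: y + 1             # a function on the target

g = pullback(phi, f)            # the pullback phi* f, a function on the source
assert g(3) == f(phi(3)) == 10

# Contravariance: pulling back along a composition reverses the order,
# (psi o phi)* f = phi* (psi* f).
psi = lambda y: 2 * y
assert pullback(lambda x: psi(phi(x)), f)(3) == pullback(phi, pullback(psi, f))(3)
```

The last assertion is the functorial fact mentioned above: precomposition turns composition of maps into composition of pullbacks in the opposite order.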
Pullback of bundles and sections If is a vector bundle (or indeed any fiber bundle) over and is a smooth map, then the pullback bundle is a vector bundle (or fiber bundle) over whose fiber over in is given by . In this situation, precomposition defines a pullback operation on sections of : if is a section of over , then the pullback section is a section of over . Pullback of multilinear forms Let be a linear map between vector spaces V and W (i.e., Φ is an element of , also denoted ), and let be a multilinear form on W (also known as a tensor – not to be confused with a tensor field – of rank , where s is the number of factors of W in the product). Then the pullback Φ∗F of F by Φ is a multilinear form on V defined by precomposing F with Φ. More precisely, given vectors v1, v2, ..., vs in V, Φ∗F is defined by the formula which is a multilinear form on V. Hence Φ∗ is a (linear) operator from multilinear forms on W to multilinear forms on V. As a special case, note that if F is a linear form (or (0,1)-ten
https://en.wikipedia.org/wiki/Operator%20topologies
In the mathematical field of functional analysis there are several standard topologies which are given to the algebra of bounded linear operators on a Banach space . Introduction Let be a sequence of linear operators on the Banach space . Consider the statement that converges to some operator on . This could have several different meanings: If , that is, the operator norm of (the supremum of , where ranges over the unit ball in ) converges to 0, we say that in the uniform operator topology. If for all , then we say in the strong operator topology. Finally, suppose that for all we have in the weak topology of . This means that for all continuous linear functionals on . In this case we say that in the weak operator topology. List of topologies on B(H) There are many topologies that can be defined on besides the ones used above; most are at first only defined when is a Hilbert space, even though in many cases there are appropriate generalisations. The topologies listed below are all locally convex, which implies that they are defined by a family of seminorms. In analysis, a topology is called strong if it has many open sets and weak if it has few open sets, so that the corresponding modes of convergence are, respectively, strong and weak. (In topology proper, these terms can suggest the opposite meaning, so strong and weak are replaced with, respectively, fine and coarse.) The diagram on the right is a summary of the relations, with the arrows pointing from strong to weak. If is a Hilbert space, the Hilbert space has a (unique) predual , consisting of the trace class operators, whose dual is . The seminorm for w positive in the predual is defined to be . If is a vector space of linear maps on the vector space , then is defined to be the weakest topology on such that all elements of are continuous. The norm topology or uniform topology or uniform operator topology is defined by the usual norm ||x|| on . It is stronger than all the other topologies below. 
The weak (Banach space) topology is , in other words the weakest topology such that all elements of the dual are continuous. It is the weak topology on the Banach space . It is stronger than the ultraweak and weak operator topologies. (Warning: the weak Banach space topology and the weak operator topology and the ultraweak topology are all sometimes called the weak topology, but they are different.) The Mackey topology or Arens-Mackey topology is the strongest locally convex topology on such that the dual is , and is also the uniform convergence topology on , -compact convex subsets of . It is stronger than all topologies below. The σ-strong-* topology or ultrastrong-* topology is the weakest topology stronger than the ultrastrong topology such that the adjoint map is continuous. It is defined by the family of seminorms and for positive elements of . It is stronger than all topologies below. The σ-strong topology or ultrastrong topology or strong
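The distinction between the uniform and strong operator topologies in the introduction can be illustrated numerically. Below is a finite-dimensional sketch standing in for ℓ² (the truncation operators and the particular vector are assumptions of the example): the truncations converge strongly to the identity but stay at operator-norm distance 1 from it.

```python
import numpy as np

N = 1000                       # finite-dimensional slice of l^2, for illustration
x = 1.0 / np.arange(1, N + 1)  # a fixed vector with square-summable entries

def truncate(n, v):
    """P_n zeroes all coordinates with index >= n."""
    out = v.copy()
    out[n:] = 0.0
    return out

# Strong operator convergence: ||P_n x - x|| -> 0 for each fixed x ...
errs = [np.linalg.norm(truncate(n, x) - x) for n in (10, 100, 900)]
assert errs[0] > errs[1] > errs[2]

# ... but not uniform convergence: ||P_n - I|| = 1 for every n < N,
# witnessed by the unit basis vector e_{N-1}, which P_n maps to 0.
e_last = np.zeros(N)
e_last[-1] = 1.0
for n in (10, 100, 900):
    assert np.linalg.norm(truncate(n, e_last) - e_last) == 1.0
```

The witness vector moves "out to infinity" as n grows, which is exactly why the supremum over the unit ball does not shrink even though every fixed vector is eventually captured.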
https://en.wikipedia.org/wiki/Law%20of%20the%20iterated%20logarithm
In probability theory, the law of the iterated logarithm describes the magnitude of the fluctuations of a random walk. The original statement of the law of the iterated logarithm is due to A. Ya. Khinchin (1924). Another statement was given by A. N. Kolmogorov in 1929. Statement Let {Yn} be independent, identically distributed random variables with means zero and unit variances. Let Sn = Y1 + ... + Yn. Then where “log” is the natural logarithm, “lim sup” denotes the limit superior, and “a.s.” stands for “almost surely”. Discussion The law of iterated logarithms operates “in between” the law of large numbers and the central limit theorem. There are two versions of the law of large numbers — the weak and the strong — and they both state that the sums Sn, scaled by n−1, converge to zero, respectively in probability and almost surely: On the other hand, the central limit theorem states that the sums Sn scaled by the factor n−½ converge in distribution to a standard normal distribution. By Kolmogorov's zero–one law, for any fixed M, the probability that the event occurs is 0 or 1. Then so An identical argument shows that This implies that these quantities cannot converge almost surely. In fact, they cannot even converge in probability, which follows from the equality and the fact that the random variables are independent and both converge in distribution to The law of the iterated logarithm provides the scaling factor where the two limits become different: Thus, although the absolute value of the quantity is less than any predefined ε > 0 with probability approaching one, it will nevertheless almost surely be greater than ε infinitely often; in fact, the quantity will be visiting the neighborhoods of any point in the interval (-1,1) almost surely. Generalizations and variants The law of the iterated logarithm (LIL) for a sum of independent and identically distributed (i.i.d.) 
random variables with zero mean and bounded increment dates back to Khinchin and Kolmogorov in the 1920s. Since then, there has been a tremendous amount of work on the LIL for various kinds of dependent structures and for stochastic processes. The following is a small sample of notable developments. Hartman–Wintner (1940) generalized LIL to random walks with increments with zero mean and finite variance. De Acosta (1983) gave a simple proof of the Hartman–Wintner version of the LIL. Chung (1948) proved another version of the law of the iterated logarithm for the absolute value of a brownian motion. Strassen (1964) studied the LIL from the point of view of invariance principles. Stout (1970) generalized the LIL to stationary ergodic martingales. Wittmann (1985) generalized Hartman–Wintner version of LIL to random walks satisfying milder conditions. Vovk (1987) derived a version of LIL valid for a single chaotic sequence (Kolmogorov random sequence). This is notable, as it is outside the realm of classical probability theory. Yongge Wang (1996) showe
https://en.wikipedia.org/wiki/Cohen%E2%80%93Macaulay%20ring
In mathematics, a Cohen–Macaulay ring is a commutative ring with some of the algebro-geometric properties of a smooth variety, such as local equidimensionality. Under mild assumptions, a local ring is Cohen–Macaulay exactly when it is a finitely generated free module over a regular local subring. Cohen–Macaulay rings play a central role in commutative algebra: they form a very broad class, and yet they are well understood in many ways. They are named for , who proved the unmixedness theorem for polynomial rings, and for , who proved the unmixedness theorem for formal power series rings. All Cohen–Macaulay rings have the unmixedness property. For Noetherian local rings, there is the following chain of inclusions. Definition For a commutative Noetherian local ring R, a finite (i.e. finitely generated) R-module $M$ is a Cohen–Macaulay module if $\operatorname{depth}(M) = \dim(M)$ (in general we have $\operatorname{depth}(M) \le \dim(M)$; see the Auslander–Buchsbaum formula for the relation between depth and dim of a certain kind of modules). On the other hand, $R$ is a module on itself, so we call $R$ a Cohen–Macaulay ring if it is a Cohen–Macaulay module as an $R$-module. A maximal Cohen–Macaulay module is a Cohen–Macaulay module M such that $\dim(M) = \dim(R)$. The above definition was for Noetherian local rings, but we can expand the definition to a more general Noetherian ring: If $R$ is a commutative Noetherian ring, then an R-module M is called a Cohen–Macaulay module if $M_{\mathfrak{m}}$ is a Cohen–Macaulay module for all maximal ideals $\mathfrak{m} \in \operatorname{Supp}(M)$. (This is a kind of circular definition unless we define zero modules as Cohen–Macaulay. So we define zero modules as Cohen–Macaulay modules in this definition.) Now, to define maximal Cohen–Macaulay modules for these rings, we require that $M_{\mathfrak{m}}$ be such an $R_{\mathfrak{m}}$-module for each maximal ideal $\mathfrak{m}$ of R. As in the local case, R is a Cohen–Macaulay ring if it is a Cohen–Macaulay module (as an $R$-module on itself). Examples Noetherian rings of the following types are Cohen–Macaulay. Any regular local ring. 
This leads to various examples of Cohen–Macaulay rings, such as the integers , or a polynomial ring over a field K, or a power series ring . In geometric terms, every regular scheme, for example a smooth variety over a field, is Cohen–Macaulay. Any 0-dimensional ring (or equivalently, any Artinian ring). Any 1-dimensional reduced ring, for example any 1-dimensional domain. Any 2-dimensional normal ring. Any Gorenstein ring. In particular, any complete intersection ring. The ring of invariants when R is a Cohen–Macaulay algebra over a field of characteristic zero and G is a finite group (or more generally, a linear algebraic group whose identity component is reductive). This is the Hochster–Roberts theorem. Any determinantal ring. That is, let R be the quotient of a regular local ring S by the ideal I generated by the r × r minors of some p × q matrix of elements of S. If the codimension (or height) of I is equal to the "expected" codimension (p−r+1)(q−r+1), R is called a determinantal ring. In that case, R is Cohen−Macaulay. Sim
https://en.wikipedia.org/wiki/Riemann%E2%80%93Hurwitz%20formula
In mathematics, the Riemann–Hurwitz formula, named after Bernhard Riemann and Adolf Hurwitz, describes the relationship of the Euler characteristics of two surfaces when one is a ramified covering of the other. It therefore connects ramification with algebraic topology, in this case. It is a prototype result for many others, and is often applied in the theory of Riemann surfaces (which is its origin) and algebraic curves. Statement For a compact, connected, orientable surface , the Euler characteristic is , where g is the genus (the number of handles), since the Betti numbers are . In the case of an (unramified) covering map of surfaces that is surjective and of degree , we have the formula That is because each simplex of should be covered by exactly in , at least if we use a fine enough triangulation of , as we are entitled to do since the Euler characteristic is a topological invariant. What the Riemann–Hurwitz formula does is to add in a correction to allow for ramification (sheets coming together). Now assume that and are Riemann surfaces, and that the map is complex analytic. The map is said to be ramified at a point P in S′ if there exist analytic coordinates near P and π(P) such that π takes the form π(z) = zn, and n > 1. An equivalent way of thinking about this is that there exists a small neighborhood U of P such that π(P) has exactly one preimage in U, but the image of any other point in U has exactly n preimages in U. The number n is called the ramification index at P and also denoted by eP. In calculating the Euler characteristic of S′ we notice the loss of eP − 1 copies of P above π(P) (that is, in the inverse image of π(P)). Now let us choose triangulations of S and S′ with vertices at the branch and ramification points, respectively, and use these to compute the Euler characteristics. Then S′ will have the same number of d-dimensional faces for d different from zero, but fewer than expected vertices. 
Therefore, we find a "corrected" formula $\chi(S') = N \cdot \chi(S) - \sum_{P \in S'} (e_P - 1)$ or, as it is also commonly written, using that $\chi = 2 - 2g$ and multiplying through by $-1$: $2g(S') - 2 = N\,(2g(S) - 2) + \sum_{P \in S'} (e_P - 1)$ (all but finitely many P have eP = 1, so this is quite safe). This formula is known as the Riemann–Hurwitz formula and also as Hurwitz's theorem. Another useful form of the formula is: $\chi(S') - r = N\,(\chi(S) - b)$ where r is the number of points in S′ at which the cover has nontrivial ramification (ramification points) and b is the number of points in S that are images of such points (branch points). Indeed, to obtain this formula, remove disjoint disc neighborhoods of the branch points from S and disjoint disc neighborhoods of the ramification points in S' so that the restriction of $\pi$ is a covering. Then apply the general degree formula to the restriction, use the fact that the Euler characteristic of the disc equals 1, and use the additivity of the Euler characteristic under connected sums. Examples The Weierstrass $\wp$-function, considered as a meromorphic function with values in the Riemann sphere, yields a map from an ellip
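A quick numeric sanity check of the genus form of the formula, on the standard degree-2 examples (the helper function is just illustrative bookkeeping):

```python
def rh_genus(g_source_expected, g_base, degree, ramification_indices):
    """Check 2g(S') - 2 = N(2g(S) - 2) + sum(e_P - 1) and return g(S')."""
    rhs = degree * (2 * g_base - 2) + sum(e - 1 for e in ramification_indices)
    assert rhs % 2 == 0                    # chi(S') is even for a closed surface
    g_source = (rhs + 2) // 2
    assert g_source == g_source_expected
    return g_source

# Torus -> sphere, degree 2, four simple branch points (e_P = 2 each),
# as with the Weierstrass p-function: 2g' - 2 = 2*(-2) + 4 = 0, so g' = 1.
assert rh_genus(1, 0, 2, [2, 2, 2, 2]) == 1

# A genus-2 hyperelliptic curve: degree-2 cover of the sphere, 6 branch points.
assert rh_genus(2, 0, 2, [2] * 6) == 2
```

The same bookkeeping also shows why, for instance, no unramified cover of the sphere by a torus can exist: with all e_P = 1 the right-hand side would be N·(−2), never 0.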
https://en.wikipedia.org/wiki/Antecedent%20variable
In statistics and social sciences, an antecedent variable is a variable that can help to explain the apparent relationship (or part of the relationship) between other variables that are nominally in a cause and effect relationship. In a regression analysis, an antecedent variable would be one that influences both the independent variable and the dependent variable. See also Path analysis (statistics) Latent variable Intervening variable Confounding variable References Regression analysis Independence (probability theory) Design of experiments
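A small synthetic simulation (all coefficients and sample sizes are made up for illustration) shows how an antecedent variable z can manufacture an apparent x–y relationship that shrinks once z is controlled for:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

z = rng.normal(size=n)               # antecedent variable
x = z + 0.5 * rng.normal(size=n)     # "independent" variable, driven by z
y = z + 0.5 * rng.normal(size=n)     # "dependent" variable, also driven by z

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

raw = corr(x, y)                     # apparent x-y relationship

# Control for z by regressing it out of both variables and
# correlating the residuals (a partial correlation).
def residual(a, b):
    slope = np.dot(a, b) / np.dot(b, b)
    return a - slope * b

partial = corr(residual(x, z), residual(y, z))

assert raw > 0.5            # strong apparent correlation...
assert abs(partial) < 0.1   # ...mostly explained by the antecedent z
```

The residual-based partial correlation is the regression-analysis view mentioned above: once the antecedent's influence on both variables is removed, little of the nominal cause-and-effect relationship remains.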
https://en.wikipedia.org/wiki/Centered%20hexagonal%20number
In mathematics and combinatorics, a centered hexagonal number, or hex number, is a centered figurate number that represents a hexagon with a dot in the center and all other dots surrounding the center dot in a hexagonal lattice. The following figures illustrate this arrangement for the first four centered hexagonal numbers: {|style="min-width: 325px;"| ! 1 !! !! 7 !! !! 19 !! !! 37 |- style="text-align:center; color:red; vertical-align:middle;" | +1 || || +6 || || +12 || || +18 |- style="vertical-align:middle; text-align:center; line-height:1.1em;" | | | | | | |} Centered hexagonal numbers should not be confused with cornered hexagonal numbers, which are figurate numbers in which the associated hexagons share a vertex. The sequence of centered hexagonal numbers starts out as follows: 1, 7, 19, 37, 61, 91, 127, 169, 217, 271, 331, 397, 469, 547, 631, 721, 817, 919. Formula The $n$th centered hexagonal number is given by the formula $C_n = 3n(n-1) + 1 = 3n^2 - 3n + 1.$ Expressing the formula as $C_n = 1 + 6\left(\frac{n(n-1)}{2}\right)$ shows that the centered hexagonal number for $n$ is 1 more than 6 times the $(n-1)$th triangular number. In the opposite direction, the index $n$ corresponding to the centered hexagonal number $C_n = x$ can be calculated using the formula $n = \frac{3 + \sqrt{12x - 3}}{6}.$ This can be used as a test for whether a number is centered hexagonal: it will be if and only if the above expression is an integer. Recurrence and generating function The centered hexagonal numbers satisfy the recurrence relation $C_n = C_{n-1} + 6(n-1).$ From this we can calculate the generating function $f(x) = \sum_{n \ge 1} C_n x^n$, which turns out to be $f(x) = \frac{x(x^2 + 4x + 1)}{(1-x)^3}.$ Properties In base 10 one can notice that the centered hexagonal numbers' rightmost (least significant) digits follow the pattern 1–7–9–7–1 (repeating with period 5). This follows from the last digit of the triangle numbers which repeat 0-1-3-1-0 when taken modulo 5. In base 6 the rightmost digit is always 1: 16, 116, 316, 1016, 1416, 2316, 3316, 4416... 
This follows from the fact that every centered hexagonal number modulo 6 (=106) equals 1. The sum of the first centered hexagonal numbers is . That is, centered hexagonal pyramidal numbers and cubes are the same numbers, but they represent different shapes. Viewed from the opposite perspective, centered hexagonal numbers are differences of two consecutive cubes, so that the centered hexagonal numbers are the gnomon of the cubes. (This can be seen geometrically from the diagram.) In particular, prime centered hexagonal numbers are cuban primes. The difference between and the th centered hexagonal number is a number of the form , while the difference between and the th centered hexagonal number is a pronic number. Applications Centered hexagonal numbers have practical applications in packing problems. They arise when packing round items into larger round containers, such as Vienna sausages into round cans, or combining individual wire strands into a cable. Many segmented mirror reflecting tele
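The closed formula, recurrence, cube identities, and integrality test described above can all be checked quickly in Python (a short illustrative sketch):

```python
import math

def centered_hex(n):
    """n-th centered hexagonal number, n >= 1: 3n(n-1) + 1."""
    return 3 * n * (n - 1) + 1

assert [centered_hex(n) for n in range(1, 7)] == [1, 7, 19, 37, 61, 91]

# Recurrence: each new ring adds 6(n-1) dots.
assert all(centered_hex(n) == centered_hex(n - 1) + 6 * (n - 1) for n in range(2, 50))

# Difference of consecutive cubes, and partial sums are cubes.
assert all(centered_hex(n) == n**3 - (n - 1) ** 3 for n in range(1, 50))
assert sum(centered_hex(n) for n in range(1, 6)) == 5**3

def is_centered_hex(x):
    """Integrality test from inverting 3n^2 - 3n + 1 = x: n = (3 + sqrt(12x-3))/6."""
    n = (3 + math.isqrt(12 * x - 3)) / 6
    return n == int(n) and centered_hex(int(n)) == x

assert is_centered_hex(91) and not is_centered_hex(100)
```

Note that 12·C_n − 3 = (6n − 3)², which is why the square root in the inversion formula always comes out exact for genuine centered hexagonal numbers.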
https://en.wikipedia.org/wiki/Gams%20%28disambiguation%29
Gams may be: Acronyms General Algebraic Modeling System (GAMS), a mathematical optimization computer program Guide to Available Mathematical Software (GAMS), a project of the National Institute of Standards and Technology Graduate of Ayurvedic Medicine and Surgery (GAMS), a degree in Ayurvedic Medicine and Surgery nowadays called BAMS (Bachelor's degree in Ayurvedic Medicine and surgery) Places Gams, a municipality of Switzerland Bad Gams, a municipality of Austria Gams bei Hieflau, a municipality of Austria Gams, German name for Kamnica, Maribor, a village northwest of Maribor, Slovenia People Helmut Gams, known by the author abbreviation "Gams" Pius Bonifacius Gams, ecclesiastical historian Music "Gams", a song by the Cincinnati blues-rock group The Bronx Kill Other plural of Gam (nautical term), a social meeting between ships at sea
https://en.wikipedia.org/wiki/Dixon%27s%20Q%20test
In statistics, Dixon's Q test, or simply the Q test, is used for identification and rejection of outliers. This assumes normal distribution and, per Robert Dean and Wilfrid Dixon and others, this test should be used sparingly and never more than once in a data set. To apply a Q test for bad data, arrange the data in order of increasing values and calculate Q as defined: $Q = \frac{\text{gap}}{\text{range}}$ where gap is the absolute difference between the outlier in question and the closest number to it, and range is the difference between the largest and smallest values. If Q > Qtable, where Qtable is a reference value corresponding to the sample size and confidence level, then reject the questionable point. Note that only one point may be rejected from a data set using a Q test. Example Consider the data set: Now rearrange in increasing order: We hypothesize that 0.167 is an outlier. Calculate Q: With 10 observations and at 90% confidence, Q = 0.455 > 0.412 = Qtable, so we conclude 0.167 is indeed an outlier. However, at 95% confidence, Q = 0.455 < 0.466 = Qtable, so 0.167 is not considered an outlier. McBane notes: Dixon provided related tests intended to search for more than one outlier, but they are much less frequently used than the r10 or Q version that is intended to eliminate a single outlier. Table This table summarizes the limit values of the two-tailed Dixon's Q test. See also Grubbs's test for outliers References Further reading Robert B. Dean and Wilfrid J. Dixon (1951) "Simplified Statistics for Small Numbers of Observations". Anal. Chem., 1951, 23 (4), 636–638. Abstract Full text PDF Rorabacher, D. B. (1991) "Statistical Treatment for Rejection of Deviant Values: Critical Values of Dixon Q Parameter and Related Subrange Ratios at the 95 percent Confidence Level". Anal. Chem., 63 (2), 139–146. PDF (including larger tables of limit values) McBane, George C. (2006) "Programs to Compute Distribution Functions and Critical Values for Extreme Value Ratios for Outlier Detection". J. 
Statistical Software 16(3):1–9, 2006 Article (PDF) and Software (Fortran 90, Zipfile) Shivanshu Shrivastava, A. Rajesh, P. K. Bora (2014) "Sliding window Dixon's tests for malicious users' suppression in a cooperative spectrum sensing system" IET Communications, 2014, 8 (7) W. J. Dixon. The Annals of Mathematical Statistics. Vol. 21, No. 4 (Dec., 1950), pp. 488–506 External links Main page of GNU R's package 'outliers' includes the 'dixon.test' function. Dixon's test in Communications – use of Dixon's test in cognitive radio communications (by Shivanshu Shrivastava) Statistical tests Robust statistics Statistical outliers
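The Q-test procedure can be sketched in Python. The ten measurements below are hypothetical illustrative values, chosen so that Q works out near the 0.455 of the worked example; the critical values for n = 10 are the ones quoted in the article:

```python
def dixon_q(data):
    """Q = gap / range for the most extreme value of the data set."""
    xs = sorted(data)
    spread = xs[-1] - xs[0]
    gap_low = xs[1] - xs[0]       # gap if the low end is the suspect point
    gap_high = xs[-1] - xs[-2]    # gap if the high end is the suspect point
    if gap_low >= gap_high:
        return gap_low / spread, xs[0]
    return gap_high / spread, xs[-1]

# Hypothetical ten-point data set (illustrative, not from the article):
data = [0.189, 0.167, 0.187, 0.183, 0.186, 0.182,
        0.181, 0.184, 0.181, 0.177]

q, suspect = dixon_q(data)
assert suspect == 0.167
assert abs(q - 0.4545) < 0.001

Q_TABLE_N10 = {0.90: 0.412, 0.95: 0.466}   # critical values for n = 10
assert q > Q_TABLE_N10[0.90]        # reject the suspect at 90% confidence
assert not q > Q_TABLE_N10[0.95]    # retain it at 95% confidence
```

Note that the function only ever tests the single most extreme value, in keeping with the rule that the Q test rejects at most one point per data set.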
https://en.wikipedia.org/wiki/Closed%20graph%20theorem
In mathematics, the closed graph theorem may refer to one of several basic results characterizing continuous functions in terms of their graphs. Each gives conditions when functions with closed graphs are necessarily continuous. Graphs and maps with closed graphs If is a map between topological spaces then the graph of is the set or equivalently, It is said that the graph of is closed if is a closed subset of (with the product topology). Any continuous function into a Hausdorff space has a closed graph. Any linear map, between two topological vector spaces whose topologies are (Cauchy) complete with respect to translation invariant metrics, and if in addition (1a) is sequentially continuous in the sense of the product topology, then the map is continuous and its graph, , is necessarily closed. Conversely, if is such a linear map with, in place of (1a), the graph of is (1b) known to be closed in the Cartesian product space , then is continuous and therefore necessarily sequentially continuous. Examples of continuous maps that do not have a closed graph If is any space then the identity map is continuous but its graph, which is the diagonal , is closed in if and only if is Hausdorff. In particular, if is not Hausdorff then is continuous but does not have a closed graph. Let denote the real numbers with the usual Euclidean topology and let denote with the indiscrete topology (where note that is not Hausdorff and that every function valued in is continuous). Let be defined by and for all . Then is continuous but its graph is not closed in . Closed graph theorem in point-set topology In point-set topology, the closed graph theorem states the following: Non-Hausdorff spaces are rarely seen, but non-compact spaces are common. An example of non-compact is the real line, which allows the discontinuous function with closed graph . 
For set-valued functions In functional analysis If is a linear operator between topological vector spaces (TVSs) then we say that is a closed operator if the graph of is closed in when is endowed with the product topology. The closed graph theorem is an important result in functional analysis that guarantees that a closed linear operator is continuous under certain conditions. The original result has been generalized many times. A well known version of the closed graph theorems is the following. See also Notes References Bibliography Theorems in functional analysis
https://en.wikipedia.org/wiki/Steiner%20tree%20problem
In combinatorial mathematics, the Steiner tree problem, or minimum Steiner tree problem, named after Jakob Steiner, is an umbrella term for a class of problems in combinatorial optimization. While Steiner tree problems may be formulated in a number of settings, they all require an optimal interconnect for a given set of objects and a predefined objective function. One well-known variant, which is often used synonymously with the term Steiner tree problem, is the Steiner tree problem in graphs. Given an undirected graph with non-negative edge weights and a subset of vertices, usually referred to as terminals, the Steiner tree problem in graphs requires a tree of minimum weight that contains all terminals (but may include additional vertices) and minimizes the total weight of its edges. Further well-known variants are the Euclidean Steiner tree problem and the rectilinear minimum Steiner tree problem. The Steiner tree problem in graphs can be seen as a generalization of two other famous combinatorial optimization problems: the (non-negative) shortest path problem and the minimum spanning tree problem. If a Steiner tree problem in graphs contains exactly two terminals, it reduces to finding the shortest path. If, on the other hand, all vertices are terminals, the Steiner tree problem in graphs is equivalent to the minimum spanning tree. However, while both the non-negative shortest path and the minimum spanning tree problem are solvable in polynomial time, no such solution is known for the Steiner tree problem. Its decision variant, asking whether a given input has a tree of weight less than some given threshold, is NP-complete, which implies that the optimization variant, asking for the minimum-weight tree in a given graph, is NP-hard. In fact, the decision variant was among Karp's original 21 NP-complete problems. The Steiner tree problem in graphs has applications in circuit layout or network design. 
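One classical way to cope with this NP-hardness is the metric-closure MST heuristic, a well-known factor-2 approximation for the Steiner tree problem in graphs. A compact self-contained sketch (the vertex numbering and the star-graph test case are illustrative assumptions):

```python
def steiner_2approx(n, edges, terminals):
    """Metric-closure 2-approximation for the Steiner tree problem in graphs.

    Build all-pairs shortest paths (Floyd-Warshall), take an MST of the
    terminals under those distances (Prim), and expand each MST edge back
    into a shortest path.  n: vertex count (0..n-1); edges: (u, v, weight).
    """
    INF = float("inf")
    dist = [[INF] * n for _ in range(n)]
    nxt = [[None] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0.0
    for u, v, w in edges:
        if w < dist[u][v]:
            dist[u][v] = dist[v][u] = w
            nxt[u][v], nxt[v][u] = v, u
    for k in range(n):                      # Floyd-Warshall metric closure
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]

    in_tree, mst_pairs = {terminals[0]}, []  # Prim's MST over the terminals
    while len(in_tree) < len(terminals):
        u, v = min(((a, b) for a in in_tree for b in terminals if b not in in_tree),
                   key=lambda p: dist[p[0]][p[1]])
        in_tree.add(v)
        mst_pairs.append((u, v))

    tree_edges = set()                       # expand closure edges into paths
    for u, v in mst_pairs:
        while u != v:
            step = nxt[u][v]
            tree_edges.add((min(u, step), max(u, step)))
            u = step
    return tree_edges

# Star graph: terminals 0, 1, 2 around a non-terminal hub 3 (unit weights).
edges = [(0, 3, 1.0), (1, 3, 1.0), (2, 3, 1.0)]
tree = steiner_2approx(4, edges, [0, 1, 2])
weights = {(min(u, v), max(u, v)): w for u, v, w in edges}
assert sum(weights[e] for e in tree) == 3.0  # here the hub is found exactly
```

On this star the heuristic recovers the optimal tree, because the shortest paths between terminals all share the hub; in general it only guarantees weight at most twice the optimum.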
However, practical applications usually require variations, giving rise to a multitude of Steiner tree problem variants. Most versions of the Steiner tree problem are NP-hard, but some restricted cases can be solved in polynomial time. Despite the pessimistic worst-case complexity, several Steiner tree problem variants, including the Steiner tree problem in graphs and the rectilinear Steiner tree problem, can be solved efficiently in practice, even for large-scale real-world problems. Euclidean Steiner tree The original problem was stated in the form that has become known as the Euclidean Steiner tree problem or geometric Steiner tree problem: Given N points in the plane, the goal is to connect them by lines of minimum total length in such a way that any two points may be interconnected by line segments either directly or via other points and line segments. It may be shown that the connecting line segments do not intersect each other except at the endpoints and form a tree, hence the name of the problem. The problem for N = 3 has long been cons
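The gap between the NP-hard general problem and its tractable relatives can be made concrete with the classical metric-closure heuristic: compute shortest-path distances between the terminals, then take a minimum spanning tree of the resulting complete graph. Its weight is within a factor of 2 of the optimal Steiner tree weight. A minimal sketch (the graph representation and the example star graph are illustrative, not from the source):

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src in a weighted undirected graph."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def steiner_2approx_weight(adj, terminals):
    """Weight of an MST of the metric closure on the terminals,
    which is within a factor of 2 of the optimal Steiner tree weight."""
    terminals = list(terminals)
    dist = {t: dijkstra(adj, t) for t in terminals}
    in_tree = {terminals[0]}          # Prim's algorithm on the metric closure
    total = 0
    while len(in_tree) < len(terminals):
        w, t = min((dist[u][v], v) for u in in_tree
                   for v in terminals if v not in in_tree)
        total += w
        in_tree.add(t)
    return total

# A star graph: the optimal Steiner tree uses the non-terminal hub (weight 3);
# the heuristic connects terminals pairwise through it (weight 4).
adj = {
    "hub": [("a", 1), ("b", 1), ("c", 1)],
    "a": [("hub", 1)], "b": [("hub", 1)], "c": [("hub", 1)],
}
print(steiner_2approx_weight(adj, ["a", "b", "c"]))  # 4
```

The example also shows why the factor of 2 is tight in spirit: the heuristic cannot use the non-terminal hub directly.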
https://en.wikipedia.org/wiki/Initial%20value%20problem
In multivariable calculus, an initial value problem (IVP) is an ordinary differential equation together with an initial condition which specifies the value of the unknown function at a given point in the domain. Modeling a system in physics or other sciences frequently amounts to solving an initial value problem. In that context, the differential initial value is an equation which specifies how the system evolves with time given the initial conditions of the problem. Definition An initial value problem is a differential equation y'(t) = f(t, y(t)) with f : Ω → R^n, where Ω is an open set of R × R^n, together with a point in the domain of f, (t0, y0) ∈ Ω, called the initial condition. A solution to an initial value problem is a function y that is a solution to the differential equation and satisfies y(t0) = y0. In higher dimensions, the differential equation is replaced with a family of equations y_i'(t) = f_i(t, y_1(t), y_2(t), ...), and y(t) is viewed as the vector (y_1(t), ..., y_n(t)), most commonly associated with the position in space. More generally, the unknown function y can take values on infinite dimensional spaces, such as Banach spaces or spaces of distributions. Initial value problems are extended to higher orders by treating the derivatives in the same way as an independent function, e.g. y''(t) = f(t, y(t), y'(t)). Existence and uniqueness of solutions The Picard–Lindelöf theorem guarantees a unique solution on some interval containing t0 if f is continuous on a region containing t0 and y0 and satisfies the Lipschitz condition on the variable y. The proof of this theorem proceeds by reformulating the problem as an equivalent integral equation. The integral can be considered an operator which maps one function into another, such that the solution is a fixed point of the operator. The Banach fixed point theorem is then invoked to show that there exists a unique fixed point, which is the solution of the initial value problem. An older proof of the Picard–Lindelöf theorem constructs a sequence of functions which converge to the solution of the integral equation, and thus, the solution of the initial value problem. 
Such a construction is sometimes called "Picard's method" or "the method of successive approximations". This version is essentially a special case of the Banach fixed point theorem. Hiroshi Okamura obtained a necessary and sufficient condition for the solution of an initial value problem to be unique. This condition has to do with the existence of a Lyapunov function for the system. In some situations, the function f is not of class C1, or even Lipschitz, so the usual result guaranteeing the local existence of a unique solution does not apply. The Peano existence theorem however proves that even for f merely continuous, solutions are guaranteed to exist locally in time; the problem is that there is no guarantee of uniqueness. The result may be found in Coddington & Levinson (1955, Theorem 1.3) or Robinson (2001, Theorem 2.6). An even more general result is the Carathéodory existence theorem, which proves existence for some discontinuous functions f. Examples A si
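The method of successive approximations can be illustrated on the model problem y' = y, y(0) = 1, whose Picard iterates are exactly the Taylor partial sums of exp(t). A short sketch operating on polynomial coefficient lists (the representation is an illustrative choice, not from the source):

```python
import math

def picard_step(coeffs):
    """One Picard iteration for y' = y, y(0) = 1.

    coeffs[i] is the coefficient of t**i.  The iteration is
    y_{k+1}(t) = 1 + integral_0^t y_k(s) ds, and integrating sends
    t**i to t**(i+1)/(i+1).
    """
    return [1.0] + [c / (i + 1) for i, c in enumerate(coeffs)]

y = [1.0]                 # y_0(t) = 1, the constant initial guess
for _ in range(8):
    y = picard_step(y)    # after k steps: partial Taylor sum of exp up to t**k

t = 0.5
approx = sum(c * t**i for i, c in enumerate(y))
print(abs(approx - math.exp(t)) < 1e-6)  # True
```

Eight iterations already agree with exp(0.5) to well within 1e-6, reflecting the rapid (factorial) convergence of the iterates on a fixed interval.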
https://en.wikipedia.org/wiki/115%20%28number%29
115 (one hundred [and] fifteen) is the natural number following 114 and preceding 116. In mathematics 115 has a square sum of divisors: σ(115) = 1 + 5 + 23 + 115 = 144 = 12^2. There are 115 different rooted trees with exactly eight nodes, 115 inequivalent ways of placing six rooks on a 6 × 6 chess board in such a way that no two of the rooks attack each other, and 115 solutions to the stamp folding problem for a strip of seven stamps. 115 is also a heptagonal pyramidal number. The 115th Woodall number, 115 · 2^115 − 1, is a prime number. 115 is the sum of the first five heptagonal numbers. See also 115 (disambiguation) References Integers
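Both the square divisor sum and the heptagonal-pyramidal claim (115 as the sum of the first five heptagonal numbers) are quick to verify:

```python
def sigma(n):
    """Sum of the divisors of n (naive trial division, fine for small n)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def heptagonal(k):
    """k-th heptagonal number k(5k - 3)/2."""
    return k * (5 * k - 3) // 2

print(sigma(115))                               # 144, which is 12**2
print(sum(heptagonal(k) for k in range(1, 6)))  # 115
```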
https://en.wikipedia.org/wiki/116%20%28number%29
116 (one hundred [and] sixteen) is the natural number following 115 and preceding 117. In mathematics 116 is a noncototient, meaning that there is no solution to the equation x − φ(x) = 116, where φ stands for Euler's totient function. 116! + 1 is a factorial prime. There are 116 ternary Lyndon words of length six, and 116 irreducible polynomials of degree six over a three-element field, which form the basis of a free Lie algebra of dimension 116. There are 116 different ways of partitioning the numbers from 1 through 5 into subsets in such a way that, for every k, the union of the first k subsets is a consecutive sequence of integers. There are 116 different 6×6 Costas arrays. See also 116 (disambiguation) References Integers
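The count of ternary Lyndon words of length six can be checked with the standard necklace-counting formula (1/n) Σ_{d|n} μ(d) k^{n/d}, where μ is the Möbius function:

```python
def mobius(n):
    """Möbius function via trial factorization (fine for small n)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # a squared prime factor kills mu
            result = -result
        p += 1
    if n > 1:
        result = -result          # one remaining prime factor
    return result

def lyndon_count(k, n):
    """Number of Lyndon words of length n over a k-letter alphabet."""
    return sum(mobius(d) * k ** (n // d)
               for d in range(1, n + 1) if n % d == 0) // n

print(lyndon_count(3, 6))  # 116
```

For k = 3, n = 6 the sum is 729 − 27 − 9 + 3 = 696, and 696/6 = 116, matching both counts quoted above (Lyndon words and irreducible degree-6 polynomials over a 3-element field agree in number).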
https://en.wikipedia.org/wiki/117%20%28number%29
117 (one hundred [and] seventeen) is the natural number following 116 and preceding 118. In mathematics 117 is the smallest possible length of the longest edge of an integer Heronian tetrahedron (a tetrahedron whose edge lengths, face areas and volume are all integers). Its other edge lengths are 51, 52, 53, 80 and 84. 117 is a pentagonal number. In other fields 117 can be a substitute for the number 17, which is considered unlucky in Italy. When Renault exported the R17 to Italy, it was renamed R117. Chinese dragons are usually depicted as having 117 scales, subdivided into 81 associated with yang and 36 associated with yin. In the Danish language the number 117 () is often used as a hyperbolic term to represent an arbitrary but large number. See also 117 (disambiguation) References Integers
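The claim that 117 is a pentagonal number is easy to verify: pentagonal numbers have the form k(3k − 1)/2, and k = 9 gives 117. A quick check:

```python
def pentagonal(k):
    """k-th pentagonal number k(3k - 1)/2."""
    return k * (3 * k - 1) // 2

# Which index (if any) gives 117?
print([k for k in range(1, 20) if pentagonal(k) == 117])  # [9]
```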
https://en.wikipedia.org/wiki/118%20%28number%29
118 (one hundred [and] eighteen) is the natural number following 117 and preceding 119. In mathematics There is no solution to the equation φ(x) = 118, making 118 a nontotient. Four expressions for 118 as the sum of three positive integers have the same product: 14 + 50 + 54 = 15 + 40 + 63 = 18 + 30 + 70 = 21 + 25 + 72 = 118 and 14 × 50 × 54 = 15 × 40 × 63 = 18 × 30 × 70 = 21 × 25 × 72 = 37800. 118 is the smallest number that can be expressed as four sums with the same product in this way. Because of its expression as 3^5 − 5^3, it is a Leyland number of the second kind. 118!! − 1 is a prime number, where !! denotes the double factorial (the product of even integers up to 118). In other fields There are 118 known elements on the Periodic Table, the 118th element being oganesson. See also 118 (disambiguation) References Integers
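The equal-sum, equal-product identity and the Leyland expression 3^5 − 5^3 = 118 can both be verified directly:

```python
from math import prod

triples = [(14, 50, 54), (15, 40, 63), (18, 30, 70), (21, 25, 72)]
print(all(sum(t) == 118 for t in triples))   # True: every triple sums to 118
print({prod(t) for t in triples})            # {37800}: all products agree
print(3 ** 5 - 5 ** 3)                       # 118, a Leyland number of the second kind
```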
https://en.wikipedia.org/wiki/Growth%20rate%20%28group%20theory%29
In the mathematical subject of geometric group theory, the growth rate of a group with respect to a symmetric generating set describes how fast a group grows. Every element in the group can be written as a product of generators, and the growth rate counts the number of elements that can be written as a product of length n. Definition Suppose G is a finitely generated group; and T is a finite symmetric set of generators (symmetric means that if t ∈ T then t^−1 ∈ T). Any element x ∈ G can be expressed as a word in the T-alphabet: x = a_1 a_2 ⋯ a_k with a_i ∈ T. Consider the subset B_n(G, T) of all elements of G that can be expressed by such a word of length ≤ n. This set is just the closed ball of radius n in the word metric d on G with respect to the generating set T: B_n(G, T) = {x ∈ G | d(x, e) ≤ n}. More geometrically, B_n(G, T) is the set of vertices in the Cayley graph with respect to T that are within distance n of the identity. Given two nondecreasing positive functions a and b one can say that they are equivalent (a ∼ b) if there is a constant C such that for all positive integers n, a(n/C) ≤ b(n) ≤ a(Cn); for example p^n ∼ q^n if p, q > 1. Then the growth rate of the group G can be defined as the corresponding equivalence class of the function #(n) = |B_n(G, T)|, where |B_n(G, T)| denotes the number of elements in the set B_n(G, T). Although the function #(n) depends on the set of generators T, its rate of growth does not (see below) and therefore the rate of growth gives an invariant of a group. The word metric d and therefore the sets B_n(G, T) depend on the generating set T. However, any two such metrics are bilipschitz equivalent in the following sense: for finite symmetric generating sets E, F, there is a positive constant C such that (1/C) d_F(x, y) ≤ d_E(x, y) ≤ C d_F(x, y). As an immediate corollary of this inequality we get that the growth rate does not depend on the choice of generating set. Polynomial and exponential growth If #(n) ≤ C(n^k + 1) for some constants C, k > 0 we say that G has a polynomial growth rate. The infimum of such k's is called the order of polynomial growth. According to Gromov's theorem, a group of polynomial growth is a virtually nilpotent group, i.e. it has a nilpotent subgroup of finite index. 
In particular, the order of polynomial growth has to be a natural number and in fact #(n) ∼ n^k. If #(n) ≥ a^n for some a > 1 we say that G has an exponential growth rate. Every finitely generated G has at most exponential growth, i.e. for some b > 1 we have #(n) ≤ b^n. If #(n) grows more slowly than any exponential function, G has a subexponential growth rate. Any such group is amenable. Examples A free group of finite rank k > 1 has exponential growth rate. A finite group has constant growth, that is, polynomial growth of order 0, and this includes fundamental groups of manifolds whose universal cover is compact. If M is a closed negatively curved Riemannian manifold then its fundamental group has exponential growth rate. John Milnor proved this using the fact that the word metric on the fundamental group is quasi-isometric to the universal cover of M. The free abelian group Z^d has a polynomial growth rate of order d. The discrete Heisenberg group has a polynomial growth rate of order 4. This fact is a special case of the general theorem
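The polynomial growth of the free abelian group can be made concrete for Z^2 with the standard generating set {(±1, 0), (0, ±1)}: the ball of radius n in the word metric is the set of lattice points with |x| + |y| ≤ n, of size 2n^2 + 2n + 1. A small sketch counting lattice points directly:

```python
def ball_size(n):
    """|B_n| in Z^2 with the standard generators: the L1 ball of radius n."""
    return sum(1 for x in range(-n, n + 1)
                 for y in range(-n, n + 1)
                 if abs(x) + abs(y) <= n)

sizes = [ball_size(n) for n in range(6)]
print(sizes)  # [1, 5, 13, 25, 41, 61]

# Quadratic growth, i.e. polynomial growth of order d = 2:
print(all(s == 2 * n * n + 2 * n + 1 for n, s in enumerate(sizes)))  # True
```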
https://en.wikipedia.org/wiki/Growth%20rate
Growth rate may refer to: By rate Asymptotic analysis, a branch of mathematics concerned with the analysis of growth rates Linear growth Exponential growth, a growth rate classification Any of a variety of growth rates classified by such things as the Landau notation By type of growing medium Economic growth, the increase in value of the goods and services produced by an economy Compound annual growth rate or CAGR, a measure of financial growth Population growth rate, change in population over time Growth rate (group theory), a property of a group in group theory In biology The rate of growth in any biological system, see Growth § Biology.
https://en.wikipedia.org/wiki/%C3%89tienne%20Laspeyres
Ernst Louis Étienne Laspeyres (; 28 November 1834 – 4 August 1913) was a German economist. He was Professor ordinarius of economics and statistics or State Sciences and cameralistics (public finance and administration) in Basel, Riga, Dorpat (now Tartu), Karlsruhe, and finally for 26 years in Gießen. Laspeyres was the scion of a Huguenot family of originally Gascon descent which had settled in Berlin in the 17th century, and he emphasised the Occitan pronunciation of his name as a link to his Gascon origins. Work Laspeyres is mainly known today for his 1871 development of the index number formula method for determining price increases, used for calculating the rate of inflation. A type of this calculation is known today as the Laspeyres Index. In addition to his accomplishments in price indices, Laspeyres may be counted as one of the fathers of business administration as an academic-professional discipline in Germany, and as one of the main unifiers of economics and statistics by “developing ideas which are today by and large nationally and internationally reality: quantification and operationalization of economics; expansion of official statistics; cooperation of official statistics and economic research; and integration of the economist and the statistician in one person.” (Rinne 1983) In economics, Laspeyres was to some extent a representative of the Historical School and certainly of Kathedersozialismus. The surname Laspeyres is of Gascon origin; his ancestors were Huguenots who settled in Berlin in the 17th century. How he pronounced his surname is uncertain, but likely as "Las-pay-ress". Bibliography Books by Laspeyres: Wechselbeziehungen zwischen Volksvermehrung und Höhe des Arbeitslohns, 1860 Geschichte der Volkswirtschäftlichen Anschauungen der Niederländer und ihrer Literatur zur Zeit der Republik, 1863 Der Einfluß der Wohnung auf die Sittlichkeit, 1869 Articles by Laspeyres: “Mitteilungen aus Pieter de la Courts Schriften. 
Ein Beitrag zur Geschichte der niederländischen Nationalökonomik des 17. Jahrhunderts” in Zeitschrift für die gesamte Staatswissenschaft, 1862 “Hamburger Warenpreise 1851-1860 und die kalifornisch-australische Geldentdeckung seit 1848. Ein Beitrag zur Lehre von der Geldentwertung” in Jahrbücher für Nationalökonomie und Statistik, 1884 “Die Berechnung einer mittleren Warenpreissteigerung” in Jahrbücher für Nationalökonomie und Statistik, 1871 “Welche Waren werden im Verlaufe der Zeiten immer teurer? – Statistische Studien zur Geschichte der Preisen” in Zeitschrift für die gesamte Staatswissenschaft, 1872 “Statistische Untersuchungungen über die wirtschaftliche und soziale Lage der sogenannte arbeitenden Klassen” in Concordia Zeitschrift für die Arbeiterfrage, 1875 “Die Kathedersocialisten und die statistischen Congresse. Gedanken zur Begründung einer nationalökonomischen Statistik und einer statistischen Nationalökonomie” in Deutsche Zeit- und Streit-Fragen, 1875 “Zur wirtschaftlichen Lage der ländl
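The Laspeyres index described above weights current prices by base-period quantities: L = Σ(p_t · q_0) / Σ(p_0 · q_0). A minimal sketch (the two-good basket numbers are invented for illustration):

```python
def laspeyres_index(p0, pt, q0):
    """Laspeyres price index: current prices weighted by base-period quantities."""
    return (sum(p * q for p, q in zip(pt, q0)) /
            sum(p * q for p, q in zip(p0, q0)))

# Hypothetical basket: base prices, current prices, base-period quantities.
p0 = [2.0, 5.0]
pt = [3.0, 5.5]
q0 = [10, 4]
print(laspeyres_index(p0, pt, q0))  # 1.3, i.e. a 30% rise at base-period quantities
```

Because the quantity weights are frozen at the base period, the index ignores substitution away from goods whose prices rose, which is why it tends to overstate inflation relative to a Paasche index.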
https://en.wikipedia.org/wiki/RSA%20Factoring%20Challenge
The RSA Factoring Challenge was a challenge put forward by RSA Laboratories on March 18, 1991 to encourage research into computational number theory and the practical difficulty of factoring large integers and cracking RSA keys used in cryptography. They published a list of semiprimes (numbers with exactly two prime factors) known as the RSA numbers, with a cash prize for the successful factorization of some of them. The smallest of them, a 100-decimal digit number called RSA-100 was factored by April 1, 1991. Many of the bigger numbers have still not been factored and are expected to remain unfactored for quite some time, however advances in quantum computers make this prediction uncertain due to Shor's algorithm. In 2001, RSA Laboratories expanded the factoring challenge and offered prizes ranging from $10,000 to $200,000 for factoring numbers from 576 bits up to 2048 bits. The RSA Factoring Challenges ended in 2007. RSA Laboratories stated: "Now that the industry has a considerably more advanced understanding of the cryptanalytic strength of common symmetric-key and public-key algorithms, these challenges are no longer active." When the challenge ended in 2007, only RSA-576 and RSA-640 had been factored from the 2001 challenge numbers. The factoring challenge was intended to track the cutting edge in integer factorization. A primary application is for choosing the key length of the RSA public-key encryption scheme. Progress in this challenge should give an insight into which key sizes are still safe and for how long. As RSA Laboratories is a provider of RSA-based products, the challenge was used by them as an incentive for the academic community to attack the core of their solutions — in order to prove its strength. The RSA numbers were generated on a computer with no network connection of any kind. The computer's hard drive was subsequently destroyed so that no record would exist, anywhere, of the solution to the factoring challenge. 
The first RSA numbers generated, RSA-100 to RSA-500 and RSA-617, were labeled according to their number of decimal digits; the other RSA numbers (beginning with RSA-576) were generated later and labelled according to their number of binary digits. The numbers in the table below are listed in increasing order despite this shift from decimal to binary. The mathematics RSA Laboratories states that: for each RSA number n, there exist prime numbers p and q such that n = p × q. The problem is to find these two primes, given only n. The prizes and records The following table gives an overview of all RSA numbers. Note that the RSA Factoring Challenge ended in 2007 and no further prizes will be awarded for factoring the higher numbers. The challenge numbers in white lines are part of the original challenge and are expressed in base 10, while the challenge numbers in yellow lines are part of the 2001 expansion and are expressed in base 2. See also RSA numbers, decimal expansions of the numbers and known factoriz
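The factoring task itself can be illustrated at toy scale. The sketch below uses Pollard's rho method on a small semiprime; this is only a stand-in for the far heavier machinery (quadratic sieve, general number field sieve) actually used on the RSA challenge numbers, which are hundreds of digits long:

```python
from math import gcd
import random

def pollard_rho(n):
    """Return a nontrivial factor of a small composite n via Pollard's rho."""
    if n % 2 == 0:
        return 2
    while True:
        x = y = random.randrange(2, n)
        c = random.randrange(1, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n            # tortoise: one step
            y = (y * y + c) % n
            y = (y * y + c) % n            # hare: two steps
            d = gcd(abs(x - y), n)
        if d != n:                          # retry on the rare failure d == n
            return d

n = 10403                                   # toy semiprime: 101 * 103
p = pollard_rho(n)
print(sorted([p, n // p]))  # [101, 103]
```

The expected running time grows roughly like n**(1/4), which is why rho works on toy semiprimes but is hopeless at RSA challenge sizes.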
https://en.wikipedia.org/wiki/Formal%20sum
In mathematics, a formal sum, formal series, or formal linear combination may be: In group theory, an element of a free abelian group, a sum of finitely many elements from a given basis set multiplied by integer coefficients. In linear algebra, an element of a vector space, a sum of finitely many elements from a given basis set multiplied by real, complex, or other numerical coefficients. In the study of series (mathematics), a sum of an infinite sequence of numbers or other quantities, considered as an abstract mathematical object regardless of whether the sum converges. In the study of power series, a sum of infinitely many monomials with distinct positive integer exponents, again considered as an abstract object regardless of convergence.
https://en.wikipedia.org/wiki/Tensor%20calculus
In mathematics, tensor calculus, tensor analysis, or Ricci calculus is an extension of vector calculus to tensor fields (tensors that may vary over a manifold, e.g. in spacetime). Developed by Gregorio Ricci-Curbastro and his student Tullio Levi-Civita, it was used by Albert Einstein to develop his general theory of relativity. Unlike the infinitesimal calculus, tensor calculus allows presentation of physics equations in a form that is independent of the choice of coordinates on the manifold. Tensor calculus has many applications in physics, engineering and computer science including elasticity, continuum mechanics, electromagnetism (see mathematical descriptions of the electromagnetic field), general relativity (see mathematics of general relativity), quantum field theory, and machine learning. Working with a main proponent of the exterior calculus Elie Cartan, the influential geometer Shiing-Shen Chern summarizes the role of tensor calculus:In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus. Syntax Tensor notation makes use of upper and lower indexes on objects that are used to label a variable object as covariant (lower index), contravariant (upper index), or mixed covariant and contravariant (having both upper and lower indexes). 
In fact in conventional math syntax we make use of covariant indexes when dealing with Cartesian coordinate systems frequently without realizing this is a limited use of tensor syntax as covariant indexed components. Tensor notation allows an upper index on an object that may be confused with normal power operations from conventional math syntax. Key concepts Vector decomposition Tensor notation allows a vector (v) to be decomposed into an Einstein summation representing the tensor contraction of a basis vector (e_i or e^i) with a component vector (v^i or v_i). Every vector has two different representations, one referred to as a contravariant component (v^i) with a covariant basis (e_i), and the other as a covariant component (v_i) with a contravariant basis (e^i). Tensor objects with all upper indexes are referred to as contravariant, and tensor objects with all lower indexes are referred to as covariant. The need to distinguish between contravariant and covariant arises from the fact that when we dot an arbitrary vector with its basis vector related to a particular coordinate system, there are two
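The contravariant/covariant distinction can be made concrete numerically: a metric g_ij lowers an index (v_i = g_ij v^j) and its inverse g^ij raises it back. A sketch with numpy's einsum, using an arbitrary diagonal metric chosen purely for illustration (in Cartesian coordinates g would be the identity and the two kinds of components would coincide, which is why the distinction is easy to miss):

```python
import numpy as np

g = np.array([[2.0, 0.0],
              [0.0, 3.0]])            # g_ij, a toy covariant metric
g_inv = np.linalg.inv(g)              # g^ij, the contravariant metric
v_up = np.array([1.0, 2.0])           # contravariant components v^i

v_down = np.einsum("ij,j->i", g, v_up)       # lower the index: v_i = g_ij v^j
back = np.einsum("ij,j->i", g_inv, v_down)   # raise it again:  v^i = g^ij v_j

print(v_down)                   # [2. 6.]
print(np.allclose(back, v_up))  # True: raising after lowering is the identity
```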
https://en.wikipedia.org/wiki/Reduce%20%28computer%20algebra%20system%29
Reduce is a general-purpose computer algebra system geared towards applications in physics. The development of the Reduce computer algebra system was started in the 1960s by Anthony C. Hearn. Since then, many scientists from all over the world have contributed to its development under his direction. Reduce is written entirely in its own LISP dialect called Portable Standard Lisp, expressed in an ALGOL-like syntax called RLISP. The latter is used as a basis for Reduce's user-level language. Implementations of Reduce are available on most variants of Unix, Linux, Microsoft Windows, or Apple Macintosh systems by using an underlying Portable Standard Lisp or Codemist Standard LISP implementation. The Julia package Reduce.jl uses Reduce as a backend and implements its semantics in Julia style. Reduce was open sourced in December 2008 and is available for free under a modified BSD license on SourceForge. Previously it had cost $695. See also Comparison of computer algebra systems ALTRAN REDUCE Meets CAMAL - REDUCE Computer Algebra System - J. P. Fitch References External links Reduce wiki on SourceForge. Anthony C. Hearn, Reduce User's Manual Version 3.8, February 2004. In HTML format. Anthony C. Hearn, "Reduce: The First Forty Years", invited paper presented at the A3L Conference in Honor of the 60th Birthday of Volker Weispfenning, April 2005. Andrey Grozin, "TeXmacs-Reduce interface", April 2012. Computer algebra system software for Linux Computer algebra systems Formerly proprietary software Free computer algebra systems Free software programmed in Lisp Software using the BSD license
https://en.wikipedia.org/wiki/Tullio%20Levi-Civita
Tullio Levi-Civita, (, ; 29 March 1873 – 29 December 1941) was an Italian mathematician, most famous for his work on absolute differential calculus (tensor calculus) and its applications to the theory of relativity, but who also made significant contributions in other areas. He was a pupil of Gregorio Ricci-Curbastro, the inventor of tensor calculus. His work included foundational papers in both pure and applied mathematics, celestial mechanics (notably on the three-body problem), analytic mechanics (the Levi-Civita separability conditions in the Hamilton–Jacobi equation) and hydrodynamics. Biography Born into an Italian Jewish family in Padua, Levi-Civita was the son of Giacomo Levi-Civita, a lawyer and former senator. He graduated in 1892 from the University of Padua Faculty of Mathematics. In 1894 he earned a teaching diploma after which he was appointed to the Faculty of Science teacher's college in Pavia. In 1898 he was appointed to the Padua Chair of Rational Mechanics (left uncovered by death of Ernesto Padova) where he met and, in 1914, married Libera Trevisani, one of his pupils. He remained in his position at Padua until 1918, when he was appointed to the Chair of Higher Analysis at the University of Rome; in another two years he was appointed to the Chair of Mechanics there. In 1900 he and Ricci-Curbastro published the theory of tensors in Méthodes de calcul différentiel absolu et leurs applications, which Albert Einstein used as a resource to master the tensor calculus, a critical tool in the development of the theory of general relativity. In 1917 he introduced the notion of parallel transport in Riemannian geometry, motivated by the will to simplify the computation of the curvature of a Riemannian manifold. Levi-Civita's series of papers on the problem of a static gravitational field were also discussed in his 1915–1917 correspondence with Einstein. 
The correspondence was initiated by Levi-Civita, as he found mathematical errors in Einstein's use of tensor calculus to explain the theory of relativity. Levi-Civita methodically kept all of Einstein's replies to him; and even though Einstein had not kept Levi-Civita's, the entire correspondence could be re-constructed from Levi-Civita's archive. It is evident from this that, after numerous letters, the two men had grown to respect each other. In one of the letters, regarding Levi-Civita's new work, Einstein wrote "I admire the elegance of your method of computation; it must be nice to ride through these fields upon the horse of true mathematics while the like of us have to make our way laboriously on foot". In 1933 Levi-Civita contributed to Paul Dirac's equations in quantum mechanics as well. His textbook on tensor calculus, The Absolute Differential Calculus (originally a set of lecture notes in Italian co-authored with Ricci-Curbastro), remains one of the standard texts almost a century after its first publication, with several translations available. In 1936, receiving an inv
https://en.wikipedia.org/wiki/Gregorio%20Ricci-Curbastro
Gregorio Ricci-Curbastro (12 January 1853 – 6 August 1925) was an Italian mathematician. He is most famous as the discoverer of tensor calculus. With his former student Tullio Levi-Civita, he wrote his most famous single publication, a pioneering work on the calculus of tensors, signing it as Gregorio Ricci. This appears to be the only time that Ricci-Curbastro used the shortened form of his name in a publication, and continues to cause confusion. Ricci-Curbastro also published important works in other fields, including a book on higher algebra and infinitesimal analysis, and papers on the theory of real numbers, an area in which he extended the research begun by Richard Dedekind. Early life and education Completing privately his high school studies at only 16 years of age, he enrolled on the course of philosophy-mathematics at Rome University (1869). The following year the Papal State fell and so Gregorio was called by his father to the city of his birth, Lugo di Romagna. Subsequently he attended courses at the University of Bologna during the years 1872–1873, then transferred to the Scuola Normale Superiore di Pisa. In 1875 he graduated in Pisa in physical sciences and mathematics with a thesis on differential equations, entitled "On Fuchs's Research Concerning Linear Differential Equations". During his various travels he was a student of the mathematicians Enrico Betti, Eugenio Beltrami, Ulisse Dini and Felix Klein. Studies on absolute differential calculus In 1877 Ricci-Curbastro obtained a scholarship at the Technical University of Munich, Bavaria, and he later worked as an assistant of Ulisse Dini, his teacher. In 1880 he became a lecturer of mathematics at the University of Padua where he dealt with Riemannian geometry and differential quadratic forms. 
He formed a research group in which Tullio Levi-Civita worked, with whom he wrote the fundamental treatise on absolute differential calculus (also known as Ricci calculus) with coordinates or tensor calculus on Riemannian manifold, which then became the lingua franca of the subsequent theory of Albert Einstein's general relativity. In fact absolute differential calculus had a crucial role in developing the theory, as is shown in a letter written by Albert Einstein to Ricci-Curbastro's nephew. In this context Ricci-Curbastro identified the so-called Ricci tensor which would have a crucial role within that theory. Influences The advent of tensor calculus in dynamics goes back to Lagrange, who originated the general treatment of a dynamical system, and to Riemann, who was the first to think about geometry in an arbitrary number of dimensions. He was also influenced by the works of Christoffel and of Lipschitz on the quadratic forms. In fact, it was essentially Christoffel's idea of covariant differentiation that allowed Ricci-Curbastro to make the greatest progress. Recognition Ricci-Curbastro received many honours for his contributions. He is honoured by mentions in various Academies amongst which
https://en.wikipedia.org/wiki/Gabriel%20Cramer
Gabriel Cramer (; 31 July 1704 – 4 January 1752) was a Genevan mathematician. He was the son of physician Jean Cramer and Anne Mallet Cramer. Biography Cramer showed promise in mathematics from an early age. At 18 he received his doctorate and at 20 he was co-chair of mathematics at the University of Geneva. In 1728 he proposed a solution to the St. Petersburg Paradox that came very close to the concept of expected utility theory given ten years later by Daniel Bernoulli. He published his best-known work in his forties. This included his treatise on algebraic curves (1750). It contains the earliest demonstration that a curve of the n-th degree is determined by n(n + 3)/2 points on it, in general position. (See Cramer's theorem (algebraic curves).) This led to the misconception now known as Cramer's paradox, concerning the number of intersections of two curves compared to the number of points that determine a curve. He edited the works of the two elder Bernoullis, and wrote on the physical cause of the spheroidal shape of the planets and the motion of their apsides (1730), and on Newton's treatment of cubic curves (1746). In 1750 he published Cramer's rule, giving a general formula for the solution for any unknown in a linear equation system having a unique solution, in terms of determinants implied by the system. This rule is still standard. He travelled extensively throughout Europe in the late 1730s, which greatly influenced his works in mathematics. He died in 1752 at Bagnols-sur-Cèze while traveling in southern France to restore his health. Selected works Quelle est la cause de la figure elliptique des planètes et de la mobilité de leur aphélies?, Geneva, 1730 . Geneva: Frères Cramer & Cl. Philibert, 1750 See also Cramer–Castillon problem Devil's curve Jean-Louis Calandrini References "Gabriel Cramer", in Rousseau et les savants genevois, p. 29 W. W. 
Rouse Ball, A Short Account of the History of Mathematics, (4th Edition, 1908) Isaac Benguigui, Gabriel Cramer : illustre mathématicien, 1704–1752, Genève, Cramer & Cie, 1998 Johann Christoph Strodtmann, « Geschichte des Herrn Gabriel Cramer », in Das neue gelehrte Europa […], 4th part, Meissner, 1754 Also digitized by e-rara.ch External links 1704 births 1752 deaths 18th-century scientists from the Republic of Geneva Fellows of the Royal Society 18th-century mathematicians Linear algebraists Mathematicians from the Republic of Geneva
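Cramer's rule solves a system Ax = b by ratios of determinants: x_i = det(A_i)/det(A), where A_i is A with its i-th column replaced by b. A minimal sketch for a 3 × 3 system (the example system is illustrative):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer_solve(A, b):
    """Solve a 3x3 system Ax = b with a unique solution via Cramer's rule."""
    D = det3(A)
    if D == 0:
        raise ValueError("system has no unique solution")
    xs = []
    for col in range(3):
        # Replace column `col` of A with the right-hand side b.
        Ai = [row[:col] + [b[i]] + row[col + 1:] for i, row in enumerate(A)]
        xs.append(det3(Ai) / D)
    return xs

A = [[ 2,  1, -1],
     [-3, -1,  2],
     [-2,  1,  2]]
b = [8, -11, -3]
print(cramer_solve(A, b))  # [2.0, 3.0, -1.0]
```

Although Cramer's rule is standard for theory, in numerical practice Gaussian elimination is preferred for systems larger than a few unknowns, since computing n + 1 determinants is far more expensive.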
https://en.wikipedia.org/wiki/Hyperelliptic%20curve
In algebraic geometry, a hyperelliptic curve is an algebraic curve of genus g > 1, given by an equation of the form y^2 + h(x)y = f(x), where f(x) is a polynomial of degree n = 2g + 1 > 4 or n = 2g + 2 > 4 with n distinct roots, and h(x) is a polynomial of degree < g + 2 (if the characteristic of the ground field is not 2, one can take h(x) = 0). A hyperelliptic function is an element of the function field of such a curve, or of the Jacobian variety on the curve; these two concepts are identical for elliptic functions, but different for hyperelliptic functions. Genus The degree of the polynomial determines the genus of the curve: a polynomial of degree 2g + 1 or 2g + 2 gives a curve of genus g. When the degree is equal to 2g + 1, the curve is called an imaginary hyperelliptic curve. Meanwhile, a curve of degree 2g + 2 is termed a real hyperelliptic curve. This statement about genus remains true for g = 0 or 1, but those special cases are not called "hyperelliptic". In the case g = 1 (if one chooses a distinguished point), such a curve is called an elliptic curve. Formulation and choice of model While this model is the simplest way to describe hyperelliptic curves, such an equation will have a singular point at infinity in the projective plane. This feature is specific to the case n > 3. Therefore, in giving such an equation to specify a non-singular curve, it is almost always assumed that a non-singular model (also called a smooth completion), equivalent in the sense of birational geometry, is meant. To be more precise, the equation defines a quadratic extension of C(x), and it is that function field that is meant. The singular point at infinity can be removed (since this is a curve) by the normalization (integral closure) process. It turns out that after doing this, there is an open cover of the curve by two affine charts: the one already given by y^2 + h(x)y = f(x) and another one given by w^2 + h(1/v)v^(g+1)w = v^(2g+2)f(1/v). The glueing maps between the two charts are given by (x, y) → (1/x, y/x^(g+1)) and (v, w) → (1/v, w/v^(g+1)), wherever they are defined. 
In fact geometric shorthand is assumed, with the curve C being defined as a ramified double cover of the projective line, the ramification occurring at the roots of f, and also for odd n at the point at infinity. In this way the cases n = 2g + 1 and 2g + 2 can be unified, since we might as well use an automorphism of the projective line to move any ramification point away from infinity.

Using the Riemann–Hurwitz formula

By the Riemann–Hurwitz formula, the hyperelliptic curve with genus g is defined by an equation with degree n = 2g + 2. Suppose f : X → P1 is a branched covering with ramification degree 2, where X is a curve with genus g and P1 is the Riemann sphere. Let g1 = g and let g0 = 0 be the genus of P1. Then the Riemann–Hurwitz formula reads

2g1 − 2 = 2(2g0 − 2) + Σ_s (e_s − 1),

where the sum runs over all ramified points s on X. Since the ramification degree is 2, each ramified point contributes e_s − 1 = 1, so the sum equals the number of ramified points, n. Hence 2g − 2 = −4 + n, so n = 2g + 2.

Occurrence and applications

All curves of genus 2 are hyperelliptic, but for genus ≥ 3 the generic curve is not hyperelliptic. This is seen heurist
https://en.wikipedia.org/wiki/Compositional%20data
In statistics, compositional data are quantitative descriptions of the parts of some whole, conveying relative information. Mathematically, compositional data are represented by points on a simplex. Measurements involving probabilities, proportions, percentages, and ppm can all be thought of as compositional data.

Ternary plot

Compositional data in three variables can be plotted via ternary plots. The use of a barycentric plot on three variables graphically depicts the ratios of the three variables as positions in an equilateral triangle.

Simplicial sample space

John Aitchison defined compositional data in 1982 as proportions of some whole. In particular, a compositional data point (or composition for short) can be represented by a real vector x = (x1, x2, ..., xD) with positive components. The sample space of compositional data is a simplex:

S^D = { x = (x1, x2, ..., xD) : xi > 0 for all i, and x1 + x2 + ... + xD = κ }.

The only information is given by the ratios between components, so the information of a composition is preserved under multiplication by any positive constant. Therefore, the sample space of compositional data can always be assumed to be a standard simplex, i.e. κ = 1. In this context, normalization to the standard simplex is called closure and is denoted by C(·):

C(x) = ( x1 / Σ xi, x2 / Σ xi, ..., xD / Σ xi ),

where D is the number of parts (components) and (...) denotes a row vector.

Aitchison geometry

The simplex can be given the structure of a real vector space in several different ways. The following vector space structure is called Aitchison geometry or the Aitchison simplex and has the following operations:

Perturbation: x ⊕ y = C(x1 y1, x2 y2, ..., xD yD)
Powering: α ⊙ x = C(x1^α, x2^α, ..., xD^α)
Inner product: ⟨x, y⟩ = (1 / 2D) Σi Σj log(xi / xj) log(yi / yj)

Under these operations alone, it is sufficient to show that the Aitchison simplex forms a (D − 1)-dimensional Euclidean vector space.

Orthonormal bases

Since the Aitchison simplex forms a finite dimensional Hilbert space, it is possible to construct orthonormal bases in the simplex. Every composition x can be decomposed as follows:

x = ⊕i ⟨x, ei⟩ ⊙ ei,

where e1, ..., e(D−1) forms an orthonormal basis in the simplex.
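The closure operation can be sketched in a few lines; the function and variable names are illustrative:

```python
def closure(x, kappa=1.0):
    """Closure operation C(x): rescale positive parts to sum to kappa."""
    total = sum(x)
    return [kappa * xi / total for xi in x]

# Only the ratios between components matter: multiplying by a positive
# constant leaves the closure unchanged.
c1 = closure([1.0, 2.0, 7.0])
c2 = closure([10.0, 20.0, 70.0])
assert all(abs(a - b) < 1e-12 for a, b in zip(c1, c2))
assert abs(sum(c1) - 1.0) < 1e-12   # the result lands on the standard simplex
```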
The values ⟨x, ei⟩ are the (orthonormal and Cartesian) coordinates of x with respect to the given basis. They are called isometric log-ratio (ilr) coordinates.

Linear transformations

There are three well-characterized isomorphisms that transform from the Aitchison simplex to real space. All of these transforms satisfy linearity.

Additive logratio transform

The additive log ratio (alr) transform is an isomorphism alr: S^D → R^(D−1), given by

alr(x) = ( log(x1/xD), log(x2/xD), ..., log(x(D−1)/xD) ).

The choice of denominator component is arbitrary, and could be any specified component. This transform is commonly used in chemistry with measurements such as pH. In addition, this is the transform most commonly used for multinomial logistic regression. The alr transform is not an isometry, meaning that distances on transformed values will not be equivalent to distances on the original compositions in the simplex.

Center logratio transform

The center log ratio (clr) transform is both an isomorphism and an isometry clr: S^D → U ⊂ R^D, given by

clr(x) = ( log(x1/g(x)), log(x2/g(x)), ..., log(xD/g(x)) ),

where g(x) is the geometric mean of x. The inverse of this fu
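A minimal sketch of the alr and clr transforms, assuming the definitions above (function names are illustrative):

```python
import math

def alr(x):
    """Additive log-ratio transform into R^(D-1), last part as denominator."""
    return [math.log(xi / x[-1]) for xi in x[:-1]]

def clr(x):
    """Centered log-ratio transform: log of each part over the geometric mean."""
    g = math.exp(sum(math.log(xi) for xi in x) / len(x))
    return [math.log(xi / g) for xi in x]

x = [0.1, 0.2, 0.7]
assert len(alr(x)) == 2           # alr maps a 3-part composition to R^2
assert abs(sum(clr(x))) < 1e-12   # clr coordinates always sum to zero
```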
https://en.wikipedia.org/wiki/Swallowtail
Swallowtail may refer to: Swallowtail catastrophe or swallowtail surface, a singularity occurring in the part of mathematics called catastrophe theory Swallow-tail coat, a formal tailcoat worn traditionally as part of the white tie dress code Swallowtail butterfly, large colorful butterflies from the family Papilionidae Swallowtail (film), 1996 film directed by Shunji Iwai Swallowtail (flag), a term in vexillology Swallowtail joint in woodworking, see Dovetail joint The Swallow's Tail, a painting by Salvador Dalí, inspired by the swallowtail catastrophe Swallowtail, a butler café in Tokyo, Japan Swallowtail, a Wolf Alice song from their debut album My Love Is Cool See also Swallowtail Butterfly (Ai no Uta), the theme song for the film Swallowtail
https://en.wikipedia.org/wiki/600-cell
In geometry, the 600-cell is the convex regular 4-polytope (four-dimensional analogue of a Platonic solid) with Schläfli symbol {3,3,5}. It is also known as the C600, hexacosichoron and hexacosihedroid. It is also called a tetraplex (abbreviated from "tetrahedral complex") and a polytetrahedron, being bounded by tetrahedral cells. The 600-cell's boundary is composed of 600 tetrahedral cells with 20 meeting at each vertex. Together they form 1200 triangular faces, 720 edges, and 120 vertices. It is the 4-dimensional analogue of the icosahedron, since it has five tetrahedra meeting at every edge, just as the icosahedron has five triangles meeting at every vertex. Its dual polytope is the 120-cell. Geometry The 600-cell is the fifth in the sequence of 6 convex regular 4-polytopes (in order of size and complexity). It can be deconstructed into twenty-five overlapping instances of its immediate predecessor the 24-cell, as the 24-cell can be deconstructed into three overlapping instances of its predecessor the tesseract (8-cell), and the 8-cell can be deconstructed into two overlapping instances of its predecessor the 16-cell. The reverse procedure to construct each of these from an instance of its predecessor preserves the radius of the predecessor, but generally produces a successor with a smaller edge length. The 24-cell's edge length equals its radius, but the 600-cell's edge length is ~0.618 times its radius. The 600-cell's radius and edge length are in the golden ratio. 
Coordinates

Unit radius Cartesian coordinates

The vertices of a 600-cell of unit radius centered at the origin of 4-space, with edges of length 1/φ ≈ 0.618 (where φ = (1 + √5)/2 ≈ 1.618 is the golden ratio), can be given as follows:

8 vertices obtained from (0, 0, 0, ±1) by permuting coordinates, and 16 vertices of the form: (±1/2, ±1/2, ±1/2, ±1/2)

The remaining 96 vertices are obtained by taking even permutations of (±φ/2, ±1/2, ±1/(2φ), 0)

Note that the first 8 are the vertices of a 16-cell, the second 16 are the vertices of a tesseract, and those 24 vertices together are the vertices of a 24-cell. The remaining 96 vertices are the vertices of a snub 24-cell, which can be found by partitioning each of the 96 edges of another 24-cell (dual to the first) in the golden ratio in a consistent manner. When interpreted as quaternions, these are the unit icosians. In the 24-cell, there are squares, hexagons and triangles that lie on great circles (in central planes through four or six vertices). In the 600-cell there are twenty-five overlapping inscribed 24-cells, with each vertex and square shared by five 24-cells, and each hexagon or triangle shared by two 24-cells. In each 24-cell there are three disjoint 16-cells, so in the 600-cell there are 75 overlapping inscribed 16-cells. Each 16-cell constitutes a distinct orthonormal basis for the choice of a coordinate reference frame. The 60 axes and 75 16-cells of the 600-cell constitute a geometric configuration, which in the language of configurations is
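Assuming these are the standard unit-radius coordinates, they can be generated and the vertex and edge counts checked numerically:

```python
from itertools import permutations, product
from math import sqrt, isclose

phi = (1 + sqrt(5)) / 2  # golden ratio

def is_even(p):
    """Parity of a permutation of (0, 1, 2, 3) by counting inversions."""
    inv = sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])
    return inv % 2 == 0

verts = set()
for i in range(4):                            # 8 permutations of (0, 0, 0, ±1)
    for s in (1.0, -1.0):
        v = [0.0] * 4
        v[i] = s
        verts.add(tuple(v))
for signs in product((0.5, -0.5), repeat=4):  # 16 vertices (±1/2, ±1/2, ±1/2, ±1/2)
    verts.add(signs)
base = (phi / 2, 0.5, 1 / (2 * phi), 0.0)     # even permutations of (±φ/2, ±1/2, ±1/(2φ), 0)
for p in filter(is_even, permutations(range(4))):
    for s in product((1, -1), repeat=4):      # sign flips on the 0 coordinate coincide
        verts.add(tuple(s[i] * base[p[i]] for i in range(4)))

assert len(verts) == 120                                          # 120 vertices
assert all(isclose(sum(c * c for c in v), 1.0) for v in verts)    # all at unit radius
V = list(verts)
edge2 = (1 / phi) ** 2                                            # squared edge length
edges = sum(1 for i in range(120) for j in range(i + 1, 120)
            if isclose(sum((a - b) ** 2 for a, b in zip(V[i], V[j])),
                       edge2, abs_tol=1e-9))
assert edges == 720                                               # 720 edges
```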
https://en.wikipedia.org/wiki/Population%20dynamics
Population dynamics is the branch of mathematics used to model and study the size and age composition of populations as dynamical systems.

History

Population dynamics has traditionally been the dominant branch of mathematical biology, which has a history of more than 220 years, although over the last century the scope of mathematical biology has greatly expanded. The beginning of population dynamics is widely regarded as the work of Malthus, formulated as the Malthusian growth model. According to Malthus, assuming that the conditions (the environment) remain constant (ceteris paribus), a population will grow (or decline) exponentially. This principle provided the basis for subsequent predictive theories, such as the demographic studies of Benjamin Gompertz and Pierre François Verhulst in the early 19th century, who refined and adjusted the Malthusian demographic model. A more general model formulation was proposed by F. J. Richards in 1959, further expanded by Simon Hopkins, in which the models of Gompertz, Verhulst and also Ludwig von Bertalanffy are covered as special cases of the general formulation. The Lotka–Volterra predator-prey equations are another famous example, as well as the alternative Arditi–Ginzburg equations.

Logistic function

Simplified population models usually start with four key variables (four demographic processes): death, birth, immigration, and emigration. Mathematical models used to calculate changes in population demographics and evolution hold the assumption of no external influence. Models can be more mathematically complex where "...several competing hypotheses are simultaneously confronted with the data."
For example, in a closed system where immigration and emigration does not take place, the rate of change in the number of individuals in a population can be described as:

dN/dt = B − D = bN − dN = (b − d)N = rN,

where N is the total number of individuals in the specific experimental population being studied, B is the number of births and D is the number of deaths per individual in a particular experiment or model. The algebraic symbols b, d and r stand for the rates of birth, death, and the rate of change per individual in the general population, the intrinsic rate of increase. This formula can be read as the rate of change in the population (dN/dt) is equal to births minus deaths (B − D).

Using these techniques, Malthus' population principle of growth was later transformed into a mathematical model known as the logistic equation:

dN/dt = rN(1 − N/K),

where N is the biomass density, r is the maximum per-capita rate of change, and K is the carrying capacity of the population. The formula can be read as follows: the rate of change in the population (dN/dt) is equal to growth (rN) that is limited by carrying capacity (1 − N/K). From these basic mathematical principles the discipline of population ecology expands into a field of investigation that queries the demographics of real populations and tests these results against the statistical models. The field of population ecolog
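As an illustrative sketch (parameter values assumed, not from the source), the logistic equation can be integrated with a simple Euler scheme:

```python
def logistic_step(N, r, K, dt):
    """One Euler step of the logistic equation dN/dt = r*N*(1 - N/K)."""
    return N + r * N * (1 - N / K) * dt

N, r, K, dt = 10.0, 0.5, 1000.0, 0.01
for _ in range(5000):          # integrate to t = 50
    N = logistic_step(N, r, K, dt)
assert abs(N - K) < 1.0        # the population saturates at the carrying capacity K
```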
https://en.wikipedia.org/wiki/Hodge%20theory
In mathematics, Hodge theory, named after W. V. D. Hodge, is a method for studying the cohomology groups of a smooth manifold M using partial differential equations. The key observation is that, given a Riemannian metric on M, every cohomology class has a canonical representative, a differential form that is annihilated by the Laplacian operator of the metric. Such forms are called harmonic. The theory was developed by Hodge in the 1930s to study algebraic geometry, and it built on the work of Georges de Rham on de Rham cohomology. It has major applications in two settings: Riemannian manifolds and Kähler manifolds. Hodge's primary motivation, the study of complex projective varieties, is encompassed by the latter case. Hodge theory has become an important tool in algebraic geometry, particularly through its connection to the study of algebraic cycles. While Hodge theory is intrinsically dependent upon the real and complex numbers, it can be applied to questions in number theory. In arithmetic situations, the tools of p-adic Hodge theory have given alternative proofs of, or analogous results to, classical Hodge theory.

History

The field of algebraic topology was still nascent in the 1920s. It had not yet developed the notion of cohomology, and the interaction between differential forms and topology was poorly understood. In 1928, Élie Cartan published a note entitled Sur les nombres de Betti des espaces de groupes clos in which he suggested, but did not prove, that differential forms and topology should be linked. Upon reading it, Georges de Rham, then a student, was immediately struck by inspiration. In his 1931 thesis, he proved a spectacular result now called de Rham's theorem.
By Stokes' theorem, integration of differential forms along singular chains induces, for any compact smooth manifold M, a bilinear pairing

H_k(M; R) × H^k_dR(M) → R, (c, ω) ↦ ∫_c ω.

As originally stated, de Rham's theorem asserts that this is a perfect pairing, and that therefore each of the terms on the left-hand side are vector space duals of one another. In contemporary language, de Rham's theorem is more often phrased as the statement that singular cohomology with real coefficients is isomorphic to de Rham cohomology:

H^k(M; R) ≅ H^k_dR(M).

De Rham's original statement is then a consequence of Poincaré duality. Separately, a 1927 paper of Solomon Lefschetz used topological methods to reprove theorems of Riemann. In modern language, if ω1 and ω2 are holomorphic differentials on an algebraic curve C, then their wedge product ω1 ∧ ω2 is necessarily zero because C has only one complex dimension; consequently, the cup product of their cohomology classes is zero, and when made explicit, this gave Lefschetz a new proof of the Riemann relations. Additionally, if ω is a non-zero holomorphic differential, then √−1 ω ∧ ω̄ is a positive volume form, from which Lefschetz was able to rederive Riemann's inequalities. In 1929, W. V. D. Hodge learned of Lefschetz's paper. He immediately observed that similar principles applied to algebraic surface
https://en.wikipedia.org/wiki/Ramification%20group
In number theory, more specifically in local class field theory, the ramification groups are a filtration of the Galois group of a local field extension, which gives detailed information on the ramification phenomena of the extension.

Ramification theory of valuations

In mathematics, the ramification theory of valuations studies the set of extensions of a valuation v of a field K to an extension L of K. It is a generalization of the ramification theory of Dedekind domains. The structure of the set of extensions is known better when L/K is Galois.

Decomposition group and inertia group

Let (K, v) be a valued field and let L be a finite Galois extension of K. Let Sv be the set of equivalence classes of extensions of v to L and let G be the Galois group of L over K. Then G acts on Sv by σ[w] = [w ∘ σ] (i.e. w is a representative of the equivalence class [w] ∈ Sv and [w] is sent to the equivalence class of the composition of w with the automorphism σ; this is independent of the choice of w in [w]). In fact, this action is transitive. Given a fixed extension w of v to L, the decomposition group of w is the stabilizer subgroup Gw of [w], i.e. it is the subgroup of G consisting of all elements that fix the equivalence class [w] ∈ Sv. Let mw denote the maximal ideal of w inside the valuation ring Rw of w. The inertia group of w is the subgroup Iw of Gw consisting of elements σ such that σx ≡ x (mod mw) for all x in Rw. In other words, Iw consists of the elements of the decomposition group that act trivially on the residue field of w. It is a normal subgroup of Gw. The reduced ramification index e(w/v) is independent of w and is denoted e(v). Similarly, the relative degree f(w/v) is also independent of w and is denoted f(v).

Ramification groups in lower numbering

Ramification groups are a refinement of the Galois group G of a finite Galois extension L/K of local fields. We shall write w for the valuation of L, O_L for its ring of integers, and p_L for the maximal ideal of O_L.
As a consequence of Hensel's lemma, one can write O_L = O_K[α] for some α ∈ O_L, where O_K is the ring of integers of K. (This is stronger than the primitive element theorem.) Then, for each integer i ≥ −1, we define G_i to be the set of all s ∈ G that satisfies the following equivalent conditions.

(i) s operates trivially on O_L / p_L^(i+1)
(ii) w(s(x) − x) ≥ i + 1 for all x in O_L
(iii) w(s(α) − α) ≥ i + 1

The group G_i is called the i-th ramification group. They form a decreasing filtration,

G_(−1) ⊇ G_0 ⊇ G_1 ⊇ ...

In fact, the G_i are normal by (i) and trivial for sufficiently large i by (iii). For the lowest indices, it is customary to call G_0 the inertia subgroup of G because of its relation to splitting of prime ideals, while G_1 is called the wild inertia subgroup of G_0. The quotient G_0 / G_1 is called the tame quotient. The Galois group G and its subgroups G_i are studied by employing the above filtration or, more specifically, the corresponding quotients. In particular:

G / G_0 ≅ Gal(l/k), where l, k are the (finite) residue fields of L, K.
G_0 = 1 if and only if L/K is unramified.
G_1 = 1 if and only if L/K is tamely ramified (i.e., the ramification index is prime to the residue characteristic).

The study of ramification group
https://en.wikipedia.org/wiki/Rejection%20sampling
In numerical analysis and computational statistics, rejection sampling is a basic technique used to generate observations from a distribution. It is also commonly called the acceptance-rejection method or "accept-reject algorithm" and is a type of exact simulation method. The method works for any distribution in R^m with a density. Rejection sampling is based on the observation that to sample a random variable in one dimension, one can perform a uniformly random sampling of the two-dimensional Cartesian graph, and keep the samples in the region under the graph of its density function. Note that this property can be extended to N-dimensional functions.

Description

To visualize the motivation behind rejection sampling, imagine graphing the density function of a random variable onto a large rectangular board and throwing darts at it. Assume that the darts are uniformly distributed around the board. Now remove all of the darts that are outside the area under the curve. The remaining darts will be distributed uniformly within the area under the curve, and the x-positions of these darts will be distributed according to the random variable's density. This is because there is the most room for the darts to land where the curve is highest and thus the probability density is greatest. The visualization as just described is equivalent to a particular form of rejection sampling where the "proposal distribution" is uniform (hence its graph is a rectangle). The general form of rejection sampling assumes that the board is not necessarily rectangular but is shaped according to the density of some proposal distribution that we know how to sample from (for example, using inversion sampling), and which is at least as high at every point as the distribution we want to sample from, so that the former completely encloses the latter. (Otherwise, there would be parts of the curved area we want to sample from that could never be reached.)
Rejection sampling works as follows: Sample a point on the x-axis from the proposal distribution. Draw a vertical line at this x-position, up to the maximum y-value of the probability density function of the proposal distribution. Sample uniformly along this line from 0 to the maximum of the probability density function. If the sampled value is greater than the value of the desired distribution at this vertical line, reject the x-value and return to step 1; else the x-value is a sample from the desired distribution. This algorithm can be used to sample from the area under any curve, regardless of whether the function integrates to 1. In fact, scaling a function by a constant has no effect on the sampled x-positions. Thus, the algorithm can be used to sample from a distribution whose normalizing constant is unknown, which is common in computational statistics. Theory The rejection sampling method generates sampling values from a target distribution with arbitrary probability density function by using a proposal distribution with
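The four steps above can be sketched as follows; the Beta(2, 2) target and the envelope constant M are illustrative choices, not from the source:

```python
import random

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, M):
    """Draw one sample from target_pdf, assuming target_pdf(x) <= M * proposal_pdf(x)."""
    while True:
        x = proposal_sample()                        # step 1: sample from the proposal
        u = random.uniform(0, M * proposal_pdf(x))   # steps 2-3: uniform height under the envelope
        if u <= target_pdf(x):                       # step 4: accept, else retry
            return x

# Target: Beta(2, 2) on [0, 1] with density 6x(1-x), whose maximum is 1.5,
# so a uniform proposal with envelope constant M = 1.5 encloses it.
random.seed(0)
beta_pdf = lambda x: 6 * x * (1 - x)
draws = [rejection_sample(beta_pdf, random.random, lambda x: 1.0, 1.5)
         for _ in range(20000)]
mean = sum(draws) / len(draws)
assert abs(mean - 0.5) < 0.02    # Beta(2, 2) has mean 1/2
```

Note that scaling `target_pdf` by a constant only changes the acceptance rate, not the distribution of accepted x-values, which is why the normalizing constant may be unknown.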
https://en.wikipedia.org/wiki/Poisson%20summation%20formula
In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.

Forms of the equation

Consider an aperiodic function s(x) with Fourier transform S(f), alternatively designated by ŝ(f) and F{s}(f). The basic Poisson summation formula is:

Σ_n s(n) = Σ_k S(k), with n and k running over all integers.

Also consider periodic functions, where parameters P > 0 and T > 0 are in the same units as x:

s_P(x) = Σ_n s(x + nP) and S_(1/T)(f) = Σ_k S(f + k/T).

Then the basic formula is a special case (P = 1, x = 0) of this generalization:

s_P(x) = Σ_k (1/P) S(k/P) e^(i2πkx/P),

which is a Fourier series expansion with coefficients that are samples of the function S. Similarly:

S_(1/T)(f) = Σ_n T s(nT) e^(−i2πnTf),

also known as the important Discrete-time Fourier transform. The Poisson summation formula can also be proved quite conceptually using the compatibility of Pontryagin duality with short exact sequences such as 0 → Z → R → R/Z → 0.

Applicability

The formula holds provided s is a continuous integrable function which satisfies |s(x)| + |S(x)| ≤ C(1 + |x|)^(−1−δ) for some C, δ > 0 and every x. Note that such an s is uniformly continuous; this, together with the decay assumption on s, shows that the series defining s_P converges uniformly to a continuous function. The formula holds in the strong sense that both sides converge uniformly and absolutely to the same limit. It also holds in a pointwise sense under the strictly weaker assumption that s has bounded variation and 2·s(x) = s(x⁺) + s(x⁻) for every x. The Fourier series on the right-hand side is then understood as a (conditionally convergent) limit of symmetric partial sums.
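The basic formula can be checked numerically for a Gaussian, whose Fourier transform is known in closed form (the truncation limits are an assumption):

```python
from math import exp, pi, sqrt

# With the convention S(f) = integral of s(x) e^(-2*pi*i*x*f) dx, the Gaussian
# s(x) = e^(-a x^2) has transform S(f) = sqrt(pi/a) e^(-pi^2 f^2 / a);
# both sides decay fast enough that truncating the sums is harmless.
a = 1.0
lhs = sum(exp(-a * n * n) for n in range(-50, 51))
rhs = sum(sqrt(pi / a) * exp(-pi * pi * k * k / a) for k in range(-50, 51))
assert abs(lhs - rhs) < 1e-12   # sum of s(n) equals sum of S(k)
```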
As shown above, the formula holds under the much less restrictive assumption that s is in L¹(R), but then it is necessary to interpret it in the sense that the right-hand side is the (possibly divergent) Fourier series of s_P. In this case, one may extend the region where equality holds by considering summability methods such as Cesàro summability. When interpreting convergence in this way, the formula holds under the less restrictive conditions that s is integrable and 0 is a point of continuity of s_P. However, it may fail to hold even when both s and S are integrable and continuous, and the sums converge absolutely.

Applications

Method of images

In partial differential equations, the Poisson summation formula provides a rigorous justification for the fundamental solution of the heat equation with absorbing rectangular boundary by the method of images. Here the heat kernel on the whole space is known, and that of a rectangle is determined by taking the periodization. The Poisson summation formula similarly provides a connection between Fourier analysis on Euclidean spaces and on the tori of the corresponding dimensions. In one dimension, the resulting solution is called a theta function. In electrodynamics, th
https://en.wikipedia.org/wiki/Diophantine%20geometry
In mathematics, Diophantine geometry is the study of Diophantine equations by means of powerful methods in algebraic geometry. By the 20th century it became clear for some mathematicians that methods of algebraic geometry are ideal tools to study these equations. Diophantine geometry is part of the broader field of arithmetic geometry. Four theorems in Diophantine geometry that are of fundamental importance include: Mordell–Weil theorem Roth's theorem Siegel's theorem Faltings's theorem Background Serge Lang published a book Diophantine Geometry in the area in 1962, and by this book he coined the term "Diophantine geometry". The traditional arrangement of material on Diophantine equations was by degree and number of variables, as in Mordell's Diophantine Equations (1969). Mordell's book starts with a remark on homogeneous equations f = 0 over the rational field, attributed to C. F. Gauss, that non-zero solutions in integers (even primitive lattice points) exist if non-zero rational solutions do, and notes a caveat of L. E. Dickson, which is about parametric solutions. The Hilbert–Hurwitz result from 1890 reducing the Diophantine geometry of curves of genus 0 to degrees 1 and 2 (conic sections) occurs in Chapter 17, as does Mordell's conjecture. Siegel's theorem on integral points occurs in Chapter 28. Mordell's theorem on the finite generation of the group of rational points on an elliptic curve is in Chapter 16, and integer points on the Mordell curve in Chapter 26. In a hostile review of Lang's book, Mordell wrote: He notes that the content of the book is largely versions of the Mordell–Weil theorem, Thue–Siegel–Roth theorem, Siegel's theorem, with a treatment of Hilbert's irreducibility theorem and applications (in the style of Siegel). 
Leaving aside issues of generality, and a completely different style, the major mathematical difference between the two books is that Lang used abelian varieties and offered a proof of Siegel's theorem, while Mordell noted that the proof "is of a very advanced character" (p. 263). Despite a bad press initially, Lang's conception has been sufficiently widely accepted for a 2006 tribute to call the book "visionary". A larger field sometimes called arithmetic of abelian varieties now includes Diophantine geometry along with class field theory, complex multiplication, local zeta-functions and L-functions. Paul Vojta wrote: While others at the time shared this viewpoint (e.g., Weil, Tate, Serre), it is easy to forget that others did not, as Mordell's review of Diophantine Geometry attests. Approaches A single equation defines a hypersurface, and simultaneous Diophantine equations give rise to a general algebraic variety V over K; the typical question is about the nature of the set V(K) of points on V with co-ordinates in K, and by means of height functions, quantitative questions about the "size" of these solutions may be posed, as well as the qualitative issues of whether any points exist, and if so whe
https://en.wikipedia.org/wiki/Medial%20magma
In abstract algebra, a medial magma or medial groupoid is a magma or groupoid (that is, a set with a binary operation) which satisfies the identity

(x • y) • (u • v) = (x • u) • (y • v), or more simply xy • uv = xu • yv

for all x, y, u and v, using the convention that juxtaposition denotes the same operation but has higher precedence. This identity has been variously called medial, abelian, alternation, transposition, interchange, bi-commutative, bisymmetric, surcommutative, entropic etc.

Any commutative semigroup is a medial magma, and a medial magma has an identity element if and only if it is a commutative monoid. The "only if" direction is the Eckmann–Hilton argument. Another class of semigroups forming medial magmas are normal bands. Medial magmas need not be associative: for any nontrivial abelian group with operation + and integers m ≠ n, the new binary operation defined by x • y = mx + ny yields a medial magma which in general is neither associative nor commutative.

Using the categorical definition of product, for a magma M, one may define the Cartesian square magma M × M with the operation (x, y) • (u, v) = (x • u, y • v). The binary operation • of M, considered as a mapping from M × M to M, maps (x, y) to x • y, (u, v) to u • v, and (x • u, y • v) to (x • u) • (y • v). Hence, a magma M is medial if and only if its binary operation is a magma homomorphism from M × M to M. This can easily be expressed in terms of a commutative diagram, and thus leads to the notion of a medial magma object in a category with a Cartesian product. (See the discussion in auto magma object.)

If f and g are endomorphisms of a medial magma, then the mapping f • g defined by pointwise multiplication, (f • g)(x) = f(x) • g(x), is itself an endomorphism. It follows that the set End(M) of all endomorphisms of a medial magma M is itself a medial magma.

Bruck–Murdoch–Toyoda theorem

The Bruck–Murdoch–Toyoda theorem provides the following characterization of medial quasigroups. Given an abelian group A and two commuting automorphisms φ and ψ of A, define an operation ∗ on A by

x ∗ y = φ(x) + ψ(y) + c,

where c is some fixed element of A. It is not hard to prove that A forms a medial quasigroup under this operation.
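The quasigroup construction just described can be verified exhaustively on a small cyclic group; the modulus and the constants a, b, c below are illustrative choices, not from the source:

```python
from itertools import product

# Z/7Z with the commuting automorphisms x -> 2x and x -> 3x and offset c = 5.
n, a, b, c = 7, 2, 3, 5
op = lambda x, y: (a * x + b * y + c) % n

# medial identity: (x*y)*(u*v) == (x*u)*(y*v) for all elements
assert all(op(op(x, y), op(u, v)) == op(op(x, u), op(y, v))
           for x, y, u, v in product(range(n), repeat=4))
# quasigroup: every row and column of the Cayley table is a permutation
assert all(sorted(op(x, y) for y in range(n)) == list(range(n)) for x in range(n))
assert all(sorted(op(x, y) for x in range(n)) == list(range(n)) for y in range(n))
# yet the operation is neither commutative nor associative
assert op(0, 1) != op(1, 0)
assert op(op(0, 0), 0) != op(0, op(0, 0))
```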
The Bruck–Toyoda theorem states that every medial quasigroup is of this form, i.e. is isomorphic to a quasigroup defined from an abelian group in this way. In particular, every medial quasigroup is isotopic to an abelian group. The result was obtained independently in 1941 by D.C. Murdoch and K. Toyoda. It was then rediscovered by Bruck in 1944.

Generalizations

The term medial or (more commonly) entropic is also used for a generalization to multiple operations. An algebraic structure is an entropic algebra if every two operations satisfy a generalization of the medial identity. Let f and g be operations of arity m and n, respectively. Then f and g are required to satisfy

f(g(x11, ..., x1n), ..., g(xm1, ..., xmn)) = g(f(x11, ..., xm1), ..., f(x1n, ..., xmn)).

Nonassociative examples

A particularly natural example of a nonassociative medial magma is given by collinear points on elliptic curves. The operation x • y for points on the curve, corresponding to drawing a line between x and y and defining x • y as the third intersection point of the line with the elliptic curve, is a (com
https://en.wikipedia.org/wiki/Instituto%20Nacional%20de%20Estat%C3%ADstica%20%28Portugal%29
The Instituto Nacional de Estatística or INE (Portuguese for "National Institute for Statistics") is the government office for national statistics of Portugal. In the English language it is also branded as Statistics Portugal. The INE is one of the components of the Portuguese National Statistical System (SEN), which also includes the Higher Council of Statistics, the Bank of Portugal and the regional statistical services of the autonomous regions of the Azores and Madeira. It was established in 1935, as the successor of the Direcção-Geral de Estatística (Directorate-General for Statistics), which had been created in 1896.

The first population census known to have been carried out in what is today Portugal was done in the year 1 AD by order of the Roman Emperor Caesar Augustus, covering the province of Lusitania. After the foundation of an independent Portugal, many censuses were carried out; one of the earliest significant ones was the Roll of the Crossbowmen, done in the 13th century by order of King Afonso III. The first modern census in Portugal, done in accordance with the scientific methodology established by the First International Statistical Congress, was carried out in 1864. A national census takes place every 10 years, the last one having been carried out in 2011 and the next one due to be carried out in 2021. The INE publishes the REVSTAT - Statistical Journal.

The name "Instituto Nacional de Estatística" and the corresponding acronym "INE" are also used as the designation of the central statistical services of several other countries of the Community of Portuguese Speaking Countries, like those of Angola, Cape Verde, Guinea-Bissau, Mozambique and São Tomé and Príncipe.

INE's headquarters

INE is installed in an Art Deco building, designed specifically to serve as its headquarters by architect Porfírio Pardal Monteiro in 1931. Construction began in 1932, and the building was inaugurated in 1935.
Being a landmark of the early 20th century Portuguese modern architecture, INE's headquarters was classified as a public interest monument in 2013. See also Office for National Statistics Eurostat Instituto Brasileiro de Geografia e Estatística Instituto Nacional de Estatística (Cape Verde) Instituto Nacional de Estatística (São Tomé and Príncipe) External links Instituto Nacional de Estatística Government of Portugal Portugal
https://en.wikipedia.org/wiki/Charles%20Fefferman
Charles Louis Fefferman (born April 18, 1949) is an American mathematician at Princeton University, where he is currently the Herbert E. Jones, Jr. '43 University Professor of Mathematics. He was awarded the Fields Medal in 1978 for his contributions to mathematical analysis. Early life and education Fefferman was born to a Jewish family, in Washington, DC. Fefferman was a child prodigy. Fefferman entered the University of Maryland at age 14, and had written his first scientific paper by the age of 15. He graduated with degrees in math and physics at 17, and earned his PhD in mathematics three years later from Princeton University, under Elias Stein. His doctoral dissertation was titled "Inequalities for strongly singular convolution operators". Fefferman achieved a full professorship at the University of Chicago at the age of 22, making him the youngest full professor ever appointed in the United States. Career At the age of 25, he returned to Princeton as a full professor, becoming the youngest person to be promoted to the title. He won the Alan T. Waterman Award in 1976 (the first person to get the award) and the Fields Medal in 1978 for his work in mathematical analysis, specifically convergence and divergence. He was elected to the National Academy of Sciences in 1979. He was appointed the Herbert Jones Professor at Princeton in 1984. In addition to the above, his honors include the Salem Prize in 1971, the Bergman Prize in 1992, the Bôcher Memorial Prize in 2008, and the Wolf Prize in Mathematics for 2017, as well as election to the American Academy of Arts and Sciences and the American Philosophical Society. For 2021 he was awarded the BBVA Foundation Frontiers of Knowledge Award in Basic Sciences. Fefferman contributed several innovations that revised the study of multidimensional complex analysis by finding fruitful generalisations of classical low-dimensional results. 
Fefferman's work on partial differential equations, Fourier analysis, in particular convergence, multipliers, divergence, singular integrals and Hardy spaces earned him a Fields Medal at the International Congress of Mathematicians at Helsinki in 1978. He was a Plenary Speaker of the ICM in 1974 in Vancouver. His early work included a study of the asymptotics of the Bergman kernel off the boundaries of pseudoconvex domains in C^n. He has studied mathematical physics, harmonic analysis, fluid dynamics, neural networks, geometry, mathematical finance and spectral analysis, amongst others. Family Charles Fefferman and his wife Julie have two daughters, Nina and Lainie. Lainie Fefferman is a composer, taught math at Saint Ann's School and holds a degree in music from Yale University as well as a Ph.D. in music composition from Princeton. She has an interest in Middle Eastern music. Nina Fefferman is a computational biologist at the University of Tennessee whose research is concerned with the application of mathematical models to complex biological systems.
https://en.wikipedia.org/wiki/Remainder
In mathematics, the remainder is the amount "left over" after performing some computation. In arithmetic, the remainder is the integer "left over" after dividing one integer by another to produce an integer quotient (integer division). In algebra of polynomials, the remainder is the polynomial "left over" after dividing one polynomial by another. The modulo operation is the operation that produces such a remainder when given a dividend and divisor. Alternatively, a remainder is also what is left after subtracting one number from another, although this is more precisely called the difference. This usage can be found in some elementary textbooks; colloquially it is replaced by the expression "the rest" as in "Give me two dollars back and keep the rest." However, the term "remainder" is still used in this sense when a function is approximated by a series expansion, where the error expression ("the rest") is referred to as the remainder term. Integer division Given an integer a and a non-zero integer d, it can be shown that there exist unique integers q and r, such that a = qd + r and 0 ≤ r < |d|. The number q is called the quotient, while r is called the remainder. (For a proof of this result, see Euclidean division. For algorithms describing how to calculate the remainder, see division algorithm.) The remainder, as defined above, is called the least positive remainder or simply the remainder. The integer a is either a multiple of d, or lies in the interval between consecutive multiples of d, namely, q⋅d and (q + 1)d (for positive q). On some occasions, it is convenient to carry out the division so that a is as close to an integral multiple of d as possible, that is, we can write a = k⋅d + s, with |s| ≤ |d/2| for some integer k. In this case, s is called the least absolute remainder. As with the quotient and remainder, k and s are uniquely determined, except in the case where d = 2n and s = ± n. For this exception, we have: a = k⋅d + n = (k + 1)d − n.
A unique remainder can be obtained in this case by some convention—such as always taking the positive value of s. Examples In the division of 43 by 5, we have: 43 = 8 × 5 + 3, so 3 is the least positive remainder. We also have that: 43 = 9 × 5 − 2, and −2 is the least absolute remainder. These definitions are also valid if d is negative, for example, in the division of 43 by −5, 43 = (−8) × (−5) + 3, and 3 is the least positive remainder, while, 43 = (−9) × (−5) + (−2) and −2 is the least absolute remainder. In the division of 42 by 5, we have: 42 = 8 × 5 + 2, and since 2 < 5/2, 2 is both the least positive remainder and the least absolute remainder. In these examples, the (negative) least absolute remainder is obtained from the least positive remainder by subtracting 5, which is d. This holds in general. When dividing by d, either both remainders are positive and therefore equal, or they have opposite signs. If the positive remainder is r1, and the negative one is r2, then r1 = r2 + d. For floating-poi
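Both conventions can be computed directly; a small Python sketch (the function names are illustrative), using the convention of keeping the positive value in the tied case s = ±n:

```python
def least_positive_remainder(a, d):
    """Return (q, r) with a = q*d + r and 0 <= r < |d|."""
    r = a % abs(d)          # Python's % with a positive modulus gives 0 <= r < |d|
    q = (a - r) // d
    return q, r

def least_absolute_remainder(a, d):
    """Return (k, s) with a = k*d + s and |s| <= |d|/2,
    keeping s positive in the tied case d = 2n, s = ±n."""
    q, r = least_positive_remainder(a, d)
    if r > abs(d) / 2:      # strict inequality keeps the positive tie value
        r -= abs(d)
        q += 1 if d > 0 else -1
    return q, r
```

For the examples in the text, `least_positive_remainder(43, 5)` gives (8, 3) and `least_absolute_remainder(43, 5)` gives (9, -2); negative divisors such as d = -5 work as well.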
https://en.wikipedia.org/wiki/Radon%20transform
In mathematics, the Radon transform is the integral transform which takes a function f defined on the plane to a function Rf defined on the (two-dimensional) space of lines in the plane, whose value at a particular line is equal to the line integral of the function over that line. The transform was introduced in 1917 by Johann Radon, who also provided a formula for the inverse transform. Radon further included formulas for the transform in three dimensions, in which the integral is taken over planes (integrating over lines is known as the X-ray transform). It was later generalized to higher-dimensional Euclidean spaces and more broadly in the context of integral geometry. The complex analogue of the Radon transform is known as the Penrose transform. The Radon transform is widely applicable to tomography, the creation of an image from the projection data associated with cross-sectional scans of an object. Explanation If a function represents an unknown density, then the Radon transform represents the projection data obtained as the output of a tomographic scan. Hence the inverse of the Radon transform can be used to reconstruct the original density from the projection data, and thus it forms the mathematical underpinning for tomographic reconstruction, also known as iterative reconstruction. The Radon transform data is often called a sinogram because the Radon transform of an off-center point source is a sinusoid. Consequently, the Radon transform of a number of small objects appears graphically as a number of blurred sine waves with different amplitudes and phases. The Radon transform is useful in computed axial tomography (CAT scan), barcode scanners, electron microscopy of macromolecular assemblies like viruses and protein complexes, reflection seismology and in the solution of hyperbolic partial differential equations. 
Definition Let f(x) = f(x, y) be a function that satisfies the three regularity conditions: f(x, y) is continuous; the double integral ∬ |f(x, y)| / √(x² + y²) dx dy, extending over the whole plane, converges; for any arbitrary point (x, y) on the plane, the mean of f over the circle of radius r about that point tends to zero as r → ∞. The Radon transform, Rf, is a function defined on the space of straight lines L by the line integral along each such line as: Rf(L) = ∫_L f(x) |dx|. Concretely, the parametrization of any straight line L with respect to arc length t can always be written: (x(t), y(t)) = ((t sin α + s cos α), (−t cos α + s sin α)), where s is the distance of L from the origin and α is the angle the normal vector to L makes with the x-axis. It follows that the quantities (α, s) can be considered as coordinates on the space of all lines in R², and the Radon transform can be expressed in these coordinates by: Rf(α, s) = ∫ f(x(t), y(t)) dt = ∫ f((t sin α + s cos α), (−t cos α + s sin α)) dt, with the integral taken over all t. More generally, in the n-dimensional Euclidean space R^n, the Radon transform of a function f satisfying the regularity conditions is a function Rf on the space Σ_n of all hyperplanes in R^n. It is defined by: Rf(ξ) = ∫_ξ f(x) dσ(x), for ξ ∈ Σ_n, where the integral is taken with respect to the natural hypersurface measure, dσ (generalizing the |dx| term from the 2-dimensional case). Observe that any element of Σ_n is characterized as the solution locus of an equation x ⋅ α = s, where α is a
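The two-dimensional definition can be evaluated by brute force, sampling each line with the normal-angle parametrization (x(t), y(t)) = (t sin α + s cos α, −t cos α + s sin α). A sketch, not a practical implementation (real systems use optimized projection code):

```python
import math

def radon(f, size, angles, offsets, step=0.01):
    """Naive Radon transform: for each normal angle a and signed offset s,
    integrate f along the line (t*sin(a) + s*cos(a), -t*cos(a) + s*sin(a)),
    t in [-size, size], by a Riemann sum with spacing `step`.
    Assumes f is supported in the disk of radius `size`."""
    n = int(round(2 * size / step))
    sino = []
    for a in angles:
        sa, ca = math.sin(a), math.cos(a)
        row = []
        for s in offsets:
            total = 0.0
            for i in range(n + 1):
                t = -size + i * step
                total += f(t * sa + s * ca, -t * ca + s * sa) * step
            row.append(total)
        sino.append(row)
    return sino
```

For the indicator function of the unit disk, the value at offset s = 0 approximates the chord length 2 and is the same at every angle, illustrating why an off-center point traces a sinusoid in the (α, s) plane.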
https://en.wikipedia.org/wiki/Minkowski%20addition
In geometry, the Minkowski sum of two sets of position vectors A and B in Euclidean space is formed by adding each vector in A to each vector in B: A + B = {a + b : a ∈ A, b ∈ B}. The Minkowski difference (also Minkowski subtraction, Minkowski decomposition, or geometric difference) is the corresponding inverse, where A − B produces a set that could be summed with B to recover A. This is defined as the complement of the Minkowski sum of the complement of A with the reflection of B about the origin: A − B = (Aᶜ + (−B))ᶜ. This definition allows a symmetrical relationship between the Minkowski sum and difference. Note that alternately taking the sum and difference with B is not necessarily equivalent. The sum can fill gaps which the difference may not re-open, and the difference can erase small islands which the sum cannot recreate from nothing. In 2D image processing the Minkowski sum and difference are known as dilation and erosion. An alternative definition of the Minkowski difference is sometimes used for computing intersection of convex shapes. This is not equivalent to the previous definition, and is not an inverse of the sum operation. Instead it replaces the vector addition of the Minkowski sum with a vector subtraction. If the two convex shapes intersect, the resulting set will contain the origin. The concept is named for Hermann Minkowski. Example For example, if we have two sets A and B, each consisting of three position vectors (informally, three points), representing the vertices of two triangles in R², with coordinates A = {(1, 0), (0, 1), (0, −1)} and B = {(0, 0), (1, 1), (1, −1)}, then their Minkowski sum is A + B = {(1, 0), (2, 1), (2, −1), (0, 1), (1, 2), (0, −1), (1, −2)}, which comprises the vertices of a hexagon.
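The sum of finite point sets is easy to compute directly. A minimal Python sketch, using the triangle coordinates of the standard hexagon example (an assumption here, since the exact coordinates were lost from the text above):

```python
def minkowski_sum(A, B):
    """Minkowski sum of two finite point sets: {a + b : a in A, b in B}."""
    return {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}

# Two triangles whose Minkowski sum outlines a hexagon.
A = {(1, 0), (0, 1), (0, -1)}
B = {(0, 0), (1, 1), (1, -1)}
S = minkowski_sum(A, B)
```

The nine pairwise sums collapse to seven distinct points: the six hexagon vertices plus (1, 0), which lies in the hexagon's interior.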
For Minkowski addition, the zero set, {0}, containing only the zero vector, 0, is an identity element: for every subset S of a vector space, S + {0} = S. The empty set is important in Minkowski addition, because the empty set annihilates every other subset: for every subset S of a vector space, its sum with the empty set is empty: S + ∅ = ∅. For another example, consider the Minkowski sums of open or closed balls in the field K, which is either the real numbers R or complex numbers C. If B_r := {s ∈ K : |s| ≤ r} is the closed ball of radius r centered at 0 in K, then for any r, s ∈ [0, ∞], B_r + B_s = B_{r+s}, and also c B_r = B_{|c| r} will hold for any scalar c such that the product |c| r is defined (which happens when c ≠ 0 or r ≠ ∞). If r, s, and c are all non-zero then the same equalities would still hold had B_r been defined to be the open ball, rather than the closed ball, centered at 0 (the non-zero assumption is needed because the open ball of radius 0 is the empty set). The Minkowski sum of a closed ball and an open ball is an open ball. More generally, the Minkowski sum of an open subset with any other set will be an open subset. If G is the graph of f(x) = 1/x and if Y is the y-axis in R², then the Minkowski sum of these two closed subsets of the plane is the open set G + Y = {(x, y) : x ≠ 0} consisting of everything other than the y-axis. This shows that the Minkowski sum of two closed sets is not necessarily a closed set. However, the Minkowski sum of two closed subsets will be a closed subset if at least one of these sets is also a compact subs
https://en.wikipedia.org/wiki/Hausdorff%20measure
In mathematics, Hausdorff measure is a generalization of the traditional notions of area and volume to non-integer dimensions, specifically fractals and their Hausdorff dimensions. It is a type of outer measure, named for Felix Hausdorff, that assigns a number in [0,∞] to each set in R^n or, more generally, in any metric space. The zero-dimensional Hausdorff measure is the number of points in the set (if the set is finite) or ∞ if the set is infinite. Likewise, the one-dimensional Hausdorff measure of a simple curve in R^n is equal to the length of the curve, and the two-dimensional Hausdorff measure of a Lebesgue-measurable subset of R² is proportional to the area of the set. Thus, the concept of the Hausdorff measure generalizes the Lebesgue measure and its notions of counting, length, and area. It also generalizes volume. In fact, there are d-dimensional Hausdorff measures for any d ≥ 0, which is not necessarily an integer. These measures are fundamental in geometric measure theory. They appear naturally in harmonic analysis or potential theory. Definition Let (X, ρ) be a metric space. For any subset U ⊆ X, let diam U denote its diameter, that is diam U := sup{ρ(x, y) : x, y ∈ U}, with diam ∅ := 0. Let E be any subset of X and δ > 0 a real number. Define H^d_δ(E) := inf{ Σ_i (diam U_i)^d : E ⊆ ∪_i U_i, diam U_i < δ }, where the infimum is over all countable covers of E by sets U_i ⊆ X satisfying diam U_i < δ. Note that H^d_δ(E) is monotone nonincreasing in δ, since the larger δ is, the more collections of sets are permitted, making the infimum not larger. Thus, the limit H^d(E) := lim_{δ→0} H^d_δ(E) exists but may be infinite. It can be seen that H^d is an outer measure (more precisely, it is a metric outer measure). By Carathéodory's extension theorem, its restriction to the σ-field of Carathéodory-measurable sets is a measure. It is called the d-dimensional Hausdorff measure of E. Due to the metric outer measure property, all Borel subsets of X are measurable. In the above definition the sets in the covering are arbitrary. However, we can require the covering sets to be open or closed, or in normed spaces even convex, that will yield the same numbers, hence the same measure.
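The covering sums in the definition can be illustrated numerically with the middle-thirds Cantor set (an example assumed here, not taken from the text): the stage-k construction covers the set by 2^k intervals of diameter 3^(-k), so the d-dimensional sum over that particular cover is (2 · 3^(-d))^k.

```python
import math

def cantor_premeasure(d, level):
    """Sum of (diam U_i)**d over the stage-`level` cover of the
    middle-thirds Cantor set: 2**level intervals of diameter 3**(-level)."""
    return (2 ** level) * (3.0 ** (-level)) ** d

s = math.log(2) / math.log(3)  # the critical exponent, about 0.6309
```

At d = s the sum equals 1 at every stage, while for d > s it tends to 0 and for d < s it blows up, previewing the dimension behaviour described in the "Relation with Hausdorff dimension" passage below.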
Restricting the covering sets to be balls may change the measures but does not change the dimension of the measured sets. Properties of Hausdorff measures Note that if d is a positive integer, the d-dimensional Hausdorff measure of R^d is a rescaling of the usual d-dimensional Lebesgue measure λ_d, which is normalized so that the Lebesgue measure of the unit cube [0,1]^d is 1. In fact, for any Borel set E, λ_d(E) = 2^(−d) α_d H^d(E), where α_d is the volume of the unit d-ball; it can be expressed using Euler's gamma function as α_d = π^(d/2) / Γ(d/2 + 1). This is λ_d(E) = β_d H^d(E), where β_d = 2^(−d) α_d is the volume of the unit diameter d-ball. Remark. Some authors adopt a definition of Hausdorff measure slightly different from the one chosen here, the difference being that the value defined above is multiplied by the factor β_d, so that Hausdorff d-dimensional measure coincides exactly with Lebesgue measure in the case of Euclidean space. Relation with Hausdorff dimension It turns out that H^d(E) may have a finite, nonzero value for at most one d. That is, the Hausdor
https://en.wikipedia.org/wiki/Concyclic%20points
In geometry, a set of points are said to be concyclic (or cocyclic) if they lie on a common circle. A polygon whose vertices are concyclic is called a cyclic polygon, and the circle is called its circumscribing circle or circumcircle. All concyclic points are equidistant from the center of the circle. Three points in the plane that do not all fall on a straight line are concyclic, so every triangle is a cyclic polygon, with a well-defined circumcircle. However, four or more points in the plane are not necessarily concyclic. After triangles, the special case of cyclic quadrilaterals has been most extensively studied. Perpendicular bisectors In general the centre O of a circle on which points P and Q lie must be such that the distances OP and OQ are equal. Therefore O must lie on the perpendicular bisector of the line segment PQ. For n distinct points there are n(n − 1)/2 bisectors, and the concyclic condition is that they all meet in a single point, the centre O. Triangles The vertices of every triangle fall on a circle called the circumcircle. (Because of this, some authors define "concyclic" only in the context of four or more points on a circle.) Several other sets of points defined from a triangle are also concyclic, with different circles; see Nine-point circle and Lester's theorem. The radius of the circle on which lie a set of points is, by definition, the radius of the circumcircle of any triangle with vertices at any three of those points. If the pairwise distances among three of the points are a, b, and c, then the circle's radius is r = abc / √((a + b + c)(−a + b + c)(a − b + c)(a + b − c)). The equation of the circumcircle of a triangle, and expressions for the radius and the coordinates of the circle's center, in terms of the Cartesian coordinates of the vertices are given here and here.
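The circumradius formula r = abc / √((a+b+c)(−a+b+c)(a−b+c)(a+b−c)) is straightforward to evaluate; a short Python sketch:

```python
import math

def circumradius(p, q, r):
    """Radius of the circle through three non-collinear points, via
    R = abc / sqrt((a+b+c)(-a+b+c)(a-b+c)(a+b-c)),
    where a, b, c are the pairwise distances between the points."""
    dist = lambda u, v: math.hypot(u[0] - v[0], u[1] - v[1])
    a, b, c = dist(q, r), dist(p, r), dist(p, q)
    return a * b * c / math.sqrt(
        (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c))
```

For the 3-4-5 right triangle the result is 2.5, half the hypotenuse, as the inscribed angle theorem predicts for a right angle.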
Other concyclic points In any triangle all of the following nine points are concyclic on what is called the nine-point circle: the midpoints of the three edges, the feet of the three altitudes, and the points halfway between the orthocenter and each of the three vertices. Lester's theorem states that in any scalene triangle, the two Fermat points, the nine-point center, and the circumcenter are concyclic. If lines are drawn through the Lemoine point parallel to the sides of a triangle, then the six points of intersection of the lines and the sides of the triangle are concyclic, in what is called the Lemoine circle. The van Lamoen circle associated with any given triangle contains the circumcenters of the six triangles that are defined inside by its three medians. A triangle's circumcenter, its Lemoine point, and its first two Brocard points are concyclic, with the segment from the circumcenter to the Lemoine point being a diameter. Cyclic quadrilaterals A quadrilateral ABCD with concyclic vertices is called a cyclic quadrilateral; this happens if and only if (the inscribed angle theorem) which is true if and only if the opposite angles inside the quadrilateral are supplementary. A cyclic quad
https://en.wikipedia.org/wiki/Conditional%20probability%20distribution
In probability theory and statistics, given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X is the probability distribution of Y when X is known to be a particular value; in some cases the conditional probabilities may be expressed as functions containing the unspecified value x of X as a parameter. When both X and Y are categorical variables, a conditional probability table is typically used to represent the conditional probability. The conditional distribution contrasts with the marginal distribution of a random variable, which is its distribution without reference to the value of the other variable. If the conditional distribution of Y given X is a continuous distribution, then its probability density function is known as the conditional density function. The properties of a conditional distribution, such as the moments, are often referred to by corresponding names such as the conditional mean and conditional variance. More generally, one can refer to the conditional distribution of a subset of a set of more than two variables; this conditional distribution is contingent on the values of all the remaining variables, and if more than one variable is included in the subset then this conditional distribution is the conditional joint distribution of the included variables. Conditional discrete distributions For discrete random variables, the conditional probability mass function of Y given X = x can be written according to its definition as: p(y | x) = P(Y = y | X = x) = P(X = x, Y = y) / P(X = x). Due to the occurrence of P(X = x) in the denominator, this is defined only for non-zero (hence strictly positive) P(X = x). The relation with the probability distribution of X given Y is: P(Y = y | X = x) P(X = x) = P(X = x, Y = y) = P(X = x | Y = y) P(Y = y). Example Consider the roll of a fair die and let X = 1 if the number is even (i.e., 2, 4, or 6) and X = 0 otherwise. Furthermore, let Y = 1 if the number is prime (i.e., 2, 3, or 5) and Y = 0 otherwise.
Then the unconditional probability that X = 1 is 3/6 = 1/2 (since there are six possible rolls of the die, of which three are even), whereas the probability that X = 1 conditional on Y = 1 is 1/3 (since there are three possible prime number rolls—2, 3, and 5—of which one is even). Conditional continuous distributions Similarly for continuous random variables, the conditional probability density function of Y given the occurrence of the value x of X can be written as f(y | x) = f(x, y) / f(x), where f(x, y) gives the joint density of X and Y, while f(x) gives the marginal density for X. Also in this case it is necessary that f(x) > 0. The relation with the probability distribution of X given Y is given by: f(y | x) f(x) = f(x, y) = f(x | y) f(y). The concept of the conditional distribution of a continuous random variable is not as intuitive as it might seem: Borel's paradox shows that conditional probability density functions need not be invariant under coordinate transformations. Example The graph shows a bivariate normal joint density for random variables X and Y. To see the distribution of Y conditional on X = x, one can first visualize the line X = x in the X, Y plane, and then visualize the plane containing that line and perpendicular to
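The discrete die example above can be verified by direct enumeration of the six equally likely rolls; a short Python sketch:

```python
from fractions import Fraction

rolls = range(1, 7)                          # a fair six-sided die
X = {n: int(n % 2 == 0) for n in rolls}      # X = 1 if the roll is even
Y = {n: int(n in (2, 3, 5)) for n in rolls}  # Y = 1 if the roll is prime

# Unconditional P(X = 1): count even rolls out of six.
p_X1 = Fraction(sum(X[n] for n in rolls), 6)

# Conditional P(X = 1 | Y = 1) = P(X = 1, Y = 1) / P(Y = 1):
# among the prime rolls, count those that are also even.
p_X1_given_Y1 = Fraction(sum(X[n] * Y[n] for n in rolls),
                         sum(Y[n] for n in rolls))
```

This reproduces P(X = 1) = 1/2 and P(X = 1 | Y = 1) = 1/3, since 2 is the only even prime roll.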
https://en.wikipedia.org/wiki/Point%20in%20polygon
In computational geometry, the point-in-polygon (PIP) problem asks whether a given point in the plane lies inside, outside, or on the boundary of a polygon. It is a special case of point location problems and finds applications in areas that deal with processing geometrical data, such as computer graphics, computer vision, geographic information systems (GIS), motion planning, and computer-aided design (CAD). An early description of the problem in computer graphics shows two common approaches (ray casting and angle summation) in use as early as 1974. An attempt of computer graphics veterans to trace the history of the problem and some tricks for its solution can be found in an issue of the Ray Tracing News. Ray casting algorithm One simple way of finding whether the point is inside or outside a simple polygon is to test how many times a ray, starting from the point and going in any fixed direction, intersects the edges of the polygon. If the point is on the outside of the polygon the ray will intersect its edge an even number of times. If the point is on the inside of the polygon then it will intersect the edge an odd number of times. The status of a point on the edge of the polygon depends on the details of the ray intersection algorithm. This algorithm is sometimes also known as the crossing number algorithm or the even–odd rule algorithm, and was known as early as 1962. The algorithm is based on a simple observation that if a point moves along a ray from infinity to the probe point and if it crosses the boundary of a polygon, possibly several times, then it alternately goes from the outside to inside, then from the inside to the outside, etc. As a result, after every two "border crossings" the moving point goes outside. This observation may be mathematically proved using the Jordan curve theorem. 
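The crossing-count test is short to implement; a minimal Python sketch of the even–odd rule (boundary points and the vertex cases are not handled specially here):

```python
def point_in_polygon(x, y, poly):
    """Even-odd rule: cast a ray from (x, y) toward +x and count how many
    polygon edges it crosses; an odd count means the point is inside.
    `poly` is a list of (x, y) vertices in order."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Half-open test: the edge straddles the ray's horizontal line.
        # Using strict > on both ends avoids double-counting a shared vertex.
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal line y.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside
```

The half-open comparison `(y1 > y) != (y2 > y)` is one common way to sidestep the vertex double-counting problem discussed under "Limited precision" below.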
Limited precision If implemented on a computer with finite-precision arithmetic, the results may be incorrect if the point lies very close to that boundary, because of rounding errors. For some applications, like video games or other entertainment products, this is not a large concern since they often favor speed over precision. However, for a formally correct computer program, one would have to introduce a numerical tolerance ε and test in line whether P (the point) lies within ε of L (the line), in which case the algorithm should stop and report "P lies very close to the boundary." Most implementations of the ray casting algorithm consecutively check intersections of a ray with all sides of the polygon in turn. In this case the following problem must be addressed. If the ray passes exactly through a vertex of a polygon, then it will intersect 2 segments at their endpoints. While this is acceptable for the case of the topmost vertex in the example or the vertex between crossing 4 and 5, the case of the rightmost vertex (in the example) requires that we count one intersection for the algorithm to work correctly. A similar problem arises with horizonta
https://en.wikipedia.org/wiki/Point%20location
The point location problem is a fundamental topic of computational geometry. It finds applications in areas that deal with processing geometrical data: computer graphics, geographic information systems (GIS), motion planning, and computer aided design (CAD). In its most general form, the problem is, given a partition of the space into disjoint regions, to determine the region where a query point lies. For example, the problem of determining which window of a graphical user interface contains a given mouse click can be formulated as an instance of point location, with a subdivision formed by the visible parts of each window, although specialized data structures may be more appropriate than general-purpose point location data structures in this application. Another special case is the point in polygon problem, in which one needs to determine whether a point is inside, outside, or on the boundary of a single polygon. In many applications, one needs to determine the location of several different points with respect to the same partition of the space. To solve this problem efficiently, it is useful to build a data structure that, given a query point, quickly determines which region contains the query point (e.g. Voronoi Diagram). Planar case In the planar case, we are given a planar subdivision S, formed by multiple polygons called faces, and need to determine which face contains a query point. A brute force search of each face using the point-in-polygon algorithm is possible, but usually not feasible for subdivisions of high complexity. Several different approaches lead to optimal data structures, with O(n) storage space and O(log n) query time, where n is the total number of vertices in S. For simplicity, we assume that the planar subdivision is contained inside a square bounding box. Slab decomposition The simplest and earliest data structure to achieve O(log n) time was discovered by Dobkin and Lipton in 1976. 
It is based on subdividing S using vertical lines that pass through each vertex in S. The region between two consecutive vertical lines is called a slab. Notice that each slab is divided by non-intersecting line segments that completely cross the slab from left to right. The region between two consecutive segments inside a slab corresponds to a unique face of S. Therefore, we reduce our point location problem to two simpler problems: Given a subdivision of the plane into vertical slabs, determine which slab contains a given point. Given a slab subdivided into regions by non-intersecting segments that completely cross the slab from left to right, determine which region contains a given point. The first problem can be solved by binary search on the x coordinate of the vertical lines in O(log n) time. The second problem can also be solved in O(log n) time by binary search. To see how, notice that, as the segments do not intersect and completely cross the slab, the segments can be sorted vertically inside each slab. While this algorith
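Both lookups reduce to binary search; a short Python sketch (the data layout is assumed here for illustration):

```python
from bisect import bisect_right

def find_slab(x, boundaries):
    """First binary search: `boundaries` holds the sorted x-coordinates of
    the vertical lines through the vertices.  Slab 0 lies to the left of
    the first line; slab len(boundaries) lies to the right of the last."""
    return bisect_right(boundaries, x)

def find_region(x, y, segments):
    """Second binary search, inside one slab: `segments` are non-crossing
    segments ((x1, y1), (x2, y2)) spanning the slab, sorted bottom to top.
    Returns how many segments lie below (x, y), identifying the region."""
    lo, hi = 0, len(segments)
    while lo < hi:
        mid = (lo + hi) // 2
        (x1, y1), (x2, y2) = segments[mid]
        y_at = y1 + (x - x1) * (y2 - y1) / (x2 - x1)  # segment height at x
        if y_at < y:
            lo = mid + 1
        else:
            hi = mid
    return lo
```

Each query therefore costs two O(log n) searches, although, as the text goes on to note, storing every slab explicitly is what drives the storage cost of this structure up.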
https://en.wikipedia.org/wiki/Algebraic%20torus
In mathematics, an algebraic torus, where a one dimensional torus is typically denoted by G_m or T, is a type of commutative affine algebraic group commonly found in projective algebraic geometry and toric geometry. Higher dimensional algebraic tori can be modelled as a product of copies of the algebraic group G_m. These groups were named by analogy with the theory of tori in Lie group theory (see Cartan subgroup). For example, over the complex numbers C the algebraic torus G_m is isomorphic to the group scheme C* = Spec(C[t, t⁻¹]), which is the scheme theoretic analogue of the Lie group U(1). In fact, any G_m-action on a complex vector space can be pulled back to a U(1)-action from the inclusion U(1) ⊂ C* as real manifolds. Tori are of fundamental importance in the theory of algebraic groups and Lie groups and in the study of the geometric objects associated to them such as symmetric spaces and buildings. Algebraic tori over fields In most places we suppose that the base field is perfect (for example finite or characteristic zero). This hypothesis is required to have a smooth group scheme, since for an algebraic group G to be smooth over characteristic p > 0, the maps x ↦ x^(p^n) must be geometrically reduced for large enough n, meaning the image of the corresponding map on G is smooth for large enough n. In general one has to use separable closures instead of algebraic closures. Multiplicative group of a field If k is a field then the multiplicative group over k is the algebraic group G_m such that for any field extension E/k the E-points are isomorphic to the group E^×. To define it properly as an algebraic group one can take the affine variety defined by the equation xy = 1 in the affine plane over k with coordinates x, y. The multiplication is then given by restricting the regular rational map defined by ((x, y), (x′, y′)) ↦ (xx′, yy′) and the inverse is the restriction of the regular rational map (x, y) ↦ (y, x). Definition Let k be a field with algebraic closure k̄. Then a k-torus is an algebraic group defined over k which is isomorphic over k̄ to a finite product of copies of the multiplicative group.
In other words, if T is a k-group, it is a torus if and only if T, base-changed to k̄, is isomorphic to (G_m)ⁿ for some n. The basic terminology associated to tori is as follows. The integer n is called the rank or absolute rank of the torus T. The torus is said to be split over a field extension K/k if T, base-changed to K, is isomorphic to (G_m)ⁿ over K. There is a unique minimal finite extension of k over which T is split, which is called the splitting field of T. The k-rank of T is the maximal rank of a split sub-torus of T. A torus is split if and only if its k-rank equals its absolute rank. A torus is said to be anisotropic if its k-rank is zero. Isogenies An isogeny between algebraic groups is a surjective morphism with finite kernel; two tori are said to be isogenous if there exists an isogeny from the first to the second. Isogenies between tori are particularly well-behaved: for any isogeny f : T → T′ there exists a "dual" isogeny g : T′ → T such that g ∘ f is a power map x ↦ xⁿ. In particular being isogenous is an equivalence relation between tori. Examples Over an algebraically closed field Over any alg
https://en.wikipedia.org/wiki/Perfect%20field
In algebra, a field k is perfect if any one of the following equivalent conditions holds: Every irreducible polynomial over k has distinct roots. Every irreducible polynomial over k is separable. Every finite extension of k is separable. Every algebraic extension of k is separable. Either k has characteristic 0, or, when k has characteristic p > 0, every element of k is a pth power. Either k has characteristic 0, or, when k has characteristic p > 0, the Frobenius endomorphism x ↦ xᵖ is an automorphism of k. The separable closure of k is algebraically closed. Every reduced commutative k-algebra A is a separable algebra; i.e., A ⊗ F is reduced for every field extension F/k. (see below) Otherwise, k is called imperfect. In particular, all fields of characteristic zero and all finite fields are perfect. Perfect fields are significant because Galois theory over these fields becomes simpler, since the general Galois assumption of field extensions being separable is automatically satisfied over these fields (see third condition above). Another important property of perfect fields is that they admit Witt vectors. More generally, a ring of characteristic p (p a prime) is called perfect if the Frobenius endomorphism is an automorphism. (When restricted to integral domains, this is equivalent to the above condition "every element of k is a pth power".) Examples Examples of perfect fields are: every field of characteristic zero, so the rational numbers Q and every finite extension of Q, and the real and complex numbers R and C; every finite field F_q; every algebraically closed field; the union of a set of perfect fields totally ordered by extension; fields algebraic over a perfect field. Most fields that are encountered in practice are perfect. The imperfect case arises mainly in algebraic geometry in characteristic p > 0. Every imperfect field is necessarily transcendental over its prime subfield (the minimal subfield), because the latter is perfect. An example of an imperfect field is the field of rational functions F_p(x), since the Frobenius sends x ↦ xᵖ and therefore it is not surjective.
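The Frobenius condition can be checked concretely on a small finite field. A Python sketch using GF(9), represented here (an assumed, illustrative representation) as F_3[x]/(x² + 1), with x² + 1 irreducible over F_3:

```python
P = 3  # the characteristic

def mul(u, v):
    """Multiply a + b*x and c + d*x in GF(9) = F_3[x]/(x^2 + 1),
    reducing x^2 to -1."""
    (a, b), (c, d) = u, v
    return ((a * c - b * d) % P, (a * d + b * c) % P)

def frobenius(z):
    """The Frobenius endomorphism z -> z**p with p = 3."""
    r = (1, 0)
    for _ in range(P):
        r = mul(r, z)
    return r

field = [(a, b) for a in range(P) for b in range(P)]
```

Mapping `frobenius` over all nine elements permutes them (it sends x to −x), which is the bijectivity that makes every finite field perfect; over F_3(x) the same map fails to be surjective, as the text notes.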
It embeds into the perfect field F_p(x, x^(1/p), x^(1/p²), ...), called its perfection. Imperfect fields cause technical difficulties because irreducible polynomials can become reducible in the algebraic closure of the base field. For example, consider f(X) = Xᵖ − a for an imperfect field k of characteristic p and a not a p-th power in k. Then in its algebraic closure k̄, the following equality holds: f(X) = (X − b)ᵖ, where bᵖ = a and such b exists in this algebraic closure. Geometrically, this means that f does not define a reduced affine plane curve over k̄. Field extension over a perfect field Any finitely generated field extension K over a perfect field k is separably generated, i.e. admits a separating transcendence base, that is, a transcendence base Γ such that K is separably algebraic over k(Γ). Perfect closure and perfection One of the equivalent conditions says that, in characteristic p, a field adjoined with all pʳ-th roots (r ≥ 1) is perfect; it is called the perfect closure of k and usually denoted by k^(p^(−∞)). The perfect closure can be used
https://en.wikipedia.org/wiki/Regular%20sequence
In commutative algebra, a regular sequence is a sequence of elements of a commutative ring which are as independent as possible, in a precise sense. This is the algebraic analogue of the geometric notion of a complete intersection. Definitions For a commutative ring R and an R-module M, an element r in R is called a non-zero-divisor on M if r m = 0 implies m = 0 for m in M. An M-regular sequence is a sequence r1, ..., rd in R such that ri is not a zero-divisor on M/(r1, ..., ri-1)M for i = 1, ..., d. Some authors also require that M/(r1, ..., rd)M is not zero. Intuitively, to say that r1, ..., rd is an M-regular sequence means that these elements "cut M down" as much as possible, when we pass successively from M to M/(r1)M, to M/(r1, r2)M, and so on. An R-regular sequence is called simply a regular sequence. That is, r1, ..., rd is a regular sequence if r1 is a non-zero-divisor in R, r2 is a non-zero-divisor in the ring R/(r1), and so on. In geometric language, if X is an affine scheme and r1, ..., rd is a regular sequence in the ring of regular functions on X, then we say that the closed subscheme {r1=0, ..., rd=0} ⊂ X is a complete intersection subscheme of X. Being a regular sequence may depend on the order of the elements. For example, x, y(1-x), z(1-x) is a regular sequence in the polynomial ring C[x, y, z], while y(1-x), z(1-x), x is not a regular sequence. But if R is a Noetherian local ring and the elements ri are in the maximal ideal, or if R is a graded ring and the ri are homogeneous of positive degree, then any permutation of a regular sequence is a regular sequence. Let R be a Noetherian ring, I an ideal in R, and M a finitely generated R-module. The depth of I on M, written depthR(I, M) or just depth(I, M), is the supremum of the lengths of all M-regular sequences of elements of I.
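The failure of y(1-x), z(1-x), x to be regular can be checked with computer algebra: z(1-x) is a zero-divisor modulo the ideal (y(1-x)), since y · z(1-x) = z · y(1-x) lies in the ideal while y itself does not. A sketch using sympy's multivariate division (assuming sympy is available):

```python
from sympy import symbols, expand, reduced

x, y, z = symbols('x y z')

g = expand(y * (1 - x))  # the first element of the candidate sequence

# y * z(1-x) reduces to 0 modulo (y(1-x)), so z(1-x) kills the
# nonzero class of y in C[x, y, z]/(y(1-x)):
_, r_product = reduced(expand(y * z * (1 - x)), [g], x, y, z)

# ... but y itself is not in the ideal (nonzero remainder):
_, r_y = reduced(y, [g], x, y, z)
```

Here `reduced` returns quotients and a remainder; `r_product == 0` and `r_y == y` together witness the zero-divisor, so the permuted sequence is not regular, in contrast with x, y(1-x), z(1-x).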
When R is a Noetherian local ring and M is a finitely generated R-module, the depth of M, written depthR(M) or just depth(M), means depthR(m, M); that is, it is the supremum of the lengths of all M-regular sequences in the maximal ideal m of R. In particular, the depth of a Noetherian local ring R means the depth of R as an R-module. That is, the depth of R is the maximum length of a regular sequence in the maximal ideal. For a Noetherian local ring R, the depth of the zero module is ∞, whereas the depth of a nonzero finitely generated R-module M is at most the Krull dimension of M (also called the dimension of the support of M). Examples Given an integral domain, any nonzero non-unit element x gives a regular sequence x of length 1, since x is a non-zero-divisor. For a prime number p, the local ring Z(p) is the subring of the rational numbers consisting of fractions whose denominator is not a multiple of p. The element p is a non-zero-divisor in Z(p), and the quotient ring of Z(p) by the ideal generated by p is the field Z/(p). Therefore p cannot be extended to a longer regular sequence in the maximal ideal (p), and in fact the local ring Z(p) has depth 1. For any field k, the elements x1, ..., xn in
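The order-dependence example from the definition can be checked mechanically. A sketch using sympy's Gröbner basis tools (an assumed tool, not part of the article): in C[x, y, z], after quotienting by the first element y(1−x), the element z(1−x) becomes a zero-divisor, which is witnessed by y multiplying it into the ideal while y itself stays outside.

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')

# Modulo the ideal generated by y*(1 - x), the element z*(1 - x) is a
# zero-divisor: y * (z*(1 - x)) = z * (y*(1 - x)) lies in the ideal,
# yet y itself does not. So y*(1-x), z*(1-x), x is NOT a regular sequence.
G = groebner([y * (1 - x)], x, y, z, order='lex')
print(G.contains(y * z * (1 - x)))  # True: the product is in the ideal
print(G.contains(y))                # False: y itself is not
```

With the other ordering x, y(1−x), z(1−x), each step quotients by a non-zero-divisor, which is why the permutation statement above needs the local or graded hypothesis.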
https://en.wikipedia.org/wiki/Iterated%20function%20system
In mathematics, iterated function systems (IFSs) are a method of constructing fractals; the resulting fractals are often self-similar. IFS fractals are more related to set theory than fractal geometry. They were introduced in 1981. IFS fractals, as they are normally called, can be of any number of dimensions, but are commonly computed and drawn in 2D. The fractal is made up of the union of several copies of itself, each copy being transformed by a function (hence "function system"). The canonical example is the Sierpiński triangle. The functions are normally contractive, which means they bring points closer together and make shapes smaller. Hence, the shape of an IFS fractal is made up of several possibly-overlapping smaller copies of itself, each of which is also made up of copies of itself, ad infinitum. This is the source of its self-similar fractal nature. Definition Formally, an iterated function system is a finite set of contraction mappings on a complete metric space. Symbolically, {fi : X → X | i = 1, 2, ..., N}, with N a positive integer, is an iterated function system if each fi is a contraction on the complete metric space X. Properties Hutchinson showed that, for the metric space R^n, or more generally, for a complete metric space X, such a system of functions has a unique nonempty compact (closed and bounded) fixed set S. One way of constructing a fixed set is to start with an initial nonempty closed and bounded set S0 and iterate the actions of the fi, taking Sn+1 to be the union of the images of Sn under the fi; then taking S to be the closure of the limit of the sets Sn. Symbolically, the unique fixed (nonempty compact) set S has the property S = f1(S) ∪ f2(S) ∪ ... ∪ fN(S). The set S is thus the fixed set of the Hutchinson operator H, defined for subsets A of X via H(A) = f1(A) ∪ f2(A) ∪ ... ∪ fN(A). The existence and uniqueness of S is a consequence of the contraction mapping principle, as is the fact that H^n(A) converges to S for any nonempty compact set A in X. (For contractive IFS this convergence takes place even for any nonempty closed bounded set A). 
Random elements arbitrarily close to S may be obtained by the "chaos game," described below. Recently it was shown that IFSs of non-contractive type (i.e. composed of maps that are not contractions with respect to any topologically equivalent metric in X) can yield attractors. These arise naturally in projective spaces, though classical irrational rotation on the circle can be adapted too. The collection of functions generates a monoid under composition. If there are only two such functions, the monoid can be visualized as a binary tree, where, at each node of the tree, one may compose with one or the other function (i.e. take the left or the right branch). In general, if there are k functions, then one may visualize the monoid as a full k-ary tree, also known as a Cayley tree. Constructions Sometimes each function is required to be a linear, or more generally an affine, transformation, and hence represented by a matrix. However, IFSs may also be built from non-linear functions, including projective transformations and Möbius transformations.
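The "chaos game" mentioned above is easy to sketch in code. A minimal version for the Sierpiński triangle IFS (the vertex coordinates and point counts here are illustrative choices): repeatedly move the current point halfway toward a randomly chosen vertex, and the visited points rapidly accumulate near the attractor S.

```python
import random

# Three contractions f_i(p) = (p + v_i) / 2, one per triangle vertex.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

point = (0.2, 0.3)          # arbitrary starting point
orbit = []
for _ in range(10_000):
    vx, vy = random.choice(vertices)
    point = ((point[0] + vx) / 2.0, (point[1] + vy) / 2.0)
    orbit.append(point)
# After a short transient, the orbit lies arbitrarily close to the
# attractor S (plotting `orbit` would render the Sierpiński triangle).
```

Because each map halves distances, an initial error of d shrinks to d/2^n after n steps, which is why even a point far from S is pulled onto it almost immediately.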
https://en.wikipedia.org/wiki/Computational%20semiotics
Computational semiotics is an interdisciplinary field that applies, conducts, and draws on research in logic, mathematics, the theory and practice of computation, formal and natural language studies, the cognitive sciences generally, and semiotics proper. The term encompasses both the application of semiotics to computer hardware and software design and, conversely, the use of computation for performing semiotic analysis. The former focuses on what semiotics can bring to computation; the latter on what computation can bring to semiotics. Semiotics of computation A common theme of this work is the adoption of a sign-theoretic perspective on issues of artificial intelligence and knowledge representation. Many of its applications lie in the field of human-computer interaction (HCI) and fundamental devices of recognition. One part of this field, known as algebraic semiotics, combines aspects of algebraic specification and social semiotics, and has been applied to user interface design and to the representation of mathematical proofs. Computational methods for semiotics This strand involves formalizing semiotic methods of analysis and implementing them as algorithms on computers to process large digital data sets. These data sets are typically textual but semiotics opens the way for analysis of all manner of other data. Existing work provides methods for automated opposition analysis and generation of semiotic squares; metaphor identification; and image analysis. Shackell has suggested that a new field of Natural Semiotic Processing should emerge to extend natural language processing into areas such as persuasive technology, marketing and brand analysis that have significant cultural or non-linguistic aspects. On the other side, Meunier argues that semiotics and computation are compatible and combining them provides more logical consistency in understanding forms of meaning. 
See also Artificial intelligence Computational linguistics Computer-human interaction Formal language Information theory Knowledge representation Computational semantics Logic of information Meaning Natural language Relational database Semiotic engineering Semiotic information theory User interface References Further reading Meunier, J.G. (2021). Computational Semiotics, Bloomsbury Academic. Andersen, P.B. (1991). A Theory of Computer Semiotics, Cambridge University Press. de Souza, C.S., The Semiotic Engineering of Human-Computer Interaction, MIT Press, Cambridge, MA, 2005. Tanaka-Ishii, K. (2010), "Semiotics of Programming", Cambridge University Press. Hugo, J. (2005), "The Semiotics of Control Room Situation Awareness", Fourth International Cyberspace Conference on Ergonomics, Virtual Conference, 15 Sep – 15 Oct 2005. Eprint Gudwin, R.; Queiroz J. (eds) - Semiotics and Intelligent Systems Development - Idea Group Publishing, Hershey PA, USA (2006), (hardcover), 1-59904-064-6 (softcover), 1-59904-065-4 (e-book), 352 ps. Link to publisher Gudwin,
https://en.wikipedia.org/wiki/Sign%20function
In mathematics, the sign function or signum function (from signum, Latin for "sign") is a function that returns the sign of a real number. In mathematical notation the sign function is often represented as sgn x or sgn(x). Definition The signum function of a real number x is a piecewise function which is defined as follows: sgn x = −1 if x < 0, sgn x = 0 if x = 0, and sgn x = 1 if x > 0. Properties Any real number x can be expressed as the product of its absolute value and its sign function: x = |x| sgn x. It follows that whenever x is not equal to 0 we have sgn x = x/|x|. Similarly, for any real number x, |x| = x sgn x. We can also ascertain that sgn(xy) = (sgn x)(sgn y). The signum function is the derivative of the absolute value function, up to (but not including) the indeterminacy at zero: d|x|/dx = x/|x| = sgn x for x ≠ 0. More formally, in integration theory it is a weak derivative, and in convex function theory the subdifferential of the absolute value at 0 is the interval [−1, 1], "filling in" the sign function (the subdifferential of the absolute value is not single-valued at 0). Note, the resultant power of x is 0, similar to the ordinary derivative of x. The numbers cancel and all we are left with is the sign of x. The signum function is differentiable with derivative 0 everywhere except at 0. It is not differentiable at 0 in the ordinary sense, but under the generalised notion of differentiation in distribution theory, the derivative of the signum function is two times the Dirac delta function, which can be demonstrated using the identity sgn x = 2H(x) − 1, where H is the Heaviside step function using the standard H(0) = 1/2 formalism. Using this identity, it is easy to derive the distributional derivative: d(sgn x)/dx = 2 dH(x)/dx = 2δ(x). The Fourier transform of the signum function is ∫ sgn(x) e^(−ikx) dx = p.v. 2/(ik), where p.v. means taking the Cauchy principal value. The signum can also be written using the Iverson bracket notation: sgn x = −[x < 0] + [x > 0]. The signum can also be written using the floor and the absolute value functions. The signum function has a very simple definition if 0^0 is accepted to be equal to 1. 
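The piecewise definition and the identity relating sgn x to the absolute value translate directly into code; a minimal sketch:

```python
def sign(x):
    # piecewise definition: 1 for x > 0, -1 for x < 0, and 0 at x = 0
    return (x > 0) - (x < 0)

# x = |x| * sgn(x) holds for every real x, and sgn(x) = x / |x| for x != 0
for x in (-7.5, -1, 0, 2, 13.0):
    assert abs(x) * sign(x) == x
```

The comparison trick `(x > 0) - (x < 0)` works because Python booleans behave as the integers 0 and 1 under arithmetic.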
Then the signum can be written for all real numbers as a single expression without case distinctions. The signum function coincides with the limits sgn x = lim n→∞ tanh(nx) as well as sgn x = lim n→∞ (2/π) tan^(−1)(nx). Here, tanh is the hyperbolic tangent and tan^(−1) is the inverse of the tangent function. For large k, a smooth approximation of the sign function is sgn x ≈ tanh(kx). Another approximation is sgn x ≈ x/√(x^2 + ε^2), which gets sharper as ε → 0; note that this is the derivative of √(x^2 + ε^2). This is inspired from the fact that the above is exactly equal to sgn x for all nonzero x if ε = 0, and has the advantage of simple generalization to higher-dimensional analogues of the sign function (for example, the partial derivatives of √(x^2 + y^2)). Complex signum The signum function can be generalized to complex numbers as sgn z = z/|z| for any complex number z except z = 0. The signum of a given complex number z is the point on the unit circle of the complex plane that is nearest to z. Then, for z ≠ 0, sgn z = e^(i arg z), where arg is the complex argument function. For reasons of symmetry, and to keep this a proper generalization of the signum function on the reals, also in the complex domain one usually defines, for z = 0: sgn 0 = 0. Another generalization of the sign function f
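The smooth approximations and the complex generalization z/|z| can be sketched as follows (the parameter names k and eps are illustrative choices):

```python
import cmath
import math

def smooth_sign_tanh(x, k=100.0):
    # tanh(k*x) tends to sgn(x) pointwise as k -> infinity
    return math.tanh(k * x)

def smooth_sign_eps(x, eps=1e-3):
    # x / sqrt(x**2 + eps**2): the derivative of sqrt(x**2 + eps**2),
    # sharpening toward sgn(x) as eps -> 0
    return x / math.sqrt(x * x + eps * eps)

def csign(z):
    # complex signum: the nearest point on the unit circle; csign(0) = 0
    return z / abs(z) if z != 0 else 0

# csign(z) agrees with exp(i * arg(z)) for z != 0
z = 3 + 4j
assert abs(csign(z) - cmath.exp(1j * cmath.phase(z))) < 1e-12
```

Both smooth variants are everywhere differentiable, which is why they are preferred over the exact signum in optimization and machine-learning contexts.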
https://en.wikipedia.org/wiki/Centrum%20Wiskunde%20%26%20Informatica
The (abbr. CWI; English: "National Research Institute for Mathematics and Computer Science") is a research centre in the field of mathematics and theoretical computer science. It is part of the institutes organization of the Dutch Research Council (NWO) and is located at the Amsterdam Science Park. This institute is famous as the creation site of the programming language Python. It was a founding member of the European Research Consortium for Informatics and Mathematics (ERCIM). Early history The institute was founded in 1946 by Johannes van der Corput, David van Dantzig, Jurjen Koksma, Hendrik Anthony Kramers, Marcel Minnaert and Jan Arnoldus Schouten. It was originally called Mathematical Centre (in Dutch: Mathematisch Centrum). One early mission was to develop mathematical prediction models to assist large Dutch engineering projects, such as the Delta Works. During this early period, the Mathematics Institute also helped with designing the wings of the Fokker F27 Friendship airplane, voted in 2006 as the most beautiful Dutch design of the 20th century. The computer science component developed soon after. Adriaan van Wijngaarden, considered the founder of computer science (or informatica) in the Netherlands, was the director of the institute for almost 20 years. Edsger Dijkstra did most of his early influential work on algorithms and formal methods at CWI. The first Dutch computers, the Electrologica X1 and Electrologica X8, were both designed at the centre, and Electrologica was created as a spinoff to manufacture the machines. In 1983, the name of the institute was changed to Centrum Wiskunde & Informatica (CWI) to reflect a governmental push for emphasizing computer science research in the Netherlands. Recent research The institute is known for its work in fields such as operations research, software engineering, information processing, and mathematical applications in life sciences and logistics. 
More recent examples of research results from CWI include the development of scheduling algorithms for the Dutch railway system (the Nederlandse Spoorwegen, one of the busiest rail networks in the world) and the development of the Python programming language by Guido van Rossum. Python has played an important role in the development of the Google search platform from the beginning, and it continues to do so as the system grows and evolves. Many information retrieval techniques used by packages such as SPSS were initially developed by Data Distilleries, a CWI spinoff. Work at the institute was recognized by national or international research awards, such as the Lanchester Prize (awarded yearly by INFORMS), the Gödel Prize (awarded by ACM SIGACT) and the Spinoza Prize. Most of its senior researchers hold part-time professorships at other Dutch universities, with the institute producing over 170 full professors during the course of its history. Several CWI researchers have been recognized as members of the Royal Netherlands Academy of Arts an
https://en.wikipedia.org/wiki/Marginal%20distribution
In probability theory and statistics, the marginal distribution of a subset of a collection of random variables is the probability distribution of the variables contained in the subset. It gives the probabilities of various values of the variables in the subset without reference to the values of the other variables. This contrasts with a conditional distribution, which gives the probabilities contingent upon the values of the other variables. Marginal variables are those variables in the subset of variables being retained. These concepts are "marginal" because they can be found by summing values in a table along rows or columns, and writing the sum in the margins of the table. The distribution of the marginal variables (the marginal distribution) is obtained by marginalizing (that is, focusing on the sums in the margin) over the distribution of the variables being discarded, and the discarded variables are said to have been marginalized out. The context here is that the theoretical studies being undertaken, or the data analysis being done, involves a wider set of random variables but that attention is being limited to a reduced number of those variables. In many applications, an analysis may start with a given collection of random variables, then first extend the set by defining new ones (such as the sum of the original random variables) and finally reduce the number by placing interest in the marginal distribution of a subset (such as the sum). Several different analyses may be done, each treating a different subset of variables as the marginal distribution. Definition Marginal probability mass function Given a known joint distribution of two discrete random variables, say, X and Y, the marginal distribution of either variable – X for example – is the probability distribution of X when the values of Y are not taken into consideration. This can be calculated by summing the joint probability distribution over all values of Y. 
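Concretely, when a discrete joint distribution is stored as a table, marginalizing is just summing rows or columns and writing the sums in the margins. A sketch with made-up numbers:

```python
# Joint pmf p(X = i, Y = j) as a table: rows index X, columns index Y.
joint = [
    [0.10, 0.20],   # X = 0
    [0.30, 0.40],   # X = 1
]

# Marginal of X: sum each row over all values of Y
# (the row sums that would be written in the table's right margin).
p_x = [sum(row) for row in joint]

# Marginal of Y: sum each column over all values of X.
p_y = [sum(col) for col in zip(*joint)]

print([round(p, 3) for p in p_x])  # [0.3, 0.7]
print([round(p, 3) for p in p_y])  # [0.4, 0.6]
```

Each marginal is itself a probability distribution: its entries are nonnegative and sum to 1, since the whole table does.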
Naturally, the converse is also true: the marginal distribution can be obtained for Y by summing over the separate values of X. That is, pX(x) = Σy pX,Y(x, y), and pY(y) = Σx pX,Y(x, y). A marginal probability can always be written as an expected value: pX(x) = EY[pX|Y(x | Y)]. Intuitively, the marginal probability of X is computed by examining the conditional probability of X given a particular value of Y, and then averaging this conditional probability over the distribution of all values of Y. This follows from the definition of expected value (after applying the law of the unconscious statistician). Therefore, marginalization provides the rule for the transformation of the probability distribution of a random variable Y and another random variable X: pX(x) = Σy pX|Y(x | y) pY(y). Marginal probability density function Given two continuous random variables X and Y whose joint distribution is known, then the marginal probability density function can be obtained by integrating the joint probability distribution, fX,Y(x, y), over Y, and vice versa. That is, fX(x) = ∫ fX,Y(x, y) dy, and fY(y) = ∫ fX,Y(x, y) dx, where each integral is taken over the range of the variable being marginalized out. Marginal cumulative distribution function Finding the marginal
https://en.wikipedia.org/wiki/Equivalent%20dose
Equivalent dose is a dose quantity H representing the stochastic health effects of low levels of ionizing radiation on the human body, namely the probability of radiation-induced cancer and genetic damage. It is derived from the physical quantity absorbed dose, but also takes into account the biological effectiveness of the radiation, which is dependent on the radiation type and energy. In the SI system of units, the unit of measure is the sievert (Sv). Application To enable consideration of stochastic health risk, calculations are performed to convert the physical quantity absorbed dose into equivalent dose, the details of which depend on the radiation type. For applications in radiation protection and dosimetry assessment, the International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU) have published recommendations and data on how to calculate equivalent dose from absorbed dose. Equivalent dose is designated by the ICRP as a "limiting quantity", used to specify exposure limits to ensure that "the occurrence of stochastic health effects is kept below unacceptable levels and that tissue reactions are avoided". This is a calculated value, as equivalent dose cannot be practically measured, and the purpose of the calculation is to generate a value of equivalent dose for comparison with observed health effects. Calculation Equivalent dose HT is calculated using the mean absorbed dose deposited in body tissue or organ T, multiplied by the radiation weighting factor WR which is dependent on the type and energy of the radiation R. The radiation weighting factor represents the relative biological effectiveness of the radiation and modifies the absorbed dose to take account of the different biological effects of various types and energies of radiation. 
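The calculation described above is a weighted sum over radiation types: HT = ΣR WR · DT,R. A sketch in code (the weighting-factor values below follow commonly cited ICRP recommendations for a few radiation types and are illustrative, not authoritative):

```python
# Radiation weighting factors W_R for selected radiation types
# (values as commonly cited from ICRP recommendations; illustrative only).
W_R = {
    "photons": 1.0,
    "electrons": 1.0,
    "protons": 2.0,
    "alpha": 20.0,
}

def equivalent_dose_sv(absorbed_gy):
    """HT in sieverts from a mapping {radiation type: absorbed dose in grays}."""
    return sum(W_R[r] * d for r, d in absorbed_gy.items())

# 1 Gy of alpha particles alone gives an equivalent dose of 20 Sv,
# matching the example in the text.
print(equivalent_dose_sv({"alpha": 1.0}))  # 20.0
```

A mixed field simply adds its weighted contributions, e.g. 1 Gy of photons plus 0.5 Gy of alpha gives 1 + 10 = 11 Sv under these factors.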
The ICRP has assigned radiation weighting factors to specified radiation types dependent on their relative biological effectiveness, which are shown in the accompanying table. Calculating equivalent dose from absorbed dose: HT = ΣR WR · DT,R, where HT  is the equivalent dose in sieverts (Sv) absorbed by tissue T, DT,R  is the absorbed dose in grays (Gy) in tissue T by radiation type R and WR  is the radiation weighting factor defined by regulation. Thus for example, an absorbed dose of 1 Gy by alpha particles will lead to an equivalent dose of 20 Sv, and an equivalent dose of radiation is estimated to have the same biological effect as an equal amount of absorbed dose of gamma rays, which is given a weighting factor of 1. To obtain the equivalent dose for a mix of radiation types and energies, a sum is taken over all types of radiation energy doses. This takes into account the contributions of the varying biological effect of different radiation types. History The concept of equivalent dose was developed in the 1950s. In its 1990 recommendations, the ICRP revised the definitions of some radiation protection quantities, and provided new na
https://en.wikipedia.org/wiki/Spectral%20theory
In mathematics, spectral theory is an inclusive term for theories extending the eigenvector and eigenvalue theory of a single square matrix to a much broader theory of the structure of operators in a variety of mathematical spaces. It is a result of studies of linear algebra and the solutions of systems of linear equations and their generalizations. The theory is connected to that of analytic functions because the spectral properties of an operator are related to analytic functions of the spectral parameter. Mathematical background The name spectral theory was introduced by David Hilbert in his original formulation of Hilbert space theory, which was cast in terms of quadratic forms in infinitely many variables. The original spectral theorem was therefore conceived as a version of the theorem on principal axes of an ellipsoid, in an infinite-dimensional setting. The later discovery in quantum mechanics that spectral theory could explain features of atomic spectra was therefore fortuitous. Hilbert himself was surprised by the unexpected application of this theory, noting that "I developed my theory of infinitely many variables from purely mathematical interests, and even called it 'spectral analysis' without any presentiment that it would later find application to the actual spectrum of physics." There have been three main ways to formulate spectral theory, each of which finds use in different domains. After Hilbert's initial formulation, the later development of abstract Hilbert spaces and the spectral theory of single normal operators on them were well suited to the requirements of physics, exemplified by the work of von Neumann. The further theory built on this to address Banach algebras in general. This development leads to the Gelfand representation, which covers the commutative case, and further into non-commutative harmonic analysis. The difference can be seen in making the connection with Fourier analysis. 
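The connection with Fourier analysis has a concrete finite-dimensional shadow: the discrete Fourier transform diagonalizes every circulant matrix, such as the periodic second-difference operator (a discretization of d²/dx²). A numerical sketch, assuming NumPy as tooling:

```python
import numpy as np

n = 8
# First column of the periodic second-difference (discrete Laplacian)
# operator: -2 on the diagonal, 1 on the two (wrapped) off-diagonals.
c = np.zeros(n)
c[0], c[1], c[-1] = -2.0, 1.0, 1.0

# Build the circulant matrix C[i, j] = c[(i - j) mod n].
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

# Spectral theory in miniature: the eigenvalues of a circulant matrix are
# the DFT of its first column, and the eigenvectors are the Fourier modes.
dft_eigenvalues = np.fft.fft(c).real      # imaginary parts vanish: C is symmetric
true_eigenvalues = np.linalg.eigvalsh(C)
assert np.allclose(np.sort(dft_eigenvalues), true_eigenvalues)
```

Here the eigenvalues come out as −2 + 2 cos(2πk/n), the discrete analogue of the −ξ² symbol of the second derivative under the Fourier transform.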
The Fourier transform on the real line is in one sense the spectral theory of differentiation as a differential operator. But for that to cover the phenomena one already has to deal with generalized eigenfunctions (for example, by means of a rigged Hilbert space). On the other hand, it is simple to construct a group algebra, the spectrum of which captures the Fourier transform's basic properties, and this is carried out by means of Pontryagin duality. One can also study the spectral properties of operators on Banach spaces. For example, compact operators on Banach spaces have many spectral properties similar to those of matrices. Physical background The background in the physics of vibrations has been explained in this way. Such physical ideas have nothing to do with the mathematical theory on a technical level, but there are examples of indirect involvement (see for example Mark Kac's question Can you hear the shape of a drum?). Hilbert's adoption of the term "spectrum" has been attributed to an 1897 paper of Wilhelm Wirtinger on Hill
https://en.wikipedia.org/wiki/Regular%20local%20ring
In commutative algebra, a regular local ring is a Noetherian local ring having the property that the minimal number of generators of its maximal ideal is equal to its Krull dimension. In symbols, let A be a Noetherian local ring with maximal ideal m, and suppose a1, ..., an is a minimal set of generators of m. Then by Krull's principal ideal theorem n ≥ dim A, and A is defined to be regular if n = dim A. The appellation regular is justified by the geometric meaning. A point x on an algebraic variety X is nonsingular if and only if the local ring of germs at x is regular. (See also: regular scheme.) Regular local rings are not related to von Neumann regular rings. For Noetherian local rings, there is the following chain of inclusions: universally catenary rings ⊃ Cohen–Macaulay rings ⊃ Gorenstein rings ⊃ complete intersection rings ⊃ regular local rings. Characterizations There are a number of useful definitions of a regular local ring, one of which is mentioned above. In particular, if A is a Noetherian local ring with maximal ideal m, then the following are equivalent definitions: Let m = (a1, ..., an) where n is chosen as small as possible. Then A is regular if n = dim A, where the dimension is the Krull dimension. The minimal set of generators of m are then called a regular system of parameters. Let k = A/m be the residue field of A. Then A is regular if dimk m/m^2 = dim A, where the second dimension is the Krull dimension. Let gl dim A denote the global dimension of A (i.e., the supremum of the projective dimensions of all A-modules). Then A is regular if gl dim A < ∞, in which case gl dim A = dim A. Multiplicity one criterion states: if the completion of a Noetherian local ring A is unimixed (in the sense that there is no embedded prime divisor of the zero ideal and for each minimal prime p, dim Â/p = dim Â) and if the multiplicity of A is one, then A is regular. (The converse is always true: the multiplicity of a regular local ring is one.) This criterion corresponds to a geometric intuition in algebraic geometry that a local ring of an intersection is regular if and only if the intersection is a transversal intersection. 
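The characterization of regularity by the dimension of m/m^2 (the Zariski tangent space) can be probed computationally at the origin of a plane curve. A sketch with sympy's Gröbner bases (an assumed tool; the test asks whether a generator of m is redundant modulo m^2 together with the curve's equation, which stands in for the local ring at the origin):

```python
from sympy import symbols, groebner

x, y = symbols('x y')
m_squared = [x**2, x*y, y**2]   # generators of m**2 for m = (x, y)

# Smooth point: for the parabola y = x**2, the generator y lies in
# m**2 + (y - x**2), so m/m**2 is 1-dimensional. This matches the Krull
# dimension of the curve: the local ring is regular.
G_parabola = groebner([y - x**2] + m_squared, x, y, order='lex')
print(G_parabola.contains(y))                   # True

# Singular point: for the cusp y**2 = x**3, neither generator is
# redundant, so m/m**2 is 2-dimensional while the curve is
# 1-dimensional: the local ring is not regular.
G_cusp = groebner([y**2 - x**3] + m_squared, x, y, order='lex')
print(G_cusp.contains(y), G_cusp.contains(x))   # False False
```

This is exactly the geometric picture above: the cusp's tangent space at the origin is too big, so the origin is a singular point.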
In the positive characteristic case, there is the following important result due to Kunz: A Noetherian local ring R of positive characteristic p is regular if and only if the Frobenius morphism R → R, r ↦ r^p, is flat and R is reduced. No similar result is known in characteristic zero (it is unclear how one should replace the Frobenius morphism). Examples Every field is a regular local ring. These have (Krull) dimension 0. In fact, the fields are exactly the regular local rings of dimension 0. Any discrete valuation ring is a regular local ring of dimension 1 and the regular local rings of dimension 1 are exactly the discrete valuation rings. Specifically, if k is a field and X is an indeterminate, then the ring of formal power series k[[X]] is a regular local ring having (Krull) dimension 1. If p is an ordinary prime number, the ring of p-adic integers is an example of a discrete valuation ring, and consequently a regular local ring, which does not contain a field. More generally, if k is a field and X1, X2, ..., Xd are indeterminates, then the ring of formal p
https://en.wikipedia.org/wiki/Depth
Depth(s) may refer to: Science and mathematics Depth (ring theory), an important invariant of rings and modules in commutative and homological algebra Depth in a well, the measurement between two points in an oil well Color depth (or "number of bits" or "bit depth"), in computer graphics Market depth, in financial markets, the size of an order needed to move the market a given amount Moulded depth, a nautical measurement Sequence depth, or coverage, in genetic sequencing Depth (coordinate), a type of vertical distance Tree depth Art and entertainment Depth (video game), an asymmetrical multiplayer video game for Microsoft Windows Depths (novel), a 2004 novel by Henning Mankell Depths (Oceano album), 2009 Depths (Windy & Carl album), 1998 "Depths" (Law & Order: Criminal Intent), an episode of Law & Order: Criminal Intent Depth, the Japanese title for the PlayStation game released in Europe under the name Fluid Depths of Wikipedia, social media account dedicated to interesting or unusual Wikipedia content See also Altitude, height, and depth (ISO definitions) Altitude Depth charge (disambiguation) Depth perception, the visual ability to perceive the world in three dimensions (3D) Fluid pressure Plumb-bob Sea level Deep (disambiguation)
https://en.wikipedia.org/wiki/Multifactorial
Multifactorial (having many factors) can refer to: The multifactorial in mathematics. Multifactorial inheritance, a pattern of predisposition for a disease process.
https://en.wikipedia.org/wiki/Double%20factorial
In mathematics, the double factorial of a number n, denoted by n!!, is the product of all the positive integers up to n that have the same parity (odd or even) as n. That is, n!! = n(n − 2)(n − 4) ⋯. Restated, this says that for even n, the double factorial is n!! = n(n − 2)(n − 4) ⋯ 4 · 2, while for odd n it is n!! = n(n − 2)(n − 4) ⋯ 3 · 1. For example, 9!! = 9 × 7 × 5 × 3 × 1 = 945. The zero double factorial 0!! = 1 as an empty product. The sequence of double factorials for even n = 0, 2, 4, 6, 8, ... starts as 1, 2, 8, 48, 384, 3840, ... The sequence of double factorials for odd n = 1, 3, 5, 7, 9, ... starts as 1, 3, 15, 105, 945, 10395, ... The term odd factorial is sometimes used for the double factorial of an odd number. History and usage The double exclamation point notation appears in a 1902 paper by the physicist Arthur Schuster; the double factorial was reportedly originally introduced in order to simplify the expression of certain trigonometric integrals that arise in the derivation of the Wallis product. Double factorials also arise in expressing the volume of a hypersphere, and they have many applications in enumerative combinatorics. They occur in Student's t-distribution (1908), though Gosset did not use the double exclamation point notation. Relation to the factorial Because the double factorial only involves about half the factors of the ordinary factorial, its value is not substantially larger than the square root of the factorial n!, and it is much smaller than the iterated factorial (n!)!. The factorial of a positive n may be written as the product of two double factorials: n! = n!! · (n − 1)!!, and therefore n!! = n!/(n − 1)!!, where the denominator cancels the unwanted factors in the numerator. (The last form also applies when n = 0.) For an even non-negative integer n = 2k with k ≥ 0, the double factorial may be expressed as n!! = 2^k k!. For odd n = 2k − 1 with k ≥ 1, combining the two previous formulas yields n!! = (2k)!/(2^k k!). For an odd positive integer n = 2k − 1 with k ≥ 1, the double factorial may be expressed in terms of k-permutations of 2k or a falling factorial as n!! = (2k)Pk/2^k, where (2k)Pk = (2k)(2k − 1) ⋯ (k + 1). Applications in enumerative combinatorics Double factorials are motivated by the fact that they occur frequently in enumerative combinatorics and other settings. For instance, n!! for odd values of n counts Perfect matchings of the complete graph Kn+1 for odd n. 
In such a graph, any single vertex v has n possible choices of vertex that it can be matched to, and once this choice is made the remaining problem is one of selecting a perfect matching in a complete graph with two fewer vertices. For instance, a complete graph with four vertices a, b, c, and d has three perfect matchings: ab and cd, ac and bd, and ad and bc. Perfect matchings may be described in several other equivalent ways, including involutions without fixed points on a set of items (permutations in which each cycle is a pair) or chord diagrams (sets of chords of a set of points evenly spaced on a circle such that each point is the endpoint of exactly one chord, also called Brauer diagrams). The numbers of matchings in complete graphs, without constraining the matchings to be perfect, are instead given by the telephone numbers, which may be expressed as a summation involving double factorials. Stirling permutations, permutations of the multiset of numbers 1, 1, 2, 2, ..., k, k in which each pa
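Both the definition and the perfect-matching count can be checked directly; a brute-force sketch:

```python
def double_factorial(n):
    # product of the positive integers up to n with the same parity as n;
    # 0!! = 1 and (-1)!! = 1 fall out as empty products
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def count_perfect_matchings(num_vertices):
    # brute force over the complete graph: always match the lowest
    # unmatched vertex first, then recurse on what remains
    def count(remaining):
        if not remaining:
            return 1
        first, rest = remaining[0], remaining[1:]
        return sum(count([v for v in rest if v != partner])
                   for partner in rest)
    return count(list(range(num_vertices)))

# K_4 has 3 perfect matchings = 3!!, and K_6 has 15 = 5!!,
# matching the count n!! for K_{n+1} described in the text.
assert count_perfect_matchings(4) == double_factorial(3)
assert count_perfect_matchings(6) == double_factorial(5)
```

The recursion mirrors the argument in the text: the first vertex has n choices of partner, leaving a complete graph with two fewer vertices, hence the product n(n − 2)(n − 4) ⋯ 1.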
https://en.wikipedia.org/wiki/Hyperfactorial
In mathematics, and more specifically number theory, the hyperfactorial of a positive integer n is the product of the numbers of the form k^k from 1^1 to n^n. Definition The hyperfactorial of a positive integer n is the product of the numbers 1^1, 2^2, ..., n^n. That is, H(n) = 1^1 · 2^2 ⋯ n^n. Following the usual convention for the empty product, the hyperfactorial of 0 is 1. The sequence of hyperfactorials, beginning with H(1) = 1, is: 1, 4, 108, 27648, 86400000, ... Interpolation and approximation The hyperfactorials were studied beginning in the 19th century by Hermann Kinkelin and James Whitbread Lee Glaisher. As Kinkelin showed, just as the factorials can be continuously interpolated by the gamma function, the hyperfactorials can be continuously interpolated by the K-function. Glaisher provided an asymptotic formula for the hyperfactorials, analogous to Stirling's formula for the factorials: H(n) ∼ A n^((6n^2 + 6n + 1)/12) e^(−n^2/4), where A is the Glaisher–Kinkelin constant. Other properties According to an analogue of Wilson's theorem on the behavior of factorials modulo prime numbers, when p is an odd prime number H(p − 1) ≡ (−1)^((p − 1)/2) (p − 1)!! (mod p), where !! is the notation for the double factorial. The hyperfactorials give the sequence of discriminants of Hermite polynomials in their probabilistic formulation. References External links Integer sequences Factorial and binomial topics
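The definition H(n) = 1^1 · 2^2 ⋯ n^n translates directly into code; a minimal sketch:

```python
def hyperfactorial(n):
    # H(n) = product of k**k for k = 1..n; H(0) = 1 as an empty product
    result = 1
    for k in range(1, n + 1):
        result *= k ** k
    return result

print([hyperfactorial(n) for n in range(5)])  # [1, 1, 4, 108, 27648]
```

Because Python integers are arbitrary precision, the rapid growth of H(n) poses no overflow problem, only a cost in time and memory.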