https://en.wikipedia.org/wiki/Join
Join may refer to:
Join (law), to include additional counts or additional defendants on an indictment

In mathematics:
Join (mathematics), a least upper bound in order and lattice theory
Join (topology), an operation combining two topological spaces
Join (sigma algebra), a refinement of sigma algebras
Join (algebraic geometry), a union of lines between two varieties

In computing:
Join (relational algebra), a binary operation on tuples corresponding to the relation join of SQL
Join (SQL), relational join, a binary operation on SQL and relational database tables
join (Unix), a Unix command similar to relational join
Join-calculus, a process calculus developed at INRIA for the design of distributed programming languages
Join-pattern, a generalization of the join-calculus
Joins (concurrency library), a concurrent computing API from Microsoft Research

Other uses:
Join Network Studio of NENU, a non-profit organization of Northeast Normal University
Joins.com, the website for the South Korean newspaper JoongAng Ilbo
Joining (woodworking), woodworking processes of combining two or more pieces of wood together, generally through the use of nails or screws
Joining (metalworking), metalworking processes which combine two or more pieces of metal together, typically by the use of screws or welding

See also:
Joiner (disambiguation)
The Joining (disambiguation)
Joint (disambiguation)
Joyn (disambiguation)
https://en.wikipedia.org/wiki/Maximal%20and%20minimal%20elements
In mathematics, especially in order theory, a maximal element of a subset S of some preordered set is an element of S that is not smaller than any other element in S. A minimal element of a subset S of some preordered set is defined dually as an element of S that is not greater than any other element in S. The notions of maximal and minimal elements are weaker than those of greatest element and least element, which are also known, respectively, as maximum and minimum. The maximum of a subset S of a preordered set is an element of S which is greater than or equal to any other element of S, and the minimum of S is again defined dually. In the particular case of a partially ordered set, while there can be at most one maximum and at most one minimum, there may be multiple maximal or minimal elements. Specializing further to totally ordered sets, the notions of maximal element and maximum coincide, and the notions of minimal element and minimum coincide. As an example, in the collection S = {{d, o}, {d, o, g}, {g, o, a, d}, {o, a, f}} ordered by containment, the element {d, o} is minimal as it contains no sets in the collection, the element {g, o, a, d} is maximal as there are no sets in the collection which contain it, the element {d, o, g} is neither, and the element {o, a, f} is both minimal and maximal. By contrast, neither a maximum nor a minimum exists for S. Zorn's lemma states that every partially ordered set for which every totally ordered subset has an upper bound contains at least one maximal element. This lemma is equivalent to the well-ordering theorem and the axiom of choice and implies major results in other mathematical areas like the Hahn–Banach theorem, the Kirszbraun theorem, Tychonoff's theorem, the existence of a Hamel basis for every vector space, and the existence of an algebraic closure for every field. 
Definition Let (P, ≤) be a preordered set and let S ⊆ P. A maximal element of S with respect to ≤ is an element m ∈ S such that if any s ∈ S satisfies m ≤ s, then necessarily s ≤ m. Similarly, a minimal element of S with respect to ≤ is an element m ∈ S such that if any s ∈ S satisfies s ≤ m, then necessarily m ≤ s. Equivalently, m ∈ S is a minimal element of S with respect to ≤ if and only if m is a maximal element of S with respect to ≥, where by definition, p ≥ q if and only if q ≤ p (for all p, q ∈ P). If the subset S is not specified then it should be assumed that S = P. Explicitly, a maximal element (respectively, minimal element) of P is a maximal (resp. minimal) element of S = P with respect to ≤. If the preordered set (P, ≤) also happens to be a partially ordered set (or more generally, if the restriction (S, ≤) is a partially ordered set) then m ∈ S is a maximal element of S if and only if S contains no element strictly greater than m; explicitly, this means that there does not exist any element s ∈ S such that m ≤ s and m ≠ s. The characterization for minimal elements is obtained by using ≥ in place of ≤. Existence and uniqueness Maximal elements need not exist. Example 1: Let S = [1, ∞) ⊆ ℝ, where ℝ denotes the real numbers. For all m ∈ S, s = m + 1 ∈ S but m < s (that is, m ≤ s but not s ≤ m). Example 2: Let S = {s ∈ ℚ : 1 ≤ s² ≤ 2} ⊆ ℚ, where ℚ denotes the rational numbers and where √2 is irrational. In general ≤ is only a partial order on S. If m is a maximal element and s ∈ S, then it remains possi
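The definitions above are easy to check mechanically for a finite preorder. The following sketch (our own illustration, not from the article; all names are made up) finds the maximal and minimal elements of a finite set of integers preordered by divisibility:

```python
# Our own illustration: maximal/minimal elements of a finite set under
# the divisibility preorder "a <= b iff a divides b".

def is_maximal(x, s, leq):
    """No other element of s is strictly greater than x."""
    return not any(leq(x, y) and not leq(y, x) for y in s)

def is_minimal(x, s, leq):
    """No other element of s is strictly smaller than x."""
    return not any(leq(y, x) and not leq(x, y) for y in s)

divides = lambda a, b: b % a == 0
s = {2, 3, 4, 5, 12}

maximal = sorted(x for x in s if is_maximal(x, s, divides))
minimal = sorted(x for x in s if is_minimal(x, s, divides))
print(maximal)   # [5, 12] -- two maximal elements, yet no maximum
print(minimal)   # [2, 3, 5] -- note 5 is both minimal and maximal
```

As in the article's containment example, an element (here 5) can be both minimal and maximal, and several maximal elements can coexist without any maximum.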
https://en.wikipedia.org/wiki/Universal%20set
In set theory, a universal set is a set which contains all objects, including itself. In set theory as usually formulated, it can be proven in multiple ways that a universal set does not exist. However, some non-standard variants of set theory include a universal set. Reasons for nonexistence Many set theories do not allow for the existence of a universal set. There are several different arguments for its non-existence, based on different choices of axioms for set theory. Regularity In Zermelo–Fraenkel set theory, the axiom of regularity and axiom of pairing prevent any set from containing itself. For any set A, the set {A} (constructed using pairing) necessarily contains an element disjoint from {A}, by regularity. Because its only element is A, it must be the case that A is disjoint from {A}, and therefore that A does not contain itself. Because a universal set would necessarily contain itself, it cannot exist under these axioms. Russell's paradox Russell's paradox prevents the existence of a universal set in set theories that include Zermelo's axiom of comprehension. This axiom states that, for any formula φ(x) and any set A, there exists a set {x ∈ A : φ(x)} that contains exactly those elements x of A that satisfy φ. As a consequence of this axiom, to every set A there corresponds another set R_A = {x ∈ A : x ∉ x} consisting of the elements of A that do not contain themselves. R_A cannot contain itself, because it consists only of sets that do not contain themselves. It cannot be a member of A, because if it were it would be included as a member of itself, by its definition, contradicting the fact that it cannot contain itself. Therefore, every set A is non-universal: there exists a set R_A that it does not contain. This indeed holds even with predicative comprehension and over intuitionistic logic. Cantor's theorem Another difficulty with the idea of a universal set concerns the power set of the set of all sets. Because this power set is a set of sets, it would necessarily be a subset of the set of all sets, provided that both exist. 
However, this conflicts with Cantor's theorem that the power set of any set (whether infinite or not) always has strictly higher cardinality than the set itself. Theories of universality The difficulties associated with a universal set can be avoided either by using a variant of set theory in which the axiom of comprehension is restricted in some way, or by using a universal object that is not considered to be a set. Restricted comprehension There are set theories known to be consistent (if the usual set theory is consistent) in which the universal set V does exist (and V ∈ V is true). In these theories, Zermelo's axiom of comprehension does not hold in general, and the axiom of comprehension of naive set theory is restricted in a different way. A set theory containing a universal set is necessarily a non-well-founded set theory. The most widely studied set theory with a universal set is Willard Van Orman Quine's New Foundations. Alonzo Church and Arnold Oberschelp also p
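The Russell-style argument can be made concrete for finite collections. In the sketch below (our own illustration, not from the article), Python frozensets play the role of well-founded sets: since a frozenset can never contain itself, the set R of members of S that do not contain themselves is never itself a member of S, so no such collection is "universal":

```python
# Our own illustration: for ANY collection S of frozensets, the set
# R = {x in S : x not in x} is never itself a member of S.

def russell(S):
    return frozenset(x for x in S if x not in x)

a = frozenset()
b = frozenset({a})
S = {a, b, frozenset({a, b})}

R = russell(S)
assert R not in S            # S fails to contain R: S is not "universal"
print(R == frozenset(S))     # True -- no frozenset contains itself, so R keeps every element
```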
https://en.wikipedia.org/wiki/Completing%20the%20square
In elementary algebra, completing the square is a technique for converting a quadratic polynomial of the form ax² + bx + c to the form a(x − h)² + k for some values of h and k. In other words, completing the square places a perfect square trinomial inside of a quadratic expression. Completing the square is used in solving quadratic equations, deriving the quadratic formula, graphing quadratic functions, evaluating integrals in calculus, such as Gaussian integrals with a linear term in the exponent, and finding Laplace transforms. In mathematics, completing the square is often applied in any computation involving quadratic polynomials. History The technique of completing the square was known in the Old Babylonian Empire. Muhammad ibn Musa Al-Khwarizmi, a famous polymath who wrote the early algebraic treatise Al-Jabr, used the technique of completing the square to solve quadratic equations. Overview Background The formula in elementary algebra for computing the square of a binomial is (x + p)² = x² + 2px + p². For example: (x + 3)² = x² + 6x + 9, where p = 3. In any perfect square, the coefficient of x is twice the number p, and the constant term is equal to p². Basic example Consider the following quadratic polynomial: x² + 10x + 28. This quadratic is not a perfect square, since 28 is not the square of 5: (x + 5)² = x² + 10x + 25. However, it is possible to write the original quadratic as the sum of this square and a constant: x² + 10x + 28 = (x + 5)² + 3. This is called completing the square. General description Given any monic quadratic x² + bx + c, it is possible to form a square that has the same first two terms: (x + b/2)² = x² + bx + (b/2)². This square differs from the original quadratic only in the value of the constant term. Therefore, we can write x² + bx + c = (x + b/2)² + k, where k = c − b²/4. This operation is known as completing the square. For example: x² + 6x + 11 = (x + 3)² + 2. Non-monic case Given a quadratic polynomial of the form ax² + bx + c, it is possible to factor out the coefficient a, and then complete the square for the resulting monic polynomial. Example: 3x² + 12x + 27 = 3(x² + 4x + 9) = 3((x + 2)² + 5) = 3(x + 2)² + 15. This process of factoring out the coefficient a can further be simplified by only factorising it out of the first 2 terms. 
The integer at the end of the polynomial does not have to be included. Example: 3x² + 12x + 27 = 3(x² + 4x) + 27 = 3((x + 2)² − 4) + 27 = 3(x + 2)² + 15. This allows the writing of any quadratic polynomial in the form a(x − h)² + k. Formula Scalar case The result of completing the square may be written as a formula. In the general case, one has ax² + bx + c = a(x − h)² + k with h = −b/(2a) and k = c − b²/(4a). In particular, when a = 1, one has x² + bx + c = (x − h)² + k with h = −b/2 and k = c − b²/4. By solving the equation a(x − h)² + k = 0 in terms of x and reorganizing the resulting expression, one gets the quadratic formula for the roots of the quadratic equation: x = (−b ± √(b² − 4ac))/(2a). Matrix case The matrix case looks very similar: xᵀAx + xᵀb + c = (x − h)ᵀA(x − h) + k, where h = −(1/2)A⁻¹b and k = c − (1/4)bᵀA⁻¹b. Note that A has to be symmetric. If A is not symmetric the formulae for h and k have to be generalized to: h = −(A + Aᵀ)⁻¹b and k = c − hᵀAh. Relation to the graph In analytic geometry, the graph of any quadratic function is a parabola in the xy-plane. Given a quadratic polynomial of the form a(x − h)² + k, the numbers h and k may be interpreted as the Cartesian coordinates of the vertex (or stationary point) of the parabola. That is, h is the x-coordinate of the axis of symmetry (i.e. the axis of symmetry has equation x = h), and k is the m
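The scalar formula is short enough to state as code. The sketch below is our own (the function name is made up); it returns a, h, k with ax² + bx + c = a(x − h)² + k, and checks itself on the article's basic example, where 28 is not the square of 5:

```python
# A hedged sketch (ours, not from the article): complete the square for
# a*x^2 + b*x + c, returning (a, h, k) with a*x^2 + b*x + c == a*(x - h)^2 + k.

def complete_the_square(a, b, c):
    h = -b / (2 * a)           # x-coordinate of the vertex
    k = c - b * b / (4 * a)    # value of the polynomial at the vertex
    return a, h, k

print(complete_the_square(1, 10, 28))   # (1, -5.0, 3.0): x^2 + 10x + 28 = (x + 5)^2 + 3

# spot-check a non-monic case at a random point
import random
x = random.uniform(-10, 10)
a, h, k = complete_the_square(3, -12, 5)
assert abs(3*x*x - 12*x + 5 - (a*(x - h)**2 + k)) < 1e-9
```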
https://en.wikipedia.org/wiki/Macsyma
Macsyma ("Project MAC's SYmbolic MAnipulator") is one of the oldest general-purpose computer algebra systems still in wide use. It was originally developed from 1968 to 1982 at MIT's Project MAC. In 1982, Macsyma was licensed to Symbolics and became a commercial product. In 1992, Symbolics Macsyma was spun off to Macsyma, Inc., which continued to develop Macsyma until 1999. That version is still available for Microsoft's Windows XP operating system. The 1982 version of MIT Macsyma remained available to academics and US government agencies, and it is distributed by the US Department of Energy (DOE). That version, DOE Macsyma, was maintained by Bill Schelter. Under the name of Maxima, it was released under the GPL in 1999, and remains under active maintenance. Development The project was initiated in July, 1968 by Carl Engelman, William A. Martin (front end, expression display, polynomial arithmetic) and Joel Moses (simplifier, indefinite integration: heuristic/Risch). Martin was in charge of the project until 1971, and Moses ran it for the next decade. Engelman and his staff left in 1969 to return to The MITRE Corporation. Some code came from earlier work, notably Knut Korsvold's simplifier. Later major contributors to the core mathematics engine were: Yannis Avgoustis (special functions), David Barton (solving algebraic systems of equations), Richard Bogen (special functions), Bill Dubuque (indefinite integration, limits, power series, number theory, special functions, functional equations, pattern matching, sign queries, Gröbner, TriangSys), Richard Fateman (rational functions, pattern matching, arbitrary precision floating-point), Michael Genesereth (comparison, knowledge database), Jeff Golden (simplifier, language, system), R. W. 
Gosper (definite summation, special functions, simplification, number theory), Carl Hoffman (general simplifier, macros, non-commutative simplifier, ports to Multics and LispM, system, visual equation editor), Charles Karney (plotting), John Kulp, Ed Lafferty (ODE solution, special functions), Stavros Macrakis (real/imaginary parts, compiler, system), Richard Pavelle (indicial tensor calculus, general relativity package, ordinary and partial differential equations), David A. Spear (Gröbner), Barry Trager (algebraic integration, factoring, Gröbner), Paul S. Wang (polynomial factorization and GCD, complex numbers, limits, definite integration, Fortran and LaTeX code generation), David Y. Y. Yun (polynomial GCDs), Gail Zacharias (Gröbner) and Rich Zippel (power series, polynomial factorization, number theory, combinatorics). Macsyma was written in Maclisp, and was, in some cases, a key motivator for improving that dialect of Lisp in the areas of numerical computing, efficient compilation and language design. Maclisp itself ran primarily on PDP-6 and PDP-10 computers, but also on the Multics OS and on the Lisp Machine architectures. Macsyma was one of the largest, if not the largest, Lisp programs of the time. C
https://en.wikipedia.org/wiki/Mercer%27s%20theorem
In mathematics, specifically functional analysis, Mercer's theorem is a representation of a symmetric positive-definite function on a square as a sum of a convergent sequence of product functions. This theorem, presented in (Mercer 1909), is one of the most notable results of the work of James Mercer (1883–1932). It is an important theoretical tool in the theory of integral equations; it is used in the Hilbert space theory of stochastic processes, for example the Karhunen–Loève theorem; and it is also used to characterize a symmetric positive-definite kernel. Introduction To explain Mercer's theorem, we first consider an important special case; see below for a more general formulation. A kernel, in this context, is a symmetric continuous function K : [a, b] × [a, b] → ℝ, where symmetric means that K(x, y) = K(y, x) for all x, y ∈ [a, b]. K is said to be positive-definite if and only if Σᵢ Σⱼ cᵢcⱼK(xᵢ, xⱼ) ≥ 0 for all finite sequences of points x₁, ..., xₙ of [a, b] and all choices of real numbers c₁, ..., cₙ. Note that the term "positive-definite" is well-established in literature despite the weak inequality in the definition. Associated to K is a linear operator (more specifically a Hilbert–Schmidt integral operator) on functions defined by the integral [TKφ](x) = ∫ₐᵇ K(x, s)φ(s) ds. For technical considerations we assume φ can range through the space L²[a, b] (see Lp space) of square-integrable real-valued functions. Since TK is a linear operator, we can talk about eigenvalues and eigenfunctions of TK. Theorem. Suppose K is a continuous symmetric positive-definite kernel. Then there is an orthonormal basis {ei}i of L²[a, b] consisting of eigenfunctions of TK such that the corresponding sequence of eigenvalues {λi}i is nonnegative. The eigenfunctions corresponding to non-zero eigenvalues are continuous on [a, b] and K has the representation K(s, t) = Σⱼ λⱼeⱼ(s)eⱼ(t), where the convergence is absolute and uniform. Details We now explain in greater detail the structure of the proof of Mercer's theorem, particularly how it relates to spectral theory of compact operators. The map K ↦ TK is injective. 
TK is a non-negative symmetric compact operator on L2[a,b]; moreover K(x, x) ≥ 0. To show compactness, show that the image of the unit ball of L2[a,b] under TK is equicontinuous and apply Ascoli's theorem, to show that the image of the unit ball is relatively compact in C([a,b]) with the uniform norm and a fortiori in L2[a,b]. Now apply the spectral theorem for compact operators on Hilbert spaces to TK to show the existence of the orthonormal basis {ei}i of L2[a,b]. If λi ≠ 0, the eigenvector (eigenfunction) ei is seen to be continuous on [a,b]. Now the sum Σi λiei(x)ei(y) converges absolutely and uniformly to a kernel K0, which is easily seen to define the same operator as the kernel K. Hence K = K0, from which Mercer's theorem follows. Finally, to show non-negativity of the eigenvalues one can write λi = ⟨TKei, ei⟩ and express the right hand side as an integral well-approximated by its Riemann sums, which are non-negative by positive-definiteness of K, implying ⟨TKei, ei⟩ ≥ 0, implying λi ≥ 0. Trace
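The theorem can be observed numerically by discretizing the operator TK on a grid. The sketch below is our own (not from the article), using NumPy and the kernel K(x, y) = min(x, y) on [0, 1] — the Brownian-motion covariance, a standard positive-definite example. It confirms that the eigenvalues are nonnegative and that the Mercer sum reconstructs K:

```python
# Our own numerical sketch of Mercer's theorem for K(x, y) = min(x, y) on [0, 1].
import numpy as np

n = 200
x = (np.arange(n) + 0.5) / n           # midpoint grid on [0, 1]
K = np.minimum.outer(x, x)             # kernel matrix, positive-definite
w = 1.0 / n                            # quadrature weight

# Discretized operator: (T_K f)(x_i) ~ sum_j w * K(x_i, x_j) f(x_j)
lam, E = np.linalg.eigh(K * w)         # symmetric eigendecomposition
assert lam.min() > -1e-12              # eigenvalues are (numerically) nonnegative

# The Mercer sum  sum_i lam_i e_i(x) e_i(y)  reconstructs K on the grid
e = E / np.sqrt(w)                     # L2-normalized eigenfunctions
K_rec = (e * lam) @ e.T
print(np.abs(K_rec - K).max())         # close to machine precision
```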
https://en.wikipedia.org/wiki/Censoring
Censoring may refer to:
Censoring (statistics)
Censorship
Internet censorship
https://en.wikipedia.org/wiki/Riccati%20equation
In mathematics, a Riccati equation in the narrowest sense is any first-order ordinary differential equation that is quadratic in the unknown function. In other words, it is an equation of the form y′(x) = q₀(x) + q₁(x)y(x) + q₂(x)y(x)², where q₀(x) ≠ 0 and q₂(x) ≠ 0. If q₀(x) = 0 the equation reduces to a Bernoulli equation, while if q₂(x) = 0 the equation becomes a first order linear ordinary differential equation. The equation is named after Jacopo Riccati (1676–1754). More generally, the term Riccati equation is used to refer to matrix equations with an analogous quadratic term, which occur in both continuous-time and discrete-time linear-quadratic-Gaussian control. The steady-state (non-dynamic) version of these is referred to as the algebraic Riccati equation. Conversion to a second order linear equation The non-linear Riccati equation can always be converted to a second order linear ordinary differential equation (ODE): If y′ = q₀(x) + q₁(x)y + q₂(x)y², then, wherever q₂ is non-zero and differentiable, v = yq₂ satisfies a Riccati equation of the form v′ = v² + R(x)v + S(x), where S = q₂q₀ and R = q₁ + q₂′/q₂, because v′ = (yq₂)′ = y′q₂ + yq₂′ = (q₀ + q₁y + q₂y²)q₂ + v(q₂′/q₂) = q₂q₀ + (q₁ + q₂′/q₂)v + v². Substituting v = −u′/u, it follows that u satisfies the linear 2nd order ODE u″ − R(x)u′ + S(x)u = 0, since v′ = −(u′/u)′ = −(u″/u) + v², so that u″/u = v² − v′ = −S − Rv = −S + R(u′/u), and hence u″ − Ru′ + Su = 0. A solution of this equation will lead to a solution y = −u′/(q₂u) of the original Riccati equation. Application to the Schwarzian equation An important application of the Riccati equation is to the 3rd order Schwarzian differential equation S(w) := (w″/w′)′ − (w″/w′)²/2 = f, which occurs in the theory of conformal mapping and univalent functions. In this case the ODEs are in the complex domain and differentiation is with respect to a complex variable. (The Schwarzian derivative S(w) has the remarkable property that it is invariant under Möbius transformations, i.e. S((aw + b)/(cw + d)) = S(w) whenever ad − bc is non-zero.) The function y = w″/w′ satisfies the Riccati equation y′ = y²/2 + f. By the above y = −2u′/u, where u is a solution of the linear ODE u″ + (f/2)u = 0. Since w″/w′ = −2u′/u, integration gives w′ = C/u² for some constant C. On the other hand any other independent solution U of the linear ODE has constant non-zero Wronskian U′u − Uu′, which can be taken to be C after scaling. 
Thus w′ = (U′u − Uu′)/u², so that the Schwarzian equation has solution w = U/u. Obtaining solutions by quadrature The correspondence between Riccati equations and second-order linear ODEs has other consequences. For example, if one solution of a 2nd order ODE is known, then it is known that another solution can be obtained by quadrature, i.e., a simple integration. The same holds true for the Riccati equation. In fact, if one particular solution y₁ can be found, the general solution is obtained as y = y₁ + u. Substituting y = y₁ + u in the Riccati equation yields u′ = (q₁ + 2q₂y₁)u + q₂u², and since y₁′ = q₀ + q₁y₁ + q₂y₁², it follows that u′ − (q₁ + 2q₂y₁)u = q₂u², which is a Bernoulli equation. The substitution that is needed to solve this Bernoulli equation is z = 1/u. Substituting y = y₁ + 1/z directly into the Riccati equation yields the linear equation z′ + (q₁ + 2q₂y₁)z = −q₂. A set of solutions to the Riccati equation is then given by y = y₁ + 1/z, where z is the general solution to the aforementioned linear equation. See also Linear-quadratic regulator Algebraic Riccati equation Linear-quadratic-Gaussian control References Further reading External links Riccati Equation at EqWorld: The World of Mathematical Equ
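As a concrete check (our own sketch, not from the article): for q₀ = q₂ = 1 and q₁ = 0, the linearization above gives u″ + u = 0, so u = cos x and y = −u′/u = tan x solves y′ = 1 + y². A short Runge–Kutta integration confirms this:

```python
# Our own sketch: integrate the Riccati equation y' = 1 + y^2, y(0) = 0,
# whose exact solution is y = tan(x), and compare at x = 1.
import math

def rk4(f, y, x, h, steps):
    """Classical 4th-order Runge-Kutta for a scalar ODE y' = f(x, y)."""
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h*k1/2)
        k3 = f(x + h/2, y + h*k2/2)
        k4 = f(x + h, y + h*k3)
        y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        x += h
    return y

f = lambda x, y: 1 + y*y               # Riccati right-hand side (q0 = q2 = 1, q1 = 0)
y1 = rk4(f, 0.0, 0.0, 0.001, 1000)     # integrate from x = 0 to x = 1
print(abs(y1 - math.tan(1.0)))         # tiny
```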
https://en.wikipedia.org/wiki/QR%20decomposition
In linear algebra, a QR decomposition, also known as a QR factorization or QU factorization, is a decomposition of a matrix A into a product A = QR of an orthonormal matrix Q and an upper triangular matrix R. QR decomposition is often used to solve the linear least squares problem and is the basis for a particular eigenvalue algorithm, the QR algorithm. Cases and definitions Square matrix Any real square matrix A may be decomposed as A = QR, where Q is an orthogonal matrix (its columns are orthogonal unit vectors, meaning QᵀQ = QQᵀ = I) and R is an upper triangular matrix (also called right triangular matrix). If A is invertible, then the factorization is unique if we require the diagonal elements of R to be positive. If instead A is a complex square matrix, then there is a decomposition A = QR where Q is a unitary matrix (so Q*Q = QQ* = I). If A has n linearly independent columns, then the first n columns of Q form an orthonormal basis for the column space of A. More generally, the first k columns of Q form an orthonormal basis for the span of the first k columns of A for any 1 ≤ k ≤ n. The fact that any column k of A only depends on the first k columns of Q corresponds to the triangular form of R. Rectangular matrix More generally, we can factor a complex m×n matrix A, with m ≥ n, as the product of an m×m unitary matrix Q and an m×n upper triangular matrix R. As the bottom (m−n) rows of an m×n upper triangular matrix consist entirely of zeroes, it is often useful to partition R, or both R and Q: A = QR = Q[R₁; 0] = [Q₁, Q₂][R₁; 0] = Q₁R₁, where R₁ is an n×n upper triangular matrix, 0 is an (m−n)×n zero matrix, Q₁ is m×n, Q₂ is m×(m−n), and Q₁ and Q₂ both have orthogonal columns. Golub and Van Loan call Q₁R₁ the thin QR factorization of A; Trefethen and Bau call this the reduced QR factorization. If A is of full rank n and we require that the diagonal elements of R₁ are positive then R₁ and Q₁ are unique, but in general Q₂ is not. R₁ is then equal to the upper triangular factor of the Cholesky decomposition of A*A (= AᵀA if A is real). 
QL, RQ and LQ decompositions Analogously, we can define QL, RQ, and LQ decompositions, with L being a lower triangular matrix. Computing the QR decomposition There are several methods for actually computing the QR decomposition, such as by means of the Gram–Schmidt process, Householder transformations, or Givens rotations. Each has a number of advantages and disadvantages. Using the Gram–Schmidt process Consider the Gram–Schmidt process applied to the columns of the full column rank matrix A = [a₁, ..., aₙ], with inner product ⟨v, w⟩ = vᵀw (or ⟨v, w⟩ = v*w for the complex case). Define the projection proj_u(a) = (⟨u, a⟩/⟨u, u⟩)u; then: u₁ = a₁, u₂ = a₂ − proj_{u₁}(a₂), and in general uₖ = aₖ − proj_{u₁}(aₖ) − ... − proj_{uₖ₋₁}(aₖ), with eₖ = uₖ/‖uₖ‖. We can now express the aₖ's over our newly computed orthonormal basis: aₖ = Σᵢ₌₁ᵏ ⟨eᵢ, aₖ⟩eᵢ. This can be written in matrix form: A = QR, where Q = [e₁, ..., eₙ] and Rᵢⱼ = ⟨eᵢ, aⱼ⟩ for i ≤ j (and 0 for i > j). Example Consider the decomposition of A = [[12, −51, 4], [6, 167, −68], [−4, 24, −41]]. Recall that an orthonormal matrix Q has the property QᵀQ = I. Then, we can calculate Q by means of Gram–Schmidt as follows: e₁ = (6/7, 3/7, −2/7), e₂ = (−69/175, 158/175, 6/35), e₃ = (−58/175, 6/175, −33/35). Thus, we have Q = [e₁, e₂, e₃] and R = QᵀA = [[14, 21, −14], [0, 175, −70], [0, 0, 35]]. Relation to RQ decomposition The RQ decomposition transforms a matrix A into the product of an upper triangular matrix R (also known as right-triangular) and an or
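The Gram–Schmidt procedure translates directly into code. The following is our own sketch (not the article's), implementing the classical variant with NumPy and checking it on a small 3×3 matrix with full column rank:

```python
# Our own sketch of QR via classical Gram-Schmidt, for a real matrix
# with full column rank.
import numpy as np

def qr_gram_schmidt(A):
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        v = A[:, k].copy()
        for i in range(k):                 # subtract projections onto e_1..e_{k-1}
            R[i, k] = Q[:, i] @ A[:, k]
            v -= R[i, k] * Q[:, i]
        R[k, k] = np.linalg.norm(v)
        Q[:, k] = v / R[k, k]
    return Q, R

A = np.array([[12., -51., 4.], [6., 167., -68.], [-4., 24., -41.]])
Q, R = qr_gram_schmidt(A)
print(np.allclose(Q @ R, A))               # True
print(np.allclose(Q.T @ Q, np.eye(3)))     # True
```

In floating point, classical Gram–Schmidt gradually loses orthogonality on ill-conditioned matrices; modified Gram–Schmidt or the Householder approach mentioned above is preferred in practice.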
https://en.wikipedia.org/wiki/Multiset
In mathematics, a multiset (or bag, or mset) is a modification of the concept of a set that, unlike a set, allows for multiple instances for each of its elements. The number of instances given for each element is called the multiplicity of that element in the multiset. As a consequence, an infinite number of multisets exist which contain only elements a and b, but vary in the multiplicities of their elements: The set {a, b} contains only elements a and b, each having multiplicity 1 when {a, b} is seen as a multiset. In the multiset {a, a, b}, the element a has multiplicity 2, and b has multiplicity 1. In the multiset {a, a, a, b, b, b}, a and b both have multiplicity 3. These objects are all different when viewed as multisets, although they are the same set, since they all consist of the same elements. As with sets, and in contrast to tuples, the order in which elements are listed does not matter in discriminating multisets, so {a, a, b} and {a, b, a} denote the same multiset. To distinguish between sets and multisets, a notation that incorporates square brackets is sometimes used: the multiset {a, a, b} can be denoted by [a, a, b]. The cardinality of a multiset is the sum of the multiplicities of all its elements. For example, in the multiset {a, a, b, b, b, c} the multiplicities of the members a, b, and c are respectively 2, 3, and 1, and therefore the cardinality of this multiset is 6. Nicolaas Govert de Bruijn coined the word multiset in the 1970s, according to Donald Knuth. However, the concept of multisets predates the coinage of the word multiset by many centuries. Knuth himself attributes the first study of multisets to the Indian mathematician Bhāskarāchārya, who described permutations of multisets around 1150. Other names have been proposed or used for this concept, including list, bunch, bag, heap, sample, weighted set, collection, and suite. History Wayne Blizard traced multisets back to the very origin of numbers, arguing that "in ancient times, the number n was often represented by a collection of n strokes, tally marks, or units." 
These and similar collections of objects can be regarded as multisets, because strokes, tally marks, or units are considered indistinguishable. This shows that people implicitly used multisets even before mathematics emerged. Practical needs for this structure have caused multisets to be rediscovered several times, appearing in literature under different names. For instance, they were important in early AI languages, such as QA4, where they were referred to as bags, a term attributed to Peter Deutsch. A multiset has also been called an aggregate, heap, bunch, sample, weighted set, occurrence set, and fireset (finitely repeated element set). Although multisets were used implicitly from ancient times, their explicit exploration happened much later. The first known study of multisets is attributed to the Indian mathematician Bhāskarāchārya circa 1150, who described permutations of multisets. The work of Marius Nizolius (1498–1576) contains another early reference to the concept of multisets. Athanas
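In day-to-day programming the multiset structure is available off the shelf; Python's collections.Counter, for example, maps each element to its multiplicity (our own illustration, not from the article):

```python
# Our own illustration: collections.Counter as a multiset.
from collections import Counter

m = Counter("aababb")        # the multiset {a, a, a, b, b, b} built from a string
print(m["a"], m["b"])        # 3 3 -- both elements have multiplicity 3
print(sum(m.values()))       # 6  -- cardinality = sum of the multiplicities

# order of listing does not matter; only the multiplicities do
assert Counter(["a", "b", "a"]) == Counter(["b", "a", "a"])
```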
https://en.wikipedia.org/wiki/Quadratic%20irrational%20number
In mathematics, a quadratic irrational number (also known as a quadratic irrational or quadratic surd) is an irrational number that is the solution to some quadratic equation with rational coefficients which is irreducible over the rational numbers. Since fractions in the coefficients of a quadratic equation can be cleared by multiplying both sides by their least common denominator, a quadratic irrational is an irrational root of some quadratic equation with integer coefficients. The quadratic irrational numbers, a subset of the complex numbers, are algebraic numbers of degree 2, and can therefore be expressed as (a + b√c)/d for integers a, b, c, d; with b, c and d non-zero, and with c square-free. When c is positive, we get real quadratic irrational numbers, while a negative c gives complex quadratic irrational numbers which are not real numbers. This defines an injection from the quadratic irrationals to quadruples of integers, so their cardinality is at most countable; since on the other hand every square root of a prime number is a distinct quadratic irrational, and there are countably many prime numbers, they are at least countable; hence the quadratic irrationals are a countable set. Quadratic irrationals are used in field theory to construct field extensions of the field of rational numbers ℚ. Given the square-free integer c, the augmentation of ℚ by quadratic irrationals using √c produces a quadratic field ℚ(√c). For example, the inverses of elements of ℚ(√c) are of the same form as the above algebraic numbers: d/(a + b√c) = (ad − bd√c)/(a² − b²c). Quadratic irrationals have useful properties, especially in relation to continued fractions, where we have the result that all real quadratic irrationals, and only real quadratic irrationals, have periodic continued fraction forms. For example, √2 has the eventually periodic continued fraction expansion [1; 2, 2, 2, ...]. The periodic continued fractions can be placed in one-to-one correspondence with the rational numbers. The correspondence is explicitly provided by Minkowski's question mark function, and an explicit construction is given in that article. 
It is entirely analogous to the correspondence between rational numbers and strings of binary digits that have an eventually-repeating tail, which is also provided by the question mark function. Such repeating sequences correspond to periodic orbits of the dyadic transformation (for the binary digits) and the Gauss map for continued fractions. Real quadratic irrational numbers and indefinite binary quadratic forms We may rewrite a quadratic irrationality as follows: It follows that every quadratic irrational number can be written in the form This expression is not unique. Fix a non-square, positive integer congruent to or modulo , and define a set as Every quadratic irrationality is in some set , since the congruence conditions can be met by scaling the numerator and denominator by an appropriate factor. A matrix with integer entries and can be used to transform a number in . The transformed number is If is in , then is too. The relation between and above is an equivale
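The periodicity of the continued fraction of a real quadratic irrational is easy to observe numerically. The sketch below is our own (the function name is made up); it runs the standard integer-only recurrence for the continued fraction of √d:

```python
# Our own sketch: continued fraction terms of sqrt(d) via the standard
# integer recurrence m' = q*a - m, q' = (d - m'^2)/q, a' = (a0 + m')//q'.
import math

def cf_sqrt(d, terms):
    a0 = math.isqrt(d)
    if a0 * a0 == d:
        raise ValueError("d must not be a perfect square")
    m, q, a = 0, 1, a0
    out = [a0]
    for _ in range(terms - 1):
        m = q * a - m
        q = (d - m * m) // q
        a = (a0 + m) // q
        out.append(a)
    return out

print(cf_sqrt(23, 9))   # [4, 1, 3, 1, 8, 1, 3, 1, 8] -- repeating block 1, 3, 1, 8
```

For √23 the expansion is [4; 1, 3, 1, 8, 1, 3, 1, 8, ...]: the periodic tail appears immediately, as the theorem guarantees for every real quadratic irrational.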
https://en.wikipedia.org/wiki/Steinhaus%E2%80%93Moser%20notation
In mathematics, Steinhaus–Moser notation is a notation for expressing certain large numbers. It is an extension (devised by Leo Moser) of Hugo Steinhaus's polygon notation. Definitions A number n in a triangle means nⁿ. A number n in a square is equivalent to "the number n inside n triangles, which are all nested." A number n in a pentagon is equivalent with "the number n inside n squares, which are all nested." etc.: n written in an (m + 1)-sided polygon is equivalent with "the number n inside n nested m-sided polygons". In a series of nested polygons, they are associated inward. The number n inside two triangles is equivalent to nⁿ inside one triangle, which is equivalent to nⁿ raised to the power of nⁿ. Steinhaus defined only the triangle, the square, and the circle, which is equivalent to the pentagon defined above. Special values Steinhaus defined: mega is the number equivalent to 2 in a circle: ②; megiston is the number equivalent to 10 in a circle: ⑩. Moser's number is the number represented by "2 in a megagon". Megagon is here the name of a polygon with "mega" sides (not to be confused with the polygon with one million sides). Alternative notations: use the functions square(x) and triangle(x); let M(n, m, p) be the number represented by the number n in m nested p-sided polygons; then the rules are M(n, 1, 3) = nⁿ, M(n, 1, p + 1) = M(n, n, p), and M(n, m + 1, p) = M(M(n, 1, p), m, p), and mega = M(2, 1, 5), megiston = M(10, 1, 5), moser = M(2, 1, mega). Mega A mega, ②, is already a very large number, since ② = square(square(2)) = square(triangle(triangle(2))) = square(triangle(2²)) = square(triangle(4)) = square(4⁴) = square(256) = triangle(triangle(triangle(...triangle(256)...))) [256 triangles] = triangle(triangle(triangle(...triangle(256²⁵⁶)...))) [255 triangles] ~ triangle(triangle(triangle(...triangle(3.2317 × 10⁶¹⁶)...))) [255 triangles] ... Using the other notation: mega = M(2,1,5) = M(256,256,3). With the function f(x) = xˣ we have mega = f²⁵⁶(256), where the superscript denotes a functional power, not a numerical power. 
We have (note the convention that powers are evaluated from right to left): M(256,2,3) = M(256,3,3) = ≈ Similarly: M(256,4,3) ≈ M(256,5,3) ≈ M(256,6,3) ≈ etc. Thus: mega = , where denotes a functional power of the function . Rounding more crudely (replacing the 257 at the end by 256), we get mega ≈ , using Knuth's up-arrow notation. After the first few steps the value of is each time approximately equal to . In fact, it is even approximately equal to (see also approximate arithmetic for very large numbers). Using base 10 powers we get: ( is added to the 616) ( is added to the , which is negligible; therefore just a 10 is added at the bottom) ... mega = , where denotes a functional power of the function . Hence Moser's number It has been proven that in Conway chained arrow notation, and, in Knuth's up-arrow notation, Therefore, Moser's number, although incomprehensibly large, is vanishingly small compared to Graham's number: See also Ackermann function References External links Robert Munafo's Large Numbers Factoid on Big Numbers Megistron
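The nesting rules translate into a tiny recursive function — our own sketch, with the triangle as the base case. Only the very smallest values are computable, since already the pentagon of 2 (mega) is astronomically large:

```python
# Our own toy sketch of Steinhaus-Moser nesting: triangle(n) = n**n, and a
# number n in a (k+1)-gon means n nested k-gons.

def polygon(k, n):
    """The number n inside a k-sided polygon (k >= 3)."""
    if k == 3:
        return n ** n                 # triangle
    for _ in range(n):                # n nested (k-1)-gons, evaluated inward
        n = polygon(k - 1, n)
    return n

print(polygon(3, 2))   # triangle(2) = 2^2 = 4
print(polygon(4, 2))   # square(2) = triangle(triangle(2)) = 4^4 = 256
# polygon(5, 2) is mega = square(square(2)) = square(256): far too large to evaluate.
```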
https://en.wikipedia.org/wiki/33%20%28number%29
33 (thirty-three) is the natural number following 32 and preceding 34. In mathematics 33 is: specifically, the 8th distinct semiprime, it being the 3rd of the form (3·q) where q is a higher prime. It also has a semiprime aliquot sum of 15, within an aliquot sequence of four members (33, 15, 9, 4, 3, 1, 0) in the prime 3-aliquot tree. the largest positive integer that cannot be expressed as a sum of different triangular numbers. It is also the largest of twelve integers that are not the sum of five non-zero squares. the smallest odd repdigit that is not a prime number. the sum of the first four positive factorials. the sum of the sum of the divisors of the first six positive integers. the sum of three cubes: equal to the sum of the squares of the digits of its own square in bases 9, 16 and 31. For numbers greater than 1, this is a rare property to have in more than one base. the first member of the first cluster of three semiprimes 33, 34, 35; the next such cluster is 85, 86, 87. It is also the smallest integer such that it and the next two integers all have the same number of divisors. the first double digit centered dodecahedral number. divisible by the number of prime numbers (11) below 33. a palindrome in both decimal and binary. A positive definite quadratic integer matrix represents all odd numbers when it contains at least the set of seven integers: {1, 3, 5, 7, 11, 15, 33}. In science The atomic number of arsenic. 33 is, according to the Newton scale, the temperature at which water boils. A normal human spine has, on average, 33 vertebrae when the bones that form the coccyx are counted individually. Astronomy Messier object M33, a magnitude 7.0 galaxy in the constellation Triangulum, also known as the Triangulum Galaxy. 
The New General Catalogue object NGC 33, a double star in the constellation Pisces The smallest dwarf planet in the Solar System is Ceres, which is also the 33rd largest celestial body in the Solar System, comprising about one-third of the mass in the asteroid belt. 33 is the number of years that it takes for the Lunar phase to return to its original position in relation to the Solar calendar. A Lunar month (Synodic) contains 29.53 days. A twelve-month lunar year contains 354.36 days. A solar year (Tropical year) totals 365.24 days. The lunar year is therefore 10.88 days shorter than the 12-month solar year. As each year passes, the lunar month trails 10.88 days behind the solar year. On the turn of the 33rd year, the lunar month is approximately 359.04 days, close to one whole year behind the solar calendar from the original position measured, thus it has a 33-year cycle in relation to the solar year. Where the lunar year is 354.36 days and the solar year is 365.24 days, the difference is 10.88 days per year, and 33 × 10.88 ≈ 359.04 days. Many cultures and civilisations have based their calendar on the lunar cycles including the Athenian Attic calendar and the Islamic Calendar, the Hijri calendar based on lunar observation. In technology In reference to gramophone records, 33 refers to those meant to be spun at 33⅓ revolutions per minute.
https://en.wikipedia.org/wiki/78%20%28number%29
78 (seventy-eight) is the natural number following 77 and followed by 79. In mathematics 78 is: the 4th discrete tri-prime, or sphenic number, and the 4th of the form 2 × 3 × r. an abundant number with an aliquot sum of 90. a semiperfect number, as a multiple of a perfect number. the 12th triangular number. a palindromic number in bases 5 (303₅), 7 (141₇), 12 (66₁₂), 25 (33₂₅), and 38 (22₃₈). a Harshad number in bases 3, 4, 5, 6, 7, 13 and 14. an Erdős–Woods number, since it is possible to find sequences of 78 consecutive integers such that each inner member shares a factor with either the first or the last member. the dimension of the exceptional Lie group E6 and several related objects. the smallest number that can be expressed as the sum of four distinct nonzero squares in more than one way: 78 = 1² + 2² + 3² + 8² = 1² + 4² + 5² + 6² = 2² + 3² + 4² + 7² (see image). 77 and 78 form a Ruth–Aaron pair. In science The atomic number of platinum. In other fields 78 is also: In reference to gramophone records, 78 refers to those meant to be spun at 78 revolutions per minute. Compare: LP, and 45 rpm. 33 + 45 = 78 A typical tarot deck containing the 21 trump cards, the Fool and the 56 suit cards make up 78 cards The Rule of 78s is a method of yearly interest calculation The number used by Martin Truex Jr. and Furniture Row Racing to win the 2017 Monster Energy NASCAR Cup Series championship and 2016 Coca-Cola 600. The team and driver Regan Smith also won the 2011 Showtime Southern 500 with 78. The number is now used by owner-driver B.J. McLeod for Live Fast Motorsports. References Integers
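The four-distinct-square representations mentioned above can be recovered with a short brute-force search (plain Python; the helper name is ours):

```python
from itertools import combinations

def four_distinct_square_sums(n: int):
    """All ways to write n as a sum of four distinct nonzero squares."""
    limit = int(n ** 0.5)
    return [combo for combo in combinations(range(1, limit + 1), 4)
            if sum(k * k for k in combo) == n]

print(four_distinct_square_sums(78))
# [(1, 2, 3, 8), (1, 4, 5, 6), (2, 3, 4, 7)]
```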
https://en.wikipedia.org/wiki/45%20%28number%29
45 (forty-five) is the natural number following 44 and preceding 46. In mathematics Forty-five is the smallest odd number that has more divisors than n + 1, and that has a larger sum of divisors than n + 1. It is the sixth positive integer with a square-prime prime factorization of the form p²q, with p and q prime, and first of the form 3²q. 45 has an aliquot sum of 33 that is part of an aliquot sequence composed of five composite numbers (45, 33, 15, 9, 4, 3, 1, and 0), all of which are rooted in the 3-aliquot tree. This is the longest aliquot sequence for an odd number up to 45. Forty-five is the sum of all single-digit decimal digits: 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 = 45. It is, equivalently, the ninth triangle number. Forty-five is also the fourth hexagonal number and the second hexadecagonal number, or 16-gonal number. It is also the second smallest triangle number (after 1 and 10) that can be written as the sum of two squares. Forty-five is the smallest positive number that can be expressed as the difference of two nonzero squares in more than two ways: 7² − 2², 9² − 6², or 23² − 22² (see image). Since the greatest prime factor of 45² + 1 = 2026 is 1,013, which is more than twice 45, 45 is a Størmer number. In decimal, 45 is a Kaprekar number and a Harshad number. Forty-five is a little Schroeder number; the next such number is 197, which is the 45th prime number. Forty-five is conjectured from the Ramsey number R(5, 5). Abstract algebra In the classification of finite simple groups, the Tits group is sometimes defined as a nonstrict group of Lie type or sporadic group, which yields a total of 45 classes of finite simple groups: two stem from cyclic and alternating groups, sixteen are families of groups of Lie type, twenty-six are strictly sporadic, and one is the exceptional case of the Tits group itself. Inside the largest sporadic group, the Friendly Giant (the monster group), there exist at least 45 conjugacy classes of maximal subgroups, which includes the double cover of the second largest sporadic group (the baby monster).
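Several of the numerical claims above are easy to verify with a few lines of Python (standard library only; largest_prime_factor is our helper):

```python
def largest_prime_factor(n: int) -> int:
    """Largest prime factor via trial division (fine for small n)."""
    d, last = 2, 1
    while n > 1:
        if n % d == 0:
            last, n = d, n // d
        else:
            d += 1
    return last

# Ninth triangular number: the sum of all single-digit decimal digits.
assert sum(range(1, 10)) == 45

# Difference of two nonzero squares in more than two ways.
ways = [(a, b) for a in range(1, 24) for b in range(1, a) if a*a - b*b == 45]
assert ways == [(7, 2), (9, 6), (23, 22)]

# Kaprekar number in decimal: 45**2 = 2025 and 20 + 25 = 45.
assert int(str(45 ** 2)[:2]) + int(str(45 ** 2)[2:]) == 45

# Stormer number: the greatest prime factor of 45**2 + 1 = 2026 exceeds 2 * 45.
assert largest_prime_factor(45 ** 2 + 1) == 1013
```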
In science The atomic number of rhodium Astronomy Messier object M45, a magnitude 1.4 open cluster in the constellation Taurus, also known as the Pleiades The New General Catalogue object NGC 45, a magnitude 10.6 spiral galaxy in the constellation Cetus In music A type of gramophone record classified by its rotational speed of 45 revolutions per minute (rpm) The group Stars on 45 and its self-titled 1981 song, "Stars on 45" "45 and Fat", a 1996 song by Babybird "Forty-Five", the title of a 2000 song by The Atomic Bitchwax "45", the title of a 2002 song by Elvis Costello, both referring to the 45 rpm singles and to the artist's age when he wrote the song. "45", the title of a 2012 song by The Gaslight Anthem "45", the title of a 2006 song by noodles "45", the title of a 2007 song by The Saturday Knights "45", the title of a 2003 song by Shinedown 45, the title of a 1982 album by Kino "Do the 45", the title of a 2007 song by Ryan Shaw is repeated continuously in the lyrics of the 1997 song "Brimful of Asha" by Cornershop In other fields
https://en.wikipedia.org/wiki/Rayleigh%20quotient
In mathematics, the Rayleigh quotient (R(M, x)) for a given complex Hermitian matrix M and nonzero vector x is defined as: R(M, x) = x*Mx / x*x. For real matrices and vectors, the condition of being Hermitian reduces to that of being symmetric, and the conjugate transpose x* to the usual transpose xᵀ. Note that R(M, cx) = R(M, x) for any non-zero scalar c. Recall that a Hermitian (or real symmetric) matrix is diagonalizable with only real eigenvalues. It can be shown that, for a given matrix, the Rayleigh quotient reaches its minimum value λmin (the smallest eigenvalue of M) when x is vmin (the corresponding eigenvector). Similarly, R(M, x) ≤ λmax and R(M, vmax) = λmax. The Rayleigh quotient is used in the min-max theorem to get exact values of all eigenvalues. It is also used in eigenvalue algorithms (such as Rayleigh quotient iteration) to obtain an eigenvalue approximation from an eigenvector approximation. The range of the Rayleigh quotient (for any matrix, not necessarily Hermitian) is called a numerical range and contains its spectrum. When the matrix is Hermitian, the numerical radius is equal to the spectral norm. Still in functional analysis, λmax is known as the spectral radius. In the context of C*-algebras or algebraic quantum mechanics, the function that to M associates the Rayleigh–Ritz quotient R(M, x) for a fixed x and M varying through the algebra would be referred to as a vector state of the algebra. In quantum mechanics, the Rayleigh quotient gives the expectation value of the observable corresponding to the operator M for a system whose state is given by ψ. If we fix the complex matrix M, then the resulting Rayleigh quotient map (considered as a function of x) completely determines M via the polarization identity; indeed, this remains true even if we allow M to be non-Hermitian. However, if we restrict the field of scalars to the real numbers, then the Rayleigh quotient only determines the symmetric part of M. Bounds for Hermitian M As stated in the introduction, for any vector x, one has R(M, x) ∈ [λmin, λmax], where λmin and λmax are respectively the smallest and largest eigenvalues of M.
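These bounds and the scaling invariance are easy to check numerically; a sketch with NumPy (the example matrix is ours):

```python
import numpy as np

def rayleigh_quotient(M, x):
    """R(M, x) = x* M x / (x* x) for a nonzero vector x."""
    x = np.asarray(x, dtype=complex)
    return (x.conj() @ M @ x) / (x.conj() @ x)

# A small real symmetric (hence Hermitian) matrix with eigenvalues 1 and 3.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(M)

# For any nonzero x the quotient lies between the extreme eigenvalues ...
x = np.array([0.3, -1.7])
r = rayleigh_quotient(M, x).real
assert eigvals[0] <= r <= eigvals[-1]

# ... is attained at the corresponding eigenvectors ...
assert np.isclose(rayleigh_quotient(M, eigvecs[:, 0]).real, eigvals[0])
assert np.isclose(rayleigh_quotient(M, eigvecs[:, -1]).real, eigvals[-1])

# ... and is unchanged by rescaling x.
assert np.isclose(rayleigh_quotient(M, 5.0 * x), rayleigh_quotient(M, x))
```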
This is immediate after observing that the Rayleigh quotient is a weighted average of eigenvalues of M: R(M, x) = (Σᵢ λᵢ yᵢ²) / (Σᵢ yᵢ²), where (λᵢ, vᵢ) is the i-th eigenpair after orthonormalization and yᵢ = vᵢ*x is the i-th coordinate of x in the eigenbasis. It is then easy to verify that the bounds are attained at the corresponding eigenvectors vmin and vmax. The fact that the quotient is a weighted average of the eigenvalues can be used to identify the second, the third, ... largest eigenvalues. Let λmax = λ₁ ≥ λ₂ ≥ ... ≥ λₙ = λmin be the eigenvalues in decreasing order. If λ₁ > λ₂ and x is constrained to be orthogonal to v₁, in which case y₁ = 0, then R(M, x) has maximum value λ₂, which is achieved when x = v₂. Special case of covariance matrices An empirical covariance matrix M can be represented as the product AᵀA of the data matrix A pre-multiplied by its transpose Aᵀ. Being a positive semi-definite matrix, M has non-negative eigenvalues, and orthogonal (or orthogonalisable) eigenvectors, which can be demonstrated as follows. Firstly, that the eigenvalues λᵢ are non-negative: M vᵢ = AᵀA vᵢ = λᵢ vᵢ implies λᵢ = ‖A vᵢ‖² / ‖vᵢ‖² ≥ 0. Secondly, that the eigenvectors are
https://en.wikipedia.org/wiki/Markov%20property
In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process, which means that its future evolution is independent of its history. It is named after the Russian mathematician Andrey Markov. The term strong Markov property is similar to the Markov property, except that the meaning of "present" is defined in terms of a random variable known as a stopping time. The term Markov assumption is used to describe a model where the Markov property is assumed to hold, such as a hidden Markov model. A Markov random field extends this property to two or more dimensions or to random variables defined for an interconnected network of items. An example of a model for such a field is the Ising model. A discrete-time stochastic process satisfying the Markov property is known as a Markov chain. Introduction A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present values) depends only upon the present state; that is, given the present, the future does not depend on the past. A process with this property is said to be Markov or Markovian and known as a Markov process. Two famous classes of Markov process are the Markov chain and Brownian motion. There is a subtle and important point that is often missed in the plain-English statement of the definition: the state space of the process is constant through time. The conditional description involves a fixed "bandwidth". For example, without this restriction we could augment any process to one which includes the complete history from a given initial condition, and it would thereby be made Markovian. But the state space would then have increasing dimensionality over time, and so does not meet the definition. History Definition Let (Ω, ℱ, P) be a probability space with a filtration (ℱₛ, s ∈ I), for some (totally ordered) index set I; and let (S, 𝒮) be a measurable space.
An (S, 𝒮)-valued stochastic process X = (Xₜ, t ∈ I) adapted to the filtration is said to possess the Markov property if, for each A ∈ 𝒮 and each s, t ∈ I with s < t, P(Xₜ ∈ A | ℱₛ) = P(Xₜ ∈ A | Xₛ). In the case where S is a discrete set with the discrete sigma algebra and I = ℕ, this can be reformulated as follows: P(Xₙ = xₙ | Xₙ₋₁ = xₙ₋₁, ..., X₀ = x₀) = P(Xₙ = xₙ | Xₙ₋₁ = xₙ₋₁). Alternative formulations Alternatively, the Markov property can be formulated as follows: E[f(Xₜ) | ℱₛ] = E[f(Xₜ) | σ(Xₛ)] for all t ≥ s and f: S → ℝ bounded and measurable. Strong Markov property Suppose that X = (Xₜ, t ≥ 0) is a stochastic process on a probability space (Ω, ℱ, P) with natural filtration (ℱₜ)ₜ≥₀. Then for any stopping time τ on Ω, we can define the sigma algebra ℱ_τ = {A ∈ ℱ : A ∩ {τ ≤ t} ∈ ℱₜ for all t ≥ 0}. Then X is said to have the strong Markov property if, for each stopping time τ, conditional on the event {τ < ∞}, we have that for each t ≥ 0, X_{τ+t} is independent of ℱ_τ given X_τ. The strong Markov property implies the ordinary Markov property since by taking the stopping time τ = t, the ordinary Markov property can be deduced. In forecasting In the fields of predictive modelling and probabilistic forecasting, the Markov property is considered desirable since it may enable the reasoni
https://en.wikipedia.org/wiki/August%20Beer
August Beer (31 July 1825 – 18 November 1863) was a German physicist, chemist, and mathematician of Jewish descent. Biography Beer was born in Trier, where he studied mathematics and natural sciences. Beer was educated at the technical school and gymnasium of his native town until 1845, when he went to Bonn to study mathematics and the sciences under the mathematician and physicist Julius Plücker, whose assistant he became later. In 1848 he won the prize for his essay, "De Situ Axium Opticorum in Crystallis Biaxibus," and obtained the degree of Ph.D. Two years later he was appointed lecturer at the University of Bonn. In 1852, Beer published a paper on the absorption of red light in coloured aqueous solutions of various salts. Beer made use of the fact, derived from Bouguer's and Lambert's absorption laws, that the intensity of light transmitted through a solution at a given wavelength decreases exponentially with the path length d and the concentration c of the solute (the solvent is considered non-absorbing). The “Absorption Coëfficient” that Beer defined is actually the transmittance (or transmission ratio), T = I / I0. In Beer's formulation: "the transmittance of a concentrated solution can be derived from a measurement of the transmittance of a dilute solution." The transmittance measured for any concentration and path length can be normalized to the corresponding transmittance for a standard concentration and path length. Beer conducted a number of experiments to confirm this empirical law, and defined a standard concentration of 10% and a standard path length of 10 cm. The photometer devised by Beer is shown in the gallery below. Beer continued to publish the results of his scientific labors, writing in 1854 Einleitung in die höhere Optik (Introduction to the Higher Optics). His findings, together with those of Johann Heinrich Lambert, make up the Beer–Lambert law. In 1855 he was appointed professor of mathematics at the University of Bonn.
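In modern decadic form, the exponential relation Beer observed is usually written as an absorbance A = εcl with transmittance T = 10^(−A); a small sketch (the numbers are illustrative, not Beer's data):

```python
import math

def absorbance(epsilon: float, c: float, l: float) -> float:
    """A = epsilon * c * l (epsilon in L/(mol*cm), c in mol/L, l in cm)."""
    return epsilon * c * l

def transmittance(A: float) -> float:
    """Decadic transmittance T = I / I0 = 10**(-A)."""
    return 10.0 ** (-A)

A = absorbance(epsilon=1000.0, c=5e-4, l=1.0)   # A = 0.5
assert math.isclose(transmittance(A), 10 ** -0.5)

# Halving the concentration while doubling the path length leaves A unchanged,
# which is the normalization between dilute and concentrated solutions
# that Beer exploited.
assert math.isclose(absorbance(1000.0, 2.5e-4, 2.0), A)
```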
Beer also wrote "Einheit in der Electrostatik," published two years after his death. He died in Bonn in 1863. Beer's law Beer's law, also called the Beer-Lambert law, in spectroscopy, is the physical law stating that the quantity of light absorbed by a substance dissolved in a nonabsorbing solvent is directly proportional to the concentration of the substance and the path length of the light through the solution. Beer's law is commonly written in the form A = εcl, where A is the absorbance, c is the concentration in moles per liter, l is the path length in centimeters, and ε is a constant of proportionality known as the molar extinction coefficient. The law is accurate only for dilute solutions; deviations from the law occur in concentrated solutions because of interactions between molecules of the solute, the substance dissolved in the solvent. Gallery Selected writings Notes References In Greenfield, E. V. (1922). Technical and scientific German. Boston: D.C. Heath & Co. External
https://en.wikipedia.org/wiki/SLE%20%28disambiguation%29
SLE may refer to: Medicine Systemic lupus erythematosus, an autoimmune disease St. Louis encephalitis, a mosquito-borne disease Science and mathematics Semiconductor luminescence equations Sea level equation, following post-glacial rebound Schramm–Loewner evolution in statistical mechanics Transportation McNary Field, airport in Salem, Oregon, US, IATA code Seletar Expressway, Singapore Sleeper Either Class, a type of railway car Shore Line East commuter rail service in Connecticut, USA Other Sara Lee Corporation, NYSE symbol Separate legal entity in US Single loss expectancy for risk on an asset Societas Linguistica Europaea, a linguistics society Sri Lankan English Sierra Leone, ISO-3166-1 alpha-3 country code Sierra Leonean leone currency code Spearhead Land Element of UK armed forces Supported leading edge kite, a type of power kite SuSE SLE operating system Sensory Logical Extrovert in socionics The Space Link Extension of CCSDS
https://en.wikipedia.org/wiki/Gazetteer
A gazetteer is a geographical dictionary or directory used in conjunction with a map or atlas. It typically contains information concerning the geographical makeup, social statistics and physical features of a country, region, or continent. Content of a gazetteer can include a subject's location, dimensions of peaks and waterways, population, gross domestic product and literacy rate. This information is generally divided into topics with entries listed in alphabetical order. Ancient Greek gazetteers are known to have existed since the Hellenistic era. The first known Chinese gazetteer was released by the first century, and with the age of print media in China by the ninth century, the Chinese gentry became invested in producing gazetteers for their local areas as a source of information as well as local pride. The geographer Stephanus of Byzantium wrote a geographical dictionary (which currently has missing parts) in the sixth century which influenced later European compilers. Modern gazetteers can be found in reference sections of most libraries as well as on the internet. Etymology The Oxford English Dictionary defines a "gazetteer" as a "geographical index or dictionary". It includes as an example a work by the British historian Laurence Echard (d. 1730) in 1693 that bore the title "The Gazetteer's: or Newsman's Interpreter: Being a Geographical Index". Echard wrote that the title "Gazetteer's" was suggested to him by a "very eminent person" whose name he chose not to disclose. For Part II of this work published in 1704, Echard referred to the book simply as "the Gazeteer". This marked the introduction of the word "gazetteer" into the English language. Historian Robert C. White suggests that the "very eminent person" written of by Echard was his colleague Edmund Bohun, and that Echard chose not to mention Bohun because Bohun had become associated with the Jacobite movement.
Since the 18th century, the word "gazetteer" has been used interchangeably to define either its traditional meaning (i.e., a geographical dictionary or directory) or a daily newspaper, such as the London Gazetteer. Types and organization Gazetteers are often categorized by the type, and scope, of the information presented. World gazetteers usually consist of an alphabetical listing of countries, with pertinent statistics for each one, with some gazetteers listing information on individual cities, towns, villages, and other settlements of varying sizes. Short-form gazetteers, often used in conjunction with computer mapping and GIS systems, may simply contain a list of place-names together with their locations in latitude and longitude or other spatial referencing systems (e.g., British National Grid reference). Short-form gazetteers appear as a place–name index in the rear of major published atlases. Descriptive gazetteers may include lengthy textual descriptions of the places they contain, including explanation of industries, government, geography, together with historical perspectives, m
https://en.wikipedia.org/wiki/Dual%20group
In mathematics, the dual group may refer to: Pontryagin dual, of a locally compact abelian group Langlands dual, of a reductive algebraic group The dual group in the Deligne–Lusztig theory
https://en.wikipedia.org/wiki/Moritz%20Cantor
Moritz Benedikt Cantor (23 August 1829 – 10 April 1920) was a German historian of mathematics. Biography Cantor was born at Mannheim. He came from a Sephardi Jewish family that had emigrated to the Netherlands from Portugal, another branch of which had established itself in Russia. In his early youth, Moritz Cantor was not strong enough to go to school, and his parents decided to educate him at home. Later, however, he was admitted to an advanced class of the Gymnasium in Mannheim. From there he went to the University of Heidelberg in 1848, and soon after to the University of Göttingen, where he studied under Gauss and Weber, and where Stern awakened in him a strong interest in historical research. After obtaining his PhD at the University of Heidelberg in 1851, he went to Berlin, where he eagerly followed the lectures of Peter Gustav Lejeune Dirichlet; and upon his return to Heidelberg in 1853, he was appointed privat-docent at the university. In 1863, he was promoted to the position of assistant professor, and in 1877 he became honorary professor. Cantor was one of the founders of the Kritische Zeitschrift für Chemie, Physik und Mathematik. In 1859 he became associated with Schlömilch as editor of the Zeitschrift für Mathematik und Physik, taking charge of the historical and literary section. Since 1877, through his efforts, a supplement to the Zeitschrift was published under the separate title of Abhandlungen zur Geschichte der Mathematik. Cantor's inaugural dissertation, "Über ein weniger gebräuchliches Coordinaten-System" (1851), gave no indication that the history of exact sciences would soon be enriched by a master work by him. His first important work was "Über die Einführung unserer gegenwärtigen Ziffern in Europa", which he wrote for the Zeitschrift für Mathematik und Physik, 1856, vol. i. His greatest work was Vorlesungen über Geschichte der Mathematik. 
This comprehensive history of mathematics appeared as follows: Volume 1 (1880) - From the earliest times until 1200 Volume 2 (1892) - From 1200 to 1668 Volume 3 (1894-1896) - From 1668 to 1758 Volume 4 (1908) (with nine collaborators, Cantor as editor) - From 1759 to 1799 Many historians credit him for founding a new discipline in a field that had hitherto lacked the sound, conscientious, and critical methods of other fields of history. In 1900 Moritz Cantor received the honor of giving a plenary address at the International Congress of Mathematicians in Paris (Sur l'historiographie des mathématiques). References Sources Jewish Encyclopedia, 1906 External links Florian Cajori, Moritz Cantor, The historian of mathematics, Bull. Amer. Math. Soc. 26 (1920), pp. 21-28. 1829 births 1920 deaths 19th-century German mathematicians 20th-century German mathematicians German historians of mathematics Heidelberg University alumni University of Göttingen alumni German Sephardi Jews Scientists from Mannheim People from the Grand Duchy of Baden Mathematicians from the German Em
https://en.wikipedia.org/wiki/Quartic%20function
In algebra, a quartic function is a function of the form f(x) = ax⁴ + bx³ + cx² + dx + e, where a is nonzero, which is defined by a polynomial of degree four, called a quartic polynomial. A quartic equation, or equation of the fourth degree, is an equation that equates a quartic polynomial to zero, of the form ax⁴ + bx³ + cx² + dx + e = 0, where a ≠ 0. The derivative of a quartic function is a cubic function. Sometimes the term biquadratic is used instead of quartic, but, usually, biquadratic function refers to a quadratic function of a square (or, equivalently, to the function defined by a quartic polynomial without terms of odd degree), having the form f(x) = ax⁴ + cx² + e. Since a quartic function is defined by a polynomial of even degree, it has the same infinite limit when the argument goes to positive or negative infinity. If a is positive, then the function increases to positive infinity at both ends; and thus the function has a global minimum. Likewise, if a is negative, it decreases to negative infinity and has a global maximum. In both cases it may or may not have another local maximum and another local minimum. The degree four (quartic case) is the highest degree such that every polynomial equation can be solved by radicals, according to the Abel–Ruffini theorem. History Lodovico Ferrari is credited with the discovery of the solution to the quartic in 1540, but since this solution, like all algebraic solutions of the quartic, requires the solution of a cubic to be found, it could not be published immediately. The solution of the quartic was published together with that of the cubic by Ferrari's mentor Gerolamo Cardano in the book Ars Magna. The Soviet historian I. Y. Depman (ru) claimed that even earlier, in 1486, Spanish mathematician Valmes was burned at the stake for claiming to have solved the quartic equation. Inquisitor General Tomás de Torquemada allegedly told Valmes that it was the will of God that such a solution be inaccessible to human understanding.
However, Petr Beckmann, who popularized this story of Depman in the West, said that it was unreliable and hinted that it may have been invented as Soviet antireligious propaganda. Beckmann's version of this story has been widely copied in several books and internet sites, usually without his reservations and sometimes with fanciful embellishments. Several attempts to find corroborating evidence for this story, or even for the existence of Valmes, have failed. The proof that four is the highest degree of a general polynomial for which such solutions can be found was first given in the Abel–Ruffini theorem in 1824, proving that all attempts at solving the higher order polynomials would be futile. The notes left by Évariste Galois prior to dying in a duel in 1832 later led to an elegant complete theory of the roots of polynomials, of which this theorem was one result. Applications Each coordinate of the intersection points of two conic sections is a solution of a quartic equation. The same is true for the intersection of a line and a torus. It follows that
https://en.wikipedia.org/wiki/50%20%28number%29
50 (fifty) is the natural number following 49 and preceding 51. In mathematics Fifty is the smallest number that is the sum of two non-zero square numbers in two distinct ways: 50 = 1² + 7² = 5² + 5² (see image). It is also the sum of three squares, 50 = 3² + 4² + 5², and the sum of four squares, 50 = 6² + 3² + 2² + 1². It is a Harshad number. 50 is a Stirling number of the first kind: |s(5, 2)| = 50, and also a Narayana number: N(6, 3) = 50. There is no solution to the equation φ(x) = 50, making 50 a nontotient. Nor is there a solution to the equation x − φ(x) = 50, making 50 a noncototient. Fifty is the sum of the number of faces that make up the Platonic solids (4 + 6 + 8 + 12 + 20 = 50). In science The atomic number of tin The fifth magic number in nuclear physics In religion In Kabbalah, there are 50 Gates of Wisdom (or Understanding) and 50 Gates of Impurity The traditional number of years in a jubilee period. The Christian Feast of Pentecost takes place on the 50th day of the Easter Season The Jewish Pentecost takes place 50 days after the Passover feast (the holiday of Shavuoth). In Hindu tantric tradition, the number 50 holds significance as the 50 Rudras in the Malinīvijayottara correlate with the 50 phonemes of Sanskrit, as well as with the 50 severed heads worn around goddess Kali's neck. The mantra Aham ("I am"), as laid out in the Vijñāna Bhairava, represents the first अ (a) and last ह (ha) phonemes of the Sanskrit alphabet and is believed to represent ultimate reality, in accordance with its non-dual philosophy. In sports In cricket one day internationals, each side may bat for 50 overs. In other fields Fifty is: There are 50 states in the United States of America. The TV show Hawaii Five-O and its reimagined version, Hawaii Five-0, are so called because Hawaii is the last (50th) of the states to officially become a state. 5-O (Five-Oh) - Slang for police officers and/or a warning that police are approaching.
Derived from the television show Hawaii Five-O A calibre of ammunition (0.50 inches: see .50 BMG) In millimetres, the focal length of the normal lens in 35 mm photography The percentage (50%) equivalent to one half, so that the phrase "fifty-fifty" commonly expresses something divided equally in two; in business this is often denoted as being the ultimate in equal partnership In years of marriage, the gold or "golden" wedding anniversary See also List of highways numbered 50 References Integers
https://en.wikipedia.org/wiki/Connection%20%28mathematics%29
In geometry, the notion of a connection makes precise the idea of transporting local geometric objects, such as tangent vectors or tensors in the tangent space, along a curve or family of curves in a parallel and consistent manner. There are various kinds of connections in modern geometry, depending on what sort of data one wants to transport. For instance, an affine connection, the most elementary type of connection, gives a means for parallel transport of tangent vectors on a manifold from one point to another along a curve. An affine connection is typically given in the form of a covariant derivative, which gives a means for taking directional derivatives of vector fields, measuring the deviation of a vector field from being parallel in a given direction. Connections are of central importance in modern geometry in large part because they allow a comparison between the local geometry at one point and the local geometry at another point. Differential geometry embraces several variations on the connection theme, which fall into two major groups: the infinitesimal and the local theory. The local theory concerns itself primarily with notions of parallel transport and holonomy. The infinitesimal theory concerns itself with the differentiation of geometric data. Thus a covariant derivative is a way of specifying a derivative of a vector field along another vector field on a manifold. A Cartan connection is a way of formulating some aspects of connection theory using differential forms and Lie groups. An Ehresmann connection is a connection in a fibre bundle or a principal bundle by specifying the allowed directions of motion of the field. A Koszul connection is a connection which defines directional derivative for sections of a vector bundle more general than the tangent bundle. Connections also lead to convenient formulations of geometric invariants, such as the curvature (see also curvature tensor and curvature form), and torsion tensor. 
Motivation: the unsuitability of coordinates Consider the following problem. Suppose that a tangent vector to the sphere S is given at the north pole, and we are to define a manner of consistently moving this vector to other points of the sphere: a means for parallel transport. Naively, this could be done using a particular coordinate system. However, unless proper care is applied, the parallel transport defined in one system of coordinates will not agree with that of another coordinate system. A more appropriate parallel transportation system exploits the symmetry of the sphere under rotation. Given a vector at the north pole, one can transport this vector along a curve by rotating the sphere in such a way that the north pole moves along the curve without axial rolling. This latter means of parallel transport is the Levi-Civita connection on the sphere. If two different curves are given with the same initial and terminal point, and a vector v is rigidly moved along the first curve by a rotation,
https://en.wikipedia.org/wiki/Maxwell%27s%20theorem
In probability theory, Maxwell's theorem (known also as Herschel–Maxwell's theorem and Herschel–Maxwell's derivation) states that if the probability distribution of a random vector in ℝⁿ is unchanged by rotations, and if the components are independent, then the components are identically distributed and normally distributed. Equivalent statements If the probability distribution of a vector-valued random variable X = (X₁, ..., Xₙ)ᵀ is the same as the distribution of GX for every n×n orthogonal matrix G and the components are independent, then the components X₁, ..., Xₙ are normally distributed with expected value 0 and all have the same variance. This theorem is one of many characterizations of the normal distribution. The only rotationally invariant probability distributions on ℝⁿ that have independent components are multivariate normal distributions with expected value 0 and variance σ²Iₙ (where Iₙ is the n×n identity matrix), for some positive number σ². History James Clerk Maxwell proved the theorem in Proposition IV of his 1860 paper. Ten years earlier, John Herschel also proved the theorem. The logical and historical details of the theorem may be found in the references. Proof We only need to prove the theorem for the 2-dimensional case, since we can then generalize it to n dimensions by applying the theorem sequentially to each pair of coordinates. Since rotating by 90 degrees preserves the joint distribution, both X₁ and X₂ have the same probability measure; call it μ. If μ is a Dirac delta distribution at zero, then it is a Gaussian distribution, just degenerate. Now assume that it is not. By Lebesgue's decomposition theorem, we decompose it into a sum of a regular measure and an atomic measure: μ = μᵣ + μₐ. We need to show that μₐ = 0, with a proof by contradiction. Suppose μ contains an atomic part; then there exists some a such that μₐ({a}) > 0. By independence of X₁ and X₂, the conditional variable X₁ given X₂ = a is distributed the same way as X₁.
Suppose x = 0; then, since we assumed μ is not concentrated at zero, Pr(X2 ≠ 0) > 0, and so the double ray {(0, y) : y ≠ 0} has nonzero probability. Now by rotational symmetry of the joint distribution, any rotation of the double ray also has the same nonzero probability, and since any two distinct rotations (through angles in [0, π)) are disjoint, their union has infinite probability, contradiction. (If x ≠ 0, the atom of the joint distribution at the point (x, x) and its rotations, which are pairwise distinct points of equal positive probability, give a contradiction in the same way.) (As far as we can find, there is no literature about the case where μ is singularly continuous, so we will let that case go.) So now let μ have probability density function f, and the problem reduces to solving the functional equation f(x)f(y) = f(x′)f(y′) whenever x² + y² = x′² + y′². External links Maxwell's theorem in a video by 3blue1brown
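The covariance structure behind the theorem can be checked directly. The sketch below (our own, not from the article) shows that the covariance matrix σ²I₂ is fixed by every rotation G, while a diagonal covariance with unequal variances is not: the rotated components become correlated, so independent components with unequal variances cannot be rotation-invariant.

```python
import math

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def conjugate(G, S):
    # Covariance of GX when X has covariance S: G S G^T.
    return matmul(matmul(G, S), transpose(G))

iso = [[2.0, 0.0], [0.0, 2.0]]    # sigma^2 * I with sigma^2 = 2
aniso = [[1.0, 0.0], [0.0, 3.0]]  # independent but unequal variances

G = rotation(0.7)
iso_rot = conjugate(G, iso)
aniso_rot = conjugate(G, aniso)

# sigma^2 * I is invariant under every rotation ...
assert all(abs(iso_rot[i][j] - iso[i][j]) < 1e-12
           for i in range(2) for j in range(2))
# ... but the anisotropic covariance picks up a nonzero off-diagonal term,
# so the rotated components are correlated, hence not independent.
assert abs(aniso_rot[0][1]) > 0.1
```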
https://en.wikipedia.org/wiki/Heyting%20algebra
In mathematics, a Heyting algebra (also known as pseudo-Boolean algebra) is a bounded lattice (with join and meet operations written ∨ and ∧ and with least element 0 and greatest element 1) equipped with a binary operation a → b of implication such that (c ∧ a) ≤ b is equivalent to c ≤ (a → b). From a logical standpoint, A → B is by this definition the weakest proposition for which modus ponens, the inference rule A → B, A ⊢ B, is sound. Like Boolean algebras, Heyting algebras form a variety axiomatizable with finitely many equations. Heyting algebras were introduced by Arend Heyting to formalize intuitionistic logic. As lattices, Heyting algebras are distributive. Every Boolean algebra is a Heyting algebra when a → b is defined as ¬a ∨ b, as is every complete distributive lattice satisfying a one-sided infinite distributive law when a → b is taken to be the supremum of the set of all c for which c ∧ a ≤ b. In the finite case, every nonempty distributive lattice, in particular every nonempty finite chain, is automatically complete and completely distributive, and hence a Heyting algebra. It follows from the definition that 1 ≤ 0 → a, corresponding to the intuition that any proposition a is implied by a contradiction 0. Although the negation operation ¬a is not part of the definition, it is definable as a → 0. The intuitive content of ¬a is the proposition that to assume a would lead to a contradiction. The definition implies that a ∧ ¬a = 0. It can further be shown that a ≤ ¬¬a, although the converse, ¬¬a ≤ a, is not true in general, that is, double negation elimination does not hold in general in a Heyting algebra. Heyting algebras generalize Boolean algebras in the sense that Boolean algebras are precisely the Heyting algebras satisfying a ∨ ¬a = 1 (excluded middle), equivalently ¬¬a = a. Those elements of a Heyting algebra H of the form ¬a comprise a Boolean lattice, but in general this is not a subalgebra of H (see below).
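Since every finite chain is a Heyting algebra, the defining adjunction and the failure of double negation elimination can be checked exhaustively. A small sketch (function names ours, not from the article) on the three-element chain 0 < 1/2 < 1, with min as meet and max as join:

```python
chain = [0.0, 0.5, 1.0]

def imp(a, b):
    # Relative pseudo-complement on a chain: the largest c with min(c, a) <= b.
    return 1.0 if a <= b else b

def neg(a):
    # Negation is definable as a -> 0.
    return imp(a, 0.0)

# Defining property: min(c, a) <= b  iff  c <= imp(a, b).
for a in chain:
    for b in chain:
        for c in chain:
            assert (min(c, a) <= b) == (c <= imp(a, b))

# a <= not-not-a always holds ...
assert all(a <= neg(neg(a)) for a in chain)
# ... but double negation elimination fails: not-not-(1/2) = 1, not 1/2.
assert neg(neg(0.5)) == 1.0
# The chain is not Boolean: excluded middle fails at 1/2.
assert max(0.5, neg(0.5)) != 1.0
```

The same exhaustive check works for any finite distributive lattice once imp is computed as the largest c with c ∧ a ≤ b.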
Heyting algebras serve as the algebraic models of propositional intuitionistic logic in the same way Boolean algebras model propositional classical logic. The internal logic of an elementary topos is based on the Heyting algebra of subobjects of the terminal object 1 ordered by inclusion, equivalently the morphisms from 1 to the subobject classifier Ω. The open sets of any topological space form a complete Heyting algebra. Complete Heyting algebras thus become a central object of study in pointless topology. Every Heyting algebra whose set of non-greatest elements has a greatest element (and forms another Heyting algebra) is subdirectly irreducible, whence every Heyting algebra can be made subdirectly irreducible by adjoining a new greatest element. It follows that even among the finite Heyting algebras there exist infinitely many that are subdirectly irreducible, no two of which have the same equational theory. Hence no finite set of finite Heyting algebras can supply all the counterexamples to non-laws of Heyting algeb
https://en.wikipedia.org/wiki/Wigner%27s%20classification
In mathematics and theoretical physics, Wigner's classification is a classification of the nonnegative energy irreducible unitary representations of the Poincaré group which have either finite or zero mass eigenvalues. (These unitary representations are infinite-dimensional; the group is not semisimple and it does not satisfy Weyl's theorem on complete reducibility.) It was introduced by Eugene Wigner to classify particles and fields in physics—see the article particle physics and representation theory. It relies on the stabilizer subgroups of that group, dubbed the Wigner little groups of various mass states. The Casimir invariants of the Poincaré group are (in Einstein notation) PμPμ, where P is the 4-momentum operator, and WμWμ, where W is the Pauli–Lubanski pseudovector. The eigenvalues of these operators serve to label the representations. The first is associated with mass-squared and the second with helicity or spin. The physically relevant representations may thus be classified according to whether m > 0; whether m = 0 but the energy P0 > 0; or whether both vanish, Pμ = 0. Wigner found that massless particles are fundamentally different from massive particles. For the first case Note that the eigenspace (see generalized eigenspaces of unbounded operators) associated with the momentum eigenvalue P = (m, 0, 0, 0) is a representation of SO(3). In the ray interpretation, one can go over to Spin(3) instead. So, massive states are classified by an irreducible Spin(3) unitary representation that characterizes their spin, and a positive mass, m. For the second case Look at the stabilizer of the light-like vector P = (k, 0, 0, k). This is the double cover of SE(2) (see projective representation). We have two cases, one where irreps are described by an integral multiple of one-half called the helicity, and the other called the "continuous spin" representation. For the third case The only finite-dimensional unitary solution is the trivial representation called the vacuum. Massive scalar fields As an example, let us visualize the irreducible unitary representation with m > 0 and spin zero. It corresponds to the space of massive scalar fields.
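The role of the first Casimir can be illustrated numerically: the quantity E² − |p|² (in units with c = 1) is unchanged by Lorentz boosts, so it is a well-defined label for a representation. A sketch (ours, not from the article) checking this for a massless and a massive 4-momentum:

```python
import math

def boost_x(p, eta):
    # Lorentz boost with rapidity eta along the x axis; p = (E, px, py, pz), c = 1.
    E, px, py, pz = p
    return (E * math.cosh(eta) + px * math.sinh(eta),
            E * math.sinh(eta) + px * math.cosh(eta),
            py, pz)

def mass_squared(p):
    # The first Casimir eigenvalue: E^2 - |p|^2.
    E, px, py, pz = p
    return E * E - px * px - py * py - pz * pz

p = (5.0, 3.0, 0.0, 4.0)   # 25 - 9 - 16 = 0: massless
q = (5.0, 3.0, 0.0, 0.0)   # 25 - 9 = 16: mass m = 4

for eta in (0.0, 0.3, -1.2):
    assert abs(mass_squared(boost_x(p, eta)) - 0.0) < 1e-9
    assert abs(mass_squared(boost_x(q, eta)) - 16.0) < 1e-9
```

The invariance follows from cosh²η − sinh²η = 1, mirroring how PμPμ commutes with the whole Poincaré group.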
Let Mm be the hyperboloid sheet defined by ⟨p, p⟩ = m², p0 > 0. The Minkowski metric restricts to a Riemannian metric on Mm, giving Mm the metric structure of a hyperbolic space; in particular it is the hyperboloid model of hyperbolic space, see geometry of Minkowski space for proof. The Poincaré group acts on Mm because (forgetting the action of the translation subgroup with addition inside the Poincaré group) it preserves the Minkowski inner product, and an element v of the translation subgroup of the Poincaré group acts on functions on Mm by multiplication by suitable phase multipliers exp(−i⟨p, v⟩), where p ∈ Mm. These two actions can be combined in a clever way using induced representations to obtain an action of the Poincaré group on L²(Mm) that combines motions of Mm and phase multiplication. This yields an action of the Poincaré group on the space of square-integrable functions defined on the hypersurface Mm in Minkowski space. These may be viewed as measures defined on Minkowski space that are concentrated on the set defined by ⟨p, p⟩ = m², p0 > 0. The Fourier tra
https://en.wikipedia.org/wiki/Carnot%27s%20theorem%20%28inradius%2C%20circumradius%29
In Euclidean geometry, Carnot's theorem states that the sum of the signed distances from the circumcenter D to the sides of an arbitrary triangle ABC is R + r, where r is the inradius and R is the circumradius of the triangle. Here the sign of the distances is taken to be negative if and only if the open line segment DX (X = F, G, H) lies completely outside the triangle. In the diagram, DF is negative and both DG and DH are positive. The theorem is named after Lazare Carnot (1753–1823). It is used in a proof of the Japanese theorem for concyclic polygons. References Claudi Alsina, Roger B. Nelsen: When Less is More: Visualizing Basic Inequalities. MAA, 2009, p. 99 Frédéric Perrier: Carnot's Theorem in Trigonometric Disguise. The Mathematical Gazette, Volume 91, No. 520 (March 2007), pp. 115–117 (JSTOR) David Richeson: The Japanese Theorem for Nonconvex Polygons – Carnot's Theorem. Convergence, December 2013 External links Carnot's Theorem at cut-the-knot Carnot's Theorem by Chris Boucher. The Wolfram Demonstrations Project.
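The identity is easy to verify numerically. A sketch (our own; the triangle is chosen acute so that all three signed distances are positive): compute the circumcenter, the three distances from it to the side lines, and check that their sum equals R + r.

```python
import math

A, B, C = (0.0, 0.0), (4.0, 0.0), (2.0, 3.0)  # an acute triangle

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

def circumcenter(A, B, C):
    # Solve |O - A|^2 = |O - B|^2 = |O - C|^2 (two linear equations in O).
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

def line_distance(P, Q, X):
    # Distance from point X to the line through P and Q.
    px, py = P; qx, qy = Q; x, y = X
    return abs((qx - px) * (py - y) - (px - x) * (qy - py)) / dist(P, Q)

O = circumcenter(A, B, C)
R = dist(O, A)                                   # circumradius
a, b, c = dist(B, C), dist(C, A), dist(A, B)
s = (a + b + c) / 2
area = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
r = area / s                                     # inradius
total = sum(line_distance(P, Q, O) for P, Q in ((A, B), (B, C), (C, A)))

assert abs(total - (R + r)) < 1e-9               # Carnot: DF + DG + DH = R + r
```

For an obtuse triangle the distance to the side facing the obtuse angle would have to be counted with a negative sign, as the theorem's sign convention requires.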
https://en.wikipedia.org/wiki/Fran%C3%A7ois%20Vi%C3%A8te
François Viète, Seigneur de la Bigotière (; 1540 – 23 February 1603), commonly known by his mononym, Vieta, was a French mathematician whose work on new algebra was an important step towards modern algebra, due to his innovative use of letters as parameters in equations. He was a lawyer by trade, and served as a privy councillor to both Henry III and Henry IV of France. Biography Early life and education Viète was born at Fontenay-le-Comte in present-day Vendée. His grandfather was a merchant from La Rochelle. His father, Etienne Viète, was an attorney in Fontenay-le-Comte and a notary in Le Busseau. His mother was the aunt of Barnabé Brisson, a magistrate and the first president of parliament during the ascendancy of the Catholic League of France. Viète went to a Franciscan school and in 1558 studied law at Poitiers, graduating as a Bachelor of Laws in 1559. A year later, he began his career as an attorney in his native town. From the outset, he was entrusted with some major cases, including the settlement of rent in Poitou for the widow of King Francis I of France and looking after the interests of Mary, Queen of Scots. Serving Parthenay In 1564, Viète entered the service of Antoinette d'Aubeterre, Lady Soubise, wife of Jean V de Parthenay-Soubise, one of the main Huguenot military leaders and accompanied him to Lyon to collect documents about his heroic defence of that city against the troops of Jacques of Savoy, 2nd Duke of Nemours just the year before. The same year, at Parc-Soubise, in the commune of Mouchamps in present-day Vendée, Viète became the tutor of Catherine de Parthenay, Soubise's twelve-year-old daughter. He taught her science and mathematics and wrote for her numerous treatises on astronomy and trigonometry, some of which have survived. In these treatises, Viète used decimal numbers (twenty years before Stevin's paper) and he also noted the elliptic orbit of the planets, forty years before Kepler and twenty years before Giordano Bruno's death. 
Jean V de Parthenay presented him to King Charles IX of France. Viète wrote a genealogy of the Parthenay family and, following the death of Jean V de Parthenay-Soubise in 1566, his biography. In 1568, Antoinette, Lady Soubise, married her daughter Catherine to Baron Charles de Quellenec, and Viète went with Lady Soubise to La Rochelle, where he mixed with the highest Calvinist aristocracy, leaders like Coligny and Condé, and Queen Jeanne d'Albret of Navarre and her son, Henry of Navarre, the future Henry IV of France. In 1570, he refused to represent the Soubise ladies in their infamous lawsuit against the Baron De Quellenec, in which they claimed the Baron was unable (or unwilling) to provide an heir. First steps in Paris In 1571, he enrolled as an attorney in Paris, and continued to visit his student Catherine. He regularly lived in Fontenay-le-Comte, where he took on some municipal functions. He began publishing his Universalium inspectionum ad Canonem mathematicum liber singulari
https://en.wikipedia.org/wiki/Cosmic%20string
Cosmic strings are hypothetical 1-dimensional topological defects which may have formed during a symmetry-breaking phase transition in the early universe when the topology of the vacuum manifold associated to this symmetry breaking was not simply connected. Their existence was first contemplated by the theoretical physicist Tom Kibble in the 1970s. The formation of cosmic strings is somewhat analogous to the imperfections that form between crystal grains in solidifying liquids, or the cracks that form when water freezes into ice. The phase transitions leading to the production of cosmic strings are likely to have occurred during the earliest moments of the universe's evolution, just after cosmological inflation, and are a fairly generic prediction in both quantum field theory and string theory models of the early universe. Theories containing cosmic strings In string theory, the role of cosmic strings can be played by the fundamental strings (or F-strings) themselves that define the theory perturbatively, by D-strings which are related to the F-strings by weak-strong or so called S-duality, or higher-dimensional D-, NS- or M-branes that are partially wrapped on compact cycles associated to extra spacetime dimensions so that only one non-compact dimension remains. The prototypical example of a quantum field theory with cosmic strings is the Abelian Higgs model. The quantum field theory and string theory cosmic strings are expected to have many properties in common, but more research is needed to determine the precise distinguishing features. The F-strings for instance are fully quantum-mechanical and do not have a classical definition, whereas the field theory cosmic strings are almost exclusively treated classically. Dimensions Cosmic strings, if they exist, would be extremely thin with diameters of the same order of magnitude as that of a proton, i.e. , or smaller. 
Given that this scale is much smaller than any cosmological scale, these strings are often studied in the zero-width, or Nambu–Goto approximation. Under this assumption, strings behave as one-dimensional objects and obey the Nambu–Goto action, which is classically equivalent to the Polyakov action that defines the bosonic sector of superstring theory. In field theory, the string width is set by the scale of the symmetry breaking phase transition. In string theory, the string width is set (in the simplest cases) by the fundamental string scale, warp factors (associated to the spacetime curvature of an internal six-dimensional spacetime manifold) and/or the size of internal compact dimensions. (In string theory, the universe is either 10- or 11-dimensional, depending on the strength of interactions and the curvature of spacetime.) Gravitation A string is a geometrical deviation from Euclidean geometry in spacetime characterized by an angular deficit: a circle around the outside of a string would comprise a total angle less than 360°. From the general theory of relativity such a
https://en.wikipedia.org/wiki/Lindel%C3%B6f%20space
In mathematics, a Lindelöf space is a topological space in which every open cover has a countable subcover. The Lindelöf property is a weakening of the more commonly used notion of compactness, which requires the existence of a finite subcover. A hereditarily Lindelöf space is a topological space such that every subspace of it is Lindelöf. Such a space is sometimes called strongly Lindelöf, but confusingly that terminology is sometimes used with an altogether different meaning. The term hereditarily Lindelöf is more common and unambiguous. Lindelöf spaces are named after the Finnish mathematician Ernst Leonard Lindelöf. Properties of Lindelöf spaces Every compact space, and more generally every σ-compact space, is Lindelöf. In particular, every countable space is Lindelöf. A Lindelöf space is compact if and only if it is countably compact. Every second-countable space is Lindelöf, but not conversely. For example, there are many compact spaces that are not second countable. A metric space is Lindelöf if and only if it is separable, and if and only if it is second-countable. Every regular Lindelöf space is normal. Every regular Lindelöf space is paracompact. A countable union of Lindelöf subspaces of a topological space is Lindelöf. Every closed subspace of a Lindelöf space is Lindelöf. Consequently, every Fσ set in a Lindelöf space is Lindelöf. Arbitrary subspaces of a Lindelöf space need not be Lindelöf. The continuous image of a Lindelöf space is Lindelöf. The product of a Lindelöf space and a compact space is Lindelöf. The product of a Lindelöf space and a σ-compact space is Lindelöf. This is a corollary to the previous property. The product of two Lindelöf spaces need not be Lindelöf. For example, the Sorgenfrey line is Lindelöf, but the Sorgenfrey plane is not Lindelöf. In a Lindelöf space, every locally finite family of nonempty subsets is at most countable. Properties of hereditarily Lindelöf spaces A space is hereditarily Lindelöf if and only if every open subspace of it is Lindelöf.
Hereditarily Lindelöf spaces are closed under taking countable unions, subspaces, and continuous images. A regular Lindelöf space is hereditarily Lindelöf if and only if it is perfectly normal. Every second-countable space is hereditarily Lindelöf. Every countable space is hereditarily Lindelöf. Every Suslin space is hereditarily Lindelöf. Every Radon measure on a hereditarily Lindelöf space is moderated. Example: the Sorgenfrey plane is not Lindelöf The product of Lindelöf spaces is not necessarily Lindelöf. The usual example of this is the Sorgenfrey plane, which is the product of the real line under the half-open interval topology with itself. Open sets in the Sorgenfrey plane are unions of half-open rectangles that include the south and west edges and omit the north and east edges, including the northwest, northeast, and southeast corners. The antidiagonal of the Sorgenfrey plane is the set of points (x, y) such that x + y = 0. Consider the open covering of the Sorgenfrey plane which consists of:
https://en.wikipedia.org/wiki/Lower%20limit%20topology
In mathematics, the lower limit topology or right half-open interval topology is a topology defined on ℝ, the set of real numbers; it is different from the standard topology on ℝ (generated by the open intervals) and has a number of interesting properties. It is the topology generated by the basis of all half-open intervals [a,b), where a and b are real numbers. The resulting topological space is called the Sorgenfrey line after Robert Sorgenfrey or the arrow and is sometimes written ℝl. Like the Cantor set and the long line, the Sorgenfrey line often serves as a useful counterexample to many otherwise plausible-sounding conjectures in general topology. The product of ℝl with itself is also a useful counterexample, known as the Sorgenfrey plane. In complete analogy, one can also define the upper limit topology, or left half-open interval topology. Properties The lower limit topology is finer (has more open sets) than the standard topology on the real numbers (which is generated by the open intervals). The reason is that every open interval can be written as a (countably infinite) union of half-open intervals. For any real a and b, the interval [a, b) is clopen in ℝl (i.e., both open and closed). Furthermore, for all real a, the sets (−∞, a) and [a, ∞) are also clopen. This shows that the Sorgenfrey line is totally disconnected. Any compact subset of ℝl must be an at most countable set. To see this, consider a non-empty compact subset C ⊆ ℝl. Fix an x ∈ C, and consider the following open cover of C: {[x, ∞)} ∪ {(−∞, x − 1/n) : n ∈ ℕ}. Since C is compact, this cover has a finite subcover, and hence there exists a real number a(x) < x such that the interval (a(x), x] contains no point of C apart from x. This is true for all x ∈ C. Now choose a rational number q(x) ∈ (a(x), x]. Since the intervals (a(x), x], parametrized by x ∈ C, are pairwise disjoint, the function x ↦ q(x) is injective, and so C is at most countable. It can be observed that a subset C ⊆ ℝl is compact if and only if it is bounded from below and is well-ordered when endowed with the order "≥" (which in particular implies that it is bounded from above).
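The claim that the lower limit topology is finer can be made concrete: (a, b) = ⋃ₙ [a + 1/n, b). A small sketch (function name ours) that, for sample points of an open interval, exhibits a basic half-open set of the union containing them:

```python
import math

def basic_cover_index(a, b, x):
    # Smallest n >= 1 with a + 1/n <= x, so that x lies in [a + 1/n, b).
    assert a < x < b
    return max(1, math.ceil(1.0 / (x - a)))

a, b = 0.0, 1.0
for x in (0.001, 0.25, 0.5, 0.999):
    n = basic_cover_index(a, b, x)
    assert a + 1.0 / n <= x < b   # x is in the basic open set [a + 1/n, b)
# Every point of (a, b) is covered by some [a + 1/n, b), so the open
# interval is a countable union of basic sets of the lower limit topology.
```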
The name "lower limit topology" comes from the following fact: a sequence (or net) (xα) in ℝl converges to the limit L if and only if it "approaches L from the right", meaning for every ε > 0 there exists an index α0 such that L ≤ xα < L + ε for all α ≥ α0. The Sorgenfrey line can thus be used to study right-sided limits: if f : ℝ → ℝ is a function, then the ordinary right-sided limit of f at p (when the codomain carries the standard topology) is the same as the usual limit of f at p when the domain is equipped with the lower limit topology and the codomain carries the standard topology. In terms of separation axioms, ℝl is a perfectly normal Hausdorff space. In terms of countability axioms, it is first-countable and separable, but not second-countable. In terms of compactness properties, ℝl is Lindelöf and paracompact, but not σ-compact nor locally compact. ℝl is not metrizable, since separable metric spaces are second-countable. However, the topology of a Sorgenfrey line is generated by a quasimetric. ℝl is a Baire space. ℝl does n
https://en.wikipedia.org/wiki/Equaliser%20%28mathematics%29
In mathematics, an equaliser is a set of arguments where two or more functions have equal values. An equaliser is the solution set of an equation. In certain contexts, a difference kernel is the equaliser of exactly two functions. Definitions Let X and Y be sets. Let f and g be functions, both from X to Y. Then the equaliser of f and g is the set of elements x of X such that f(x) equals g(x) in Y. Symbolically: Eq(f, g) = {x ∈ X : f(x) = g(x)}. The equaliser may be denoted Eq(f, g) or a variation on that theme (such as with lowercase letters "eq"). In informal contexts, the notation {f = g} is common. The definition above used two functions f and g, but there is no need to restrict to only two functions, or even to only finitely many functions. In general, if F is a set of functions from X to Y, then the equaliser of the members of F is the set of elements x of X such that, given any two members f and g of F, f(x) equals g(x) in Y. Symbolically: Eq(F) = {x ∈ X : f(x) = g(x) for all f, g ∈ F}. This equaliser may be written as Eq(f, g, h, ...) if F is the set {f, g, h, ...}. In the latter case, one may also find {f = g = h = ···} in informal contexts. As a degenerate case of the general definition, let F be a singleton {f}. Since f(x) always equals itself, the equaliser must be the entire domain X. As an even more degenerate case, let F be the empty set. Then the equaliser is again the entire domain X, since the universal quantification in the definition is vacuously true. Difference kernels A binary equaliser (that is, an equaliser of just two functions) is also called a difference kernel. This may also be denoted DiffKer(f, g), Ker(f, g), or Ker(f − g). The last notation shows where this terminology comes from, and why it is most common in the context of abstract algebra: The difference kernel of f and g is simply the kernel of the difference f − g. Furthermore, the kernel of a single function f can be reconstructed as the difference kernel Eq(f, 0), where 0 is the constant function with value zero.
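Over a finite domain the definition, including its degenerate cases, can be computed directly; a sketch (function name ours):

```python
def equaliser(domain, *funcs):
    # Eq(F): the elements x of the domain with f(x) = g(x) for all f, g in F.
    # With zero or one function the condition is vacuous, so Eq is all of X.
    if not funcs:
        return set(domain)
    f0 = funcs[0]
    return {x for x in domain if all(f(x) == f0(x) for f in funcs[1:])}

domain = range(-5, 6)
f = lambda x: x * x
g = lambda x: 2 * x

assert equaliser(domain, f, g) == {0, 2}     # the solution set of x^2 = 2x
assert equaliser(domain, f) == set(domain)   # singleton F: the whole domain
assert equaliser(domain) == set(domain)      # empty F: vacuously the whole domain
```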
Of course, all of this presumes an algebraic context where the kernel of a function is the preimage of zero under that function; that is not true in all situations. However, the terminology "difference kernel" has no other meaning. In category theory Equalisers can be defined by a universal property, which allows the notion to be generalised from the category of sets to arbitrary categories. In the general context, X and Y are objects, while f and g are morphisms from X to Y. These objects and morphisms form a diagram in the category in question, and the equaliser is simply the limit of that diagram. In more explicit terms, the equaliser consists of an object E and a morphism eq : E → X satisfying f ∘ eq = g ∘ eq, and such that, given any object O and morphism m : O → X, if f ∘ m = g ∘ m, then there exists a unique morphism u : O → E such that eq ∘ u = m. A morphism m is said to equalise f and g if f ∘ m = g ∘ m. In any universal algebraic category, including the categories where difference kernels are used, as well as the category of sets itself, the object E can a
https://en.wikipedia.org/wiki/Distributive%20lattice
In mathematics, a distributive lattice is a lattice in which the operations of join and meet distribute over each other. The prototypical examples of such structures are collections of sets for which the lattice operations can be given by set union and intersection. Indeed, these lattices of sets describe the scenery completely: every distributive lattice is—up to isomorphism—given as such a lattice of sets. Definition As in the case of arbitrary lattices, one can choose to consider a distributive lattice L either as a structure of order theory or of universal algebra. Both views and their mutual correspondence are discussed in the article on lattices. In the present situation, the algebraic description appears to be more convenient. A lattice (L,∨,∧) is distributive if the following additional identity holds for all x, y, and z in L: x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z). Viewing lattices as partially ordered sets, this says that the meet operation preserves non-empty finite joins. It is a basic fact of lattice theory that the above condition is equivalent to its dual: x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)   for all x, y, and z in L. In every lattice, if one defines the order relation p≤q as usual to mean p∧q=p, then the inequality x ∧ (y ∨ z) ≥ (x ∧ y) ∨ (x ∧ z) and its dual x ∨ (y ∧ z) ≤ (x ∨ y) ∧ (x ∨ z) are always true. A lattice is distributive if one of the converse inequalities holds, too. More information on the relationship of this condition to other distributivity conditions of order theory can be found in the article Distributivity (order theory). Morphisms A morphism of distributive lattices is just a lattice homomorphism as given in the article on lattices, i.e. a function that is compatible with the two lattice operations. Because such a morphism of lattices preserves the lattice structure, it will consequently also preserve the distributivity (and thus be a morphism of distributive lattices). 
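Both distributive laws can be verified exhaustively in a small example; a sketch (ours) using the lattice of positive integers under divisibility mentioned among the examples below, with gcd as meet and lcm as join:

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

meet, join = gcd, lcm

rng = range(1, 31)
for x in rng:
    for y in rng:
        for z in rng:
            # x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
            assert meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
            # ... and the equivalent dual law
            assert join(x, meet(y, z)) == meet(join(x, y), join(x, z))
```

The check also illustrates the remark that either law implies the other: the loop never finds a triple satisfying one identity but not its dual.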
Examples Distributive lattices are ubiquitous but also rather specific structures. As already mentioned, the main example for distributive lattices are lattices of sets, where join and meet are given by the usual set-theoretic operations. Further examples include: The Lindenbaum algebra of most logics that support conjunction and disjunction is a distributive lattice, i.e. "and" distributes over "or" and vice versa. Every Boolean algebra is a distributive lattice. Every Heyting algebra is a distributive lattice. In particular, this includes all locales and hence all open set lattices of topological spaces. Also note that Heyting algebras can be viewed as Lindenbaum algebras of intuitionistic logic, which makes them a special case of the first example. Every totally ordered set is a distributive lattice with max as join and min as meet. The natural numbers form a (conditionally complete) distributive lattice by taking the greatest common divisor as meet and the least common multiple as join. This lattice also has a least element, namely 1,
https://en.wikipedia.org/wiki/Dual%20%28category%20theory%29
In category theory, a branch of mathematics, duality is a correspondence between the properties of a category C and the dual properties of the opposite category Cop. Given a statement regarding the category C, by interchanging the source and target of each morphism as well as interchanging the order of composing two morphisms, a corresponding dual statement is obtained regarding the opposite category Cop. Duality, as such, is the assertion that truth is invariant under this operation on statements. In other words, if a statement is true about C, then its dual statement is true about Cop. Also, if a statement is false about C, then its dual has to be false about Cop. Given a concrete category C, it is often the case that the opposite category Cop per se is abstract. Cop need not be a category that arises from mathematical practice. In this case, another category D is also termed to be in duality with C if D and Cop are equivalent as categories. In the case when C and its opposite Cop are equivalent, such a category is self-dual. Formal definition We define the elementary language of category theory as the two-sorted first order language with objects and morphisms as distinct sorts, together with the relations of an object being the source or target of a morphism and a symbol for composing two morphisms. Let σ be any statement in this language. We form the dual σop as follows: Interchange each occurrence of "source" in σ with "target". Interchange the order of composing morphisms. That is, replace each occurrence of g ∘ f with f ∘ g. Informally, these conditions state that the dual of a statement is formed by reversing arrows and compositions. Duality is the observation that σ is true for some category C if and only if σop is true for Cop. Examples A morphism f : A → B is a monomorphism if f ∘ g = f ∘ h implies g = h. Performing the dual operation, we get the statement that g ∘ f = h ∘ f implies g = h. For a morphism f : B → A, this is precisely what it means for f to be an epimorphism.
In short, the property of being a monomorphism is dual to the property of being an epimorphism. Applying duality, this means that a morphism in some category C is a monomorphism if and only if the reverse morphism in the opposite category Cop is an epimorphism. An example comes from reversing the direction of inequalities in a partial order. So if X is a set and ≤ a partial order relation, we can define a new partial order relation ≤new by x ≤new y if and only if y ≤ x. This example on orders is a special case, since partial orders correspond to a certain kind of category in which Hom(A,B) can have at most one element. In applications to logic, this then looks like a very general description of negation (that is, proofs run in the opposite direction). For example, if we take the opposite of a lattice, we will find that meets and joins have their roles interchanged. This is an abstract form of De Morgan's laws, or of duality applied to lattices. Limits and colimits are dual notions. Fibrations and cofibrations
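The order-reversal example can be checked mechanically: greatest lower bounds in a poset become least upper bounds in the opposite poset, and vice versa. A sketch (ours) using the divisors of 12 ordered by divisibility:

```python
P = [1, 2, 3, 4, 6, 12]          # divisors of 12
leq = lambda a, b: b % a == 0    # a <= b  iff  a divides b
leq_op = lambda a, b: leq(b, a)  # the opposite (reversed) order

def glb(a, b, P, leq):
    # Greatest lower bound: a lower bound that is above every lower bound.
    lower = [x for x in P if leq(x, a) and leq(x, b)]
    for g in lower:
        if all(leq(y, g) for y in lower):
            return g

def lub(a, b, P, leq):
    # Least upper bound: an upper bound below every upper bound.
    upper = [x for x in P if leq(a, x) and leq(b, x)]
    for m in upper:
        if all(leq(m, y) for y in upper):
            return m

for a in P:
    for b in P:
        # Meet in the opposite order is join in the original, and vice versa.
        assert glb(a, b, P, leq_op) == lub(a, b, P, leq)
        assert lub(a, b, P, leq_op) == glb(a, b, P, leq)

assert glb(4, 6, P, leq) == 2 and lub(4, 6, P, leq) == 12
```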
https://en.wikipedia.org/wiki/Coalgebra
In mathematics, coalgebras or cogebras are structures that are dual (in the category-theoretic sense of reversing arrows) to unital associative algebras. The axioms of unital associative algebras can be formulated in terms of commutative diagrams. Turning all arrows around, one obtains the axioms of coalgebras. Every coalgebra, by (vector space) duality, gives rise to an algebra, but not in general the other way. In finite dimensions, this duality goes in both directions (see below). Coalgebras occur naturally in a number of contexts (for example, representation theory, universal enveloping algebras and group schemes). There are also F-coalgebras, with important applications in computer science. Informal discussion One frequently recurring example of coalgebras occurs in representation theory, and in particular, in the representation theory of the rotation group. A primary task, of practical use in physics, is to obtain combinations of systems with different states of angular momentum and spin. For this purpose, one uses the Clebsch–Gordan coefficients. Given two systems with angular momenta j1 and j2, a particularly important task is to find the total angular momentum given the combined state |j1⟩ ⊗ |j2⟩. This is provided by the total angular momentum operator, which extracts the needed quantity from each side of the tensor product. It can be written as an "external" tensor product J = j ⊗ 1 + 1 ⊗ j. The word "external" appears here, in contrast to the "internal" tensor product of a tensor algebra. A tensor algebra comes with a tensor product (the internal one); it can also be equipped with a second tensor product, the "external" one, or the coproduct, having the form above. That they are two different products is emphasized by recalling that the internal tensor product of a vector and a scalar is just simple scalar multiplication. The external product keeps them separated.
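The lifted coproduct can be made concrete on basis words of the tensor algebra: sending each letter x to x ⊗ 1 + 1 ⊗ x and extending multiplicatively gives, on a word, the sum over all ways of distributing its letters between the two tensor factors while keeping their order (the "unshuffle", dual to the shuffle product mentioned below). A sketch (ours, not from the article) checking coassociativity on a small word:

```python
from itertools import combinations

def coproduct(word):
    # Delta(word) = sum over subsets S of positions: word|_S (x) word|_complement.
    # Tensors are dicts mapping pairs of words (tuples of letters) to coefficients.
    n = len(word)
    out = {}
    for k in range(n + 1):
        for S in combinations(range(n), k):
            left = tuple(word[i] for i in S)
            right = tuple(word[i] for i in range(n) if i not in S)
            out[(left, right)] = out.get((left, right), 0) + 1
    return out

def then_left(delta):
    # Apply (Delta (x) id) to a sum of two-fold tensors.
    out = {}
    for (u, v), c in delta.items():
        for (u1, u2), d in coproduct(u).items():
            out[(u1, u2, v)] = out.get((u1, u2, v), 0) + c * d
    return out

def then_right(delta):
    # Apply (id (x) Delta) to a sum of two-fold tensors.
    out = {}
    for (u, v), c in delta.items():
        for (v1, v2), d in coproduct(v).items():
            out[(u, v1, v2)] = out.get((u, v1, v2), 0) + c * d
    return out

w = ('x', 'y', 'z')
assert then_left(coproduct(w)) == then_right(coproduct(w))  # coassociativity
# On a single letter the coproduct is primitive: x |-> x (x) 1 + 1 (x) x.
assert coproduct(('x',)) == {(('x',), ()): 1, ((), ('x',)): 1}
```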
In this setting, the coproduct is the map Δ that takes x to Δ(x) = x ⊗ 1 + 1 ⊗ x. For this example, x can be taken to be one of the spin representations of the rotation group, with the fundamental representation being the common-sense choice. This coproduct can be lifted to all of the tensor algebra, by a simple lemma that applies to free objects: the tensor algebra is a free algebra, therefore, any homomorphism defined on a subset can be extended to the entire algebra. Examining the lifting in detail, one observes that the coproduct behaves as the shuffle product, essentially because the two factors above, the left and right, must be kept in sequential order during products of multiple angular momenta (rotations are not commutative). The peculiar form of having x appear only once in the coproduct, rather than (for example) defining Δ(x) = x ⊗ x, is in order to maintain linearity: for this example (and for representation theory in general), the coproduct must be linear. As a general rule, the coproduct in representation theory is reducible; the factors are given by the Littlewood–Richardson rule. (The Littlewood–Richardson rule conveys
https://en.wikipedia.org/wiki/Bialgebra
In mathematics, a bialgebra over a field K is a vector space over K which is both a unital associative algebra and a counital coassociative coalgebra. The algebraic and coalgebraic structures are made compatible with a few more axioms. Specifically, the comultiplication and the counit are both unital algebra homomorphisms, or equivalently, the multiplication and the unit of the algebra are both coalgebra morphisms. (These statements are equivalent since they are expressed by the same commutative diagrams.) Similar bialgebras are related by bialgebra homomorphisms. A bialgebra homomorphism is a linear map that is both an algebra and a coalgebra homomorphism. As reflected in the symmetry of the commutative diagrams, the definition of bialgebra is self-dual, so if one can define a dual of B (which is always possible if B is finite-dimensional), then it is automatically a bialgebra. Formal definition (B, ∇, η, Δ, ε) is a bialgebra over K if it has the following properties: B is a vector space over K; there are K-linear maps (multiplication) ∇: B ⊗ B → B (equivalent to K-multilinear map ∇: B × B → B) and (unit) η: K → B, such that (B, ∇, η) is a unital associative algebra; there are K-linear maps (comultiplication) Δ: B → B ⊗ B and (counit) ε: B → K, such that (B, Δ, ε) is a (counital coassociative) coalgebra; compatibility conditions expressed by the following commutative diagrams: Multiplication ∇ and comultiplication Δ where τ: B ⊗ B → B ⊗ B is the linear map defined by τ(x ⊗ y) = y ⊗ x for all x and y in B, Multiplication ∇ and counit ε Comultiplication Δ and unit η Unit η and counit ε Coassociativity and counit The K-linear map Δ: B → B ⊗ B is coassociative if (Δ ⊗ idB) ∘ Δ = (idB ⊗ Δ) ∘ Δ. The K-linear map ε: B → K is a counit if (ε ⊗ idB) ∘ Δ = idB = (idB ⊗ ε) ∘ Δ, under the identifications K ⊗ B ≅ B ≅ B ⊗ K. 
Coassociativity and counit are expressed by the commutativity of the following two diagrams (they are the duals of the diagrams expressing associativity and unit of an algebra): Compatibility conditions The four commutative diagrams can be read either as "comultiplication and counit are homomorphisms of algebras" or, equivalently, "multiplication and unit are homomorphisms of coalgebras". These statements are meaningful once we explain the natural structures of algebra and coalgebra in all the vector spaces involved besides B: (K, ∇0, η0) is a unital associative algebra in an obvious way and (B ⊗ B, ∇2, η2) is a unital associative algebra with unit and multiplication , so that or, omitting ∇ and writing multiplication as juxtaposition, ; similarly, (K, Δ0, ε0) is a coalgebra in an obvious way and B ⊗ B is a coalgebra with counit and comultiplication . Then, diagrams 1 and 3 say that Δ: B → B ⊗ B is a homomorphism of unital (associative) algebras (B, ∇, η) and (B ⊗ B, ∇2, η2) , or simply Δ(xy) = Δ(x) Δ(y), , or simply Δ(1B) = 1B ⊗ B; diagrams 2 and 4 say that ε: B → K is a homomorphism of unital (associative) algebras (B, ∇, η) and (K, ∇0, η0): , or simply ε(xy) = ε(x) ε(y) , or simply ε(1B) = 1K. Equival
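The compatibility conditions can be checked by hand in the simplest example of a bialgebra, the group algebra. The sketch below (our own illustration, not from the article) models K[G] for G = Z/3, with Δ(g) = g ⊗ g and ε(g) = 1 extended linearly, and verifies diagrams 1–4 in the form "Δ and ε are algebra homomorphisms".

```python
# Toy model: the group algebra of Z/3. Elements are dicts mapping
# group elements (integers mod 3) to coefficients.
n = 3

def mult(x, y):
    """Convolution product on the group algebra."""
    out = {}
    for g, a in x.items():
        for h, b in y.items():
            k = (g + h) % n
            out[k] = out.get(k, 0) + a * b
    return out

def delta(x):
    """Comultiplication: Delta(g) = g ⊗ g, extended linearly."""
    return {(g, g): a for g, a in x.items()}

def eps(x):
    """Counit: eps(g) = 1, extended linearly."""
    return sum(x.values())

def mult2(u, v):
    """Componentwise multiplication on B ⊗ B."""
    out = {}
    for (g1, g2), a in u.items():
        for (h1, h2), b in v.items():
            k = ((g1 + h1) % n, (g2 + h2) % n)
            out[k] = out.get(k, 0) + a * b
    return out

x = {0: 2, 1: -1}
y = {1: 3, 2: 5}
# "Comultiplication and counit are homomorphisms of algebras":
assert delta(mult(x, y)) == mult2(delta(x), delta(y))   # Delta(xy) = Delta(x) Delta(y)
assert eps(mult(x, y)) == eps(x) * eps(y)               # eps(xy) = eps(x) eps(y)
print("bialgebra compatibility verified")
```

The same two checks fail for a generic linear map in place of Δ, which is what makes the compatibility diagrams a genuine constraint rather than a formality.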
https://en.wikipedia.org/wiki/Hodge%20star%20operator
In mathematics, the Hodge star operator or Hodge star is a linear map defined on the exterior algebra of a finite-dimensional oriented vector space endowed with a nondegenerate symmetric bilinear form. Applying the operator to an element of the algebra produces the Hodge dual of the element. This map was introduced by W. V. D. Hodge. For example, in an oriented 3-dimensional Euclidean space, an oriented plane can be represented by the exterior product of two basis vectors, and its Hodge dual is the normal vector given by their cross product; conversely, any vector is dual to the oriented plane perpendicular to it, endowed with a suitable bivector. Generalizing this to an -dimensional vector space, the Hodge star is a one-to-one mapping of -vectors to -vectors; the dimensions of these spaces are the binomial coefficients . The naturalness of the star operator means it can play a role in differential geometry, when applied to the cotangent bundle of a pseudo-Riemannian manifold, and hence to differential -forms. This allows the definition of the codifferential as the Hodge adjoint of the exterior derivative, leading to the Laplace–de Rham operator. This generalizes the case of 3-dimensional Euclidean space, in which divergence of a vector field may be realized as the codifferential opposite to the gradient operator, and the Laplace operator on a function is the divergence of its gradient. An important application is the Hodge decomposition of differential forms on a closed Riemannian manifold. Formal definition for k-vectors Let be an -dimensional oriented vector space with a nondegenerate symmetric bilinear form , referred to here as an inner product. This induces an inner product on k-vectors for , by defining it on decomposable -vectors and to equal the Gram determinant extended to through linearity. The unit -vector is defined in terms of an oriented orthonormal basis of as: (Note: In the general pseudo-Riemannian case, orthonormality means: .) 
The Hodge star operator is a linear operator on the exterior algebra of , mapping -vectors to ()-vectors, for . It has the following property, which defines it completely: for every pair of -vectors Dually, in the space of -forms (alternating -multilinear functions on ), the dual to is the volume form , the function whose value on is the determinant of the matrix assembled from the column vectors of in -coordinates. Applying to the above equation, we obtain the dual definition: or equivalently, taking , , and : This means that, writing an orthonormal basis of -vectors as over all subsets of , the Hodge dual is the ()-vector corresponding to the complementary set : where is the sign of the permutation and is the product . In the Riemannian case, . Since Hodge star takes an orthonormal basis to an orthonormal basis, it is an isometry on the exterior algebra . Geometric explanation The Hodge star is motivated by the correspondence between a subspace of and its orth
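On an orthonormal basis in the Euclidean case the formula above reduces to bookkeeping with index sets and permutation signs, which the following sketch (our own, with hypothetical function names) implements: a basis k-vector e_I is sent to ±e_J where J is the complementary index set and the sign is that of the permutation I followed by J.

```python
from itertools import combinations

def perm_sign(p):
    """Sign of a permutation given as a tuple of distinct integers."""
    s = 1
    p = list(p)
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def hodge_star(I, n):
    """Hodge dual of the basis k-vector e_I in Euclidean R^n.
    Returns (sign, J) with *(e_I) = sign * e_J, J the complement of I."""
    J = tuple(i for i in range(1, n + 1) if i not in I)
    return perm_sign(I + J), J

# In R^3 the star reproduces the cross-product correspondence:
assert hodge_star((1, 2), 3) == (1, (3,))     # *(e1 ∧ e2) = e3
assert hodge_star((1, 3), 3) == (-1, (2,))    # *(e1 ∧ e3) = -e2
assert hodge_star((1,), 3) == (1, (2, 3))     # *e1 = e2 ∧ e3

# Double dual: ** = (-1)^{k(n-k)} on k-vectors (Euclidean signature)
n, k = 4, 2
for I in combinations(range(1, n + 1), k):
    s1, J = hodge_star(I, n)
    s2, I2 = hodge_star(J, n)
    assert I2 == I and s1 * s2 == (-1) ** (k * (n - k))
print("Hodge star checks pass")
```

Because the star permutes the orthonormal basis up to sign, this little model also exhibits the isometry property stated above.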
https://en.wikipedia.org/wiki/Lie%20algebroid
In mathematics, a Lie algebroid is a vector bundle together with a Lie bracket on its space of sections and a vector bundle morphism , satisfying a Leibniz rule. A Lie algebroid can thus be thought of as a "many-object generalisation" of a Lie algebra. Lie algebroids play a role in the theory of Lie groupoids similar to the one Lie algebras play in the theory of Lie groups: reducing global problems to infinitesimal ones. Indeed, any Lie groupoid gives rise to a Lie algebroid, which is the vertical bundle of the source map restricted to the units. However, unlike Lie algebras, not every Lie algebroid arises from a Lie groupoid. Lie algebroids were introduced in 1967 by Jean Pradines. Definition and basic concepts A Lie algebroid is a triple consisting of a vector bundle over a manifold a Lie bracket on its space of sections a morphism of vector bundles , called the anchor, where is the tangent bundle of such that the anchor and the bracket satisfy the following Leibniz rule: where and is the derivative of along the vector field . One often writes when the bracket and the anchor are clear from the context; some authors denote Lie algebroids by , suggesting a "limit" of Lie groupoids when the arrows denoting source and target become "infinitesimally close". First properties It follows from the definition that for every , the kernel is a Lie algebra, called the isotropy Lie algebra at that point; the kernel is a (not necessarily locally trivial) bundle of Lie algebras, called the isotropy Lie algebra bundle; the image is a singular distribution which is integrable, i.e. it admits maximal immersed submanifolds , called the orbits, satisfying for every . Equivalently, orbits can be explicitly described as the sets of points which are joined by A-paths, i.e. pairs of paths in and in such that and the anchor map descends to a map between sections which is a Lie algebra morphism, i.e. for all . 
The property that induces a Lie algebra morphism was taken as an axiom in the original definition of Lie algebroid. Such redundancy, despite being known from an algebraic point of view already before Pradines's definition, was noticed only much later. Subalgebroids and ideals A Lie subalgebroid of a Lie algebroid is a vector subbundle of the restriction such that takes values in and is a Lie subalgebra of . Clearly, admits a unique Lie algebroid structure such that is a Lie algebra morphism. With the language introduced below, the inclusion is a Lie algebroid morphism. A Lie subalgebroid is called wide if . In analogy to the standard definition for Lie algebras, an ideal of a Lie algebroid is a wide Lie subalgebroid such that is a Lie ideal. This notion proved to be very restrictive, since is forced to be inside the isotropy bundle . For this reason, the more flexible notion of infinitesimal ideal system has been introduced. Morphisms A Lie algebroid morphism between two Lie algebroids and with the same base is a vec
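The Leibniz rule in the definition can be checked symbolically in the prototypical example, the tangent Lie algebroid TM with the identity anchor. The sketch below (our own, assuming sympy is available; the vector fields and the function f are arbitrary choices) represents vector fields on R^2 by their component lists and verifies [X, fY] = f[X, Y] + (Xf)Y.

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)
f = x * y + sp.sin(x)          # an arbitrary smooth function on R^2

# Vector fields on R^2 as component lists: X = X^i ∂_i
X = [x**2, y]
Y = [sp.cos(y), x]

def bracket(X, Y):
    """Lie bracket of vector fields: [X, Y]^i = X^j ∂_j Y^i − Y^j ∂_j X^i."""
    return [sp.expand(
        sum(X[j] * sp.diff(Y[i], coords[j]) - Y[j] * sp.diff(X[i], coords[j])
            for j in range(2)))
        for i in range(2)]

def apply_vf(X, f):
    """Derivative of f along X; for TM the anchor is the identity."""
    return sum(X[j] * sp.diff(f, coords[j]) for j in range(2))

# Leibniz rule: [X, fY] = f[X, Y] + (X f) Y
lhs = bracket(X, [f * c for c in Y])
rhs = [f * b + apply_vf(X, f) * c for b, c in zip(bracket(X, Y), Y)]
assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))
print("Leibniz rule verified for the tangent algebroid")
```

The same computation with a non-identity anchor would simply replace `apply_vf(X, f)` by the derivative of f along the anchor image of X.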
https://en.wikipedia.org/wiki/Lie%20groupoid
In mathematics, a Lie groupoid is a groupoid where the set of objects and the set of morphisms are both manifolds, all the category operations (source and target, composition, identity-assigning map and inversion) are smooth, and the source and target operations are submersions. A Lie groupoid can thus be thought of as a "many-object generalization" of a Lie group, just as a groupoid is a many-object generalization of a group. Accordingly, while Lie groups provide a natural model for (classical) continuous symmetries, Lie groupoids are often used as model for (and arise from) generalised, point-dependent symmetries. Extending the correspondence between Lie groups and Lie algebras, Lie groupoids are the global counterparts of Lie algebroids. Lie groupoids were introduced by Charles Ehresmann under the name differentiable groupoids. Definition and basic concepts A Lie groupoid consists of two smooth manifolds and two surjective submersions (called, respectively, source and target projections) a map (called multiplication or composition map), where we use the notation a map (called unit map or object inclusion map), where we use the notation a map (called inversion), where we use the notation such that the composition satisfies and for every for which the composition is defined the composition is associative, i.e. for every for which the composition is defined works as an identity, i.e. for every and and for every works as an inverse, i.e. and for every . Using the language of category theory, a Lie groupoid can be more compactly defined as a groupoid (i.e. a small category where all the morphisms are invertible) such that the sets of objects and of morphisms are manifolds, the maps , , , and are smooth and and are submersions. A Lie groupoid is therefore not simply a groupoid object in the category of smooth manifolds: one has to ask the additional property that and are submersions. 
Lie groupoids are often denoted by , where the two arrows represent the source and the target. The notation is also frequently used, especially when stressing the simplicial structure of the associated nerve. In order to include more natural examples, the manifold is not required in general to be Hausdorff or second countable (while and all other spaces are). Alternative definitions The original definition by Ehresmann required and to possess a smooth structure such that only is smooth and the maps and are subimmersions (i.e. have locally constant rank). Such definition proved to be too weak and was replaced by Pradines with the one currently used. While some authors introduced weaker definitions which did not require and to be submersions, these properties are fundamental to develop the entire Lie theory of groupoids and algebroids. First properties The fact that the source and the target map of a Lie groupoid are smooth submersions has some immediate consequences: the -fibres , the -fibres , and the
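The groupoid axioms listed in the definition are easy to verify exhaustively in the smallest interesting example, the pair groupoid. The sketch below (our own; in a genuinely smooth example the finite set M would be replaced by a manifold, making all five structure maps smooth and s, t submersions) takes morphisms (b, a) from a to b over M = {0, 1, 2} and checks units, inverses, and source/target behaviour under composition.

```python
from itertools import product

M = {0, 1, 2}                   # objects
G = set(product(M, M))          # morphism (b, a) goes from object a to object b

s = lambda g: g[1]              # source projection
t = lambda g: g[0]              # target projection
u = lambda p: (p, p)            # unit map: identity arrow at p
inv = lambda g: (g[1], g[0])    # inversion

def m(g, h):
    """Composition g ∘ h, defined when s(g) == t(h)."""
    assert s(g) == t(h)
    return (g[0], h[1])

for g, h in product(G, G):
    if s(g) == t(h):
        gh = m(g, h)
        assert s(gh) == s(h) and t(gh) == t(g)    # source/target of a composite
for g in G:
    assert m(g, u(s(g))) == g and m(u(t(g)), g) == g            # units
    assert m(g, inv(g)) == u(t(g)) and m(inv(g), g) == u(s(g))  # inverses
print("pair groupoid axioms verified")
```

Associativity holds here for free, since composition simply forgets the middle object.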
https://en.wikipedia.org/wiki/Principal%20bundle
In mathematics, a principal bundle is a mathematical object that formalizes some of the essential features of the Cartesian product of a space with a group . In the same way as with the Cartesian product, a principal bundle is equipped with An action of on , analogous to for a product space. A projection onto . For a product space, this is just the projection onto the first factor, . Unlike a product space, principal bundles lack a preferred choice of identity cross-section; they have no preferred analog of . Likewise, there is not generally a projection onto the group generalizing the projection onto the second factor that exists for the Cartesian product. They may also have a complicated topology that prevents them from being realized as a product space, even if a number of arbitrary choices are made to try to define such a structure by defining it on smaller pieces of the space. A common example of a principal bundle is the frame bundle of a vector bundle , which consists of all ordered bases of the vector space attached to each point. The group, in this case, is the general linear group, which acts on the right in the usual way: by changes of basis. Since there is no natural way to choose an ordered basis of a vector space, a frame bundle lacks a canonical choice of identity cross-section. Principal bundles have important applications in topology, differential geometry, and mathematical gauge theory. They have also found application in physics, where they form part of the foundational framework of physical gauge theories. Formal definition A principal -bundle, where denotes any topological group, is a fiber bundle together with a continuous right action such that preserves the fibers of (i.e. if then for all ) and acts freely and transitively (meaning each fiber is a G-torsor) on them, in such a way that for each and , the map sending to is a homeomorphism. In particular, each fiber of the bundle is homeomorphic to the group itself. 
Frequently, one requires the base space to be Hausdorff and possibly paracompact. Since the group action preserves the fibers of and acts transitively, it follows that the orbits of the -action are precisely these fibers and the orbit space is homeomorphic to the base space . Because the action is free and transitive, the fibers have the structure of G-torsors. A -torsor is a space that is homeomorphic to but lacks a group structure since there is no preferred choice of an identity element. An equivalent definition of a principal -bundle is as a -bundle with fiber where the structure group acts on the fiber by left multiplication. Since right multiplication by on the fiber commutes with the action of the structure group, there exists an invariant notion of right multiplication by on . The fibers of then become right -torsors for this action. The definitions above are for arbitrary topological spaces. One can also define principal -bundles in the category of smooth manifolds. Here is r
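The defining conditions, fiber preservation plus a free and transitive action on each fiber, can be checked directly in a finite model. The sketch below (our own illustration, not from the article) models the connected double cover of an n-cycle as a discrete principal Z/2-bundle: the total space is Z/2n, the base is Z/n, and the structure group acts by the half-turn e ↦ e + n.

```python
n = 6                      # base: Z/6; total space: Z/12; structure group: Z/2
E = range(2 * n)
G = (0, 1)

pi = lambda e: e % n                       # bundle projection E → B
act = lambda e, g: (e + g * n) % (2 * n)   # right Z/2-action on E

# The action preserves the fibers of pi
for e in E:
    for g in G:
        assert pi(act(e, g)) == pi(e)

# The action is free and transitive on each fiber: each fiber is a G-torsor
for b in range(n):
    fiber = {e for e in E if pi(e) == b}
    for e in fiber:
        assert act(e, 1) != e                      # free
        assert {act(e, g) for g in G} == fiber     # transitive
print("discrete principal Z/2-bundle verified")
```

Note that no fiber has a preferred element, matching the remark above that a principal bundle has no canonical identity cross-section: choosing one point in each fiber is exactly choosing a (local) section.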
https://en.wikipedia.org/wiki/Quotient%20%28universal%20algebra%29
In mathematics, a quotient algebra is the result of partitioning the elements of an algebraic structure using a congruence relation. Quotient algebras are also called factor algebras. Here, the congruence relation must be an equivalence relation that is additionally compatible with all the operations of the algebra, in the formal sense described below. Its equivalence classes partition the elements of the given algebraic structure. The quotient algebra has these classes as its elements, and the compatibility conditions are used to give the classes an algebraic structure. The idea of the quotient algebra abstracts into one common notion the quotient structures of ring theory (quotient rings), group theory (quotient groups), linear algebra (quotient spaces), and representation theory (quotient modules). Compatible relation Let A be the set of the elements of an algebra , and let E be an equivalence relation on the set A. The relation E is said to be compatible with (or have the substitution property with respect to) an n-ary operation f if E(ai, bi) for each i = 1, ..., n implies E(f(a1, ..., an), f(b1, ..., bn)) for any elements a1, ..., an, b1, ..., bn of A. An equivalence relation compatible with all the operations of an algebra is called a congruence with respect to this algebra. Quotient algebras and homomorphisms Any equivalence relation E on a set A partitions this set into equivalence classes. The set of these equivalence classes is usually called the quotient set, and denoted A/E. For an algebra , it is straightforward to define the operations induced on the elements of A/E if E is a congruence. Specifically, for any operation of arity in (where the superscript simply denotes that it is an operation in , and the subscript enumerates the functions in and their arities) define as , where denotes the equivalence class of generated by E ("x modulo E"). For an algebra , given a congruence E on , the algebra is called the quotient algebra (or factor algebra) of modulo E. 
There is a natural homomorphism from to mapping every element to its equivalence class. In fact, every homomorphism h determines a congruence relation via the kernel of the homomorphism, . Given an algebra , a homomorphism h thus defines two algebras homomorphic to , the image h() and the quotient algebra; the two are isomorphic, a result known as the homomorphic image theorem or as the first isomorphism theorem for universal algebra. Formally, let be a surjective homomorphism. Then, there exists a unique isomorphism g from onto such that g composed with the natural homomorphism induced by equals h. Congruence lattice For every algebra on the set A, the identity relation on A, and are trivial congruences. An algebra with no other congruences is called simple. Let be the set of congruences on the algebra . Because congruences are closed under intersection, we can define a meet operation: by simply taking the intersection of the congruences . On the other hand, congruences are not closed under union. However, we can define the c
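The compatibility (substitution) property and the resulting well-definedness of the induced operations can be checked concretely for the classic example, congruence modulo n on the integers. The sketch below (our own illustration) verifies the substitution property for + and ×, and then checks that the natural map onto Z/nZ is a homomorphism.

```python
n = 5
A = range(-20, 20)                           # a finite window of the integers

cong = lambda a, b: (a - b) % n == 0         # the congruence a ≡ b (mod n)

# Substitution property: if a ≡ a' and b ≡ b', then
# a + b ≡ a' + b' and a * b ≡ a' * b'
for a in A:
    for b in A:
        a2, b2 = a + 3 * n, b - 7 * n        # other representatives
        assert cong(a, a2) and cong(b, b2)
        assert cong(a + b, a2 + b2)
        assert cong(a * b, a2 * b2)

# Hence the induced operations on classes are well defined:
cls = lambda a: a % n                        # canonical representative of [a]
add = lambda u, v: (u + v) % n               # [a] + [b] := [a + b]
assert add(cls(7), cls(9)) == cls(7 + 9)     # the natural map is a homomorphism
print("Z/%d is a quotient algebra of Z" % n)
```

If the relation were not compatible (say, "same sign" with multiplication by negatives), the induced operation would depend on the chosen representatives and the quotient construction would fail.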
https://en.wikipedia.org/wiki/Subcategory
In mathematics, specifically category theory, a subcategory of a category C is a category S whose objects are objects in C and whose morphisms are morphisms in C, with the same identities and composition of morphisms. Intuitively, a subcategory of C is a category obtained from C by "removing" some of its objects and arrows. Formal definition Let C be a category. A subcategory S of C is given by a subcollection of objects of C, denoted ob(S), a subcollection of morphisms of C, denoted hom(S), such that for every X in ob(S), the identity morphism idX is in hom(S), for every morphism f : X → Y in hom(S), both the source X and the target Y are in ob(S), for every pair of morphisms f and g in hom(S) the composite f ∘ g is in hom(S) whenever it is defined. These conditions ensure that S is a category in its own right: its collection of objects is ob(S), its collection of morphisms is hom(S), and its identities and composition are as in C. There is an obvious faithful functor I : S → C, called the inclusion functor, which takes objects and morphisms to themselves. Let S be a subcategory of a category C. We say that S is a full subcategory of C if for each pair of objects X and Y of S, HomS(X, Y) = HomC(X, Y). A full subcategory is one that includes all morphisms in C between objects of S. For any collection of objects A in C, there is a unique full subcategory of C whose objects are those in A. Examples The category of finite sets forms a full subcategory of the category of sets. The category whose objects are sets and whose morphisms are bijections forms a non-full subcategory of the category of sets. The category of abelian groups forms a full subcategory of the category of groups. The category of rings (whose morphisms are unit-preserving ring homomorphisms) forms a non-full subcategory of the category of rngs. For a field K, the category of K-vector spaces forms a full subcategory of the category of (left or right) K-modules. 
Embeddings Given a subcategory S of C, the inclusion functor I : S → C is both a faithful functor and injective on objects. It is full if and only if S is a full subcategory. Some authors define an embedding to be a full and faithful functor. Such a functor is necessarily injective on objects up to isomorphism. For instance, the Yoneda embedding is an embedding in this sense. Some authors define an embedding to be a full and faithful functor that is injective on objects. Other authors define a functor to be an embedding if it is faithful and injective on objects. Equivalently, F is an embedding if it is injective on morphisms. A functor F is then called a full embedding if it is a full functor and an embedding. With the definitions of the previous paragraph, for any (full) embedding F : B → C the image of F is a (full) subcategory S of C, and F induces an isomorphism of categories between B and S. If F is not injective on objects then the image of F is equivalent to B. In some categories, one can also speak of morphisms of the category
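The second example above, sets with only bijections as morphisms, can be tested mechanically on a tiny fragment of Set. The sketch below (our own; objects and encodings are arbitrary choices) represents a function X → Y as the tuple of its output values and checks that the bijections-only subcategory fails the fullness condition HomS(X, Y) = HomC(X, Y).

```python
from itertools import product

# A tiny piece of Set: two objects, the sets {0} and {0, 1}
objs = {"A": (0,), "B": (0, 1)}

def hom(X, Y):
    """All functions X → Y, each encoded as the tuple (f(x) for x in X)."""
    return set(product(objs[Y], repeat=len(objs[X])))

def hom_S(X, Y):
    """Subcategory S: same objects, but only the bijections."""
    return {f for f in hom(X, Y)
            if len(objs[X]) == len(objs[Y]) and len(set(f)) == len(f)}

def is_full():
    """S is full iff Hom_S(X, Y) = Hom_C(X, Y) for every pair of objects."""
    return all(hom_S(X, Y) == hom(X, Y) for X in objs for Y in objs)

print(len(hom("B", "B")))    # 4 functions {0,1} → {0,1}
print(len(hom_S("B", "B")))  # only 2 of them are bijections
print(is_full())             # False: the subcategory is not full
```

Dropping the bijection filter makes `hom_S` agree with `hom`, recovering the unique full subcategory on these objects.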
https://en.wikipedia.org/wiki/Green%27s%20function
In mathematics, a Green's function is the impulse response of an inhomogeneous linear differential operator defined on a domain with specified initial conditions or boundary conditions. This means that if is the linear differential operator, then the Green's function is the solution of the equation , where is Dirac's delta function; the solution of the initial-value problem is the convolution (G ∗ f). Through the superposition principle, given a linear ordinary differential equation (ODE), , one can first solve , for each , and then realize that, since the source is a sum of delta functions, the solution is a sum of Green's functions as well, by linearity of . Green's functions are named after the British mathematician George Green, who first developed the concept in the 1820s. In the modern study of linear partial differential equations, Green's functions are studied largely from the point of view of fundamental solutions instead. In many-body theory, the term is also used in physics, specifically in quantum field theory, aerodynamics, aeroacoustics, electrodynamics, seismology and statistical field theory, to refer to various types of correlation functions, even those that do not fit the mathematical definition. In quantum field theory, Green's functions take the roles of propagators. Definition and uses A Green's function, , of a linear differential operator acting on distributions over a subset of the Euclidean space , at a point , is any solution of where is the Dirac delta function. This property of a Green's function can be exploited to solve differential equations of the form If the kernel of is non-trivial, then the Green's function is not unique. However, in practice, some combination of symmetry, boundary conditions and/or other externally imposed criteria will give a unique Green's function. Green's functions may be categorized, by the type of boundary conditions satisfied, by a Green's function number. 
Also, Green's functions in general are distributions, not necessarily functions of a real variable. Green's functions are also useful tools in solving wave equations and diffusion equations. In quantum mechanics, Green's function of the Hamiltonian is a key concept with important links to the concept of density of states. The Green's function as used in physics is usually defined with the opposite sign, instead. That is, This definition does not significantly change any of the properties of Green's function due to the evenness of the Dirac delta function. If the operator is translation invariant, that is, when has constant coefficients with respect to , then the Green's function can be taken to be a convolution kernel, that is, In this case, Green's function is the same as the impulse response of linear time-invariant system theory. Motivation Loosely speaking, if such a function can be found for the operator , then, if we multiply the equation () for the Green's function by , and then integrate with respect to , w
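The superposition recipe can be tested numerically for the textbook case L = d²/dx² on [0, 1] with Dirichlet boundary conditions, whose Green's function is G(x, s) = x(s − 1) for x ≤ s and s(x − 1) for x ≥ s. The sketch below (our own illustration) solves u″ = 1 by integrating G against the source and compares with the exact solution u(x) = x(x − 1)/2.

```python
import numpy as np

# Green's function of L = d^2/dx^2 on [0, 1] with u(0) = u(1) = 0
def G(x, s):
    return np.where(x <= s, x * (s - 1.0), s * (x - 1.0))

# Superposition: u(x) = ∫ G(x, s) f(s) ds, here with constant source f = 1,
# approximated by the midpoint rule.
N = 20000
edges = np.linspace(0.0, 1.0, N + 1)
s = (edges[:-1] + edges[1:]) / 2.0           # midpoint-rule nodes
f = np.ones_like(s)

x = np.linspace(0.0, 1.0, 11)
u = (G(x[:, None], s[None, :]) * f).sum(axis=1) / N

u_exact = x * (x - 1.0) / 2.0                # direct solution of u'' = 1
assert np.allclose(u, u_exact, atol=1e-6)
print("max error:", np.max(np.abs(u - u_exact)))
```

Any other source f(s) can be dropped into the same integral unchanged, which is exactly the point of computing G once and for all: the operator is inverted by an integral kernel.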
https://en.wikipedia.org/wiki/Rank-into-rank
In set theory, a branch of mathematics, a rank-into-rank embedding is a large cardinal property defined by one of the following four axioms given in order of increasing consistency strength. (A set of rank < λ is one of the elements of the set Vλ of the von Neumann hierarchy.) Axiom I3: There is a nontrivial elementary embedding of Vλ into itself. Axiom I2: There is a nontrivial elementary embedding of V into a transitive class M that includes Vλ where λ is the first fixed point above the critical point. Axiom I1: There is a nontrivial elementary embedding of Vλ+1 into itself. Axiom I0: There is a nontrivial elementary embedding of L(Vλ+1) into itself with critical point below λ. These are essentially the strongest known large cardinal axioms not known to be inconsistent in ZFC; the axiom for Reinhardt cardinals is stronger, but is not consistent with the axiom of choice. If j is the elementary embedding mentioned in one of these axioms and κ is its critical point, then λ is the limit of jn(κ) as n goes to ω. More generally, if the axiom of choice holds, it is provable that if there is a nontrivial elementary embedding of Vα into itself then α is either a limit ordinal of cofinality ω or the successor of such an ordinal. The axioms I0, I1, I2, and I3 were at first suspected to be inconsistent (in ZFC), as it was thought possible that Kunen's inconsistency theorem (that Reinhardt cardinals are inconsistent with the axiom of choice) could be extended to them, but this has not yet happened and they are now usually believed to be consistent. Every I0 cardinal κ (speaking here of the critical point of j) is an I1 cardinal. Every I1 cardinal κ (sometimes called ω-huge cardinals) is an I2 cardinal and has a stationary set of I2 cardinals below it. Every I2 cardinal κ is an I3 cardinal and has a stationary set of I3 cardinals below it. Every I3 cardinal κ has another I3 cardinal above it and is an n-huge cardinal for every n < ω. 
Axiom I1 implies that Vλ+1 (equivalently, H(λ+)) does not satisfy V=HOD. There is no set S⊂λ definable in Vλ+1 (even from parameters Vλ and ordinals <λ+) with S cofinal in λ and |S|<λ, that is, no such S witnesses that λ is singular. And similarly for Axiom I0 and ordinal definability in L(Vλ+1) (even from parameters in Vλ). However globally, and even in Vλ, V=HOD is relatively consistent with Axiom I1. Notice that I0 is sometimes strengthened further by adding an "Icarus set", so that it would be Axiom Icarus set: There is a nontrivial elementary embedding of L(Vλ+1, Icarus) into itself with the critical point below λ. The Icarus set should be in Vλ+2 − L(Vλ+1) but chosen to avoid creating an inconsistency. So for example, it cannot encode a well-ordering of Vλ+1. See section 10 of Dimonte for more details.
https://en.wikipedia.org/wiki/Arthur%20Cayley
Arthur Cayley (; 16 August 1821 – 26 January 1895) was a prolific British mathematician who worked mostly on algebra. He helped found the modern British school of pure mathematics. As a child, Cayley enjoyed solving complex maths problems for amusement. He entered Trinity College, Cambridge, where he excelled in Greek, French, German, and Italian, as well as mathematics. He worked as a lawyer for 14 years. He postulated what is now known as the Cayley–Hamilton theorem—that every square matrix is a root of its own characteristic polynomial—and verified it for matrices of order 2 and 3. He was the first to define the concept of a group in the modern way—as a set with a binary operation satisfying certain laws. Formerly, when mathematicians spoke of "groups", they had meant permutation groups. Cayley tables and Cayley graphs as well as Cayley's theorem are named in honour of Cayley. Early years Arthur Cayley was born in Richmond, London, England, on 16 August 1821. His father, Henry Cayley, was a distant cousin of Sir George Cayley, a pioneer of aeronautical engineering, and descended from an ancient Yorkshire family. Henry settled in Saint Petersburg, Russia, as a merchant. His mother was Maria Antonia Doughty, daughter of William Doughty. According to some writers she was Russian, but her father's name indicates an English origin. His brother was the linguist Charles Bagot Cayley. Arthur spent his first eight years in Saint Petersburg. In 1829 his parents settled permanently at Blackheath, near London. Arthur was sent to a private school. At age 14 he was sent to King's College School. The school's master observed indications of mathematical genius and advised the father to educate his son not for his own business, as he had intended, but at the University of Cambridge. Education At the unusually early age of 17, Cayley began residence at Trinity College, Cambridge. 
The cause of the Analytical Society had now triumphed, and the Cambridge Mathematical Journal had been instituted by Gregory and Robert Leslie Ellis. To this journal, at the age of twenty, Cayley contributed three papers, on subjects that had been suggested by reading the Mécanique analytique of Lagrange and some of the works of Laplace. Cayley's tutor at Cambridge was George Peacock and his private coach was William Hopkins. He finished his undergraduate course by winning the place of Senior Wrangler, and the first Smith's prize. His next step was to take the M.A. degree, and win a Fellowship by competitive examination. He continued to reside at Cambridge University for four years; during which time he took some pupils, but his main work was the preparation of 28 memoirs to the Mathematical Journal. As a lawyer Because of the limited tenure of his fellowship it was necessary to choose a profession; like De Morgan, Cayley chose law, and was admitted to Lincoln's Inn, London on 20 April 1846 at the age of 24. He made a specialty of conveyancing. It was while he was a pupil at th
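The Cayley–Hamilton theorem mentioned above, which Cayley verified for matrices of order 2 and 3, is easy to check numerically today. The sketch below (our own illustration) evaluates the characteristic polynomial of random integer matrices at the matrix itself and confirms the result is zero up to floating-point error.

```python
import numpy as np

def cayley_hamilton_residual(A):
    """Max |entry| of p(A), where p is the characteristic polynomial of A."""
    n = A.shape[0]
    coeffs = np.real(np.poly(A))          # p(t) = t^n + c1 t^(n-1) + ... + cn
    p_A = sum(c * np.linalg.matrix_power(A, n - i)
              for i, c in enumerate(coeffs))
    return np.max(np.abs(p_A))

rng = np.random.default_rng(0)
residuals = {order: cayley_hamilton_residual(
                 rng.integers(-5, 6, size=(order, order)).astype(float))
             for order in (2, 3)}         # the orders Cayley verified by hand
assert all(r < 1e-8 for r in residuals.values())
print("Cayley–Hamilton verified for orders 2 and 3")
```

For a 2 × 2 matrix this is the familiar identity A² − tr(A)·A + det(A)·I = 0; the general proof, of course, requires more than checking examples.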
https://en.wikipedia.org/wiki/Bounded%20function
In mathematics, a function f defined on some set X with real or complex values is called bounded if the set of its values is bounded. In other words, there exists a real number M such that |f(x)| ≤ M for all x in X. A function that is not bounded is said to be unbounded. If f is real-valued and f(x) ≤ A for all x in X, then the function is said to be bounded (from) above by A. If f(x) ≥ B for all x in X, then the function is said to be bounded (from) below by B. A real-valued function is bounded if and only if it is bounded from above and below. An important special case is a bounded sequence, where X is taken to be the set N of natural numbers. Thus a sequence f = (a0, a1, a2, ...) is bounded if there exists a real number M such that |an| ≤ M for every natural number n. The set of all bounded sequences forms the sequence space . The definition of boundedness can be generalized to functions f : X → Y taking values in a more general space Y by requiring that the image f(X) is a bounded set in Y. Related notions Weaker than boundedness is local boundedness. A family of bounded functions may be uniformly bounded. A bounded operator T : X → Y is not a bounded function in the sense of this page's definition (unless T = 0), but has the weaker property of preserving boundedness: bounded sets M ⊆ X are mapped to bounded sets T(M) ⊆ Y. This definition can be extended to any function f : X → Y if X and Y allow for the concept of a bounded set. Boundedness can also be determined by looking at a graph. Examples The sine function sin : R → R is bounded since |sin(x)| ≤ 1 for all x. The function , defined for all real x except for −1 and 1, is unbounded. As x approaches −1 or 1, the values of this function get larger in magnitude. This function can be made bounded if one restricts its domain to be, for example, [2, ∞) or (−∞, −2]. The function , defined for all real x, is bounded, since for all x. 
The inverse trigonometric function arctangent, defined as y = arctan(x) (equivalently, x = tan(y)), is increasing for all real numbers x and bounded, with −π/2 < y < π/2 radians. By the boundedness theorem, every continuous function on a closed interval, such as f : [0, 1] → R, is bounded. More generally, any continuous function from a compact space into a metric space is bounded. All complex-valued functions f : C → C which are entire are either unbounded or constant as a consequence of Liouville's theorem. In particular, the complex sine function sin : C → C must be unbounded since it is entire. The function f which takes the value 0 for rational x and 1 for irrational x (cf. Dirichlet function) is bounded. Thus, a function does not need to be "nice" in order to be bounded. The set of all bounded functions defined on [0, 1] is much larger than the set of continuous functions on that interval. Moreover, continuous functions need not be bounded; for example, the functions and defined by and are both continuous, but neither is bounded. (However, a continuous function must be bounded if its domain is both closed an
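The examples above can be probed numerically. In the sketch below (our own illustration; we take the unbounded example to be f(x) = 1/(x² − 1), a function with the stated domain), sampling shows sin staying within its bound, f blowing up near x = 1, and the same formula becoming bounded by 1/3 once the domain is restricted to [2, ∞).

```python
import numpy as np

x = np.linspace(-100.0, 100.0, 200001)
assert np.max(np.abs(np.sin(x))) <= 1.0          # |sin x| <= 1 everywhere

# f(x) = 1/(x^2 - 1) blows up as x approaches 1 from the right...
y = np.linspace(1.0 + 1e-4, 2.0, 100000)
print(np.max(np.abs(1.0 / (y**2 - 1.0))))        # large, and it grows without
                                                 # bound as the endpoint → 1

# ...but on the restricted domain [2, ∞) it is bounded by 1/3
z = np.linspace(2.0, 1000.0, 100000)
assert np.max(np.abs(1.0 / (z**2 - 1.0))) <= 1.0 / 3.0
```

Sampling can only suggest boundedness, never prove it, which is why the boundedness theorem and Liouville's theorem cited above are the tools that settle such questions rigorously.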
https://en.wikipedia.org/wiki/Wolfgang%20Haken
Wolfgang Haken (June 21, 1928 – October 2, 2022) was a German American mathematician who specialized in topology, in particular 3-manifolds. Biography Haken was born on June 21, 1928, in Berlin, Germany. His father was Werner Haken, a physicist who had Max Planck as a doctoral thesis advisor. In 1953, Haken earned a Ph.D. degree in mathematics from Christian-Albrechts-Universität zu Kiel (Kiel University) and married Anna-Irmgard von Bredow, who earned a Ph.D. degree in mathematics from the same university in 1959. In 1962, they left Germany so he could accept a position as visiting professor at the University of Illinois at Urbana-Champaign. He became a full professor in 1965, retiring in 1998. In 1976, together with colleague Kenneth Appel at the University of Illinois at Urbana-Champaign, Haken solved the four-color problem: they proved that any planar graph can be properly colored using at most four colors. Haken introduced several ideas, including Haken manifolds, Kneser–Haken finiteness, and an expansion of the work of Kneser into a theory of normal surfaces. Much of his work has an algorithmic aspect, and he is a figure in algorithmic topology. One of his key contributions to this field is an algorithm to detect whether a knot is unknotted. In 1978, Haken delivered an invited address at the International Congress of Mathematicians in Helsinki. He was a recipient of the 1979 Fulkerson Prize of the American Mathematical Society for his proof with Appel of the four-color theorem. Haken died in Champaign, Illinois, on October 2, 2022, aged 94. Family Haken's eldest son, Armin, proved that there exist propositional tautologies that require resolution proofs of exponential size. Haken's eldest daughter, Dorothea Blostein, is a professor of computer science, known for her discovery of the master theorem for divide-and-conquer recurrences. Haken's second son, Lippold, is the inventor of the Continuum Fingerboard. 
Haken's youngest son, Rudolf, is a professor of music, who established the world's first Electric Strings university degree program at the University of Illinois at Urbana-Champaign. Wolfgang is the cousin of Hermann Haken, a physicist known for laser theory and synergetics. See also Unknotting problem References Haken, W. "Theorie der Normalflächen." Acta Math. 105, 245–375, 1961. External links Wolfgang Haken memorial website Haken's faculty page at the University of Illinois at Urbana-Champaign Wolfgang Haken biography from World of Mathematics Lippold Haken's life story 1928 births 2022 deaths Topologists University of Illinois Urbana-Champaign faculty University of Kiel alumni Emigrants from West Germany to the United States Scientists from Berlin
https://en.wikipedia.org/wiki/Partition%20function%20%28number%20theory%29
In number theory, the partition function p(n) represents the number of possible partitions of a non-negative integer n. For instance, p(4) = 5 because the integer 4 has the five partitions 1 + 1 + 1 + 1, 1 + 1 + 2, 1 + 3, 2 + 2, and 4. No closed-form expression for the partition function is known, but it has both asymptotic expansions that accurately approximate it and recurrence relations by which it can be calculated exactly. It grows as an exponential function of the square root of its argument. The multiplicative inverse of its generating function is the Euler function; by Euler's pentagonal number theorem this function is an alternating sum of pentagonal number powers of its argument. Srinivasa Ramanujan first discovered that the partition function has nontrivial patterns in modular arithmetic, now known as Ramanujan's congruences. For instance, whenever the decimal representation of n ends in the digit 4 or 9, the number of partitions of n will be divisible by 5. Definition and examples For a positive integer n, p(n) is the number of distinct ways of representing n as a sum of positive integers. For the purposes of this definition, the order of the terms in the sum is irrelevant: two sums with the same terms in a different order are not considered to be distinct. By convention p(0) = 1, as there is one way (the empty sum) of representing zero as a sum of positive integers. Furthermore p(n) = 0 when n is negative. The first few values of the partition function, starting with p(0) = 1, are: 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, ... Exact values are known for much larger arguments as well, for example p(100) = 190,569,292. The largest known prime number among the values of p(n) has 40,000 decimal digits. Until March 2022, this was also the largest prime that has been proved using elliptic curve primality proving. Generating function The generating function for p(n) is given by
∑_{n=0}^∞ p(n)xⁿ = ∏_{k=1}^∞ (1 + x^k + x^{2k} + x^{3k} + ⋯)
= ∏_{k=1}^∞ 1/(1 − x^k)
= 1/(1 − x − x² + x⁵ + x⁷ − x¹² − x¹⁵ + ⋯)
= 1/∑_{k=−∞}^∞ (−1)^k x^{k(3k−1)/2}.
The equality between the products on the first and second lines of this formula is obtained by expanding each factor 1/(1 − x^k) into the geometric series 1 + x^k + x^{2k} + x^{3k} + ⋯. To see that the expanded product equals the sum on the first line, apply the distributive law to the product. 
This expands the product into a sum of monomials of the form x^{a₁ + 2a₂ + 3a₃ + ⋯} for some sequence of coefficients aᵢ, only finitely many of which can be non-zero. The exponent of the term is n = a₁ + 2a₂ + 3a₃ + ⋯, and this sum can be interpreted as a representation of n as a partition into aᵢ copies of each number i. Therefore, the number of terms of the product that have exponent n is exactly p(n), the same as the coefficient of xⁿ in the sum on the left. Therefore, the sum equals the product. The function 1 − x − x² + x⁵ + x⁷ − ⋯ that appears in the denominator in the third and fourth lines of the formula is the Euler function. The equality between the product on the first line and the formulas in the third and fourth lines is Euler's pentagonal number theorem. The exponents of x in these lines are the pentagonal numbers k(3k − 1)/2 for k ∈ {0, 1, −1, 2, −2, ...} (generalized somewhat from the usual pentagonal numbers, which come from the same formula for the positive values of k). The pattern of positive and negative signs in the third line comes from the term (−1)^k in the fourth line.
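The pentagonal number theorem above yields Euler's recurrence p(n) = ∑_{k≥1} (−1)^{k−1} [p(n − k(3k−1)/2) + p(n − k(3k+1)/2)], which the following Python sketch implements (the function name and memoization scheme are our choices):

```python
def partition(n, _cache={0: 1}):
    """Partition function p(n) via Euler's pentagonal-number recurrence."""
    if n < 0:
        return 0          # p(n) = 0 for negative n, by convention
    if n in _cache:
        return _cache[n]
    total, k = 0, 1
    while True:
        g1 = k * (3 * k - 1) // 2   # generalized pentagonal numbers for k and -k
        g2 = k * (3 * k + 1) // 2
        if g1 > n and g2 > n:
            break
        sign = -1 if k % 2 == 0 else 1
        total += sign * (partition(n - g1) + partition(n - g2))
        k += 1
    _cache[n] = total
    return total

print([partition(i) for i in range(11)])  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
print(partition(100))                     # 190569292
```

The memoized recursion needs only O(√n) terms per value, so values such as p(100) are computed instantly.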
https://en.wikipedia.org/wiki/Symplectomorphism
In mathematics, a symplectomorphism or symplectic map is an isomorphism in the category of symplectic manifolds. In classical mechanics, a symplectomorphism represents a transformation of phase space that is volume-preserving and preserves the symplectic structure of phase space, and is called a canonical transformation. Formal definition A diffeomorphism f : (M, ω) → (M′, ω′) between two symplectic manifolds is called a symplectomorphism if f*ω′ = ω, where f*ω′ is the pullback of ω′ along f. The symplectic diffeomorphisms from M to M are a (pseudo-)group, called the symplectomorphism group (see below). The infinitesimal version of symplectomorphisms gives the symplectic vector fields. A vector field X ∈ Γ(TM) is called symplectic if L_X ω = 0. Also, X is symplectic iff the flow φ_t : M → M of X is a symplectomorphism for every t. These vector fields build a Lie subalgebra of Γ(TM). Here, Γ(TM) is the set of smooth vector fields on M, and L_X is the Lie derivative along the vector field X. Examples of symplectomorphisms include the canonical transformations of classical mechanics and theoretical physics, the flow associated to any Hamiltonian function, the map on cotangent bundles induced by any diffeomorphism of manifolds, and the coadjoint action of an element of a Lie group on a coadjoint orbit. Flows Any smooth function H on a symplectic manifold gives rise, by definition, to a Hamiltonian vector field X_H, and the set of all such vector fields forms a subalgebra of the Lie algebra of symplectic vector fields. The integration of the flow of a symplectic vector field is a symplectomorphism. Since symplectomorphisms preserve the symplectic 2-form and hence the symplectic volume form, Liouville's theorem in Hamiltonian mechanics follows. Symplectomorphisms that arise from Hamiltonian vector fields are known as Hamiltonian symplectomorphisms. Since the flow of a Hamiltonian vector field also preserves H, in physics this is interpreted as the law of conservation of energy. 
If the first Betti number of a connected symplectic manifold is zero, symplectic and Hamiltonian vector fields coincide, so the notions of Hamiltonian isotopy and symplectic isotopy of symplectomorphisms coincide. It can be shown that the equations for a geodesic may be formulated as a Hamiltonian flow; see Geodesics as Hamiltonian flows. The group of (Hamiltonian) symplectomorphisms The symplectomorphisms from a manifold back onto itself form an infinite-dimensional pseudogroup. The corresponding Lie algebra consists of symplectic vector fields. The Hamiltonian symplectomorphisms form a subgroup, whose Lie algebra is given by the Hamiltonian vector fields. The latter is isomorphic to the Lie algebra of smooth functions on the manifold with respect to the Poisson bracket, modulo the constants. The group of Hamiltonian symplectomorphisms of (M, ω) is usually denoted Ham(M, ω). Groups of Hamiltonian diffeomorphisms are simple, by a theorem of Banyaga. They have a natural geometry given by the Hofer norm. The homotopy type of the symplectomorphism group for certain simple symplectic four-manifolds, such as the product of spheres, can be computed using Gromov's theory of pseudoholomorphic curves.
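For linear maps of the plane, the defining condition reduces to the matrix identity MᵀJM = J, where J is the matrix of the standard symplectic form on R². The following self-contained sketch (all names are our choices) checks this for a rotation, which is the time-t flow of the harmonic oscillator Hamiltonian H = (p² + q²)/2, and for a non-symplectic stretch:

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def is_symplectic(M, tol=1e-12):
    """A linear map M of R^2 is a symplectomorphism iff M^T J M = J."""
    J = [[0.0, 1.0], [-1.0, 0.0]]
    MTJM = matmul(transpose(M), matmul(J, M))
    return all(abs(MTJM[i][j] - J[i][j]) < tol for i in range(2) for j in range(2))

t = 0.7  # the time-t flow of the harmonic oscillator is a rotation of phase space
R = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]
S = [[2.0, 0.0], [0.0, 1.0]]  # a stretch with det = 2 scales the symplectic form

print(is_symplectic(R), is_symplectic(S))  # True False
```

In two dimensions MᵀJM = det(M)·J, so the check is equivalent to det(M) = 1; in higher dimensions the matrix identity is strictly stronger than volume preservation.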
https://en.wikipedia.org/wiki/Law%20of%20total%20probability
In probability theory, the law (or formula) of total probability is a fundamental rule relating marginal probabilities to conditional probabilities. It expresses the total probability of an outcome which can be realized via several distinct events, hence the name. Statement The law of total probability is a theorem that states, in its discrete case, if {B_n : n = 1, 2, 3, …} is a finite or countably infinite partition of a sample space (in other words, a set of pairwise disjoint events whose union is the entire sample space) and each event B_n is measurable, then for any event A of the same sample space: P(A) = ∑_n P(A ∩ B_n), or, alternatively, P(A) = ∑_n P(A | B_n) P(B_n), where, for any n for which P(B_n) = 0, these terms are simply omitted from the summation since P(A | B_n) is finite. The summation can be interpreted as a weighted average, and consequently the marginal probability, P(A), is sometimes called "average probability"; "overall probability" is sometimes used in less formal writings. The law of total probability can also be stated for conditional probabilities: Taking the B_n as above, and assuming C is an event independent of any of the B_n: P(A | C) = ∑_n P(A | C ∩ B_n) P(B_n). Continuous case The law of total probability extends to the case of conditioning on events generated by continuous random variables. Let (Ω, F, P) be a probability space. Suppose X is a random variable with distribution function F_X, and A an event on (Ω, F, P). Then the law of total probability states P(A) = ∫_{−∞}^{∞} P(A | X = x) dF_X(x). If X admits a density function f_X, then the result is P(A) = ∫_{−∞}^{∞} P(A | X = x) f_X(x) dx. Moreover, for the specific case where A = {Y ∈ B} for another random variable Y and a Borel set B, this yields P(Y ∈ B) = ∫_{−∞}^{∞} P(Y ∈ B | X = x) f_X(x) dx. Example Suppose that two factories supply light bulbs to the market. Factory X's bulbs work for over 5000 hours in 99% of cases, whereas factory Y's bulbs work for over 5000 hours in 95% of cases. It is known that factory X supplies 60% of the total bulbs available and Y supplies 40% of the total bulbs available. What is the chance that a purchased bulb will work for longer than 5000 hours? 
Applying the law of total probability, we have: P(A) = P(A | B_X)·P(B_X) + P(A | B_Y)·P(B_Y) = (99/100)(6/10) + (95/100)(4/10) = 0.974, where P(B_X) = 6/10 is the probability that the purchased bulb was manufactured by factory X; P(B_Y) = 4/10 is the probability that the purchased bulb was manufactured by factory Y; P(A | B_X) = 99/100 is the probability that a bulb manufactured by X will work for over 5000 hours; P(A | B_Y) = 95/100 is the probability that a bulb manufactured by Y will work for over 5000 hours. Thus each purchased light bulb has a 97.4% chance to work for more than 5000 hours. Other names The term law of total probability is sometimes taken to mean the law of alternatives, which is a special case of the law of total probability applying to discrete random variables. One author uses the terminology of the "Rule of Average Conditional Probabilities", while another refers to it as the "continuous law of alternatives" in the continuous case. This result is given by Grimmett and Welsh as the partition theorem, a name that they also give to the related law of total expectation. See also Law of total expectation Law of total variance Law of total covariance Law of total cumulance Marginal distribution Notes References
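The computation in the bulb example takes only a few lines of Python (a sketch; the dictionary names are our choices):

```python
# prior: which factory made the purchased bulb (a partition of the sample space)
p_factory = {"X": 0.60, "Y": 0.40}
# conditional: P(bulb works > 5000 h | factory)
p_works_given = {"X": 0.99, "Y": 0.95}

# law of total probability: P(A) = sum over the partition of P(A | B) * P(B)
p_works = sum(p_works_given[b] * p_factory[b] for b in p_factory)
print(p_works)  # ~0.974
```

The marginal probability is the weighted average of the conditional probabilities, exactly as the article describes.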
https://en.wikipedia.org/wiki/Law%20of%20total%20expectation
The proposition in probability theory known as the law of total expectation, the law of iterated expectations (LIE), Adam's law, the tower rule, and the smoothing theorem, among other names, states that if X is a random variable whose expected value E(X) is defined, and Y is any random variable on the same probability space, then E(X) = E(E(X | Y)), i.e., the expected value of the conditional expected value of X given Y is the same as the expected value of X. One special case states that if {A_i} is a finite or countable partition of the sample space, then E(X) = ∑_i E(X | A_i) P(A_i). Note: The conditional expected value E(X | Y), with Y a random variable, is not a simple number; it is a random variable whose value depends on the value of Y. That is, the conditional expected value of X given the event Y = y is a number and it is a function of y. If we write g(y) for the value of E(X | Y = y), then the random variable E(X | Y) is g(Y). Example Suppose that only two factories supply light bulbs to the market. Factory A's bulbs work for an average of 5000 hours, whereas factory B's bulbs work for an average of 4000 hours. It is known that factory A supplies 60% of the total bulbs available. What is the expected length of time that a purchased bulb will work for? Applying the law of total expectation, we have: E(L) = E(L | A) P(A) + E(L | B) P(B) = 5000 · 0.6 + 4000 · 0.4 = 4600, where E(L) is the expected life of the bulb; P(A) = 0.6 is the probability that the purchased bulb was manufactured by factory A; P(B) = 0.4 is the probability that the purchased bulb was manufactured by factory B; E(L | A) = 5000 is the expected lifetime of a bulb manufactured by A; E(L | B) = 4000 is the expected lifetime of a bulb manufactured by B. Thus each purchased light bulb has an expected lifetime of 4600 hours. Informal proof When a joint probability density function is well defined and the expectations are integrable, we write for the general case E(E(X | Y)) = ∫ E(X | Y = y) f_Y(y) dy = ∫ (∫ x f_{X|Y}(x | y) dx) f_Y(y) dy = ∫∫ x f_{X,Y}(x, y) dx dy = E(X). A similar derivation works for discrete distributions using summation instead of integration. 
For the specific case of a partition, give each cell of the partition a unique label and let the random variable Y be the function of the sample space that assigns a cell's label to each point in that cell. Proof in the general case Let (Ω, F, P) be a probability space on which two sub σ-algebras G₁ ⊆ G₂ ⊆ F are defined. For a random variable X on such a space, the smoothing law states that if E[X] is defined, i.e. E[|X|] < ∞, then E[E[X | G₂] | G₁] = E[X | G₁] (a.s.). Proof. Since a conditional expectation is a Radon–Nikodym derivative, verifying the following two properties establishes the smoothing law: E[E[X | G₂] | G₁] is G₁-measurable; ∫_A E[E[X | G₂] | G₁] dP = ∫_A X dP for all A ∈ G₁. The first of these properties holds by definition of the conditional expectation. To prove the second one, min(E[X₊], E[X₋]) < ∞, so the integral ∫_A X dP is defined (not equal to ∞ − ∞). The second property thus holds since A ∈ G₁ ⊆ G₂ implies ∫_A E[E[X | G₂] | G₁] dP = ∫_A E[X | G₂] dP = ∫_A X dP. Corollary. In the special case when G₁ = {∅, Ω} and G₂ = σ(Y), the smoothing law reduces to E[E[X | Y]] = E[X]. Alternative proof for E[E[X | Y]] = E[X]: This is a simple consequence of the measure-theoretic definition of conditional expectation. By definition, E[X | Y] is a σ(Y)-measurable random variable that satisfies ∫_A E[X | Y] dP = ∫_A X dP for every measurable set A ∈ σ(Y). Taking A = Ω proves the claim. See also The fundamental theorem of
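As a concrete check, the bulb example from above works out in Python as follows (a sketch; the dictionary names are our choices):

```python
p_factory = {"A": 0.6, "B": 0.4}    # P(bulb came from each factory)
mean_life = {"A": 5000, "B": 4000}  # conditional expectation E[L | factory]

# law of total expectation over a partition: E[L] = sum of E[L | A_i] * P(A_i)
expected_life = sum(mean_life[b] * p_factory[b] for b in p_factory)
print(expected_life)  # 4600.0
```

The overall expectation is a probability-weighted average of the conditional expectations, which is exactly what the tower rule asserts.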
https://en.wikipedia.org/wiki/Autonomous%20system%20%28mathematics%29
In mathematics, an autonomous system or autonomous differential equation is a system of ordinary differential equations which does not explicitly depend on the independent variable. When the variable is time, they are also called time-invariant systems. Many laws in physics, where the independent variable is usually assumed to be time, are expressed as autonomous systems because it is assumed the laws of nature which hold now are identical to those for any point in the past or future. Definition An autonomous system is a system of ordinary differential equations of the form (d/dt) x(t) = f(x(t)), where x takes values in n-dimensional Euclidean space; t is often interpreted as time. It is distinguished from systems of differential equations of the form (d/dt) x(t) = g(x(t), t), in which the law governing the evolution of the system does not depend solely on the system's current state but also the parameter t, again often interpreted as time; such systems are by definition not autonomous. Properties Solutions are invariant under horizontal translations: Let x₁(t) be a unique solution of the initial value problem for an autonomous system (d/dt) x(t) = f(x(t)), x(0) = x₀. Then x₂(t) = x₁(t − t₀) solves (d/dt) x(t) = f(x(t)), x(t₀) = x₀. Denoting s = t − t₀ gets x₁(s) = x₂(t) and ds = dt, thus (d/dt) x₂(t) = (d/ds) x₁(s) = f(x₁(s)) = f(x₂(t)). For the initial condition, the verification is trivial: x₂(t₀) = x₁(t₀ − t₀) = x₁(0) = x₀. Example The equation dy/dx = (2 − y)y is autonomous, since the independent variable (x) does not explicitly appear in the equation. To plot the slope field and isocline for this equation, one can use the following code in GNU Octave/MATLAB:
Ffun = @(X, Y)(2 - Y) .* Y;          % function f(x,y)=(2-y)y
[X, Y] = meshgrid(0:.2:6, -1:.2:3);  % choose the plot sizes
DY = Ffun(X, Y); DX = ones(size(DY)); % generate the plot values
quiver(X, Y, DX, DY, 'k');           % plot the direction field in black
hold on;
contour(X, Y, DY, [0 1 2], 'g');     % add the isoclines (0 1 2) in green
title('Slope field and isoclines for f(x,y)=(2-y)y')
One can observe from the plot that the function (2 − y)y is x-invariant, and so is the shape of the solution, i.e. y(x − x₀) is a solution for any shift x₀. 
Solving the equation symbolically in MATLAB, by running
syms y(x);
equation = (diff(y) == (2 - y) * y);
% solve the equation for a general solution symbolically
y_general = dsolve(equation);
one obtains two equilibrium solutions, y = 0 and y = 2, and a third solution involving an unknown constant C3, -2 / (exp(C3 - 2 * x) - 1). Picking some specific values for the initial condition, one can add the plot of several solutions:
% solve the initial value problem symbolically
% for different initial conditions
y1 = dsolve(equation, y(1) == 1);
y2 = dsolve(equation, y(2) == 1);
y3 = dsolve(equation, y(3) == 1);
y4 = dsolve(equation, y(1) == 3);
y5 = dsolve(equation, y(2) == 3);
y6 = dsolve(equation, y(3) == 3);
% plot the solutions
ezplot(y1, [0 6]); ezplot(y2, [0 6]); ezplot(y3, [0 6]);
ezplot(y4, [0 6]); ezplot(y5, [0 6]); ezplot(y6, [0 6]);
title('Slope field, isoclines and solutions for f(x,y)=(2-y)y')
legend('Slope field', 'Isoclines', 'Solutions y_{1..6}');
text([1 2 3], [1 1 1], strcat('\leftarrow', {'y_1', 'y_2', 'y_3'}));
text([1 2 3], [3 3 3], strcat('\leftarrow', {'y_4', 'y_5', 'y_6'}));
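The x-translation invariance derived in the Properties section can also be observed numerically. The following Python sketch (a stand-in for the MATLAB workflow above; all names are our choices) integrates y′ = (2 − y)y with the classical Runge–Kutta method from two different starting abscissas and confirms that the solutions coincide after shifting:

```python
def f(x, y):
    return (2 - y) * y  # right-hand side; x does not actually appear (autonomous)

def rk4(f, x0, y0, x1, n=2000):
    """Classical 4th-order Runge-Kutta for y' = f(x, y), from (x0, y0) to x = x1."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# translation invariance: starting the same initial value one unit later
# only shifts the solution, so y(2) started at x=0 equals y(3) started at x=1
a = rk4(f, 0.0, 0.5, 2.0)
b = rk4(f, 1.0, 0.5, 3.0)
print(abs(a - b))  # 0 up to floating-point roundoff
```

Because f ignores x, the two integrations perform identical arithmetic, which is precisely the numerical shadow of the autonomy property.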
https://en.wikipedia.org/wiki/Law%20of%20total%20variance
In probability theory, the law of total variance or variance decomposition formula or conditional variance formulas or law of iterated variances, also known as Eve's law, states that if X and Y are random variables on the same probability space, and the variance of X is finite, then Var(X) = E[Var(X | Y)] + Var(E[X | Y]). In language perhaps better known to statisticians than to probability theorists, the two terms are the "unexplained" and the "explained" components of the variance respectively (cf. fraction of variance unexplained, explained variation). In actuarial science, specifically credibility theory, the first component is called the expected value of the process variance (EVPV) and the second is called the variance of the hypothetical means (VHM). These two components are also the source of the term "Eve's law", from the initials EV VE for "expectation of variance" and "variance of expectation". Example Suppose Y is a coin flip with the probability of heads being h. Suppose that when Y = heads then X is drawn from a normal distribution with mean μ_h and standard deviation σ_h, and that when Y = tails then X is drawn from a normal distribution with mean μ_t and standard deviation σ_t. Then the first, "unexplained" term on the right-hand side of the above formula is the weighted average of the variances, hσ_h² + (1 − h)σ_t², and the second, "explained" term is the variance of the distribution that gives μ_h with probability h and gives μ_t with probability 1 − h. Formulation There is a general variance decomposition formula for n ≥ 2 components (see below). For example, with two conditioning random variables: Var[X] = E[Var(X | Y₁, Y₂)] + E[Var(E[X | Y₁, Y₂] | Y₁)] + Var(E[X | Y₁]), which follows from the law of total conditional variance: Var(X | Y₁) = E[Var(X | Y₁, Y₂) | Y₁] + Var(E[X | Y₁, Y₂] | Y₁). Note that the conditional expected value E(X | Y) is a random variable in its own right, whose value depends on the value of Y. Notice that the conditional expected value of X given the event Y = y is a function of y (this is where adherence to the conventional and rigidly case-sensitive notation of probability theory becomes important!). If we write E(X | Y = y) = g(y), then the random variable E(X | Y) is just g(Y). Similar comments apply to the conditional variance. 
One special case (similar to the law of total expectation) states that if A₁, …, A_n is a partition of the whole outcome space, that is, these events are mutually exclusive and exhaustive, then Var(X) = ∑_i Var(X | A_i) P(A_i) + ∑_i (E[X | A_i])² P(A_i) − (∑_i E[X | A_i] P(A_i))². In this formula, the first component is the expectation of the conditional variance; the other two components are the variance of the conditional expectation. Proof The law of total variance can be proved using the law of total expectation. First, Var[X] = E[X²] − (E[X])² from the definition of variance. Again, from the definition of variance, and applying the law of total expectation, we have Var[X] = E[E[X² | Y]] − (E[E[X | Y]])². Now we rewrite the conditional second moment of X in terms of its variance and first moment, E[X² | Y] = Var[X | Y] + (E[X | Y])², and apply the law of total expectation on the right hand side: Var[X] = E[Var[X | Y] + (E[X | Y])²] − (E[E[X | Y]])². Since the expectation of a sum is the sum of expectations, the terms can now be regrouped: Var[X] = E[Var[X | Y]] + (E[(E[X | Y])²] − (E[E[X | Y]])²). Finally, we recognize the terms in the second set of parentheses as the variance of the conditional expectation E[X | Y]: Var[X] = E[Var[X | Y]] + Var[E[X | Y]]. General variance decomposition applicable to dynamic sys
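The coin-flip mixture example can be checked numerically. In this Python sketch (the parameter values are chosen arbitrarily for illustration) both sides of the decomposition are computed from the mixture's first two moments:

```python
h = 0.3                  # P(Y = heads)
mu_h, sd_h = 2.0, 1.0    # X | heads ~ N(mu_h, sd_h)
mu_t, sd_t = -1.0, 0.5   # X | tails ~ N(mu_t, sd_t)

# direct variance of the mixture, via Var(X) = E[X^2] - (E[X])^2
ex = h * mu_h + (1 - h) * mu_t
ex2 = h * (sd_h**2 + mu_h**2) + (1 - h) * (sd_t**2 + mu_t**2)
var_direct = ex2 - ex**2

# law of total variance: E[Var(X|Y)] ("unexplained") + Var(E[X|Y]) ("explained")
unexplained = h * sd_h**2 + (1 - h) * sd_t**2
explained = h * mu_h**2 + (1 - h) * mu_t**2 - ex**2

print(var_direct, unexplained + explained)  # the two sides agree
```

The agreement holds for any choice of h, means, and standard deviations, since both sides are algebraic rearrangements of the same second moment.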
https://en.wikipedia.org/wiki/Mutual%20exclusivity
In logic and probability theory, two events (or propositions) are mutually exclusive or disjoint if they cannot both occur at the same time. A clear example is the set of outcomes of a single coin toss, which can result in either heads or tails, but not both. In the coin-tossing example, both outcomes are, in theory, collectively exhaustive, which means that at least one of the outcomes must happen, so these two possibilities together exhaust all the possibilities. However, not all mutually exclusive events are collectively exhaustive. For example, the outcomes 1 and 4 of a single roll of a six-sided die are mutually exclusive (both cannot happen at the same time) but not collectively exhaustive (there are other possible outcomes; 2, 3, 5, 6). Logic In logic, two mutually exclusive propositions are propositions that logically cannot be true in the same sense at the same time. To say that more than two propositions are mutually exclusive, depending on the context, means that one cannot be true if the other one is true, or at least one of them cannot be true. The term pairwise mutually exclusive always means that two of them cannot be true simultaneously. Probability In probability theory, events E1, E2, ..., En are said to be mutually exclusive if the occurrence of any one of them implies the non-occurrence of the remaining n − 1 events. Therefore, two mutually exclusive events cannot both occur. Formally said, the intersection of each two of them is empty (the null event): A ∩ B = ∅. In consequence, mutually exclusive events have the property: P(A ∩ B) = 0. For example, in a standard 52-card deck with two colors it is impossible to draw a card that is both red and a club because clubs are always black. If just one card is drawn from the deck, either a red card (heart or diamond) or a black card (club or spade) will be drawn. When A and B are mutually exclusive, P(A ∪ B) = P(A) + P(B). 
To find the probability of drawing a red card or a club, for example, add together the probability of drawing a red card and the probability of drawing a club. In a standard 52-card deck, there are twenty-six red cards and thirteen clubs: 26/52 + 13/52 = 39/52, or 3/4. One would have to draw at least two cards in order to draw both a red card and a club. The probability of doing so in two draws depends on whether the first card drawn was replaced before the second drawing, since without replacement there is one fewer card after the first card was drawn. The probabilities of the individual events (red, and club) are multiplied rather than added. The probability of drawing a red and a club in two drawings without replacement (in either order) is then 2 × (26/52) × (13/51) = 676/2652, or 13/51. With replacement, the probability would be 2 × (26/52) × (13/52) = 676/2704, or 13/52. In probability theory, the word or allows for the possibility of both events happening. The probability of one or both events occurring is denoted P(A ∪ B) and in general, it equals P(A) + P(B) − P(A ∩ B). Therefore, in the case of drawing a red card or a king, drawing any of a red king, a red non-king, or a black king is considered a success.
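The card computations above can be reproduced exactly with rational arithmetic (a sketch using Python's fractions module; the variable names are our choices):

```python
from fractions import Fraction

red, clubs, deck = Fraction(26), Fraction(13), Fraction(52)

# mutually exclusive events: P(red or club) = P(red) + P(club)
p_red_or_club = red / deck + clubs / deck
print(p_red_or_club)  # 3/4

# one red and one club in two draws (either order), without replacement...
p_without = 2 * (red / deck) * (clubs / (deck - 1))
# ...and with replacement
p_with = 2 * (red / deck) * (clubs / deck)
print(p_without)  # 13/51
print(p_with)     # 1/4, i.e. 13/52
```

Using Fraction avoids the rounding that floating-point division would introduce, so the results match the article's exact values.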
https://en.wikipedia.org/wiki/Jacques%20Hadamard
Jacques Salomon Hadamard (8 December 1865 – 17 October 1963) was a French mathematician who made major contributions in number theory, complex analysis, differential geometry and partial differential equations. Biography The son of a teacher, Amédée Hadamard, of Jewish descent, and Claire Marie Jeanne Picard, Hadamard was born in Versailles, France and attended the Lycée Charlemagne and Lycée Louis-le-Grand, where his father taught. In 1884 Hadamard entered the École Normale Supérieure, having placed first in the entrance examinations both there and at the École Polytechnique. His teachers included Tannery, Hermite, Darboux, Appell, Goursat and Picard. He obtained his doctorate in 1892 and in the same year was awarded the Grand Prix des Sciences Mathématiques for his essay on the Riemann zeta function. In 1892 Hadamard married Louise-Anna Trénel, also of Jewish descent, with whom he had three sons and two daughters. The following year he took up a lectureship in the University of Bordeaux, where he proved his celebrated inequality on determinants, which led to the discovery of Hadamard matrices when equality holds. In 1896 he made two important contributions: he proved the prime number theorem, using complex function theory (also proved independently by Charles Jean de la Vallée-Poussin); and he was awarded the Bordin Prize of the French Academy of Sciences for his work on geodesics in the differential geometry of surfaces and dynamical systems. In the same year he was appointed Professor of Astronomy and Rational Mechanics in Bordeaux. His foundational work on geometry and symbolic dynamics continued in 1898 with the study of geodesics on surfaces of negative curvature. For his cumulative work, he was awarded the Prix Poncelet in 1898. After the Dreyfus affair, which involved him personally because his second cousin Lucie was the wife of Dreyfus, Hadamard became politically active and a staunch supporter of Jewish causes though he professed to be an atheist in his religion. 
In 1897 he moved back to Paris, holding positions in the Sorbonne and the Collège de France, where he was appointed Professor of Mechanics in 1909. In addition to this post, he was appointed to chairs of analysis at the École Polytechnique in 1912 and at the École Centrale in 1920, succeeding Jordan and Appell. In Paris Hadamard concentrated his interests on the problems of mathematical physics, in particular partial differential equations, the calculus of variations and the foundations of functional analysis. He introduced the idea of well-posed problem and the method of descent in the theory of partial differential equations, culminating in his seminal book on the subject, based on lectures given at Yale University in 1922. Later in his life he wrote on probability theory and mathematical education. Hadamard was elected to the French Academy of Sciences in 1916, in succession to Poincaré, whose complete works he helped edit. He became foreign member of the Royal Netherlands Academy of Arts and Sciences
https://en.wikipedia.org/wiki/Hellenic%20Statistical%20Authority
The Hellenic Statistical Authority, known by its acronym ELSTAT, is the national statistical service of Greece. The purpose of ELSTAT is to produce, on a regular basis, official statistics, as well as to conduct statistical surveys which: cover all the fields of activity of the public and private sector, underpin the processes for decision making, policy drawing and evaluating the policies of the Government and the public administrations and services (evaluation indicators), are submitted to international agencies in compliance with the obligations of the country and concern the general public or specific categories of users of statistics in Greece and abroad. In accordance with its establishing law, ELSTAT is an independent authority and it is not subject to the control of any governmental bodies or other administrative authority. Its operation is subject to the control of the Hellenic Parliament. History The agency was originally established as the National Statistical Service of Greece (Εθνική Στατιστική Υπηρεσία Ελλάδος) in 1956 by Legislative Decree 3627/1956. In 1986, by Presidential Decree 224/1986, it was transformed into the General Secretariat of the National Statistical Service of Greece and became part of the Ministry of National Economy. Law 2392/1996 provided for the arrangement of issues concerning the access of the General Secretariat of the National Statistical Service of Greece to administrative sources and files, as well as statistical confidentiality issues. On 20 October 2009, the new finance minister in the newly elected Cabinet of George Papandreou announced that Greece's budget deficit was expected to reach ~12.5% of GDP. On 8 January 2010, the European Commission published its report 'Report on Greek government deficit and debt statistics'. On 23 April 2010 Prime Minister George Papandreou formally requested an international bailout for Greece. 
The European Union (EU), European Central Bank (ECB) and International Monetary Fund (IMF) agreed to participate in the bailout. On 2 May 2010, the IMF, Papandreou, and other Eurozone PMs agreed to the first bailout package for €110 billion ($143 billion) over 3 years. The third austerity package was announced by the Greek government. As recommended by Eurostat, ESYE was dissolved and replaced by ELSTAT in July 2010 via Law 3832/2010 (amended since by Laws 3842/2010, 3899/2010, 3943/2011, 3965/2011, 4047/2012 and 4072/2012). Overview The Hellenic Statistical Authority collects data pertaining to the population (it is responsible for the conduct of the population census, every 10 years), health and social security, employment and unemployment, education, etc. The statistical data collected by ELSTAT are used by both the Greek State and international organisations (such as UNESCO, the UN, OECD), by enterprises, the scientific community, citizens and others. ELSTAT employs 740 people working in the central office and in 50 Regional Statistical Offices located in vario
https://en.wikipedia.org/wiki/Mathematical%20morphology
Mathematical morphology (MM) is a theory and technique for the analysis and processing of geometrical structures, based on set theory, lattice theory, topology, and random functions. MM is most commonly applied to digital images, but it can be employed as well on graphs, surface meshes, solids, and many other spatial structures. Topological and geometrical continuous-space concepts such as size, shape, convexity, connectivity, and geodesic distance were introduced by MM on both continuous and discrete spaces. MM is also the foundation of morphological image processing, which consists of a set of operators that transform images according to the above characterizations. The basic morphological operators are erosion, dilation, opening and closing. MM was originally developed for binary images, and was later extended to grayscale functions and images. The subsequent generalization to complete lattices is widely accepted today as MM's theoretical foundation. History Mathematical morphology was developed in 1964 by the collaborative work of Georges Matheron and Jean Serra, at the École des Mines de Paris, France. Matheron supervised the PhD thesis of Serra, devoted to the quantification of mineral characteristics from thin cross sections, and this work resulted in a novel practical approach, as well as theoretical advancements in integral geometry and topology. In 1968, the Centre de Morphologie Mathématique was founded by the École des Mines de Paris in Fontainebleau, France, led by Matheron and Serra. During the rest of the 1960s and most of the 1970s, MM dealt essentially with binary images, treated as sets, and generated a large number of binary operators and techniques: hit-or-miss transform, dilation, erosion, opening, closing, granulometry, thinning, skeletonization, ultimate erosion, conditional bisector, and others. A random approach was also developed, based on novel image models. Most of the work in that period was developed in Fontainebleau. 
From the mid-1970s to mid-1980s, MM was generalized to grayscale functions and images as well. Besides extending the main concepts (such as dilation, erosion, etc.) to functions, this generalization yielded new operators, such as morphological gradients, top-hat transform and the Watershed (MM's main segmentation approach). In the 1980s and 1990s, MM gained a wider recognition, as research centers in several countries began to adopt and investigate the method. MM started to be applied to a large number of imaging problems and applications, especially in the field of non-linear filtering of noisy images. In 1986, Serra further generalized MM, this time to a theoretical framework based on complete lattices. This generalization brought flexibility to the theory, enabling its application to a much larger number of structures, including color images, video, graphs, meshes, etc. At the same time, Matheron and Serra also formulated a theory for morphological filtering, based on the new lattice frame
https://en.wikipedia.org/wiki/Totally%20real%20number%20field
In number theory, a number field F is called totally real if for each embedding of F into the complex numbers the image lies inside the real numbers. Equivalent conditions are that F is generated over Q by one root of an integer polynomial P, all of the roots of P being real; or that the tensor product algebra of F with the real field, over Q, is isomorphic to a tensor power of R. For example, quadratic fields F of degree 2 over Q are either real (and then totally real), or complex, depending on whether the square root of a positive or negative number is adjoined to Q. In the case of cubic fields, a cubic integer polynomial P irreducible over Q will have at least one real root. If it has one real and two complex roots the corresponding cubic extension of Q defined by adjoining the real root will not be totally real, although it is a field of real numbers. The totally real number fields play a significant special role in algebraic number theory. An abelian extension of Q is either totally real, or contains a totally real subfield over which it has degree two. Any number field that is Galois over the rationals must be either totally real or totally imaginary. See also Totally imaginary number field CM-field, a totally imaginary quadratic extension of a totally real field References Field (mathematics) Algebraic number theory
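The cubic case described above can be tested concretely: a depressed cubic x³ + px + q has three real roots exactly when its discriminant −4p³ − 27q² is positive (zero allows repeated real roots), so for an irreducible cubic this decides whether the field it generates is totally real. A small sketch; the two example polynomials are chosen here for illustration.

```python
# Deciding "totally real" for a cubic field Q[x]/(x^3 + p x + q):
# the depressed cubic has three distinct real roots iff its
# discriminant -4p^3 - 27q^2 is positive.

from fractions import Fraction

def cubic_is_totally_real(p, q):
    disc = -4 * Fraction(p) ** 3 - 27 * Fraction(q) ** 2
    return disc > 0

# x^3 - 3x + 1 is irreducible over Q with three real roots,
# so it generates a totally real cubic field.
print(cubic_is_totally_real(-3, 1))   # True

# x^3 - 2 has one real and two complex roots: Q(cbrt(2)) is not totally real.
print(cubic_is_totally_real(0, -2))   # False
```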
https://en.wikipedia.org/wiki/Local%20analysis
In mathematics, the term local analysis has at least two meanings, both derived from the idea of looking at a problem relative to each prime number p first, and then later trying to integrate the information gained at each prime into a 'global' picture. These are forms of the localization approach. Group theory In group theory, local analysis was started by the Sylow theorems, which contain significant information about the structure of a finite group G for each prime number p dividing the order of G. This area of study was enormously developed in the quest for the classification of finite simple groups, starting with the Feit–Thompson theorem that groups of odd order are solvable. Number theory In number theory one may study a Diophantine equation, for example, modulo p for all primes p, looking for constraints on solutions. The next step is to look modulo prime powers, and then for solutions in the p-adic field. This kind of local analysis provides conditions for solution that are necessary. In cases where local analysis (plus the condition that there are real solutions) provides also sufficient conditions, one says that the Hasse principle holds: this is the best possible situation. It does for quadratic forms, but certainly not in general (for example for elliptic curves). The point of view that one would like to understand what extra conditions are needed has been very influential, for example for cubic forms. Some form of local analysis underlies both the standard applications of the Hardy–Littlewood circle method in analytic number theory, and the use of adele rings, making this one of the unifying principles across number theory. See also :Category:Localization (mathematics) Localization of a category Localization of a module Localization of a ring Localization of a topological space Hasse principle Number theory Finite groups Localization (mathematics)
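The modular step of local analysis can be illustrated with a toy Diophantine equation: x² + y² = 3 has no integer solutions, and this is already detected modulo 4, since every square is congruent to 0 or 1 mod 4. A brute-force sketch; the equation and modulus are chosen here for illustration.

```python
# Toy local analysis: x^2 + y^2 = 3 has no integer solutions,
# which is visible modulo 4, where squares are only 0 or 1.

def solvable_mod(n, m):
    """Does x^2 + y^2 = n (mod m) have a solution in residues mod m?"""
    return any((x * x + y * y) % m == n % m
               for x in range(m) for y in range(m))

print(solvable_mod(3, 4))  # False: a local obstruction at the prime 2
print(solvable_mod(5, 4))  # True: no obstruction mod 4 (indeed 1^2 + 2^2 = 5)
```

Passing such congruence tests for every modulus is necessary but, outside Hasse-principle situations, not sufficient for a global solution.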
https://en.wikipedia.org/wiki/Jordan%20normal%20form
In linear algebra, a Jordan normal form, also known as a Jordan canonical form (JCF), is an upper triangular matrix of a particular form called a Jordan matrix representing a linear operator on a finite-dimensional vector space with respect to some basis. Such a matrix has each non-zero off-diagonal entry equal to 1, immediately above the main diagonal (on the superdiagonal), and with identical diagonal entries to the left and below them. Let V be a vector space over a field K. Then a basis with respect to which the matrix has the required form exists if and only if all eigenvalues of the matrix lie in K, or equivalently if the characteristic polynomial of the operator splits into linear factors over K. This condition is always satisfied if K is algebraically closed (for instance, if it is the field of complex numbers). The diagonal entries of the normal form are the eigenvalues (of the operator), and the number of times each eigenvalue occurs is called the algebraic multiplicity of the eigenvalue. If the operator is originally given by a square matrix M, then its Jordan normal form is also called the Jordan normal form of M. Any square matrix has a Jordan normal form if the field of coefficients is extended to one containing all the eigenvalues of the matrix. In spite of its name, the normal form for a given M is not entirely unique, as it is a block diagonal matrix formed of Jordan blocks, the order of which is not fixed; it is conventional to group blocks for the same eigenvalue together, but no ordering is imposed among the eigenvalues, nor among the blocks for a given eigenvalue, although the latter could for instance be ordered by weakly decreasing size. The Jordan–Chevalley decomposition is particularly simple with respect to a basis for which the operator takes its Jordan normal form. The diagonal form for diagonalizable matrices, for instance normal matrices, is a special case of the Jordan normal form. 
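The diagonalizability criterion above can be checked mechanically: a matrix is diagonalizable iff, for each eigenvalue λ, the eigenspace dimension n − rank(A − λI) equals λ's algebraic multiplicity. The sketch below uses a 2×2 Jordan block as its example (chosen for illustration; it is not the 4×4 example elided above) and exact rational arithmetic for the rank.

```python
# Sketch: detect a defective matrix by comparing geometric and algebraic
# multiplicity.  The 2x2 Jordan block with eigenvalue 4 is the classic case.

from fractions import Fraction

def rank(rows):
    """Rank of a small matrix via exact Gaussian elimination."""
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[4, 1],
     [0, 4]]
lam = 4                      # the only eigenvalue, algebraic multiplicity 2
B = [[A[i][j] - (lam if i == j else 0) for j in range(2)] for i in range(2)]
geom_mult = 2 - rank(B)      # dimension of the eigenspace for lam
print(geom_mult)             # 1 < 2, so A is defective: its Jordan form is A itself
```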
The Jordan normal form is named after Camille Jordan, who first stated the Jordan decomposition theorem in 1870. Overview Notation Some textbooks have the ones on the subdiagonal; that is, immediately below the main diagonal instead of on the superdiagonal. The eigenvalues are still on the main diagonal. Motivation An n × n matrix A is diagonalizable if and only if the sum of the dimensions of the eigenspaces is n. Or, equivalently, if and only if A has n linearly independent eigenvectors. Not all matrices are diagonalizable; matrices that are not diagonalizable are called defective matrices. Consider the following matrix: Including multiplicity, the eigenvalues of A are λ = 1, 2, 4, 4. The dimension of the eigenspace corresponding to the eigenvalue 4 is 1 (and not 2), so A is not diagonalizable. However, there is an invertible matrix P such that J = P−1AP, where The matrix is almost diagonal. This is the Jordan normal form of A. The section Example below fills in the details of the computation. Complex matrices In gener
https://en.wikipedia.org/wiki/Star%20polygon
In geometry, a star polygon is a type of non-convex polygon. Regular star polygons have been studied in depth; while star polygons in general appear not to have been formally defined, certain notable ones can arise through truncation operations on regular simple and star polygons. Branko Grünbaum identified two primary definitions used by Johannes Kepler, one being the regular star polygons with intersecting edges that don't generate new vertices, and the second being simple isotoxal concave polygons. The first usage is included in polygrams which includes polygons like the pentagram but also compound figures like the hexagram. One definition of a star polygon, used in turtle graphics, is a polygon having 2 or more turns (turning number and density), like in spirolaterals. Names Star polygon names combine a numeral prefix, such as penta-, with the Greek suffix -gram (in this case generating the word pentagram). The prefix is normally a Greek cardinal, but synonyms using other prefixes exist. For example, a nine-pointed polygon or enneagram is also known as a nonagram, using the ordinal nona from Latin. The -gram suffix derives from γραμμή (grammḗ) meaning a line. Regular star polygon A "regular star polygon" is a self-intersecting, equilateral equiangular polygon. A regular star polygon is denoted by its Schläfli symbol {p/q}, where p (the number of vertices) and q (the density) are relatively prime (they share no factors) and q ≥ 2. The density of a polygon can also be called its turning number, the sum of the turn angles of all the vertices divided by 360°. The symmetry group of {n/k} is dihedral group Dn of order 2n, independent of k. Regular star polygons were first studied systematically by Thomas Bradwardine, and later Johannes Kepler. 
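The Schläfli symbol {p/q} can be unpacked computationally: number the p circle points 0, …, p − 1 and connect every q-th one, so the k-th vertex visited is k·q mod p. A minimal sketch; the pentagram and octagram are chosen as examples.

```python
# Visit order of the regular star polygon {p/q}: starting at vertex 0,
# step k lands on k*q mod p, returning to 0 after p steps when gcd(p, q) = 1.

from math import gcd

def star_polygon_order(p, q):
    assert gcd(p, q) == 1, "p and q must be coprime for a single-component star"
    return [(k * q) % p for k in range(p)]

print(star_polygon_order(5, 2))  # pentagram {5/2}: [0, 2, 4, 1, 3]
print(star_polygon_order(8, 3))  # octagram {8/3}
```

The pentagram order [0, 2, 4, 1, 3] is exactly the first-to-third-to-fifth walk described for the pentagon below; when gcd(p, q) > 1 the same rule instead traces a compound figure such as the hexagram.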
Construction via vertex connection Regular star polygons can be created by connecting one vertex of a simple, regular, p-sided polygon to another, non-adjacent vertex and continuing the process until the original vertex is reached again. Alternatively for integers p and q, it can be considered as being constructed by connecting every qth point out of p points regularly spaced in a circular placement. For instance, in a regular pentagon, a five-pointed star can be obtained by drawing a line from the first to the third vertex, from the third vertex to the fifth vertex, from the fifth vertex to the second vertex, from the second vertex to the fourth vertex, and from the fourth vertex to the first vertex. If q is greater than half of p, then the construction will result in the same polygon as p-q; connecting every third vertex of the pentagon will yield an identical result to that of connecting every second vertex. However, the vertices will be reached in the opposite direction, which makes a difference when retrograde polygons are incorporated in higher-dimensional polytopes. For example, an antiprism formed from a prograde pentagram {5/2} results in a pentagrammic antiprism; the analogous constr
https://en.wikipedia.org/wiki/Honda%20Toshiaki
Honda Toshiaki was a Japanese political economist in the late Edo period. Born in Echigo, Toshiaki went to Edo to study astronomy, mathematics and kendo. At the age of 24, he opened his own school. He wrote A Secret Plan of Government (Keisei Hisaku; 経世秘策), in which he proposed lifting the ban on foreign trade and the colonization of Ezo, and Tales of the West (Seiiki Monogatari; 西域物語), both in 1798. Toshiaki was a polymath with some knowledge of the Western world, and wrote that Japan ought to mimic the policies of England, another island country, and "called for more active official promotion of national wealth and strength". In particular, he wrote about four goals for the Japan of his day: production of gunpowder, smelting of iron and other metals, setting up a merchant fleet, and settlement of Ezo in the north. References Bibliography Keene, Donald. The Japanese Discovery of Europe, 1720–1830. Stanford: Stanford University Press. [revised and expanded edition of The Discovery of Europe: Honda Toshiaki and Other Discoverers, 1720–1798, London, 1952] 1744 births 1821 deaths People from Niigata Prefecture 18th-century Japanese mathematicians 19th-century Japanese mathematicians Deified Japanese people
https://en.wikipedia.org/wiki/Polynomial%20long%20division
In algebra, polynomial long division is an algorithm for dividing a polynomial by another polynomial of the same or lower degree, a generalized version of the familiar arithmetic technique called long division. It can be done easily by hand, because it separates an otherwise complex division problem into smaller ones. Sometimes using a shorthand version called synthetic division is faster, with less writing and fewer calculations. Another abbreviated method is polynomial short division (Blomqvist's method). Polynomial long division is an algorithm that implements the Euclidean division of polynomials, which starting from two polynomials A (the dividend) and B (the divisor) produces, if B is not zero, a quotient Q and a remainder R such that A = BQ + R, and either R = 0 or the degree of R is lower than the degree of B. These conditions uniquely define Q and R, which means that Q and R do not depend on the method used to compute them. The result R = 0 occurs if and only if the polynomial A has B as a factor. Thus long division is a means for testing whether one polynomial has another as a factor, and, if it does, for factoring it out. For example, if a root r of A is known, it can be factored out by dividing A by (x – r). Example Polynomial long division Find the quotient and the remainder of the division of the dividend, by the divisor. The dividend is first rewritten like this: The quotient and remainder can then be determined as follows: Divide the first term of the dividend by the highest term of the divisor (meaning the one with the highest power of x, which in this case is x). Place the result above the bar (x3 ÷ x = x2). Multiply the divisor by the result just obtained (the first term of the eventual quotient). Write the result under the first two terms of the dividend (). 
Subtract the product just obtained from the appropriate terms of the original dividend (being careful that subtracting something having a minus sign is equivalent to adding something having a plus sign), and write the result underneath (). Then, "bring down" the next term from the dividend. Repeat the previous three steps, except this time use the two terms that have just been written as the dividend. Repeat step 4. This time, there is nothing to "bring down". The polynomial above the bar is the quotient q(x), and the number left over (5) is the remainder r(x). The long division algorithm for arithmetic is very similar to the above algorithm, in which the variable x is replaced (in base 10) by the specific number 10. Polynomial short division Blomqvist's method is an abbreviated version of the long division above. This pen-and-paper method uses the same algorithm as polynomial long division, but mental calculation is used to determine remainders. This requires less writing, and can therefore be a faster method once mastered. The division is at first written in a similar way as long multiplication with the dividend at the top, and the divisor below it. The q
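The divide-multiply-subtract-bring-down procedure above can be sketched as a short routine on coefficient lists (highest power first). This is an illustrative implementation, not the only convention; the sample division, x³ − 2x² − 4 by x − 3, is chosen to produce the remainder 5 quoted above.

```python
# Sketch of polynomial long division on coefficient lists, highest power
# first: repeatedly divide the leading terms, subtract, and bring down.

def poly_divmod(A, B):
    """Return (Q, R) with A = B*Q + R and deg R < deg B.
    B must not be the zero polynomial."""
    R = list(A)
    Q = [0] * max(len(A) - len(B) + 1, 1)
    while len(R) >= len(B) and any(R):
        shift = len(R) - len(B)
        c = R[0] / B[0]                       # divide the leading terms
        Q[len(Q) - 1 - shift] = c
        # subtract c * x^shift * B from the running remainder
        R = [r - c * b for r, b in zip(R, B + [0] * shift)]
        R.pop(0)                              # the leading term cancels
    return Q, R

# (x^3 - 2x^2 - 4) / (x - 3) = x^2 + x + 3 remainder 5
Q, R = poly_divmod([1, -2, 0, -4], [1, -3])
print(Q, R)
```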
https://en.wikipedia.org/wiki/M%C3%B6bius%20transformation
In geometry and complex analysis, a Möbius transformation of the complex plane is a rational function of the form of one complex variable z; here the coefficients a, b, c, d are complex numbers satisfying . Geometrically, a Möbius transformation can be obtained by first applying the inverse stereographic projection from the plane to the unit sphere, moving and rotating the sphere to a new location and orientation in space, and then applying a stereographic projection to map from the sphere back to the plane. These transformations preserve angles, map every straight line to a line or circle, and map every circle to a line or circle. The Möbius transformations are the projective transformations of the complex projective line. They form a group called the Möbius group, which is the projective linear group . Together with its subgroups, it has numerous applications in mathematics and physics. Möbius geometries and their transformations generalize this case to any number of dimensions over other fields. Möbius transformations are named in honor of August Ferdinand Möbius; they are an example of homographies, linear fractional transformations, bilinear transformations, and spin transformations (in relativity theory). Overview Möbius transformations are defined on the extended complex plane (i.e., the complex plane augmented by the point at infinity). Stereographic projection identifies with a sphere, which is then called the Riemann sphere; alternatively, can be thought of as the complex projective line . The Möbius transformations are exactly the bijective conformal maps from the Riemann sphere to itself, i.e., the automorphisms of the Riemann sphere as a complex manifold; alternatively, they are the automorphisms of as an algebraic variety. Therefore, the set of all Möbius transformations forms a group under composition. This group is called the Möbius group, and is sometimes denoted . 
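The group structure just mentioned can be verified numerically: composing two Möbius transformations corresponds to multiplying their 2×2 coefficient matrices. A minimal sketch using Python complex numbers; the two sample transformations are chosen for illustration.

```python
# Sketch: a Moebius transformation z -> (a z + b)/(c z + d) with ad - bc != 0,
# and a check that composition matches multiplication of coefficient matrices.

def mobius(a, b, c, d):
    assert a * d - b * c != 0, "coefficients must satisfy ad - bc != 0"
    return lambda z: (a * z + b) / (c * z + d)

def mat_mul(m, n):
    (a, b), (c, d) = m
    (e, f), (g, h) = n
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

f = mobius(1, 2, 0, 1)        # translation z -> z + 2
g = mobius(0, 1, 1, 0)        # inversion  z -> 1/z
((a, b), (c, d)) = mat_mul(((1, 2), (0, 1)), ((0, 1), (1, 0)))
h = mobius(a, b, c, d)        # built from the matrix product

z = 1 + 2j
print(f(g(z)), h(z))          # the two values coincide
```

This matrix correspondence (up to a common nonzero scalar on a, b, c, d) is exactly why the Möbius group is the projective linear group PGL(2, C).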
The Möbius group is isomorphic to the group of orientation-preserving isometries of hyperbolic 3-space and therefore plays an important role when studying hyperbolic 3-manifolds. In physics, the identity component of the Lorentz group acts on the celestial sphere in the same way that the Möbius group acts on the Riemann sphere. In fact, these two groups are isomorphic. An observer who accelerates to relativistic velocities will see the pattern of constellations as seen near the Earth continuously transform according to infinitesimal Möbius transformations. This observation is often taken as the starting point of twistor theory. Certain subgroups of the Möbius group form the automorphism groups of the other simply-connected Riemann surfaces (the complex plane and the hyperbolic plane). As such, Möbius transformations play an important role in the theory of Riemann surfaces. The fundamental group of every Riemann surface is a discrete subgroup of the Möbius group (see Fuchsian group and Kleinian group). A particularly important discrete subgroup of the
https://en.wikipedia.org/wiki/Octagon
In geometry, an octagon (from the Greek ὀκτάγωνον oktágōnon, "eight angles") is an eight-sided polygon or 8-gon. A regular octagon has Schläfli symbol {8} and can also be constructed as a quasiregular truncated square, t{4}, which alternates two types of edges. A truncated octagon, t{8}, is a hexadecagon, {16}. A 3D analog of the octagon is the rhombicuboctahedron, whose triangular faces play the role of the octagon's replaced edges if one considers the octagon to be a truncated square. Properties The sum of all the internal angles of any octagon is 1080°. As with all polygons, the external angles total 360°. If squares are constructed all internally or all externally on the sides of an octagon, then the midpoints of the segments connecting the centers of opposite squares form a quadrilateral that is both equidiagonal and orthodiagonal (that is, whose diagonals are equal in length and at right angles to each other). The midpoint octagon of a reference octagon has its eight vertices at the midpoints of the sides of the reference octagon. If squares are constructed all internally or all externally on the sides of the midpoint octagon, then the midpoints of the segments connecting the centers of opposite squares themselves form the vertices of a square. Regularity A regular octagon is a closed figure with sides of the same length and internal angles of the same size. It has eight lines of reflective symmetry and rotational symmetry of order 8. A regular octagon is represented by the Schläfli symbol {8}. The internal angle at each vertex of a regular octagon is 135° (3π/4 radians). The central angle is 45° (π/4 radians). Area The area of a regular octagon of side length a is given by A = 2(1 + √2)a² ≈ 4.828a². In terms of the circumradius R, the area is A = 2√2 R² ≈ 2.828R². In terms of the apothem r (see also inscribed figure), the area is A = 8(√2 − 1)r² ≈ 3.314r². These last two coefficients bracket the value of pi, the area of the unit circle.
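The standard closed forms for the area of a regular octagon, 2(1 + √2)a² in the side a, 2√2 R² in the circumradius R, and 8(√2 − 1)r² in the apothem r, can be sanity-checked against the shoelace formula applied to the actual vertices. A small numerical sketch:

```python
# Numerical check of the regular-octagon area formulas against the
# shoelace formula on explicitly constructed vertices.

import math

def shoelace(pts):
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

R = 1.0                                          # circumradius
pts = [(R * math.cos(2 * math.pi * k / 8), R * math.sin(2 * math.pi * k / 8))
       for k in range(8)]

area = shoelace(pts)
a = 2 * R * math.sin(math.pi / 8)                # side length
r = R * math.cos(math.pi / 8)                    # apothem (inradius)

print(abs(area - 2 * (1 + math.sqrt(2)) * a * a) < 1e-12)   # A = 2(1+sqrt2) a^2
print(abs(area - 2 * math.sqrt(2) * R * R) < 1e-12)         # A = 2*sqrt2 R^2
print(abs(area - 8 * (math.sqrt(2) - 1) * r * r) < 1e-12)   # A = 8(sqrt2-1) r^2
```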
The area can also be expressed as A = S² − a², where S is the span of the octagon, or the second-shortest diagonal; and a is the length of one of the sides, or bases. This is easily proven if one takes an octagon, draws a square around the outside (making sure that four of the eight sides overlap with the four sides of the square) and then takes the corner triangles (these are 45–45–90 triangles) and places them with right angles pointed inward, forming a square. The edges of this square are each the length of the base. Given the length of a side a, the span S is S = a/√2 + a + a/√2 = (1 + √2)a ≈ 2.414a. The span, then, is equal to the silver ratio times the side, a. The area is then as above: A = ((1 + √2)a)² − a² = 2(1 + √2)a² ≈ 4.828a². Expressed in terms of the span, the area is A = 2(√2 − 1)S² ≈ 0.828S². Another simple formula for the area is A = 2aS. More often the span S is known, and the length of the sides, a, is to be determined, as when cutting a square piece of material into a regular octagon. From the above, a = S/(1 + √2) ≈ 0.414S. The two end lengths e on each side (the leg lengths of the triangles truncated from the square), as well as being equal to a/√2, may be calculated as e = (S − a)/2. Circumradius and inradius The circu
https://en.wikipedia.org/wiki/One-form%20%28differential%20geometry%29
In differential geometry, a one-form on a differentiable manifold is a smooth section of the cotangent bundle. Equivalently, a one-form on a manifold is a smooth mapping of the total space of the tangent bundle of to whose restriction to each fibre is a linear functional on the tangent space. Symbolically, where is linear. Often one-forms are described locally, particularly in local coordinates. In a local coordinate system, a one-form is a linear combination of the differentials of the coordinates: where the are smooth functions. From this perspective, a one-form has a covariant transformation law on passing from one coordinate system to another. Thus a one-form is an order 1 covariant tensor field. Examples The most basic non-trivial differential one-form is the "change in angle" form dθ. This is defined as the derivative of the angle "function" (which is only defined up to an additive constant), which can be explicitly defined in terms of the atan2 function. Taking the derivative yields the following formula for the total derivative: dθ = (x dy − y dx)/(x² + y²). While the angle "function" cannot be continuously defined – the function atan2 is discontinuous along the negative x-axis – which reflects the fact that angle cannot be continuously defined, this derivative is continuously defined except at the origin, reflecting the fact that infinitesimal (and indeed local) changes in angle can be defined everywhere except the origin. Integrating this derivative along a path gives the total change in angle over the path, and integrating over a closed loop gives the winding number times 2π. In the language of differential geometry, this derivative is a one-form, and it is closed (its derivative is zero) but not exact (it is not the derivative of a 0-form, that is, a function), and in fact it generates the first de Rham cohomology of the punctured plane. This is the most basic example of such a form, and it is fundamental in differential geometry.
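The winding-number statement can be tested numerically. The sketch below integrates the change-in-angle form dθ = (x dy − y dx)/(x² + y²) along polygonal loops, using the fact that its exact integral along a straight segment equals the angle the segment sweeps at the origin; the sample loops are chosen for illustration.

```python
# Sketch: integrating the closed-but-not-exact one-form dtheta around a
# closed loop recovers 2*pi times the winding number about the origin.

import math

def winding_number(path):
    """Integrate dtheta along a closed polygonal path avoiding the origin."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:] + path[:1]):
        # exact integral of dtheta along the segment = angle swept at origin
        total += math.atan2(x0 * y1 - y0 * x1, x0 * x1 + y0 * y1)
    return total / (2 * math.pi)

circle = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100))
          for k in range(100)]
print(round(winding_number(circle)))        # 1: the loop encircles the origin once

square_away = [(2, 1), (3, 1), (3, 2), (2, 2)]
print(round(winding_number(square_away)))   # 0: the origin is outside this loop
```

That the second integral vanishes while the first does not is the non-exactness of dθ in computational form: the integral depends only on the loop's homotopy class in the punctured plane.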
Differential of a function Let be open (for example, an interval ), and consider a differentiable function with derivative The differential of at a point is defined as a certain linear map of the variable Specifically, (The meaning of the symbol is thus revealed: it is simply an argument, or independent variable, of the linear function ) Hence the map sends each point to a linear functional This is the simplest example of a differential (one-)form. In terms of the de Rham cochain complex, one has an assignment from zero-forms (scalar functions) to one-forms; that is, See also References Differential forms 1 (number)
https://en.wikipedia.org/wiki/Synthetic%20division
In algebra, synthetic division is a method for manually performing Euclidean division of polynomials, with less writing and fewer calculations than long division. It is mostly taught for division by linear monic polynomials (known as Ruffini's rule), but the method can be generalized to division by any polynomial. The advantages of synthetic division are that it allows one to calculate without writing variables, it uses few calculations, and it takes significantly less space on paper than long division. Also, the subtractions in long division are converted to additions by switching the signs at the very beginning, helping to prevent sign errors. Regular synthetic division The first example is synthetic division with only a monic linear denominator . The numerator can be written as . The zero of the denominator is . The coefficients of are arranged as follows, with the zero of on the left: The after the bar is "dropped" to the last row. The is multiplied by the before the bar, and placed in the . An is performed in the next column. The previous two steps are repeated and the following is obtained: Here, the last term (-123) is the remainder while the rest correspond to the coefficients of the quotient. The terms are written with increasing degree from right to left beginning with degree zero for the remainder and the result. Hence the quotient and remainder are: Evaluating polynomials by the remainder theorem The above form of synthetic division is useful in the context of the polynomial remainder theorem for evaluating univariate polynomials. To summarize, the value of at is equal to the remainder of the division of by The advantage of calculating the value this way is that it requires just over half as many multiplication steps as naive evaluation. An alternative evaluation strategy is Horner's method. Expanded synthetic division This method generalizes to division by any monic polynomial with only a slight modification.
Using the same steps as before, perform the following division: We concern ourselves only with the coefficients. Write the coefficients of the polynomial to be divided at the top. Negate the coefficients of the divisor. Write in every coefficient but the first one on the left in an upward right diagonal (see next diagram). Note the change of sign from 1 to −1 and from −3 to 3 . "Drop" the first coefficient after the bar to the last row. Multiply the dropped number by the diagonal before the bar, and place the resulting entries diagonally to the right from the dropped entry. Perform an addition in the next column. Repeat the previous two steps until you would go past the entries at the top with the next diagonal. Then simply add up any remaining columns. Count the terms to the left of the bar. Since there are two, the remainder has degree one and this is the two right-most terms under the bar. Mark the separation with a vertical bar. The terms are written w
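The regular (monic linear) tableau described above reduces to "drop, multiply, add". A minimal sketch; the specific dividend x³ − 12x² − 42 and zero r = 3 are assumptions chosen so that the division produces the remainder −123 quoted above.

```python
# Sketch of regular synthetic division by a monic linear divisor x - r:
# drop the leading coefficient, then repeatedly multiply by r and add.

def synthetic_division(coeffs, r):
    """coeffs lists the dividend from the highest power down.
    Returns (quotient_coeffs, remainder) for division by x - r."""
    out = [coeffs[0]]                 # "drop" the first coefficient
    for c in coeffs[1:]:
        out.append(c + r * out[-1])   # multiply by r, add down the column
    return out[:-1], out[-1]

# (x^3 - 12x^2 - 42) / (x - 3): quotient x^2 - 9x - 27, remainder -123
q, rem = synthetic_division([1, -12, 0, -42], 3)
print(q, rem)

# Remainder theorem: the remainder equals the dividend evaluated at r.
p = lambda x: x**3 - 12 * x**2 - 42
print(p(3) == rem)
```

The loop body is the same multiply-and-add step as Horner's method, which is why evaluating a polynomial by this route costs only one multiplication per coefficient.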
https://en.wikipedia.org/wiki/Monic%20polynomial
In algebra, a monic polynomial is a non-zero univariate polynomial (that is, a polynomial in a single variable) in which the leading coefficient (the nonzero coefficient of highest degree) is equal to 1. That is to say, a monic polynomial is one that can be written as with Uses Monic polynomials are widely used in algebra and number theory, since they produce many simplifications and they avoid divisions and denominators. Here are some examples. Every polynomial is associated to a unique monic polynomial. In particular, the unique factorization property of polynomials can be stated as: Every polynomial can be uniquely factorized as the product of its leading coefficient and a product of monic irreducible polynomials. Vieta's formulas are simpler in the case of monic polynomials: The th elementary symmetric function of the roots of a monic polynomial of degree equals where is the coefficient of the th power of the indeterminate. Euclidean division of a polynomial by a monic polynomial does not introduce divisions of coefficients. Therefore, it is defined for polynomials with coefficients in a commutative ring. Algebraic integers are defined as the roots of monic polynomials with integer coefficients. Properties Every nonzero univariate polynomial (polynomial with a single indeterminate) can be written where are the coefficients of the polynomial, and the leading coefficient is not zero. By definition, such a polynomial is monic if A product of monic polynomials is monic. A product of polynomials is monic if and only if the product of the leading coefficients of the factors equals . This implies that, the monic polynomials in a univariate polynomial ring over a commutative ring form a monoid under polynomial multiplication. Two monic polynomials are associated if and only if they are equal, since the multiplication of a polynomial by a nonzero constant produces a polynomial with this constant as its leading coefficient. 
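Two of the properties above, closure of monic polynomials under multiplication and Vieta's formulas, can be illustrated with coefficient-list arithmetic. A minimal sketch; the example polynomials are chosen for illustration.

```python
# Sketch: multiply coefficient lists (highest power first) by convolution,
# and observe that the product of monic polynomials is monic.

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

p = [1, -3, 2]      # x^2 - 3x + 2 = (x - 1)(x - 2), monic
q = [1, 5]          # x + 5, monic
prod = poly_mul(p, q)
print(prod)         # leading coefficient is 1: the product is monic
print(prod[0] == 1)

# Vieta for the monic quadratic p: sum of roots 1 + 2 = -(coefficient of x),
# product of roots 1 * 2 = constant term.
print(1 + 2 == -p[1] and 1 * 2 == p[2])
```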
Divisibility induces a partial order on monic polynomials. This results almost immediately from the preceding properties. Polynomial equations Let be a polynomial equation, where is a univariate polynomial of degree . If one divides all coefficients of by its leading coefficient one obtains a new polynomial equation that has the same solutions and consists to equate to zero a monic polynomial. For example, the equation is equivalent to the monic equation When the coefficients are unspecified, or belong to a field where division does not result into fractions (such as or a finite field), this reduction to monic equations may provide simplification. On the other hand, as shown by the previous example, when the coefficients are explicit integers, the associated monic polynomial is generally more complicated. Therefore, primitive polynomials are often used instead of monic polynomials when dealing with integer coefficients. Integral elements Monic polynomial equations are at the basis of the theory of a
https://en.wikipedia.org/wiki/Boolean%20prime%20ideal%20theorem
In mathematics, the Boolean prime ideal theorem states that ideals in a Boolean algebra can be extended to prime ideals. A variation of this statement for filters on sets is known as the ultrafilter lemma. Other theorems are obtained by considering different mathematical structures with appropriate notions of ideals, for example, rings and prime ideals (of ring theory), or distributive lattices and maximal ideals (of order theory). This article focuses on prime ideal theorems from order theory. Although the various prime ideal theorems may appear simple and intuitive, they cannot be deduced in general from the axioms of Zermelo–Fraenkel set theory without the axiom of choice (abbreviated ZF). Instead, some of the statements turn out to be equivalent to the axiom of choice (AC), while others—the Boolean prime ideal theorem, for instance—represent a property that is strictly weaker than AC. It is due to this intermediate status between ZF and ZF + AC (ZFC) that the Boolean prime ideal theorem is often taken as an axiom of set theory. The abbreviations BPI or PIT (for Boolean algebras) are sometimes used to refer to this additional axiom. Prime ideal theorems An order ideal is a (non-empty) directed lower set. If the considered partially ordered set (poset) has binary suprema (a.k.a. joins), as do the posets within this article, then this is equivalently characterized as a non-empty lower set I that is closed for binary suprema (that is, implies ). An ideal I is prime if its set-theoretic complement in the poset is a filter (that is, implies or ). Ideals are proper if they are not equal to the whole poset. Historically, the first statement relating to later prime ideal theorems was in fact referring to filters—subsets that are ideals with respect to the dual order. The ultrafilter lemma states that every filter on a set is contained within some maximal (proper) filter—an ultrafilter. 
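On a finite set the ultrafilter lemma can be seen directly, since every ultrafilter there is principal. The toy sketch below builds the principal filter at a point of a three-element set and checks the ultrafilter condition: the family is a proper filter (closed under intersection and supersets) that contains X or its complement for every subset X. The set and the non-example are assumptions made for illustration.

```python
# Toy illustration: a principal filter {A : x in A} on a finite set is an
# ultrafilter of the powerset Boolean algebra.

from itertools import combinations

S = frozenset({0, 1, 2})

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def principal_ultrafilter(x):
    return {A for A in powerset(S) if x in A}

def is_filter(F):
    """Proper filter: excludes the empty set, contains S, and is closed
    under binary intersection and supersets."""
    return (frozenset() not in F and S in F and
            all(A & B in F for A in F for B in F) and
            all(B in F for A in F for B in powerset(S) if A <= B))

def is_ultrafilter(F):
    # the "prime" condition: for every X, the filter contains X or S \ X
    return is_filter(F) and all((A in F) or (S - A in F) for A in powerset(S))

U = principal_ultrafilter(0)
print(is_ultrafilter(U))                                        # True
print(is_ultrafilter({A for A in powerset(S) if len(A) >= 2}))  # False: not a filter
```

On infinite sets non-principal ultrafilters also exist, but exhibiting one is exactly where the ultrafilter lemma (and hence a fragment of choice) is needed.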
Recall that filters on a set are proper filters of the Boolean algebra of its powerset. In this special case, maximal filters (i.e. filters that are not strict subsets of any proper filter) and prime filters (i.e. filters that, whenever they contain a union of subsets X and Y, also contain X or Y) coincide. The dual of this statement thus assures that every ideal of a powerset is contained in a prime ideal. The above statement led to various generalized prime ideal theorems, each of which exists in a weak and in a strong form. Weak prime ideal theorems state that every non-trivial algebra of a certain class has at least one prime ideal. In contrast, strong prime ideal theorems require that every ideal that is disjoint from a given filter can be extended to a prime ideal that is still disjoint from that filter. In the case of algebras that are not posets, one uses different substructures instead of filters. Many forms of these theorems are actually known to be equivalent, so that the assertion that "PIT" holds is usually taken as the assertion that the corresponding statement for
https://en.wikipedia.org/wiki/Ideal
Ideal may refer to: Philosophy Ideal (ethics), values that one actively pursues as goals Platonic ideal, a philosophical idea of trueness of form, associated with Plato Mathematics Ideal (ring theory), special subsets of a ring considered in abstract algebra Ideal, special subsets of a semigroup Ideal (order theory), special kind of lower sets of an order Ideal (set theory), a collection of sets regarded as "small" or "negligible" Ideal (Lie algebra), a particular subset in a Lie algebra Ideal point, a boundary point in hyperbolic geometry Ideal triangle, a triangle in hyperbolic geometry whose vertices are ideal points Science Ideal chain, in science, the simplest model describing a polymer Ideal gas law, in physics, governing the pressure of an ideal gas Ideal transformer, an electrical transformer having zero resistance and perfect magnetic threading Ideal final result, in TRIZ methodology, the best possible solution Thought experiment, sometimes called an ideal experiment Ideal type, a social science term Ideal solution, a solution with thermodynamic properties analogous to those of a mixture of ideal gases Entertainment Ideal (group), a late-1990s/2000s American R&B group Ideal (German band), an early-1980s German rock group Ideal (album), a 1999 album by the R&B group Ideal An Ideal, a 2016 album by Li Ronghao Ideal (novel), a 1934 novel by Ayn Rand, published in 2015 Ideal (play), a 1936 play by Ayn Rand, adapted from the novel, published in 1989 Ideal (TV series), a British situation comedy Ideal Film Company, a British film studio of the Silent Era Ideal Ice Cream, an ice cream company Ideal Toy Company, a defunct toy company Places Ideal, Georgia Ideal, Illinois Ideal, South Dakota Ideal Mini School Miscellaneous Changhe Ideal, a city car produced by a joint-venture of Changhe and Suzuki Ideal 18, a Canadian sailboat design IDEAL Scholars Fund, an American scholarship program for underrepresented students Ideal (newspaper), a Spanish-language newspaper 
iDEAL, an online payment method in the Netherlands Ideal Industries, an American manufacturer of electrical connectors and tools IDEAL framework (Idea, Development, Exploration, Assessment, Long-term study), a framework for describing the stages of innovation in surgery IDEAL (Interactive Development Environment for an Application Lifecycle), a development language above COBOL for DATACOM/DB See also Idealism (disambiguation) Idea Idol (disambiguation) Idle (disambiguation) Idyl (disambiguation)
https://en.wikipedia.org/wiki/Gelfand%E2%80%93Naimark%20theorem
In mathematics, the Gelfand–Naimark theorem states that an arbitrary C*-algebra A is isometrically *-isomorphic to a C*-subalgebra of bounded operators on a Hilbert space. This result was proven by Israel Gelfand and Mark Naimark in 1943 and was a significant point in the development of the theory of C*-algebras since it established the possibility of considering a C*-algebra as an abstract algebraic entity without reference to particular realizations as an operator algebra. Details The Gelfand–Naimark representation π is the direct sum of representations πf of A where f ranges over the set of pure states of A and πf is the irreducible representation associated to f by the GNS construction. Thus the Gelfand–Naimark representation acts on the Hilbert direct sum of the Hilbert spaces Hf by π(x) is a bounded linear operator since it is the direct sum of a family of operators, each one having norm ≤ ||x||. Theorem. The Gelfand–Naimark representation of a C*-algebra is an isometric *-representation. It suffices to show the map π is injective, since for *-morphisms of C*-algebras injective implies isometric. Let x be a non-zero element of A. By the Krein extension theorem for positive linear functionals, there is a state f on A such that f(z) ≥ 0 for all non-negative z in A and f(−x* x) < 0. Consider the GNS representation πf with cyclic vector ξ. Since it follows that πf (x) ≠ 0, so π (x) ≠ 0, so π is injective. The construction of Gelfand–Naimark representation depends only on the GNS construction and therefore it is meaningful for any Banach *-algebra A having an approximate identity. In general (when A is not a C*-algebra) it will not be a faithful representation. The closure of the image of π(A) will be a C*-algebra of operators called the C*-enveloping algebra of A. Equivalently, we can define the C*-enveloping algebra as follows: Define a real valued function on A by as f ranges over pure states of A. 
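The formula dropped from the sentence above can be reconstructed in standard notation (our symbols, not a quotation of the original): the real-valued function in question is the C* semi-norm

```latex
\|x\|' \;=\; \sup \bigl\{\, f(x^{*}x)^{1/2} \;:\; f \ \text{a pure state of}\ A \,\bigr\},
```

and, as the text notes further on, the same value is obtained by taking the supremum over all states of A.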
This is a semi-norm, which we refer to as the C* semi-norm of A. The set I of elements of A whose semi-norm is 0 forms a two-sided ideal in A closed under involution. Thus the quotient vector space A / I is an involutive algebra and the norm factors through a norm on A / I, which, except for completeness, is a C* norm on A / I (these are sometimes called pre-C*-norms). Taking the completion of A / I relative to this pre-C*-norm produces a C*-algebra B. By the Krein–Milman theorem one can show without too much difficulty that for x an element of the Banach *-algebra A having an approximate identity: It follows that an equivalent form for the C* norm on A is to take the above supremum over all states. The universal construction is also used to define universal C*-algebras of isometries. Remark. The Gelfand representation or Gelfand isomorphism for a commutative C*-algebra with unit is an isometric *-isomorphism from to the algebra of continuous complex-valued functions on the space of multiplicative linear functionals, wh
https://en.wikipedia.org/wiki/Isosceles%20triangle
In geometry, an isosceles triangle is a triangle that has two sides of equal length. Sometimes it is specified as having exactly two sides of equal length, and sometimes as having at least two sides of equal length, the latter version thus including the equilateral triangle as a special case. Examples of isosceles triangles include the isosceles right triangle, the golden triangle, and the faces of bipyramids and certain Catalan solids. The mathematical study of isosceles triangles dates back to ancient Egyptian mathematics and Babylonian mathematics. Isosceles triangles have been used as decoration from even earlier times, and appear frequently in architecture and design, for instance in the pediments and gables of buildings. The two equal sides are called the legs and the third side is called the base of the triangle. The other dimensions of the triangle, such as its height, area, and perimeter, can be calculated by simple formulas from the lengths of the legs and base. Every isosceles triangle has an axis of symmetry along the perpendicular bisector of its base. The two angles opposite the legs are equal and are always acute, so the classification of the triangle as acute, right, or obtuse depends only on the angle between its two legs. Terminology, classification, and examples Euclid defined an isosceles triangle as a triangle with exactly two equal sides, but modern treatments prefer to define isosceles triangles as having at least two equal sides. The difference between these two definitions is that the modern version makes equilateral triangles (with three equal sides) a special case of isosceles triangles. A triangle that is not isosceles (having three unequal sides) is called scalene. "Isosceles" is made from the Greek roots "isos" (equal) and "skelos" (leg). The same word is used, for instance, for isosceles trapezoids, trapezoids with two equal sides, and for isosceles sets, sets of points every three of which form an isosceles triangle.
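The "simple formulas" mentioned above can be made concrete. In the following sketch (function name ours), the height follows from the Pythagorean theorem applied to half the base, h = sqrt(a² − (b/2)²) for legs a and base b:

```python
import math

def isosceles_measures(leg, base):
    """Height, area, and perimeter of an isosceles triangle
    with two sides of length `leg` and base `base`."""
    if not (0 < base < 2 * leg):
        raise ValueError("no such triangle")
    height = math.sqrt(leg**2 - (base / 2) ** 2)  # Pythagoras on half the base
    area = base * height / 2
    perimeter = 2 * leg + base
    return height, area, perimeter

h, area, p = isosceles_measures(5, 6)  # the classic 5-5-6 triangle
assert (h, area, p) == (4.0, 12.0, 16)
```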
In an isosceles triangle that has exactly two equal sides, the equal sides are called legs and the third side is called the base. The angle included by the legs is called the vertex angle and the angles that have the base as one of their sides are called the base angles. The vertex opposite the base is called the apex. In the equilateral triangle case, since all sides are equal, any side can be called the base. Whether an isosceles triangle is acute, right or obtuse depends only on the angle at its apex. In Euclidean geometry, the base angles can not be obtuse (greater than 90°) or right (equal to 90°) because their measures would sum to at least 180°, the total of all angles in any Euclidean triangle. Since a triangle is obtuse or right if and only if one of its angles is obtuse or right, respectively, an isosceles triangle is obtuse, right or acute if and only if its apex angle is respectively obtuse, right or acute. In Edwin Abbott's book Flatland, this classification of shapes was used as a
https://en.wikipedia.org/wiki/Golden%20angle
In geometry, the golden angle is the smaller of the two angles created by sectioning the circumference of a circle according to the golden ratio; that is, into two arcs such that the ratio of the length of the smaller arc to the length of the larger arc is the same as the ratio of the length of the larger arc to the full circumference of the circle. Algebraically, let a+b be the circumference of a circle, divided into a longer arc of length a and a smaller arc of length b such that The golden angle is then the angle subtended by the smaller arc of length b. It measures approximately 137.5077640500378546463487 ...° or in radians 2.39996322972865332 ... . The name comes from the golden angle's connection to the golden ratio φ; the exact value of the golden angle is or where the equivalences follow from well-known algebraic properties of the golden ratio. As its sine and cosine are transcendental numbers, the golden angle cannot be constructed using a straightedge and compass. Derivation The golden ratio is equal to φ = a/b given the conditions above. Let ƒ be the fraction of the circumference subtended by the golden angle, or equivalently, the golden angle divided by the angular measurement of the circle. But since it follows that This is equivalent to saying that φ² golden angles can fit in a circle. The fraction of a circle occupied by the golden angle is therefore The golden angle g can therefore be numerically approximated in degrees as: or in radians as : Golden angle in nature The golden angle plays a significant role in the theory of phyllotaxis; for example, the golden angle is the angle separating the florets on a sunflower. Analysis of the pattern shows that it is highly sensitive to the angle separating the individual primordia, with the Fibonacci angle giving the parastichy with optimal packing density.
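The numerical values quoted above follow directly from the golden ratio; a short sketch (variable names ours):

```python
import math

phi = (1 + math.sqrt(5)) / 2     # the golden ratio

f = 1 / phi**2                   # fraction of the circle: 1/phi^2 = 2 - phi
golden_deg = 360 * f             # about 137.50776405...
golden_rad = 2 * math.pi * f     # about 2.39996322972865...

assert abs(golden_deg - 137.50776405003785) < 1e-9
assert abs(golden_rad - 2.399963229728653) < 1e-12
assert abs(phi**2 * f - 1) < 1e-12   # phi^2 golden angles fill the circle
```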
Mathematical modelling of a plausible physical mechanism for floret development has shown the pattern arising spontaneously from the solution of a nonlinear partial differential equation on a plane. See also 137 (number) 138 (number) References External links Golden Angle at MathWorld Golden ratio Angle Mathematical constants
https://en.wikipedia.org/wiki/Stem
Stem or STEM may refer to: Plant stem, a structural axis of a vascular plant Science, technology, engineering, and mathematics Language and writing Word stem, part of a word responsible for its lexical meaning Stemming, a process in natural language processing Stem (music), in music notation, the vertical lines directly connected to the note head Stem (typography), the main vertical stroke of a letter Stem, the opening of a multiple choice question Music and audio Stem (audio), a collection of audio sources mixed together to be dealt with downstream as one unit Stem (music), a part of a written musical note Stem mixing and mastering, a method of mixing audio material The Stems, an Australian garage punk band "Stem" (DJ Shadow song), 1996 "Stem" (Ringo Sheena song), 2003 "Stem", a song by Hayden from the 1995 album Everything I Long For "Stem", a song by Static-X from the 1999 album Wisconsin Death Trip "Die Stem van Suid-Afrika", or "Die Stem", the national anthem of South Africa during the apartheid era Science, technology and transportation Scanning transmission electron microscopy, a type of microscopy Spatiotemporal Epidemiological Modeler, free software Stem, part of a compound variable in the Rexx computer programming language Stem (bicycle part), connecting the handlebars to the steerer tube of a bicycle fork Stem (ship), the most forward part of a boat or ship's bow Stem, part of a watch Other uses Stem (glass), the stem of a drinking glass Stem (lesbian), or soft butch, who exhibits some stereotypical butch traits Stem (skiing), a technique in skiing Stem, North Carolina, U.S., a town STEM.org, an American multinational education company Stemming, a term in climbing Upgrade (film), a 2018 film, originally released as STEM, featuring an AI chip called STEM See also Stemm (disambiguation) STEM Academy (disambiguation) Stem cell, an undifferentiated biological cell that can differentiate into specialized cells Stem group, or crown group, in phylogenetics Main 
stem, the primary downstream segment of a river Stipe (botany), a stalk to support some other structure Stipe (mycology), the stem of a mushroom under the cap Trunk (botany), the woody stem of a tree
https://en.wikipedia.org/wiki/List%20of%20Major%20League%20Baseball%20career%20games%20finished%20leaders
In baseball statistics, a relief pitcher is credited with a game finished (denoted by GF) if he is the last pitcher to pitch for his team in a game. A starting pitcher is not credited with a GF for pitching a complete game. Mariano Rivera is the all-time leader in games finished with 952. Rivera is the only pitcher in MLB history to finish more than 900 career games. Trevor Hoffman and Lee Smith are the only other pitchers to finish more than 800 games in their careers. Key List Stats updated as of October 1, 2023. See also Games pitched Games started Notes References External links Major League Baseball Finished Major League Baseball statistics
https://en.wikipedia.org/wiki/Putout
In baseball statistics, a putout (PO) is awarded to a defensive player who (generally while in secure possession of the ball) records an out by one of the following methods: Tagging a runner with the ball when he is not touching a base (a tagout) Catching a batted or thrown ball and tagging a base to put out a batter or runner (a force out, or if done after a flyout, a doubling off) Catching a thrown ball and tagging a base to record an out on an appeal play Catching a third strike (a strikeout) Catching a batted ball on the fly (a flyout) Being positioned closest to a runner called out for interference In a regulation nine-inning game, the winning team will always have a total of 27 putouts, as one putout is awarded for every defensive out made; this is one aspect of proving a box score. While the abbreviation for putout is "PO", baseball scorekeeping typically records the specific manner in which an out was achieved, without explicitly noting which player is awarded the putout for common plays. For example, a strikeout is recorded without noting the putout by the catcher, with additional detail provided only as needed: "Fryman struck out (catcher to first)" in a play-by-play summary indicates an out recorded following an uncaught third strike, with the putout credited to the first baseman rather than the catcher. All-time records Content in this section has been updated through completion of the 2022 major-league season. Career records Jake Beckley: 23,767 (1888–1907) Cap Anson: 22,572 (1871–1897) Ed Konetchy: 21,378 (1907–1921) Eddie Murray: 21,265 (1977–1997) Charlie Grimm: 20,722 (1916–1936) Stuffy McInnis: 20,120 (1909–1927) Mickey Vernon: 19,819 (1939–1960) Jake Daubert: 19,634 (1910–1924) Lou Gehrig: 19,525 (1923–1939) Joe Kuhel: 19,386 (1930–1947) Note: each of the above players was primarily a first baseman. Note: entering the season, Joey Votto has the most putouts among active MLB players, with 14,440.
Source: Single season records The most putouts recorded by any player in a single major-league season is 1,846 by Jiggs Donahue, a first baseman with the 1907 Chicago White Sox. Pitchers Source: Catchers Source: Note: as the majority of putouts by catchers occur on strikeouts, most single-season putout records for catchers have occurred in recent seasons (excepting the shortened season), consistent with the increase in total strikeouts per MLB season (for example; 42,104 in 2021 compared to 34,489 in 2011). First basemen Source: Second basemen Source: Third basemen Source: Shortstops Source: Left fielders Source: Center fielders Source: Right fielders Source: See also Assist (baseball) References Fielding statistics
https://en.wikipedia.org/wiki/Total%20chances
In baseball statistics, total chances (TC), also called chances offered, represents the number of plays in which a defensive player has participated. It is the sum of putouts plus assists plus errors. Chances accepted refers to the total of putouts and assists only. See also Fielding percentage References Fielding statistics
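The definitions above are simple sums; a minimal sketch (function names ours, with illustrative figures only):

```python
def total_chances(putouts, assists, errors):
    """TC, or chances offered: every play the fielder took part in."""
    return putouts + assists + errors

def chances_accepted(putouts, assists):
    """Putouts and assists only; errors are excluded."""
    return putouts + assists

# A hypothetical fielder with 250 putouts, 120 assists, and 5 errors:
assert total_chances(250, 120, 5) == 375
assert chances_accepted(250, 120) == 370
```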
https://en.wikipedia.org/wiki/Icosian%20calculus
The icosian calculus is a non-commutative algebraic structure discovered by the Irish mathematician William Rowan Hamilton in 1856. In modern terms, he gave a group presentation of the icosahedral rotation group by generators and relations. Hamilton's discovery derived from his attempts to find an algebra of "triplets" or 3-tuples that he believed would reflect the three Cartesian axes. The symbols of the icosian calculus can be equated to moves between vertices on a dodecahedron. Hamilton's work in this area resulted indirectly in the terms Hamiltonian circuit and Hamiltonian path in graph theory. He also invented the icosian game as a means of illustrating and popularising his discovery. Informal definition The algebra is based on three symbols, ι, κ, and λ, that are each roots of unity, in that repeated application of any of them yields the value 1 after a particular number of steps. They are: ι² = 1, κ³ = 1, λ⁵ = 1. Hamilton also gives one other relation between the symbols: λ = ικ. (In modern terms this is the (2,3,5) triangle group.) The operation is associative but not commutative. They generate a group of order 60, isomorphic to the group of rotations of a regular icosahedron or dodecahedron, and therefore to the alternating group of degree five. Although the algebra exists as a purely abstract construction, it can be most easily visualised in terms of operations on the edges and vertices of a dodecahedron. Hamilton himself used a flattened dodecahedron as the basis for his instructional game. Imagine an insect crawling along a particular edge of Hamilton's labelled dodecahedron in a certain direction, say from to . We can represent this directed edge by . The icosian symbol ι equates to changing direction on any edge, so the insect crawls from to (following the directed edge ). The icosian symbol κ equates to rotating the insect's current travel anti-clockwise around the end point. In our example this would mean changing the initial direction to become .
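The group-theoretic claim can be checked concretely: take two permutations of five symbols of orders 2 and 3 whose product has order 5 (a standard presentation of the alternating group A5; the particular permutations below are our choice, not Hamilton's own symbols) and generate the group they produce:

```python
identity = (0, 1, 2, 3, 4)

def compose(p, q):
    """Apply permutation p first, then q (tuples acting on {0, ..., 4})."""
    return tuple(q[p[i]] for i in range(5))

def order(p):
    q, n = p, 1
    while q != identity:
        q, n = compose(q, p), n + 1
    return n

iota = (1, 0, 3, 2, 4)    # (0 1)(2 3): an involution, like Hamilton's iota
kappa = (2, 1, 4, 3, 0)   # (0 2 4): a 3-cycle, like Hamilton's kappa
lam = compose(iota, kappa)

assert order(iota) == 2 and order(kappa) == 3 and order(lam) == 5

# Generate the closure of {iota, kappa} under composition.
group = {identity}
frontier = {iota, kappa}
while frontier:
    group |= frontier
    frontier = {compose(p, g) for p in group for g in (iota, kappa)} - group

assert len(group) == 60   # the icosahedral rotation group, isomorphic to A5
```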
The icosian symbol equates to making a right-turn at the end point, moving from to . Legacy The icosian calculus is one of the earliest examples of many mathematical ideas, including: presenting and studying a group by generators and relations; a triangle group, later generalized to Coxeter groups; visualization of a group by a graph, which led to combinatorial group theory and later geometric group theory; Hamiltonian circuits and Hamiltonian paths in graph theory; dessin d'enfant – see dessin d'enfant: history for details. See also Icosian References Graph theory Abstract algebra Binary operations Rotational symmetry William Rowan Hamilton
https://en.wikipedia.org/wiki/Darrell%20Huff
Darrell Huff (July 15, 1913 – June 27, 2001) was an American writer, and is best known as the author of How to Lie with Statistics (1954), the best-selling statistics book of the second half of the twentieth century. More than 50 years after its publication, How to Lie with Statistics remains one of the most widely read statistics books ever published. Career Huff was born in Gowrie, Iowa, and educated at the University of Iowa (BA 1938, MA 1939). Before turning to full-time writing in 1946, Huff served as editor of Better Homes and Gardens and Liberty magazine. As a freelancer, Huff produced hundreds of "How to" feature articles and wrote at least sixteen books, most of which concerned household projects. One of his biggest projects was a prize-winning home in Carmel-by-the-Sea, California, where he lived until his death. Personal life Huff married Frances Marie Nelson in 1937. At her instigation, Huff gave up his editorial work (which had become a "rat race" for him) and they moved to California in 1946 and bought ten acres in the Valley of the Moon. They built their own house, and later several more houses. Frances Marie would sometimes be Huff's co-author. They had four daughters. Two would assist with his last books. Social role Huff is credited with introducing statistics to a generation of college and high-school students on a level that was meaningful, available, and practical. His most famous book, How to Lie with Statistics, is still being translated into new languages. His books have been published in over 22 languages, and continue to be used in classrooms the world over. His other publications focused on making practical projects accessible to ordinary Americans without specialised tools, trades, or advanced education, foreshadowing the modern "DIY" movements. Huff, like some prominent statisticians of the era, was later funded by the tobacco industry to publish a follow-up to his book on statistics: How to Lie with Smoking Statistics.
The book was intended to be published by Macmillan, but near the end of 1968, the plans for its publication came to an abrupt halt. Andrew Gelman, professor of statistics at Columbia University, reviewed the ethics of Huff's involvement with the industry and suggested Huff could have intentionally killed the project to save his own reputation, which would have been destroyed by his association with tobacco. It is not clear whether Huff himself sabotaged the book. Selected bibliography Books Huff, D. (1944). Pictures by Pete: A Career Story of a Young Commercial Photographer. Dodd, Mead, New York. Huff, D. (1945). Twenty Careers of Tomorrow. Whittlesey House, McGraw–Hill, New York. Huff, D. (1946). The Dog that Came True (illust. C. Moran and D. Thorne). Whittlesey House, McGraw–Hill, New York. (Adapted from a short story by Darrell Huff which appeared in Woman's Day.) Huff, D. (1954) How to Lie with Statistics (illust. I. Geis), Norton, New York, Huff, D. (1959). How to Take a Chance: The
https://en.wikipedia.org/wiki/Circular%20error%20probable
In the military science of ballistics, circular error probable (CEP) (also circular error probability or circle of equal probability) is a measure of a weapon system's precision. It is defined as the radius of a circle, centered on the mean, whose perimeter is expected to enclose the landing points of 50% of the rounds; said otherwise, it is the median error radius. That is, if a given munitions design has a CEP of 100 m, when 100 munitions are targeted at the same point, 50 will fall within a circle with a radius of 100 m around their average impact point. (The distance between the target point and the average impact point is referred to as bias.) There are associated concepts, such as the DRMS (distance root mean square), which is the square root of the average squared distance error, and R95, which is the radius of the circle where 95% of the values would fall in. The concept of CEP also plays a role when measuring the accuracy of a position obtained by a navigation system, such as GPS or older systems such as LORAN and Loran-C. Concept The original concept of CEP was based on a circular bivariate normal distribution (CBN) with CEP as a parameter of the CBN just as μ and σ are parameters of the normal distribution. Munitions with this distribution behavior tend to cluster around the mean impact point, with most reasonably close, progressively fewer and fewer further away, and very few at long distance. That is, if CEP is n metres, 50% of shots land within n metres of the mean impact, 43.7% between n and 2n, and 6.1% between 2n and 3n metres, and the proportion of shots that land farther than three times the CEP from the mean is only 0.2%. CEP is not a good measure of accuracy when this distribution behavior is not met. Precision-guided munitions generally have more "close misses" and so are not normally distributed. 
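For a circular bivariate normal distribution centred on the mean impact point, the radial miss distance R is Rayleigh-distributed, P(R ≤ r) = 1 − exp(−r²/(2σ²)). Since the CEP is the median radius, exp(−CEP²/(2σ²)) = 1/2, which gives the closed form P(R ≤ k·CEP) = 1 − 2^(−k²). A sketch (function name ours) checking the percentages quoted above:

```python
def frac_within(k):
    """Fraction of rounds landing within k times the CEP,
    assuming a circular bivariate normal centred on the mean impact point."""
    return 1 - 2 ** (-k * k)

assert frac_within(1) == 0.5                              # 50% within the CEP
assert frac_within(2) - frac_within(1) == 0.4375          # 43.7% between CEP and 2 CEP
assert frac_within(3) - frac_within(2) == 31 / 512        # 6.1% between 2 CEP and 3 CEP
assert 1 - frac_within(3) == 1 / 512                      # ~0.2% beyond 3 CEP
```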
Munitions may also have larger standard deviation of range errors than the standard deviation of azimuth (deflection) errors, resulting in an elliptical confidence region. Munition samples may not be exactly on target, that is, the mean vector will not be (0,0). This is referred to as bias. To incorporate accuracy into the CEP concept in these conditions, CEP can be defined as the square root of the mean square error (MSE). The MSE will be the sum of the variance of the range error plus the variance of the azimuth error plus the covariance of the range error with the azimuth error plus the square of the bias. Thus the MSE results from pooling all these sources of error, geometrically corresponding to the radius of a circle within which 50% of rounds will land. Several methods have been introduced to estimate CEP from shot data. Included in these methods are the plug-in approach of Blischke and Halpin (1966), the Bayesian approach of Spall and Maryak (1992), and the maximum likelihood approach of Winkler and Bickert (2012). The Spall and Maryak approach applies when the shot data represent a mixture of different projectile charac
https://en.wikipedia.org/wiki/Fibration
The notion of a fibration generalizes the notion of a fiber bundle and plays an important role in algebraic topology, a branch of mathematics. Fibrations are used, for example, in Postnikov systems or obstruction theory. In this article, all mappings are continuous mappings between topological spaces. Formal definitions Homotopy lifting property A mapping satisfies the homotopy lifting property for a space if: for every homotopy and for every mapping (also called lift) lifting (i.e. ) there exists a (not necessarily unique) homotopy lifting (i.e. ) with The following commutative diagram shows the situation: Fibration A fibration (also called Hurewicz fibration) is a mapping satisfying the homotopy lifting property for all spaces The space is called the base space and the space is called the total space. The fiber over is the subspace Serre fibration A Serre fibration (also called weak fibration) is a mapping satisfying the homotopy lifting property for all CW-complexes. Every Hurewicz fibration is a Serre fibration. Quasifibration A mapping is called a quasifibration if for every and it holds that the induced mapping is an isomorphism. Every Serre fibration is a quasifibration. Examples The projection onto the first factor is a fibration. That is, trivial bundles are fibrations. Every covering is a fibration. Specifically, for every homotopy and every lift there exists a uniquely defined lift with Every fiber bundle satisfies the homotopy lifting property for every CW-complex. A fiber bundle with a paracompact and Hausdorff base space satisfies the homotopy lifting property for all spaces. An example of a fibration which is not a fiber bundle is given by the mapping induced by the inclusion where is a topological space and is the space of all continuous mappings with the compact-open topology. The Hopf fibration is a nontrivial fiber bundle and specifically a Serre fibration.
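The lifting condition whose formulas were lost above can be restated in standard notation (a reconstruction with our symbols, not a quotation): a mapping p : E → B satisfies the homotopy lifting property for a space X if

```latex
\begin{aligned}
&\text{for every homotopy } H \colon X \times [0,1] \to B
 \text{ and every map } \tilde h_0 \colon X \to E
 \text{ with } p \circ \tilde h_0 = H(\,\cdot\,,0),\\
&\text{there exists a (not necessarily unique) homotopy }
 \tilde H \colon X \times [0,1] \to E
 \text{ with } p \circ \tilde H = H
 \text{ and } \tilde H(\,\cdot\,,0) = \tilde h_0 .
\end{aligned}
```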
Basic concepts Fiber homotopy equivalence A mapping between total spaces of two fibrations and with the same base space is a fibration homomorphism if the following diagram commutes: The mapping is a fiber homotopy equivalence if in addition a fibration homomorphism exists, such that the mappings and are homotopic, by fibration homomorphisms, to the identities and Pullback fibration Given a fibration and a mapping , the mapping is a fibration, where is the pullback and the projections of onto and yield the following commutative diagram: The fibration is called the pullback fibration or induced fibration. Pathspace fibration With the pathspace construction, any continuous mapping can be extended to a fibration by enlarging its domain to a homotopy equivalent space. This fibration is called pathspace fibration. The total space of the pathspace fibration for a continuous mapping between topological spaces consists of pairs with and paths with starting point where is the unit interval. The space c
https://en.wikipedia.org/wiki/Incenter
In geometry, the incenter of a triangle is a triangle center, a point defined for any triangle in a way that is independent of the triangle's placement or scale. The incenter may be equivalently defined as the point where the internal angle bisectors of the triangle cross, as the point equidistant from the triangle's sides, as the junction point of the medial axis and innermost point of the grassfire transform of the triangle, and as the center point of the inscribed circle of the triangle. Together with the centroid, circumcenter, and orthocenter, it is one of the four triangle centers known to the ancient Greeks, and the only one of the four that does not in general lie on the Euler line. It is the first listed center, X(1), in Clark Kimberling's Encyclopedia of Triangle Centers, and the identity element of the multiplicative group of triangle centers. For polygons with more than three sides, the incenter only exists for tangential polygons, those that have an incircle that is tangent to each side of the polygon. In this case the incenter is the center of this circle and is equally distant from all sides. Definition and construction It is a theorem in Euclidean geometry that the three interior angle bisectors of a triangle meet in a single point. In Euclid's Elements, Proposition 4 of Book IV proves that this point is also the center of the inscribed circle of the triangle. The incircle itself may be constructed by dropping a perpendicular from the incenter to one of the sides of the triangle and drawing a circle with that segment as its radius (Euclid's Elements, Book IV, Proposition 4: "To inscribe a circle in a given triangle"; David Joyce, Clark University, retrieved 2014-10-28). The incenter lies at equal distances from the three line segments forming the sides of the triangle, and also from the three lines containing those segments.
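The equidistance property can be checked numerically using the standard side-length-weighted vertex formula for the incenter, (a·A + b·B + c·C)/(a + b + c) with a, b, c the side lengths opposite vertices A, B, C. That formula and the function names are ours, not taken from the text above:

```python
import math

def incenter(A, B, C):
    """Incenter via the side-length-weighted average of the vertices."""
    a = math.dist(B, C)   # side opposite A
    b = math.dist(C, A)   # side opposite B
    c = math.dist(A, B)   # side opposite C
    s = a + b + c
    return ((a * A[0] + b * B[0] + c * C[0]) / s,
            (a * A[1] + b * B[1] + c * C[1]) / s)

def dist_to_line(P, Q, R):
    """Distance from point P to the line through Q and R."""
    (px, py), (qx, qy), (rx, ry) = P, Q, R
    return abs((ry - qy) * (px - qx) - (rx - qx) * (py - qy)) / math.dist(Q, R)

# Right triangle with legs 3 and 4: inradius (3 + 4 - 5)/2 = 1, incenter (1, 1).
A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)
I = incenter(A, B, C)
assert I == (1.0, 1.0)
for P, Q in [(A, B), (B, C), (C, A)]:
    assert abs(dist_to_line(I, P, Q) - 1.0) < 1e-12   # equidistant from all sides
```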
It is the only point equally distant from the line segments, but there are three more points equally distant from the lines, the excenters, which form the centers of the excircles of the given triangle. The incenter and excenters together form an orthocentric system. The medial axis of a polygon is the set of points whose nearest neighbor on the polygon is not unique: these points are equidistant from two or more sides of the polygon. One method for computing medial axes is using the grassfire transform, in which one forms a continuous sequence of offset curves, each at some fixed distance from the polygon; the medial axis is traced out by the vertices of these curves. In the case of a triangle, the medial axis consists of three segments of the angle bisectors, connecting the vertices of the triangle to the incenter, which is the unique point on the innermost offset curve. The straight skeleton, defined in a similar way from a different type of offset curve, coincides with the medial axis for convex polygons and so also has its junction at the incenter. Proofs Ratio proof
https://en.wikipedia.org/wiki/Ehrhart%20polynomial
In mathematics, an integral polytope has an associated Ehrhart polynomial that encodes the relationship between the volume of a polytope and the number of integer points the polytope contains. The theory of Ehrhart polynomials can be seen as a higher-dimensional generalization of Pick's theorem in the Euclidean plane. These polynomials are named after Eugène Ehrhart who studied them in the 1960s. Definition Informally, if P is a polytope, and tP is the polytope formed by expanding P by a factor of t in each dimension, then L(P, t) is the number of integer lattice points in tP. More formally, consider a lattice Λ in Euclidean space R^n and a d-dimensional polytope P in R^n with the property that all vertices of the polytope are points of the lattice. (A common example is Λ = Z^n and a polytope P for which all vertices have integer coordinates.) For any positive integer t, let tP be the t-fold dilation of P (the polytope formed by multiplying each vertex coordinate, in a basis for the lattice, by a factor of t), and let L(P, t) be the number of lattice points contained in the polytope tP. Ehrhart showed in 1962 that L is a rational polynomial of degree d in t, i.e. there exist rational numbers L_0(P), ..., L_d(P) such that: L(P, t) = L_d(P) t^d + L_{d-1}(P) t^(d-1) + ... + L_0(P) for all positive integers t. The Ehrhart polynomial of the interior of a closed convex polytope P can be computed as: L(int(P), t) = (-1)^d L(P, -t), where d is the dimension of P. This result is known as Ehrhart–Macdonald reciprocity. Examples Let P be a d-dimensional unit hypercube whose vertices are the integer lattice points all of whose coordinates are 0 or 1. In terms of inequalities, P = {x in R^d : 0 <= x_i <= 1, 1 <= i <= d}. Then the t-fold dilation of P is a cube with side length t, containing (t + 1)^d integer points. That is, the Ehrhart polynomial of the hypercube is L(P, t) = (t + 1)^d. Additionally, if we evaluate L(P, t) at negative integers, then L(P, -t) = (-1)^d (t - 1)^d = (-1)^d L(int(P), t), as we would expect from Ehrhart–Macdonald reciprocity. Many other figurate numbers can be expressed as Ehrhart polynomials.
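The hypercube count, the reciprocity statement, and the square pyramidal numbers can all be checked by brute-force lattice-point counting (a sketch; function names and the specific pyramid parametrization are ours):

```python
from itertools import product

def dilated_cube_points(d, t):
    """Lattice points of t * [0,1]^d, counted by brute force over a box."""
    box = range(-1, t + 2)
    return sum(1 for p in product(box, repeat=d) if all(0 <= x <= t for x in p))

for d in (1, 2, 3):
    for t in (1, 2, 3, 4):
        assert dilated_cube_points(d, t) == (t + 1) ** d   # L(P, t) = (t + 1)^d
        # Ehrhart-Macdonald reciprocity: the interior of the t-dilate
        # holds (t - 1)^d points, matching (-1)^d L(P, -t).
        interior = sum(1 for p in product(range(t + 1), repeat=d)
                       if all(0 < x < t for x in p))
        assert interior == (t - 1) ** d

# Square pyramid with unit-square base and apex at height one: the cross-section
# of the t-dilate at integer height j contributes (t - j + 1)^2 lattice points,
# and the total is a square pyramidal number.
for t in range(1, 6):
    count = sum((t - j + 1) ** 2 for j in range(t + 1))
    assert count == (t + 1) * (t + 2) * (2 * t + 3) // 6
```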
For instance, the square pyramidal numbers are given by the Ehrhart polynomials of a square pyramid with an integer unit square as its base and with height one; the Ehrhart polynomial in this case is . Ehrhart quasi-polynomials Let be a rational polytope. In other words, suppose where and (Equivalently, is the convex hull of finitely many points in ) Then define In this case, is a quasi-polynomial in . Just as with integral polytopes, Ehrhart–Macdonald reciprocity holds, that is, Examples of Ehrhart quasi-polynomials Let be a polygon with vertices (0,0), (0,2), (1,1) and (, 0). The number of integer points in will be counted by the quasi-polynomial Interpretation of coefficients If is closed (i.e. the boundary faces belong to ), some of the coefficients of have an easy interpretation: the leading coefficient, , is equal to the -dimensional volume of , divided by (see lattice for an explanation of the content or covolume of a lattice); the second coefficient, , can be computed as follows: the lattice induces a lattice on any face of ; take the -dimensional volume of , divide by , and add those numb
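The square-pyramid example can be sanity-checked by counting lattice points in dilations by brute force and comparing with the claimed polynomial. The sketch below assumes one concrete realization of the pyramid, conv{(0,0,0), (1,0,0), (0,1,0), (1,1,0), (0,0,1)} (unit square base, height one, apex over a corner), whose cross-section at integer height z in the t-fold dilation is the square [0, t-z] x [0, t-z]:

```python
def lattice_points_in_dilated_pyramid(t):
    # Count integer points in tP by slicing horizontally: the slice at
    # height z is the square [0, t-z] x [0, t-z], with (t-z+1)^2 points.
    return sum((t - z + 1) ** 2 for z in range(t + 1))

def ehrhart(t):
    # The claimed Ehrhart polynomial: the (t+1)-st square pyramidal number.
    return (t + 1) * (t + 2) * (2 * t + 3) // 6

for t in range(1, 8):
    assert lattice_points_in_dilated_pyramid(t) == ehrhart(t)
print([ehrhart(t) for t in range(1, 5)])  # [5, 14, 30, 55]
```

The counts 1^2 + 2^2 + ... + (t+1)^2 are exactly the square pyramidal numbers, as the text states.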
https://en.wikipedia.org/wiki/Quantization%20%28signal%20processing%29
Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms. The difference between an input value and its quantized value (such as round-off error) is referred to as quantization error. A device or algorithmic function that performs quantization is called a quantizer. An analog-to-digital converter is an example of a quantizer. Example For example, rounding a real number to the nearest integer value forms a very basic type of quantizer – a uniform one. A typical (mid-tread) uniform quantizer with a quantization step size equal to some value Δ can be expressed as Q(x) = Δ · ⌊x/Δ + 1/2⌋, where the notation ⌊·⌋ denotes the floor function. Alternatively, the same quantizer may be expressed in terms of the ceiling function, as Q(x) = Δ · ⌈x/Δ − 1/2⌉. (The notation ⌈·⌉ denotes the ceiling function.) The essential property of a quantizer is that its countable set of possible output values is smaller than the set of possible input values. The members of the set of output values may have integer, rational, or real values. For simple rounding to the nearest integer, the step size Δ is equal to 1. With Δ = 1, or with Δ equal to any other integer value, this quantizer has real-valued inputs and integer-valued outputs. When the quantization step size (Δ) is small relative to the variation in the signal being quantized, it is relatively simple to show that the mean squared error produced by such a rounding operation will be approximately Δ²/12. Mean squared error is also called the quantization noise power.
Adding one bit to the quantizer halves the value of Δ, which reduces the noise power by the factor ¼. In terms of decibels, the noise power change is 10·log₁₀(¼) ≈ −6.02 dB, the origin of the familiar 6 dB-per-bit rule of thumb. Because the set of possible output values of a quantizer is countable, any quantizer can be decomposed into two distinct stages, which can be referred to as the classification stage (or forward quantization stage) and the reconstruction stage (or inverse quantization stage), where the classification stage maps the input value to an integer quantization index and the reconstruction stage maps the index to the reconstruction value that is the output approximation of the input value. For the example uniform quantizer described above, the forward quantization stage can be expressed as k = ⌊x/Δ + 1/2⌋, and the reconstruction stage for this example quantizer is simply y = k·Δ. This decomposition is useful for the design and analysis of quantization behavior, and it illustrates how the quantized data can be communicated over a communication channel – a source encoder can perform the forward q
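The Δ²/12 approximation for the quantization noise power is easy to verify empirically. The following sketch applies a mid-tread uniform quantizer to a uniformly distributed random input (the signal model is an assumption for illustration) and compares the measured mean squared error with Δ²/12:

```python
import math
import random

def quantize(x, step):
    # Mid-tread uniform quantizer: Q(x) = step * floor(x/step + 1/2).
    return step * math.floor(x / step + 0.5)

random.seed(0)
step = 0.1
samples = [random.uniform(-10, 10) for _ in range(100_000)]
mse = sum((x - quantize(x, step)) ** 2 for x in samples) / len(samples)
print(mse, step ** 2 / 12)  # measured noise power vs. the Δ²/12 prediction
```

Halving `step` (one extra bit) reduces the measured noise power by very nearly the predicted factor of 4, i.e. about 6 dB.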
https://en.wikipedia.org/wiki/Cover%20%28topology%29
In mathematics, and more particularly in set theory, a cover (or covering) of a set is a family of subsets of whose union is all of . More formally, if is an indexed family of subsets (indexed by the set ), then is a cover of if . Thus the collection is a cover of if each element of belongs to at least one of the subsets . A subcover of a cover of a set is a subset of the cover that also covers the set. A cover is called an open cover if each of its elements is an open set. Cover in topology Covers are commonly used in the context of topology. If the set is a topological space, then a cover of is a collection of subsets of whose union is the whole space . In this case we say that covers , or that the sets cover . Also, if is a (topological) subspace of , then a cover of is a collection of subsets of whose union contains , i.e., is a cover of if That is, we may cover with either sets in itself or sets in the parent space . Let C be a cover of a topological space X. A subcover of C is a subset of C that still covers X. We say that C is an open cover if each of its members is an open set (i.e. each Uα is contained in T, where T is the topology on X). A cover of X is said to be locally finite if every point of X has a neighborhood that intersects only finitely many sets in the cover. Formally, C = {Uα} is locally finite if for any x ∈ X there exists some neighborhood N(x) of x such that the set {α : Uα ∩ N(x) ≠ ∅} is finite. A cover of X is said to be point finite if every point of X is contained in only finitely many sets in the cover. A cover is point finite if it is locally finite, though the converse is not necessarily true. Refinement A refinement of a cover of a topological space is a new cover of such that every set in is contained in some set in . Formally, is a refinement of if for all there exists such that In other words, there is a refinement map satisfying for every This map is used, for instance, in the Čech cohomology of .
Every subcover is also a refinement, but the opposite is not always true. A subcover is made from the sets that are in the cover, but omitting some of them; whereas a refinement is made from any sets that are subsets of the sets in the cover. The refinement relation on the set of covers of is transitive and reflexive (every cover refines itself), i.e. it is a preorder. Generally speaking, a refinement of a given structure is another that in some sense contains it. Examples are to be found when partitioning an interval (one refinement of being ), considering topologies (the standard topology in Euclidean space being a refinement of the trivial topology). When subdividing simplicial complexes (the first barycentric subdivision of a simplicial complex is a refinement), the situation is slightly different: every simplex in the finer complex is a face of some simplex in the coarser one, and both have equal underlying polyhedra. Yet another notion of refinement is that of star refinement. Subcover A simple way to get a subcover is
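For covers of a finite set, the notions of cover, subcover and refinement can be checked directly, including the claim that every subcover is a refinement while the converse fails. A minimal sketch (the particular families are made up for illustration):

```python
def is_cover(X, family):
    # A family covers X when its union contains every element of X.
    return X <= set().union(*family)

def is_refinement(fine, coarse):
    # Every set of the finer family lies inside some set of the coarser one.
    return all(any(A <= B for B in coarse) for A in fine)

X = set(range(10))
C = [set(range(0, 6)), set(range(4, 10)), set(range(3, 7))]
sub = [C[0], C[1]]   # a subcover: drop the redundant third set
R = [set(range(0, 3)), set(range(3, 6)), set(range(4, 8)), set(range(7, 10))]

print(is_cover(X, C), is_cover(X, sub))            # True True
print(is_refinement(sub, C), is_refinement(R, C))  # True True
print(is_refinement(C, R))  # False: R refines C, but C does not refine R
```

R is a refinement of C built from sets that are not themselves members of C, so it is not a subcover.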
https://en.wikipedia.org/wiki/Gaussian%20rational
In mathematics, a Gaussian rational number is a complex number of the form p + qi, where p and q are both rational numbers. The set of all Gaussian rationals forms the Gaussian rational field, denoted Q(i), obtained by adjoining the imaginary number i to the field of rationals Q. Properties of the field The field of Gaussian rationals provides an example of an algebraic number field, which is both a quadratic field and a cyclotomic field (since i is a 4th root of unity). Like all quadratic fields it is a Galois extension of Q with Galois group cyclic of order two, in this case generated by complex conjugation, and is thus an abelian extension of Q, with conductor 4. As with cyclotomic fields more generally, the field of Gaussian rationals is neither ordered nor complete (as a metric space). The Gaussian integers Z[i] form the ring of integers of Q(i). The set of all Gaussian rationals is countably infinite. The field of Gaussian rationals is also a two-dimensional vector space over Q with natural basis {1, i}. Ford spheres The concept of Ford circles can be generalized from the rational numbers to the Gaussian rationals, giving Ford spheres. In this construction, the complex numbers are embedded as a plane in a three-dimensional Euclidean space, and for each Gaussian rational point in this plane one constructs a sphere tangent to the plane at that point. For a Gaussian rational represented in lowest terms as p/q, the radius of this sphere should be 1/(2q q̄), where q̄ represents the complex conjugate of q. The resulting spheres are tangent for pairs of Gaussian rationals p/q and r/s with |ps − qr| = 1, and otherwise they do not intersect each other.
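Arithmetic in Q(i) can be carried out exactly using rational components. A minimal sketch with Python's fractions module (the class name and methods are illustrative, not a standard API); the inverse divides by the norm p² + q², i.e. by z·z̄, which shows every nonzero element is invertible:

```python
from fractions import Fraction

class GaussianRational:
    # Exact arithmetic in Q(i): numbers p + q*i with p, q rational.
    def __init__(self, p, q=0):
        self.p, self.q = Fraction(p), Fraction(q)
    def __add__(self, other):
        return GaussianRational(self.p + other.p, self.q + other.q)
    def __mul__(self, other):
        return GaussianRational(self.p * other.p - self.q * other.q,
                                self.p * other.q + self.q * other.p)
    def conj(self):
        return GaussianRational(self.p, -self.q)
    def inverse(self):
        n = self.p ** 2 + self.q ** 2   # the norm z * conj(z), a rational
        return GaussianRational(self.p / n, -self.q / n)
    def __repr__(self):
        return f"{self.p} + {self.q}i"

z = GaussianRational(Fraction(1, 2), Fraction(3, 4))
w = z * z.inverse()
print(w)  # 1 + 0i: Q(i) is a field
```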
https://en.wikipedia.org/wiki/French%20Institute%20for%20Research%20in%20Computer%20Science%20and%20Automation
The National Institute for Research in Digital Science and Technology (Inria) is a French national research institution focusing on computer science and applied mathematics. It was created under the name French Institute for Research in Computer Science and Automation (IRIA) in 1967 at Rocquencourt near Paris, as part of Plan Calcul. Its first site was the historical premises of SHAPE (central command of NATO military forces), which is still used as Inria's main headquarters. In 1980, IRIA became INRIA. Since 2011, it has been styled Inria. Inria is a Public Scientific and Technical Research Establishment (EPST) under the joint supervision of the French Ministry of National Education, Higher Education and Research and the Ministry of Economy, Finance and Industry. Administrative status Inria has nine research centers distributed across France (in Bordeaux, Grenoble-Inovallée, Lille, Lyon, Nancy, Paris-Rocquencourt, Rennes, Saclay, and Sophia Antipolis) and one center abroad in Santiago de Chile, Chile. It also contributes to academic research teams outside of those centers. Inria Rennes is part of the joint Institut de recherche en informatique et systèmes aléatoires (IRISA) with several other entities. Before December 2007, the three centers of Bordeaux, Lille and Saclay formed a single research center called INRIA Futurs. In October 2010, Inria, with Pierre and Marie Curie University (now Sorbonne University) and Paris Diderot University, started IRILL, a center for innovation and research initiative for free software. Inria employs 3800 people, among them 1300 researchers, 1000 Ph.D. students and 500 postdoctoral researchers. Research Inria does both theoretical and applied research in computer science.
In the process, it has produced many widely used programs, such as:
Bigloo, a Scheme implementation
CADP, a tool box for the verification of asynchronous concurrent systems
Caml, a language from the ML family, and its Caml Light and OCaml implementations
Chorus, a microkernel-based distributed operating system
CompCert, a verified C compiler for PowerPC, ARM and x86_32
Contrail
Coq, a proof assistant
CYCLADES, which pioneered the use of datagrams, functional layering, and the end-to-end strategy
Eigen, a C++ library
Esterel, a programming language for state automata
Geneauto, code generation from models
Graphite, a research platform for computer graphics, 3D modeling and numerical geometry
Gudhi, a C++ library with a Python interface for computational topology and topological data analysis
Le Lisp, a portable Lisp implementation
medInria, a medical image processing software, popularly used for MRI images
GNU MPFR, an arbitrary-precision floating-point library
OpenViBE, a software platform dedicated to designing, testing and using brain–computer interfaces
Pharo, an open-source Smalltalk derived from Squeak
scikit-learn, a machine learning software package
Scilab, a numerical computation software package
SimGrid
SmartEiffel,
https://en.wikipedia.org/wiki/Leopold%20Kronecker
Leopold Kronecker (7 December 1823 – 29 December 1891) was a German mathematician who worked on number theory, algebra and logic. He criticized Georg Cantor's work on set theory, and was quoted by Weber as having said, "Die ganzen Zahlen hat der liebe Gott gemacht, alles andere ist Menschenwerk" ("God made the integers, all else is the work of man"). Kronecker was a student and lifelong friend of Ernst Kummer. Biography Leopold Kronecker was born on 7 December 1823 in Liegnitz, Prussia (now Legnica, Poland) into a wealthy Jewish family. His parents, Isidor and Johanna (née Prausnitzer), took care of their children's education and provided them with private tutoring at home—Leopold's younger brother Hugo Kronecker would also follow a scientific path, later becoming a notable physiologist. Kronecker then went to the Liegnitz Gymnasium where he was interested in a wide range of topics including science, history and philosophy, while also practicing gymnastics and swimming. At the gymnasium he was taught by Ernst Kummer, who noticed and encouraged the boy's interest in mathematics. In 1841 Kronecker became a student at the University of Berlin where his interest did not immediately focus on mathematics, but rather spread over several subjects including astronomy and philosophy. He spent the summer of 1843 at the University of Bonn studying astronomy and 1843–44 at the University of Breslau following his former teacher Kummer. Back in Berlin, Kronecker studied mathematics with Peter Gustav Lejeune Dirichlet and in 1845 defended his dissertation in algebraic number theory written under Dirichlet's supervision. After obtaining his degree, Kronecker did not follow his interest in research on an academic career path. He went back to his hometown to manage a large farming estate built up by his mother's uncle, a former banker. In 1848 he married his cousin Fanny Prausnitzer, and the couple had six children.
For several years Kronecker focused on business, and although he continued to study mathematics as a hobby and corresponded with Kummer, he published no mathematical results. In 1853 he wrote a memoir on the algebraic solvability of equations extending the work of Évariste Galois on the theory of equations. Due to his business activity, Kronecker was financially comfortable, and thus he could return to Berlin in 1855 to pursue mathematics as a private scholar. Dirichlet, whose wife Rebecka came from the wealthy Mendelssohn family, had introduced Kronecker to the Berlin elite. He became a close friend of Karl Weierstrass, who had recently joined the university, and his former teacher Kummer who had just taken over Dirichlet's mathematics chair. Over the following years Kronecker published numerous papers resulting from his previous years' independent research. As a result of this published research, he was elected a member of the Berlin Academy in 1861. Although he held no official university position, Kronecker had the right as a member of the Academy to hold classes at the University of Berlin and he decided to do so, startin
https://en.wikipedia.org/wiki/Paul%20Halmos
Paul Richard Halmos (March 3, 1916 – October 2, 2006) was a Hungarian-born American mathematician and statistician who made fundamental advances in the areas of mathematical logic, probability theory, statistics, operator theory, ergodic theory, and functional analysis (in particular, Hilbert spaces). He was also recognized as a great mathematical expositor. He has been described as one of The Martians. Early life and education Born in Hungary into a Jewish family, Halmos arrived in the U.S. at 13 years of age. He obtained his B.A. from the University of Illinois, majoring in mathematics while also fulfilling the requirements for a philosophy degree. He took only three years to obtain the degree, and was only 19 when he graduated. He then began a Ph.D. in philosophy, still at the Champaign–Urbana campus; but, after failing his master's oral exams, he shifted to mathematics, graduating in 1938. Joseph L. Doob supervised his dissertation, titled Invariants of Certain Stochastic Transformations: The Mathematical Theory of Gambling Systems. Career Shortly after his graduation, Halmos left for the Institute for Advanced Study, lacking both a job and grant money. Six months later, he was working under John von Neumann, which proved a decisive experience. While at the Institute, Halmos wrote his first book, Finite Dimensional Vector Spaces, which immediately established his reputation as a fine expositor of mathematics. From 1967 to 1968 he was the Donegall Lecturer in Mathematics at Trinity College Dublin. Halmos taught at Syracuse University, the University of Chicago (1946–60), the University of Michigan (~1961–67), the University of Hawaii (1967–68), Indiana University (1969–85), and the University of California at Santa Barbara (1976–78). From his 1985 retirement from Indiana until his death, he was affiliated with the Mathematics department at Santa Clara University (1985–2006).
Accomplishments In a series of papers reprinted in his 1962 Algebraic Logic, Halmos devised polyadic algebras, an algebraic version of first-order logic differing from the better known cylindric algebras of Alfred Tarski and his students. An elementary version of polyadic algebra is described in monadic Boolean algebra. In addition to his original contributions to mathematics, Halmos was an unusually clear and engaging expositor of university mathematics. He won the Lester R. Ford Award in 1971 and again in 1977 (shared with W. P. Ziemer, W. H. Wheeler, S. H. Moolgavkar, J. H. Ewing and W. H. Gustafson). Halmos chaired the American Mathematical Society committee that wrote the AMS style guide for academic mathematics, published in 1973. In 1983, he received the AMS's Leroy P. Steele Prize for exposition. In the American Scientist 56(4): 375–389, Halmos argued that mathematics is a creative art, and that mathematicians should be seen as artists, not number crunchers. He discussed the division of the field into and , further arguing that mathematicians and pa
https://en.wikipedia.org/wiki/Lie%20algebra%20representation
In the mathematical field of representation theory, a Lie algebra representation or representation of a Lie algebra is a way of writing a Lie algebra as a set of matrices (or endomorphisms of a vector space) in such a way that the Lie bracket is given by the commutator. In the language of physics, one looks for a vector space together with a collection of operators on satisfying some fixed set of commutation relations, such as the relations satisfied by the angular momentum operators. The notion is closely related to that of a representation of a Lie group. Roughly speaking, the representations of Lie algebras are the differentiated form of representations of Lie groups, while the representations of the universal cover of a Lie group are the integrated form of the representations of its Lie algebra. In the study of representations of a Lie algebra, a particular ring, called the universal enveloping algebra, associated with the Lie algebra plays an important role. The universality of this ring says that the category of representations of a Lie algebra is the same as the category of modules over its enveloping algebra. Formal definition Let be a Lie algebra and let be a vector space. We let denote the space of endomorphisms of , that is, the space of all linear maps of to itself. We make into a Lie algebra with bracket given by the commutator: for all ρ,σ in . Then a representation of on is a Lie algebra homomorphism . Explicitly, this means that should be a linear map and it should satisfy for all X, Y in . The vector space V, together with the representation ρ, is called a -module. (Many authors abuse terminology and refer to V itself as the representation). The representation is said to be faithful if it is injective. One can equivalently define a -module as a vector space V together with a bilinear map such that for all X,Y in and v in V. This is related to the previous definition by setting X ⋅ v = ρ(X)(v). 
Examples Adjoint representations The most basic example of a Lie algebra representation is the adjoint representation of a Lie algebra on itself: Indeed, by virtue of the Jacobi identity, is a Lie algebra homomorphism. Infinitesimal Lie group representations A Lie algebra representation also arises in nature. If : G → H is a homomorphism of (real or complex) Lie groups, and and are the Lie algebras of G and H respectively, then the differential on tangent spaces at the identities is a Lie algebra homomorphism. In particular, for a finite-dimensional vector space V, a representation of Lie groups determines a Lie algebra homomorphism from to the Lie algebra of the general linear group GL(V), i.e. the endomorphism algebra of V. For example, let . Then the differential of at the identity is an element of . Denoting it by one obtains a representation of G on the vector space . This is the adjoint representation of G. Applying the preceding, one gets the Lie algebra representation . It can be shown that , t
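The idea that the bracket is realized by the matrix commutator can be checked concretely for the Lie algebra sl(2) of traceless 2×2 matrices, an example not named in the text but standard; its defining representation on column vectors sends each element to itself, so the commutation relations [H,E] = 2E, [H,F] = −2F, [E,F] = H must hold literally. A minimal sketch in plain Python (the helper functions are ad hoc):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(A, B):
    # The commutator [A, B] = AB - BA.
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(len(A))] for i in range(len(A))]

# The standard basis of sl(2).
E = [[0, 1], [0, 0]]
F = [[0, 0], [1, 0]]
H = [[1, 0], [0, -1]]

print(bracket(H, E))  # [[0, 2], [0, 0]]   = 2E
print(bracket(H, F))  # [[0, 0], [-2, 0]]  = -2F
print(bracket(E, F))  # [[1, 0], [0, -1]]  = H
```

These are (up to normalization) the commutation relations of the angular momentum operators mentioned at the start of the article.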
https://en.wikipedia.org/wiki/Hyperbolic%20space
In mathematics, hyperbolic space of dimension n is the unique simply connected, n-dimensional Riemannian manifold of constant sectional curvature equal to -1. It is homogeneous, and satisfies the stronger property of being a symmetric space. There are many ways to construct it as an open subset of with an explicitly written Riemannian metric; such constructions are referred to as models. Hyperbolic 2-space, H2, which was the first instance studied, is also called the hyperbolic plane. It is also sometimes referred to as Lobachevsky space or Bolyai–Lobachevsky space after the names of the authors who first published on the topic of hyperbolic geometry. Sometimes the qualifier "real" is added to differentiate it from complex hyperbolic spaces, quaternionic hyperbolic spaces and the octonionic hyperbolic plane, which are the other symmetric spaces of negative curvature. Hyperbolic space serves as the prototype of a Gromov hyperbolic space, which is a far-reaching notion including differential-geometric as well as more combinatorial spaces via a synthetic approach to negative curvature. Another generalisation is the notion of a CAT(-1) space. Formal definition and models Definition The -dimensional hyperbolic space or Hyperbolic -space, usually denoted , is the unique simply connected, -dimensional complete Riemannian manifold with a constant negative sectional curvature equal to -1. The uniqueness means that any two Riemannian manifolds which satisfy these properties are isometric to each other. It is a consequence of the Killing–Hopf theorem. Models of hyperbolic space To prove the existence of such a space as described above one can explicitly construct it, for example as an open subset of with a Riemannian metric given by a simple formula. There are many such constructions or models of hyperbolic space, each suited to different aspects of its study.
They are isometric to each other according to the previous paragraph, and in each case an explicit isometry can be given. Here is a list of the better-known models, which are described in more detail in their namesake articles: Poincaré half-plane model: this is the upper-half space with the metric Poincaré disc model: this is the unit ball of with the metric . The isometry to the half-space model can be realised by a homography sending a point of the unit sphere to infinity. Hyperboloid model: In contrast with the previous two models this realises hyperbolic -space as isometrically embedded inside the -dimensional Minkowski space (which is not a Riemannian but rather a Lorentzian manifold). More precisely, looking at the quadratic form on , its restrictions to the tangent spaces of the upper sheet of the hyperboloid are positive definite, hence they endow it with a Riemannian metric which turns out to be of constant curvature -1. The isometry to the previous models can be realised by stereographic projection from the hyperboloid to the plane , taking the vertex fro
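As a concrete illustration of the half-plane model (restricted to dimension 2 for simplicity), the hyperbolic distance there has the standard closed form d(p, q) = arcosh(1 + ‖p − q‖² / (2 y₁ y₂)), and horizontal translations and scalings centered at the origin are isometries. A small numerical sketch (the sample points are chosen arbitrarily):

```python
import math

def dist_half_plane(p, q):
    # Hyperbolic distance between points of the upper half-plane (y > 0):
    # d(p, q) = arcosh(1 + ((x2-x1)^2 + (y2-y1)^2) / (2*y1*y2)).
    (x1, y1), (x2, y2) = p, q
    return math.acosh(1 + ((x2 - x1) ** 2 + (y2 - y1) ** 2) / (2 * y1 * y2))

p, q = (0.3, 1.0), (2.0, 0.5)
d = dist_half_plane(p, q)
# Horizontal translation and scaling about the origin are isometries:
shift = dist_half_plane((p[0] + 5, p[1]), (q[0] + 5, q[1]))
scale = dist_half_plane((3 * p[0], 3 * p[1]), (3 * q[0], 3 * q[1]))
print(d, shift, scale)  # all three values agree
```

Invariance under scaling is visible in the formula: both the squared distance in the numerator and the product y₁y₂ in the denominator scale by the same factor.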
https://en.wikipedia.org/wiki/Ernst%20Kummer
Ernst Eduard Kummer (29 January 1810 – 14 May 1893) was a German mathematician. Skilled in applied mathematics, Kummer trained German army officers in ballistics; afterwards, he taught for 10 years in a gymnasium, the German equivalent of high school, where he inspired the mathematical career of Leopold Kronecker. Life Kummer was born in Sorau, Brandenburg (then part of Prussia). He was awarded a PhD from the University of Halle in 1831 for writing a prize-winning mathematical essay (De cosinuum et sinuum potestatibus secundum cosinus et sinus arcuum multiplicium evolvendis), which was published a year later. In 1840, Kummer married Ottilie Mendelssohn, daughter of Nathan Mendelssohn and Henriette Itzig. Ottilie was a cousin of Felix Mendelssohn and his sister Rebecca Mendelssohn Bartholdy, the wife of the mathematician Peter Gustav Lejeune Dirichlet. His second wife (whom he married soon after the death of Ottilie in 1848), Bertha Cauer, was a maternal cousin of Ottilie. Overall, he had 13 children. His daughter Marie married the mathematician Hermann Schwarz. Kummer retired from teaching and from mathematics in 1890 and died three years later in Berlin. Mathematics Kummer made several contributions to mathematics in different areas; he codified some of the relations between different hypergeometric series, known as contiguity relations. The Kummer surface results from taking the quotient of a two-dimensional abelian variety by the cyclic group {1, −1} (an early orbifold: it has 16 singular points, and its geometry was intensively studied in the nineteenth century). Kummer also proved Fermat's Last Theorem for a considerable class of prime exponents (see regular prime, ideal class group). His methods were closer, perhaps, to p-adic ones than to ideal theory as understood later, though the term 'ideal' was invented by Kummer. 
He studied what were later called Kummer extensions of fields: that is, extensions generated by adjoining an nth root to a field already containing a primitive nth root of unity. This is a significant extension of the theory of quadratic extensions, and the genus theory of quadratic forms (linked to the 2-torsion of the class group). As such, it is still foundational for class field theory. Kummer further conducted research in ballistics and, jointly with William Rowan Hamilton he investigated ray systems. Publications See also Kummer configuration Kummer's congruence Kummer series Kummer theory Kummer's theorem, on prime-power divisors of binomial coefficients Kummer's function Kummer sum Kummer variety Kummer–Vandiver conjecture Kummer's transformation of series Ideal number Regular prime Reflection theorem Principalization 25628 Kummer – asteroid named after Ernst Kummer References Eric Temple Bell, Men of Mathematics, Simon and Schuster, New York: 1986. R. W. H. T. Hudson, Kummer's Quartic Surface, Cambridge, [1905] rept. 1990. "Ernst Kummer," in Dictionary of Scientific Biography, ed. C. Gill
https://en.wikipedia.org/wiki/Well-founded%20relation
In mathematics, a binary relation is called well-founded (or wellfounded or foundational) on a class if every non-empty subset has a minimal element with respect to the relation, that is, an element of the subset to which no other element of the subset is related (for instance, an element such that no other element of the subset is smaller than it). In other words, a relation is well-founded if every non-empty subset contains such a minimal element. Some authors include an extra condition that the relation is set-like, i.e., that the elements less than any given element form a set. Equivalently, assuming the axiom of dependent choice, a relation is well-founded if and only if it contains no infinite descending chains, that is, no infinite sequence of elements in which each successive element is related to the one before it. In order theory, a partial order is called well-founded if the corresponding strict order is a well-founded relation. If the order is a total order then it is called a well-order. In set theory, a set is called a well-founded set if the set membership relation is well-founded on the transitive closure of . The axiom of regularity, which is one of the axioms of Zermelo–Fraenkel set theory, asserts that all sets are well-founded. A relation is converse well-founded, upwards well-founded or Noetherian on , if the converse relation is well-founded on . In this case is also said to satisfy the ascending chain condition. In the context of rewriting systems, a Noetherian relation is also called terminating. Induction and recursion An important reason that well-founded relations are interesting is because a version of transfinite induction can be used on them: if () is a well-founded relation, is some property of elements of , and we want to show that holds for all elements of , it suffices to show that: If is an element of and is true for all such that , then must also be true. Well-founded induction is sometimes called Noetherian induction, after Emmy Noether. On par with induction, well-founded relations also support construction of objects by transfinite recursion.
Let be a set-like well-founded relation and a function that assigns an object to each pair of an element and a function on the initial segment of . Then there is a unique function such that for every , That is, if we want to construct a function on , we may define using the values of for . As an example, consider the well-founded relation , where is the set of all natural numbers, and is the graph of the successor function . Then induction on is the usual mathematical induction, and recursion on gives primitive recursion. If we consider the order relation , we obtain complete induction, and course-of-values recursion. The statement that is well-founded is also known as the well-ordering principle. There are other interesting special cases of well-founded induction. When the well-founded relation is the usual ordering on the class of all ordinal numbers, the technique is called transfinite induction. When the well-founded set is a set of recursively-defined data structures, the techniqu
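The recursion scheme described here can be written down generically: given, for each element, the set of its predecessors under the relation, and a rule G, well-founded recursion produces the unique function f with f(x) = G(x, f restricted to the predecessors of x). A sketch reproducing the two examples from the text (the function names are illustrative):

```python
def wf_rec(predecessors, G):
    # Well-founded recursion: f(x) = G(x, {y: f(y) for each predecessor y}).
    # Termination relies on the relation having no infinite descending chains.
    memo = {}
    def f(x):
        if x not in memo:
            memo[x] = G(x, {y: f(y) for y in predecessors(x)})
        return memo[x]
    return f

# Relation = graph of the successor function on the naturals: n R (n+1).
# Recursion along it is primitive recursion, e.g. the factorial:
fact = wf_rec(lambda n: [n - 1] if n > 0 else [],
              lambda n, prev: n * prev[n - 1] if n > 0 else 1)
print(fact(6))  # 720

# Relation = the usual order <: course-of-values recursion, e.g. Fibonacci:
fib = wf_rec(lambda n: range(n),
             lambda n, prev: n if n < 2 else prev[n - 1] + prev[n - 2])
print(fib(10))  # 55
```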
https://en.wikipedia.org/wiki/Index%20notation
In mathematics and computer programming, index notation is used to specify the elements of an array of numbers. The formalism of how indices are used varies according to the subject. In particular, there are different methods for referring to the elements of a list, a vector, or a matrix, depending on whether one is writing a formal mathematical paper for publication, or when one is writing a computer program. In mathematics It is frequently helpful in mathematics to refer to the elements of an array using subscripts. The subscripts can be integers or variables. The array takes the form of tensors in general, since these can be treated as multi-dimensional arrays. Special (and more familiar) cases are vectors (1d arrays) and matrices (2d arrays). The following is only an introduction to the concept: index notation is used in more detail in mathematics (particularly in the representation and manipulation of tensor operations). See the main article for further details. One-dimensional arrays (vectors) A vector can be treated as an array of numbers, written as a row vector or column vector (whichever is used depends on convenience or context): Index notation allows indication of the elements of the array by simply writing ai, where the index i runs from 1 to n, n being the dimension of the vector. For example, given the vector: then some entries are . The notation can be applied to vectors in mathematics and physics. The following vector equation can also be written in terms of the elements of the vector (aka components), that is where the indices take a given range of values. This expression represents a set of equations, one for each index. If the vectors each have n elements, meaning i = 1,2,…n, then the equations are explicitly Hence, index notation serves as an efficient shorthand for representing the general structure of an equation, while remaining applicable to individual components.
Two-dimensional arrays

More than one index is used to describe arrays of numbers in two or more dimensions, such as the elements of a matrix. The entry of a matrix A is written using two indices, say i and j, with or without a comma to separate the indices: aij or ai,j, where the first subscript is the row number and the second is the column number. Juxtaposition is also used as notation for multiplication; this may be a source of confusion. For indices larger than 9, the comma-based notation may be preferable (e.g., a3,12 instead of a312).

Matrix equations are written similarly to vector equations, in terms of the elements of the matrices (also known as components), for all values of i and j. Again such an expression represents a set of equations, one for each pair of indices. If the matrices each have m rows and n columns, meaning i = 1, …, m and j = 1, …, n, then there are mn equations.

Multi-dimensional arrays

The notation allows a clear generalization to multi-dimensional arrays of elements: tensors. For example, repres
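A minimal sketch of the two-index convention (matrix values invented for illustration; Python is 0-based, so aij corresponds to `A[i-1][j-1]`):

```python
# aij is the entry in row i, column j.
A = [[9, 8, 6],
     [1, 2, 7],
     [4, 9, 2]]

a11 = A[0][0]   # row 1, column 1 -> 9
a23 = A[1][2]   # row 2, column 3 -> 7

# A componentwise matrix equation cij = aij + bij: m*n scalar equations.
B = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]
C = [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]
print(C[0])  # [10, 8, 6]
```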
https://en.wikipedia.org/wiki/Affine%20variety
In algebraic geometry, an affine algebraic set is the set of the common zeros over an algebraically closed field k of some family of polynomials in the polynomial ring k[x1, …, xn]. An affine variety, or affine algebraic variety, is an affine algebraic set such that the ideal generated by the defining polynomials is prime.

Some texts call variety any algebraic set, and irreducible variety an algebraic set whose defining ideal is prime (an affine variety in the above sense).

In some contexts (see, for example, Hilbert's Nullstellensatz), it is useful to distinguish the field k in which the coefficients are considered from the algebraically closed field K (containing k) over which the common zeros are considered (that is, the points of the affine algebraic set are in K^n). In this case, the variety is said to be defined over k, and the points of the variety that belong to k^n are said to be k-rational, or rational over k. In the common case where k is the field of real numbers, a k-rational point is called a real point. When the field is not specified, a rational point is a point that is rational over the rational numbers. For example, Fermat's Last Theorem asserts that the affine algebraic variety (it is a curve) defined by x^n + y^n − 1 = 0 has no rational points with both coordinates nonzero for any integer n greater than two.

Introduction

An affine algebraic set is the set of solutions in an algebraically closed field k of a system of polynomial equations with coefficients in k. More precisely, if f1, …, fm are polynomials with coefficients in k, they define an affine algebraic set

V(f1, …, fm) = {(a1, …, an) ∈ k^n : f1(a1, …, an) = ⋯ = fm(a1, …, an) = 0}.

An affine (algebraic) variety is an affine algebraic set that is not the union of two proper affine algebraic subsets. Such an affine algebraic set is often said to be irreducible. If X is an affine algebraic set, and I is the ideal of all polynomials that are zero on X, then the quotient ring R = k[x1, …, xn]/I is called the coordinate ring of X. If X is an affine variety, then I is prime, so the coordinate ring is an integral domain.
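Membership of a point in an affine algebraic set is just simultaneous vanishing of the defining polynomials, which is easy to sketch in code (the curves below are illustrative choices, with exact rational arithmetic standing in for the field of coefficients):

```python
from fractions import Fraction as F

def in_variety(point, polys):
    """True if every defining polynomial vanishes at the point."""
    return all(p(*point) == 0 for p in polys)

# The parabola y - x^2 = 0 in the affine plane:
parabola = [lambda x, y: y - x * x]
print(in_variety((F(2), F(4)), parabola))   # True: (2, 4) lies on it
print(in_variety((F(1), F(3)), parabola))   # False

# The Fermat curve x^3 + y^3 - 1 = 0: (1, 0) is a rational point,
# but one with a zero coordinate, as the theorem allows.
fermat3 = [lambda x, y: x**3 + y**3 - 1]
print(in_variety((F(1), F(0)), fermat3))    # True
```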
The elements of the coordinate ring R are also called the regular functions, or the polynomial functions, on the variety. They form the ring of regular functions on the variety, or, simply, the ring of the variety; in other words (see #Structure sheaf), it is the space of global sections of the structure sheaf of X.

The dimension of a variety is an integer associated to every variety, and even to every algebraic set, whose importance lies in the large number of its equivalent definitions (see Dimension of an algebraic variety).

Examples

The complement of a hypersurface {f = 0} in an affine variety X (that is, X ∖ {f = 0} for some polynomial f) is affine. Its defining equations are obtained by saturating the defining ideal of X with respect to f. The coordinate ring is thus the localization R[f^(−1)]. In particular, the affine line with the origin removed is affine. On the other hand, the affine plane with the origin removed is not an affine variety; cf. Hartogs' extension theorem. The subvarieties of codimension one in the affine space are exactly the hypersurfaces, that is, the varieties defi
https://en.wikipedia.org/wiki/Projective%20variety
In algebraic geometry, a projective variety over an algebraically closed field k is a subset of some projective n-space P^n over k that is the zero-locus of some finite family of homogeneous polynomials in n + 1 variables with coefficients in k that generate a prime ideal, the defining ideal of the variety. Equivalently, an algebraic variety is projective if it can be embedded as a Zariski closed subvariety of P^n.

A projective variety is a projective curve if its dimension is one; it is a projective surface if its dimension is two; it is a projective hypersurface if its dimension is one less than the dimension of the containing projective space; in this case it is the set of zeros of a single homogeneous polynomial.

If X is a projective variety defined by a homogeneous prime ideal I, then the quotient ring k[x0, x1, …, xn]/I is called the homogeneous coordinate ring of X. Basic invariants of X such as the degree and the dimension can be read off the Hilbert polynomial of this graded ring.

Projective varieties arise in many ways. They are complete, which roughly can be expressed by saying that there are no points "missing". The converse is not true in general, but Chow's lemma describes the close relation of these two notions. Showing that a variety is projective is done by studying line bundles or divisors on X.

A salient feature of projective varieties is the set of finiteness constraints on sheaf cohomology. For smooth projective varieties, Serre duality can be viewed as an analog of Poincaré duality. It also leads to the Riemann–Roch theorem for projective curves, i.e., projective varieties of dimension 1. The theory of projective curves is particularly rich, including a classification by the genus of the curve. The classification program for higher-dimensional projective varieties naturally leads to the construction of moduli of projective varieties. Hilbert schemes parametrize closed subschemes of P^n with prescribed Hilbert polynomial.
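The reason homogeneity is required is that a point of projective space is only defined up to scaling, so "F = 0" must be scale-invariant. A homogeneous polynomial of degree d satisfies F(t·x0, …, t·xn) = t^d · F(x0, …, xn), which a quick sketch can check numerically (the polynomial and points are illustrative choices):

```python
# x^2 + y^2 - z^2 is homogeneous of degree 2.
def F(x, y, z):
    return x**2 + y**2 - z**2

# Scaling property F(t*x, t*y, t*z) == t**2 * F(x, y, z):
pt, t = (1, 2, 3), 7
lhs = F(*(t * c for c in pt))
rhs = t**2 * F(*pt)
print(lhs == rhs)  # True

# Hence the zero-locus is well defined on projective points [x : y : z]:
# (3, 4, 5) and its scalar multiple (6, 8, 10) both lie on F = 0.
print(F(3, 4, 5) == 0 and F(6, 8, 10) == 0)  # True
```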
Hilbert schemes, of which Grassmannians are special cases, are also projective schemes in their own right. Geometric invariant theory offers another approach. The classical approaches include the Teichmüller space and Chow varieties.

A particularly rich theory, reaching back to the classics, is available for complex projective varieties, i.e., when the polynomials defining X have complex coefficients. Broadly, the GAGA principle says that the geometry of projective complex analytic spaces (or manifolds) is equivalent to the geometry of projective complex varieties. For example, the theory of holomorphic vector bundles (more generally, coherent analytic sheaves) on X coincides with that of algebraic vector bundles. Chow's theorem says that a subset of projective space is the zero-locus of a family of holomorphic functions if and only if it is the zero-locus of homogeneous polynomials. The combination of analytic and algebraic methods for complex projective varieties leads to areas such as Hodge theory.

Variety and scheme structure

Varie
https://en.wikipedia.org/wiki/Tarski%27s%20theorem%20about%20choice
In mathematics, Tarski's theorem, proved by Alfred Tarski in 1924, states that in ZF the statement "for every infinite set A, there is a bijective map between the sets A and A × A" implies the axiom of choice. The opposite direction was already known; thus the theorem and the axiom of choice are equivalent.

Tarski related that when he tried to publish the theorem in Comptes Rendus de l'Académie des Sciences de Paris, Fréchet and Lebesgue refused to present it. Fréchet wrote that an implication between two well-known propositions is not a new result. Lebesgue wrote that an implication between two false propositions is of no interest.

Proof

The goal is to prove that the axiom of choice is implied by the statement "for every infinite set A, |A| = |A × A|". It is known that the well-ordering theorem is equivalent to the axiom of choice; thus it is enough to show that the statement implies that for every set A there exists a well-order.

Since the collection of all ordinals such that there exists a surjective function from A to the ordinal is a set, there exists an infinite ordinal β such that there is no surjective function from A to β. We assume without loss of generality that the sets A and β are disjoint. By the initial assumption, |A ∪ β| = |(A ∪ β) × (A ∪ β)|, thus there exists a bijection f : A ∪ β → (A ∪ β) × (A ∪ β).

For every x ∈ A, it is impossible that β × {x} ⊆ f[A], because otherwise we could define a surjective function from A to β. Therefore, there exists at least one ordinal γ ∈ β such that f(γ) ∈ β × {x}, so the set S_x = {γ ∈ β : f(γ) ∈ β × {x}} is not empty.

We can define a new function g : A → β by g(x) = min S_x. This function is well defined since S_x is a non-empty set of ordinals, and so has a minimum. For every pair x, y ∈ A with x ≠ y, the sets S_x and S_y are disjoint. Therefore, we can define a well order on A: for every x, y ∈ A we define x ≼ y if and only if g(x) ≤ g(y), since the image of g, that is, g[A], is a set of ordinals and therefore well ordered.
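The two definitions at the heart of the proof can be displayed compactly; a sketch, where A is the infinite set, β the chosen ordinal, and f the bijection from A ∪ β to (A ∪ β) × (A ∪ β):

```latex
% The ordinal attached to each x in A:
\[
  g(x) \;=\; \min \{\, \gamma \in \beta : f(\gamma) \in \beta \times \{x\} \,\},
  \qquad x \in A.
\]
% The sets S_x = \{\gamma \in \beta : f(\gamma) \in \beta \times \{x\}\}
% are pairwise disjoint, so g is injective, and
\[
  x \preceq y \;\iff\; g(x) \le g(y)
\]
% pulls the well-order on the ordinal image g[A] back to a well-order on A.
```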
https://en.wikipedia.org/wiki/Zero%20morphism
In category theory, a branch of mathematics, a zero morphism is a special kind of morphism exhibiting properties like the morphisms to and from a zero object.

Definitions

Suppose C is a category, and f : X → Y is a morphism in C. The morphism f is called a constant morphism (or sometimes left zero morphism) if for any object W in C and any g, h : W → X, fg = fh. Dually, f is called a coconstant morphism (or sometimes right zero morphism) if for any object Z in C and any g, h : Y → Z, gf = hf. A zero morphism is one that is both a constant morphism and a coconstant morphism.

A category with zero morphisms is one where, for every two objects A and B in C, there is a fixed morphism 0AB : A → B, and this collection of morphisms is such that for all objects X, Y, Z in C and all morphisms f : Y → Z, g : X → Y, we have f ∘ 0XY = 0XZ = 0YZ ∘ g. The morphisms 0XY necessarily are zero morphisms and form a compatible system of zero morphisms. If C is a category with zero morphisms, then the collection of the 0XY is unique. This way of defining a "zero morphism" and the phrase "a category with zero morphisms" separately is unfortunate, but if each hom-set has a zero morphism, then the category "has zero morphisms".

Related concepts

If C has a zero object 0, then given two objects X and Y in C, there are canonical morphisms f : X → 0 and g : 0 → Y. Then gf is a zero morphism in MorC(X, Y). Thus, every category with a zero object is a category with zero morphisms, given by the composition 0XY : X → 0 → Y.

If a category has zero morphisms, then one can define the notions of kernel and cokernel for any morphism in that category.
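A concrete instance is the category of pointed sets (sets with a chosen base point, morphisms preserving it), where 0XY sends everything to the base point of Y. A minimal sketch, with invented objects, checking the compatibility condition that composing with a zero morphism yields a zero morphism:

```python
# Pointed sets: an object is (underlying set, base point); a morphism is a
# function sending base point to base point.

def zero_morphism(Y_base):
    """0XY: send every element of X to the base point of Y."""
    return lambda x: Y_base

def compose(g, f):
    return lambda x: g(f(x))

X = ({0, 1, 2}, 0)
Y = ({'a', 'b'}, 'a')
Z = ({10, 20}, 10)

zero_XY = zero_morphism(Y[1])
g = lambda y: 20 if y == 'b' else 10   # a morphism Y -> Z (base 'a' -> 10)

# g o 0XY agrees with 0XZ on every element of X:
h = compose(g, zero_XY)
zero_XZ = zero_morphism(Z[1])
print(all(h(x) == zero_XZ(x) for x in X[0]))  # True
```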