https://en.wikipedia.org/wiki/Abc%20conjecture
The abc conjecture (also known as the Oesterlé–Masser conjecture) is a conjecture in number theory that arose out of a discussion of Joseph Oesterlé and David Masser in 1985. It is stated in terms of three positive integers a, b and c (hence the name) that are relatively prime and satisfy a + b = c. The conjecture essentially states that the product of the distinct prime factors of abc is usually not much smaller than c. A number of famous conjectures and theorems in number theory would follow immediately from the abc conjecture or its versions. Mathematician Dorian Goldfeld described the abc conjecture as "the most important unsolved problem in Diophantine analysis". The abc conjecture originated as the outcome of attempts by Oesterlé and Masser to understand the Szpiro conjecture about elliptic curves, which involves more geometric structures in its statement than the abc conjecture. The abc conjecture was shown to be equivalent to the modified Szpiro conjecture. Various attempts to prove the abc conjecture have been made, but none is currently accepted by the mainstream mathematical community, and, as of 2023, the conjecture is still regarded as unproven.

Formulations

Before stating the conjecture, the notion of the radical of an integer must be introduced: for a positive integer n, the radical of n, denoted rad(n), is the product of the distinct prime factors of n. For example, rad(16) = rad(2^4) = 2, rad(17) = 17, and rad(18) = rad(2 · 3^2) = 2 · 3 = 6. If a, b, and c are coprime positive integers such that a + b = c, it turns out that "usually" c < rad(abc). The abc conjecture deals with the exceptions. Specifically, it states that: for every real number ε > 0, there exist only finitely many triples (a, b, c) of coprime positive integers, with a + b = c, such that c > rad(abc)^(1+ε). An equivalent formulation is: for every real number ε > 0, there exists a constant K_ε such that, for all triples (a, b, c) of coprime positive integers with a + b = c, c < K_ε · rad(abc)^(1+ε). Equivalently (using the little o notation): for all triples (a, b, c) of coprime positive integers with a + b = c, c < rad(abc)^(1+o(1)). A fourth equivalent formulation of the conjecture involves the quality q(a, b, c) of the triple (a, b, c), which is defined as q(a, b, c) = log(c) / log(rad(abc)). For example: q(4, 127, 131) = log(131) / log(rad(4 · 127 · 131)) = log(131) / log(2 · 127 · 131) ≈ 0.47, and q(3, 125, 128) = log(128) / log(rad(3 · 125 · 128)) = log(128) / log(30) ≈ 1.43. A typical triple (a, b, c) of coprime positive integers with a + b = c will have c < rad(abc), i.e. q(a, b, c) < 1.
Triples with q > 1, such as in the second example, are rather special: they consist of numbers divisible by high powers of small prime numbers. The fourth formulation is: for every real number ε > 0, there exist only finitely many triples (a, b, c) of coprime positive integers with a + b = c such that q(a, b, c) > 1 + ε. Whereas it is known that there are infinitely many triples (a, b, c) of coprime positive integers with a + b = c such that q(a, b, c) > 1, the conjecture predicts that only finitely many of those have q > 1.01 or q > 1.001 or even q > 1.0001, etc. In particular, if the conjecture is true, then there must exist a triple (a, b, c) that achieves the maximal possible quality q(a, b, c).

Examples of triples with small radical

The condition that ε > 0 is necessary, as there exist infinitely many triples (a, b, c) with c > rad(abc). For example, let a = 1, b = 2^(6n) − 1, c = 2^(6n). The integer b is divisible by 9: since 2^6 = 64 ≡ 1 (mod 9), we have 2^(6n) − 1 ≡ 0 (mod 9). Using this fact, the following calculation is made: rad(abc) = rad(1) · rad(2^(6n) − 1) · rad(2^(6n)) = 2 · rad(2^(6n) − 1) ≤ 2 · (2^(6n) − 1)/3 < (2/3) c, where the inequality holds because 9 divides b while rad(b) contains the factor 3 only once. By replacing the exponent 6n with other exponents forcing b to have larger square factors, the ratio between the radical and c can be made arbitrarily small. Specifically, let p > 2 be a prime and consider a = 1, b = 2^(p(p−1)) − 1, c = 2^(p(p−1)). Now it may be plausibly claimed that b is divisible by p^2: by Fermat's little theorem, 2^(p−1) ≡ 1 (mod p), so 2^(p(p−1)) − 1 = (2^(p−1))^p − 1 ≡ 0 (mod p^2). The last step uses the fact that if x ≡ 1 (mod p), then x^p ≡ 1 (mod p^2).
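The radical and the quality of a triple are straightforward to compute; the following sketch (plain Python with trial-division factoring, helper names invented here) checks the numbers quoted above.

```python
from math import gcd, log

def rad(n):
    """Radical of n: the product of its distinct prime factors."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:          # leftover prime factor
        result *= n
    return result

def quality(a, b, c):
    """q(a, b, c) = log(c) / log(rad(abc)) for a coprime triple a + b = c."""
    assert a + b == c and gcd(a, b) == 1
    return log(c) / log(rad(a * b * c))

print(rad(18))                 # 6, since 18 = 2 * 3^2
print(quality(4, 127, 131))    # below 1: a typical triple
print(quality(3, 125, 128))    # above 1: rad(3 * 125 * 128) is only 30
```

Note that 3 + 125 = 128 is the classic high-quality example: abc = 48000 = 2^7 · 3 · 5^3, so its radical collapses to 30.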
https://en.wikipedia.org/wiki/GRP
GRP may refer to:

Biochemistry
Gastrin-releasing peptide
Grp78, Grp94, Grp170, glucose-regulated proteins
Grape reaction product

Mathematics
Grp, the category of groups

Technology and materials
Glass-reinforced polymer, also known as fiberglass (or fibreglass)
Gentoo Reference Platform

Transport
Grove Park railway station, London, National Rail station code

Other uses
Government resource planning
US Grasslands Reserve Program
Gross rating point
Gross regional product
GRP Records, an American jazz label
Gurupi Airport, in Brazil
https://en.wikipedia.org/wiki/Splitting%20field
In abstract algebra, a splitting field of a polynomial with coefficients in a field is the smallest field extension of that field over which the polynomial splits, i.e., decomposes into linear factors.

Definition

A splitting field of a polynomial p(X) over a field K is a field extension L of K over which p factors into linear factors p(X) = c (X − a1)(X − a2) ⋯ (X − an), where c lies in K and each ai lies in L, with the ai not necessarily distinct and such that the roots ai generate L over K. The extension L is then an extension of minimal degree over K in which p splits. It can be shown that such splitting fields exist and are unique up to isomorphism. The amount of freedom in that isomorphism is known as the Galois group of p (if we assume it is separable).

Properties

An extension L which is a splitting field for a set of polynomials p(X) over K is called a normal extension of K. Given an algebraically closed field A containing K, there is a unique splitting field L of p between K and A, generated by the roots of p. If K is a subfield of the complex numbers, the existence is immediate. On the other hand, the existence of algebraic closures in general is often proved by 'passing to the limit' from the splitting field result, which therefore requires an independent proof to avoid circular reasoning. Given a separable extension K′ of K, a Galois closure L of K′ is a type of splitting field, and also a Galois extension of K containing K′ that is minimal, in an obvious sense. Such a Galois closure should contain a splitting field for all the polynomials p over K that are minimal polynomials over K of elements a of K′.

Constructing splitting fields

Motivation

Finding roots of polynomials has been an important problem since the time of the ancient Greeks. Some polynomials, however, such as x^2 + 1 over R, the real numbers, have no roots. By constructing the splitting field for such a polynomial one can find the roots of the polynomial in the new field.
The construction

Let F be a field and p(X) be a polynomial in the polynomial ring F[X] of degree n. The general process for constructing K, the splitting field of p(X) over F, is to construct a chain of fields F = K0, K1, ..., Kr = K such that Ki is an extension of Ki−1 containing a new root of p(X). Since p(X) has at most n roots the construction will require at most n extensions. The steps for constructing Ki are given as follows:

Factorize p(X) over Ki into irreducible factors f1(X) f2(X) ⋯ fk(X).
Choose any nonlinear irreducible factor f(X) = fi(X).
Construct the field extension Ki+1 of Ki as the quotient ring Ki+1 = Ki[X] / (f(X)), where (f(X)) denotes the ideal in Ki[X] generated by f(X).
Repeat the process for Ki+1 until p(X) completely factors.

The irreducible factor fi(X) used in the quotient construction may be chosen arbitrarily. Although different choices of factors may lead to different subfield sequences, the resulting splitting fields will be isomorphic. Since f(X) is irreducible, (f(X)) is a maximal ideal of Ki[X], and hence Ki[X] / (f(X)) is, in fact, a field.
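The quotient-ring step can be made concrete over the finite field F2. The sketch below (illustrative helper names, not a library) carries out one extension K1 = F2[X]/(f) with f(X) = X^2 + X + 1, the irreducible quadratic over F2, and verifies that the residue class of X really is a root of f in the new field.

```python
# Elements of F2[X] are lists of bits, index = degree (e.g. [1, 1, 1] is 1 + X + X^2).

def poly_mod(a, f):
    """Reduce polynomial a modulo f over F2."""
    a = a[:]
    while len(a) >= len(f):
        if a[-1]:                      # leading coefficient nonzero:
            shift = len(a) - len(f)    # subtract X^shift * f (XOR over F2)
            for i, c in enumerate(f):
                a[shift + i] ^= c
        a.pop()                        # drop the (now zero) leading term
    return a

def poly_mul(a, b, f):
    """Multiply two elements of the quotient ring F2[X]/(f)."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            prod[i + j] ^= ca & cb
    return poly_mod(prod, f)

f = [1, 1, 1]           # f(X) = 1 + X + X^2, irreducible over F2
x = [0, 1]              # the residue class of X in K1 = F2[X]/(f)
x2 = poly_mul(x, x, f)  # x^2, reduced mod f

# Evaluate f(x) = x^2 + x + 1 inside K1 (coefficientwise XOR of the three terms):
fx = [(x2 + [0])[i] ^ (x + [0])[i] ^ [1, 0][i] for i in range(2)]
print(fx)   # [0, 0] -> the class of X is a root of f in the new field
```

K1 has 4 elements, and adjoining this one root already splits f completely, since the other root is x^2.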
https://en.wikipedia.org/wiki/Abelian%20extension
In abstract algebra, an abelian extension is a Galois extension whose Galois group is abelian. When the Galois group is also cyclic, the extension is also called a cyclic extension. Going in the other direction, a Galois extension is called solvable if its Galois group is solvable, i.e., if the group admits a chain of subgroups, each normal in the next, with abelian quotients. Every finite extension of a finite field is a cyclic extension.

Description

Class field theory provides detailed information about the abelian extensions of number fields, function fields of algebraic curves over finite fields, and local fields. There are two slightly different definitions of the term cyclotomic extension. It can mean either an extension formed by adjoining roots of unity to a field, or a subextension of such an extension. The cyclotomic fields are examples. A cyclotomic extension, under either definition, is always abelian. If a field K contains a primitive n-th root of unity and the n-th root of an element of K is adjoined, the resulting Kummer extension is an abelian extension (if K has characteristic p we should say that p doesn't divide n, since otherwise this can fail even to be a separable extension). In general, however, the Galois groups of n-th roots of elements operate both on the n-th roots and on the roots of unity, giving a non-abelian Galois group as a semi-direct product. The Kummer theory gives a complete description of the abelian extension case, and the Kronecker–Weber theorem tells us that if K is the field of rational numbers, an extension is abelian if and only if it is a subfield of a field obtained by adjoining a root of unity. There is an important analogy with the fundamental group in topology, which classifies all covering spaces of a space: abelian covers are classified by its abelianisation, which relates directly to the first homology group.
https://en.wikipedia.org/wiki/Common%20logarithm
In mathematics, the common logarithm is the logarithm with base 10. It is also known as the decadic logarithm and as the decimal logarithm, named after its base, or Briggsian logarithm, after Henry Briggs, an English mathematician who pioneered its use, as well as standard logarithm. Historically, it was known as logarithmus decimalis or logarithmus decadis. It is indicated by log10(x), or log(x), or sometimes Log(x) with a capital L (however, this notation is ambiguous, since it can also mean the complex natural logarithmic multi-valued function). On calculators, it is printed as "log", but mathematicians usually mean natural logarithm (logarithm with base e ≈ 2.71828) rather than common logarithm when they write "log". To mitigate this ambiguity, the ISO 80000 specification recommends that log10(x) should be written lg(x), and loge(x) should be ln(x). Before the early 1970s, handheld electronic calculators were not available, and mechanical calculators capable of multiplication were bulky, expensive and not widely available. Instead, tables of base-10 logarithms were used in science, engineering and navigation—when calculations required greater accuracy than could be achieved with a slide rule. By turning multiplication and division into addition and subtraction, use of logarithms avoided laborious and error-prone paper-and-pencil multiplications and divisions. Because logarithms were so useful, tables of base-10 logarithms were given in appendices of many textbooks. Mathematical and navigation handbooks included tables of the logarithms of trigonometric functions as well. For the history of such tables, see log table.

Mantissa and characteristic

An important property of base-10 logarithms, which makes them so useful in calculations, is that the logarithms of numbers greater than 1 that differ by a factor of a power of 10 all have the same fractional part. The fractional part is known as the mantissa. Thus, log tables need only show the fractional part.
Tables of common logarithms typically listed the mantissa, to four or five decimal places or more, of each number in a range, e.g. 1000 to 9999. The integer part, called the characteristic, can be computed by simply counting how many places the decimal point must be moved, so that it is just to the right of the first significant digit. For example, the logarithm of 120 is given by the following calculation: log10(120) = log10(10^2 × 1.2) = 2 + log10(1.2) ≈ 2 + 0.07918. The last number (0.07918)—the fractional part or the mantissa of the common logarithm of 120—can be found in the table shown. The location of the decimal point in 120 tells us that the integer part of the common logarithm of 120, the characteristic, is 2.

Negative logarithms

Positive numbers less than 1 have negative logarithms. For example, log10(0.012) = log10(10^−2 × 1.2) = −2 + log10(1.2) ≈ −2 + 0.07918 = −1.92082. To avoid the need for separate tables to convert positive and negative logarithms back to their original numbers, one can express a negative logarithm as a negative integer characteristic plus a positive mantissa. To facilitate this, a special notation, called bar notation, is used: the characteristic is written with a bar over it, so that log10(0.012) is written as 2.07918 with a bar over the 2. The bar over the characteristic indicates that it is negative, while the mantissa remains positive.
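The characteristic/mantissa split described above is a short computation; the sketch below (hypothetical helper name) reproduces the worked values for 120 and 0.012 and shows that both share the mantissa of 1.2.

```python
from math import log10, floor

def characteristic_and_mantissa(x):
    """Split log10(x) into an integer characteristic and a mantissa in [0, 1)."""
    lg = log10(x)
    ch = floor(lg)          # floor, so 0.012 gets characteristic -2, not -1
    return ch, lg - ch

print(characteristic_and_mantissa(120))    # characteristic 2, mantissa ~0.07918
print(characteristic_and_mantissa(1.2))    # characteristic 0, same mantissa
print(characteristic_and_mantissa(0.012))  # characteristic -2, same mantissa
```

Using floor (rather than truncation toward zero) is exactly what the bar notation encodes: a negative integer characteristic plus a positive mantissa.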
https://en.wikipedia.org/wiki/Dirichlet%20character
In analytic number theory and related branches of mathematics, a complex-valued arithmetic function χ is a Dirichlet character of modulus m (where m is a positive integer) if for all integers a and b: 1) χ(ab) = χ(a)χ(b); that is, χ is completely multiplicative. 2) χ(a) = 0 if gcd(a, m) > 1 and χ(a) ≠ 0 if gcd(a, m) = 1 (gcd is the greatest common divisor). 3) χ(a + m) = χ(a); that is, χ is periodic with period m. The simplest possible character, called the principal character, usually denoted χ0 (see Notation below), exists for all moduli: χ0(a) = 1 if gcd(a, m) = 1 and χ0(a) = 0 otherwise. The German mathematician Peter Gustav Lejeune Dirichlet—for whom the character is named—introduced these functions in his 1837 paper on primes in arithmetic progressions.

Notation

φ(n) is Euler's totient function. ζn is a complex primitive n-th root of unity: ζn^n = 1, but ζn^k ≠ 1 for k = 1, 2, ..., n − 1. (Z/mZ)× is the group of units mod m. It has order φ(m). Its dual is the group of Dirichlet characters mod m. p, q, etc. are prime numbers. (m, n) is a standard abbreviation for gcd(m, n). χ, χ′, etc. are Dirichlet characters (the lowercase Greek letter chi for character). There is no standard notation for Dirichlet characters that includes the modulus. In many contexts (such as in the proof of Dirichlet's theorem) the modulus is fixed. In other contexts, such as this article, characters of different moduli appear. Where appropriate this article employs a variation of Conrey labeling (introduced by Brian Conrey and used by the LMFDB). In this labeling characters for modulus m are denoted χm,t where the index t is described in the section the group of characters below. In this labeling, χm,_ denotes an unspecified character and χm,1 denotes the principal character mod m.

Relation to group characters

The word "character" is used several ways in mathematics. In this section it refers to a homomorphism η from a group G (written multiplicatively) to the multiplicative group of the field of complex numbers: η : G → C×. The set of characters is denoted Ĝ. If the product of two characters is defined by pointwise multiplication, the identity by the trivial character, and the inverse by complex inversion, then Ĝ becomes an abelian group.
If G is a finite abelian group then there are 1) an isomorphism G ≅ Ĝ, and 2) the orthogonality relations: the sum of η(a) over all a in G equals |G| if η is the trivial character and 0 otherwise;     and     the sum of η(a) over all η in Ĝ equals |G| if a is the identity and 0 otherwise. The elements of the finite abelian group (Z/mZ)× are the residue classes [a] where gcd(a, m) = 1. A group character ρ : (Z/mZ)× → C× can be extended to a Dirichlet character χ mod m by defining χ(a) = 0 if gcd(a, m) > 1 and χ(a) = ρ([a]) if gcd(a, m) = 1, and conversely, a Dirichlet character mod m defines a group character on (Z/mZ)×. Paraphrasing Davenport: Dirichlet characters can be regarded as a particular case of Abelian group characters. But this article follows Dirichlet in giving a direct and constructive account of them. This is partly for historical reasons, in that Dirichlet's work preceded by several decades the development of group theory, and partly for a mathematical reason, namely that the group in question has a simple and interesting structure which is obscured if one treats it as one treats the general Abelian group.

Elementary facts

4) Since gcd(1, m) = 1, property 2) says χ(1) ≠ 0, so it can be canceled from both sides of χ(1)χ(1) = χ(1 · 1) = χ(1): thus χ(1) = 1. 5) Property 3) is equivalent to: if a ≡ b (mod m), then χ(a) = χ(b). 6) Property 1) implies that, for any positive integer n, χ(a^n) = χ(a)^n.
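As a concrete instance of these definitions, the sketch below (illustrative names, stdlib only) builds all four Dirichlet characters mod 5 from a generator of the cyclic group (Z/5Z)× and checks complete multiplicativity, periodicity, and the first orthogonality relation.

```python
from math import gcd
import cmath

m = 5                        # (Z/5Z)* is cyclic of order 4, generated by g = 2
g, order = 2, 4

def character(t):
    """Dirichlet character mod 5 sending the generator g to exp(2*pi*i*t/order)."""
    zeta = cmath.exp(2j * cmath.pi * t / order)
    dlog = {pow(g, k, m): k for k in range(order)}   # discrete log table
    def chi(a):
        if gcd(a, m) > 1:
            return 0
        return zeta ** dlog[a % m]
    return chi

chars = [character(t) for t in range(order)]
chi = chars[1]

# complete multiplicativity and periodicity:
assert abs(chi(3 * 7) - chi(3) * chi(7)) < 1e-12
assert abs(chi(3 + m) - chi(3)) < 1e-12

# orthogonality: sum over a of chi(a) is phi(5) = 4 for the principal
# character chars[0], and (numerically) 0 for every other character
for t, c in enumerate(chars):
    print(t, abs(sum(c(a) for a in range(m))))
```

The same construction works for any prime modulus once a generator is known; composite moduli need a product over the cyclic factors of the unit group.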
https://en.wikipedia.org/wiki/Algebraic%20number%20theory
Algebraic number theory is a branch of number theory that uses the techniques of abstract algebra to study the integers, rational numbers, and their generalizations. Number-theoretic questions are expressed in terms of properties of algebraic objects such as algebraic number fields and their rings of integers, finite fields, and function fields. These properties, such as whether a ring admits unique factorization, the behavior of ideals, and the Galois groups of fields, can resolve questions of primary importance in number theory, like the existence of solutions to Diophantine equations.

History of algebraic number theory

Diophantus

The beginnings of algebraic number theory can be traced to Diophantine equations, named after the 3rd-century Alexandrian mathematician, Diophantus, who studied them and developed methods for the solution of some kinds of Diophantine equations. A typical Diophantine problem is to find two integers x and y such that their sum, and the sum of their squares, equal two given numbers A and B, respectively: x + y = A, x^2 + y^2 = B. Diophantine equations have been studied for thousands of years. For example, the solutions to the quadratic Diophantine equation x^2 + y^2 = z^2 are given by the Pythagorean triples, originally solved by the Babylonians (c. 1800 BC). Solutions to linear Diophantine equations, such as 26x + 65y = 13, may be found using the Euclidean algorithm (c. 5th century BC). Diophantus' major work was the Arithmetica, of which only a portion has survived.

Fermat

Fermat's Last Theorem was first conjectured by Pierre de Fermat in 1637, famously in the margin of a copy of Arithmetica where he claimed he had a proof that was too large to fit in the margin. No successful proof was published until 1995 despite the efforts of countless mathematicians during the 358 intervening years. The unsolved problem stimulated the development of algebraic number theory in the 19th century and the proof of the modularity theorem in the 20th century.
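The linear Diophantine example above, 26x + 65y = 13, can be solved mechanically with the extended Euclidean algorithm; a short sketch:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Solve 26x + 65y = 13: solvable exactly when gcd(26, 65) divides 13.
g, x, y = extended_gcd(26, 65)
assert 13 % g == 0
scale = 13 // g
x, y = x * scale, y * scale
print(g, x, y)                 # gcd(26, 65) = 13; one solution is x = -2, y = 1
assert 26 * x + 65 * y == 13
```

All other integer solutions are obtained by shifting: x − 5t, y + 2t for any integer t (adding multiples of 65/13 and 26/13).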
Gauss One of the founding works of algebraic number theory, the Disquisitiones Arithmeticae (Latin: Arithmetical Investigations) is a textbook of number theory written in Latin by Carl Friedrich Gauss in 1798 when Gauss was 21 and first published in 1801 when he was 24. In this book Gauss brings together results in number theory obtained by mathematicians such as Fermat, Euler, Lagrange and Legendre and adds important new results of his own. Before the Disquisitiones was published, number theory consisted of a collection of isolated theorems and conjectures. Gauss brought the work of his predecessors together with his own original work into a systematic framework, filled in gaps, corrected unsound proofs, and extended the subject in numerous ways. The Disquisitiones was the starting point for the work of other nineteenth century European mathematicians including Ernst Kummer, Peter Gustav Lejeune Dirichlet and Richard Dedekind. Many of the annotations given by Gauss are in effect announcements of further research
https://en.wikipedia.org/wiki/Laplace%20operator
In mathematics, the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a scalar function on Euclidean space. It is usually denoted by the symbols ∇·∇, ∇² (where ∇ is the nabla operator), or Δ. In a Cartesian coordinate system, the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable. In other coordinate systems, such as cylindrical and spherical coordinates, the Laplacian also has a useful form. Informally, the Laplacian Δf(p) of a function f at a point p measures by how much the average value of f over small spheres or balls centered at p deviates from f(p). The Laplace operator is named after the French mathematician Pierre-Simon de Laplace (1749–1827), who first applied the operator to the study of celestial mechanics: the Laplacian of the gravitational potential due to a given mass density distribution is a constant multiple of that density distribution. Solutions of Laplace's equation are called harmonic functions and represent the possible gravitational potentials in regions of vacuum. The Laplacian occurs in many differential equations describing physical phenomena. Poisson's equation describes electric and gravitational potentials; the diffusion equation describes heat and fluid flow; the wave equation describes wave propagation; and the Schrödinger equation describes the wave function in quantum mechanics. In image processing and computer vision, the Laplacian operator has been used for various tasks, such as blob and edge detection. The Laplacian is the simplest elliptic operator and is at the core of Hodge theory as well as the results of de Rham cohomology.

Definition

The Laplace operator is a second-order differential operator in the n-dimensional Euclidean space, defined as the divergence (∇·) of the gradient (∇f).
Thus if f is a twice-differentiable real-valued function, then the Laplacian of f is the real-valued function defined by: Δf = ∇²f = ∇·∇f, where the latter notations derive from formally writing: ∇ = (∂/∂x1, ..., ∂/∂xn). Explicitly, the Laplacian of f is thus the sum of all the unmixed second partial derivatives in the Cartesian coordinates xi: Δf = ∂²f/∂x1² + ⋯ + ∂²f/∂xn². As a second-order differential operator, the Laplace operator maps C^k functions to C^(k−2) functions for k ≥ 2. It is a linear operator Δ : C^k(R^n) → C^(k−2)(R^n), or more generally, an operator Δ : C^k(Ω) → C^(k−2)(Ω) for any open set Ω ⊆ R^n.

Motivation

Diffusion

In the physical theory of diffusion, the Laplace operator arises naturally in the mathematical description of equilibrium. Specifically, if u is the density at equilibrium of some quantity such as a chemical concentration, then the net flux of u through the boundary ∂V of any smooth region V is zero, provided there is no source or sink within V: the surface integral of ∇u · n over ∂V vanishes, where n is the outward unit normal to the boundary of V. By the divergence theorem, the integral of div(∇u) over V equals that surface integral, and hence is also zero. Since this holds for all smooth regions V, one can show that it implies: div(∇u) = Δu = 0. The left-hand side of this equation is the Laplace operator, and the entire equation Δu = 0 is known as Laplace's equation. Solutions of the Laplace equation, i.e. functions whose Laplacian is identically zero, thus represent possible equilibrium densities under diffusion.
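The Cartesian formula for the Laplacian can be checked numerically with a standard central-difference stencil; the sketch below (assumed step size h, names invented here) recovers Δf = 4 for f(x, y) = x² + y² and Δg = 0 for the harmonic function g(x, y) = x² − y².

```python
def laplacian_2d(f, x, y, h=1e-3):
    """Approximate (d^2 f/dx^2 + d^2 f/dy^2)(x, y) with central differences."""
    d2x = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    d2y = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    return d2x + d2y

f = lambda x, y: x**2 + y**2     # Laplacian is 2 + 2 = 4 everywhere
g = lambda x, y: x**2 - y**2     # harmonic: Laplacian is 2 - 2 = 0

print(laplacian_2d(f, 0.3, -1.7))   # close to 4.0
print(laplacian_2d(g, 0.3, -1.7))   # close to 0.0
```

For these quadratics the stencil is exact up to floating-point rounding, which is why such a coarse check suffices.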
https://en.wikipedia.org/wiki/Div
Div or DIV may refer to:

Science and technology
Division (mathematics), the mathematical operation that is the inverse of multiplication
Span and div, HTML tags that implement generic elements
div, a C mathematical function
Divergence, a mathematical operation in vector calculus
Digital Intrinsic Value, a digital value given to users for their data
Days in vitro, for example see Cultured neuronal network
Desquamative inflammatory vaginitis, an uncommon acute inflammation of the vagina; see Vulva disease

Other uses
Diversity Immigrant Visa, a United States congressionally mandated lottery program for receiving a United States Permanent Resident Card
504 (number) (DIV), in Roman numerals
Div, a demon in Middle Eastern mythology
Divisi or div., a music term used in orchestral scores
Div, a character in the webcomic Penny Arcade
Divorce, a process in which a married couple breaks up and their marriage license is nullified

See also
DIV Games Studio, software for a game development programming language developed by Hammer Technologies; see Fenix Project
Master of Divinity (M.Div.), a professional and academic degree
Division (disambiguation)
Divide (disambiguation)
Divine (disambiguation)
Divinity (disambiguation)
D4 (disambiguation), or D.IV
https://en.wikipedia.org/wiki/Goro%20Shimura
Gorō Shimura was a Japanese mathematician and Michael Henry Strater Professor Emeritus of Mathematics at Princeton University who worked in number theory, automorphic forms, and arithmetic geometry. He was known for developing the theory of complex multiplication of abelian varieties and Shimura varieties, as well as posing the Taniyama–Shimura conjecture which ultimately led to the proof of Fermat's Last Theorem.

Biography

Gorō Shimura was born in Hamamatsu, Japan, on 23 February 1930. Shimura graduated with a B.A. in mathematics and a D.Sc. in mathematics from the University of Tokyo in 1952 and 1958, respectively. After graduating, Shimura became a lecturer at the University of Tokyo, then worked abroad — including ten months in Paris and a seven-month stint at Princeton's Institute for Advanced Study — before returning to Tokyo, where he married Chikako Ishiguro. He then moved from Tokyo to join the faculty of Osaka University, but growing unhappy with his funding situation, he decided to seek employment in the United States. Through André Weil he obtained a position at Princeton University. Shimura joined the Princeton faculty in 1964 and retired in 1999, during which time he advised over 28 doctoral students and received the Guggenheim Fellowship in 1970, the Cole Prize for number theory in 1977, the Asahi Prize in 1991, and the Steele Prize for lifetime achievement in 1996. Shimura described his approach to mathematics as "phenomenological": his interest was in finding new types of interesting behavior in the theory of automorphic forms. He also argued for a "romantic" approach, something he found lacking in the younger generation of mathematicians. Shimura used a two-part process for research, using one desk in his home dedicated to working on new research in the mornings and a second desk for perfecting papers in the afternoon. Shimura had two children, Tomoko and Haru, with his wife Chikako. Shimura died on 3 May 2019 in Princeton, New Jersey at the age of 89.
Research Shimura was a colleague and a friend of Yutaka Taniyama, with whom he wrote the first book on the complex multiplication of abelian varieties and formulated the Taniyama–Shimura conjecture. Shimura then wrote a long series of major papers, extending the phenomena found in the theory of complex multiplication of elliptic curves and the theory of modular forms to higher dimensions (e.g. Shimura varieties). This work provided examples for which the equivalence between motivic and automorphic L-functions postulated in the Langlands program could be tested: automorphic forms realized in the cohomology of a Shimura variety have a construction that attaches Galois representations to them. In 1958, Shimura generalized the initial work of Martin Eichler on the Eichler–Shimura congruence relation between the local L-function of a modular curve and the eigenvalues of Hecke operators. In 1959, Shimura extended the work of Eichler on the Eichler–Shimura isomorphism between Eichler cohomology
https://en.wikipedia.org/wiki/Backslash
The backslash, \, is a typographical mark used mainly in computing and mathematics. It is the mirror image of the common slash, /. It is a relatively recent mark, first documented in the 1930s. It is sometimes called a hack, whack, escape (from C/UNIX), reverse slash, slosh, downwhack, backslant, backwhack, bash, reverse slant, reverse solidus, and reversed virgule.

History

To date, efforts to identify either the origin of this character or its purpose before the 1960s have not been successful. The earliest known reference found to date is a 1937 maintenance manual from the Teletype Corporation with a photograph showing the keyboard of its Kleinschmidt keyboard perforator WPE-3 using the Wheatstone system. The symbol was called the "diagonal key". In June 1960, IBM published an "Extended character set standard" that includes the symbol at 0x19. In September 1961, Bob Bemer (IBM) proposed to the X3.2 standards committee that [, ] and \ be made part of the proposed standard, describing the backslash as a "reverse division operator" and cited its prior use by Teletype in telecommunications. In particular, he said, the \ was needed so that the ALGOL boolean operators ∧ (logical conjunction) and ∨ (logical disjunction) could be composed using /\ and \/ respectively. The Committee adopted these changes into the draft American Standard (subsequently called ASCII) at its November 1961 meeting. These operators were used for min and max in early versions of the C programming language supplied with Unix V6 and V7.

Usage

Programming languages

In many programming languages such as C, Perl, PHP, Python, Unix scripting languages, and many file formats such as JSON, the backslash is used as an escape character, to indicate that the character following it should be treated specially (if it would otherwise be treated literally), or literally (if it would otherwise be treated specially).
For instance, inside a C string literal the sequence \n produces a newline byte instead of an 'n', and the sequence \" produces an actual double quote rather than the special meaning of the double quote ending the string. An actual backslash is produced by a double backslash \\. Regular expression languages use it the same way, changing subsequent literal characters into metacharacters and vice versa. For instance \||b searches for either '|' or 'b': the first bar is escaped and searched for, the second is not escaped and acts as an "or". Outside quoted strings, the only common use of backslash is to ignore ("escape") a newline immediately after it. In this context it may be called a "continued line" as the current line continues into the next one. Some software replaces the backslash+newline with a space. To support computers that lacked the backslash character, the C trigraph ??/ was added, which is equivalent to a backslash. Since this can escape the next character, which may itself be a ?, the primary modern use may be for code obfuscation. Support for trigraphs in C++ was removed in C++17.
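The same escape conventions are easy to see from Python, whose string literals and regular expressions behave as described:

```python
import re

s = "line one\nline two"    # \n becomes an actual newline character
q = "she said \"hi\""       # \" is a literal double quote
b = "C:\\temp"              # \\ is a single literal backslash

print(len("\n"))            # 1 -- one newline character, not two characters
print(q)                    # she said "hi"
print(b)                    # C:\temp

# In a regex, a backslash turns the metacharacter '|' into a literal bar,
# while the unescaped '|' still means "or":
print(re.findall(r"\||b", "a|b|c"))   # ['|', 'b', '|']
```

The r-prefix (raw string) keeps Python's own string-literal escaping from consuming the backslash before the regex engine sees it, which is why regex patterns are conventionally written as raw strings.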
https://en.wikipedia.org/wiki/Functional%20predicate
In formal logic and related branches of mathematics, a functional predicate, or function symbol, is a logical symbol that may be applied to an object term to produce another object term. Functional predicates are also sometimes called mappings, but that term has additional meanings in mathematics. In a model, a function symbol will be modelled by a function. Specifically, the symbol F in a formal language is a functional symbol if, given any symbol X representing an object in the language, F(X) is again a symbol representing an object in that language. In typed logic, F is a functional symbol with domain type T and codomain type U if, given any symbol X representing an object of type T, F(X) is a symbol representing an object of type U. One can similarly define function symbols of more than one variable, analogous to functions of more than one variable; a function symbol in zero variables is simply a constant symbol. Now consider a model of the formal language, with the types T and U modelled by sets [T] and [U] and each symbol X of type T modelled by an element [X] in [T]. Then F can be modelled by the set of pairs ([X], [F(X)]) for [X] in [T], which is simply a function with domain [T] and codomain [U]. It is a requirement of a consistent model that [F(X)] = [F(Y)] whenever [X] = [Y].

Introducing new function symbols

In a treatment of predicate logic that allows one to introduce new predicate symbols, one will also want to be able to introduce new function symbols. Given the function symbols F and G, one can introduce a new function symbol F ∘ G, the composition of F and G, satisfying (F ∘ G)(X) = F(G(X)), for all X. Of course, the right side of this equation doesn't make sense in typed logic unless the domain type of F matches the codomain type of G, so this is required for the composition to be defined. One also gets certain function symbols automatically. In untyped logic, there is an identity predicate id that satisfies id(X) = X for all X.
In typed logic, given any type T, there is an identity predicate idT with domain and codomain type T; it satisfies idT(X) = X for all X of type T. Similarly, if T is a subtype of U, then there is an inclusion predicate of domain type T and codomain type U that satisfies the same equation; there are additional function symbols associated with other ways of constructing new types out of old ones. Additionally, one can define functional predicates after proving an appropriate theorem. (If you're working in a formal system that doesn't allow you to introduce new symbols after proving theorems, then you will have to use relation symbols to get around this, as in the next section.) Specifically, if you can prove that for every X (or every X of a certain type), there exists a unique Y satisfying some condition P, then you can introduce a function symbol F to indicate this. Note that P will itself be a relational predicate involving both X and Y. So if there is such a predicate P and a theorem: For all X of type T, for some unique Y of type U, P(X, Y), then one can introduce a function symbol F of domain type T and codomain type U that satisfies: For all X of type T, P(X, F(X)).
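The typing condition on composition, and the identity predicate idT, can be mimicked in ordinary code; the following is an illustrative sketch (all names hypothetical, not part of any formal-logic library) in which composition is rejected unless the codomain type of G matches the domain type of F.

```python
class FunctionSymbol:
    """A function symbol with explicit domain and codomain type tags."""
    def __init__(self, name, domain, codomain, fn):
        self.name, self.domain, self.codomain, self.fn = name, domain, codomain, fn

    def __call__(self, x):
        return self.fn(x)

    def compose(self, other):
        """Return self o other, defined only when the types line up."""
        if other.codomain != self.domain:
            raise TypeError(f"cannot compose {self.name} o {other.name}")
        return FunctionSymbol(f"({self.name} o {other.name})", other.domain,
                              self.codomain, lambda x: self.fn(other.fn(x)))

def identity(T):
    """The identity predicate id_T with domain and codomain type T."""
    return FunctionSymbol(f"id_{T}", T, T, lambda x: x)

F = FunctionSymbol("F", "int", "str", str)   # models int -> str
G = FunctionSymbol("G", "str", "int", len)   # models str -> int

print(F.compose(G)("abcd"))            # F(G("abcd")) = str(4) = "4"
print(F.compose(identity("int"))(7))   # F o id_int behaves exactly like F
```

Composing G with itself raises a TypeError, mirroring the remark that (F ∘ G)(X) only makes sense when the domain type of F matches the codomain type of G.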
https://en.wikipedia.org/wiki/Relational%20algebra
In database theory, relational algebra is a theory that uses algebraic structures for modeling data, and defining queries on it with a well-founded semantics. The theory was introduced by Edgar F. Codd. The main application of relational algebra is to provide a theoretical foundation for relational databases, particularly query languages for such databases, chief among which is SQL. Relational databases store tabular data represented as relations. Queries over relational databases often likewise return tabular data represented as relations. The main purpose of relational algebra is to define operators that transform one or more input relations to an output relation. Given that these operators accept relations as input and produce relations as output, they can be combined and used to express complex queries that transform multiple input relations (whose data are stored in the database) into a single output relation (the query results). Unary operators accept a single relation as input. Examples include operators to filter certain attributes (columns) or tuples (rows) from an input relation. Binary operators accept two relations as input and combine them into a single output relation. For example, taking all tuples found in either relation (union), removing tuples from the first relation found in the second relation (difference), extending the tuples of the first relation with tuples in the second relation matching certain conditions, and so forth. Other more advanced operators can also be included, where the inclusion or exclusion of certain operators gives rise to a family of algebras.

Introduction

Relational algebra received little attention outside of pure mathematics until the publication of E.F. Codd's relational model of data in 1970. Codd proposed such an algebra as a basis for database query languages. (See section Implementations.)
Relational algebra operates on homogeneous sets of tuples R = {(d_1, ..., d_n), ...}, where we commonly interpret m to be the number of rows (tuples) in a table and n to be the number of columns (attributes). All entries in each column have the same type. Five primitive operators of Codd's algebra are the selection, the projection, the Cartesian product (also called the cross product or cross join), the set union, and the set difference. Set operators The relational algebra uses set union, set difference, and Cartesian product from set theory, but adds additional constraints to these operators. For set union and set difference, the two relations involved must be union-compatible—that is, the two relations must have the same set of attributes. Because set intersection is defined in terms of set union and set difference, the two relations involved in set intersection must also be union-compatible. For the Cartesian product to be defined, the two relations involved must have disjoint headers—that is, they must not have a common attribute name. In addition, the Cartesian product is defined differently from the one in set theory in the sense that t
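The five primitive operators just listed can be sketched in Python over header-plus-rows relations (an illustrative model; the function names and representation are my own, not Codd's notation):

```python
# A relation: a header (tuple of column names) plus a set of row tuples.

def select(header, rows, pred):
    """Selection: keep rows satisfying a predicate over named columns."""
    idx = {c: i for i, c in enumerate(header)}
    return header, {r for r in rows if pred({c: r[i] for c, i in idx.items()})}

def project(header, rows, cols):
    """Projection: keep only the named columns (duplicate rows collapse)."""
    idx = [header.index(c) for c in cols]
    return tuple(cols), {tuple(r[i] for i in idx) for r in rows}

def product(h1, rows1, h2, rows2):
    """Cartesian product: the two headers must be disjoint."""
    assert not set(h1) & set(h2)
    return h1 + h2, {r1 + r2 for r1 in rows1 for r2 in rows2}

def union(h1, rows1, h2, rows2):
    """Set union: the relations must be union-compatible (same header)."""
    assert h1 == h2
    return h1, rows1 | rows2

def difference(h1, rows1, h2, rows2):
    """Set difference: likewise requires union-compatibility."""
    assert h1 == h2
    return h1, rows1 - rows2
```

Note how the union-compatibility and disjoint-header constraints from the text surface as assertions.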
https://en.wikipedia.org/wiki/Tuple%20relational%20calculus
Tuple calculus is a calculus that was created and introduced by Edgar F. Codd as part of the relational model, in order to provide a declarative database-query language for data manipulation in this data model. It formed the inspiration for the database-query languages QUEL and SQL, of which the latter, although far less faithful to the original relational model and calculus, is now the de facto standard database-query language; a dialect of SQL is used by nearly every relational-database-management system. Michel Lacroix and Alain Pirotte proposed domain calculus, which is closer to first-order logic, and showed that both of these calculi (as well as relational algebra) are equivalent in expressive power. Subsequently, query languages for the relational model were called relationally complete if they could express at least all of these queries. Definition of the calculus Relational database Since the calculus is a query language for relational databases we first have to define a relational database. The basic relational building block is the domain (somewhat similar, but not equal to, a data type). A tuple is a finite sequence of attributes, which are ordered pairs of domains and values. A relation is a set of (compatible) tuples. Although these relational concepts are mathematically defined, those definitions map loosely to traditional database concepts. A table is an accepted visual representation of a relation; a tuple is similar to the concept of a row. We first assume the existence of a set C of column names, examples of which are "name", "author", "address", etcetera. We define headers as finite subsets of C. A relational database schema is defined as a tuple S = (D, R, h) where D is the domain of atomic values (see relational model for more on the notions of domain and atomic value), R is a finite set of relation names, and h : R → 2^C a function that associates a header with each relation name in R.
(Note that this is a simplification from the full relational model where there is more than one domain and a header is not just a set of column names but also maps these column names to a domain.) Given a domain D we define a tuple over D as a partial function t : C ⇸ D that maps some column names to an atomic value in D. An example would be (name : "Harry", age : 25). The set of all tuples over D is denoted as TD. The subset of C for which a tuple t is defined is called the domain of t (not to be confused with the domain in the schema) and denoted as dom(t). Finally we define a relational database given a schema S = (D, R, h) as a function db : R → 2^TD that maps the relation names in R to finite subsets of TD, such that for every relation name r in R and tuple t in db(r) it holds that dom(t) = h(r). The latter requirement simply says that all the tuples in a relation should contain the same column names, namely those defined for it in the schema. Atoms For the construction of the formulas we will assu
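These definitions translate almost verbatim into Python, with a tuple modeled as a dict (a sketch; the "person" relation and its header are invented examples):

```python
# A tuple over D: a dict mapping some column names to atomic values.
# dom(t) is then simply the set of keys of t.

def dom(t):
    """The subset of column names on which tuple t is defined."""
    return set(t.keys())

def is_valid_db(h, db):
    """Check the schema condition: every tuple t in db(r) has dom(t) = h(r)."""
    return all(dom(t) == set(h[r]) for r in db for t in db[r])

# Schema component h: relation name -> header (set of column names).
h = {"person": {"name", "age"}}

# Database instances: relation name -> finite set of tuples.
db_ok = {"person": [{"name": "Harry", "age": 25}]}
db_bad = {"person": [{"name": "Harry"}]}  # missing the "age" column
```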
https://en.wikipedia.org/wiki/Cayley%E2%80%93Dickson%20construction
In mathematics, the Cayley–Dickson construction, named after Arthur Cayley and Leonard Eugene Dickson, produces a sequence of algebras over the field of real numbers, each with twice the dimension of the previous one. The algebras produced by this process are known as Cayley–Dickson algebras, for example complex numbers, quaternions, and octonions. These examples are useful composition algebras frequently applied in mathematical physics. The Cayley–Dickson construction defines a new algebra as a Cartesian product of an algebra with itself, with multiplication defined in a specific way (different from the componentwise multiplication) and an involution known as conjugation. The product of an element and its conjugate (or sometimes the square root of this product) is called the norm. The symmetries of the real field disappear as the Cayley–Dickson construction is repeatedly applied: first losing order, then commutativity of multiplication, associativity of multiplication, and finally alternativity. More generally, the Cayley–Dickson construction takes any algebra with involution to another algebra with involution of twice the dimension. Hurwitz's theorem (composition algebras) states that the reals, complex numbers, quaternions, and octonions are the only (normed) division algebras (over the real numbers). Synopsis The Cayley–Dickson construction is due to Leonard Dickson in 1919 showing how the octonions can be constructed as a two-dimensional algebra over quaternions. In fact, starting with a field F, the construction yields a sequence of F-algebras of dimension 2n. For n = 2 it is an associative algebra called a quaternion algebra, and for n = 3 it is an alternative algebra called an octonion algebra. These instances n = 1, 2 and 3 produce composition algebras as shown below. The case n = 1 starts with elements (a, b) in F × F and defines the conjugate (a, b)* to be (a*, –b) where a* = a in case n = 1, and subsequently determined by the formula. 
The essence of the F-algebra lies in the definition of the product of two elements (a, b) and (c, d): (a, b) (c, d) = (a c − d* b, d a + b c*). Proposition 1: For p = (a, b) and q = (c, d), the conjugate of the product is (p q)* = q* p*. Proof: (p q)* = (a c − d* b, d a + b c*)* = (c* a* − b* d, −(d a + b c*)), while q* p* = (c*, −d)(a*, −b) = (c* a* − b* d, −b c* − d a), and the two agree. Proposition 2: If the F-algebra is associative and N(p) = p p*, then N(p q) = N(p) N(q). Proof: expanding both sides in components using the product rule above, the two sides agree up to terms that cancel by the associative property. Stages in construction of real algebras Details of the construction of the classical real algebras are as follows: Complex numbers as ordered pairs The complex numbers can be written as ordered pairs (a, b) of real numbers a and b, with the addition operator being component-wise and with multiplication defined by (a, b) (c, d) = (a c − b d, a d + b c). A complex number whose second component is zero is associated with a real number: the complex number (a, 0) is associated with the real number a. The complex conjugate of (a, b) is given by (a, b)* = (a*, −b) = (a, −b), since a is a real number and is its own conjugate. The conjugate has the property that (a, b) (a, b)* = (a² + b², 0), which is a non-negative real number. In this way, conjugation defines a norm, making the complex numbers a normed vector space over
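The doubling step can be sketched recursively in Python, representing an element of each algebra as either a float (the base field) or a nested pair, using the product rule (a, b)(c, d) = (ac − d*b, da + bc*) (one common sign convention; others appear in the literature):

```python
def conj(x):
    """Conjugate: a real is its own conjugate; (a, b)* = (a*, -b)."""
    if isinstance(x, tuple):
        a, b = x
        return (conj(a), neg(b))
    return x

def neg(x):
    """Componentwise negation at every level of nesting."""
    if isinstance(x, tuple):
        return (neg(x[0]), neg(x[1]))
    return -x

def add(x, y):
    """Componentwise addition."""
    if isinstance(x, tuple):
        return (add(x[0], y[0]), add(x[1], y[1]))
    return x + y

def mul(x, y):
    """Cayley-Dickson product: (a, b)(c, d) = (ac - d*b, da + bc*)."""
    if isinstance(x, tuple):
        a, b = x
        c, d = y
        return (add(mul(a, c), neg(mul(conj(d), b))),
                add(mul(d, a), mul(b, conj(c))))
    return x * y
```

With pairs of reals this reproduces complex arithmetic (for instance mul((0, 1), (0, 1)) gives (-1, 0), i.e. i² = −1), and with pairs of pairs it reproduces the non-commutative quaternion product.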
https://en.wikipedia.org/wiki/Relational%20calculus
The relational calculus consists of two calculi, the tuple relational calculus and the domain relational calculus, that are part of the relational model for databases and provide a declarative way to specify database queries. The raison d'être of relational calculus is the formalization of query optimization, which is finding more efficient manners to execute the same query in a database. The relational calculus is similar to the relational algebra, which is also part of the relational model: while the relational calculus is meant as a declarative language that prescribes no execution order on the subexpressions of a relational calculus expression, the relational algebra is meant as an imperative language: the sub-expressions of a relational algebraic expression are meant to be executed from left-to-right and inside-out following their nesting. Per Codd's theorem, the relational algebra and the domain-independent relational calculus are logically equivalent. Example A relational algebra expression might prescribe the following steps to retrieve the phone numbers and names of book stores that supply Some Sample Book: Join book stores and titles over the BookstoreID. Restrict the result of that join to tuples for the book Some Sample Book. Project the result of that restriction over StoreName and StorePhone. A relational calculus expression would formulate this query in the following descriptive or declarative manner: Get StoreName and StorePhone for book stores such that there exists a title BK with the same BookstoreID value and with a BookTitle value of Some Sample Book. Mathematical properties The relational algebra and the domain-independent relational calculus are logically equivalent: for any algebraic expression, there is an equivalent expression in the calculus, and vice versa. This result is known as Codd's theorem. Purpose The raison d'être of the relational calculus is the formalization of query optimization.
Query optimization consists in determining from a query the most efficient manner (or manners) to execute it. Query optimization can be formalized as translating a relational calculus expression delivering an answer A into efficient relational algebraic expressions delivering the same answer A. See also Calculus of relations References
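The three algebraic steps of the bookstore example can be sketched in Python over lists of dicts (the store and title data are invented for illustration):

```python
stores = [
    {"BookstoreID": 1, "StoreName": "Athena Books", "StorePhone": "555-0101"},
    {"BookstoreID": 2, "StoreName": "Page One", "StorePhone": "555-0202"},
]
titles = [
    {"BookstoreID": 1, "BookTitle": "Some Sample Book"},
    {"BookstoreID": 2, "BookTitle": "Another Title"},
]

# 1. Join book stores and titles over BookstoreID.
joined = [{**s, **t} for s in stores for t in titles
          if s["BookstoreID"] == t["BookstoreID"]]

# 2. Restrict the join to tuples for the book "Some Sample Book".
restricted = [r for r in joined if r["BookTitle"] == "Some Sample Book"]

# 3. Project the restriction over StoreName and StorePhone.
result = [{"StoreName": r["StoreName"], "StorePhone": r["StorePhone"]}
          for r in restricted]
```

A calculus formulation would state only the membership condition of `result`; the step ordering here is exactly the imperative reading the text attributes to the algebra.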
https://en.wikipedia.org/wiki/Whole%20number
Whole number is a colloquial term in mathematics. The meaning is ambiguous. It may refer to either: Natural number, an element of the set {1, 2, 3, ...} or of the set {0, 1, 2, 3, ...} Integer, an element of the set {..., −2, −1, 0, 1, 2, ...}
https://en.wikipedia.org/wiki/Underwood%20Dudley
Underwood Dudley (born January 6, 1937) is an American mathematician and writer. His popular works include several books describing crank mathematics by pseudomathematicians who incorrectly believe they have squared the circle or done other impossible things. Career Dudley was born in New York City. He received bachelor's and master's degrees from the Carnegie Institute of Technology and a PhD from the University of Michigan. His academic career consisted of two years at Ohio State University followed by 37 at DePauw University, from which he retired in 2004. He edited the College Mathematics Journal and the Pi Mu Epsilon Journal, and was a Pólya Lecturer for the Mathematical Association of America (MAA) for two years. He is the discoverer of the Dudley triangle. Publications Dudley's popular books include Mathematical Cranks (MAA 1992), The Trisectors (MAA 1996), and Numerology: Or, What Pythagoras Wrought (MAA 1997). Dudley won the Trevor Evans Award for expository writing from the MAA in 1996. Dudley has also written and edited straightforward mathematical works such as Readings for Calculus (MAA 1993) and Elementary Number Theory (W.H. Freeman 1978). In 2009, he authored A Guide to Elementary Number Theory (MAA, 2009), published in the Mathematical Association of America's Dolciani Mathematical Expositions series. Lawsuit In 1995, Dudley was one of several people sued by William Dilworth for defamation because Mathematical Cranks included an analysis of Dilworth's "A correction in set theory", an attempted refutation of Cantor's diagonal method. The suit was dismissed in 1996 due to failure to state a claim. The dismissal was upheld on appeal in a decision written by jurist Richard Posner. From the decision: "A crank is a person inexplicably obsessed by an obviously unsound idea—a person with a bee in his bonnet.
To call a person a crank is to say that because of some quirk of temperament he is wasting his time pursuing a line of thought that is plainly without merit or promise ... To call a person a crank is basically just a colorful and insulting way of expressing disagreement with his master idea, and it therefore belongs to the language of controversy rather than to the language of defamation." See also Pseudomathematics References External links DePauw University News story on Underwood Dudley and his "crank file" (with photo) Review of Hans Walser's The Golden Section by Underwood Dudley
https://en.wikipedia.org/wiki/Riemann%20sum
In mathematics, a Riemann sum is a certain kind of approximation of an integral by a finite sum. It is named after nineteenth-century German mathematician Bernhard Riemann. One very common application is in numerical integration, i.e., approximating the area of functions or lines on a graph, where it is also known as the rectangle rule. It can also be applied for approximating the length of curves and other approximations. The sum is calculated by partitioning the region into shapes (rectangles, trapezoids, parabolas, or cubics) that together form a region that is similar to the region being measured, then calculating the area for each of these shapes, and finally adding all of these small areas together. This approach can be used to find a numerical approximation for a definite integral even if the fundamental theorem of calculus does not make it easy to find a closed-form solution. Because the region filled by the small shapes is usually not exactly the same shape as the region being measured, the Riemann sum will differ from the area being measured. This error can be reduced by dividing up the region more finely, using smaller and smaller shapes. As the shapes get smaller and smaller, the sum approaches the Riemann integral. Definition Let f : [a, b] → R be a function defined on a closed interval [a, b] of the real numbers, and let P = (x_0, x_1, ..., x_n) be a partition of [a, b], that is a = x_0 < x_1 < ... < x_n = b. A Riemann sum S of f over [a, b] with partition P is defined as S = sum_{i=1}^{n} f(x_i*) Δx_i, where Δx_i = x_i − x_{i−1} and x_i* ∈ [x_{i−1}, x_i]. One might produce different Riemann sums depending on which x_i*'s are chosen. In the end this will not matter, if the function is Riemann integrable, when the width Δx_i of the summands approaches zero. Types of Riemann sums Specific choices of x_i* give different types of Riemann sums: If x_i* = x_{i−1} for all i, the method is the left rule and gives a left Riemann sum. If x_i* = x_i for all i, the method is the right rule and gives a right Riemann sum. If x_i* = (x_{i−1} + x_i)/2 for all i, the method is the midpoint rule and gives a middle Riemann sum.
If f(x_i*) = sup f([x_{i−1}, x_i]) (that is, the supremum of f over [x_{i−1}, x_i]), the method is the upper rule and gives an upper Riemann sum or upper Darboux sum. If f(x_i*) = inf f([x_{i−1}, x_i]) (that is, the infimum of f over [x_{i−1}, x_i]), the method is the lower rule and gives a lower Riemann sum or lower Darboux sum. All these Riemann summation methods are among the most basic ways to accomplish numerical integration. Loosely speaking, a function is Riemann integrable if all Riemann sums converge as the partition "gets finer and finer". While not derived as a Riemann sum, taking the average of the left and right Riemann sums is the trapezoidal rule and gives a trapezoidal sum. It is one of the simplest of a very general class of ways of approximating integrals using weighted averages. This is followed in complexity by Simpson's rule and Newton–Cotes formulas. Any Riemann sum on a given partition (that is, for any choice of x_i* between x_{i−1} and x_i) is contained between the lower and upper Darboux sums. This forms the basis of the Darboux integral, which is ultimately equivalent to the Riemann integral. Riemann summation
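The left, right, and midpoint rules, together with the trapezoidal average of the first two, can be sketched as:

```python
def riemann_sum(f, a, b, n, rule="left"):
    """Approximate the integral of f over [a, b] with n equal subintervals."""
    dx = (b - a) / n
    if rule == "left":
        xs = [a + i * dx for i in range(n)]           # x_i* = x_{i-1}
    elif rule == "right":
        xs = [a + (i + 1) * dx for i in range(n)]     # x_i* = x_i
    elif rule == "midpoint":
        xs = [a + (i + 0.5) * dx for i in range(n)]   # x_i* = midpoint
    else:
        raise ValueError(rule)
    return sum(f(x) for x in xs) * dx

def trapezoidal(f, a, b, n):
    """Trapezoidal rule: the average of the left and right Riemann sums."""
    return 0.5 * (riemann_sum(f, a, b, n, "left") +
                  riemann_sum(f, a, b, n, "right"))
```

For f(x) = x² on [0, 1] (exact integral 1/3), the midpoint and trapezoidal sums with n = 1000 already agree with 1/3 to several decimal places, illustrating the convergence described above.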
https://en.wikipedia.org/wiki/Well-posed%20problem
In mathematics, a well-posed problem is one for which the following properties hold: (1) the problem has a solution; (2) the solution is unique; (3) the solution's behavior changes continuously with the initial conditions. This definition of a well-posed problem comes from the work of Jacques Hadamard on mathematical modeling of physical phenomena. Examples of archetypal well-posed problems include the Dirichlet problem for Laplace's equation, and the heat equation with specified initial conditions. These might be regarded as 'natural' problems in that there are physical processes modelled by these problems. Problems that are not well-posed in the sense of Hadamard are termed ill-posed. Inverse problems are often ill-posed. For example, the inverse heat equation, deducing a previous distribution of temperature from final data, is not well-posed in that the solution is highly sensitive to changes in the final data. Continuum models must often be discretized in order to obtain a numerical solution. While solutions may be continuous with respect to the initial conditions, they may suffer from numerical instability when solved with finite precision, or with errors in the data. Even if a problem is well-posed, it may still be ill-conditioned, meaning that a small error in the initial data can result in much larger errors in the answers. Problems in nonlinear complex systems (so-called chaotic systems) provide well-known examples of instability. An ill-conditioned problem is indicated by a large condition number. If the problem is well-posed, then it stands a good chance of solution on a computer using a stable algorithm. If it is not well-posed, it needs to be re-formulated for numerical treatment. Typically this involves including additional assumptions, such as smoothness of solution. This process is known as regularization. Tikhonov regularization is one of the most commonly used methods for the regularization of linear ill-posed problems.
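Tikhonov regularization as just described can be sketched on a deliberately ill-conditioned 2x2 least-squares problem (a pure-Python illustration; the matrix, data, and helper names are invented):

```python
def solve2(M, v):
    """Solve a 2x2 linear system M x = v by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - M[0][1] * v[1]) / det,
            (M[0][0] * v[1] - v[0] * M[1][0]) / det]

def tikhonov(A, b, lam):
    """x = argmin ||Ax - b||^2 + lam ||x||^2, via (A^T A + lam I) x = A^T b."""
    n = len(A)
    AtA = [[sum(A[k][i] * A[k][j] for k in range(n)) + (lam if i == j else 0.0)
            for j in range(2)]
           for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(n)) for i in range(2)]
    return solve2(AtA, Atb)

# Nearly singular (ill-conditioned) system: tiny perturbations of b swing the
# unregularized solution wildly; a small lam damps that instability.
A = [[1.0, 1.0], [1.0, 1.0001]]
b = [2.0, 2.0001]
x_plain = tikhonov(A, b, 0.0)   # essentially the unregularized solve
x_reg = tikhonov(A, b, 1e-3)    # regularized solution
```

The extra `lam` on the diagonal is exactly the "additional assumption" (a smallness prior on x) that turns the unstable solve into a stable one.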
Energy method A method to determine the well-posedness of a problem is the energy method. The method is based upon deriving an energy estimate for a given problem. Example: Consider the linear advection equation ∂u/∂t + α ∂u/∂x = 0 on an interval, with homogeneous Dirichlet boundary conditions and suitable initial data u(x, 0) = f(x). Carrying out the energy method for this problem, one would multiply the equation by u and integrate in space over the given interval. Then one would integrate in time and obtain the energy estimate (in the 2-norm) ‖u(·, T)‖ ≤ ‖f‖. From this energy estimate one can conclude that the problem is well-posed. See also Total absorption spectroscopy – an example of an inverse problem or ill-posed problem in a real-life situation that is solved by means of the expectation–maximization algorithm References
https://en.wikipedia.org/wiki/Computational%20geometry
Computational geometry is a branch of computer science devoted to the study of algorithms which can be stated in terms of geometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry. While modern computational geometry is a recent development, it is one of the oldest fields of computing with a history stretching back to antiquity. Computational complexity is central to computational geometry, with great practical significance if algorithms are used on very large datasets containing tens or hundreds of millions of points. For such sets, the difference between O(n²) and O(n log n) may be the difference between days and seconds of computation. The main impetus for the development of computational geometry as a discipline was progress in computer graphics and computer-aided design and manufacturing (CAD/CAM), but many problems in computational geometry are classical in nature, and may come from mathematical visualization. Other important applications of computational geometry include robotics (motion planning and visibility problems), geographic information systems (GIS) (geometrical location and search, route planning), integrated circuit design (IC geometry design and verification), computer-aided engineering (CAE) (mesh generation), and computer vision (3D reconstruction). The main branches of computational geometry are: Combinatorial computational geometry, also called algorithmic geometry, which deals with geometric objects as discrete entities. A foundational book in the subject by Preparata and Shamos dates the first use of the term "computational geometry" in this sense to 1975. Numerical computational geometry, also called machine geometry, computer-aided geometric design (CAGD), or geometric modeling, which deals primarily with representing real-world objects in forms suitable for computer computations in CAD/CAM systems.
This branch may be seen as a further development of descriptive geometry and is often considered a branch of computer graphics or CAD. The term "computational geometry" in this meaning has been in use since 1971. Although most algorithms of computational geometry have been developed (and are being developed) for electronic computers, some algorithms were developed for unconventional computers (e.g. optical computers ) Combinatorial computational geometry The primary goal of research in combinatorial computational geometry is to develop efficient algorithms and data structures for solving problems stated in terms of basic geometrical objects: points, line segments, polygons, polyhedra, etc. Some of these problems seem so simple that they were not regarded as problems at all until the advent of computers. Consider, for example, the Closest pair problem: Given n points in the plane, find the two with the smallest distance from each other. One could compute the distances between all the pairs of
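The brute-force closest-pair scan being described, which compares all O(n²) pairs of points, can be sketched as:

```python
from itertools import combinations
from math import dist

def closest_pair(points):
    """Return the pair of points at smallest Euclidean distance, in O(n^2)."""
    return min(combinations(points, 2), key=lambda pq: dist(pq[0], pq[1]))
```

A classic divide-and-conquer algorithm solves the same problem in O(n log n), which is precisely the kind of asymptotic gap the introduction highlights.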
https://en.wikipedia.org/wiki/Error%20function
In mathematics, the error function (also called the Gauss error function), often denoted by erf, is a complex function of a complex variable defined as: erf z = (2/√π) ∫₀^z e^(−t²) dt. Some authors define the function without the factor of 2/√π. This nonelementary integral is a sigmoid function that occurs often in probability, statistics, and partial differential equations. In many of these applications, the function argument is a real number. If the function argument is real, then the function value is also real. In statistics, for non-negative values of x, the error function has the following interpretation: for a random variable Y that is normally distributed with mean 0 and standard deviation 1/√2, erf x is the probability that Y falls in the range [−x, x]. Two closely related functions are the complementary error function (erfc) defined as erfc z = 1 − erf z and the imaginary error function (erfi) defined as erfi z = −i erf(iz), where i is the imaginary unit. Name The name "error function" and its abbreviation erf were proposed by J. W. L. Glaisher in 1871 on account of its connection with "the theory of Probability, and notably the theory of Errors." The error function complement was also discussed by Glaisher in a separate publication in the same year. For the "law of facility" of errors whose density is given by (c/π)^(1/2) e^(−cx²) (the normal distribution), Glaisher calculates the probability of an error lying between p and q as: (c/π)^(1/2) ∫_p^q e^(−cx²) dx. Applications When the results of a series of measurements are described by a normal distribution with standard deviation σ and expected value 0, then erf(a/(σ√2)) is the probability that the error of a single measurement lies between −a and +a, for positive a. This is useful, for example, in determining the bit error rate of a digital communication system. The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function. The error function and its approximations can be used to estimate results that hold with high probability or with low probability.
Given a random variable X ~ Norm[μ, σ] (a normal distribution with mean μ and standard deviation σ) and a constant L < μ, it can be shown via integration by substitution: Pr[X ≤ L] = ½ + ½ erf((L − μ)/(√2 σ)) ≤ A exp(−B ((L − μ)/σ)²), where A and B are certain numeric constants. If L is sufficiently far from the mean, specifically μ − L ≥ σ√(ln k), then: Pr[X ≤ L] ≤ A exp(−B ln k) = A/k^B, so the probability goes to 0 as k → ∞. The probability for X being in the interval [L_a, L_b] can be derived as Pr[L_a ≤ X ≤ L_b] = ½ (erf((L_b − μ)/(√2 σ)) − erf((L_a − μ)/(√2 σ))). Properties The property erf(−z) = −erf z means that the error function is an odd function. This directly results from the fact that the integrand e^(−t²) is an even function (the antiderivative of an even function which is zero at the origin is an odd function and vice versa). Since the error function is an entire function which takes real numbers to real numbers, for any complex number z: erf(z*) = (erf z)*, where z* is the complex conjugate of z. The integrand f = exp(−z²) and f = erf z are shown in the complex z-plane in the figures at right with domain coloring. The error function at +∞ is exactly 1 (see Gaussian integral). At the real axis, erf z approaches unity at z → +∞ and −1 at z → −∞. At the imaginary axis, it tends to ±i∞. Taylor series The e
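Several of these identities can be checked numerically with the error function in Python's standard library:

```python
from math import erf, erfc, sqrt

# erf is odd, and erf + erfc = 1 by definition of the complement.
assert erf(-1.5) == -erf(1.5)
assert abs(erf(2.0) + erfc(2.0) - 1.0) < 1e-12

def prob_within(a, sigma):
    """P(|X| <= a) for X ~ Normal(0, sigma), via erf(a / (sigma * sqrt(2)))."""
    return erf(a / (sigma * sqrt(2)))

# The familiar ~68% rule: probability of landing within one standard deviation.
p_one_sigma = prob_within(1.0, 1.0)
```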
https://en.wikipedia.org/wiki/Equational%20prover
EQP, an abbreviation for equational prover, is an automated theorem proving program for equational logic, developed by the Mathematics and Computer Science Division of the Argonne National Laboratory. It was one of the provers used for solving a longstanding problem posed by Herbert Robbins, namely, whether all Robbins algebras are Boolean algebras. External links EQP project. Robbins Algebras Are Boolean. Argonne National Laboratory, Mathematics and Computer Science Division.
https://en.wikipedia.org/wiki/Crime%20statistics
Crime statistics refer to systematic, quantitative results about crime, as opposed to crime news or anecdotes. Notably, crime statistics can be the result of two rather different processes: scientific research, such as criminological studies and victimisation surveys; and official figures, such as those published by the police, prosecution, courts, and prisons. However, in their research, criminologists often draw on official figures as well. Methods There are several methods for the measuring of crime. Public surveys are occasionally conducted to estimate the amount of crime that has not been reported to police. Such surveys are usually more reliable for assessing trends. However, they also have their limitations: they generally don't procure statistics useful for local crime prevention, often ignore offenses against children, and do not count offenders brought before the criminal justice system. Law enforcement agencies in some countries offer compilations of statistics for various types of crime. Two major methods for collecting crime data are law enforcement reports, which only reflect crimes that are reported, recorded, and not subsequently canceled; and victim studies (victimization statistical surveys), which rely on individual memory and honesty. For less frequent crimes such as intentional homicide and armed robbery, reported incidences are generally more reliable, but suffer from under-recording; for example, the practice of "no-criming" in the United Kingdom results in over one third of reported violent crimes not being recorded by the police. Because laws and practices vary between jurisdictions, comparing crime statistics between and even within countries can be difficult: typically only violent deaths (homicide or manslaughter) can reliably be compared, due to consistent and high reporting and relatively clear definitions. The U.S. has two major data collection programs, the Uniform Crime Reports from the FBI and the National Crime Victimization Survey from the Bureau of Justice Statistics.
However, the U.S. has no comprehensive infrastructure to monitor crime trends and report the information to related parties such as law enforcement. Research using a series of victim surveys in 18 countries of the European Union, funded by the European Commission, has reported (2005) that the level of crime in Europe has fallen back to the levels of 1990, and notes that levels of common crime have shown declining trends in the U.S., Canada, Australia and other industrialized countries as well. The European researchers say a general consensus identifies demographic change as the leading cause for this international trend. Although homicide and robbery rates rose in the U.S. in the 1980s, by the end of the century they had declined by 40%. However, the European research suggests that "increased use of crime prevention measures may indeed be the common factor behind the near universal decrease in overall levels of crime in the Western world", since decreases have been most pronounced
https://en.wikipedia.org/wiki/Voronoi%20diagram
In mathematics, a Voronoi diagram is a partition of a plane into regions close to each of a given set of objects. It can also be classified as a tessellation. In the simplest case, these objects are just finitely many points in the plane (called seeds, sites, or generators). For each seed there is a corresponding region, called a Voronoi cell, consisting of all points of the plane closer to that seed than to any other. The Voronoi diagram of a set of points is dual to that set's Delaunay triangulation. The Voronoi diagram is named after mathematician Georgy Voronoy, and is also called a Voronoi tessellation, a Voronoi decomposition, a Voronoi partition, or a Dirichlet tessellation (after Peter Gustav Lejeune Dirichlet). Voronoi cells are also known as Thiessen polygons. Voronoi diagrams have practical and theoretical applications in many fields, mainly in science and technology, but also in visual art. The simplest case In the simplest case, shown in the first picture, we are given a finite set of points {p_1, ..., p_n} in the Euclidean plane. In this case each site p_k is one of these given points, and its corresponding Voronoi cell R_k consists of every point in the Euclidean plane for which p_k is the nearest site: the distance to p_k is less than or equal to the minimum distance to any other site p_j. For one other site p_j, the points that are closer to p_k than to p_j, or equally distant, form a closed half-space, whose boundary is the perpendicular bisector of line segment p_k p_j. Cell R_k is the intersection of all of these half-spaces, and hence it is a convex polygon. When two cells in the Voronoi diagram share a boundary, it is a line segment, ray, or line, consisting of all the points in the plane that are equidistant to their two nearest sites. The vertices of the diagram, where three or more of these boundaries meet, are the points that have three or more equally distant nearest sites. Formal definition Let X be a metric space with distance function d.
Let K be a set of indices and let (P_k)_{k∈K} be a tuple (indexed collection) of nonempty subsets (the sites) in the space X. The Voronoi cell, or Voronoi region, R_k, associated with the site P_k is the set of all points in X whose distance to P_k is not greater than their distance to the other sites P_j, where j is any index different from k. In other words, if d(x, A) = inf{d(x, a) : a ∈ A} denotes the distance between the point x and the subset A, then R_k = {x ∈ X : d(x, P_k) ≤ d(x, P_j) for all j ≠ k}. The Voronoi diagram is simply the tuple of cells (R_k)_{k∈K}. In principle, some of the sites can intersect and even coincide (an application is described below for sites representing shops), but usually they are assumed to be disjoint. In addition, infinitely many sites are allowed in the definition (this setting has applications in geometry of numbers and crystallography), but again, in many cases only finitely many sites are considered. In the particular case where the space is a finite-dimensional Euclidean space, each site is a point, there are finitely many points, and all of them are different, the Voronoi cells are convex polytopes
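For point sites in the plane, the definition can be applied directly by brute force: label each query point with the index of its nearest site (a sketch; the sample sites are invented):

```python
from math import dist

def voronoi_cell_index(x, sites):
    """Index k of the site nearest to point x (ties go to the lowest index)."""
    return min(range(len(sites)), key=lambda k: dist(x, sites[k]))

def rasterize(sites, width, height):
    """Label each integer grid point (i, j) with the index of its Voronoi cell."""
    return [[voronoi_cell_index((i, j), sites) for i in range(width)]
            for j in range(height)]

sites = [(1.0, 1.0), (8.0, 2.0), (4.0, 7.0)]
grid = rasterize(sites, 10, 10)  # grid[j][i] = cell containing point (i, j)
```

This O(width × height × n) scan is only an illustration of the definition; dedicated algorithms (such as Fortune's sweepline) compute the exact cell boundaries far more efficiently.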
https://en.wikipedia.org/wiki/Asker
Historical populations (source: Statistics Norway; the 2021 and 2031 figures are projections): 1951: 13,625; 1961: 17,755; 1971: 31,702; 1981: 35,977; 1991: 41,903; 2001: 49,661; 2011: 55,284; 2014: 59,037; 2021: 63,381; 2031: 69,296. Asker (), also called Asker proper (Askerbygda or gamle Asker in Norwegian), is a district and former municipality in Akershus, Norway. From 2020 it is part of the larger administrative municipality Asker, Viken (also known as Greater Asker) in Viken county, together with the traditional Buskerud districts Røyken and Hurum; Asker proper constitutes the northern fourth and is part of the Greater Oslo Region. The administrative centre was the town of Asker, which remains so for the new larger municipality. Asker was established as a parish in the Middle Ages and as a municipality on 1 January 1838. History Since the Middle Ages, the Asker parish consisted of the later municipalities Asker and Bærum. In the 19th century Bærum became the Vestre Bærum and Østre Bærum parish, and Asker and Bærum were also established as separate municipalities. In 2020, Asker municipality merged with Røyken and Hurum to form Asker, Viken, a larger administrative region than traditional/geographical Asker. Name The municipality (originally the parish) is named after the old Asker farm, since the first church was built here. The name (Old Norse: Askar) is the plural form of ask which means "ash tree". Coat-of-arms The coat-of-arms is from modern times. They were granted on 7 October 1975. The arms show a green background with three silver-colored tree trunks () and are thus canting arms. The trees are ashes, which were cropped every year to provide food for the animals. The trees thus developed after many years a very typical shape, which was characteristic for the area.
Place of the Millennium In 1998, just before the millennium, the 'Askerbøringer' (the inhabitants of Asker) elected the beautiful area of Semsvannet, including the mountain ridge Skaugumsåsen, to be their Place of the Millennium. Geography Its main parts are Asker, Gullhella, Vollen, Vettre, Blakstad, Bleiker, Borgen, Drengsrud, Dikemark, Vardåsen, Engelsrud, Holmen, Høn, Hvalstad, Billingstad, Nesøya, Nesbru, and Heggedal. Asker is a coastal place with many beaches, but also contains hills and woods. The district is known for many important businesses. It is also known for gardening. The Skaugum estate, where Crown Prince Haakon of Norway lives with his family, is situated here. The first IKEA store outside of Sweden opened at Slependen in Asker in 1963. There are many hiking/sightseeing spots around Asker, such as Semsvannet lake and Drengsrud cultural pa
https://en.wikipedia.org/wiki/Hari%20Seldon
Hari Seldon is a fictional character in Isaac Asimov's Foundation series. In his capacity as mathematics professor at Streeling University on the planet Trantor, Seldon develops psychohistory, an algorithmic science that allows him to predict the future in probabilistic terms. On the basis of his psychohistory he is able to predict the eventual fall of the Galactic Empire and to develop a means to shorten the millennia of chaos to follow. The significance of his discoveries lies behind his nickname "Raven" Seldon. In the first five books of the Foundation series, Hari Seldon made only one in-the-flesh appearance, in the first part of the first book (Foundation), although he did appear at other times in pre-recorded messages to reveal a "Seldon Crisis". After writing five books in chronological order, Asimov retroactively added two books to expand on the genesis of psychohistory. The two prequels—Prelude to Foundation and Forward the Foundation—describe Seldon's life in considerable detail. He is also the central character of the Second Foundation Trilogy written after Asimov's death (Foundation's Fear by Gregory Benford, Foundation and Chaos by Greg Bear, and Foundation's Triumph by David Brin), which are set after Asimov's two prequels. Fictional biography Galactic Empire First Minister and psychohistorian Hari Seldon was born in the 10th month of the 11,988th year of the Galactic Era (GE) (-79 Foundation Era (FE)) and died 12,069 GE (1 FE). He was born on the planet Helicon in the Arcturus sector, where his father worked as a tobacco grower in a hydroponics plant. He showed incredible mathematical abilities at a very early age. He also learned martial arts on Helicon that later helped him on Trantor, the principal art being Heliconian Twisting (a form seemingly equal parts Jiu Jitsu, Krav Maga, and Submission Wrestling). Helicon is said to be "less notable for its mathematics, and more for its martial arts" (Prelude to Foundation). Seldon is awarded a Ph.D. 
in mathematics for his work on turbulence at the University of Helicon. There he becomes an assistant professor specializing in the mathematical analysis of social structures. Seldon is the subject of a biography by Gaal Dornick. Seldon is Emperor Cleon I's second and last First Minister, the first being Eto Demerzel/R. Daneel Olivaw. He is deposed as First Minister after Cleon I's assassination. Foundation Using psychohistory, Seldon mathematically determines what he calls The Seldon Plan—a plan to determine the right time and place to set up a new society, one that would replace the collapsing Galactic Empire by sheer force of social pressure, but over only a thousand-year time span, rather than the ten-to-thirty-thousand-year time span that would normally have been required, and thus reduce the human suffering from living in a time of barbarism. The Foundation is placed on Terminus, a remote and resource-poor planet entirely populated by scientists and their families. The planet—or so
https://en.wikipedia.org/wiki/General%20topology
In mathematics, general topology (or point set topology) is the branch of topology that deals with the basic set-theoretic definitions and constructions used in topology. It is the foundation of most other branches of topology, including differential topology, geometric topology, and algebraic topology. The fundamental concepts in point-set topology are continuity, compactness, and connectedness: Continuous functions, intuitively, take nearby points to nearby points. Compact sets are those that can be covered by finitely many sets of arbitrarily small size. Connected sets are sets that cannot be divided into two pieces that are far apart. The terms 'nearby', 'arbitrarily small', and 'far apart' can all be made precise by using the concept of open sets. If we change the definition of 'open set', we change what continuous functions, compact sets, and connected sets are. Each choice of definition for 'open set' is called a topology. A set with a topology is called a topological space. Metric spaces are an important class of topological spaces where a real, non-negative distance, also called a metric, can be defined on pairs of points in the set. Having a metric simplifies many proofs, and many of the most common topological spaces are metric spaces. History General topology grew out of a number of areas, most importantly the following: the detailed study of subsets of the real line (once known as the topology of point sets; this usage is now obsolete) the introduction of the manifold concept the study of metric spaces, especially normed linear spaces, in the early days of functional analysis. General topology assumed its present form around 1940. It captures, one might say, almost everything in the intuition of continuity, in a technically adequate form that can be applied in any area of mathematics. A topology on a set Let X be a set and let τ be a family of subsets of X. 
Then τ is called a topology on X if: (1) both the empty set and X are elements of τ; (2) any union of elements of τ is an element of τ; and (3) any intersection of finitely many elements of τ is an element of τ. If τ is a topology on X, then the pair (X, τ) is called a topological space. The notation Xτ may be used to denote a set X endowed with the particular topology τ. The members of τ are called open sets in X. A subset of X is said to be closed if its complement is in τ (i.e., its complement is open). A subset of X may be open, closed, both (clopen set), or neither. The empty set and X itself are always both closed and open. Basis for a topology A base (or basis) B for a topological space X with topology T is a collection of open sets in T such that every open set in T can be written as a union of elements of B. We say that the base generates the topology T. Bases are useful because many properties of topologies can be reduced to statements about a base that generates that topology—and because many topologies are most easily defined in terms of a base that generates them
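For a finite set the three axioms can be checked directly by enumeration; a small sketch (the function name is my own, and finiteness lets closure under arbitrary unions reduce to unions of every subfamily):

```python
from itertools import combinations

def is_topology(X, tau):
    """Check the topology axioms for a family tau of subsets of a finite set X."""
    X = frozenset(X)
    tau = {frozenset(s) for s in tau}
    if frozenset() not in tau or X not in tau:
        return False                      # axiom (1): empty set and X in tau
    for r in range(2, len(tau) + 1):
        for fam in combinations(tau, r):
            if frozenset().union(*fam) not in tau:
                return False              # axiom (2): some union escapes tau
            if X.intersection(*fam) not in tau:
                return False              # axiom (3): some finite intersection escapes
    return True

X = {1, 2, 3}
good = [set(), {1}, {1, 2}, X]   # a topology on X
bad = [set(), {1}, {2}, X]       # not one: the union {1} ∪ {2} = {1, 2} is missing
```

The second family fails exactly because a union of two open sets is not in the family, illustrating why the union axiom is not automatic.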
https://en.wikipedia.org/wiki/Ernst%20Zermelo
Ernst Friedrich Ferdinand Zermelo (27 July 1871 – 21 May 1953) was a German logician and mathematician, whose work has major implications for the foundations of mathematics. He is known for his role in developing Zermelo–Fraenkel axiomatic set theory and his proof of the well-ordering theorem. Furthermore, his 1929 work on ranking chess players is the first description of a model for pairwise comparison that continues to have a profound impact on various applied fields utilizing this method. Life Ernst Zermelo graduated from Berlin's Luisenstädtisches Gymnasium in 1889. He then studied mathematics, physics and philosophy at the University of Berlin, the University of Halle, and the University of Freiburg. He finished his doctorate in 1894 at the University of Berlin, awarded for a dissertation on the calculus of variations (Untersuchungen zur Variationsrechnung). Zermelo remained at the University of Berlin, where he was appointed assistant to Planck, under whose guidance he began to study hydrodynamics. In 1897, Zermelo went to the University of Göttingen, at that time the leading centre for mathematical research in the world, where he completed his habilitation thesis in 1899. In 1910, Zermelo left Göttingen upon being appointed to the chair of mathematics at Zurich University, which he resigned in 1916. He was appointed to an honorary chair at the University of Freiburg in 1926, which he resigned in 1935 because he disapproved of Adolf Hitler's regime. At the end of World War II and at his request, Zermelo was reinstated to his honorary position in Freiburg. Research in set theory In 1900, in the Paris conference of the International Congress of Mathematicians, David Hilbert challenged the mathematical community with his famous Hilbert's problems, a list of 23 unsolved fundamental questions which mathematicians should attack during the coming century. 
The first of these, a problem of set theory, was the continuum hypothesis introduced by Cantor in 1878, and in the course of its statement Hilbert mentioned also the need to prove the well-ordering theorem. Zermelo began to work on the problems of set theory under Hilbert's influence and in 1902 published his first work concerning the addition of transfinite cardinals. By that time he had also discovered the so-called Russell paradox. In 1904, he succeeded in taking the first step suggested by Hilbert towards the continuum hypothesis when he proved the well-ordering theorem (every set can be well ordered). This result brought fame to Zermelo, who was appointed Professor in Göttingen, in 1905. His proof of the well-ordering theorem, based on the powerset axiom and the axiom of choice, was not accepted by all mathematicians, mostly because the axiom of choice was a paradigm of non-constructive mathematics. In 1908, Zermelo succeeded in producing an improved proof making use of Dedekind's notion of the "chain" of a set, which became more widely accepted; this was mainly because that sa
https://en.wikipedia.org/wiki/Gotthilf%20Hagen
Gotthilf Heinrich Ludwig Hagen (3 March 1797 – 3 February 1884) was a German civil engineer who made important contributions to fluid dynamics, hydraulic engineering and probability theory. Life and work Hagen was born in Königsberg, East Prussia (Kaliningrad, Russia) to Friedrich Ludwig Hagen and Helene Charlotte Albertine Hagen. His father was a government official and his mother was the daughter of Christian Reccard, professor of Theology at University of Königsberg, consistorial councillor and astronomer. He showed promise in mathematics in high school and he went on to study at the University of Königsberg where his uncle, Karl Gottfried Hagen was professor of physics and chemistry. In 1816 Hagen began studying mathematics and astronomy with Friedrich Wilhelm Bessel, but in 1818 he switched to study civil engineering as he was more attracted to applied than theoretical science. Nevertheless, he remained in close contact with Bessel throughout his life. In 1819 he undertook the examination for surveyors (Landvermesserprüfung) and after graduating took a job as a junior engineer (Baukondukteur) in the civil service. His main responsibility was for hydraulic engineering and water management. In 1822 he took the state examination in Berlin to qualify as a master builder (Baumeister). He became known through his publications about various hydraulic constructions which he had visited during travels in Europe. In 1824 he was appointed director of building (Baukondukteur) by the mercantile community in Königsberg and in 1825 he became deputy governmental building officer (stellvertretender Regierungs- und Baurat) for Danzig (Gdańsk). A year later he transferred to become harbor building inspector (Hafenbauinspektor) in Pillau, where he was responsible for the harbor and dyke construction. Methods he developed are still relevant to current harbor management in the region. 
On 27 April 1827 he married his niece Auguste Hagen (1806–1884), with whom he had two daughters and five sons. His son Ludwig Hagen also became a notable civil engineer. In 1830 Hagen joined the supreme building authority (Oberbaudeputation) in Berlin and became chief government building surveyor (Oberbaurat) in 1831. From 1834 to 1849 he taught as a professor of hydraulic engineering at the Bauakademie and the United Artillery and Engineering School in Berlin. Hagen was unusual in stressing the mathematical and theoretical aspects of hydraulic engineering. In particular he was interested in using probability calculus for land surveying and this interest led to his contributions to probability theory. In a letter to Bessel dated 2 August 1836 Hagen presented his hypothesis of elementary errors and deduced a Gaussian distribution for observational errors. This idea was further developed in a book published in 1837 Grundzüge der Wahrscheinlichkeitsrechnung mit besonderer Anwendung auf die Operationen der Feldmeßkunst (“Foundations of Probability Calculus with Special Application
https://en.wikipedia.org/wiki/Dirichlet%20convolution
In mathematics, the Dirichlet convolution is a binary operation defined for arithmetic functions; it is important in number theory. It was developed by Peter Gustav Lejeune Dirichlet. Definition If f and g are two arithmetic functions from the positive integers to the complex numbers, the Dirichlet convolution f * g is a new arithmetic function defined by: (f * g)(n) = Σ_{d | n} f(d) g(n/d), where the sum extends over all positive divisors d of n, or equivalently over all distinct pairs (a, b) of positive integers whose product is n. This product occurs naturally in the study of Dirichlet series such as the Riemann zeta function. It describes the multiplication of two Dirichlet series in terms of their coefficients: (Σ f(n)/n^s)(Σ g(n)/n^s) = Σ (f * g)(n)/n^s. Properties The set of arithmetic functions forms a commutative ring, the Dirichlet ring, under pointwise addition, where f + g is defined by (f + g)(n) = f(n) + g(n), and Dirichlet convolution. The multiplicative identity is the unit function ε defined by ε(n) = 1 if n = 1 and ε(n) = 0 if n > 1. The units (invertible elements) of this ring are the arithmetic functions f with f(1) ≠ 0. Specifically, Dirichlet convolution is associative, (f * g) * h = f * (g * h), distributive over addition, f * (g + h) = f * g + f * h, commutative, f * g = g * f, and has an identity element, f * ε = ε * f = f. Furthermore, for each f having f(1) ≠ 0, there exists an arithmetic function f⁻¹ with f * f⁻¹ = ε, called the Dirichlet inverse of f. The Dirichlet convolution of two multiplicative functions is again multiplicative, and every not constantly zero multiplicative function has a Dirichlet inverse which is also multiplicative. In other words, multiplicative functions form a subgroup of the group of invertible elements of the Dirichlet ring. Beware however that the sum of two multiplicative functions is not multiplicative (since (f + g)(1) = f(1) + g(1) = 2 ≠ 1), so the subset of multiplicative functions is not a subring of the Dirichlet ring. The article on multiplicative functions lists several convolution relations among important multiplicative functions. Another operation on arithmetic functions is pointwise multiplication: fg is defined by (fg)(n) = f(n) g(n). Given a completely multiplicative function h, pointwise multiplication by h distributes over Dirichlet convolution: (hf) * (hg) = h(f * g). 
The convolution of two completely multiplicative functions is multiplicative, but not necessarily completely multiplicative. Examples In these formulas, we use the following arithmetical functions: ε is the multiplicative identity: ε(1) = 1, otherwise 0. 1 is the constant function with value 1: 1(n) = 1 for all n. Keep in mind that 1 is not the identity. (Some authors denote this as ζ because the associated Dirichlet series is the Riemann zeta function.) 1_C, for a subset C of the positive integers, is a set indicator function: 1_C(n) = 1 iff n ∈ C, otherwise 0. Id is the identity function with value n: Id(n) = n. Id_k is the kth power function: Id_k(n) = n^k. The following relations hold: 1 * μ = ε, the Dirichlet inverse of the constant function 1 is the Möbius function. Hence: g = f * 1 if and only if f = g * μ, the Möbius inversion formula. σ_k = Id_k * 1, the kth-power-of-divisors sum function σ_k. σ = Id * 1, the sum-of-divisors function. d = 1 * 1, the number-of-divisors function. Id_k = σ_k * μ, Id = σ * μ, 1 = d * μ, by Möbius inversion of the formulas for σ_k, σ, and d. φ * 1 = Id, proved under Euler's totient function. φ = Id * μ, by Möbius inversion. , from
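The relations above are easy to experiment with numerically; a direct sketch of the convolution together with μ and ε (the helper names are my own):

```python
def dirichlet(f, g):
    """Dirichlet convolution: (f * g)(n) = sum over divisors d of n of f(d) g(n/d)."""
    return lambda n: sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

def mobius(n):
    """Moebius function via trial division: 0 if n has a squared prime factor,
    otherwise (-1) raised to the number of distinct prime factors."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

eps = lambda n: 1 if n == 1 else 0   # multiplicative identity ε
one = lambda n: 1                    # constant function 1 (not the identity!)

# 1 * μ = ε, and d = 1 * 1 counts divisors, e.g. d(12) = 6.
mu_inv_check = all(dirichlet(one, mobius)(n) == eps(n) for n in range(1, 50))
```

The trial-division approach is fine for small n; a sieve would be the usual choice for computing μ over a range.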
https://en.wikipedia.org/wiki/SPSS
SPSS Statistics is a statistical software suite developed by IBM for data management, advanced analytics, multivariate analysis, business intelligence, and criminal investigation. Long produced by SPSS Inc., it was acquired by IBM in 2009. Versions of the software released since 2015 have the brand name IBM SPSS Statistics. The software name originally stood for Statistical Package for the Social Sciences (SPSS), reflecting the original market, then later changed to Statistical Product and Service Solutions. Overview SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of "sociology's most influential books" for allowing ordinary researchers to do their own statistical analysis. In addition to statistical analysis, data management (case selection, file reshaping and creating derived data) and data documentation (a metadata dictionary is stored in the datafile) are features of the base software. The many features of SPSS Statistics are accessible via pull-down menus or can be programmed with a proprietary 4GL command syntax language. Command syntax programming has the benefits of reproducible output, simplifying repetitive tasks, and handling complex data manipulations and analyses. Additionally, some complex applications can only be programmed in syntax and are not accessible through the menu structure. The pull-down menu interface also generates command syntax: this can be displayed in the output, although the default settings have to be changed to make the syntax visible to the user. They can also be pasted into a syntax file using the "paste" button present in each menu. Programs can be run interactively or unattended, using the supplied Production Job Facility. 
A "macro" language can be used to write command language subroutines. A Python programmability extension can access the information in the data dictionary and data and dynamically build command syntax programs. This extension, introduced in SPSS 14, replaced the less functional SAX Basic "scripts" for most purposes, although SAX Basic remains available. In addition, the Python extension allows SPSS to run any of the statistics in the free software package R. From version 14 onwards, SPSS can be driven externally by a Python or a VB.NET program using supplied "plug-ins". (From version 20 onwards, these two scripting facilities, as well as many scripts, are included on the installation media and are normally installed by default.) SPSS Statistics places constraints on internal file structure, data types, data processing, and matching files, which together considerably simplify programming. SPSS datasets have a two-dimensional table structure, where the rows typically represent cases (such as individuals or households) and the col
https://en.wikipedia.org/wiki/Snub%20cube
In geometry, the snub cube, or snub cuboctahedron, is an Archimedean solid with 38 faces: 6 squares and 32 equilateral triangles. It has 60 edges and 24 vertices. It is a chiral polyhedron; that is, it has two distinct forms, which are mirror images (or "enantiomorphs") of each other. The union of both forms is a compound of two snub cubes, and the convex hull of both sets of vertices is a truncated cuboctahedron. Kepler first named it in Latin as cubus simus in 1619 in his Harmonices Mundi. H. S. M. Coxeter, noting it could be derived equally from the octahedron or the cube, called it snub cuboctahedron, with a vertical extended Schläfli symbol , and representing an alternation of a truncated cuboctahedron, which has Schläfli symbol . Dimensions For a snub cube with edge length a, its surface area is (6 + 8√3)a² (6 squares and 32 equilateral triangles); its volume is expressed in terms of the tribonacci constant t, the real solution of t³ = t² + t + 1. If the original snub cube has edge length 1, its dual pentagonal icositetrahedron has side lengths . Cartesian coordinates Cartesian coordinates for the vertices of a snub cube are all the even permutations of (±1, ±1/t, ±t) with an even number of plus signs, along with all the odd permutations with an odd number of plus signs, where t ≈ 1.83929 is the tribonacci constant. Taking the even permutations with an odd number of plus signs, and the odd permutations with an even number of plus signs, gives a different snub cube, the mirror image. Taking all of them together yields the compound of two snub cubes. This snub cube has edges of length α, a number which satisfies the equation and can be written as To get a snub cube with unit edge length, divide all the coordinates above by the value α given above. Orthogonal projections The snub cube has two special orthogonal projections, centered on two types of faces, triangles and squares, corresponding to the A2 and B2 Coxeter planes. Spherical tiling The snub cube can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. 
This projection is conformal, preserving angles but not areas or lengths. Great circle arcs (geodesics) on the sphere are projected as circular arcs on the plane. Geometric relations The snub cube can be generated by taking the six faces of the cube, pulling them outward so they no longer touch, then giving them each a small rotation on their centers (all clockwise or all counter-clockwise) until the spaces between can be filled with equilateral triangles. The snub cube can also be derived from the truncated cuboctahedron by the process of alternation. 24 vertices of the truncated cuboctahedron form a polyhedron topologically equivalent to the snub cube; the other 24 form its mirror-image. The resulting polyhedron is vertex-transitive but not uniform. An "improved" snub cube, with a slightly smaller square face and slightly larger triangular faces compared to Archimedes' uniform snub cube, is useful as a spherical design. Related polyhedra and tilings The snub cube is one of a fam
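The permutation-parity rule for the Cartesian coordinates can be checked numerically; a sketch assuming the standard tribonacci-constant construction described above (helper names are my own):

```python
import itertools
import math

# Tribonacci constant: the real root of t^3 = t^2 + t + 1 (about 1.83929),
# found here by Newton's method.
t = 2.0
for _ in range(50):
    t -= (t**3 - t**2 - t - 1) / (3*t**2 - 2*t - 1)

def parity(p):
    """0 for an even permutation of (0, 1, 2), 1 for an odd one (inversion count)."""
    return sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3)) % 2

base = (1.0, 1.0 / t, t)
verts = {
    tuple(round(s * base[p[i]], 9) for i, s in enumerate(signs))
    for p in itertools.permutations(range(3))
    for signs in itertools.product((1.0, -1.0), repeat=3)
    # even permutations with an even number of plus signs,
    # odd permutations with an odd number of plus signs:
    if parity(p) == sum(s > 0 for s in signs) % 2
}

# All vertices lie on a common circumsphere.
radii = {round(math.hypot(*v), 6) for v in verts}
```

Flipping the parity condition produces the mirror-image snub cube, and dropping it entirely yields the 48 vertices of the compound of both forms.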
https://en.wikipedia.org/wiki/Stellation
In geometry, stellation is the process of extending a polygon in two dimensions, polyhedron in three dimensions, or, in general, a polytope in n dimensions to form a new figure. Starting with an original figure, the process extends specific elements such as its edges or face planes, usually in a symmetrical way, until they meet each other again to form the closed boundary of a new figure. The new figure is a stellation of the original. The word stellation comes from the Latin stellātus, "starred", which in turn comes from Latin stella, "star". Stellation is the reciprocal or dual process to faceting. Kepler's definition In 1619 Kepler defined stellation for polygons and polyhedra as the process of extending edges or faces until they meet to form a new polygon or polyhedron. He stellated the regular dodecahedron to obtain two regular star polyhedra, the small stellated dodecahedron and great stellated dodecahedron. He also stellated the regular octahedron to obtain the stella octangula, a regular compound of two tetrahedra. Stellating polygons Stellating a regular polygon symmetrically creates a regular star polygon or polygonal compound. These polygons are characterised by the number of times m that the polygonal boundary winds around the centre of the figure. Like all regular polygons, their vertices lie on a circle. m also corresponds to the number of vertices around the circle to get from one end of a given edge to the other, starting at 1. A regular star polygon is represented by its Schläfli symbol {n/m}, where n is the number of vertices, m is the step used in sequencing the edges around it, and m and n are coprime (have no common factor). The case m = 1 gives the convex polygon {n}. m also must be less than half of n; otherwise the lines will either be parallel or diverge, preventing the figure from ever closing. If n and m do have a common factor, then the figure is a regular compound. 
For example {6/2} is the regular compound of two triangles {3} or hexagram, while {10/4} is a compound of two pentagrams {5/2}. Some authors use the Schläfli symbol for such regular compounds. Others regard the symbol as indicating a single path which is wound m times around vertex points, such that one edge is superimposed upon another and each vertex point is visited m times. In this case a modified symbol may be used for the compound, for example 2{3} for the hexagram and 2{5/2} for the regular compound of two pentagrams. A regular n-gon has (n − 4)/2 stellations if n is even (assuming compounds of multiple degenerate digons are not considered), and (n − 3)/2 stellations if n is odd. Like the heptagon, the octagon also has two octagrammic stellations, one, {8/3}, being a star polygon, and the other, {8/2}, being the compound of two squares. Stellating polyhedra A polyhedron is stellated by extending the edges or face planes of a polyhedron until they meet again to form a new polyhedron or compound. The interior of the new polyhedron is divided by the faces i
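The counting rule for polygon stellations is easy to verify by enumeration; a small sketch (the function name is my own):

```python
from math import gcd

def polygon_stellations(n):
    """Symbols {n/m} with 2 <= m < n/2: each is a star polygon when
    gcd(n, m) = 1 and a regular compound otherwise."""
    return [
        (m, "star" if gcd(n, m) == 1 else "compound")
        for m in range(2, (n - 1) // 2 + 1)
    ]

# The octagon: {8/2} is the compound of two squares, {8/3} a star polygon.
octagon = polygon_stellations(8)
```

The upper bound m < n/2 reflects the closure condition in the text, and the gcd test separates single-path star polygons from compounds.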
https://en.wikipedia.org/wiki/Cubic%20equation
In algebra, a cubic equation in one variable is an equation of the form ax³ + bx² + cx + d = 0 in which a is nonzero. The solutions of this equation are called roots of the cubic function defined by the left-hand side of the equation. If all of the coefficients a, b, c, and d of the cubic equation are real numbers, then it has at least one real root (this is true for all odd-degree polynomial functions). All of the roots of the cubic equation can be found by the following means: algebraically: more precisely, they can be expressed by a cubic formula involving the four coefficients, the four basic arithmetic operations, square roots and cube roots (this is also true of quadratic (second-degree) and quartic (fourth-degree) equations, but not for higher-degree equations, by the Abel–Ruffini theorem); trigonometrically; or numerically: approximations of the roots can be found using root-finding algorithms such as Newton's method. The coefficients do not need to be real numbers. Much of what is covered below is valid for coefficients in any field with characteristic other than 2 and 3. The solutions of the cubic equation do not necessarily belong to the same field as the coefficients. For example, some cubic equations with rational coefficients have roots that are irrational (and even non-real) complex numbers. History Cubic equations were known to the ancient Babylonians, Greeks, Chinese, Indians, and Egyptians. Babylonian (20th to 16th centuries BC) cuneiform tablets have been found with tables for calculating cubes and cube roots. The Babylonians could have used the tables to solve cubic equations, but no evidence exists to confirm that they did. The problem of doubling the cube involves the simplest and oldest studied cubic equation, and one for which the ancient Egyptians did not believe a solution existed. 
In the 5th century BC, Hippocrates reduced this problem to that of finding two mean proportionals between one line and another of twice its length, but could not solve this with a compass and straightedge construction, a task which is now known to be impossible. Methods for solving cubic equations appear in The Nine Chapters on the Mathematical Art, a Chinese mathematical text compiled around the 2nd century BC and commented on by Liu Hui in the 3rd century. In the 3rd century AD, the Greek mathematician Diophantus found integer or rational solutions for some bivariate cubic equations (Diophantine equations). Hippocrates, Menaechmus and Archimedes are believed to have come close to solving the problem of doubling the cube using intersecting conic sections, though historians such as Reviel Netz dispute whether the Greeks were thinking about cubic equations or just problems that can lead to cubic equations. Some others, like T. L. Heath, who translated all of Archimedes' works, disagree, putting forward evidence that Archimedes really solved cubic equations using intersections of two conics, but also discussed the conditions under which the number of roots is 0, 1 or 2. In the 7th
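The algebraic solvability mentioned above can be sketched with Cardano's method on the depressed cubic; this is a numerical illustration, not a careful production solver, and the function name is my own.

```python
import cmath
import math

def cubic_roots(a, b, c, d):
    """The three roots (with multiplicity) of a x^3 + b x^2 + c x + d = 0, a != 0,
    via the substitution x = t - b/(3a) and Cardano's formula for t^3 + p t + q = 0."""
    p = (3*a*c - b*b) / (3*a*a)
    q = (2*b**3 - 9*a*b*c + 27*a*a*d) / (27*a**3)
    shift = -b / (3*a)
    if abs(p) < 1e-12 and abs(q) < 1e-12:
        return [shift] * 3                      # triple root
    s = cmath.sqrt(q*q/4 + p**3/27)
    u0 = (-q/2 + s) ** (1/3)                    # one cube root of -q/2 + sqrt(...)
    if abs(u0) < 1e-9:                          # avoid dividing by ~0 below
        u0 = (-q/2 - s) ** (1/3)
    omega = cmath.exp(2j * math.pi / 3)         # primitive cube root of unity
    return [u0*omega**k - p/(3*u0*omega**k) + shift for k in range(3)]

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
roots = sorted(cubic_roots(1, -6, 11, -6), key=lambda z: z.real)
```

Working in complex arithmetic sidesteps the "casus irreducibilis": even when all three roots are real, the intermediate cube roots are non-real, which is exactly why the trigonometric form of the solution exists.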
https://en.wikipedia.org/wiki/Bruno%20de%20Finetti
Bruno de Finetti (13 June 1906 – 20 July 1985) was an Italian probabilist statistician and actuary, noted for the "operational subjective" conception of probability. The classic exposition of his distinctive theory is the 1937 "La prévision: ses lois logiques, ses sources subjectives", which discussed probability founded on the coherence of betting odds and the consequences of exchangeability. Life De Finetti was born in Innsbruck, Austria, and studied mathematics at Politecnico di Milano. He graduated in 1927, writing his thesis under the supervision of Giulio Vivanti. After graduation, he worked as an actuary and a statistician at the National Institute of Statistics in Rome and, from 1931, the Trieste insurance company Assicurazioni Generali. In 1936 he won a competition for the Chair of Financial Mathematics and Statistics, but was not nominated due to a fascist law barring access to unmarried candidates; he was appointed as ordinary professor at the University of Trieste only in 1950. He published extensively (17 papers in 1930 alone, according to Lindley) and acquired an international reputation in the small world of probability mathematicians. He taught mathematical analysis in Padua and then won a chair in Financial Mathematics at Trieste University (1939). In 1954 he moved to the Sapienza University of Rome, first to another chair in Financial Mathematics and then, from 1961 to 1976, one in the Calculus of Probabilities. De Finetti developed his ideas on subjective probability in the 1920s independently of Frank P. Ramsey. Still, according to the preface of his "Theory of Probability", he drew on ideas of Harold Jeffreys, I. J. Good and B. O. Koopman. He also reasoned about the connection between economics and probability, and held that the guiding principle should be Pareto optimality, further informed by "fairness" criteria. De Finetti held different social and political beliefs through his life: following fascism during his youth, then moving to Christian socialism and finally adhering to the Radical Party. 
De Finetti only became known in the Anglo-American statistical world in the 1950s when L. J. Savage, who had independently adopted subjectivism, drew him into it; another great champion was Dennis Lindley. De Finetti died in Rome in 1985. Work and impact De Finetti emphasized a predictive inference approach to statistics; he proposed a thought experiment along the following lines (described in greater detail at coherence): You must set the price of a promise to pay $1 if there was life on Mars 1 billion years ago, and $0 if there was not, and tomorrow the answer will be revealed. You know that your opponent will be able to choose either to buy such a promise from you at the price you have set, or require you to buy such a promise from your opponent, still at the same price. In other words: you set the odds, but your opponent decides which side of the bet will be yours. The price you set is the "operational subjective probability" that you assign to the proposition on which you are betting. This price
https://en.wikipedia.org/wiki/De%20Finetti%27s%20theorem
In probability theory, de Finetti's theorem states that exchangeable observations are conditionally independent relative to some latent variable. An epistemic probability distribution could then be assigned to this variable. It is named in honor of Bruno de Finetti. For the special case of an exchangeable sequence of Bernoulli random variables it states that such a sequence is a "mixture" of sequences of independent and identically distributed (i.i.d.) Bernoulli random variables. A sequence of random variables is called exchangeable if the joint distribution of the sequence is unchanged by any permutation of the indices. While the variables of the exchangeable sequence are not themselves independent, only exchangeable, there is an underlying family of i.i.d. random variables. That is, there are underlying, generally unobservable, quantities that are i.i.d. – exchangeable sequences are mixtures of i.i.d. sequences. Background A Bayesian statistician often seeks the conditional probability distribution of a random quantity given the data. The concept of exchangeability was introduced by de Finetti. De Finetti's theorem explains a mathematical relationship between independence and exchangeability. An infinite sequence of random variables X1, X2, X3, ... is said to be exchangeable if for any natural number n, any finite sequence of distinct indices i1, ..., in, and any permutation of the sequence π: {i1, ..., in} → {i1, ..., in}, the sequences Xi1, ..., Xin and Xπ(i1), ..., Xπ(in) both have the same joint probability distribution. If an identically distributed sequence is independent, then the sequence is exchangeable; however, the converse is false—there exist exchangeable random variables that are not statistically independent, for example the Pólya urn model. Statement of the theorem A random variable X has a Bernoulli distribution if Pr(X = 1) = p and Pr(X = 0) = 1 − p for some p ∈ (0, 1). 
De Finetti's theorem states that the probability distribution of any infinite exchangeable sequence of Bernoulli random variables is a "mixture" of the probability distributions of independent and identically distributed sequences of Bernoulli random variables. "Mixture", in this sense, means a weighted average, but this need not mean a finite or countably infinite (i.e., discrete) weighted average: it can be an integral rather than a sum. More precisely, suppose X1, X2, X3, ... is an infinite exchangeable sequence of Bernoulli-distributed random variables. Then there is some probability distribution m on the interval [0, 1] and some random variable Y such that: the probability distribution of Y is m; the conditional probability distribution of the whole sequence X1, X2, X3, ... given the value of Y is described by saying that X1, X2, X3, ... are conditionally independent given Y; and for any i ∈ {1, 2, 3, ...}, the conditional probability that Xi = 1, given the value of Y, is Y. Another way of stating the theorem Suppose X1, X2, X3, ... is an infinite exchangeable sequence of Bernoulli random variables. Then X1, X2, X3, ... are conditionally in
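The mixture structure can be checked by simulation. The sketch below takes m to be the uniform distribution on [0, 1] (an arbitrary choice, for illustration only) and verifies one consequence of exchangeability: the outcomes (1, 0) and (0, 1) are equally likely for a pair.

```python
import random

def exchangeable_sequence(n, rng):
    """Draw Y ~ Uniform[0,1] (here m is the uniform distribution), then
    generate X1..Xn i.i.d. Bernoulli(Y) given Y -- the mixture de Finetti
    describes."""
    y = rng.random()
    return [1 if rng.random() < y else 0 for _ in range(n)]

rng = random.Random(0)
trials = 200_000
count_10 = count_01 = 0
for _ in range(trials):
    x = exchangeable_sequence(2, rng)
    if x == [1, 0]:
        count_10 += 1
    elif x == [0, 1]:
        count_01 += 1

# Exchangeability: P(X1=1, X2=0) = P(X1=0, X2=1); with Y uniform, each
# equals the integral of y(1-y) over [0,1], i.e. 1/6.
assert abs(count_10 / trials - count_01 / trials) < 0.01
```

Note that X1 and X2 are not independent here (observing X1 = 1 raises the posterior on large Y), which is exactly the exchangeable-but-not-independent situation the theorem addresses.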
https://en.wikipedia.org/wiki/Hypergeometric%20distribution
In probability theory and statistics, the hypergeometric distribution is a discrete probability distribution that describes the probability of k successes (random draws for which the object drawn has a specified feature) in n draws, without replacement, from a finite population of size N that contains exactly K objects with that feature, wherein each draw is either a success or a failure. In contrast, the binomial distribution describes the probability of k successes in n draws with replacement. Definitions Probability mass function The following conditions characterize the hypergeometric distribution: The result of each draw (the elements of the population being sampled) can be classified into one of two mutually exclusive categories (e.g. Pass/Fail or Employed/Unemployed). The probability of a success changes on each draw, as each draw decreases the population (sampling without replacement from a finite population). A random variable X follows the hypergeometric distribution if its probability mass function (pmf) is given by p(k) = C(K, k) C(N − K, n − k) / C(N, n), where N is the population size, K is the number of success states in the population, n is the number of draws (i.e. quantity drawn in each trial), k is the number of observed successes, and C(a, b) is a binomial coefficient. The pmf is positive when max(0, n + K − N) ≤ k ≤ min(n, K). A random variable distributed hypergeometrically with parameters N, K and n is written X ~ Hypergeometric(N, K, n) and has the probability mass function above. Combinatorial identities As required, we have ∑k C(K, k) C(N − K, n − k) = C(N, n), which essentially follows from Vandermonde's identity from combinatorics. Also note that C(K, k) C(N − K, n − k) / C(N, n) = C(n, k) C(N − n, K − k) / C(N, K). This identity can be shown by expressing the binomial coefficients in terms of factorials and rearranging the latter. Additionally, it follows from the symmetry of the problem, described in two different but interchangeable ways. For example, consider two rounds of drawing without replacement. In the first round, K out of N neutral marbles are drawn from an urn without replacement and coloured green. Then the colored marbles are put back.
In the second round, n marbles are drawn without replacement and colored red. Then, the number of marbles with both colors on them (that is, the number of marbles that have been drawn twice) has the hypergeometric distribution. The symmetry in K and n stems from the fact that the two rounds are independent, and one could have started by drawing n balls and colouring them red first. Note that we are interested in the probability of k successes in n draws without replacement, since the probability of success on each trial is not the same, as the size of the remaining population changes as we remove each marble. Keep in mind not to confuse with the binomial distribution, which describes the probability of k successes in n draws with replacement. Properties Working example The classical application of the hypergeometric distribution is sampling without replacement. Think of an urn with two colors of marbles, red and green. Define drawing a green marble as a success and drawing a red marble as a failure.
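For concreteness, the pmf can be evaluated with Python's math.comb. The urn parameters below (a population of N = 50 marbles containing K = 5 green ones, with n = 10 draws) are hypothetical, chosen only for illustration; the notation N, K, n, k follows the conventional parametrization.

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(k successes in n draws without replacement from a population of
    size N containing K success objects)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Urn with N=50 marbles, K=5 green; draw n=10 without replacement.
p4 = hypergeom_pmf(4, 50, 5, 10)      # probability of exactly 4 green draws
assert 0 < p4 < 1

# The pmf sums to 1 over its support -- Vandermonde's identity in action.
total = sum(hypergeom_pmf(k, 50, 5, 10) for k in range(0, 6))
assert abs(total - 1.0) < 1e-12
```

The summation range 0..5 is the support here, since k can be at most min(n, K) = 5.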
https://en.wikipedia.org/wiki/Kalman%20filter
In statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, including statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, who was one of the primary developers of its theory. This digital filter is sometimes termed the Stratonovich–Kalman–Bucy filter because it is a special case of a more general, nonlinear filter developed somewhat earlier by the Soviet mathematician Ruslan Stratonovich. In fact, some of the special case linear filter's equations appeared in papers by Stratonovich that were published before summer 1961, when Kalman met with Stratonovich during a conference in Moscow. Kalman filtering has numerous technological applications. A common application is for guidance, navigation, and control of vehicles, particularly aircraft, spacecraft and dynamically positioned ships. Furthermore, Kalman filtering is a concept much applied in time series analysis used for topics such as signal processing and econometrics. Kalman filtering is also one of the main topics of robotic motion planning and control and can be used for trajectory optimization. Kalman filtering also works for modeling the central nervous system's control of movement. Due to the time delay between issuing motor commands and receiving sensory feedback, the use of Kalman filters provides a realistic model for making estimates of the current state of a motor system and issuing updated commands. The algorithm works by a two-phase process. For the prediction phase, the Kalman filter produces estimates of the current state variables, along with their uncertainties.
Once the outcome of the next measurement (necessarily corrupted with some error, including random noise) is observed, these estimates are updated using a weighted average, with more weight being given to estimates with greater certainty. The algorithm is recursive. It can operate in real time, using only the present input measurements and the state calculated previously and its uncertainty matrix; no additional past information is required. Optimality of Kalman filtering assumes that errors have a normal (Gaussian) distribution. In the words of Rudolf E. Kálmán: "In summary, the following assumptions are made about random processes: Physical random phenomena may be thought of as due to primary random sources exciting dynamic systems. The primary sources are assumed to be independent gaussian random processes with zero mean; the dynamic systems will be linear." Though regardless of Gaussianity, if the process and measurement covariances are known, the Kalman filter is the best possible linear estimator in the minimum mean-square-error sense. It is a common misconcept
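A minimal scalar sketch of the predict/update cycle described above, under the simplifying assumptions of a constant hidden state and Gaussian measurement noise (all numbers here are made up for illustration):

```python
import random

def kalman_1d(measurements, x0=0.0, p0=1.0, q=0.0, r=0.1**2):
    """Minimal scalar Kalman filter: q = process noise variance (0 for a
    truly constant state), r = measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict: uncertainty grows by process noise
        k = p / (p + r)            # Kalman gain: weight given to the measurement
        x = x + k * (z - x)        # update: weighted average of estimate and z
        p = (1 - k) * p            # updated uncertainty shrinks
        estimates.append(x)
    return estimates

rng = random.Random(1)
true_state = 1.25                  # hypothetical constant quantity being tracked
zs = [true_state + rng.gauss(0, 0.1) for _ in range(200)]
est = kalman_1d(zs)
# The filtered estimate ends far closer to the truth than any single noisy reading.
assert abs(est[-1] - true_state) < 0.05
```

The gain k is exactly the "weighted average" of the text: large measurement noise r drives k toward 0 (trust the prediction), small r drives k toward 1 (trust the measurement).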
https://en.wikipedia.org/wiki/List%20of%20islands%20of%20Sweden
This is a list of islands of Sweden. According to a 2013 statistics report, there are in total 267,570 islands in Sweden, fewer than 1000 of which are inhabited. Their total area is 1.2 million hectares, which corresponds to 3 percent of the total land area of Sweden. Rough population statistics are from 2015. Ordered by size Other well-known islands Adelsö Björkö (Birka) Frösön Gåsö Gotska Sandön Helgö Holmöarna Koster Islands Lidingö Märket Mjältön Stora Karlsö Ven Visingsö Furusund See also List of islands of Bothnian Bay List of islands of Stockholm List of lighthouses and lightvessels in Sweden List of islands in the Baltic Sea List of islands References
https://en.wikipedia.org/wiki/Empty%20product
In mathematics, an empty product, or nullary product or vacuous product, is the result of multiplying no factors. It is by convention equal to the multiplicative identity (assuming there is an identity for the multiplication operation in question), just as the empty sum—the result of adding no numbers—is by convention zero, or the additive identity. When numbers are implied, the empty product becomes one. The term empty product is most often used in the above sense when discussing arithmetic operations. However, the term is sometimes employed when discussing set-theoretic intersections, categorical products, and products in computer programming. Nullary arithmetic product Definition Let a1, a2, a3, ... be a sequence of numbers, and let be the product of the first m elements of the sequence. Then for all m = 1, 2, ... provided that we use the convention . In other words, a "product" with no factors at all evaluates to 1. Allowing a "product" with zero factors reduces the number of cases to be considered in many mathematical formulas. Such a "product" is a natural starting point in induction proofs, as well as in algorithms. For these reasons, the "empty product is one" convention is common practice in mathematics and computer programming. Relevance of defining empty products The notion of an empty product is useful for the same reason that the number zero and the empty set are useful: while they seem to represent quite uninteresting notions, their existence allows for a much shorter mathematical presentation of many subjects. For example, the empty products 0! = 1 (the factorial of zero) and x0 = 1 shorten Taylor series notation (see zero to the power of zero for a discussion of when x = 0). Likewise, if M is an n × n matrix, then M0 is the n × n identity matrix, reflecting the fact that applying a linear map zero times has the same effect as applying the identity map. 
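Python's standard library follows the same convention, and the convention is what lets a product routine avoid a special empty-sequence case:

```python
from math import prod, factorial

# The empty-product convention in practice: multiplying no factors gives 1.
assert prod([]) == 1
assert factorial(0) == 1        # 0! is the empty product of no integers
assert 7 ** 0 == 1              # x**0 is the empty product of no copies of x

# It also makes loop/recursive definitions uniform, with no base-case clause:
def product(seq):
    result = 1                  # start from the multiplicative identity
    for term in seq:
        result *= term
    return result

assert product([]) == 1
assert product([2, 3, 4]) == 24
```

Starting the accumulator at the multiplicative identity is precisely the "empty product is one" convention: before any factor is processed, the running product is the product of no factors.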
As another example, the fundamental theorem of arithmetic says that every positive integer greater than 1 can be written uniquely as a product of primes. However, if we do not allow products with only 0 or 1 factors, then the theorem (and its proof) become longer. More examples of the use of the empty product in mathematics may be found in the binomial theorem (which assumes and implies that x0 = 1 for all x), Stirling number, König's theorem, binomial type, binomial series, difference operator and Pochhammer symbol. Logarithms and exponentials Since logarithms map products to sums: ln ∏ xi = ∑ ln xi, they map an empty product to an empty sum. Conversely, the exponential function maps sums into products: e^(∑ xi) = ∏ e^xi, and maps an empty sum to an empty product. Nullary Cartesian product Consider the general definition of the Cartesian product: ∏i∈I Xi = {g : I → ⋃i∈I Xi | g(i) ∈ Xi for every i ∈ I}. If I is empty, the only such g is the empty function f∅, which is the unique subset of ∅ × ∅ that is a function ∅ → ∅, namely the empty subset (the only subset that ∅ × ∅ = ∅ has): ∏i∈∅ Xi = {f∅} = {∅}. Thus, the cardinality of the Cartesian product of no sets is 1. Under the perhaps more fa
https://en.wikipedia.org/wiki/Discrete%20logarithm
In mathematics, for given real numbers a and b, the logarithm logb a is a number x such that bx = a. Analogously, in any group G, powers bk can be defined for all integers k, and the discrete logarithm logb a is an integer k such that bk = a. In number theory, the more commonly used term is index: we can write x = indr a (mod m) (read "the index of a to the base r modulo m") for rx ≡ a (mod m) if r is a primitive root of m and gcd(a,m) = 1. Discrete logarithms are quickly computable in a few special cases. However, no efficient method is known for computing them in general. Several important algorithms in public-key cryptography, such as ElGamal, base their security on the assumption that the discrete logarithm problem (DLP) over carefully chosen groups has no efficient solution. Definition Let G be any group. Denote its group operation by multiplication and its identity element by 1. Let b be any element of G. For any positive integer k, the expression bk denotes the product of b with itself k times: bk = b · b ⋯ b (with k factors). Similarly, let b−k denote the product of b−1 with itself k times. For k = 0, the kth power is the identity: b0 = 1. Let a also be an element of G. An integer k that solves the equation bk = a is termed a discrete logarithm (or simply logarithm, in this context) of a to the base b. One writes k = logb a. Examples Powers of 10 The powers of 10 are …, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10000, … For any number a in this list, one can compute log10 a. For example, log10 10000 = 4, and log10 0.001 = −3. These are instances of the discrete logarithm problem. Other base-10 logarithms in the real numbers are not instances of the discrete logarithm problem, because they involve non-integer exponents. For example, the equation log10 53 = 1.724276… means that 101.724276… = 53. While integer exponents can be defined in any group using products and inverses, arbitrary real exponents, such as this 1.724276…, require other concepts such as the exponential function.
In group-theoretic terms, the powers of 10 form a cyclic group G under multiplication, and 10 is a generator for this group. The discrete logarithm log10 a is defined for any a in G. Powers of a fixed real number A similar example holds for any non-zero real number b. The powers form a multiplicative subgroup G = {…, b−3, b−2, b−1, 1, b1, b2, b3, …} of the non-zero real numbers. For any element a of G, one can compute logb a. Modular arithmetic One of the simplest settings for discrete logarithms is the group (Zp)×. This is the group of multiplication modulo the prime p. Its elements are congruence classes modulo p, and the group product of two elements may be obtained by ordinary integer multiplication of the elements followed by reduction modulo p. The kth power of one of the numbers in this group may be computed by finding its kth power as an integer and then finding the remainder after division by p. When the numbers involved are large, it is more efficient to reduce modulo p multiple times during the computation. Regardless of the specific algorithm us
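A brute-force discrete logarithm in (Zp)× can be sketched in a few lines. This exhaustive search is exactly what is believed infeasible for the large, well-chosen groups used in cryptography; the tiny modulus below is for illustration only.

```python
def discrete_log(b, a, p):
    """Smallest k >= 0 with b**k congruent to a (mod p), found by trying
    every exponent in turn -- exponential-time in the bit length of p."""
    x = 1
    for k in range(p - 1):
        if x == a % p:
            return k
        x = (x * b) % p            # step to the next power, reducing mod p
    return None                    # a is not in the subgroup generated by b

# In (Z_17)^x, 3 is a generator, so every nonzero residue has a logarithm.
assert discrete_log(3, 13, 17) == 4            # 3^4 = 81 = 4*17 + 13
assert pow(3, discrete_log(3, 10, 17), 17) == 10
```

Note the loop reduces mod p at every step, matching the remark above that repeated reduction is more efficient than computing the full integer power first.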
https://en.wikipedia.org/wiki/Fermi%20gas
A Fermi gas is an idealized model, an ensemble of many non-interacting fermions. Fermions are particles that obey Fermi–Dirac statistics, like electrons, protons, and neutrons, and, in general, particles with half-integer spin. These statistics determine the energy distribution of fermions in a Fermi gas in thermal equilibrium, which is characterized by their number density, temperature, and the set of available energy states. The model is named after the Italian physicist Enrico Fermi. This physical model is useful for certain systems with many fermions. Some key examples are the behaviour of charge carriers in a metal, nucleons in an atomic nucleus, neutrons in a neutron star, and electrons in a white dwarf. Description An ideal Fermi gas or free Fermi gas is a physical model assuming a collection of non-interacting fermions in a constant potential well. Fermions are elementary or composite particles with half-integer spin, and thus follow Fermi–Dirac statistics. The equivalent model for integer spin particles is called the Bose gas (an ensemble of non-interacting bosons). At low enough particle number density and high temperature, both the Fermi gas and the Bose gas behave like a classical ideal gas. By the Pauli exclusion principle, no quantum state can be occupied by more than one fermion with an identical set of quantum numbers. Thus a non-interacting Fermi gas, unlike a Bose gas, is limited to a small number of particles per energy level. Hence a Fermi gas is prohibited from condensing into a Bose–Einstein condensate, although weakly-interacting Fermi gases may form Cooper pairs and condense (a regime also known as the BCS–BEC crossover). The total energy of the Fermi gas at absolute zero is larger than the sum of the single-particle ground states because the Pauli principle implies a sort of interaction or pressure that keeps fermions separated and moving.
For this reason, the pressure of a Fermi gas is non-zero even at zero temperature, in contrast to that of a classical ideal gas. For example, this so-called degeneracy pressure stabilizes a neutron star (a Fermi gas of neutrons) or a white dwarf star (a Fermi gas of electrons) against the inward pull of gravity, which would ostensibly collapse the star into a black hole. Only when a star is sufficiently massive to overcome the degeneracy pressure can it collapse into a singularity. It is possible to define a Fermi temperature below which the gas can be considered degenerate (its pressure derives almost exclusively from the Pauli principle). This temperature depends on the mass of the fermions and the density of energy states. The main assumption of the free electron model to describe the delocalized electrons in a metal can be derived from the Fermi gas. Since interactions are neglected due to screening effect, the problem of treating the equilibrium properties and dynamics of an ideal Fermi gas reduces to the study of the behaviour of single independent particles. In these systems the Fermi tem
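The Fermi–Dirac occupation underlying these statements, n(ε) = 1 / (e^((ε − μ)/kT) + 1), is easy to evaluate numerically. The sketch below uses hypothetical energies, all expressed in the same units as kT.

```python
from math import exp

def fermi_dirac(energy, mu, kT):
    """Mean occupation of a single-particle state at the given energy,
    for chemical potential mu and temperature kT (same energy units)."""
    return 1.0 / (exp((energy - mu) / kT) + 1.0)

# Occupation never exceeds 1 -- the Pauli exclusion principle at work,
# in contrast to Bose statistics, where occupations can diverge.
assert 0.0 < fermi_dirac(0.5, 1.0, 0.05) <= 1.0
# At the chemical potential the occupation is exactly 1/2 at any temperature:
assert abs(fermi_dirac(1.0, 1.0, 0.05) - 0.5) < 1e-12
# Degenerate limit (kT small): states well below mu are full, well above empty.
assert fermi_dirac(0.0, 1.0, 0.01) > 0.999
assert fermi_dirac(2.0, 1.0, 0.01) < 0.001
```

The last two assertions are the "degenerate gas" picture behind the Fermi temperature: far below it, the distribution is nearly a step function at μ.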
https://en.wikipedia.org/wiki/Kronecker%20delta
In mathematics, the Kronecker delta (named after Leopold Kronecker) is a function of two variables, usually just non-negative integers. The function is 1 if the variables are equal, and 0 otherwise: δij = 1 if i = j, and δij = 0 if i ≠ j, or with use of Iverson brackets: δij = [i = j]. For example, δ12 = 0 because 1 ≠ 2, whereas δ33 = 1 because 3 = 3. The Kronecker delta appears naturally in many areas of mathematics, physics, engineering and computer science, as a means of compactly expressing its definition above. In linear algebra, the identity matrix I has entries equal to the Kronecker delta: Iij = δij, where i and j take the values 1, 2, ..., n, and the inner product of vectors can be written as a · b = ∑i,j ai δij bj = ∑i ai bi. Here the Euclidean vectors are defined as n-tuples: a = (a1, a2, ..., an) and b = (b1, b2, ..., bn), and the last step is obtained by using the values of the Kronecker delta to reduce the summation over j. It is common for i and j to be restricted to a set of the form {0, 1, ..., n} or {1, 2, ..., n}, but the Kronecker delta can be defined on an arbitrary set. Properties The following equations are satisfied: ∑j δij aj = ai, ∑i ai δij = aj, ∑k δik δkj = δij. Therefore, the matrix δ can be considered as an identity matrix. Another useful representation is the following form: δnm = (1/N) ∑k=1..N e^(2πik(n−m)/N). This can be derived using the formula for the geometric series. Alternative notation Using the Iverson bracket: δij = [i = j]. Often, a single-argument notation δi is used, which is equivalent to setting j = 0: δi = δi0. In linear algebra, it can be thought of as a tensor, and is written δij. Sometimes the Kronecker delta is called the substitution tensor. Digital signal processing In the study of digital signal processing (DSP), the unit sample function δ[n] represents a special case of a 2-dimensional Kronecker delta function where the Kronecker indices include the number zero, and where one of the indices is zero. In this case: δ[n] = δn0. Or more generally: δ[n − k] = δnk. However, this is only a special case. In tensor calculus, it is more common to number basis vectors in a particular dimension starting with index 1, rather than index 0.
In this case, the relation δ[n] = δn0 does not exist, and in fact, the Kronecker delta function and the unit sample function are different functions that overlap in the specific case where the indices include the number 0, the number of indices is 2, and one of the indices has the value of zero. While the discrete unit sample function and the Kronecker delta function use the same letter, they differ in the following ways. For the discrete unit sample function, it is more conventional to place a single integer index in square brackets; in contrast the Kronecker delta can have any number of indices. Further, the purpose of the discrete unit sample function is different from the Kronecker delta function. In DSP, the discrete unit sample function is typically used as an input function to a discrete system for discovering the system function of the system which will be produced as an output of the system. In contrast, the typical purpose of the Kronecker delta function is for filtering terms from an Einstein summation convention. The discrete unit sample function is more simply defined as: δ[n] = 1 for n = 0, and δ[n] = 0 otherwise. In addition, the Dirac de
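A two-index Kronecker delta, and the two standard uses mentioned above (building the identity matrix, filtering terms out of a summation), can be sketched as:

```python
def kronecker_delta(i, j):
    """1 if the two indices are equal, 0 otherwise."""
    return 1 if i == j else 0

# The identity matrix has entries I[i][j] = delta_ij:
n = 4
identity = [[kronecker_delta(i, j) for j in range(n)] for i in range(n)]
assert identity[2][2] == 1 and identity[2][3] == 0

# Filtering a sum: sum over j of delta_ij * v[j] picks out the single term v[i].
v = [10, 20, 30, 40]
i = 2
assert sum(kronecker_delta(i, j) * v[j] for j in range(n)) == v[i]
```

The second assertion is the substitution property in miniature: contracting a vector with the delta substitutes one index for the other.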
https://en.wikipedia.org/wiki/List%20of%20unsolved%20problems%20in%20mathematics
Many mathematical problems have been stated but not yet solved. These problems come from many areas of mathematics, such as theoretical physics, computer science, algebra, analysis, combinatorics, algebraic, differential, discrete and Euclidean geometries, graph theory, group theory, model theory, number theory, set theory, Ramsey theory, dynamical systems, and partial differential equations. Some problems belong to more than one discipline and are studied using techniques from different areas. Prizes are often awarded for the solution to a long-standing problem, and some lists of unsolved problems, such as the Millennium Prize Problems, receive considerable attention. This list is a composite of notable unsolved problems mentioned in previously published lists, including but not limited to lists considered authoritative. Although this list may never be comprehensive, the problems listed here vary widely in both difficulty and importance. Lists of unsolved problems in mathematics Various mathematicians and organizations have published and promoted lists of unsolved mathematical problems. In some cases, the lists have been associated with prizes for the discoverers of solutions. Millennium Prize Problems Of the original seven Millennium Prize Problems listed by the Clay Mathematics Institute in 2000, six remain unsolved to date: Birch and Swinnerton-Dyer conjecture Hodge conjecture Navier–Stokes existence and smoothness P versus NP Riemann hypothesis Yang–Mills existence and mass gap The seventh problem, the Poincaré conjecture, was solved by Grigori Perelman in 2003. However, a generalization called the smooth four-dimensional Poincaré conjecture—that is, whether a four-dimensional topological sphere can have two or more inequivalent smooth structures—is unsolved. Notebooks The Kourovka Notebook () is a collection of unsolved problems in group theory, first published in 1965 and updated many times since. 
The Sverdlovsk Notebook () is a collection of unsolved problems in semigroup theory, first published in 1969 and updated many times since. The Dniester Notebook () lists several hundred unsolved problems in algebra, particularly ring theory and module theory. The Erlagol Notebook () lists unsolved problems in algebra and model theory. Unsolved problems Algebra Birch–Tate conjecture on the relation between the order of the center of the Steinberg group of the ring of integers of a number field to the field's Dedekind zeta function. Bombieri–Lang conjectures on densities of rational points of algebraic surfaces and algebraic varieties defined on number fields and their field extensions. Connes embedding problem in Von Neumann algebra theory Crouzeix's conjecture: the matrix norm of a complex function f applied to a complex matrix A is at most twice the supremum of |f| over the field of values of A. Determinantal conjecture on the determinant of the sum of two normal matrices. Eilenberg–Ganea conjecture: a group with cohomolo
https://en.wikipedia.org/wiki/Metamathematics
Metamathematics is the study of mathematics itself using mathematical methods. This study produces metatheories, which are mathematical theories about other mathematical theories. Emphasis on metamathematics (and perhaps the creation of the term itself) owes itself to David Hilbert's attempt to secure the foundations of mathematics in the early part of the 20th century. Metamathematics provides "a rigorous mathematical technique for investigating a great variety of foundation problems for mathematics and logic" (Kleene 1952, p. 59). An important feature of metamathematics is its emphasis on differentiating between reasoning from inside a system and from outside a system. An informal illustration of this is categorizing the proposition "2+2=4" as belonging to mathematics while categorizing the proposition "'2+2=4' is valid" as belonging to metamathematics. History Metamathematical metatheorems about mathematics itself were originally differentiated from ordinary mathematical theorems in the 19th century to focus on what was then called the foundational crisis of mathematics. Richard's paradox (Richard 1905) concerning certain 'definitions' of real numbers in the English language is an example of the sort of contradictions that can easily occur if one fails to distinguish between mathematics and metamathematics. Something similar can be said around the well-known Russell's paradox (Does the set of all those sets that do not contain themselves contain itself?). Metamathematics was intimately connected to mathematical logic, so that the early histories of the two fields, during the late 19th and early 20th centuries, largely overlap. More recently, mathematical logic has often included the study of new pure mathematics, such as set theory, category theory, recursion theory and pure model theory, which is not directly related to metamathematics. Serious metamathematical reflection began with the work of Gottlob Frege, especially his Begriffsschrift, published in 1879. 
David Hilbert was the first to invoke the term "metamathematics" with regularity (see Hilbert's program), in the early 20th century. In his hands, it meant something akin to contemporary proof theory, in which finitary methods are used to study various axiomatized mathematical theories (Kleene 1952, p. 55). Other prominent figures in the field include Bertrand Russell, Thoralf Skolem, Emil Post, Alonzo Church, Alan Turing, Stephen Kleene, Willard Quine, Paul Benacerraf, Hilary Putnam, Gregory Chaitin, Alfred Tarski, Paul Cohen and Kurt Gödel. Today, metalogic and metamathematics broadly overlap, and both have been substantially subsumed by mathematical logic in academia. Milestones The discovery of hyperbolic geometry The discovery of hyperbolic geometry had important philosophical consequences for metamathematics. Before its discovery there was just one geometry and mathematics; the idea that another geometry existed was considered improbable. When Gauss discovered hyperboli
https://en.wikipedia.org/wiki/Primality%20test
A primality test is an algorithm for determining whether an input number is prime. Among other fields of mathematics, it is used for cryptography. Unlike integer factorization, primality tests do not generally give prime factors, only stating whether the input number is prime or not. Factorization is thought to be a computationally difficult problem, whereas primality testing is comparatively easy (its running time is polynomial in the size of the input). Some primality tests prove that a number is prime, while others like Miller–Rabin prove that a number is composite. Therefore, the latter might more accurately be called compositeness tests instead of primality tests. Simple methods The simplest primality test is trial division: given an input number, n, check whether it is divisible by any prime number between 2 and √n (i.e., whether the division leaves no remainder). If so, then n is composite. Otherwise, it is prime. In fact, for any divisor p ≥ √n, there must be another divisor n/p ≤ √n, and a prime divisor q of n/p, and therefore looking for prime divisors at most √n is sufficient. For example, consider the number 100, whose divisors are these numbers: 1, 2, 4, 5, 10, 20, 25, 50, 100. When all possible divisors up to n are tested, some divisors will be discovered twice. To observe this, consider the list of divisor pairs of 100: 1 × 100, 2 × 50, 4 × 25, 5 × 20, 10 × 10, 20 × 5, 25 × 4, 50 × 2, 100 × 1. Notice that products past 10 × 10 are the reverse of products that appeared earlier. For example, 5 × 20 and 20 × 5 are the reverse of each other. Note further that of the two divisors, 5 ≤ √100 and 20 ≥ √100. This observation generalizes to all n: all divisor pairs of n contain a divisor less than or equal to √n, so the algorithm need only search for divisors less than or equal to √n to guarantee detection of all divisor pairs. Also notice that 2 is a prime dividing 100, which immediately proves that 100 is not prime. Every positive integer except 1 is divisible by at least one prime number by the Fundamental Theorem of Arithmetic.
Therefore the algorithm need only search for prime divisors less than or equal to √n. For another example, consider how this algorithm determines the primality of 17. One has √17 ≈ 4.12, and the only primes ≤ 4.12 are 2 and 3. Neither divides 17, proving that 17 is prime. For a last example, consider 221. One has √221 ≈ 14.86, and the primes ≤ 14.86 are 2, 3, 5, 7, 11, and 13. Upon checking each, one discovers that 221 = 13 × 17, proving that 221 is not prime. In cases where it is not feasible to compute the list of primes ≤ √n, it is also possible to simply (and slowly) check all numbers between 2 and √n for divisors. A rather simple optimization is to test divisibility by 2 and by just the odd numbers between 3 and √n, since divisibility by an even number implies divisibility by 2. This method can be improved further. Observe that all primes greater than 3 are of the form 6k + i for a nonnegative integer k and i ∈ {1, 5}. Indeed, every integer is of the form 6k + i for a positive integer k and i ∈ {−1, 0, 1, 2, 3, 4}. Since 2 divides 6k, 6k + 2, and 6k + 4, and 3 divides 6k and 6k + 3, the only possible remainders mod 6 for a prime greater than 3 are 1 and 5. So,
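The trial-division strategy with the 6k ± 1 optimization described above can be sketched as:

```python
def is_prime(n):
    """Trial division up to sqrt(n), testing 2 and 3 first and then only
    candidates of the form 6k - 1 and 6k + 1."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    d = 5                          # 5 = 6*1 - 1; its partner is 7 = 6*1 + 1
    while d * d <= n:              # equivalent to d <= sqrt(n), without floats
        if n % d == 0 or n % (d + 2) == 0:
            return False
        d += 6                     # next pair: 11 and 13, then 17 and 19, ...
    return True

assert is_prime(17)                # sqrt(17) < 5, so only 2 and 3 are tried
assert not is_prime(221)           # 221 = 13 * 17, caught when d reaches 11..13
assert not is_prime(100)           # divisible by 2 immediately
```

Comparing d * d against n avoids computing a floating-point square root, a common idiom for this loop bound.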
https://en.wikipedia.org/wiki/Rhombus
In plane Euclidean geometry, a rhombus (plural: rhombi or rhombuses) is a quadrilateral whose four sides all have the same length. Another name is equilateral quadrilateral, since equilateral means that all of its sides are equal in length. The rhombus is often called a "diamond", after the diamonds suit in playing cards which resembles the projection of an octahedral diamond, or a lozenge, though the former sometimes refers specifically to a rhombus with a 60° angle (which some authors call a calisson after the French sweet — also see Polyiamond), and the latter sometimes refers specifically to a rhombus with a 45° angle. Every rhombus is simple (non-self-intersecting), and is a special case of a parallelogram and a kite. A rhombus with right angles is a square. Etymology The word "rhombus" comes from Ancient Greek ῥόμβος (rhombos), meaning something that spins, which derives from the verb ῥέμβω (romanized: rhémbō), meaning "to turn round and round." The word was used both by Euclid and Archimedes, who used the term "solid rhombus" for a bicone, two right circular cones sharing a common base. The surface we refer to as rhombus today is a cross section of the bicone on a plane through the apexes of the two cones.
Characterizations A simple (non-self-intersecting) quadrilateral is a rhombus if and only if it is any one of the following: a parallelogram in which a diagonal bisects an interior angle; a parallelogram in which at least two consecutive sides are equal in length; a parallelogram in which the diagonals are perpendicular (an orthodiagonal parallelogram); a quadrilateral with four sides of equal length (by definition); a quadrilateral in which the diagonals are perpendicular and bisect each other; a quadrilateral in which each diagonal bisects two opposite interior angles; a quadrilateral ABCD possessing a point P in its plane such that the four triangles ABP, BCP, CDP, and DAP are all congruent; or a quadrilateral ABCD in which the incircles in triangles ABC, BCD, CDA and DAB have a common point. Basic properties Every rhombus has two diagonals connecting pairs of opposite vertices, and two pairs of parallel sides. Using congruent triangles, one can prove that the rhombus is symmetric across each of these diagonals. It follows that any rhombus has the following properties: Opposite angles of a rhombus have equal measure. The two diagonals of a rhombus are perpendicular; that is, a rhombus is an orthodiagonal quadrilateral. Its diagonals bisect opposite angles. The first property implies that every rhombus is a parallelogram. A rhombus therefore has all of the properties of a parallelogram: for example, opposite sides are parallel; adjacent angles are supplementary; the two diagonals bisect one another; any line through the midpoint bisects the area; and the sum of the squares of the sides equals the sum of the squares of the diagonals (the parallelogram law). Thus denoting the common side as a and the diagonals as p and q, in every rhombus 4a2 = p2 + q2. Not every parallelogram is a rhombus, though an
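Because the diagonals are perpendicular bisectors of each other, each side is the hypotenuse of a right triangle with legs p/2 and q/2, which gives a quick numerical check of the side/diagonal relation (the diagonal lengths below are hypothetical):

```python
from math import hypot

# For a rhombus the diagonals are perpendicular bisectors of each other, so
# each side spans half of one diagonal and half of the other.
p, q = 6.0, 8.0                    # hypothetical diagonal lengths
a = hypot(p / 2, q / 2)            # common side length, from the right triangle
assert abs(a - 5.0) < 1e-12        # legs 3 and 4 give the 3-4-5 triangle

# Parallelogram law specialized to the rhombus: 4a^2 = p^2 + q^2.
assert abs(4 * a**2 - (p**2 + q**2)) < 1e-9
```

The same half-diagonal triangle also yields the familiar area formula pq/2, since the four right triangles tile the rhombus.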
https://en.wikipedia.org/wiki/K%C3%B6nig%27s%20theorem%20%28set%20theory%29
In set theory, König's theorem states that if the axiom of choice holds, I is a set, mi and ni are cardinal numbers for every i in I, and mi < ni for every i in I, then
∑i∈I mi < ∏i∈I ni.
The sum here is the cardinality of the disjoint union of the sets mi, and the product is the cardinality of the Cartesian product. However, without the use of the axiom of choice, the sum and the product cannot be defined as cardinal numbers, and the meaning of the inequality sign would need to be clarified. König's theorem was introduced by Julius König (1904) in the slightly weaker form that the sum of a strictly increasing sequence of nonzero cardinal numbers is less than their product.

Details
The precise statement of the result: if I is a set, Ai and Bi are sets for every i in I, and |Ai| < |Bi| for every i in I, then
|⋃i∈I Ai| < |∏i∈I Bi|,
where < means strictly less than in cardinality, i.e. there is an injective function from Ai to Bi, but not one going the other way. The union involved need not be disjoint (a non-disjoint union can't be any bigger than the disjoint version, also assuming the axiom of choice). In this formulation, König's theorem is equivalent to the axiom of choice. (Of course, König's theorem is trivial if the cardinal numbers mi and ni are finite and the index set I is finite. If I is empty, then the left sum is the empty sum and therefore 0, while the right product is the empty product and therefore 1.)

König's theorem is remarkable because of the strict inequality in the conclusion. There are many easy rules for the arithmetic of infinite sums and products of cardinals in which one can only conclude a weak inequality ≤, for example: if mi ≤ ni for all i in I, then one can only conclude
∑i∈I mi ≤ ∑i∈I ni
since, for example, setting mi = 1 and ni = 2, where the index set I is the natural numbers, yields the sum ℵ0 for both sides, and we have an equality.

Corollaries of König's theorem
If κ is a cardinal, then κ < 2^κ.
If we take mi = 1, and ni = 2 for each i in κ, then the left side of the above inequality is just κ, while the right side is 2^κ, the cardinality of functions from κ to {0, 1}, that is, the cardinality of the power set of κ. Thus, König's theorem gives us an alternate proof of Cantor's theorem. (Historically of course Cantor's theorem was proved much earlier.)

Axiom of choice
One way of stating the axiom of choice is "an arbitrary Cartesian product of non-empty sets is non-empty". Let Bi be a non-empty set for each i in I. Let Ai = ∅ (the empty set) for each i in I. Then |Ai| < |Bi| for each i in I, so by König's theorem we have:
0 = ∑i∈I |Ai| < |∏i∈I Bi|.
That is, the Cartesian product of the given non-empty sets Bi has a larger cardinality than the sum of empty sets. Thus it is non-empty, which is just what the axiom of choice states. Since the axiom of choice follows from König's theorem, we will use the axiom of choice freely and implicitly when discussing consequences of the theorem.

König's theorem and cofinality
König's theorem has also important consequences for cofinality of cardinal numbers. If κ is an infinite cardinal, then κ < κ^cf(κ). Choose a strictly increasing cf(κ)-sequence of ordinals approaching κ. Ea
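Although König's theorem itself concerns infinite cardinals, the Cantor-style conclusion κ < 2^κ can be checked exhaustively for a small finite set. The sketch below (illustrative only, using the standard itertools power-set recipe) confirms both that |P(S)| = 2^|S| > |S| and that no function from S to its power set is surjective, because the diagonal set {x : x ∉ f(x)} always escapes the image.

```python
from itertools import chain, combinations, product

def powerset(s):
    """All subsets of s, as frozensets (standard itertools recipe)."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

S = {0, 1, 2}
P = powerset(S)
assert len(P) == 2 ** len(S) > len(S)   # |P(S)| = 2^|S| > |S|

# No f: S -> P(S) is surjective: the diagonal set {x : x not in f(x)}
# differs from f(x) at x, so it is never in the image of f.
for image in product(P, repeat=len(S)):
    f = dict(zip(sorted(S), image))
    diagonal = frozenset(x for x in S if x not in f[x])
    assert diagonal not in f.values()
print("Cantor's theorem verified for |S| = 3")
```

The loop ranges over all 8³ = 512 functions from S to P(S), so the check is genuinely exhaustive for this S.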
https://en.wikipedia.org/wiki/Bias%20%28statistics%29
Statistical bias, in the mathematical field of statistics, is a systematic tendency in which the methods used to gather data and generate statistics present an inaccurate, skewed or biased depiction of reality. Statistical bias exists in numerous stages of the data collection and analysis process, including: the source of the data, the methods used to collect the data, the estimator chosen, and the methods used to analyze the data. Data analysts can take various measures at each stage of the process to reduce the impact of statistical bias in their work. Understanding the source of statistical bias can help to assess whether the observed results are close to actuality. Issues of statistical bias have been argued to be closely linked to issues of statistical validity. Statistical bias can have significant real-world implications as data is used to inform decision making across a wide variety of processes in society. Data is used to inform lawmaking, industry regulation, corporate marketing and distribution tactics, and institutional policies in organizations and workplaces. Therefore, there can be significant implications if statistical bias is not accounted for and controlled. For example, if a pharmaceutical company wishes to explore the effect of a medication on the common cold but the data sample only includes men, any conclusions made from that data will be biased towards how the medication affects men rather than people in general. That means the information would be incomplete and not useful for deciding if the medication is ready for release in the general public. In this scenario, the bias can be addressed by broadening the sample. This sampling error is only one of the ways in which data can be biased. Bias can be differentiated from other statistical mistakes such as problems of accuracy (instrument failure/inadequacy), lack of data, or mistakes in transcription (typos). Bias implies that the data selection may have been skewed by the collection criteria.
Other forms of human-based bias emerge in data collection as well, such as response bias, in which participants give inaccurate responses to a question. Bias does not preclude the existence of any other mistakes. One may have a poorly designed sample, an inaccurate measurement device, and typos in recording data simultaneously. Ideally, all factors are controlled and accounted for. Also, it is useful to recognize that the term "error" specifically refers to outcomes rather than processes (errors of rejection or acceptance of the hypothesis being tested), or to the phenomenon of random errors. The terms flaw and mistake are recommended to differentiate procedural errors from these specifically defined outcome-based terms.

Bias of an estimator
Statistical bias is a feature of a statistical technique or of its results whereby the expected value of the results differs from the true underlying quantitative parameter being estimated. The bias of an estimator of a parameter should not be confus
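The notion of estimator bias sketched above can be made concrete with the classic example of sample variance: dividing the sum of squared deviations by n systematically underestimates the true variance, while Bessel's correction (dividing by n − 1) removes the bias. The Monte Carlo sketch below uses invented parameters purely for illustration.

```python
import random

random.seed(0)
TRUE_MEAN, TRUE_VAR = 0.0, 1.0
N, TRIALS = 5, 100_000

biased_sum = unbiased_sum = 0.0
for _ in range(TRIALS):
    xs = [random.gauss(TRUE_MEAN, TRUE_VAR ** 0.5) for _ in range(N)]
    m = sum(xs) / N
    ss = sum((x - m) ** 2 for x in xs)
    biased_sum += ss / N          # divides by n: expected value (n-1)/n * var
    unbiased_sum += ss / (N - 1)  # Bessel's correction: expected value = var

print(biased_sum / TRIALS)    # ≈ 0.8, systematically below the true 1.0
print(unbiased_sum / TRIALS)  # ≈ 1.0
```

With n = 5, the biased estimator's expectation is (n − 1)/n = 0.8 of the true variance, which the simulation reproduces closely.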
https://en.wikipedia.org/wiki/Girolamo%20Fracastoro
Girolamo Fracastoro (c. 1476/8 – 6 August 1553) was an Italian physician, poet, and scholar in mathematics, geography and astronomy. Fracastoro subscribed to the philosophy of atomism, and rejected appeals to hidden causes in scientific investigation. His studies of the mode of syphilis transmission are an early example of epidemiology.

Life
Fracastoro was born in Verona, Republic of Venice and educated at Padua where at the age of 19 he was appointed professor at the university. On account of his eminence in the practice of medicine, he was elected physician of the Council of Trent. A bronze statue was erected in his honor by the citizens of Padua, while his native city commemorated their great compatriot with a marble statue. He lived and practised in his hometown. In 1546 he proposed that epidemic diseases are caused by transferable tiny particles or "spores" that could transmit infection by direct contact, indirect contact, or even without contact over long distances. In his writing, the "spores" of diseases may refer to chemicals rather than to any living entities. He appears to have first used the Latin word fomes, meaning tinder, in the sense of infectious agent, in his essay on contagion De Contagione et Contagiosis Morbis (On Contagion and Contagious Diseases), published in 1546: "I call fomites [from the Latin fomes, meaning "tinder"] such things as clothes, linen, etc., which although not themselves corrupt, can nevertheless foster the essential seeds of the contagion and thus cause infection." His theory remained influential for nearly three centuries, before being superseded by a fully developed germ theory. The name for syphilis is derived from Fracastoro's 1530 epic poem in three books, Syphilis sive morbus gallicus ("Syphilis or The French Disease"), about a shepherd boy named Syphilus who tended the flocks of King Alcinous. Syphilus insulted Sol Pater, the god of the Sun, and was punished by him with a horrible disease.
The poem suggests using mercury and "guaiaco" as a cure. In 1546 his book (De contagione, "On Contagion") also gave the first description of typhus. The collected works of Fracastoro appeared for the first time in 1555. Alongside Syphilis, Fracastoro wrote a Biblical epic in two books, Joseph, and a collection of miscellaneous poems, Carmina. Joseph was translated under the title The Maidens Blush, or Joseph by Josuah Sylvester. A full edition and English translation of Fracastoro's poetry was prepared by James Gardner for The I Tatti Renaissance Library. In 1546 Fracastoro described an epidemic in cattle that devastated farmers near Verona, Italy. That disease is now recognized as foot-and-mouth disease (FMD), an animal illness of great antiquity. A portrait of Fracastoro that has been in the collection of the National Gallery since 1924 has recently been attributed to the renowned Italian painter Titian. The re-attribution has led scholars to speculate that Titian may have painted the portrait in exchange
https://en.wikipedia.org/wiki/David%20H.%20Bailey%20%28mathematician%29
David Harold Bailey (born 14 August 1948) is a mathematician and computer scientist. He received his B.S. in mathematics from Brigham Young University in 1972 and his Ph.D. in mathematics from Stanford University in 1976. He worked for 14 years as a computer scientist at NASA Ames Research Center, and then from 1998 to 2013 as a Senior Scientist at the Lawrence Berkeley National Laboratory. He is now retired from the Berkeley Lab. Bailey is perhaps best known as a co-author (with Peter Borwein and Simon Plouffe) of a 1997 paper that presented a new formula for π (pi), which had been discovered by Plouffe in 1995. This Bailey–Borwein–Plouffe formula permits one to calculate binary or hexadecimal digits of pi beginning at an arbitrary position, by means of a simple algorithm. Subsequently, Bailey and Richard Crandall showed that the existence of this and similar formulas has implications for the long-standing question of "normality"—whether and why the digits of certain mathematical constants (including pi) appear "random" in a particular sense. Bailey is a long-time collaborator with the late Jonathan Borwein (Peter's brother). They co-authored five books and over 80 technical papers on experimental mathematics. Bailey also does research in numerical analysis and parallel computing. He has published studies on the fast Fourier transform (FFT), high-precision arithmetic, and the PSLQ algorithm (used for integer relation detection). He is a co-author of the NAS Benchmarks, which are used to assess and analyze the performance of parallel scientific computers. A "4-step" method of calculating the FFT is widely known as Bailey's FFT algorithm (Bailey himself credits it to W. M. Gentleman and G. Sande). He has also published articles in the area of mathematical finance, including a 2014 paper "Pseudo-mathematics and financial charlatanism," which emphasizes the dangers of statistical overfitting and other abuses of mathematics in the financial field. 
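The Bailey–Borwein–Plouffe formula mentioned above, π = Σk≥0 16^−k (4/(8k+1) − 2/(8k+4) − 1/(8k+5) − 1/(8k+6)), converges quickly enough that a dozen terms already give full double precision. The sketch below just sums the series directly; the digit-extraction property (computing hexadecimal digits of π starting at an arbitrary position) additionally relies on modular exponentiation and is omitted here.

```python
from math import pi

def bbp_pi(terms=12):
    """Approximate pi by summing the Bailey-Borwein-Plouffe series."""
    return sum(
        (4 / (8 * k + 1) - 2 / (8 * k + 4)
         - 1 / (8 * k + 5) - 1 / (8 * k + 6)) / 16 ** k
        for k in range(terms))

# Each term shrinks by a factor of 16, so 12 terms suffice for doubles.
assert abs(bbp_pi() - pi) < 1e-12
print(bbp_pi())
```

The geometric 1/16^k factor is what makes per-digit extraction possible: the tail beyond any position contributes a bounded, computable correction.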
In 1993, Bailey received the Sidney Fernbach award from the IEEE Computer Society, as well as the Chauvenet Prize and the Hasse Prize from the Mathematical Association of America. In 2008 he was a co-recipient of the Gordon Bell Prize from the Association for Computing Machinery. In 2017 he was a co-recipient of the Levi L. Conant Prize from the American Mathematical Society. Bailey is a member of the Church of Jesus Christ of Latter-day Saints. He has positioned himself as an advocate of the teaching of science and of the view that accepting the conclusions of modern science is not incompatible with a religious outlook.

Selected works
with Peter B. Borwein and Simon Plouffe:
with Michał Misiurewicz:
with Jonathan Borwein, Marcos Lopez de Prado and Qiji Jim Zhu:
with Jonathan Borwein: Mathematics by experiment: Plausible reasoning in the 21st century, A. K. Peters 2004, 2008 (with accompanying CD Experiments in Mathematics, 2006)
with Jonathan Borwein, Neil Calkin, Roland Girgensohn, D. Russell Luke, V
https://en.wikipedia.org/wiki/Quotient%20ring
In ring theory, a branch of abstract algebra, a quotient ring, also known as factor ring, difference ring or residue class ring, is a construction quite similar to the quotient group in group theory and to the quotient space in linear algebra. It is a specific example of a quotient, as viewed from the general setting of universal algebra. Starting with a ring R and a two-sided ideal I in R, a new ring, the quotient ring R/I, is constructed, whose elements are the cosets of I in R subject to special + and · operations. (Only the fraction slash "/" is used in quotient ring notation, not a horizontal fraction bar.) Quotient rings are distinct from the so-called "quotient field", or field of fractions, of an integral domain as well as from the more general "rings of quotients" obtained by localization.

Formal quotient ring construction
Given a ring R and a two-sided ideal I in R, we may define an equivalence relation ~ on R as follows: a ~ b if and only if a − b is in I. Using the ideal properties, it is not difficult to check that ~ is a congruence relation. In case a ~ b, we say that a and b are congruent modulo I. The equivalence class of the element a in R is given by [a] = a + I = {a + r : r ∈ I}. This equivalence class is also sometimes written as a mod I and called the "residue class of a modulo I". The set of all such equivalence classes is denoted by R/I; it becomes a ring, the factor ring or quotient ring of R modulo I, if one defines [a] + [b] = [a + b]; [a] · [b] = [ab]. (Here one has to check that these definitions are well-defined. Compare coset and quotient group.) The zero-element of R/I is [0] = 0 + I, and the multiplicative identity is [1] = 1 + I. The map p from R to R/I defined by p(a) = [a] = a + I is a surjective ring homomorphism, sometimes called the natural quotient map or the canonical homomorphism.

Examples
The quotient ring R/{0} is naturally isomorphic to R, and R/R is the zero ring {0}, since, by our definition, for any r in R, we have [r] = r + R = {r + b : b ∈ R}, which equals R itself. This fits with the rule of thumb that the larger the ideal I, the smaller the quotient ring R/I. If I is a proper ideal of R, i.e., I ≠ R, then R/I is not the zero ring.
Consider the ring of integers Z and the ideal of even numbers, denoted by 2Z. Then the quotient ring Z/2Z has only two elements, the coset 0 + 2Z consisting of the even numbers and the coset 1 + 2Z consisting of the odd numbers; applying the definition, Z/2Z = {0 + 2Z, 1 + 2Z}, where 2Z is the ideal of even numbers. It is naturally isomorphic to the finite field with two elements, F2. Intuitively: if you think of all the even numbers as 0, then every integer is either 0 (if it is even) or 1 (if it is odd and therefore differs from an even number by 1). Modular arithmetic is essentially arithmetic in the quotient ring Z/nZ (which has n elements). Now consider the ring of polynomials in the variable X with real coefficients, R[X], and the ideal consisting of all multiples of the polynomial X² + 1. The quotient ring R[X]/(X² + 1) is naturally isomorphic to the field of complex numbers C, with the class [X] playing the role of the imaginary unit i. The reason is that we "forced" X² + 1 = 0, i.e. X² = −1, which is the defining property of i
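The Z/nZ construction can be mirrored in a few lines of code: a sketch of cosets r + nZ with the operations [a] + [b] = [a + b] and [a][b] = [ab]. The class name and representation are invented for illustration; reducing r modulo n picks a canonical representative, which is what makes the operations well-defined.

```python
class Coset:
    """Element r + nZ of the quotient ring Z/nZ (illustrative sketch)."""
    def __init__(self, r, n):
        self.n, self.r = n, r % n   # canonical representative in 0..n-1
    def __add__(self, other):
        return Coset(self.r + other.r, self.n)   # [a] + [b] = [a + b]
    def __mul__(self, other):
        return Coset(self.r * other.r, self.n)   # [a][b] = [ab]
    def __eq__(self, other):
        return self.n == other.n and self.r == other.r
    def __repr__(self):
        return f"[{self.r}] mod {self.n}"

# Z/2Z has exactly two elements: the even coset and the odd coset.
even, odd = Coset(0, 2), Coset(1, 2)
assert Coset(7, 2) == odd      # 7 lies in the odd coset
assert odd + odd == even       # [1] + [1] = [0], as in the field F2
assert odd * odd == odd        # [1][1] = [1]
```

Any two representatives of the same coset give the same result under + and ·, which is the well-definedness check the text refers to.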
https://en.wikipedia.org/wiki/Jaime%20Escalante
Jaime Alfonso Escalante Gutiérrez (December 31, 1930 – March 30, 2010) was a Bolivian-American educator known for teaching students calculus from 1974 to 1991 at Garfield High School in East Los Angeles. Escalante was the subject of the 1988 film Stand and Deliver, in which he is portrayed by Edward James Olmos. In 1993, the asteroid 5095 Escalante was named after him. Early life Escalante was born in 1930 in La Paz, Bolivia. Both of his parents were teachers. Escalante was proud of his Aymara heritage. Early career Escalante taught mathematics and physics for 12 years in Bolivia before he immigrated to the United States. He worked various jobs while teaching himself English and earning another college degree before eventually returning to the classroom as an educator. In 1974, he began to teach at Garfield High School. Escalante was initially so disheartened by the lack of preparation of his students that he called his former employer and asked for his old job back. Escalante eventually changed his mind about returning to work when he found 12 students willing to take an algebra class. Shortly after Escalante came to Garfield High School, its accreditation became threatened. Instead of gearing classes to poorly performing students, Escalante offered AP Calculus. He had already earned the criticism of an administrator, who disapproved of his requiring the students to answer a homework question before being allowed into the classroom: "He said to 'Just get them inside.' I said, 'There is no teaching, no learning going on here. We are just baby-sitting.'" Determined to change the status quo, Escalante persuaded a few students that they could control their futures with the right education. He promised them that they could get jobs in engineering, electronics, and computers if they would learn math: "I'll teach you math and that's your language. With that, you're going to make it. 
You're going to college and sit in the first row, not the back because you're going to know more than anybody." The school administration opposed Escalante frequently during his first few years. He was threatened with dismissal by an assistant principal because he was coming in too early, leaving too late, and failing to get administrative permission to raise funds to pay for his students' Advanced Placement tests. The opposition changed with the arrival of a new principal, Henry Gradillas. Aside from allowing Escalante to stay, Gradillas overhauled the academic curriculum at Garfield, reducing the number of basic math classes and requiring those taking basic math to take algebra as well. He denied extracurricular activities to students who failed to maintain a C average and to new students who failed basic skills tests. One of Escalante's students remarked, "If he wants to teach us that bad, we can learn." Escalante continued to teach at Garfield and instructed his first calculus class in 1978. He recruited fellow teacher Ben Jiménez and taught calculus to five
https://en.wikipedia.org/wiki/Function%20%28mathematics%29
In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y. The set X is called the domain of the function and the set Y is called the codomain of the function. Functions were originally the idealization of how a varying quantity depends on another quantity. For example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century, and, until the 19th century, the functions that were considered were differentiable (that is, they had a high degree of regularity). The concept of a function was formalized at the end of the 19th century in terms of set theory, and this greatly enlarged the domains of application of the concept.

A function is most often denoted by letters such as f, g and h, and the value of a function f at an element x of its domain is denoted by f(x); the numerical value resulting from the function evaluation at a particular input value is denoted by replacing x with this value; for example, the value of f at x = 4 is denoted by f(4). When the function is not named and is represented by an expression E, the value of the function at, say, x = 4 may be denoted by E|x=4. For example, the value at 4 of the function that maps x to (x + 1)² may be denoted by ((x + 1)²)|x=4 (which results in 25).

A function is uniquely represented by the set of all pairs (x, f(x)), called the graph of the function, a popular means of illustrating the function. When the domain and the codomain are sets of real numbers, each such pair may be thought of as the Cartesian coordinates of a point in the plane. Functions are widely used in science, engineering, and in most fields of mathematics. It has been said that functions are "the central objects of investigation" in most fields of mathematics.

Definition
A function from a set X to a set Y is an assignment of an element of Y to each element of X. The set X is called the domain of the function and the set Y is called the codomain of the function.
A function, its domain, and its codomain are declared by the notation f: X → Y, and the value of a function f at an element x of X, denoted by f(x), is called the image of x under f, or the value of f applied to the argument x. Functions are also called maps or mappings, though some authors make some distinction between "maps" and "functions" (see ). Two functions f and g are equal if their domain and codomain sets are the same and their output values agree on the whole domain. More formally, given f: X → Y and g: X → Y, we have f = g if and only if f(x) = g(x) for all x in X. The domain and codomain are not always explicitly given when a function is defined, and, without some (possibly difficult) computation, one might only know that the domain is contained in a larger set. Typically, this occurs in mathematical analysis, where "a function from X to Y" often refers to a function that may have a proper subset of X as domain. For example, a "function from the reals to the reals" may refer to a real-valued function of a real variable. However, a "function from the reals t
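The equality criterion above can be illustrated by representing finite functions through their graphs, here stored as Python dicts mapping each domain element to its unique value (a sketch with invented example functions):

```python
# A finite function can be represented by its graph: the set of
# pairs (x, f(x)), stored here as a dict from domain to codomain.
def make_function(domain, rule):
    return {x: rule(x) for x in domain}

domain = {-2, -1, 0, 1, 2}
f = make_function(domain, lambda x: x * x)
g = make_function(domain, lambda x: abs(x) ** 2)

# f and g share a domain and agree at every element, so f = g,
# even though they were defined by different expressions.
assert f == g

# Each element of the domain is assigned exactly one value.
assert set(f) == domain
print(sorted(set(f.values())))  # [0, 1, 4]
```

The dict comparison checks exactly the condition in the text: same domain (same keys) and the same value at every element.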
https://en.wikipedia.org/wiki/Informal%20mathematics
Informal mathematics, also called naïve mathematics, has historically been the predominant form of mathematics at most times and in most cultures, and is the subject of modern ethno-cultural studies of mathematics. The philosopher Imre Lakatos in his Proofs and Refutations aimed to sharpen the formulation of informal mathematics, by reconstructing its role in nineteenth century mathematical debates and concept formation, opposing the predominant assumptions of mathematical formalism. Informality may not discern between statements given by inductive reasoning (as in approximations which are deemed "correct" merely because they are useful), and statements derived by deductive reasoning.

Terminology
Informal mathematics means any informal mathematical practices, as used in everyday life, or by aboriginal or ancient peoples, without historical or geographical limitation. Modern mathematics, an exception from that point of view, emphasizes formal and strict proofs of all statements from given axioms. This can usefully be called therefore formal mathematics. Informal practices are usually understood intuitively and justified with examples—there are no axioms. This is of direct interest in anthropology and psychology: it casts light on the perceptions and agreements of other cultures. It is also of interest in developmental psychology as it reflects a naïve understanding of the relationships between numbers and things. Another term used for informal mathematics is folk mathematics, which is ambiguous; the mathematical folklore article is dedicated to the usage of that term among professional mathematicians. The field of naïve physics is concerned with similar understandings of physics. People use mathematics and physics in everyday life, without really understanding (or caring) how mathematical and physical ideas were historically derived and justified.
History There has long been a standard account of the development of geometry in ancient Egypt, followed by Greek mathematics and the emergence of deductive logic. The modern sense of the term mathematics, as meaning only those systems justified with reference to axioms, is however an anachronism if read back into history. Several ancient societies built impressive mathematical systems and carried out complex calculations based on proofless heuristics and practical approaches. Mathematical facts were accepted on a pragmatic basis. Empirical methods, as in science, provided the justification for a given technique. Commerce, engineering, calendar creation and the prediction of eclipses and stellar progression were practiced by ancient cultures on at least three continents. N.C. Ghosh included informal mathematics in the list of Folk Mathematics. See also Folk psychology Mathematical Platonism Pseudomathematics Ethnomathematics Numeracy References Philosophy of mathematics Critical pedagogy Sociology of scientific knowledge Mathematics and culture Scientific folklore
https://en.wikipedia.org/wiki/Division%20by%20zero
In mathematics, division by zero is division where the divisor (denominator) is zero. Such a division can be formally expressed as a/0, where a is the dividend (numerator). In ordinary arithmetic, the expression has no meaning, as there is no number that, when multiplied by 0, gives a (assuming a ≠ 0); thus, division by zero is undefined (a type of singularity). Since any number multiplied by zero is zero, the expression 0/0 is also undefined; when it is the form of a limit, it is an indeterminate form. Historically, one of the earliest recorded references to the mathematical impossibility of assigning a value to a/0 is contained in Anglo-Irish philosopher George Berkeley's criticism of infinitesimal calculus in 1734 in The Analyst ("ghosts of departed quantities").

There are mathematical structures in which a/0 is defined for some a, such as in the Riemann sphere (a model of the extended complex plane) and the projectively extended real line; however, such structures do not satisfy every ordinary rule of arithmetic (the field axioms). In computing, a program error may result from an attempt to divide by zero. Depending on the programming environment and the type of number (e.g., floating point, integer) being divided by zero, it may generate positive or negative infinity by the IEEE 754 floating-point standard, generate an exception, generate an error message, cause the program to terminate, result in a special not-a-number value, or crash.

Elementary arithmetic
When division is explained at the elementary arithmetic level, it is often considered as splitting a set of objects into equal parts. As an example, consider having ten cookies, and these cookies are to be distributed equally to five people at a table. Each person would receive 10/5 = 2 cookies. Similarly, if there are ten cookies, and only one person at the table, that person would receive 10/1 = 10 cookies.
So, for dividing by zero, what is the number of cookies that each person receives when 10 cookies are evenly distributed among 0 people at a table? Certain words can be pinpointed in the question to highlight the problem. The problem with this question is the "when". There is no way to distribute 10 cookies to nobody. Therefore, at least in elementary arithmetic, 10/0 is said to be either meaningless or undefined. If there are 5 cookies and 2 people, the problem is in "evenly distribute". In any integer partition of 5 things into 2 parts, either one of the parts of the partition will have more elements than the other or there will be a remainder (written as 5/2 = 2 r1). Or, the problem with 5 cookies and 2 people can be solved by cutting one cookie in half, which introduces the idea of fractions (5/2 = 2½). The problem with 5 cookies and 0 people, on the other hand, cannot be solved in any way that preserves the meaning of "divides". In elementary algebra, another way of looking at division by zero is that division can always be checked using multiplication. Considering the example above, setting x = 10/0, if x equals ten divided b
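The computing behaviour described earlier varies by environment. In standard Python, for example, dividing by an integer or float zero raises an exception, while arithmetic on IEEE 754 infinities propagates special values instead of raising (a small illustrative sketch):

```python
# Integer and float division by zero raise ZeroDivisionError in Python ...
try:
    1 / 0
except ZeroDivisionError as e:
    print("raised:", e)

# ... but IEEE 754 arithmetic on values that are already infinite
# propagates infinities and not-a-number values instead.
inf = float("inf")
print(1.0 / inf)    # 0.0
print(inf - inf)    # nan: an indeterminate form
print(inf > 1e308)  # True: inf exceeds every finite float
```

Other environments make other choices from the same menu: IEEE 754 hardware typically yields ±inf or NaN silently, while many integer units trap.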
https://en.wikipedia.org/wiki/Felicific%20calculus
The felicific calculus is an algorithm formulated by utilitarian philosopher Jeremy Bentham (1748–1832) for calculating the degree or amount of pleasure that a specific action is likely to induce. Bentham, an ethical hedonist, believed the moral rightness or wrongness of an action to be a function of the amount of pleasure or pain that it produced. The felicific calculus could, in principle at least, determine the moral status of any considered act. The algorithm is also known as the utility calculus, the hedonistic calculus and the hedonic calculus.

To be included in this calculation are several variables (or vectors), which Bentham called "circumstances". These are:
Intensity: How strong is the pleasure?
Duration: How long will the pleasure last?
Certainty or uncertainty: How likely or unlikely is it that the pleasure will occur?
Propinquity or remoteness: How soon will the pleasure occur?
Fecundity: The probability that the action will be followed by sensations of the same kind.
Purity: The probability that it will not be followed by sensations of the opposite kind.
Extent: How many people will be affected?

Bentham's instructions
To take an exact account of the general tendency of any act, by which the interests of a community are affected, proceed as follows. Begin with any one person of those whose interests seem most immediately to be affected by it: and take an account,
Of the value of each distinguishable pleasure which appears to be produced by it in the first instance.
Of the value of each pain which appears to be produced by it in the first instance.
Of the value of each pleasure which appears to be produced by it after the first. This constitutes the fecundity of the first pleasure and the impurity of the first pain.
Of the value of each pain which appears to be produced by it after the first. This constitutes the fecundity of the first pain, and the impurity of the first pleasure.
Sum up all the values of all the pleasures on the one side, and those of all the pains on the other. The balance, if it be on the side of pleasure, will give the good tendency of the act upon the whole, with respect to the interests of that individual person; if on the side of pain, the bad tendency of it upon the whole. Take an account of the number of persons whose interests appear to be concerned; and repeat the above process with respect to each. Sum up the numbers expressive of the degrees of good tendency, which the act has, with respect to each individual, in regard to whom the tendency of it is good upon the whole. Do this again with respect to each individual, in regard to whom the tendency of it is bad upon the whole. Take the balance which if on the side of pleasure, will give the general good tendency of the act, with respect to the total number or community of individuals concerned; if on the side of pain, the general evil tendency, with respect to the same community. To make his proposal easier to remember, Bentham devised wh
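Bentham's step-by-step procedure is essentially an algorithm, and its bookkeeping can be sketched directly in code. The per-person scores below are invented solely for illustration; in Bentham's scheme each number would itself be derived from the circumstances (intensity, duration, certainty, and so on), which the calculus does not specify how to measure.

```python
# Sketch of Bentham's procedure: balance pleasures against pains for
# each affected person, then aggregate the balances over the community.
def person_balance(pleasures, pains):
    """Good tendency of the act for one person (positive = pleasure)."""
    return sum(pleasures) - sum(pains)

def act_tendency(community):
    """community: list of (pleasures, pains) pairs, one per person."""
    return sum(person_balance(pl, pa) for pl, pa in community)

community = [
    ([7, 2], [1]),      # person 1: balance +8
    ([3], [4, 2]),      # person 2: balance -3
]
balance = act_tendency(community)
print(balance)          # 5 > 0: good tendency of the act on the whole
```

A positive total corresponds to Bentham's "general good tendency" of the act with respect to the community; a negative total, to its "general evil tendency".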
https://en.wikipedia.org/wiki/Unifying%20theories%20in%20mathematics
There have been several attempts in history to reach a unified theory of mathematics. Some of the most respected mathematicians in academia have expressed views that the whole subject should be fitted into one theory. The unification of mathematical topics has been called mathematical consolidation: "By a consolidation of two or more concepts or theories Ti we mean the creation of a new theory which incorporates elements of all the Ti into one system which achieves more general implications than are obtainable from any single Ti."

Historical perspective
The process of unification might be seen as helping to define what constitutes mathematics as a discipline. For example, mechanics and mathematical analysis were commonly combined into one subject during the 18th century, united by the differential equation concept; while algebra and geometry were considered largely distinct. Now we consider analysis, algebra, and geometry, but not mechanics, as parts of mathematics because they are primarily deductive formal sciences, while mechanics like physics must proceed from observation. There is no major loss of content, with analytical mechanics in the old sense now expressed in terms of symplectic topology, based on the newer theory of manifolds.

Mathematical theories
The term theory is used informally within mathematics to mean a self-consistent body of definitions, axioms, theorems, examples, and so on. (Examples include group theory, Galois theory, control theory, and K-theory.) In particular there is no connotation of hypothetical. Thus the term unifying theory is more like a sociological term used to study the actions of mathematicians. It may assume nothing conjectural that would be analogous to an undiscovered scientific link. There is really no cognate within mathematics to such concepts as Proto-World in linguistics or the Gaia hypothesis.
Nonetheless there have been several episodes within the history of mathematics in which sets of individual theorems were found to be special cases of a single unifying result, or in which a single perspective about how to proceed when developing an area of mathematics could be applied fruitfully to multiple branches of the subject. Geometrical theories A well-known example was the development of analytic geometry, which in the hands of mathematicians such as Descartes and Fermat showed that many theorems about curves and surfaces of special types could be stated in algebraic language (then new), each of which could then be proved using the same techniques. That is, the theorems were very similar algebraically, even if the geometrical interpretations were distinct. In 1859, Arthur Cayley initiated a unification of metric geometries through use of the Cayley-Klein metrics. Later Felix Klein used such metrics to provide a foundation for non-Euclidean geometry. In 1872, Felix Klein noted that the many branches of geometry which had been developed during the 19th century (affine geometry, projective
https://en.wikipedia.org/wiki/Regression%20toward%20the%20mean
In statistics, regression toward the mean (also called reversion to the mean, and reversion to mediocrity) is the phenomenon where if one sample of a random variable is extreme, the next sampling of the same random variable is likely to be closer to its mean. Furthermore, when many random variables are sampled and the most extreme results are intentionally picked out, it refers to the fact that (in many cases) a second sampling of these picked-out variables will result in "less extreme" results, closer to the initial mean of all of the variables. Mathematically, the strength of this "regression" effect depends on whether all of the random variables are drawn from the same distribution, or whether there are genuine differences in the underlying distributions for each random variable. In the first case, the "regression" effect is statistically likely to occur, but in the second case, it may occur less strongly or not at all. Regression toward the mean is thus a useful concept to consider when designing any scientific experiment, data analysis, or test which intentionally selects the "most extreme" events: it indicates that follow-up checks may be useful in order to avoid jumping to false conclusions about these events; they may be "genuine" extreme events, a completely meaningless selection due to statistical noise, or a mix of the two cases. Conceptual examples Simple example: students taking a test Consider a class of students taking a 100-item true/false test on a subject. Suppose that all students choose randomly on all questions. Then, each student's score would be a realization of one of a set of independent and identically distributed random variables, with an expected mean of 50. Naturally, some students will score substantially above 50 and some substantially below 50 just by chance. 
If one selects only the top scoring 10% of the students and gives them a second test on which they again choose randomly on all items, the mean score would again be expected to be close to 50. Thus the mean of these students would "regress" all the way back to the mean of all students who took the original test. No matter what a student scores on the original test, the best prediction of their score on the second test is 50. If choosing answers to the test questions was not random – i.e. if there were no luck (good or bad) or random guessing involved in the answers supplied by the students – then all students would be expected to score the same on the second test as they scored on the original test, and there would be no regression toward the mean. Most realistic situations fall between these two extremes: for example, one might consider exam scores as a combination of skill and luck. In this case, the subset of students scoring above average would be composed of those who were skilled and had not especially bad luck, together with those who were unskilled, but were extremely lucky. On a retest of this subset, the unskilled will be unlikely to r
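The test example above can be sketched as a small Monte Carlo simulation. This is an illustrative sketch only: the class size, random seed, and exact cutoff are arbitrary choices, not taken from the article.

```python
import random
import statistics

random.seed(0)

def take_test(n_items=100):
    """Score of a student who guesses randomly on every true/false item."""
    return sum(random.random() < 0.5 for _ in range(n_items))

# A class of 1000 students, all guessing: scores cluster around 50.
first = [take_test() for _ in range(1000)]

# Select the top-scoring 10% and retest them.
top = sorted(range(1000), key=lambda i: first[i], reverse=True)[:100]
top_first_mean = statistics.mean(first[i] for i in top)
top_retest_mean = statistics.mean(take_test() for _ in top)

# Their first-test mean is well above 50 purely through selection...
assert top_first_mean > 55
# ...but their retest mean "regresses" back to about 50.
assert abs(top_retest_mean - 50) < 3
```

Because skill plays no role here, the selected group's advantage is entirely luck, and it vanishes completely on the retest.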
https://en.wikipedia.org/wiki/Alternativity
In abstract algebra, alternativity is a property of a binary operation. A magma G is said to be left alternative if (xx)y = x(xy) for all x and y in G, and right alternative if y(xx) = (yx)x for all x and y in G. A magma that is both left and right alternative is said to be alternative. Any associative magma (that is, a semigroup) is alternative. More generally, a magma in which every pair of elements generates an associative submagma must be alternative. The converse, however, is not true, in contrast to the situation in alternative algebras. In fact, an alternative magma need not even be power-associative.
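On a finite magma, the left- and right-alternative identities can be checked by brute force. A minimal sketch (the helper names are hypothetical), using multiplication mod 7 as an associative, hence alternative, example and integer subtraction as a non-alternative one:

```python
def is_left_alternative(op, elems):
    """Check (xx)y == x(xy) for all x, y."""
    return all(op(op(x, x), y) == op(x, op(x, y)) for x in elems for y in elems)

def is_right_alternative(op, elems):
    """Check y(xx) == (yx)x for all x, y."""
    return all(op(y, op(x, x)) == op(op(y, x), x) for x in elems for y in elems)

# Multiplication mod 7 is associative (a semigroup), hence alternative.
mul_mod7 = lambda a, b: (a * b) % 7
assert is_left_alternative(mul_mod7, range(7))
assert is_right_alternative(mul_mod7, range(7))

# Subtraction is a magma operation that is neither left nor right
# alternative: (x - x) - y = -y while x - (x - y) = y.
sub = lambda a, b: a - b
assert not is_left_alternative(sub, range(1, 5))
assert not is_right_alternative(sub, range(1, 5))
```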
https://en.wikipedia.org/wiki/Complex%20geometry
In mathematics, complex geometry is the study of geometric structures and constructions arising out of, or described by, the complex numbers. In particular, complex geometry is concerned with the study of spaces such as complex manifolds and complex algebraic varieties, functions of several complex variables, and holomorphic constructions such as holomorphic vector bundles and coherent sheaves. Application of transcendental methods to algebraic geometry falls in this category, together with more geometric aspects of complex analysis. Complex geometry sits at the intersection of algebraic geometry, differential geometry, and complex analysis, and uses tools from all three areas. Because of the blend of techniques and ideas from various areas, problems in complex geometry are often more tractable or concrete than in general. For example, the classification of complex manifolds and complex algebraic varieties through the minimal model program and the construction of moduli spaces sets the field apart from differential geometry, where the classification of possible smooth manifolds is a significantly harder problem. Additionally, the extra structure of complex geometry allows, especially in the compact setting, for global analytic results to be proven with great success, including Shing-Tung Yau's proof of the Calabi conjecture, the Hitchin–Kobayashi correspondence, the nonabelian Hodge correspondence, and existence results for Kähler–Einstein metrics and constant scalar curvature Kähler metrics. These results often feed back into complex algebraic geometry, and for example recently the classification of Fano manifolds using K-stability has benefited tremendously both from techniques in analysis and in pure birational geometry. Complex geometry has significant applications to theoretical physics, where it is essential in understanding conformal field theory, string theory, and mirror symmetry. 
It is often a source of examples in other areas of mathematics, including in representation theory where generalized flag varieties may be studied using complex geometry leading to the Borel–Weil–Bott theorem, or in symplectic geometry, where Kähler manifolds are symplectic, in Riemannian geometry where complex manifolds provide examples of exotic metric structures such as Calabi–Yau manifolds and hyperkähler manifolds, and in gauge theory, where holomorphic vector bundles often admit solutions to important differential equations arising out of physics such as the Yang–Mills equations. Complex geometry additionally is impactful in pure algebraic geometry, where analytic results in the complex setting such as Hodge theory of Kähler manifolds inspire understanding of Hodge structures for varieties and schemes as well as p-adic Hodge theory, deformation theory for complex manifolds inspires understanding of the deformation theory of schemes, and results about the cohomology of complex manifolds inspired the formulation of the Weil conjectures and Grothendieck's
https://en.wikipedia.org/wiki/Algebraic%20element
In mathematics, if L is a field extension of K, then an element a of L is called an algebraic element over K, or just algebraic over K, if there exists some non-zero polynomial g(x) with coefficients in K such that g(a) = 0. Elements of L which are not algebraic over K are called transcendental over K. These notions generalize the algebraic numbers and the transcendental numbers (where the field extension is C/Q, C being the field of complex numbers and Q being the field of rational numbers). Examples The square root of 2 is algebraic over Q, since it is the root of the polynomial x^2 − 2, whose coefficients are rational. Pi is transcendental over Q but algebraic over the field of real numbers R: it is the root of x − π, whose coefficients (1 and −π) are both real, but not of any polynomial with only rational coefficients. (The definition of the term transcendental number uses C/Q, not C/R.) Properties The following conditions are equivalent for an element a of L: a is algebraic over K, the field extension K(a)/K is algebraic, i.e. every element of K(a) is algebraic over K (here K(a) denotes the smallest subfield of L containing K and a), the field extension K(a)/K has finite degree, i.e. the dimension of K(a) as a K-vector space is finite, K[a] = K(a), where K[a] is the set of all elements of L that can be written in the form g(a) with a polynomial g whose coefficients lie in K. To make this more explicit, consider the polynomial evaluation ev_a : K[x] → K(a), g ↦ g(a). This is a homomorphism and its kernel is {g ∈ K[x] : g(a) = 0}. If a is algebraic, this ideal contains non-zero polynomials, but as K[x] is a euclidean domain, it contains a unique polynomial f with minimal degree and leading coefficient 1, which then also generates the ideal and must be irreducible. The polynomial f is called the minimal polynomial of a and it encodes many important properties of a. Hence the ring isomorphism K[x]/(f) → im(ev_a) obtained by the homomorphism theorem is an isomorphism of fields, where we can then observe that im(ev_a) = K(a). Otherwise, ev_a is injective and hence we obtain a field isomorphism K(x) → K(a), where K(x) is the field of fractions of K[x], i.e. 
the field of rational functions on K, by the universal property of the field of fractions. We can conclude that in any case, we find an isomorphism K(a) ≅ K[x]/(f) or K(a) ≅ K(x). Investigating this construction yields the desired results. This characterization can be used to show that the sum, difference, product and quotient of algebraic elements over K are again algebraic over K. For if a and b are both algebraic, then the extension K(a, b)/K is finite. As it contains the aforementioned combinations of a and b, adjoining one of them to K also yields a finite extension, and therefore these elements are algebraic as well. Thus the set of all elements of L which are algebraic over K is a field that sits in between L and K. Fields that do not allow any algebraic elements over them (except their own elements) are called algebraically closed. The field of complex numbers is an example. If L is algebraically closed, then the field of algebraic elements of L over K is algebraically closed, which can again be directly shown using the characterisation of simple algebra
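The closure property can be illustrated numerically. As a standard exercise (not taken from this article), the sum of the algebraic numbers √2 and √3 is itself algebraic over the rationals, being a root of x^4 − 10x^2 + 1, which is obtained by squaring twice to eliminate the radicals:

```python
import math

# sqrt(2) is a root of x^2 - 2 and sqrt(3) is a root of x^2 - 3, both over Q.
a = math.sqrt(2) + math.sqrt(3)

def p(x):
    """Candidate polynomial over Q with a = sqrt(2) + sqrt(3) as a root:
    from a^2 = 5 + 2*sqrt(6), squaring again gives a^4 - 10*a^2 + 1 = 0."""
    return x**4 - 10 * x**2 + 1

# The sum of the two algebraic numbers is again algebraic (up to float error).
assert abs(p(a)) < 1e-9
# A rational point such as 1/2 is not a root.
assert abs(p(0.5)) > 1
```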
https://en.wikipedia.org/wiki/Statistics%20Canada
Statistics Canada (StatCan; ), formed in 1971, is the agency of the Government of Canada commissioned with producing statistics to help better understand Canada, its population, resources, economy, society, and culture. It is headquartered in Ottawa. The agency is led by the chief statistician of Canada, currently Anil Arora, who assumed the role on September 19, 2016. StatCan is accountable to Parliament through the Minister of Innovation, Science and Industry, currently François-Philippe Champagne. Statistics Canada acts as the national statistical agency for Canada, producing statistics for all the provinces as well as the federal government. In addition to conducting about 350 active surveys on virtually all aspects of Canadian life, the Statistics Act mandates that Statistics Canada has a duty to conduct a country-wide census of population every five years and a census of agriculture every ten years. It has regularly been considered the best statistical organization in the world by The Economist, such as in the 1991 and 1993 "Good Statistics" surveys. The Public Policy Forum and others have also recognized successes of the agency. Leadership The head of Statistics Canada is the chief statistician of Canada. The heads of Statistics Canada and the previous organization, the Dominion Bureau of Statistics, are: Robert H. Coats (1918–1942) Sedley A. Cudmore (1942–1945) Herbert Marshall (1945–1956) Walter E. Duffett (1957–1972) Sylvia Ostry (1972–1975) Peter G. Kirkham (1975–1980) James L. Fry (1980) Martin B. Wilk (1980–1985) Ivan P. Fellegi (1985–2008) Munir Sheikh (2008–2010) Wayne Smith (interim 2010; 2011–2016) Anil Arora (2016–) Publications Statistics Canada publishes numerous documents covering a range of statistical information about Canada, including census data, economic and health indicators, immigration economics, income distribution, and social and justice conditions. 
It also publishes a peer-reviewed statistics journal, Survey Methodology. Statistics Canada provides free access to numerous aggregate data tables on various subjects of relevance to Canadian life. Many tables used to be published as the Canadian Socio-economic Information Management System, or CANSIM, which has since been replaced by new, more easily manipulated data tables. The Daily is Statistics Canada's free online bulletin that provides current information from StatCan, updated daily, on current social and economic conditions. Statistics Canada also provides the Canadian Income Survey (CIS)—a cross-sectional survey that assesses the income, income sources, and the economic status of individuals and families in Canada. Data from the Labour Force Survey (LFS) is combined with data from the CIS. The February 24, 2020 edition of The Daily reported statistics on poverty based on the market basket measure (MBM). Data accessibility and licensing As of February 1, 2012, "information published by Statistics Canada is automatically covered by the
https://en.wikipedia.org/wiki/Morlet%20wavelet
In mathematics, the Morlet wavelet (or Gabor wavelet) is a wavelet composed of a complex exponential (carrier) multiplied by a Gaussian window (envelope). This wavelet is closely related to human perception, both hearing and vision. History In 1946, physicist Dennis Gabor, applying ideas from quantum physics, introduced the use of Gaussian-windowed sinusoids for time-frequency decomposition, which he referred to as atoms, and which provide the best trade-off between spatial and frequency resolution. These are used in the Gabor transform, a type of short-time Fourier transform. In 1984, Jean Morlet introduced Gabor's work to the seismology community and, with Goupillaud and Grossmann, modified it to keep the same wavelet shape over equal octave intervals, resulting in the first formalization of the continuous wavelet transform. Definition The wavelet is defined as a constant κ_σ subtracted from a plane wave and then localised by a Gaussian window: Ψ_σ(t) = c_σ π^(−1/4) e^(−t²/2) (e^(iσt) − κ_σ), where κ_σ = e^(−σ²/2) is defined by the admissibility criterion, and the normalisation constant is: c_σ = (1 + e^(−σ²) − 2e^(−3σ²/4))^(−1/2). The Fourier transform of the Morlet wavelet is: Ψ̂_σ(ω) = c_σ π^(−1/4) (e^(−(σ−ω)²/2) − κ_σ e^(−ω²/2)). The "central frequency" ω_Ψ is the position of the global maximum of Ψ̂_σ(ω) which, in this case, is given by the positive solution to: ω_Ψ = σ/(1 − e^(−σ ω_Ψ)), which can be solved by a fixed-point iteration starting at ω_Ψ = σ (the fixed-point iterations converge to the unique positive solution for any initial ω_Ψ > 0). The parameter σ in the Morlet wavelet allows trade between time and frequency resolutions. Conventionally, the restriction σ > 5 is used to avoid problems with the Morlet wavelet at low σ (high temporal resolution). For signals containing only slowly varying frequency and amplitude modulations (audio, for example) it is not necessary to use small values of σ. In this case, κ_σ becomes very small and is, therefore, often neglected. Under the restriction σ > 5, the frequency of the Morlet wavelet is conventionally taken to be ω_Ψ ≈ σ. The wavelet exists as a complex version or a purely real-valued version. 
Some distinguish between the "real Morlet" vs the "complex Morlet". Others consider the complex version to be the "Gabor wavelet", while the real-valued version is the "Morlet wavelet". Uses Use in medicine In magnetic resonance spectroscopy imaging, the Morlet wavelet transform method offers an intuitive bridge between frequency and time information which can clarify the interpretation of complex head trauma spectra obtained with Fourier transform. The Morlet wavelet transform, however, is not intended as a replacement for the Fourier transform, but rather a supplement that allows qualitative access to time related changes and takes advantage of the multiple dimensions available in a free induction decay analysis. The application of the Morlet wavelet analysis is also used to discriminate abnormal heartbeat behavior in the electrocardiogram (ECG). Since the variation of the abnormal heartbeat is a non-stationary signal, this signal is suitable for wavelet-based analysis. Use in music The Morlet wav
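The fixed-point computation of the central frequency can be sketched as follows, assuming the standard admissible Morlet form in which the central frequency ω solves ω = σ/(1 − e^(−σω)); the function name and tolerances are illustrative choices:

```python
import math

def central_frequency(sigma, tol=1e-12, max_iter=200):
    """Solve w = sigma / (1 - exp(-sigma * w)) by fixed-point iteration,
    starting from w = sigma, for the peak of the Morlet wavelet spectrum."""
    w = sigma
    for _ in range(max_iter):
        w_next = sigma / (1.0 - math.exp(-sigma * w))
        if abs(w_next - w) < tol:
            return w_next
        w = w_next
    return w

# For sigma = 6 the correction exp(-sigma * w) is ~1e-16, so the central
# frequency is numerically indistinguishable from sigma itself.
w6 = central_frequency(6.0)
assert w6 >= 6.0 and abs(w6 - 6.0) < 1e-6

# For small sigma the central frequency sits noticeably above sigma,
# which is why small sigma values need special care.
w1 = central_frequency(1.0)
assert w1 > 1.0
```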
https://en.wikipedia.org/wiki/Hall%27s%20marriage%20theorem
In mathematics, Hall's marriage theorem, proved by Philip Hall (1935), is a theorem with two equivalent formulations. In each case, the theorem gives a necessary and sufficient condition for an object to exist: The combinatorial formulation answers whether a finite collection of sets has a transversal—that is, whether an element can be chosen from each set without repetition. Hall's condition is that for any group of sets from the collection, the total unique elements they contain is at least as large as the number of sets in the group. The graph theoretic formulation answers whether a finite bipartite graph has a perfect matching—that is, a way to match each vertex from one group uniquely to an adjacent vertex from the other group. Hall's condition is that any subset of vertices from one group has a neighbourhood of equal or greater size. Combinatorial formulation Statement Let F be a finite family of sets (note that although F is not itself allowed to be infinite, the sets in it may be so, and F may contain the same set multiple times). Let X be the union of all the sets in F, the set of elements that belong to at least one of its sets. A transversal for F is a subset of X that can be obtained by choosing a distinct element from each set in F. This concept can be formalized by defining a transversal to be the image of an injective function f : F → X such that f(S) ∈ S for each S ∈ F. An alternative term for transversal is system of distinct representatives. The collection F satisfies the marriage condition when each subfamily of F contains at least as many distinct members as its number of sets. That is, for all G ⊆ F, |⋃_{S ∈ G} S| ≥ |G|. If a transversal exists then the marriage condition must be true: the function f used to define the transversal maps G to a subset of its union, of size equal to |G|, so the whole union must be at least as large. 
Hall's theorem states that the converse is also true: Examples Example 1 Consider the family with and The transversal could be generated by the function that maps to , to , and to , or alternatively by the function that maps to , to , and to . There are other transversals, such as and . Because this family has at least one transversal, the marriage condition is met. Every subfamily of has equal size to the set of representatives it is mapped to, which is less than or equal to the size of the union of the subfamily. Example 2 Consider with No valid transversal exists; the marriage condition is violated as is shown by the subfamily . Here the number of sets in the subfamily is three, while the union of the three sets contains only two elements. A lower bound on the different number of transversals that a given finite family of size n may have is obtained as follows: If each of the sets in the family has cardinality at least r, then the number of different transversals is either at least r! if r ≤ n, or at least r(r − 1) ⋯ (r − n + 1) if r > n. Recall that a transversal for a family is an ordered sequence, so two different transversals could have exactly the same elements. For instance, the collection , has and as distin
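Both the marriage condition and the existence of a transversal can be checked by brute force on small families. The concrete sets below are illustrative stand-ins, not the article's own examples:

```python
from itertools import chain, combinations, permutations

def hall_condition(sets):
    """Check |union of G| >= |G| for every subfamily G (with multiplicity)."""
    idx = range(len(sets))
    for r in range(1, len(sets) + 1):
        for group in combinations(idx, r):
            union = set().union(*(sets[i] for i in group))
            if len(union) < r:
                return False
    return True

def find_transversal(sets):
    """Brute-force system of distinct representatives, or None."""
    elems = sorted(set(chain.from_iterable(sets)))
    for choice in permutations(elems, len(sets)):
        if all(x in s for x, s in zip(choice, sets)):
            return choice
    return None

good = [{1, 2, 3}, {1, 4, 5}, {3, 5}]   # satisfies the marriage condition
bad = [{1, 2}, {1, 2}, {1, 2}, {3, 4}]  # three sets share only two elements

# Hall's theorem: the condition holds exactly when a transversal exists.
assert hall_condition(good) and find_transversal(good) is not None
assert not hall_condition(bad) and find_transversal(bad) is None
```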
https://en.wikipedia.org/wiki/Quadratic%20function
In mathematics, a quadratic polynomial is a polynomial of degree two in one or more variables. A quadratic function is the polynomial function defined by a quadratic polynomial. Before the 20th century, the distinction was unclear between a polynomial and its associated polynomial function; so "quadratic polynomial" and "quadratic function" were almost synonymous. This is still the case in many elementary courses, where both terms are often abbreviated as "quadratic". For example, a univariate (single-variable) quadratic function has the form f(x) = ax^2 + bx + c, a ≠ 0, where x is its variable. The graph of a univariate quadratic function is a parabola, a curve that has an axis of symmetry parallel to the y-axis. If a quadratic function is equated with zero, then the result is a quadratic equation. The solutions of a quadratic equation are the zeros of the corresponding quadratic function. The bivariate case in terms of variables x and y has the form f(x, y) = ax^2 + bxy + cy^2 + dx + ey + f, with at least one of a, b, c not equal to zero. The zeros of this quadratic function form, in general (that is, if a certain expression of the coefficients is not equal to zero), a conic section (a circle or other ellipse, a parabola, or a hyperbola). A quadratic function in three variables x, y, and z contains exclusively terms x^2, y^2, z^2, xy, xz, yz, x, y, z, and a constant, where at least one of the coefficients of the second-degree terms is not zero. A quadratic function can have an arbitrarily large number of variables. The set of its zeros forms a quadric, which is a surface in the case of three variables and a hypersurface in the general case. Etymology The adjective quadratic comes from the Latin word quadrātum ("square"). A term raised to the second power like x^2 is called a square in algebra because it is the area of a square with side x. Terminology Coefficients The coefficients of a quadratic function are often taken to be real or complex numbers, but they may be taken in any ring, in which case the domain and the codomain are this ring (see polynomial evaluation). 
Degree When using the term "quadratic polynomial", authors sometimes mean "having degree exactly 2", and sometimes "having degree at most 2". If the degree is less than 2, this may be called a "degenerate case". Usually the context will establish which of the two is meant. Sometimes the word "order" is used with the meaning of "degree", e.g. a second-order polynomial. However, where the "degree of a polynomial" refers to the largest degree of a non-zero term of the polynomial, more typically "order" refers to the lowest degree of a non-zero term of a power series. Variables A quadratic polynomial may involve a single variable x (the univariate case), or multiple variables such as x, y, and z (the multivariate case). The one-variable case Any single-variable quadratic polynomial may be written as ax^2 + bx + c, where x is the variable, and a, b, and c represent the coefficients. Such polynomials often arise in a quadratic equation ax^2 + bx + c = 0. The solutions to this equation are called the roots and can be e
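Solving such an equation with the quadratic formula x = (−b ± √(b² − 4ac))/(2a) can be sketched as follows (function name illustrative); using a complex square root handles a negative discriminant uniformly:

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of ax^2 + bx + c = 0 (a != 0) via the quadratic formula."""
    if a == 0:
        raise ValueError("not quadratic: a must be nonzero")
    d = cmath.sqrt(b * b - 4 * a * c)  # square root of the discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x^2 - 3x + 2 = (x - 1)(x - 2): two real roots.
r1, r2 = quadratic_roots(1, -3, 2)
assert {round(r1.real), round(r2.real)} == {1, 2}

# x^2 + 1: negative discriminant, complex conjugate roots.
c1, c2 = quadratic_roots(1, 0, 1)
assert c1 == 1j and c2 == -1j
```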
https://en.wikipedia.org/wiki/Self-adjoint%20operator
In mathematics, a self-adjoint operator on an infinite-dimensional complex vector space V with inner product (equivalently, a Hermitian operator in the finite-dimensional case) is a linear map A (from V to itself) that is its own adjoint. If V is finite-dimensional with a given orthonormal basis, this is equivalent to the condition that the matrix of A is a Hermitian matrix, i.e., equal to its conjugate transpose A∗. By the finite-dimensional spectral theorem, V has an orthonormal basis such that the matrix of A relative to this basis is a diagonal matrix with entries in the real numbers. This article deals with applying generalizations of this concept to operators on Hilbert spaces of arbitrary dimension. Self-adjoint operators are used in functional analysis and quantum mechanics. In quantum mechanics their importance lies in the Dirac–von Neumann formulation of quantum mechanics, in which physical observables such as position, momentum, angular momentum and spin are represented by self-adjoint operators on a Hilbert space. Of particular significance is the Hamiltonian operator defined by Hψ = −(ħ²/2m)∇²ψ + Vψ, which as an observable corresponds to the total energy of a particle of mass m in a real potential field V. Differential operators are an important class of unbounded operators. The structure of self-adjoint operators on infinite-dimensional Hilbert spaces essentially resembles the finite-dimensional case. That is to say, operators are self-adjoint if and only if they are unitarily equivalent to real-valued multiplication operators. With suitable modifications, this result can be extended to possibly unbounded operators on infinite-dimensional spaces. Since an everywhere-defined self-adjoint operator is necessarily bounded, one needs to be more attentive to the domain issue in the unbounded case. This is explained below in more detail. Definitions Let A be an unbounded (i.e. 
not necessarily bounded) operator with a dense domain Dom(A) ⊆ H. This condition holds automatically when H is finite-dimensional, since Dom(A) = H for every linear operator on a finite-dimensional space. Let the inner product ⟨·, ·⟩ be conjugate-linear on the second argument. This applies to complex Hilbert spaces only. By definition, the adjoint operator A∗ acts on the subspace Dom(A∗) ⊆ H consisting of the elements y for which there is a z ∈ H such that ⟨Ax, y⟩ = ⟨x, z⟩ for every x ∈ Dom(A). Setting A∗y = z defines the linear operator A∗. The graph G(A) of an (arbitrary) operator A is the set G(A) = {(x, Ax) : x ∈ Dom(A)}. An operator B is said to extend A if G(A) ⊆ G(B). This is written as A ⊆ B. The densely defined operator A is called symmetric if ⟨Ax, y⟩ = ⟨x, Ay⟩ for all x, y ∈ Dom(A). As shown below, A is symmetric if and only if A ⊆ A∗. The unbounded densely defined operator A is called self-adjoint if A = A∗. Explicitly, Dom(A) = Dom(A∗) and Ax = A∗x for all x ∈ Dom(A). Every self-adjoint operator is symmetric. Conversely, a symmetric operator A for which Dom(A∗) ⊆ Dom(A) is self-adjoint. In physics, the term Hermitian refers to symmetric as well as self-adjoint operators alike. The subtle difference between the two is generally overlooked. A subset of C is called the resolvent set (or regular set) if for every λ in it the
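In the finite-dimensional case, the spectral statement can be verified directly. A sketch for a hypothetical 2×2 Hermitian matrix, whose eigenvalues come from the characteristic polynomial λ² − (tr A)λ + det A:

```python
import cmath

# A 2x2 Hermitian matrix: equal to its own conjugate transpose.
a11, a12 = 2 + 0j, 1 - 1j
a21, a22 = 1 + 1j, 3 + 0j
assert a11 == a11.conjugate() and a22 == a22.conjugate()
assert a12 == a21.conjugate()

# Eigenvalues from the characteristic polynomial
# lambda^2 - (trace)*lambda + (determinant) = 0.
tr = a11 + a22
det = a11 * a22 - a12 * a21
d = cmath.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + d) / 2, (tr - d) / 2

# As the spectral theorem promises, both eigenvalues are real.
assert abs(lam1.imag) < 1e-12 and abs(lam2.imag) < 1e-12
assert sorted((lam1.real, lam2.real)) == [1.0, 4.0]
```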
https://en.wikipedia.org/wiki/Orientability
In mathematics, orientability is a property of some topological spaces such as real vector spaces, Euclidean spaces, surfaces, and more generally manifolds that allows a consistent definition of "clockwise" and "anticlockwise". A space is orientable if such a consistent definition exists. In this case, there are two possible definitions, and a choice between them is an orientation of the space. Real vector spaces, Euclidean spaces, and spheres are orientable. A space is non-orientable if "clockwise" is changed into "counterclockwise" after running through some loops in it, and coming back to the starting point. This means that a geometric shape that moves continuously along such a loop is changed into its own mirror image. A Möbius strip is an example of a non-orientable space. Various equivalent formulations of orientability can be given, depending on the desired application and level of generality. Formulations applicable to general topological manifolds often employ methods of homology theory, whereas for differentiable manifolds more structure is present, allowing a formulation in terms of differential forms. A generalization of the notion of orientability of a space is that of orientability of a family of spaces parameterized by some other space (a fiber bundle) for which an orientation must be selected in each of the spaces which varies continuously with respect to changes in the parameter values. Orientable surfaces A surface S in the Euclidean space R3 is orientable if a chiral two-dimensional figure cannot be moved around the surface and back to where it started so that it looks like its own mirror image. Otherwise the surface is non-orientable. An abstract surface (i.e., a two-dimensional manifold) is orientable if a consistent concept of clockwise rotation can be defined on the surface in a continuous manner. 
That is to say that a loop going around one way on the surface can never be continuously deformed (without overlapping itself) to a loop going around the opposite way. This turns out to be equivalent to the question of whether the surface contains no subset that is homeomorphic to the Möbius strip. Thus, for surfaces, the Möbius strip may be considered the source of all non-orientability. For an orientable surface, a consistent choice of "clockwise" (as opposed to counter-clockwise) is called an orientation, and the surface is called oriented. For surfaces embedded in Euclidean space, an orientation is specified by the choice of a continuously varying surface normal n at every point. If such a normal exists at all, then there are always two ways to select it: n or −n. More generally, an orientable surface admits exactly two orientations, and the distinction between an oriented surface and an orientable surface is subtle and frequently blurred. An orientable surface is an abstract surface that admits an orientation, while an oriented surface is a surface that is abstractly orientable, and h
https://en.wikipedia.org/wiki/Percolation%20theory
In statistical physics and mathematics, percolation theory describes the behavior of a network when nodes or links are added. This is a geometric type of phase transition, since at a critical fraction of addition the network's small, disconnected clusters merge into significantly larger connected, so-called spanning clusters. The applications of percolation theory to materials science and in many other disciplines are discussed here and in the articles Network theory and Percolation (cognitive psychology). Introduction A representative question (and the source of the name) is as follows. Assume that some liquid is poured on top of some porous material. Will the liquid be able to make its way from hole to hole and reach the bottom? This physical question is modelled mathematically as a three-dimensional network of n × n × n vertices, usually called "sites", in which the edges or "bonds" between each two neighbors may be open (allowing the liquid through) with probability p, or closed with probability 1 − p, and they are assumed to be independent. Therefore, for a given p, what is the probability that an open path (meaning a path, each of whose links is an "open" bond) exists from the top to the bottom? The behavior for large n is of primary interest. This problem, called now bond percolation, was introduced in the mathematics literature by , and has been studied intensively by mathematicians and physicists since then. In a slightly different mathematical model for obtaining a random graph, a site is "occupied" with probability p or "empty" (in which case its edges are removed) with probability 1 − p; the corresponding problem is called site percolation. The question is the same: for a given p, what is the probability that a path exists between top and bottom? Similarly, one can ask, given a connected graph, at what fraction of failures the graph will become disconnected (no large component). The same questions can be asked for any lattice dimension. 
As is quite typical, it is actually easier to examine infinite networks than just large ones. In this case the corresponding question is: does an infinite open cluster exist? That is, is there a path of connected points of infinite length "through" the network? By Kolmogorov's zero–one law, for any given p, the probability that an infinite cluster exists is either zero or one. Since this probability is an increasing function of p (proof via coupling argument), there must be a critical p (denoted by p_c) below which the probability is always 0 and above which the probability is always 1. In practice, this criticality is very easy to observe. Even for n as small as 100, the probability of an open path from the top to the bottom increases sharply from very close to zero to very close to one in a short span of values of p. History The Flory–Stockmayer theory was the first theory investigating percolation processes. The history of the percolation model as we know it has its root in the coal industry. Since the industrial revoluti
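The sharp transition is easy to reproduce in a small site-percolation simulation on a two-dimensional grid (a sketch only: grid size, probabilities, seed, and trial count are illustrative choices, and the 2D square-lattice site threshold is near 0.59):

```python
import random
from collections import deque

def crosses(n, p, rng):
    """Is there an open path of occupied sites from the top row to the
    bottom row of an n-by-n grid, each site occupied independently with
    probability p? Breadth-first search from the top row."""
    occupied = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    queue = deque((0, c) for c in range(n) if occupied[0][c])
    for r, c in queue:
        seen[r][c] = True
    while queue:
        r, c = queue.popleft()
        if r == n - 1:
            return True
        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= rr < n and 0 <= cc < n and occupied[rr][cc] and not seen[rr][cc]:
                seen[rr][cc] = True
                queue.append((rr, cc))
    return False

rng = random.Random(42)
n, trials = 20, 50
prob = {p: sum(crosses(n, p, rng) for _ in range(trials)) / trials
        for p in (0.2, 0.75)}

# Well below the threshold crossings are rare; well above, near-certain.
assert prob[0.2] < 0.1
assert prob[0.75] > 0.9
```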
https://en.wikipedia.org/wiki/Centroid
In mathematics and physics, the centroid, also known as geometric center or center of figure, of a plane figure or solid figure is the point defined by the arithmetic mean position of all the points in the surface of the figure. In a polytope, it can be found using the arithmetic mean position of the vertices. The same definition extends to any object in n-dimensional Euclidean space. In geometry, one often assumes uniform mass density, in which case the barycenter or center of mass coincides with the centroid. Informally, it can be understood as the point at which a cutout of the shape (with uniformly distributed mass) could be perfectly balanced on the tip of a pin. In physics, if variations in gravity are considered, then a center of gravity can be defined as the weighted mean of all points weighted by their specific weight. In geography, the centroid of a radial projection of a region of the Earth's surface to sea level is the region's geographical center. History The term "centroid" is of recent coinage (1814). It is used as a substitute for the older terms "center of gravity" and "center of mass" when the purely geometrical aspects of that point are to be emphasized. The term is peculiar to the English language; the French, for instance, use "centre de gravité" on most occasions, and others use terms of similar meaning. The center of gravity, as the name indicates, is a notion that arose in mechanics, most likely in connection with building activities. It is uncertain when the idea first appeared, as the concept likely occurred to many people individually with minor differences. Nonetheless, the center of gravity of figures was studied extensively in Antiquity; Bossut credits Archimedes (287–212 BCE) with being the first to find the centroid of plane figures, although he never defines it. A treatment of centroids of solids by Archimedes has been lost.
It is unlikely that Archimedes learned the theorem that the medians of a triangle meet in a point—the center of gravity of the triangle—directly from Euclid, as this proposition is not in the Elements. The first explicit statement of this proposition is due to Heron of Alexandria (perhaps the first century CE) and occurs in his Mechanics. It may be added, in passing, that the proposition did not become common in the textbooks on plane geometry until the nineteenth century. Properties The geometric centroid of a convex object always lies in the object. A non-convex object might have a centroid that is outside the figure itself. The centroid of a ring or a bowl, for example, lies in the object's central void. If the centroid is defined, it is a fixed point of all isometries in its symmetry group. In particular, the geometric centroid of an object lies in the intersection of all its hyperplanes of symmetry. The centroid of many figures (regular polygon, regular polyhedron, cylinder, rectangle, rhombus, circle, sphere, ellipse, ellipsoid, superellipse, superellipsoid, etc.) can be determined by this p
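For polygons, the centroid can be computed directly from the vertex coordinates. Below is a short Python sketch of the standard shoelace-based formulas (the function name is my own); for a triangle the result reduces to the mean of the three vertices, the point where the medians meet, as discussed above.

```python
def polygon_centroid(pts):
    """Centroid of a simple polygon with vertices pts (listed counter-clockwise),
    using the shoelace-based formulas:
      A  = (1/2) * sum(x_i*y_{i+1} - x_{i+1}*y_i)
      Cx = (1/(6A)) * sum((x_i + x_{i+1}) * (x_i*y_{i+1} - x_{i+1}*y_i))
    and similarly for Cy."""
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0   # signed twice-area contribution of this edge
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)

# For a triangle, the centroid is the mean of the vertices:
print(polygon_centroid([(0, 0), (3, 0), (0, 3)]))   # (1.0, 1.0)
```

The same routine works for non-convex polygons, where (as the ring and bowl examples illustrate for other shapes) the centroid need not lie inside the figure.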
https://en.wikipedia.org/wiki/Univalent
Univalent may refer to:
Univalent function – an injective holomorphic function on an open subset of the complex plane
Univalent foundations – a type-based approach to the foundations of mathematics
Univalent relation – a binary relation R that relates each element to at most one element (xRy and xRz imply y = z)
Valence (chemistry)#univalent – 1-valent.
https://en.wikipedia.org/wiki/Axiomatic%20system
In mathematics and logic, an axiomatic system is any set of axioms from which some or all axioms can be used in conjunction to logically derive theorems. A theory is a consistent, relatively-self-contained body of knowledge which usually contains an axiomatic system and all its derived theorems. An axiomatic system that is completely described is a special kind of formal system. A formal theory is an axiomatic system (usually formulated within model theory) that describes a set of sentences that is closed under logical implication. A formal proof is a complete rendition of a mathematical proof within a formal system. Properties An axiomatic system is said to be consistent if it lacks contradiction. That is, it is impossible to derive both a statement and its negation from the system's axioms. Consistency is a key requirement for most axiomatic systems, as the presence of contradiction would allow any statement to be proven (principle of explosion). In an axiomatic system, an axiom is called independent if it cannot be proven or disproven from other axioms in the system. A system is called independent if each of its underlying axioms is independent. Unlike consistency, independence is not a necessary requirement for a functioning axiomatic system — though it is usually sought after to minimize the number of axioms in the system. An axiomatic system is called complete if for every statement, either itself or its negation is derivable from the system's axioms (equivalently, every statement is capable of being proven true or false). Relative consistency Beyond consistency, relative consistency is also the mark of a worthwhile axiom system. This describes the scenario where the undefined terms of a first axiom system are provided definitions from a second, such that the axioms of the first are theorems of the second. A good example is the relative consistency of absolute geometry with respect to the theory of the real number system. 
Lines and points are undefined terms (also called primitive notions) in absolute geometry, but assigned meanings in the theory of real numbers in a way that is consistent with both axiom systems. Models A model for an axiomatic system is a well-defined set, which assigns meaning for the undefined terms presented in the system, in a manner that is consistent with the relations defined in the system. The existence of a concrete model proves the consistency of a system. A model is called concrete if the meanings assigned are objects and relations from the real world, as opposed to an abstract model, which is based on other axiomatic systems. Models can also be used to show the independence of an axiom in the system. By constructing a valid model for a subsystem without a specific axiom, we show that the omitted axiom is independent if its correctness does not necessarily follow from the subsystem. Two models are said to be isomorphic if a one-to-one correspondence can be found between their elements, in a manner that preserves their relations
https://en.wikipedia.org/wiki/Irreducible%20polynomial
In mathematics, an irreducible polynomial is, roughly speaking, a polynomial that cannot be factored into the product of two non-constant polynomials. The property of irreducibility depends on the nature of the coefficients that are accepted for the possible factors, that is, the field to which the coefficients of the polynomial and its possible factors are supposed to belong. For example, the polynomial x^2 − 2 is a polynomial with integer coefficients, but, as every integer is also a real number, it is also a polynomial with real coefficients. It is irreducible if it is considered as a polynomial with integer coefficients, but it factors as (x − √2)(x + √2) if it is considered as a polynomial with real coefficients. One says that the polynomial x^2 − 2 is irreducible over the integers but not over the reals. Polynomial irreducibility can be considered for polynomials with coefficients in an integral domain R, and there are two common definitions. Most often, a polynomial over an integral domain R is said to be irreducible if it is not the product of two polynomials that have their coefficients in R, and that are not units in R. Equivalently, for this definition, an irreducible polynomial is an irreducible element in the ring of polynomials over R. If R is a field, the two definitions of irreducibility are equivalent. For the second definition, a polynomial is irreducible if it cannot be factored into polynomials with coefficients in the same domain that both have a positive degree. Equivalently, a polynomial is irreducible if it is irreducible over the field of fractions of the integral domain. For example, the polynomial 2(x^2 − 2) is irreducible for the second definition, and not for the first one. On the other hand, x^2 − 2 is irreducible in Z[x] for the two definitions, while it is reducible in R[x]. A polynomial that is irreducible over any field containing the coefficients is absolutely irreducible. By the fundamental theorem of algebra, a univariate polynomial is absolutely irreducible if and only if its degree is one.
On the other hand, with several indeterminates, there are absolutely irreducible polynomials of any degree, such as x^n + y^n − 1 for any positive integer n. A polynomial that is not irreducible is sometimes said to be a reducible polynomial. Irreducible polynomials appear naturally in the study of polynomial factorization and algebraic field extensions. It is helpful to compare irreducible polynomials to prime numbers: prime numbers (together with the corresponding negative numbers of equal magnitude) are the irreducible integers. They exhibit many of the general properties of the concept of "irreducibility" that equally apply to irreducible polynomials, such as the essentially unique factorization into prime or irreducible factors. When the coefficient ring is a field or other unique factorization domain, an irreducible polynomial is also called a prime polynomial, because it generates a prime ideal. Definition If F is a field, a non-constant polynomial is irreducible over F if its coeffi
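For monic quadratics with integer coefficients, irreducibility over the rationals can be decided mechanically: by the rational root theorem the polynomial is reducible over Q exactly when its discriminant is a perfect square. A small Python sketch of this test (the function name is my own, not a standard one):

```python
from math import isqrt

def monic_quadratic_irreducible_over_Q(b, c):
    """Is x^2 + b*x + c (integers b, c) irreducible over the rationals?
    A monic integer quadratic is reducible over Q iff it has a rational
    (hence, by the rational root theorem, integer) root, i.e. iff the
    discriminant b^2 - 4c is a non-negative perfect square."""
    d = b * b - 4 * c
    if d < 0:
        return True                 # complex roots: irreducible over Q (and over R)
    r = isqrt(d)
    return r * r != d               # irrational real roots: irreducible over Q only

print(monic_quadratic_irreducible_over_Q(0, -2))  # x^2 - 2: True
print(monic_quadratic_irreducible_over_Q(0, -4))  # x^2 - 4 = (x-2)(x+2): False
```

The first case is exactly the example above: x^2 − 2 has irrational roots ±√2, so it is irreducible over Q yet factors over R.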
https://en.wikipedia.org/wiki/Calabi%E2%80%93Yau%20manifold
In algebraic geometry, a Calabi–Yau manifold, also known as a Calabi–Yau space, is a particular type of manifold which has properties, such as Ricci flatness, yielding applications in theoretical physics. Particularly in superstring theory, the extra dimensions of spacetime are sometimes conjectured to take the form of a 6-dimensional Calabi–Yau manifold, which led to the idea of mirror symmetry. The name honors Eugenio Calabi, who first conjectured that such surfaces might exist, and Shing-Tung Yau, who proved the Calabi conjecture. Calabi–Yau manifolds are complex manifolds that are generalizations of K3 surfaces in any number of complex dimensions (i.e. any even number of real dimensions). They were originally defined as compact Kähler manifolds with a vanishing first Chern class and a Ricci-flat metric, though many other similar but inequivalent definitions are sometimes used. Definitions The motivational definition given by Shing-Tung Yau is of a compact Kähler manifold with a vanishing first Chern class, that is also Ricci flat. There are many other definitions of a Calabi–Yau manifold used by different authors, some inequivalent. This section summarizes some of the more common definitions and the relations between them. A Calabi–Yau n-fold or Calabi–Yau manifold of (complex) dimension n is sometimes defined as a compact n-dimensional Kähler manifold M satisfying one of the following equivalent conditions: The canonical bundle of M is trivial. M has a holomorphic n-form that vanishes nowhere. The structure group of the tangent bundle of M can be reduced from U(n) to SU(n). M has a Kähler metric with global holonomy contained in SU(n). These conditions imply that the first integral Chern class of M vanishes. Nevertheless, the converse is not true. The simplest examples where this happens are hyperelliptic surfaces, finite quotients of a complex torus of complex dimension 2, which have vanishing first integral Chern class but non-trivial canonical bundle.
For a compact n-dimensional Kähler manifold M the following conditions are equivalent to each other, but are weaker than the conditions above, though they are sometimes used as the definition of a Calabi–Yau manifold: M has vanishing first real Chern class. M has a Kähler metric with vanishing Ricci curvature. M has a Kähler metric with local holonomy contained in SU(n). A positive power of the canonical bundle of M is trivial. M has a finite cover that has trivial canonical bundle. M has a finite cover that is a product of a torus and a simply connected manifold with trivial canonical bundle. If a compact Kähler manifold is simply connected, then the weak definition above is equivalent to the stronger definition. Enriques surfaces give examples of complex manifolds that have Ricci-flat metrics, but their canonical bundles are not trivial, so they are Calabi–Yau manifolds according to the second but not the first definition above. On the other hand, their double covers are Calabi–Yau manifolds for both definitions (in
https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein%20statistics
In quantum statistics, Bose–Einstein statistics (B–E statistics) describes one of two possible ways in which a collection of non-interacting identical particles may occupy a set of available discrete energy states at thermodynamic equilibrium. The aggregation of particles in the same state, which is a characteristic of particles obeying Bose–Einstein statistics, accounts for the cohesive streaming of laser light and the frictionless creeping of superfluid helium. The theory of this behaviour was developed (1924–25) by Satyendra Nath Bose, who recognized that a collection of identical and indistinguishable particles can be distributed in this way. The idea was later adopted and extended by Albert Einstein in collaboration with Bose. Bose–Einstein statistics apply only to particles that do not follow the Pauli exclusion principle restrictions. Particles that follow Bose–Einstein statistics are called bosons, which have integer values of spin. In contrast, particles that follow Fermi–Dirac statistics are called fermions and have half-integer spins. Bose–Einstein distribution At low temperatures, bosons behave differently from fermions (which obey the Fermi–Dirac statistics) in a way that an unlimited number of them can "condense" into the same energy state. This apparently unusual property also gives rise to the special state of matter – the Bose–Einstein condensate. Fermi–Dirac and Bose–Einstein statistics apply when quantum effects are important and the particles are "indistinguishable". Quantum effects appear if the concentration of particles satisfies N/V ≥ n_q, where N is the number of particles, V is the volume, and n_q is the quantum concentration, for which the interparticle distance is equal to the thermal de Broglie wavelength, so that the wavefunctions of the particles are barely overlapping. Fermi–Dirac statistics applies to fermions (particles that obey the Pauli exclusion principle), and Bose–Einstein statistics applies to bosons.
As the quantum concentration depends on temperature, most systems at high temperatures obey the classical (Maxwell–Boltzmann) limit, unless they also have a very high density, as for a white dwarf. Both Fermi–Dirac and Bose–Einstein become Maxwell–Boltzmann statistics at high temperature or at low concentration. Bose–Einstein statistics was introduced for photons in 1924 by Bose and generalized to atoms by Einstein in 1924–25. The expected number of particles in an energy state i for Bose–Einstein statistics is: n̄_i = g_i / (e^((ε_i − μ)/(k_B T)) − 1), with ε_i > μ, where n̄_i is the occupation number (the number of particles) in state i, g_i is the degeneracy of energy level i, ε_i is the energy of the i-th state, μ is the chemical potential (zero for a photon gas), k_B is the Boltzmann constant, and T is the absolute temperature. The variance of this distribution is calculated directly from the expression above for the average number. For comparison, the average number of fermions with energy ε_i given by the Fermi–Dirac particle-energy distribution has a similar form: n̄_i = g_i / (e^((ε_i − μ)/(k_B T)) + 1). As mentione
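The classical limit mentioned above is easy to check numerically: for (ε − μ) much larger than k_B T, the Bose–Einstein and Fermi–Dirac occupations both approach the Maxwell–Boltzmann value, while at low energies the Bose–Einstein occupation grows without bound. A short Python sketch in reduced units (k_B = 1; function names are my own):

```python
from math import exp

def bose_einstein(eps, mu, T, g=1.0, kB=1.0):
    """Mean occupation n = g / (exp((eps - mu)/(kB*T)) - 1); requires eps > mu."""
    return g / (exp((eps - mu) / (kB * T)) - 1.0)

def fermi_dirac(eps, mu, T, g=1.0, kB=1.0):
    """Mean occupation n = g / (exp((eps - mu)/(kB*T)) + 1)."""
    return g / (exp((eps - mu) / (kB * T)) + 1.0)

def maxwell_boltzmann(eps, mu, T, g=1.0, kB=1.0):
    """Classical limit: n = g * exp(-(eps - mu)/(kB*T))."""
    return g * exp(-(eps - mu) / (kB * T))

# At (eps - mu) >> kB*T all three statistics agree; at low energy the
# Bose-Einstein occupation diverges (precursor of condensation).
for eps in (0.1, 1.0, 10.0):
    print(eps, bose_einstein(eps, 0.0, 1.0),
          fermi_dirac(eps, 0.0, 1.0), maxwell_boltzmann(eps, 0.0, 1.0))
```

At ε = 10 (in units of k_B T) the three values are indistinguishable to several digits, while at ε = 0.1 the bosonic occupation is already about ten particles per state.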
https://en.wikipedia.org/wiki/Disjoint%20union
In mathematics, a disjoint union (or discriminated union) of a family of sets (A_i : i ∈ I) is a set, often denoted by ⨆_{i∈I} A_i, with an injection of each A_i into it, such that the images of these injections form a partition of ⨆_{i∈I} A_i (that is, each element of the disjoint union belongs to exactly one of these images). A disjoint union of a family of pairwise disjoint sets is their union. In category theory, the disjoint union is the coproduct of the category of sets, and thus defined up to a bijection. In this context, the notation ∐_{i∈I} A_i is often used. The disjoint union of two sets A and B is written with infix notation as A ⊔ B. Some authors use the alternative notation A ⊎ B or A ∔ B (along with the corresponding ⨄_{i∈I} A_i or ∑_{i∈I} A_i). A standard way for building the disjoint union is to define the copy of each A_i as the set of ordered pairs (x, i) such that x ∈ A_i, and the injection A_i → ⨆_{i∈I} A_i as x ↦ (x, i). Example Consider the sets A_0 = {1, 2, 3} and A_1 = {1, 2}. It is possible to index the set elements according to set origin by forming the associated sets A_0* = {(1, 0), (2, 0), (3, 0)} and A_1* = {(1, 1), (2, 1)}, where the second element in each pair matches the subscript of the origin set (for example, the 0 in (1, 0) matches the subscript in A_0, etc.). The disjoint union can then be calculated as follows: A_0 ⊔ A_1 = A_0* ∪ A_1* = {(1, 0), (2, 0), (3, 0), (1, 1), (2, 1)}. Set theory definition Formally, let (A_i : i ∈ I) be a family of sets indexed by I. The disjoint union of this family is the set ⨆_{i∈I} A_i = ⋃_{i∈I} {(x, i) : x ∈ A_i}. The elements of the disjoint union are ordered pairs (x, i). Here i serves as an auxiliary index that indicates which A_i the element x came from. Each of the sets A_i is canonically isomorphic to the set A_i* = {(x, i) : x ∈ A_i}. Through this isomorphism, one may consider that A_i is canonically embedded in the disjoint union. For i ≠ j, the sets A_i* and A_j* are disjoint even if the sets A_i and A_j are not. In the extreme case where each of the A_i is equal to some fixed set A for each i ∈ I, the disjoint union is the Cartesian product of A and I: ⨆_{i∈I} A_i = A × I. Occasionally, the notation ∑_{i∈I} A_i is used for the disjoint union of a family of sets, or the notation A + B for the disjoint union of two sets. This notation is meant to be suggestive of the fact that the cardinality of the disjoint union is the sum of the cardinalities of the terms in the family.
Compare this to the notation for the Cartesian product of a family of sets. In the language of category theory, the disjoint union is the coproduct in the category of sets. It therefore satisfies the associated universal property. This also means that the disjoint union is the categorical dual of the Cartesian product construction. See coproduct for more details. For many purposes, the particular choice of auxiliary index is unimportant, and in a simplifying abuse of notation, the indexed family can be treated simply as a collection of sets. In this case is referred to as a of and the notation is sometimes used. Category theory point of view In category theory the disjoint union is defined as a coproduct in the category of sets. As such, the disjoint union is defined up to an isomorphism, and the above definition is just one realization of the coproduct, among others. When the sets are pairwise disjoint, the usual union is another realization of the coproduct. This just
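The standard pair construction translates directly into code: tag each element with the index of the set it came from. A minimal Python sketch (the function name is my own):

```python
def disjoint_union(family):
    """Disjoint union of an indexed family {i: A_i}, realized as the set of
    ordered pairs (x, i) with x in A_i — the standard tagging construction."""
    return {(x, i) for i, A in family.items() for x in A}

A = {1, 2, 3}
B = {1, 2}
U = disjoint_union({0: A, 1: B})
print(sorted(U))                    # [(1, 0), (1, 1), (2, 0), (2, 1), (3, 0)]
# The cardinality is additive even though A and B overlap:
print(len(U) == len(A) + len(B))    # True
```

The second print illustrates the remark above: |A ⊔ B| = |A| + |B| holds even when A ∩ B is nonempty, which is exactly what the sum-like notation is meant to suggest.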
https://en.wikipedia.org/wiki/Matrix%20representation%20of%20conic%20sections
In mathematics, the matrix representation of conic sections permits the tools of linear algebra to be used in the study of conic sections. It provides easy ways to calculate a conic section's axis, vertices, tangents and the pole and polar relationship between points and lines of the plane determined by the conic. The technique does not require putting the equation of a conic section into a standard form, thus making it easier to investigate those conic sections whose axes are not parallel to the coordinate system. Conic sections (including degenerate ones) are the sets of points whose coordinates satisfy a second-degree polynomial equation in two variables, Q(x, y) = Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0. By an abuse of notation, this conic section will also be called Q when no confusion can arise. This equation can be written in matrix notation, in terms of a symmetric matrix to simplify some subsequent formulae, as (x y) [[A, B/2], [B/2, C]] (x y)^T + (D E) (x y)^T + F = 0. The sum of the first three terms of this equation, namely Ax^2 + Bxy + Cy^2, is the quadratic form associated with the equation, and the matrix A_33 = [[A, B/2], [B/2, C]] is called the matrix of the quadratic form. The trace and determinant of A_33 are both invariant with respect to rotation of axes and translation of the plane (movement of the origin). The quadratic equation can also be written as x^T A_Q x = 0, where x is the homogeneous coordinate vector in three variables restricted so that the last variable is 1, i.e., x = (x, y, 1)^T, and where A_Q is the matrix [[A, B/2, D/2], [B/2, C, E/2], [D/2, E/2, F]]. The matrix A_Q is called the matrix of the quadratic equation. Like that of A_33, its determinant is invariant with respect to both rotation and translation. The 2 × 2 upper left submatrix (a matrix of order 2) of A_Q, obtained by removing the third (last) row and third (last) column from A_Q, is the matrix of the quadratic form. The above notation A_33 is used in this article to emphasize this relationship. Classification Proper (non-degenerate) and degenerate conic sections can be distinguished based on the determinant of A_Q: If det A_Q = 0, the conic is degenerate.
If det A_Q ≠ 0, so that Q is not degenerate, we can see what type of conic section it is by computing the minor det A_33: Q is a hyperbola if and only if det A_33 < 0, Q is a parabola if and only if det A_33 = 0, and Q is an ellipse if and only if det A_33 > 0. In the case of an ellipse, we can distinguish the special case of a circle by comparing the last two diagonal elements corresponding to the coefficients of x^2 and y^2: If A = C and B = 0, then Q is a circle. Moreover, in the case of a non-degenerate ellipse (with det A_33 > 0 and det A_Q ≠ 0), we have a real ellipse if (A + C) det A_Q < 0 but an imaginary ellipse if (A + C) det A_Q > 0. An example of the latter is x^2 + y^2 + 1 = 0, which has no real-valued solutions. If the conic section is degenerate (det A_Q = 0), det A_33 still allows us to distinguish its form: Two intersecting lines (a hyperbola degenerated to its two asymptotes) if and only if det A_33 < 0. Two parallel straight lines (a degenerate parabola) if and only if det A_33 = 0. These lines are distinct and real if D^2 + E^2 > 4(A + C)F, coincident if D^2 + E^2 = 4(A + C)F, and non-existent in the real plane if D^2 + E^2 < 4(A + C)F. A single point (a degenerate ellipse) if and only if det A_33 > 0. The case of coincident lines occurs if and only if the rank of the 3 × 3 matrix A_Q i
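The determinant tests above are purely mechanical, so they are easy to turn into code. A Python sketch (the function name is my own) that classifies Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 by expanding det A_Q and det A_33 directly:

```python
def classify_conic(A, B, C, D, E, F):
    """Classify A*x^2 + B*x*y + C*y^2 + D*x + E*y + F = 0 using
    det A_33 (upper-left 2x2 minor) and det A_Q (full 3x3 matrix)."""
    b, d, e = B / 2, D / 2, E / 2
    det33 = A * C - b * b
    # cofactor expansion of det [[A, b, d], [b, C, e], [d, e, F]]
    detQ = A * (C * F - e * e) - b * (b * F - e * d) + d * (b * e - C * d)
    if detQ != 0:                       # proper (non-degenerate) conic
        if det33 < 0:
            return "hyperbola"
        if det33 == 0:
            return "parabola"
        return "ellipse"
    if det33 < 0:                       # degenerate cases
        return "two intersecting lines"
    if det33 == 0:
        return "two parallel lines"
    return "single point"

print(classify_conic(1, 0, 1, 0, 0, -1))   # x^2 + y^2 - 1 = 0 -> ellipse (a circle)
print(classify_conic(1, 0, -1, 0, 0, 0))   # x^2 - y^2 = 0 -> two intersecting lines
print(classify_conic(0, 0, 1, -1, 0, 0))   # y^2 = x -> parabola
```

With integer or exactly-representable coefficients the float comparisons here are exact; for general floats a tolerance would be needed in place of the strict equality tests.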
https://en.wikipedia.org/wiki/Birthday%20attack
A birthday attack is a brute-force collision attack that exploits the mathematics behind the birthday problem in probability theory. This attack can be used to abuse communication between two or more parties. The attack depends on the higher likelihood of collisions found between random attack attempts and a fixed degree of permutations (pigeonholes). With a birthday attack, it is possible to find a collision of an n-bit hash function in about 2^(n/2) evaluations, with 2^n being the classical preimage resistance security with the same probability. There is a general (though disputed) result that quantum computers can perform birthday attacks, thus breaking collision resistance, in 2^(n/3) evaluations. Although there are some digital signature vulnerabilities associated with the birthday attack, it cannot be used to break an encryption scheme any faster than a brute-force attack. Understanding the problem As an example, consider the scenario in which a teacher with a class of 30 students (n = 30) asks for everybody's birthday (for simplicity, ignore leap years) to determine whether any two students have the same birthday (corresponding to a hash collision as described further). Intuitively, this chance may seem small. Counter-intuitively, the probability that at least one student has the same birthday as any other student on any day is around 70% (for n = 30), from the formula 1 − 365!/((365 − n)! · 365^n). If the teacher had picked a specific day (say, 16 September), then the chance that at least one student was born on that specific day is 1 − (364/365)^30, about 7.9%. In a birthday attack, the attacker prepares many different variants of benign and malicious contracts, each having a digital signature. A pair of benign and malicious contracts with the same signature is sought. In this fictional example, suppose that the digital signature of a string is the first byte of its SHA-256 hash. The pair found is indicated in green – note that finding a pair of benign contracts (blue) or a pair of malicious contracts (red) is useless.
After the victim accepts the benign contract, the attacker substitutes it with the malicious one and claims the victim signed it, as proven by the digital signature. Mathematics Given a function f, the goal of the attack is to find two different inputs x_1, x_2 such that f(x_1) = f(x_2). Such a pair is called a collision. The method used to find a collision is simply to evaluate the function f for different input values that may be chosen randomly or pseudorandomly until the same result is found more than once. Because of the birthday problem, this method can be rather efficient. Specifically, if a function f yields any of H different outputs with equal probability and H is sufficiently large, then we expect to obtain a pair of different arguments x_1 and x_2 with f(x_1) = f(x_2) after evaluating the function for about √((π/2)H) ≈ 1.25√H different arguments on average. We consider the following experiment. From a set of H values we choose n values uniformly at random thereby allowing repetitions. Let p(n; H) be the probability that during this experiment at least
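The 1.25√H estimate is easy to observe in practice against a deliberately weakened hash. The sketch below (message format and function name are my own, not from the article) searches for a collision in SHA-256 truncated to its first two bytes, so H = 256^2 = 65536 and a collision is expected after roughly 1.25 · 256 ≈ 320 tries:

```python
import hashlib
from math import sqrt

def find_collision(nbytes=2):
    """Find two different inputs whose SHA-256 digests agree on the first
    `nbytes` bytes (H = 256**nbytes possible truncated digests).
    By the birthday bound this takes about sqrt(pi/2 * H) ≈ 1.25*sqrt(H) tries."""
    seen = {}                       # truncated digest -> message
    i = 0
    while True:
        msg = f"contract-{i}".encode()
        tag = hashlib.sha256(msg).digest()[:nbytes]
        if tag in seen:             # same truncated digest, different message
            return seen[tag], msg, i + 1
        seen[tag] = msg
        i += 1

m1, m2, tries = find_collision()
print(tries, "tries; birthday estimate is about", round(1.25 * sqrt(256 ** 2)))
```

Note that the loop is guaranteed to terminate within H + 1 iterations by the pigeonhole principle, but on average it stops far sooner, near the birthday estimate.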
https://en.wikipedia.org/wiki/Hermitian%20matrix
In mathematics, a Hermitian matrix (or self-adjoint matrix) is a complex square matrix that is equal to its own conjugate transpose—that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j: a_ij = conj(a_ji), or in matrix form: A = conj(A^T). Hermitian matrices can be understood as the complex extension of real symmetric matrices. If the conjugate transpose of a matrix A is denoted by A^H, then the Hermitian property can be written concisely as A = A^H. Hermitian matrices are named after Charles Hermite, who demonstrated in 1855 that matrices of this form share a property with real symmetric matrices of always having real eigenvalues. Other, equivalent notations in common use are A^H = A^† = A^*, although in quantum mechanics, A^* typically means the complex conjugate only, and not the conjugate transpose. Alternative characterizations Hermitian matrices can be characterized in a number of equivalent ways, some of which are listed below: Equality with the adjoint A square matrix A is Hermitian if and only if it is equal to its adjoint, that is, it satisfies ⟨Av, w⟩ = ⟨v, Aw⟩ for any pair of vectors v, w, where ⟨·, ·⟩ denotes the inner product operation. This is also the way that the more general concept of self-adjoint operator is defined. Reality of quadratic forms An n × n matrix A is Hermitian if and only if ⟨v, Av⟩ is real for all v ∈ C^n. Spectral properties A square matrix A is Hermitian if and only if it is unitarily diagonalizable with real eigenvalues. Applications Hermitian matrices are fundamental to quantum mechanics because they describe operators with necessarily real eigenvalues. An eigenvalue of an operator on some quantum state is one of the possible measurement outcomes of the operator, which necessitates the need for operators with real eigenvalues.
Examples and solutions In this section, the conjugate transpose of matrix A is denoted as A^H, the transpose of matrix A is denoted as A^T, and the conjugate of matrix A is denoted as conj(A). The diagonal elements of a Hermitian matrix must be real, as they must be their own complex conjugates. Well-known families of Hermitian matrices include the Pauli matrices, the Gell-Mann matrices and their generalizations. In theoretical physics such Hermitian matrices are often multiplied by imaginary coefficients (Physics 125 Course Notes at California Institute of Technology), which results in skew-Hermitian matrices. Here, we offer another useful Hermitian matrix using an abstract example. If a square matrix A equals the product of a matrix B with its conjugate transpose, that is, A = BB^H, then A is a Hermitian positive semi-definite matrix. Furthermore, if B is row full-rank, then A is positive definite. Properties Main diagonal values are real The entries on the main diagonal (top left to bottom right) of any Hermitian matrix are real. Only the main diagonal entries are necessarily real; Hermitian matrices can have arbitrary complex-valued entries in their off-diagonal elements, as long as diagonally-opposite entries are complex conju
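The defining condition a_ij = conj(a_ji) and the reality of the eigenvalues can both be checked directly in the 2×2 case, where the eigenvalues of [[a, b], [conj(b), d]] (a, d real) have a closed form. A small Python sketch without external libraries (function names are my own):

```python
from math import sqrt

def is_hermitian(M):
    """Check M == conjugate-transpose(M) for a square matrix as nested lists."""
    n = len(M)
    return all(M[i][j] == M[j][i].conjugate() for i in range(n) for j in range(n))

def hermitian2_eigenvalues(M):
    """Eigenvalues of a 2x2 Hermitian matrix [[a, b], [conj(b), d]]:
    (a + d)/2 ± sqrt(((a - d)/2)^2 + |b|^2), which are always real."""
    a, d = M[0][0].real, M[1][1].real
    b = M[0][1]
    r = sqrt(((a - d) / 2) ** 2 + abs(b) ** 2)
    return (a + d) / 2 - r, (a + d) / 2 + r

M = [[2, 1 - 1j], [1 + 1j, 3]]
print(is_hermitian(M))             # True
print(hermitian2_eigenvalues(M))   # (1.0, 4.0): real, despite complex entries
```

The discriminant ((a − d)/2)^2 + |b|^2 is a sum of squares, hence non-negative, which is exactly why the eigenvalues come out real.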
https://en.wikipedia.org/wiki/Transfinite%20number
In mathematics, transfinite numbers or infinite numbers are numbers that are "infinite" in the sense that they are larger than all finite numbers. These include the transfinite cardinals, which are cardinal numbers used to quantify the size of infinite sets, and the transfinite ordinals, which are ordinal numbers used to provide an ordering of infinite sets. The term transfinite was coined in 1895 by Georg Cantor, who wished to avoid some of the implications of the word infinite in connection with these objects, which were, nevertheless, not finite. Few contemporary writers share these qualms; it is now accepted usage to refer to transfinite cardinals and ordinals as infinite numbers. Nevertheless, the term transfinite also remains in use. Notable work on transfinite numbers was done by Wacław Sierpiński: Leçons sur les nombres transfinis (1928 book) much expanded into Cardinal and Ordinal Numbers (1958, 2nd ed. 1965). Definition Any finite natural number can be used in at least two ways: as an ordinal and as a cardinal. Cardinal numbers specify the size of sets (e.g., a bag of marbles), whereas ordinal numbers specify the order of a member within an ordered set (e.g., a given man's position from the left in a row, or a given day of January). When extended to transfinite numbers, these two concepts are no longer in one-to-one correspondence. A transfinite cardinal number is used to describe the size of an infinitely large set, while a transfinite ordinal is used to describe the location within an infinitely large set that is ordered. The most notable ordinal and cardinal numbers are, respectively: ω (omega): the lowest transfinite ordinal number. It is also the order type of the natural numbers under their usual linear ordering. ℵ₀ (aleph-null): the first transfinite cardinal number. It is also the cardinality of the natural numbers.
If the axiom of choice holds, the next higher cardinal number is aleph-one, ℵ₁. If not, there may be other cardinals which are incomparable with aleph-one and larger than aleph-null. Either way, there are no cardinals between aleph-null and aleph-one. The continuum hypothesis is the proposition that there are no intermediate cardinal numbers between ℵ₀ and the cardinality of the continuum (the cardinality of the set of real numbers): 2^ℵ₀ = ℵ₁, or equivalently that ℵ₁ is the cardinality of the set of real numbers. In Zermelo–Fraenkel set theory, neither the continuum hypothesis nor its negation can be proved. Some authors, including P. Suppes and J. Rubin, use the term transfinite cardinal to refer to the cardinality of a Dedekind-infinite set in contexts where this may not be equivalent to "infinite cardinal"; that is, in contexts where the axiom of countable choice is not assumed or is not known to hold. Given this definition, the following are all equivalent: 𝔪 is a transfinite cardinal. That is, there is a Dedekind infinite set A such that the cardinality of A is 𝔪. There is a cardinal 𝔫 such that ℵ₀ + 𝔫 = 𝔪. Although transfinite ordinals and cardinals both g
https://en.wikipedia.org/wiki/Normal%20operator
In mathematics, especially functional analysis, a normal operator on a complex Hilbert space H is a continuous linear operator N : H → H that commutes with its hermitian adjoint N*, that is: NN* = N*N. Normal operators are important because the spectral theorem holds for them. The class of normal operators is well understood. Examples of normal operators are unitary operators: N* = N−1 Hermitian operators (i.e., self-adjoint operators): N* = N Skew-Hermitian operators: N* = −N positive operators: N = MM* for some M (so N is self-adjoint). A normal matrix is the matrix expression of a normal operator on the Hilbert space Cn. Properties Normal operators are characterized by the spectral theorem. A compact normal operator (in particular, a normal operator on a finite-dimensional linear space) is unitarily diagonalizable. Let T be a bounded operator. The following are equivalent. T is normal. T* is normal. ‖Tx‖ = ‖T*x‖ for all x (use ‖Tx‖² = ⟨T*Tx, x⟩ = ⟨TT*x, x⟩ = ‖T*x‖²). The self-adjoint and anti–self adjoint parts of T commute. That is, if T is written as T = T₁ + iT₂ with T₁ = (T + T*)/2 and T₂ = (T − T*)/2i, then T₁T₂ = T₂T₁. If N is a normal operator, then N and N* have the same kernel and the same range. Consequently, the range of N is dense if and only if N is injective. Put in another way, the kernel of a normal operator is the orthogonal complement of its range. It follows that the kernel of the operator Nᵏ coincides with that of N for any k. Every generalized eigenvalue of a normal operator is thus genuine. λ is an eigenvalue of a normal operator N if and only if its complex conjugate λ̄ is an eigenvalue of N*. Eigenvectors of a normal operator corresponding to different eigenvalues are orthogonal, and a normal operator stabilizes the orthogonal complement of each of its eigenspaces. This implies the usual spectral theorem: every normal operator on a finite-dimensional space is diagonalizable by a unitary operator. There is also an infinite-dimensional version of the spectral theorem expressed in terms of projection-valued measures. The residual spectrum of a normal operator is empty.
The product of normal operators that commute is again normal; this is nontrivial, but follows directly from Fuglede's theorem, which states (in a form generalized by Putnam): If N and M are normal operators and if A is a bounded linear operator such that NA = AM, then N*A = AM*. The operator norm of a normal operator equals its numerical radius and spectral radius. A normal operator coincides with its Aluthge transform. Properties in finite-dimensional case If a normal operator T on a finite-dimensional real or complex Hilbert space (inner product space) H stabilizes a subspace V, then it also stabilizes its orthogonal complement V⊥. (This statement is trivial in the case where T is self-adjoint.) Proof. Let PV be the orthogonal projection onto V. Then the orthogonal projection onto V⊥ is 1H−PV. The fact that T stabilizes V can be expressed as (1H−PV)TPV = 0, or TPV = PVTPV. The goal is to show that PVT(1H−PV) = 0. Let X = PVT(1H−PV). Since (A, B) ↦ tr(AB*) is an inner product on the
https://en.wikipedia.org/wiki/42%20%28number%29
42 (forty-two) is the natural number that follows 41 and precedes 43. Mathematics Forty-two (42) is a pronic number and an abundant number; its prime factorization (2 × 3 × 7) makes it the second sphenic number and also the second of the form (2 × 3 × r). Additional properties of the number 42 include: It is the number of isomorphism classes of all simple and oriented directed graphs on four vertices. In other words, it is the number of all possible outcomes (up to isomorphism) of a tournament consisting of four teams where the game between any pair of teams results in three possible outcomes: the first team wins, the second team wins, or there is a draw. The group stage of the FIFA World Cup is a good example. It is the third primary pseudoperfect number. It is a Catalan number. Consequently, 42 is the number of noncrossing partitions of a set of five elements, the number of triangulations of a heptagon, the number of rooted ordered binary trees with six leaves, the number of ways in which five pairs of nested parentheses can be arranged, etc. It is an alternating sign matrix number, that is, the number of 4-by-4 alternating sign matrices. It is the smallest number that is equal to the sum of its nonprime proper divisors, i.e., 42 = 1 + 6 + 14 + 21. It is the number of partitions of 10—the number of ways of expressing 10 as a sum of positive integers (note a different sense of partition from that above). 1111123, one of the 42 unordered integer partitions of 10, has 42 ordered compositions, since 7!/(5! × 1! × 1!) = 42. The angle of 42 degrees can be constructed with only a compass and straightedge, as the difference between the constructible angles of 60 and 18 degrees (the latter arising from the golden ratio). Given 27 same-size cubes whose nominal values progress from 1 to 27, a 3 × 3 × 3 magic cube can be constructed such that every row, column, and corridor, and every diagonal passing through the center, is composed of three numbers whose sum of values is 42. It is the third pentadecagonal number.
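Several of the properties above are easy to verify computationally. The sketch below checks the Catalan number C5, counts the integer partitions of 10 with a standard dynamic program, and sums the nonprime proper divisors of 42:

```python
from math import comb

# Catalan number C_5 = C(10, 5) / 6 = 42: counts noncrossing partitions
# of a 5-element set, triangulations of a heptagon, etc.
catalan_5 = comb(10, 5) // 6
assert catalan_5 == 42

def partitions(n):
    """Number of integer partitions of n (classic coin-change DP)."""
    ways = [1] + [0] * n           # ways[m] = partitions of m using parts seen so far
    for part in range(1, n + 1):
        for m in range(part, n + 1):
            ways[m] += ways[m - part]
    return ways[n]

assert partitions(10) == 42

def is_prime(k):
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

# 42 equals the sum of its nonprime proper divisors: 1 + 6 + 14 + 21.
nonprime_proper = [d for d in range(1, 42) if 42 % d == 0 and not is_prime(d)]
assert sum(nonprime_proper) == 42
```
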
It is a meandric number and an open meandric number. 42 is the only known value that is the number of sets of four distinct positive integers a, b, c, d, each less than the value itself, such that ab − cd, ac − bd, and ad − bc are each multiples of the value. Whether there are other values remains an open question. 42 is a (2,6)-perfect number (super-multiperfect), as σ(σ(42)) = σ(96) = 252 = 6 × 42. 42 is the resulting number of the original Smith number (4937775): both the sum of its digits (4 + 9 + 3 + 7 + 7 + 7 + 5) and the sum of the digits in its prime factorization (3 × 5 × 5 × 65837) result in 42. The dimension of the Borel subalgebra in the exceptional Lie algebra e6 is 42. 42 is the largest number n such that there exist positive integers p, q, r with 1 = 1/n + 1/p + 1/q + 1/r (attained by p = 2, q = 3, r = 7). 42 is the smallest number such that for every Riemann surface C of genus g ≥ 2, #Aut(C) ≤ 42 deg(K) = 84(g − 1) (Hurwitz's automorphisms theorem). 42 is the sum of the first six positive even numbers. 42 was the last natural number less than 100 whose representation as a sum of three cubes was found (in 2019). The representation is: 42 is a Harshad number in decimal because it is divisible by the sum of its digits (4 + 2 = 6).
https://en.wikipedia.org/wiki/Harmonic%20series
Harmonic series may refer to either of two related concepts: Harmonic series (mathematics) Harmonic series (music)
https://en.wikipedia.org/wiki/List%20of%20statistics%20articles
0–9 1.96 2SLS (two-stage least squares) redirects to instrumental variable 3SLS – see three-stage least squares 68–95–99.7 rule 100-year flood A A priori probability Abductive reasoning Absolute deviation Absolute risk reduction Absorbing Markov chain ABX test Accelerated failure time model Acceptable quality limit Acceptance sampling Accidental sampling Accuracy and precision Accuracy paradox Acquiescence bias Actuarial science Adapted process Adaptive estimator Additive Markov chain Additive model Additive smoothing Additive white Gaussian noise Adjusted Rand index – see Rand index (subsection) ADMB software Admissible decision rule Age adjustment Age-standardized mortality rate Age stratification Aggregate data Aggregate pattern Akaike information criterion Algebra of random variables Algebraic statistics Algorithmic inference Algorithms for calculating variance All models are wrong All-pairs testing Allan variance Alignments of random points Almost surely Alpha beta filter Alternative hypothesis Analyse-it – software Analysis of categorical data Analysis of covariance Analysis of molecular variance Analysis of rhythmic variance Analysis of variance Analytic and enumerative statistical studies Ancestral graph Anchor test Ancillary statistic ANCOVA redirects to Analysis of covariance Anderson–Darling test ANOVA ANOVA on ranks ANOVA–simultaneous component analysis Anomaly detection Anomaly time series Anscombe transform Anscombe's quartet Antecedent variable Antithetic variates Approximate Bayesian computation Approximate entropy Arcsine distribution Area chart Area compatibility factor ARGUS distribution Arithmetic mean Armitage–Doll multistage model of carcinogenesis Arrival theorem Artificial neural network Ascertainment bias ASReml software Association (statistics) Association mapping Association scheme Assumed mean Astrostatistics Asymptotic distribution Asymptotic equipartition property (information theory) Asymptotic normality redirects to Asymptotic 
distribution Asymptotic relative efficiency redirects to Efficiency (statistics) Asymptotic theory (statistics) Atkinson index Attack rate Augmented Dickey–Fuller test Aumann's agreement theorem Autocorrelation Autocorrelation plot redirects to Correlogram Autocovariance Autoregressive conditional duration Autoregressive conditional heteroskedasticity Autoregressive fractionally integrated moving average Autoregressive integrated moving average Autoregressive model Autoregressive–moving-average model Auxiliary particle filter Average Average treatment effect Averaged one-dependence estimators Azuma's inequality B BA model model for a random network Backfitting algorithm Balance equation Balanced incomplete block design redirects to Block design Balanced repeated replication Balding–Nichols model Banburismus related to Bayesian networks Bangdiwala's B Bapat–Beg theorem Bar chart Barabási–Albert model Barber–Johnson diagram Barnard's test Barnardisation Barnes interpolation Bartlett's met
https://en.wikipedia.org/wiki/Covariance%20matrix
In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector. Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. As an example, the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the x and y directions contain all of the necessary information; a 2 × 2 matrix would be necessary to fully characterize the two-dimensional variation. Any covariance matrix is symmetric and positive semi-definite and its main diagonal contains variances (i.e., the covariance of each element with itself). The covariance matrix of a random vector X is typically denoted by KXX, Σ or S. Definition Throughout this article, boldfaced unsubscripted X and Y are used to refer to random vectors, and Roman subscripted Xi and Yi are used to refer to scalar random variables. If the entries in the column vector X = (X1, X2, …, Xn)^T are random variables, each with finite variance and expected value, then the covariance matrix KXX is the matrix whose (i, j) entry is the covariance KXiXj = cov(Xi, Xj) = E[(Xi − E[Xi])(Xj − E[Xj])], where the operator E denotes the expected value (mean) of its argument. Conflicting nomenclatures and notations Nomenclatures differ. Some statisticians, following the probabilist William Feller in his two-volume book An Introduction to Probability Theory and Its Applications, call the matrix KXX the variance of the random vector X, because it is the natural generalization to higher dimensions of the 1-dimensional variance. Others call it the covariance matrix, because it is the matrix of covariances between the scalar components of the vector X. Both forms are quite standard, and there is no ambiguity between them. The matrix KXX is also often called the variance–covariance matrix, since the diagonal terms are in fact variances.
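A small NumPy sketch (sample data generated for illustration) estimates a covariance matrix, checks the symmetry and positive semi-definiteness stated above, and rescales it to the correlation matrix:

```python
import numpy as np

rng = np.random.default_rng(42)
samples = rng.multivariate_normal(
    mean=[0.0, 0.0, 0.0],
    cov=[[2.0, 0.5, 0.0], [0.5, 1.0, 0.3], [0.0, 0.3, 1.5]],
    size=10_000,
)

K = np.cov(samples, rowvar=False)               # sample covariance matrix

assert np.allclose(K, K.T)                      # symmetric
assert np.all(np.linalg.eigvalsh(K) >= -1e-12)  # positive semi-definite

# Correlation matrix: rescale by the inverse standard deviations,
# corr = D^{-1/2} K D^{-1/2} with D = diag(K).
d_inv = np.diag(1.0 / np.sqrt(np.diag(K)))
corr = d_inv @ K @ d_inv

assert np.allclose(np.diag(corr), 1.0)          # unit diagonal
assert np.all(np.abs(corr) <= 1.0 + 1e-12)      # entries in [-1, 1]
assert np.allclose(corr, np.corrcoef(samples, rowvar=False))
```

The last assertion confirms that the rescaling recipe agrees with NumPy's built-in correlation estimator.
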
By comparison, the notation for the cross-covariance matrix between two vectors is KXY = cov(X, Y) = E[(X − E[X])(Y − E[Y])^T]. Properties Relation to the autocorrelation matrix The auto-covariance matrix KXX is related to the autocorrelation matrix RXX by KXX = RXX − E[X]E[X]^T, where the autocorrelation matrix is defined as RXX = E[XX^T]. Relation to the correlation matrix An entity closely related to the covariance matrix is the matrix of Pearson product-moment correlation coefficients between each of the random variables in the random vector X, which can be written as corr(X) = (diag(KXX))^(−1/2) KXX (diag(KXX))^(−1/2), where diag(KXX) is the matrix of the diagonal elements of KXX (i.e., a diagonal matrix of the variances of Xi for i = 1, …, n). Equivalently, the correlation matrix can be seen as the covariance matrix of the standardized random variables Xi/σ(Xi) for i = 1, …, n. Each element on the principal diagonal of a correlation matrix is the correlation of a random variable with itself, which always equals 1. Each off-diagonal element is between −1 and +1 inclusive. Inverse of the covariance matrix The inverse of this matrix, KXX^(−1), if it exists, is the inverse covariance matrix, also known as the concentration matrix or precision matrix.
https://en.wikipedia.org/wiki/Counting%20measure
In mathematics, specifically measure theory, the counting measure is an intuitive way to put a measure on any set – the "size" of a subset is taken to be the number of elements in the subset if the subset has finitely many elements, and infinity if the subset is infinite. The counting measure can be defined on any measurable space (that is, any set along with a sigma-algebra) but is mostly used on countable sets. In formal notation, we can turn any set X into a measurable space by taking the power set of X as the sigma-algebra; that is, all subsets of X are measurable sets. Then the counting measure μ on this measurable space is the positive measure defined by μ(A) = |A| if A is finite, and μ(A) = ∞ otherwise, for all A ⊆ X, where |A| denotes the cardinality of the set A. The counting measure on (X, Σ) is σ-finite if and only if the space X is countable. Discussion The counting measure is a special case of a more general construction. With the notation as above, any function f : X → [0, ∞] defines a measure μ on (X, Σ) via μ(A) = Σ(a∈A) f(a), where the possibly uncountable sum of real numbers is defined to be the supremum of the sums over all finite subsets, that is, Σ(a∈A) f(a) = sup { Σ(a∈F) f(a) : F ⊆ A, F finite }. Taking f(x) = 1 for all x in X gives the counting measure. See also References Measures (measure theory)
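The general construction above can be sketched for finite subsets: any weight function f induces a measure by summation, and the constant weight f = 1 recovers the counting measure.

```python
import math

def induced_measure(f):
    """Measure mu(A) = sum of f over A, for finite subsets A."""
    def mu(subset):
        return sum(f(x) for x in subset)
    return mu

# Constant weight 1 recovers the counting measure: mu(A) = |A|.
counting = induced_measure(lambda x: 1)

A = {2, 3, 5, 7}
B = {11, 13}
assert counting(A) == len(A) == 4

# Additivity on disjoint sets: mu(A u B) = mu(A) + mu(B).
assert A.isdisjoint(B)
assert counting(A | B) == counting(A) + counting(B)

# A non-constant weight gives a different (weighted) measure.
weighted = induced_measure(lambda x: 1.0 / x)
assert math.isclose(weighted({1, 2, 4}), 1.75)   # 1 + 1/2 + 1/4
```
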
https://en.wikipedia.org/wiki/Algebra%20over%20a%20field
In mathematics, an algebra over a field (often simply called an algebra) is a vector space equipped with a bilinear product. Thus, an algebra is an algebraic structure consisting of a set together with operations of multiplication and addition and scalar multiplication by elements of a field and satisfying the axioms implied by "vector space" and "bilinear". The multiplication operation in an algebra may or may not be associative, leading to the notions of associative algebras and non-associative algebras. Given an integer n, the ring of real square matrices of order n is an example of an associative algebra over the field of real numbers under matrix addition and matrix multiplication since matrix multiplication is associative. Three-dimensional Euclidean space with multiplication given by the vector cross product is an example of a nonassociative algebra over the field of real numbers since the vector cross product is nonassociative, satisfying the Jacobi identity instead. An algebra is unital or unitary if it has an identity element with respect to the multiplication. The ring of real square matrices of order n forms a unital algebra since the identity matrix of order n is the identity element with respect to matrix multiplication. It is an example of a unital associative algebra, a (unital) ring that is also a vector space. Many authors use the term algebra to mean associative algebra, or unital associative algebra, or in some subjects such as algebraic geometry, unital associative commutative algebra. Replacing the field of scalars by a commutative ring leads to the more general notion of an algebra over a ring. Algebras are not to be confused with vector spaces equipped with a bilinear form, like inner product spaces, as, for such a space, the result of a product is not in the space, but rather in the field of coefficients. 
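The cross-product algebra on R3 mentioned above can be checked numerically: the product is bilinear and nonassociative, and it satisfies the Jacobi identity instead of associativity (vectors below are randomly generated for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)
x, y, z = (rng.standard_normal(3) for _ in range(3))
a, b = 2.0, -3.0

cross = np.cross

# Compatibility with scalars: (a x) x (b y) = (a b)(x x y).
assert np.allclose(cross(a * x, b * y), (a * b) * cross(x, y))
# Right distributivity: (x + y) x z = x x z + y x z.
assert np.allclose(cross(x + y, z), cross(x, z) + cross(y, z))

# Nonassociative: (x x y) x z generally differs from x x (y x z) ...
assert not np.allclose(cross(cross(x, y), z), cross(x, cross(y, z)))
# ... but the Jacobi identity holds instead.
jacobi = cross(x, cross(y, z)) + cross(y, cross(z, x)) + cross(z, cross(x, y))
assert np.allclose(jacobi, 0.0)
```
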
Definition and motivation Motivating examples Definition Let K be a field, and let A be a vector space over K equipped with an additional binary operation from A × A to A, denoted here by · (that is, if x and y are any two elements of A, then x · y is an element of A that is called the product of x and y). Then A is an algebra over K if the following identities hold for all elements x, y, z in A, and all elements (often called scalars) a and b in K: Right distributivity: (x + y) · z = x · z + y · z. Left distributivity: z · (x + y) = z · x + z · y. Compatibility with scalars: (ax) · (by) = (ab)(x · y). These three axioms are another way of saying that the binary operation is bilinear. An algebra over K is sometimes also called a K-algebra, and K is called the base field of A. The binary operation is often referred to as multiplication in A. The convention adopted in this article is that multiplication of elements of an algebra is not necessarily associative, although some authors use the term algebra to refer to an associative algebra. When a binary operation on a vector space is commutative, left distributivity and right distributivity are equivalent, and, in this case, only one distributivity requires a proof. In general
https://en.wikipedia.org/wiki/Kolmogorov%27s%20zero%E2%80%93one%20law
In probability theory, Kolmogorov's zero–one law, named in honor of Andrey Nikolaevich Kolmogorov, specifies that a certain type of event, namely a tail event of independent σ-algebras, will either almost surely happen or almost surely not happen; that is, the probability of such an event occurring is zero or one. Tail events are defined in terms of countably infinite families of σ-algebras. For illustrative purposes, we present here the special case in which each sigma algebra is generated by a random variable Xk for k = 1, 2, 3, …. Let S be the sigma-algebra generated jointly by all of the Xk. Then, a tail event F ∈ S is an event which is probabilistically independent of each finite subset of these random variables. (Note: F belonging to S implies that membership in F is uniquely determined by the values of the Xk, but the latter condition is strictly weaker and does not suffice to prove the zero–one law.) For example, the event that the sequence of the Xk converges, and the event that its sum converges, are both tail events. If the Xk are, for example, all Bernoulli-distributed, then the event that there are infinitely many k such that Xk = Xk+1 = ⋯ = Xk+99 = 1 is a tail event. If each Xk models the outcome of the k-th coin toss in a modeled, infinite sequence of coin tosses, this means that a sequence of 100 consecutive heads occurring infinitely many times is a tail event in this model. Tail events are precisely those events whose occurrence can still be determined if an arbitrarily large but finite initial segment of the Xk is removed. In many situations, it can be easy to apply Kolmogorov's zero–one law to show that some event has probability 0 or 1, but surprisingly hard to determine which of these two extreme values is the correct one. Formulation A more general statement of Kolmogorov's zero–one law holds for sequences of independent σ-algebras. Let (Ω,F,P) be a probability space and let Fn be a sequence of σ-algebras contained in F. Let Gn be the smallest σ-algebra containing Fn, Fn+1, ….
The terminal σ-algebra of the Fn is defined as the intersection, over all n, of the smallest σ-algebra containing Fn, Fn+1, …. Kolmogorov's zero–one law asserts that, if the Fn are stochastically independent, then for any event E in this terminal σ-algebra, one has either P(E) = 0 or P(E) = 1. The statement of the law in terms of random variables is obtained from the latter by taking each Fn to be the σ-algebra generated by the random variable Xn. A tail event is then by definition an event which is measurable with respect to the σ-algebra generated by all Xn, but which is independent of any finite number of Xn. That is, a tail event is precisely an element of the terminal σ-algebra. Examples An invertible measure-preserving transformation on a standard probability space that obeys the 0-1 law is called a Kolmogorov automorphism. All Bernoulli automorphisms are Kolmogorov automorphisms but not vice versa. The presence of an infinite cluster in the context of percolation theory also obeys the 0-1 law. See also Borel–Cantelli lemma Hewitt–Savage zero–one law Lévy's zero–one law Long tail Tail risk References
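The coin-toss example admits a rough numerical illustration. The zero-one law guarantees the tail event "a run of k heads occurs infinitely often" has probability 0 or 1 (for a fair coin it is 1); a finite simulation can only hint at this, by showing that the chance of seeing at least one run of k heads within n tosses grows toward 1 as n grows. The run length and sample sizes below are arbitrary choices for the sketch.

```python
import random

def has_run(tosses, k):
    """True if the boolean sequence contains a run of k consecutive True."""
    run = 0
    for t in tosses:
        run = run + 1 if t else 0
        if run >= k:
            return True
    return False

random.seed(0)
k = 8
for n in (100, 2_000, 20_000):
    hits = sum(
        has_run([random.random() < 0.5 for _ in range(n)], k)
        for _ in range(100)
    )
    print(f"n={n:>6}: run of {k} heads seen in {hits}/100 trials")
```

The fraction of trials containing a run increases with n, consistent with the probability-one prediction for the infinite sequence.
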
https://en.wikipedia.org/wiki/Tangent%20vector
In mathematics, a tangent vector is a vector that is tangent to a curve or surface at a given point. Tangent vectors are described in the differential geometry of curves in the context of curves in Rn. More generally, tangent vectors are elements of a tangent space of a differentiable manifold. Tangent vectors can also be described in terms of germs. Formally, a tangent vector at the point x is a linear derivation of the algebra defined by the set of germs at x. Motivation Before proceeding to a general definition of the tangent vector, we discuss its use in calculus and its tensor properties. Calculus Let r(t) be a parametric smooth curve. The tangent vector is given by r′(t), where we have used a prime instead of the usual dot to indicate differentiation with respect to parameter t. The unit tangent vector is given by T(t) = r′(t)/|r′(t)|. Example Given the curve in , the unit tangent vector at is given by Contravariance If r(t) is given parametrically in the n-dimensional coordinate system xi (here we have used superscripts as an index instead of the usual subscript) by r(t) = (x1(t), x2(t), …, xn(t)) then the tangent vector field T = Ti is given by Ti = dxi/dt. Under a change of coordinates ui = ui(x1, …, xn), 1 ≤ i ≤ n, the tangent vector T̄ = T̄i in the ui-coordinate system is given by T̄i = dui/dt = (∂ui/∂xs)(dxs/dt) = Ts(∂ui/∂xs), where we have used the Einstein summation convention. Therefore, a tangent vector of a smooth curve will transform as a contravariant tensor of order one under a change of coordinates. Definition Let f : Rn → R be a differentiable function and let v be a vector in Rn. We define the directional derivative in the direction v at a point x ∈ Rn by Dvf(x) = (d/dt) f(x + tv)|t=0. The tangent vector at the point x may then be defined as v(f(x)) ≡ (Dvf)(x). Properties Let f, g : Rn → R be differentiable functions, let v, w be tangent vectors in Rn at x ∈ Rn, and let a, b ∈ R. Then (av + bw)(f) = a v(f) + b w(f), v(af + bg) = a v(f) + b v(g), and v(fg) = f(x) v(g) + g(x) v(f). Tangent vector on manifolds Let M be a differentiable manifold and let A(M) be the algebra of real-valued differentiable functions on M.
Then the tangent vector to M at a point x in the manifold is given by the derivation D : A(M) → R which shall be linear — i.e., for any f, g ∈ A(M) and a, b ∈ R we have D(af + bg) = a D(f) + b D(g). Note that the derivation will by definition have the Leibniz property D(fg)(x) = D(f)(x) g(x) + f(x) D(g)(x). See also References Bibliography Vectors (mathematics and physics)
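The calculus definition can be sketched numerically. The helix r(t) = (cos t, sin t, t) is a stand-in example (not the one from the article): the analytic derivative is compared against a central finite difference, and the unit tangent is formed by normalizing.

```python
import numpy as np

def r(t):
    """Helix r(t) = (cos t, sin t, t)."""
    return np.array([np.cos(t), np.sin(t), t])

def r_prime(t):
    """Analytic tangent vector r'(t) = (-sin t, cos t, 1)."""
    return np.array([-np.sin(t), np.cos(t), 1.0])

t0, h = 0.7, 1e-6
numeric = (r(t0 + h) - r(t0 - h)) / (2 * h)   # central difference
assert np.allclose(numeric, r_prime(t0), atol=1e-8)

# Unit tangent T(t) = r'(t)/|r'(t)|; for this helix |r'(t)| = sqrt(2).
T = r_prime(t0) / np.linalg.norm(r_prime(t0))
assert np.isclose(np.linalg.norm(r_prime(t0)), np.sqrt(2))
assert np.isclose(np.linalg.norm(T), 1.0)
```
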
https://en.wikipedia.org/wiki/Trace%20class
In mathematics, specifically functional analysis, a trace-class operator is a linear operator for which a trace may be defined, such that the trace is a finite number independent of the choice of basis used to compute the trace. This trace of trace-class operators generalizes the trace of matrices studied in linear algebra. All trace-class operators are compact operators. In quantum mechanics, mixed states are described by density matrices, which are certain trace-class operators. Trace-class operators are essentially the same as nuclear operators, though many authors reserve the term "trace-class operator" for the special case of nuclear operators on Hilbert spaces and use the term "nuclear operator" in more general topological vector spaces (such as Banach spaces). Note that the trace operator studied in partial differential equations is an unrelated concept. Definition Suppose H is a separable Hilbert space and A a bounded linear operator on H which is non-negative (i.e., positive semi-definite) and self-adjoint. The trace of A, denoted by tr(A), is the sum of the series tr(A) = Σk <Aek, ek>, where (ek) is an orthonormal basis of H. The trace is a sum of non-negative reals and is therefore a non-negative real or infinity. It can be shown that the trace does not depend on the choice of orthonormal basis. For an arbitrary bounded linear operator T on H we define its absolute value, denoted by |T|, to be the positive square root of T*T; that is, |T| := (T*T)^(1/2) is the unique bounded positive operator on H such that |T|^2 = T*T. The operator T is said to be in the trace class if tr(|T|) < ∞. We denote the space of all trace class linear operators on H by B1(H). (One can show that this is indeed a vector space.) If T is in the trace class, we define the trace of T by tr(T) = Σk <Tek, ek>, where (ek) is an arbitrary orthonormal basis of H. It can be shown that this is an absolutely convergent series of complex numbers whose sum does not depend on the choice of orthonormal basis.
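In finite dimensions every operator is trace class, and the basis independence of the trace can be checked directly: the sketch below (with a randomly generated matrix) computes sum_k <A e_k, e_k> in two different orthonormal bases, and verifies that tr|A| equals the sum of the singular values of A.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def trace_in_basis(A, basis):
    """Sum of <A e_k, e_k> over the orthonormal columns e_k of `basis`."""
    return sum(np.vdot(basis[:, k], A @ basis[:, k])
               for k in range(basis.shape[1]))

standard = np.eye(n)
# A random unitary matrix: its columns form another orthonormal basis.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

assert np.isclose(trace_in_basis(A, standard), trace_in_basis(A, Q))
assert np.isclose(trace_in_basis(A, standard), np.trace(A))

# |A| = (A*A)^{1/2}; tr|A| equals the sum of singular values of A.
singular_values = np.linalg.svd(A, compute_uv=False)
eigs = np.linalg.eigvalsh(A.conj().T @ A)          # eigenvalues of A*A
trace_abs = np.sum(np.sqrt(np.clip(eigs, 0, None)))
assert np.isclose(trace_abs, singular_values.sum())
```
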
When H is finite-dimensional, every operator is trace class and this definition of the trace of A coincides with the definition of the trace of a matrix. Equivalent formulations Given a bounded linear operator T : H → H, each of the following statements is equivalent to T being in the trace class: For some orthonormal basis (ek) of H, the sum of positive terms Σk <|T|ek, ek> is finite. For every orthonormal basis (ek) of H, the sum of positive terms Σk <|T|ek, ek> is finite. T is a compact operator and Σi si(T) < ∞, where s1(T), s2(T), … are the eigenvalues of |T| (also known as the singular values of T) with each eigenvalue repeated as often as its multiplicity. There exist two orthogonal sequences (xi) and (yi) in H and a sequence (λi) in ℓ1 such that Tx = Σi λi <x, xi> yi for all x ∈ H. Here, the infinite sum means that the sequence of partial sums converges to Tx in H. T is a nuclear operator. T is equal to the composition of two Hilbert–Schmidt operators. |T|^(1/2) is a Hilbert–Schmidt operator. T is an integral operator. There exist weakly closed and equicontinuous (and thus weakly compact) subsets and of and respectively, and some positive Radon measure on of total mass such that for all and :
https://en.wikipedia.org/wiki/DHT
DHT may refer to: Science and technology Discrete Hartley transform, in mathematics Distributed hash table, lookup service in computing Chemistry Dihydrotestosterone, hormone derived from testosterone Dihydrotachysterol, synthetic vitamin D analog Other DHT (band), Belgian dance duo Dr Hadwen Trust, UK charity promoting animal experiments alternatives Dalhart Municipal Airport, (IATA code), an airport near Dalhart, Texas Grande Prairie Daily Herald-Tribune, a newspaper in Canada David Hume Tower, the former name of 40 George Square, a University of Edinburgh building See also
https://en.wikipedia.org/wiki/Seven%20Bridges%20of%20K%C3%B6nigsberg
The Seven Bridges of Königsberg is a historically notable problem in mathematics. Its negative resolution by Leonhard Euler in 1736 laid the foundations of graph theory and prefigured the idea of topology. The city of Königsberg in Prussia (now Kaliningrad, Russia) was set on both sides of the Pregel River, and included two large islands—Kneiphof and Lomse—which were connected to each other, and to the two mainland portions of the city, by seven bridges. The problem was to devise a walk through the city that would cross each of those bridges once and only once. By way of specifying the logical task unambiguously, solutions involving either reaching an island or mainland bank other than via one of the bridges, or accessing any bridge without crossing to its other end are explicitly unacceptable. Euler proved that the problem has no solution. The difficulty he faced was the development of a suitable technique of analysis, and of subsequent tests that established this assertion with mathematical rigor. Euler's analysis Euler first pointed out that the choice of route inside each land mass is irrelevant and that the only important feature of a route is the sequence of bridges crossed. This allowed him to reformulate the problem in abstract terms (laying the foundations of graph theory), eliminating all features except the list of land masses and the bridges connecting them. In modern terms, one replaces each land mass with an abstract "vertex" or node, and each bridge with an abstract connection, an "edge", which only serves to record which pair of vertices (land masses) is connected by that bridge. The resulting mathematical structure is a graph. Since only the connection information is relevant, the shape of pictorial representations of a graph may be distorted in any way, without changing the graph itself. Only the existence (or absence) of an edge between each pair of nodes is significant.
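Euler's degree argument is easy to run in code. The sketch below encodes the seven bridges as a multigraph (labels A–D for the four land masses are chosen here for illustration, with A standing for Kneiphof) and counts odd-degree vertices; an Eulerian walk requires at most two of them.

```python
from collections import Counter

bridges = [
    ("A", "B"), ("A", "B"),   # two bridges between Kneiphof and one bank
    ("A", "C"), ("A", "C"),   # two bridges to the other bank
    ("A", "D"),               # bridge to the other island
    ("B", "D"), ("C", "D"),   # one bridge from each bank to that island
]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = [v for v, d in degree.items() if d % 2 == 1]
print(dict(degree))   # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
print(len(odd))       # 4 odd-degree vertices, so no Eulerian walk exists
```

All four land masses have odd degree (one touched by 5 bridges, the other three by 3 each), so the walk Euler sought cannot exist.
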
For example, it does not matter whether the edges drawn are straight or curved, or whether one node is to the left or right of another. Next, Euler observed that (except at the endpoints of the walk), whenever one enters a vertex by a bridge, one leaves the vertex by a bridge. In other words, during any walk in the graph, the number of times one enters a non-terminal vertex equals the number of times one leaves it. Now, if every bridge has been traversed exactly once, it follows that, for each land mass (except for the ones chosen for the start and finish), the number of bridges touching that land mass must be even (half of them, in the particular traversal, will be traversed "toward" the landmass; the other half, "away" from it). However, all four of the land masses in the original problem are touched by an odd number of bridges (one is touched by 5 bridges, and each of the other three is touched by 3). Since, at most, two land masses can serve as the endpoints of a walk, the proposition of a walk traversing each bridge once leads to a contradiction.
https://en.wikipedia.org/wiki/Triviality%20%28mathematics%29
In mathematics, the adjective trivial is often used to refer to a claim or a case which can be readily obtained from context, or an object which possesses a simple structure (e.g., groups, topological spaces). The noun triviality usually refers to a simple technical aspect of some proof or definition. The origin of the term in mathematical language comes from the medieval trivium curriculum, which distinguishes it from the more difficult quadrivium curriculum. The opposite of trivial is nontrivial, which is commonly used to indicate that an example or a solution is not simple, or that a statement or a theorem is not easy to prove. Whether a given situation is trivial depends on who considers it: it may be obvious to someone with sufficient knowledge or experience, yet hard even to understand, and so far from trivial, for someone encountering it for the first time. There can also be disagreement about how quickly and easily a problem should be recognized for it to be treated as trivial. Triviality is therefore not a universally agreed property in mathematics and logic. Trivial and nontrivial solutions In mathematics, the term "trivial" is often used to refer to objects (e.g., groups, topological spaces) with a very simple structure. These include, among others: Empty set: the set containing no or null members Trivial group: the mathematical group containing only the identity element Trivial ring: a ring defined on a singleton set "Trivial" can also be used to describe solutions to an equation that have a very simple structure, but for the sake of completeness cannot be omitted. These solutions are called the trivial solutions. For example, consider the differential equation y′ = y, where y = y(x) is a function whose derivative is y′.
The trivial solution is the zero function y(x) = 0, while a nontrivial solution is the exponential function y(x) = e^x. The differential equation f″(x) = −λf(x) with boundary conditions f(0) = f(L) = 0 is important in mathematics and physics, as it could be used to describe a particle in a box in quantum mechanics, or a standing wave on a string. It always includes the solution f(x) = 0, which is considered obvious and hence is called the "trivial" solution. In some cases, there may be other solutions (sinusoids), which are called "nontrivial" solutions. Similarly, mathematicians often describe Fermat's last theorem as asserting that there are no nontrivial integer solutions to the equation a^n + b^n = c^n, where n is greater than 2. Clearly, there are some solutions to the equation. For example, a = b = c = 0 is a solution for any n, but such solutions are obvious and obtainable with little effort, and hence "trivial". In mathematical reasoning Trivial may also refer to any easy case of a proof, which for the sake of completeness cannot be ignored. For instance, proofs by mathematical induction have two parts: the "base case" which shows that the theorem is true for a particular initial value (such as n = 0 or n = 1), and the inductive step which
https://en.wikipedia.org/wiki/Trivial%20group
In mathematics, a trivial group or zero group is a group consisting of a single element. All such groups are isomorphic, so one often speaks of the trivial group. The single element of the trivial group is the identity element and so it is usually denoted as such: 0, 1, or e, depending on the context. If the group operation is denoted ∗ then it is defined by e ∗ e = e. The similarly defined trivial monoid is also a group, since its only element is its own inverse, and is hence the same as the trivial group. The trivial group is distinct from the empty set, which has no elements, hence lacks an identity element, and so cannot be a group. Definitions Given any group G, the group consisting of only the identity element is a subgroup of G and, being the trivial group, is called the trivial subgroup of G. The term, when referred to "G has no nontrivial proper subgroups", refers to the only subgroups of G being the trivial group and the group G itself. Properties The trivial group is cyclic of order 1; as such it may be denoted Z1 or C1. If the group operation is called addition, the trivial group is usually denoted by 0. If the group operation is called multiplication then 1 can be a notation for the trivial group. Combining these leads to the trivial ring in which the addition and multiplication operations are identical and 0 = 1. The trivial group serves as the zero object in the category of groups, meaning it is both an initial object and a terminal object. The trivial group can be made a (bi-)ordered group by equipping it with the trivial non-strict order. See also References Finite groups
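A one-element structure satisfies every group axiom vacuously; the small sketch below spells this out by brute-force checking closure, associativity, identity, and inverses for the set {e} with e ∗ e = e.

```python
from itertools import product

elements = ["e"]
op = {("e", "e"): "e"}    # the single product e * e = e
identity = "e"

# Closure and associativity (checked over all triples, here just one).
for a, b, c in product(elements, repeat=3):
    assert op[(a, b)] in elements
    assert op[(op[(a, b)], c)] == op[(a, op[(b, c)])]

# Identity and inverses: e is the identity and its own inverse.
for a in elements:
    assert op[(identity, a)] == a == op[(a, identity)]
    assert any(op[(a, b)] == identity for b in elements)
```
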
https://en.wikipedia.org/wiki/Hermann%20Minkowski
Hermann Minkowski (22 June 1864 – 12 January 1909) was a German mathematician and professor at Königsberg, Zürich and Göttingen. He created and developed the geometry of numbers and used geometrical methods to solve problems in number theory, mathematical physics, and the theory of relativity. Minkowski is perhaps best known for his foundational work describing space and time as a four-dimensional space, now known as "Minkowski spacetime", which facilitated geometric interpretations of Albert Einstein's special theory of relativity (1905). Personal life and family Hermann Minkowski was born in the town of Aleksota, the Suwałki Governorate, the Kingdom of Poland, since 1864 part of the Russian Empire, to Lewin Boruch Minkowski, a merchant who subsidized the building of the choral synagogue in Kovno, and Rachel Taubmann, both of Jewish descent. Hermann was a younger brother of the medical researcher Oskar (born 1858). In different sources Minkowski's nationality is variously given as German, Polish, Lithuanian-German, or Russian. To escape Jewish persecution in the Russian Empire, the family moved to Königsberg in 1872, where the father became involved in rag export and later in manufacture of mechanical clockwork tin toys (he operated his firm Lewin Minkowski & Son with his eldest son Max). Minkowski studied in Königsberg and taught in Bonn (1887–1894), Königsberg (1894–1896) and Zurich (1896–1902), and finally in Göttingen from 1902 until his death in 1909. He married Auguste Adler in 1897 with whom he had two daughters; the electrical engineer and inventor Reinhold Rudenberg was his son-in-law. Minkowski died suddenly of appendicitis in Göttingen on 12 January 1909. David Hilbert's obituary of Minkowski illustrates the deep friendship between the two mathematicians (translated): Since my student years Minkowski was my best, most dependable friend who supported me with all the depth and loyalty that was so characteristic of him.
Our science, which we loved above all else, brought us together; it seemed to us a garden full of flowers. In it, we enjoyed looking for hidden pathways and discovered many a new perspective that appealed to our sense of beauty, and when one of us showed it to the other and we marveled over it together, our joy was complete. He was for me a rare gift from heaven and I must be grateful to have possessed that gift for so long. Now death has suddenly torn him from our midst. However, what death cannot take away is his noble image in our hearts and the knowledge that his spirit continues to be active in us. Max Born delivered the obituary on behalf of the mathematics students at Göttingen. The main-belt asteroid 12493 Minkowski and M-matrices are named in Minkowski's honor. Education and career Minkowski was educated in East Prussia at the Albertina University of Königsberg, where he earned his doctorate in 1885 under the direction of Ferdinand von Lindemann. In 1883, while still a student at Königsberg
https://en.wikipedia.org/wiki/Multiplier
Multiplier may refer to:

Mathematics
- Multiplier (arithmetic), the number of multiples being computed in multiplication
- Constant multiplier, a constant factor with units of measurement
- Lagrange multiplier, a scalar variable used in mathematics to solve an optimisation problem for a given constraint
- Multiplier (Fourier analysis), an operator that multiplies the Fourier coefficients of a function by a specified function (known as the symbol)
- Multiplier of orbit, a formula for computing a value of a variable based on its own previous value or values; see Periodic points of complex quadratic mappings
- Characteristic multiplier, an eigenvalue of a monodromy matrix
- Multiplier algebra, a construction on C*-algebras and similar structures

Electrical engineering
- Binary multiplier, a digital circuit to perform rapid multiplication of two numbers in binary representation
- Analog multiplier, a device that multiplies two analog signals
- Frequency multiplier, a device that generates a signal at an integer multiple of its input frequency
- Voltage multiplier, an electrical circuit that converts AC electrical power from a lower voltage to a higher DC voltage
- Schweigger multiplier, an early galvanometer

Macroeconomics
- Multiplier (economics), any measure of the proportional effect of an exogenous variable on an endogenous variable
- Fiscal multiplier, the ratio of the change in aggregate demand to the change in government spending that caused it
- Money multiplier, the ratio of the money generated by the banking system to the central bank's increase in the monetary base that caused it

Others
- Force multiplier, in warfare a factor that dramatically increases the combat effectiveness of a given military force
- Multiplier fishing reel, which creates less friction when casting
- CPU multiplier, which allows a CPU to perform more cycles per single cycle of the front side bus
- Multiplier (linguistics), an adjective indicating the number of times something is to be multiplied
- Multipliers: How the Best Leaders Make Everyone Smarter
https://en.wikipedia.org/wiki/Integration%20by%20substitution
In calculus, integration by substitution, also known as u-substitution, reverse chain rule or change of variables, is a method for evaluating integrals and antiderivatives. It is the counterpart to the chain rule for differentiation, and can loosely be thought of as using the chain rule "backwards." Substitution for a single variable Introduction (indefinite integrals) Before stating the result rigorously, consider a simple case using indefinite integrals. Compute \int (2x^3 + 1)^7 x^2 \, dx. Set u = 2x^3 + 1. This means du/dx = 6x^2, or in differential form, du = 6x^2 \, dx. Now: \int (2x^3 + 1)^7 x^2 \, dx = \tfrac{1}{6} \int u^7 \, du = \tfrac{1}{48} u^8 + C = \tfrac{1}{48}(2x^3 + 1)^8 + C, where C is an arbitrary constant of integration. This procedure is frequently used, but not all integrals are of a form that permits its use. In any event, the result should be verified by differentiating and comparing to the original integrand. For definite integrals, the limits of integration must also be adjusted, but the procedure is mostly the same. Statement for definite integrals Let g : [a, b] \to I be a differentiable function with a continuous derivative, where I \subseteq \mathbb{R} is an interval. Suppose that f : I \to \mathbb{R} is a continuous function. Then: \int_a^b f(g(x)) \, g'(x) \, dx = \int_{g(a)}^{g(b)} f(u) \, du. In Leibniz notation, the substitution u = g(x) yields: du/dx = g'(x). Working heuristically with infinitesimals yields the equation du = g'(x) \, dx, which suggests the substitution formula above. (This equation may be put on a rigorous foundation by interpreting it as a statement about differential forms.) One may view the method of integration by substitution as a partial justification of Leibniz's notation for integrals and derivatives. The formula is used to transform one integral into another integral that is easier to compute. Thus, the formula can be read from left to right or from right to left in order to simplify a given integral. When used in the former manner, it is sometimes known as u-substitution or w-substitution, in which a new variable is defined to be a function of the original variable found inside the composite function, multiplied by the derivative of the inner function.
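An indefinite-integral substitution of this kind can always be checked by differentiating the result: the derivative of the antiderivative must equal the original integrand. A minimal Python sketch of that check, using the illustrative integrand (2x³ + 1)⁷x² (the example and helper names here are chosen for illustration, not taken from the article):

```python
# Check a u-substitution result by differentiation (example chosen for illustration):
# with u = 2x^3 + 1, an antiderivative of (2x^3 + 1)^7 * x^2 is (2x^3 + 1)^8 / 48.

def integrand(x):
    return (2 * x**3 + 1) ** 7 * x**2

def antiderivative(x):
    return (2 * x**3 + 1) ** 8 / 48

def derivative(F, x, h=1e-6):
    # Central finite-difference approximation of F'(x).
    return (F(x + h) - F(x - h)) / (2 * h)

# F'(x) should match the integrand at any sample point.
for x in [0.1, 0.5, 0.9]:
    approx = derivative(antiderivative, x)
    exact = integrand(x)
    assert abs(approx - exact) / max(1.0, abs(exact)) < 1e-4
```

This mirrors the article's advice that "the result should be verified by differentiating and comparing to the original integrand", only numerically rather than symbolically.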
The latter manner is commonly used in trigonometric substitution, replacing the original variable with a trigonometric function of a new variable and the original differential with the differential of the trigonometric function. Proof Integration by substitution can be derived from the fundamental theorem of calculus as follows. Let f and g be two functions satisfying the above hypothesis that f is continuous on I and g' is integrable on the closed interval [a, b]. Then the function f(g(x)) \, g'(x) is also integrable on [a, b]. Hence the integrals \int_a^b f(g(x)) \, g'(x) \, dx and \int_{g(a)}^{g(b)} f(u) \, du in fact exist, and it remains to show that they are equal. Since f is continuous, it has an antiderivative F. The composite function F \circ g is then defined. Since g is differentiable, combining the chain rule and the definition of an antiderivative gives: (F \circ g)'(x) = F'(g(x)) \, g'(x) = f(g(x)) \, g'(x). Applying the fundamental theorem of calculus twice gives: \int_a^b f(g(x)) \, g'(x) \, dx = (F \circ g)(b) - (F \circ g)(a) = F(g(b)) - F(g(a)) = \int_{g(a)}^{g(b)} f(u) \, du, which is the substitution rule. Examples: Definite integrals Example 1 Consider the integral: \int_0^2 x \cos(x^2 + 1) \, dx. Make the substitution u = x^2 + 1 to obtain du = 2x \, dx, meaning x \, dx = \tfrac{1}{2} du. Therefore: \int_0^2 x \cos(x^2 + 1) \, dx = \tfrac{1}{2} \int_1^5 \cos(u) \, du = \tfrac{1}{2}(\sin 5 - \sin 1). Since the lower limit x = 0 was replaced with u = 1 and the upper limit x = 2 with u = 5, a transformation back into terms of x was unnecessary.
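The definite-integral form of the substitution rule can also be verified numerically: computing both sides of the formula with any quadrature routine should give the same value. The sketch below uses a hand-rolled composite Simpson's rule and the illustrative integral of x·cos(x² + 1) over [0, 2] (the `simpson` helper and the example are assumptions for this demonstration):

```python
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# Left side: integral of x*cos(x^2 + 1) over [0, 2].
lhs = simpson(lambda x: x * math.cos(x**2 + 1), 0, 2)
# Right side, after substituting u = x^2 + 1 (so du = 2x dx, and the
# limits x = 0, x = 2 become u = 1, u = 5): (1/2) * integral of cos(u) over [1, 5].
rhs = 0.5 * simpson(math.cos, 1, 5)
# Closed form for comparison: (sin 5 - sin 1) / 2.
exact = 0.5 * (math.sin(5) - math.sin(1))

assert abs(lhs - rhs) < 1e-9
assert abs(lhs - exact) < 1e-9
```

Note that the limits of integration transform along with the variable, which is exactly why no transformation back into terms of x is needed.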
https://en.wikipedia.org/wiki/List%20of%20named%20matrices
This article lists some important classes of matrices used in mathematics, science and engineering. A matrix (plural matrices, or less commonly matrixes) is a rectangular array of numbers called entries. Matrices have a long history of both study and application, leading to diverse ways of classifying matrices. A first group is matrices satisfying concrete conditions on the entries, including constant matrices. Important examples include the identity matrix, whose entries are 1 on the main diagonal and 0 elsewhere (aij = δij), and the zero matrix of dimension m × n, all of whose entries are 0. Further ways of classifying matrices are according to their eigenvalues, or by imposing conditions on the product of the matrix with other matrices. Finally, many domains, both in mathematics and other sciences including physics and chemistry, have particular matrices that are applied chiefly in these areas. Constant matrices The list below comprises matrices whose elements are constant for any given dimension (size) of matrix. The matrix entries will be denoted aij. The table below uses the Kronecker delta δij for two integers i and j, which is 1 if i = j and 0 otherwise. Specific patterns for entries The following lists matrices whose entries are subject to certain conditions. Many of them apply to square matrices only, that is, matrices with the same number of columns and rows. The main diagonal of a square matrix is the diagonal joining the upper left corner and the lower right one, or equivalently the entries ai,i. The other diagonal is called the anti-diagonal (or counter-diagonal). Matrices satisfying some equations A number of matrix-related notions are about properties of products or inverses of the given matrix. The matrix product of an m-by-n matrix A and an n-by-k matrix B is the m-by-k matrix C given by cij = Σr=1..n air brj. This matrix product is denoted AB. Unlike the product of numbers, matrix products are not commutative, that is to say, AB need not be equal to BA. A number of notions are concerned with the failure of this commutativity.
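The product formula cij = Σr air brj, and the failure of commutativity it can exhibit, are easy to demonstrate directly. The Python sketch below is a naive implementation (the `matmul` helper name and the 2×2 example matrices are chosen here for illustration):

```python
def matmul(A, B):
    # c[i][j] = sum over r of a[i][r] * b[r][j]; requires cols(A) == rows(B).
    m, n, k = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A)
    return [[sum(A[i][r] * B[r][j] for r in range(n)) for j in range(k)]
            for i in range(m)]

A = [[1, 2],
     [0, 1]]
B = [[1, 0],
     [3, 1]]

print(matmul(A, B))  # [[7, 2], [3, 1]]
print(matmul(B, A))  # [[1, 2], [3, 7]]; AB != BA in general
```

Multiplying an m-by-n by an n-by-k matrix this way costs O(m·n·k) operations; production code would use an optimized library routine instead.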
An inverse of a square matrix A is a matrix B (necessarily of the same dimension as A) such that AB = I. Equivalently, BA = I. An inverse need not exist. If it exists, B is uniquely determined, and is also called the inverse of A, denoted A−1. Matrices with conditions on eigenvalues or eigenvectors Matrices generated by specific data Matrices used in statistics The following matrices find their main application in statistics and probability theory.
- Bernoulli matrix — a square matrix with entries +1, −1, with equal probability of each.
- Centering matrix — a matrix which, when multiplied with a vector, has the same effect as subtracting the mean of the components of the vector from every component.
- Correlation matrix — a symmetric n×n matrix, formed by the pairwise correlation coefficients of several random variables.
- Covariance matrix — a symmetric n×n matrix, formed by the pairwise covariances of several random variables. Sometimes called a dispersion matrix.
- Dispersion matrix — another name for a covariance ma
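As a sketch of how a covariance matrix is formed from data, the following Python snippet computes the pairwise sample covariances of several variables, dividing by N − 1 as is conventional for sample covariance (the function name and the toy data set are assumptions for illustration):

```python
def covariance_matrix(samples):
    # samples: list of observations, each a list of n variable values.
    # Returns the n x n sample covariance matrix (dividing by N - 1).
    n_obs = len(samples)
    n_var = len(samples[0])
    means = [sum(row[j] for row in samples) / n_obs for j in range(n_var)]
    cov = [[0.0] * n_var for _ in range(n_var)]
    for i in range(n_var):
        for j in range(n_var):
            cov[i][j] = sum((row[i] - means[i]) * (row[j] - means[j])
                            for row in samples) / (n_obs - 1)
    return cov

# Toy data: the second variable is exactly twice the first.
data = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
C = covariance_matrix(data)
# C is symmetric, as the article notes: C[i][j] == C[j][i] for all i, j.
```

The symmetry C[i][j] = C[j][i] follows immediately from the definition, since cov(Xi, Xj) = cov(Xj, Xi); the same argument applies to the correlation matrix.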
https://en.wikipedia.org/wiki/Bipolar
Bipolar may refer to:

Astronomy
- Bipolar nebula, a distinctive nebular formation
- Bipolar outflow, two continuous flows of gas from the poles of a star

Mathematics
- Bipolar coordinates, a two-dimensional orthogonal coordinate system
- Bipolar set, a derivative of a polar set
- Bipolar theorem, a theorem in convex analysis which provides necessary and sufficient conditions for a cone to be equal to its bipolar

Medicine
- Bipolar disorder, a mental disorder that causes periods of depression and periods of elevated mood
- Bipolar I disorder, a bipolar spectrum disorder characterized by the occurrence of at least one manic or mixed episode
- Bipolar II disorder, a bipolar spectrum disorder characterized by at least one episode of hypomania and at least one episode of major depression
- Bipolar disorder not otherwise specified, a diagnosis for bipolar disorder when it does not fall within the other established sub-types
- Bipolar neuron, a type of neuron which has two extensions

Music
Albums
- Bipolar (Up Dharma Down album), 2008
- Bi-Polar (Vanilla Ice album), 2001
- Bipolar, a 2009 album by rock group El Cuarteto de Nos
Songs
- "Bipolar" (Peso Pluma song), 2023
- "Bipolar", a song by Blonde Redhead from their 1997 album Fake Can Be Just as Good
- "Bipolar", a song by Gloria Trevi from her 2013 album De Película
- "Bipolar", a song by Gucci Mane from his 2018 album Evil Genius
- "Bipolar", a 2019 song by Kiiara
- "Bi Polar", a 2021 song by Bhad Bhabie
- "Bye Bipolar", a song by Brandy from B7, 2020

Technology
- Bipolar electricity transmission, using a pair of conductors in opposite polarity
- Bipolar encoding, a type of line code where two nonzero values are used
- Bipolar violation, a violation of the bipolar encoding rules
- Bipolar electric motor, an electric motor with only two poles to its stationary field
- Bipolar (locomotive), a locomotive using a bipolar electric motor
- Bipolar signal, a signal that may assume either of two polarities, neither of which is zero
Transistors
- Bipolar junction transistor (BJT)
- Heterojunction bipolar transistor (HBT)
- Insulated-gate bipolar transistor (IGBT)

Other uses
- Bipolarity, polarity in international relations involving two states

See also
- Dipole (disambiguation)