https://en.wikipedia.org/wiki/Superfactorial
In mathematics, and more specifically number theory, the superfactorial of a positive integer n is the product of the first n factorials. They are a special case of the Jordan–Pólya numbers, which are products of arbitrary collections of factorials. Definition The nth superfactorial may be defined as: sf(n) = 1! · 2! · ⋯ · n!. Following the usual convention for the empty product, the superfactorial of 0 is 1. The sequence of superfactorials, beginning with sf(0) = 1, is: 1, 1, 2, 12, 288, 34560, 24883200, 125411328000, ... Properties Just as the factorials can be continuously interpolated by the gamma function, the superfactorials can be continuously interpolated by the Barnes G-function. According to an analogue of Wilson's theorem on the behavior of factorials modulo prime numbers, when p is an odd prime number sf(p − 1) ≡ (p − 1)!! (mod p), where !! is the notation for the double factorial. For every integer n, the number sf(4n)/(2n)! is a square number. This may be expressed as stating that, in the formula for sf(4n) as a product of factorials, omitting one of the factorials (the middle one, (2n)!) results in a square product. Additionally, if any n + 1 integers are given, the product of their pairwise differences is always a multiple of sf(n), and equals the superfactorial sf(n) when the given numbers are consecutive. References External links Integer sequences Factorial and binomial topics
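A short, self-contained Python sketch (not part of the original article) that computes the superfactorials above and spot-checks two of the stated properties; the function names are illustrative.

```python
from math import factorial, isqrt

def superfactorial(n: int) -> int:
    """Product of the first n factorials; sf(0) = 1 by the empty-product convention."""
    result = 1
    for k in range(1, n + 1):
        result *= factorial(k)
    return result

def double_factorial(n: int) -> int:
    """n!! = n * (n - 2) * (n - 4) * ... down to 1 or 2."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

# The first few superfactorials: 1, 1, 2, 12, 288, 34560, 24883200, ...
print([superfactorial(n) for n in range(8)])

# Analogue of Wilson's theorem: sf(p - 1) ≡ (p - 1)!! (mod p) for odd primes p.
for p in (3, 5, 7, 11, 13):
    assert superfactorial(p - 1) % p == double_factorial(p - 1) % p

# sf(4n) / (2n)! is a perfect square.
for n in range(1, 5):
    q = superfactorial(4 * n) // factorial(2 * n)
    assert isqrt(q) ** 2 == q
```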
https://en.wikipedia.org/wiki/Glossary%20of%20differential%20geometry%20and%20topology
This is a glossary of terms specific to differential geometry and differential topology. The following three glossaries are closely related: Glossary of general topology Glossary of algebraic topology Glossary of Riemannian and metric geometry. See also: List of differential geometry topics Words in italics denote a self-reference to this glossary. A Atlas B Bundle – see fiber bundle. basic element – A basic element with respect to an element g is an element of a cochain complex (e.g., the complex of differential forms on a manifold) that is closed and whose contraction by g is zero. C Chart Cobordism Codimension – The codimension of a submanifold is the dimension of the ambient space minus the dimension of the submanifold. Connected sum Connection Cotangent bundle – the vector bundle of cotangent spaces on a manifold. Cotangent space D Diffeomorphism – Given two differentiable manifolds M and N, a bijective map f from M to N is called a diffeomorphism if both f and its inverse are smooth functions. Doubling – Given a manifold M with boundary, doubling is taking two copies of M and identifying their boundaries. As the result we get a manifold without boundary. E Embedding F Fiber – In a fiber bundle, the preimage of a point x in the base is called the fiber over x. Fiber bundle Frame – A frame at a point of a differentiable manifold M is a basis of the tangent space at the point. Frame bundle – the principal bundle of frames on a smooth manifold. Flow G Genus H Hypersurface – A hypersurface is a submanifold of codimension one. I Immersion Integration along fibers L Lens space – A lens space is a quotient of the 3-sphere (or (2n + 1)-sphere) by a free isometric action of the cyclic group Zk. M Manifold – A topological manifold is a locally Euclidean Hausdorff space. (In Wikipedia, a manifold need not be paracompact or second-countable.) A Ck manifold is a differentiable manifold whose chart overlap functions are k times continuously differentiable. A C∞ or smooth manifold is a differentiable manifold whose chart overlap functions are infinitely continuously differentiable. N Neat submanifold – A submanifold whose boundary equals its intersection with the boundary of the manifold into which it is embedded. O Orientation of a vector bundle P Parallelizable – A smooth manifold is parallelizable if it admits a smooth global frame. This is equivalent to the tangent bundle being trivial. Poincaré lemma Principal bundle – A principal bundle is a fiber bundle P together with an action on P by a Lie group G that preserves the fibers of P and acts simply transitively on those fibers. Pullback S Section Submanifold – the image of a smooth embedding of a manifold. Submersion Surface – a two-dimensional manifold or submanifold. Systole – least length of a noncontractible loop. T Tangent bundle – the vector bundle of tangent spaces on a differentiable manifold. Tangent field – a
https://en.wikipedia.org/wiki/Hadamard%20matrix
In mathematics, a Hadamard matrix, named after the French mathematician Jacques Hadamard, is a square matrix whose entries are either +1 or −1 and whose rows are mutually orthogonal. In geometric terms, this means that each pair of rows in a Hadamard matrix represents two perpendicular vectors, while in combinatorial terms, it means that each pair of rows has matching entries in exactly half of their columns and mismatched entries in the remaining columns. It is a consequence of this definition that the corresponding properties hold for columns as well as rows. The n-dimensional parallelotope spanned by the rows of an n×n Hadamard matrix has the maximum possible n-dimensional volume among parallelotopes spanned by vectors whose entries are bounded in absolute value by 1. Equivalently, a Hadamard matrix has maximal determinant among matrices with entries of absolute value less than or equal to 1 and so is an extremal solution of Hadamard's maximal determinant problem. Certain Hadamard matrices can almost directly be used as an error-correcting code using a Hadamard code (generalized in Reed–Muller codes), and are also used in balanced repeated replication (BRR), used by statisticians to estimate the variance of a parameter estimator. Properties Let H be a Hadamard matrix of order n. The transpose of H is closely related to its inverse. In fact: where In is the n × n identity matrix and HT is the transpose of H. To see that this is true, notice that the rows of H are all orthogonal vectors over the field of real numbers and each have length . Dividing H through by this length gives an orthogonal matrix whose transpose is thus its inverse. Multiplying by the length again gives the equality above. As a result, where det(H) is the determinant of H. Suppose that M is a complex matrix of order n, whose entries are bounded by |Mij| ≤ 1, for each i, j between 1 and n. Then Hadamard's determinant bound states that Equality in this bound is attained for a real matrix M if and only if M is a Hadamard matrix. The order of a Hadamard matrix must be 1, 2, or a multiple of 4. The proof of the nonexistence of Hadamard matrices with dimensions other than 1, 2, or a multiple of 4 follows: If , then there is at least one scalar product of 2 rows which has to be 0. The scalar product is a sum of n values each of which is either 1 or -1, therefore the sum is odd for odd n, so n must be even. If with , and there exists an Hadamard matrix , then it has the property that for any : Now we define the matrix by setting . Note that has all 1s in row 0. We check that the matrix is also a Hadamard matrix: Row 1 and row 2, like all other rows except row 0, must have entries of 1 and entries of -1 each. (*) Let denote the number of 1s of row 2 beneath 1s in row 1. Let denote the number of -1s of row 2 beneath 1s in row 1. Let denote the number of 1s of row 2 beneath -1s in row 1. Let denote the number of -1s of row 2 beneath -1s in row 1. Row 2 has to
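A small Python sketch (added here for illustration) of the defining property stated above, H Hᵀ = n·I, and of the fact that Hadamard matrices attain Hadamard's determinant bound. It builds matrices by Sylvester's doubling construction, which is a standard construction not described in the excerpt above.

```python
import numpy as np

def sylvester_hadamard(k: int) -> np.ndarray:
    """Hadamard matrix of order 2**k via the classical Sylvester doubling
    H_{2n} = [[H_n, H_n], [H_n, -H_n]] (a standard construction, assumed here)."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(3)          # order n = 8, a multiple of 4
n = H.shape[0]

# Defining property: rows are mutually orthogonal, i.e. H @ H.T = n * I.
assert np.array_equal(H @ H.T, n * np.eye(n, dtype=int))

# Hadamard's determinant bound |det M| <= n**(n/2) is attained: |det H| = n**(n/2).
assert abs(round(np.linalg.det(H))) == n ** (n // 2)
```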
https://en.wikipedia.org/wiki/Solid%20geometry
Solid geometry or stereometry is the geometry of three-dimensional Euclidean space (3D space). A solid figure is the region of 3D space bounded by a two-dimensional surface; for example, a solid ball consists of a sphere and its interior. Solid geometry deals with the measurements of volumes of various solids, including pyramids, prisms (and other polyhedrons), cubes, cylinders, cones (and truncated cones). History The Pythagoreans dealt with the regular solids, but the pyramid, prism, cone and cylinder were not studied until the Platonists. Eudoxus established their measurement, proving the pyramid and cone to have one-third the volume of a prism and cylinder on the same base and of the same height. He was probably also the discoverer of a proof that the volume enclosed by a sphere is proportional to the cube of its radius. Topics Basic topics in solid geometry and stereometry include: incidence of planes and lines dihedral angle and solid angle the cube, cuboid, parallelepiped the tetrahedron and other pyramids prisms octahedron, dodecahedron, icosahedron cones and cylinders the sphere other quadrics: spheroid, ellipsoid, paraboloid and hyperboloids. Advanced topics include: projective geometry of three dimensions (leading to a proof of Desargues' theorem by using an extra dimension) further polyhedra descriptive geometry. List of solid figures Whereas a sphere is the surface of a ball, for other solid figures it is sometimes ambiguous whether the term refers to the surface of the figure or the volume enclosed therein, notably for a cylinder. Techniques Various techniques and tools are used in solid geometry. Among them, analytic geometry and vector techniques have a major impact by allowing the systematic use of linear equations and matrix algebra, which are important for higher dimensions. Applications A major application of solid geometry and stereometry is in 3D computer graphics. See also Euclidean geometry Shape Solid modeling Surface Notes References Solid geometry
https://en.wikipedia.org/wiki/Triangle%20%28disambiguation%29
A triangle is a geometric shape with three sides. Triangle may also refer to: Mathematics Exact triangle, a collection of objects in category theory Triangle inequality, Euclid's proposition that the sum of any two sides of a triangle is longer than the third side American expression for set square, an object used in engineering and technical drawing, with the aim of providing a straightedge at a right angle or other particular planar angle to a baseline The triangle graph in graph theory Entertainment Music Triangle (musical instrument), in the percussion family Tri Angle (record label), in New York and London Triangle (band) a Japanese pop group in 1970s The Triangles, Australian band Albums Tri-Angle, a 2004 album by TVXQ Triangle (The Beau Brummels album), 1967 Triangle (Perfume album), 2009 Triangle (Diaura album), 2014 Triangle, a 2008 album by Mi Lu Bing Triangle, a 2011 EP by 10,000 Maniacs Film Triangle Film Corporation, a film studio in the U.S. during the silent era The Triangle (film), a 2001 made-for-TV thriller Triangle (2007 film), a Hong Kong crime-thriller Triangle (2009 British film), a British-Australian psychological thriller Triangle (2009 South Korean film), a South Korean-Japanese comedy The Triangle, a 1953 film starring Douglas Fairbanks Jr. Television The Triangle (miniseries), a 2005 Sci-Fi Channel series Triangle (1981 TV series), a 1980s BBC soap opera Triangle (2014 TV series), a 2014 MBC Korean drama "Triangle" (Buffy the Vampire Slayer), 2001 "Triangle" (The X-Files), 1998 "Triangles", an episode of Private Practice Places Le Triangle, a residential district in Montreal Triangle, Newfoundland and Labrador, Canada Triangle (Israel), a concentration of Israeli Arab towns Triangle Region (Denmark) (Trekanten), a sub-region on the Jutland Peninsula Trianglen, Copenhagen, a large intersection in Copenhagen, Denmark Research Triangle, a region of North Carolina, U.S., anchored by Raleigh, Durham, and Chapel Hill Triangle, New York, United States Triangle, Virginia, United States Triangle, West Yorkshire, a village in Calderdale, England Triangle, Zimbabwe Triangle, Bermuda, Devils triangle, located between Bermuda, Florida, and Puerto Rico Other uses Triangle (chart pattern), in financial technical analysis Triangle (novel), a 1983 Star Trek novel Triangle (Paris building) Triangle (railway), an English term equivalent to a North American Wye rail Triangle Fraternity, social fraternity for STEM Students Triangle offense, an offensive strategy used in basketball Triangle Rewards, a loyalty program offered by Canadian Tire The Triangle (newspaper), at Drexel University The Triangle, Manchester, a building in England Triangles (novel), a 2011 novel by Ellen Hopkins Delta (letter), in the Greek alphabet, whose uppercase resembles a triangle (Δ) Trigonodes hyppasia or Triangles, a species of moth See also Triangle Lake (disambiguation) Triangle Park (disambiguation) T
https://en.wikipedia.org/wiki/Berry%E2%80%93Esseen%20theorem
In probability theory, the central limit theorem states that, under certain circumstances, the probability distribution of the scaled mean of a random sample converges to a normal distribution as the sample size increases to infinity. Under stronger assumptions, the Berry–Esseen theorem, or Berry–Esseen inequality, gives a more quantitative result, because it also specifies the rate at which this convergence takes place by giving a bound on the maximal error of approximation between the normal distribution and the true distribution of the scaled sample mean. The approximation is measured by the Kolmogorov–Smirnov distance. In the case of independent samples, the convergence rate is , where is the sample size, and the constant is estimated in terms of the third absolute normalized moment. Statement of the theorem Statements of the theorem vary, as it was independently discovered by two mathematicians, Andrew C. Berry (in 1941) and Carl-Gustav Esseen (1942), who then, along with other authors, refined it repeatedly over subsequent decades. Identically distributed summands One version, sacrificing generality somewhat for the sake of clarity, is the following: There exists a positive constant C such that if X1, X2, ..., are i.i.d. random variables with E(X1) = 0, E(X12) = σ2 > 0, and E(|X1|3) = ρ < ∞, and if we define the sample mean, with Fn the cumulative distribution function of and Φ the cumulative distribution function of the standard normal distribution, then for all x and n, That is: given a sequence of independent and identically distributed random variables, each having mean zero and positive variance, if additionally the third absolute moment is finite, then the cumulative distribution functions of the standardized sample mean and the standard normal distribution differ (vertically, on a graph) by no more than the specified amount. Note that the approximation error for all n (and hence the limiting rate of convergence for indefinite n sufficiently large) is bounded by the order of n−1/2. Calculated values of the constant C have decreased markedly over the years, from the original value of 7.59 by , to 0.7882 by , then 0.7655 by , then 0.7056 by , then 0.7005 by , then 0.5894 by , then 0.5129 by , then 0.4785 by . The detailed review can be found in the papers and . The best estimate , C < 0.4748, follows from the inequality due to , since σ3 ≤ ρ and 0.33554 · 1.415 < 0.4748. However, if ρ ≥ 1.286σ3, then the estimate which is also proved in , gives an even tighter upper estimate. proved that the constant also satisfies the lower bound Non-identically distributed summands Let X1, X2, ..., be independent random variables with E(Xi) = 0, E(Xi2) = σi2 > 0, and E(|Xi|3) = ρi < ∞. Also, let be the normalized n-th partial sum. Denote Fn the cdf of Sn, and Φ the cdf of the standard normal distribution. For the sake of convenience denote In 1941, Andrew C. Berry proved that for all n there exists an absolute constant C1 such t
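A hedged numerical illustration (not a proof, and not from the article): a Monte Carlo estimate of the Kolmogorov distance between the standardized sample mean and the standard normal, compared against the Berry–Esseen bound C·ρ/(σ³√n) with the constant C = 0.4748 quoted above. The choice of summand distribution (a fair ±1 coin, so σ = ρ = 1) is an illustrative assumption.

```python
import numpy as np
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(0)

n = 25                       # summands per sample: X_i uniform on {-1, +1}
reps = 200_000               # Monte Carlo replications
sums = rng.choice([-1, 1], size=(reps, n)).sum(axis=1)
standardized = sums / sqrt(n)          # (X_1 + ... + X_n) / (sigma * sqrt(n)), sigma = 1

grid = np.linspace(-3, 3, 601)
empirical_cdf = np.searchsorted(np.sort(standardized), grid, side="right") / reps
sup_distance = np.max(np.abs(empirical_cdf - np.array([phi(x) for x in grid])))

bound = 0.4748 * 1.0 / (1.0 ** 3 * sqrt(n))   # C * rho / (sigma^3 * sqrt(n))
print(f"observed sup|F_n - Phi| ≈ {sup_distance:.4f}, Berry-Esseen bound ≈ {bound:.4f}")
```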
https://en.wikipedia.org/wiki/List%20of%20factorial%20and%20binomial%20topics
This is a list of factorial and binomial topics in mathematics. See also binomial (disambiguation). Abel's binomial theorem Alternating factorial Antichain Beta function Bhargava factorial Binomial coefficient Pascal's triangle Binomial distribution Binomial proportion confidence interval Binomial-QMF (Daubechies wavelet filters) Binomial series Binomial theorem Binomial transform Binomial type Carlson's theorem Catalan number Fuss–Catalan number Central binomial coefficient Combination Combinatorial number system De Polignac's formula Difference operator Difference polynomials Digamma function Egorychev method Erdős–Ko–Rado theorem Euler–Mascheroni constant Faà di Bruno's formula Factorial Factorial moment Factorial number system Factorial prime Gamma distribution Gamma function Gaussian binomial coefficient Gould's sequence Hyperfactorial Hypergeometric distribution Hypergeometric function identities Hypergeometric series Incomplete beta function Incomplete gamma function Jordan–Pólya number Kempner function Lah number Lanczos approximation Lozanić's triangle Macaulay representation of an integer Mahler's theorem Multinomial distribution Multinomial coefficient, Multinomial formula, Multinomial theorem Multiplicities of entries in Pascal's triangle Multiset Multivariate gamma function Narayana numbers Negative binomial distribution Nörlund–Rice integral Pascal matrix Pascal's pyramid Pascal's simplex Pascal's triangle Permutation List of permutation topics Pochhammer symbol (also falling, lower, rising, upper factorials) Poisson distribution Polygamma function Primorial Proof of Bertrand's postulate Sierpinski triangle Star of David theorem Stirling number Stirling transform Stirling's approximation Subfactorial Table of Newtonian series Taylor series Trinomial expansion Vandermonde's identity Wilson prime Wilson's theorem Wolstenholme prime Factorial and binomial topics
https://en.wikipedia.org/wiki/Telescoping%20series
In mathematics, a telescoping series is a series whose general term is of the form t(n) = a(n+1) − a(n), i.e. the difference of two consecutive terms of a sequence a(n). As a consequence, the partial sums consist of only two terms of a(n) after cancellation. The cancellation technique, with part of each term cancelling with part of the next term, is known as the method of differences. For example, the series 1/(1·2) + 1/(2·3) + 1/(3·4) + ⋯ (the series of reciprocals of pronic numbers) simplifies as 1/(n(n+1)) = 1/n − 1/(n+1), so the partial sums collapse to 1 − 1/(N+1) and the series sums to 1. An early statement of the formula for the sum or partial sums of a telescoping series can be found in a 1644 work by Evangelista Torricelli, De dimensione parabolae. In general Telescoping sums are finite sums in which pairs of consecutive terms cancel each other, leaving only the initial and final terms. Let a(n) be a sequence of numbers. Then the sum of a(n+1) − a(n) for n from 1 to N equals a(N+1) − a(1). If a(n) converges to a limit, the corresponding infinite telescoping series converges to that limit minus a(1). Telescoping products are finite products in which consecutive terms cancel denominator with numerator, leaving only the initial and final terms. Let a(n) be a sequence of nonzero numbers. Then the product of a(n+1)/a(n) for n from 1 to N equals a(N+1)/a(1). If a(n) converges to a nonzero limit, the corresponding infinite telescoping product converges to that limit divided by a(1). More examples Many trigonometric functions also admit representation as a difference, which allows telescopic canceling between the consecutive terms. Some sums of the form ∑ f(n)/g(n), where f and g are polynomial functions whose quotient may be broken up into partial fractions, fail to admit summation by this method, because the resulting terms do not cancel. Let k be a positive integer. Then the sum of 1/(n(n+k)) over n ≥ 1 equals H(k)/k, where H(k) is the kth harmonic number; all of the terms after the first k reciprocals cancel. Let k and m with k < m be positive integers. Then a similar telescoping identity holds. An application in probability theory In probability theory, a Poisson process is a stochastic process of which the simplest case involves "occurrences" at random times, the waiting time until the next occurrence having a memoryless exponential distribution, and the number of "occurrences" in any time interval having a Poisson distribution whose expected value is proportional to the length of the time interval. Let Xt be the number of "occurrences" before time t, and let Tx be the waiting time until the xth "occurrence". We seek the probability density function of the random variable Tx. We use the probability mass function for the Poisson distribution, which tells us that P(Xt = k) = (λt)^k e^(−λt)/k!, where λ is the average number of occurrences in any time interval of length 1. Observe that the event {Xt ≥ x} is the same as the event {Tx ≤ t}, and thus they have the same probability. Intuitively, if something occurs at least x times before time t, we have to wait at most t for the xth occurrence. The density function we seek is therefore obtained by differentiating this probability; the sum telescopes, leaving the gamma density. Similar concepts Telescoping product A telescoping product is a finite product (or the partial product of an infinite product) that can be cancelled by the method of quotients to be eventually only a finite number of factors. For example, the infinite product simplifies as Other applications For other applications, see: Grandi's series; Proof that the sum of the reciprocals of the primes diverges
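A minimal Python sketch (added for illustration) of the pronic-number example above: each term 1/(n(n+1)) splits as 1/n − 1/(n+1), so only the first and last pieces survive.

```python
from fractions import Fraction

def partial_sum(N: int) -> Fraction:
    """Partial sum of the series of reciprocals of pronic numbers, sum_{n=1}^{N} 1/(n(n+1))."""
    return sum(Fraction(1, n * (n + 1)) for n in range(1, N + 1))

# Each term telescopes: 1/(n(n+1)) = 1/n - 1/(n+1), so the partial sum collapses
# to 1 - 1/(N+1); only the first and last terms survive the cancellation.
for N in (1, 5, 100):
    assert partial_sum(N) == 1 - Fraction(1, N + 1)

print(partial_sum(100))   # 100/101 -> the series converges to 1
```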
https://en.wikipedia.org/wiki/Circle%20group
In mathematics, the circle group, denoted by T or S1, is the multiplicative group of all complex numbers with absolute value 1, that is, the unit circle in the complex plane, or simply the unit complex numbers. The circle group forms a subgroup of C×, the multiplicative group of all nonzero complex numbers. Since C× is abelian, it follows that the circle group is as well. A unit complex number in the circle group represents a rotation of the complex plane about the origin and can be parametrized by the angle measure θ: θ ↦ z = e^(iθ) = cos θ + i sin θ. This is the exponential map for the circle group. The circle group plays a central role in Pontryagin duality and in the theory of Lie groups. The notation T for the circle group stems from the fact that, with the standard topology (see below), the circle group is a 1-torus. More generally, Tn (the direct product of T with itself n times) is geometrically an n-torus. The circle group is isomorphic to the special orthogonal group SO(2). Elementary introduction One way to think about the circle group is that it describes how to add angles, where only angles between 0° and 360°, or equivalently between 0 and 2π radians, are permitted. For example, the diagram illustrates how to add 150° to 270°. The answer is 420°, but when thinking in terms of the circle group, we may "forget" the fact that we have wrapped once around the circle. Therefore, we adjust our answer by subtracting 360°, which gives 60°. Another description is in terms of ordinary (real) addition, where only numbers between 0 and 1 are allowed (with 1 corresponding to a full rotation: 360° or 2π radians), i.e. the real numbers modulo the integers. This can be achieved by throwing away the digits occurring before the decimal point. For example, when we work out 0.4166... + 0.75, the answer is 1.1666..., but we may throw away the leading 1, so the answer (in the circle group) is just 0.1666..., this representative being preferred because it lies in [0, 1). Topological and analytic structure The circle group is more than just an abstract algebraic object. It has a natural topology when regarded as a subspace of the complex plane. Since multiplication and inversion are continuous functions on C×, the circle group has the structure of a topological group. Moreover, since the unit circle is a closed subset of the complex plane, the circle group is a closed subgroup of C× (itself regarded as a topological group). One can say even more. The circle is a 1-dimensional real manifold, and multiplication and inversion are real-analytic maps on the circle. This gives the circle group the structure of a one-parameter group, an instance of a Lie group. In fact, up to isomorphism, it is the unique 1-dimensional compact, connected Lie group. Moreover, every n-dimensional compact, connected, abelian Lie group is isomorphic to the n-torus Tn. Isomorphisms The circle group shows up in a variety of forms in mathematics. We list some of the more common forms here. Specifically, we show that T ≅ U(1) ≅ R/Z ≅ SO(2). Note that the slash (/) here denotes the quotient group. The set of all 1×1 unitary matrices clearly coincides with the circle group; the unitary condition
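A small Python sketch (added for illustration) of the two equivalent descriptions in the elementary introduction: addition of angles modulo 360° and multiplication of unit complex numbers via the exponential map, using the 150° + 270° example from the text.

```python
import cmath
import math

def add_angles_mod_360(a: float, b: float) -> float:
    """Addition in the circle group, thinking of elements as angles in [0, 360)."""
    return (a + b) % 360.0

def to_unit_complex(degrees: float) -> complex:
    """Exponential map: an angle corresponds to the unit complex number e^{i*theta}."""
    return cmath.exp(1j * math.radians(degrees))

# The example from the article: 150° + 270° wraps around once, giving 60°.
assert add_angles_mod_360(150, 270) == 60.0

# The same computation done multiplicatively with unit complex numbers.
z = to_unit_complex(150) * to_unit_complex(270)
assert math.isclose(math.degrees(cmath.phase(z)) % 360.0, 60.0, abs_tol=1e-9)

# Reals modulo the integers: 0.4166... + 0.75 = 1.1666..., reduced to 0.1666...
print((150 / 360 + 270 / 360) % 1.0)
```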
https://en.wikipedia.org/wiki/Dodecagon
In geometry, a dodecagon, or 12-gon, is any twelve-sided polygon. Regular dodecagon A regular dodecagon is a figure with sides of the same length and internal angles of the same size. It has twelve lines of reflective symmetry and rotational symmetry of order 12. A regular dodecagon is represented by the Schläfli symbol {12} and can be constructed as a truncated hexagon, t{6}, or a twice-truncated triangle, tt{3}. The internal angle at each vertex of a regular dodecagon is 150°. Area The area of a regular dodecagon of side length a is given by: A = 3(2 + √3)a² ≈ 11.196 a². And in terms of the apothem r (see also inscribed figure), the area is: A = 12 tan(15°) r² = 12(2 − √3) r² ≈ 3.215 r². In terms of the circumradius R, the area is: A = 3R². The span S of the dodecagon is the distance between two parallel sides and is equal to twice the apothem. A simple formula for area (given side length and span) is: A = 3aS. This can be verified with the trigonometric relationship: S = (2 + √3)a. Perimeter The perimeter of a regular dodecagon in terms of circumradius is: p = 24 sin(15°) R ≈ 6.212 R. The perimeter in terms of apothem is: p = 24 tan(15°) r = 24(2 − √3) r ≈ 6.431 r. This coefficient is double the coefficient found in the apothem equation for area. Dodecagon construction As 12 = 2² × 3, a regular dodecagon is constructible using compass-and-straightedge construction: Dissection Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into m(m−1)/2 parallelograms. In particular this is true for regular polygons with evenly many sides, in which case the parallelograms are all rhombi. For the regular dodecagon, m = 6, and it can be divided into 15 pieces: 3 squares, 6 wide 30° rhombs and 6 narrow 15° rhombs. This decomposition is based on a Petrie polygon projection of a 6-cube, with 15 of 240 faces. The corresponding OEIS sequence gives the number of solutions as 908, including up to 12-fold rotations and chiral forms in reflection. One of the ways the mathematical manipulative pattern blocks are used is in creating a number of different dodecagons. They are related to the rhombic dissections, with 3 60° rhombi merged into hexagons, half-hexagon trapezoids, or divided into 2 equilateral triangles. Symmetry The regular dodecagon has Dih12 symmetry, order 24. There are 15 distinct subgroup dihedral and cyclic symmetries. Each subgroup symmetry allows one or more degrees of freedom for irregular forms. Only the g12 subgroup has no degrees of freedom but can be seen as directed edges. Occurrence Tiling A regular dodecagon can fill a plane vertex with other regular polygons in 4 ways: Here are 3 example periodic plane tilings that use regular dodecagons, defined by their vertex configuration: Skew dodecagon A skew dodecagon is a skew polygon with 12 vertices and edges but not existing on the same plane. The interior of such a dodecagon is not generally defined. A skew zig-zag dodecagon has vertices alternating between two parallel planes. A regular skew dodecagon is vertex-transitive with equal edge lengths. In 3-dimensions it will be a zig-zag skew dodecagon
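A brief Python check (added for illustration) of the standard closed-form area 3(2 + √3)a² quoted above, compared against a shoelace computation over explicitly constructed vertices; the helper names are illustrative.

```python
import math

def regular_polygon_vertices(n: int, side: float):
    """Vertices of a regular n-gon with the given side length, centred at the origin."""
    circumradius = side / (2.0 * math.sin(math.pi / n))
    return [(circumradius * math.cos(2 * math.pi * k / n),
             circumradius * math.sin(2 * math.pi * k / n)) for k in range(n)]

def shoelace_area(pts):
    """Polygon area by the shoelace formula."""
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1])))

a = 1.0
dodecagon = regular_polygon_vertices(12, a)

# Closed-form area of a regular dodecagon with side a.
closed_form = 3.0 * (2.0 + math.sqrt(3.0)) * a ** 2      # ≈ 11.196 a²
assert math.isclose(shoelace_area(dodecagon), closed_form, rel_tol=1e-12)

# Interior angle: (n - 2) * 180 / n = 150 degrees for n = 12.
assert (12 - 2) * 180 / 12 == 150
```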
https://en.wikipedia.org/wiki/Nomenclature%20of%20Territorial%20Units%20for%20Statistics
Nomenclature of Territorial Units for Statistics or NUTS () is a geocode standard for referencing the administrative divisions of countries for statistical purposes. The standard, adopted in 2003, is developed and regulated by the European Union, and thus only covers the EU member states in detail. The Nomenclature of Territorial Units for Statistics is instrumental in the European Union's Structural Funds and Cohesion Fund delivery mechanisms and for locating the area where goods and services subject to European public procurement legislation are to be delivered. For each EU member country, a hierarchy of three NUTS levels is established by Eurostat in agreement with each member state; the subdivisions in some levels do not necessarily correspond to administrative divisions within the country. A NUTS code begins with a two-letter code referencing the country, as abbreviated in the European Union's Interinstitutional Style Guide. The subdivision of the country is then referred to with one number. A second or third subdivision level is referred to with another number each. Each numbering starts with 1, as 0 is used for the upper level. Where the subdivision has more than nine entities, capital letters are used to continue the numbering. Below the three NUTS levels are local administrative units (LAUs). A similar statistical system is defined for the candidate countries and members of the European Free Trade Association, but they are not part of NUTS governed by the regulations. The current NUTS classification, dated 21 November 2016 and effective from 1 January 2018 (now updated to current members ), lists 92 regions at NUTS 1, 244 regions at NUTS 2, 1215 regions at NUTS 3 level, and 99,387 local administrative units (LAUs). National structures Not all countries have every level of division, depending on their size. For example, Luxembourg and Cyprus only have local administrative units (LAUs); the three NUTS divisions each correspond to the entire country itself. Member states Candidate countries EFTA countries Former EU member-state The United Kingdom left the European Union on 31 January 2020, the only member-state to ever do so. Maps Establishment NUTS regions are generally based on existing national administrative subdivisions. In countries where only one or two regional subdivisions exist, or where the population of existing subdivisions is too small or too large, a second and/or third level is created. This may be on the first level (ex. France, Italy, Greece, and Spain), on the second (ex. Germany) and/or third level (ex. Belgium). In countries with small populations, where the entire country would be placed on the NUTS 2 or even NUTS 3 level (ex. Luxembourg, Cyprus), the regions at levels 1, 2 and 3 are identical to each other (and also to the entire country), but are coded with the appropriate length codes levels 1, 2 and 3. The NUTS system favors existing administrative units, with one or more assigned to each NUTS level. Spe
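A hedged Python sketch of how the hierarchical codes described above nest: a two-letter country prefix, then one additional character for each of NUTS levels 1, 2 and 3. The parsing rule follows the description in the text; the sample code printed at the end is illustrative only and is not taken from the official classification.

```python
def nuts_levels(code: str) -> dict:
    """Split a NUTS code into its country prefix and the code of each nested level.

    Structure per the description above: two-letter country code, then one extra
    character per NUTS level (digits, continued with capital letters where a level
    has more than nine subdivisions).
    """
    if len(code) < 2 or len(code) > 5:
        raise ValueError("a NUTS code has 2 to 5 characters")
    return {
        "country": code[:2],
        "NUTS 1": code[:3] if len(code) >= 3 else None,
        "NUTS 2": code[:4] if len(code) >= 4 else None,
        "NUTS 3": code[:5] if len(code) == 5 else None,
    }

# A hypothetical level-3 code: country "DE", then one character per level.
print(nuts_levels("DE212"))
# {'country': 'DE', 'NUTS 1': 'DE2', 'NUTS 2': 'DE21', 'NUTS 3': 'DE212'}
```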
https://en.wikipedia.org/wiki/Cusp
A cusp is the most pointed end of a curve. It often refers to cusp (anatomy), a pointed structure on a tooth. Cusp or CUSP may also refer to: Mathematics Cusp (singularity), a singular point of a curve Cusp catastrophe, a branch of bifurcation theory in the study of dynamical systems Cusp form, in modular form theory Cusp neighborhood, a set of points near a cusp Cuspidal representation, a generalization of cusp forms in the theory of automorphic representations Science and medicine Beach cusps, a pointed and regular arc pattern of the shoreline at the beach Behavioral cusp, a change in behavior with far-reaching consequences Caltech-USGS Seismic Processing, software for analyzing earthquake data Center for Urban Science and Progress, a graduate school of New York University focusing on urban informatics CubeSat for Solar Particles, a satellite launched in 2022 Cusp (anatomy), a pointed structure on a tooth Cusps of heart valves, leaflets of a heart valve Nuclear cusp condition, in electron density Other uses Cusp (astrology) Cusp (film), a 2021 American documentary following three teenage girls at the end of summer Cusp (novel), a 2005 science fiction story by Robert A. Metzger Cusp Conference, an annual gathering of thinkers, innovators, etc. from various fields Cusp generation, a name given to those born during the transitional years of two generations Concordia University, St. Paul
https://en.wikipedia.org/wiki/Formula%20for%20primes
In number theory, a formula for primes is a formula generating the prime numbers, exactly and without exception. No such formula which is efficiently computable is known. A number of constraints are known, showing what such a "formula" can and cannot be. Formulas based on Wilson's theorem A simple formula is for positive integer , where is the floor function, which rounds down to the nearest integer. By Wilson's theorem, is prime if and only if . Thus, when is prime, the first factor in the product becomes one, and the formula produces the prime number . But when is not prime, the first factor becomes zero and the formula produces the prime number 2. This formula is not an efficient way to generate prime numbers because evaluating requires about multiplications and reductions modulo . In 1964, Willans gave the formula for the th prime number . This formula reduces to ; that is, it tautologically defines as the smallest integer m for which the prime-counting function is at least n. This formula is also not efficient. In addition to the appearance of , it computes by adding up copies of ; for example, . The articles What is an Answer? by Herbert Wilf (1982) and Formulas for Primes by Underwood Dudley (1983) have further discussion about the worthlessness of such formulas. Formula based on a system of Diophantine equations Because the set of primes is a computably enumerable set, by Matiyasevich's theorem, it can be obtained from a system of Diophantine equations. found an explicit set of 14 Diophantine equations in 26 variables, such that a given number k + 2 is prime if and only if that system has a solution in nonnegative integers: The 14 equations α0, …, α13 can be used to produce a prime-generating polynomial inequality in 26 variables: That is, is a polynomial inequality in 26 variables, and the set of prime numbers is identical to the set of positive values taken on by the left-hand side as the variables a, b, …, z range over the nonnegative integers. A general theorem of Matiyasevich says that if a set is defined by a system of Diophantine equations, it can also be defined by a system of Diophantine equations in only 9 variables. Hence, there is a prime-generating polynomial as above with only 10 variables. However, its degree is large (in the order of 1045). On the other hand, there also exists such a set of equations of degree only 4, but in 58 variables. Mills' formula The first such formula known was established by , who proved that there exists a real number A such that, if then is a prime number for all positive integers n. If the Riemann hypothesis is true, then the smallest such A has a value of around 1.3063778838630806904686144926... and is known as Mills' constant. This value gives rise to the primes , , , ... . Very little is known about the constant A (not even whether it is rational). This formula has no practical value, because there is no known way of calcul
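The exact typeset formula is stripped from the excerpt above; the Python sketch below uses one Wilson-theorem formula consistent with the description (floor((n! mod (n+1))/n)·(n−1) + 2), offered as an assumption rather than as the article's verbatim formula. It illustrates why the output is always prime and why the formula is inefficient: it evaluates n! for every n.

```python
from math import factorial

def wilson_formula(n: int) -> int:
    """floor((n! mod (n+1)) / n) * (n - 1) + 2.

    By Wilson's theorem, n! ≡ n (mod n + 1) exactly when n + 1 is prime, so the
    leading factor is 1 and the formula returns the prime n + 1; otherwise the
    factor is 0 (for n + 1 > 4) and the formula returns the prime 2.
    """
    return (factorial(n) % (n + 1)) // n * (n - 1) + 2

def is_prime(m: int) -> bool:
    return m >= 2 and all(m % d for d in range(2, int(m ** 0.5) + 1))

# Every value the formula produces is prime, and it hits n + 1 whenever n + 1 is prime.
for n in range(1, 40):
    value = wilson_formula(n)
    assert is_prime(value)
    if is_prime(n + 1):
        assert value == n + 1
```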
https://en.wikipedia.org/wiki/Pappus%27s%20centroid%20theorem
In mathematics, Pappus's centroid theorem (also known as the Guldinus theorem, Pappus–Guldinus theorem or Pappus's theorem) is either of two related theorems dealing with the surface areas and volumes of surfaces and solids of revolution. The theorems are attributed to Pappus of Alexandria and Paul Guldin. Pappus's statement of this theorem appears in print for the first time in 1659, but it was known before, by Kepler in 1615 and by Guldin in 1640. The first theorem The first theorem states that the surface area A of a surface of revolution generated by rotating a plane curve C about an axis external to C and on the same plane is equal to the product of the arc length s of C and the distance d traveled by the geometric centroid of C: For example, the surface area of the torus with minor radius r and major radius R is Proof A curve given by the positive function is bounded by two points given by: and If is an infinitesimal line element tangent to the curve, the length of the curve is given by: The component of the centroid of this curve is: The area of the surface generated by rotating the curve around the x-axis is given by: Using the last two equations to eliminate the integral we have: The second theorem The second theorem states that the volume V of a solid of revolution generated by rotating a plane figure F about an external axis is equal to the product of the area A of F and the distance d traveled by the geometric centroid of F. (The centroid of F is usually different from the centroid of its boundary curve C.) That is: For example, the volume of the torus with minor radius r and major radius R is This special case was derived by Johannes Kepler using infinitesimals. Proof 1 The area bounded by the two functions: and bounded by the two lines: and is given by: The component of the centroid of this area is given by: If this area is rotated about the y-axis, the volume generated can be calculated using the shell method. It is given by: Using the last two equations to eliminate the integral we have: Proof 2 Let be the area of , the solid of revolution of , and the volume of . Suppose starts in the -plane and rotates around the -axis. The distance of the centroid of from the -axis is its -coordinate and the theorem states that To show this, let be in the xz-plane, parametrized by for , a parameter region. Since is essentially a mapping from to , the area of is given by the change of variables formula: where is the determinant of the Jacobian matrix of the change of variables. The solid has the toroidal parametrization for in the parameter region ; and its volume is Expanding, The last equality holds because the axis of rotation must be external to , meaning . Now, by change of variables. Generalizations The theorems can be generalized for arbitrary curves and shapes, under appropriate conditions. Goodman & Goodman generalize the second theorem as follows. If the figure moves through sp
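A Python sketch (added for illustration) applying both theorems to the torus example in the text: rotating a circle of minor radius r whose centroid travels a distance 2πR. The washer-method integral at the end is an added independent cross-check, not part of the article.

```python
import math

def pappus_torus(r: float, R: float):
    """Apply both of Pappus's theorems to a torus: rotate a circle of radius r
    whose centroid lies at distance R (> r) from the axis of rotation."""
    arc_length = 2 * math.pi * r          # s: circumference of the generating circle
    figure_area = math.pi * r ** 2        # A: area of the generating disc
    centroid_path = 2 * math.pi * R       # d: distance travelled by the centroid
    surface_area = arc_length * centroid_path   # first theorem: area = s * d
    volume = figure_area * centroid_path        # second theorem: volume = A * d
    return surface_area, volume

r, R = 1.0, 3.0
area, volume = pappus_torus(r, R)

# Independent check of the volume by the washer method: V = integral of 4*pi*R*sqrt(r^2 - z^2) dz.
steps = 200_000
dz = 2 * r / steps
numeric_volume = sum(
    4 * math.pi * R * math.sqrt(max(r ** 2 - (-r + (i + 0.5) * dz) ** 2, 0.0)) * dz
    for i in range(steps)
)

assert math.isclose(area, 4 * math.pi ** 2 * r * R, rel_tol=1e-12)
assert math.isclose(volume, 2 * math.pi ** 2 * r ** 2 * R, rel_tol=1e-12)
assert math.isclose(volume, numeric_volume, rel_tol=1e-6)
```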
https://en.wikipedia.org/wiki/Gibbs%20sampling
In statistics, Gibbs sampling or a Gibbs sampler is a Markov chain Monte Carlo (MCMC) algorithm for obtaining a sequence of observations which are approximated from a specified multivariate probability distribution, when direct sampling is difficult. This sequence can be used to approximate the joint distribution (e.g., to generate a histogram of the distribution); to approximate the marginal distribution of one of the variables, or some subset of the variables (for example, the unknown parameters or latent variables); or to compute an integral (such as the expected value of one of the variables). Typically, some of the variables correspond to observations whose values are known, and hence do not need to be sampled. Gibbs sampling is commonly used as a means of statistical inference, especially Bayesian inference. It is a randomized algorithm (i.e. an algorithm that makes use of random numbers), and is an alternative to deterministic algorithms for statistical inference such as the expectation-maximization algorithm (EM). As with other MCMC algorithms, Gibbs sampling generates a Markov chain of samples, each of which is correlated with nearby samples. As a result, care must be taken if independent samples are desired. Generally, samples from the beginning of the chain (the burn-in period) may not accurately represent the desired distribution and are usually discarded. Introduction Gibbs sampling is named after the physicist Josiah Willard Gibbs, in reference to an analogy between the sampling algorithm and statistical physics. The algorithm was described by brothers Stuart and Donald Geman in 1984, some eight decades after the death of Gibbs, and became popularized in the statistics community for calculating marginal probability distribution, especially the posterior distribution. In its basic version, Gibbs sampling is a special case of the Metropolis–Hastings algorithm. However, in its extended versions (see below), it can be considered a general framework for sampling from a large set of variables by sampling each variable (or in some cases, each group of variables) in turn, and can incorporate the Metropolis–Hastings algorithm (or methods such as slice sampling) to implement one or more of the sampling steps. Gibbs sampling is applicable when the joint distribution is not known explicitly or is difficult to sample from directly, but the conditional distribution of each variable is known and is easy (or at least, easier) to sample from. The Gibbs sampling algorithm generates an instance from the distribution of each variable in turn, conditional on the current values of the other variables. It can be shown that the sequence of samples constitutes a Markov chain, and the stationary distribution of that Markov chain is just the sought-after joint distribution. Gibbs sampling is particularly well-adapted to sampling the posterior distribution of a Bayesian network, since Bayesian networks are typically specified as a collection of con
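A minimal Python sketch of the basic algorithm described above, for a case where the full conditionals are known in closed form: a bivariate standard normal with correlation ρ, where x | y ~ N(ρy, 1 − ρ²) and y | x ~ N(ρx, 1 − ρ²). The target distribution is an illustrative choice, not an example from the article; the burn-in period is discarded as the text recommends.

```python
import numpy as np

def gibbs_bivariate_normal(rho: float, n_samples: int, burn_in: int = 1000, seed: int = 0):
    """Gibbs sampler for a bivariate standard normal with correlation rho.

    Each iteration samples one variable from its conditional distribution given
    the current value of the other variable.
    """
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    cond_sd = np.sqrt(1.0 - rho ** 2)
    draws = np.empty((n_samples, 2))
    for i in range(burn_in + n_samples):
        x = rng.normal(rho * y, cond_sd)      # sample x from p(x | y)
        y = rng.normal(rho * x, cond_sd)      # sample y from p(y | x)
        if i >= burn_in:                      # discard the burn-in period
            draws[i - burn_in] = (x, y)
    return draws

samples = gibbs_bivariate_normal(rho=0.8, n_samples=50_000)
print("empirical correlation ≈", np.corrcoef(samples.T)[0, 1])   # should be close to 0.8
```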
https://en.wikipedia.org/wiki/Complex%20manifold
In differential geometry and complex geometry, a complex manifold is a manifold with an atlas of charts to the open unit disc in , such that the transition maps are holomorphic. The term complex manifold is variously used to mean a complex manifold in the sense above (which can be specified as an integrable complex manifold), and an almost complex manifold. Implications of complex structure Since holomorphic functions are much more rigid than smooth functions, the theories of smooth and complex manifolds have very different flavors: compact complex manifolds are much closer to algebraic varieties than to differentiable manifolds. For example, the Whitney embedding theorem tells us that every smooth n-dimensional manifold can be embedded as a smooth submanifold of R2n, whereas it is "rare" for a complex manifold to have a holomorphic embedding into Cn. Consider for example any compact connected complex manifold M: any holomorphic function on it is constant by the maximum modulus principle. Now if we had a holomorphic embedding of M into Cn, then the coordinate functions of Cn would restrict to nonconstant holomorphic functions on M, contradicting compactness, except in the case that M is just a point. Complex manifolds that can be embedded in Cn are called Stein manifolds and form a very special class of manifolds including, for example, smooth complex affine algebraic varieties. The classification of complex manifolds is much more subtle than that of differentiable manifolds. For example, while in dimensions other than four, a given topological manifold has at most finitely many smooth structures, a topological manifold supporting a complex structure can and often does support uncountably many complex structures. Riemann surfaces, two dimensional manifolds equipped with a complex structure, which are topologically classified by the genus, are an important example of this phenomenon. The set of complex structures on a given orientable surface, modulo biholomorphic equivalence, itself forms a complex algebraic variety called a moduli space, the structure of which remains an area of active research. Since the transition maps between charts are biholomorphic, complex manifolds are, in particular, smooth and canonically oriented (not just orientable: a biholomorphic map to (a subset of) Cn gives an orientation, as biholomorphic maps are orientation-preserving). Examples of complex manifolds Riemann surfaces. Calabi–Yau manifolds. The Cartesian product of two complex manifolds. The inverse image of any noncritical value of a holomorphic map. Smooth complex algebraic varieties Smooth complex algebraic varieties are complex manifolds, including: Complex vector spaces. Complex projective spaces, Pn(C). Complex Grassmannians. Complex Lie groups such as GL(n, C) or Sp(n, C). Similarly, the quaternionic analogs of these are also complex manifolds. Simply connected The simply connected 1-dimensional complex manifolds are isomorphic to either:
https://en.wikipedia.org/wiki/Skew-Hermitian%20matrix
In linear algebra, a square matrix with complex entries is said to be skew-Hermitian or anti-Hermitian if its conjugate transpose is the negative of the original matrix. That is, the matrix is skew-Hermitian if it satisfies the relation where denotes the conjugate transpose of the matrix . In component form, this means that for all indices and , where is the element in the -th row and -th column of , and the overline denotes complex conjugation. Skew-Hermitian matrices can be understood as the complex versions of real skew-symmetric matrices, or as the matrix analogue of the purely imaginary numbers. The set of all skew-Hermitian matrices forms the Lie algebra, which corresponds to the Lie group U(n). The concept can be generalized to include linear transformations of any complex vector space with a sesquilinear norm. Note that the adjoint of an operator depends on the scalar product considered on the dimensional complex or real space . If denotes the scalar product on , then saying is skew-adjoint means that for all one has . Imaginary numbers can be thought of as skew-adjoint (since they are like matrices), whereas real numbers correspond to self-adjoint operators. Example For example, the following matrix is skew-Hermitian because Properties The eigenvalues of a skew-Hermitian matrix are all purely imaginary (and possibly zero). Furthermore, skew-Hermitian matrices are normal. Hence they are diagonalizable and their eigenvectors for distinct eigenvalues must be orthogonal. All entries on the main diagonal of a skew-Hermitian matrix have to be pure imaginary; i.e., on the imaginary axis (the number zero is also considered purely imaginary). If and are skew-Hermitian, then is skew-Hermitian for all real scalars and . is skew-Hermitian if and only if (or equivalently, ) is Hermitian. is skew-Hermitian if and only if the real part is skew-symmetric and the imaginary part is symmetric. If is skew-Hermitian, then is Hermitian if is an even integer and skew-Hermitian if is an odd integer. is skew-Hermitian if and only if for all vectors . If is skew-Hermitian, then the matrix exponential is unitary. The space of skew-Hermitian matrices forms the Lie algebra of the Lie group . Decomposition into Hermitian and skew-Hermitian The sum of a square matrix and its conjugate transpose is Hermitian. The difference of a square matrix and its conjugate transpose is skew-Hermitian. This implies that the commutator of two Hermitian matrices is skew-Hermitian. An arbitrary square matrix can be written as the sum of a Hermitian matrix and a skew-Hermitian matrix : See also Bivector (complex) Hermitian matrix Normal matrix Skew-symmetric matrix Unitary matrix Notes References . . Matrices Abstract algebra Linear algebra
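A NumPy sketch (added for illustration) checking the defining relation, the purely imaginary eigenvalues, and the Hermitian/skew-Hermitian decomposition from the last section; the final unitarity check of the matrix exponential assumes SciPy is available.

```python
import numpy as np
from scipy.linalg import expm   # assumes SciPy is available

def is_skew_hermitian(A: np.ndarray, tol: float = 1e-12) -> bool:
    """A is skew-Hermitian when its conjugate transpose equals its negative."""
    return np.allclose(A.conj().T, -A, atol=tol)

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))   # arbitrary square matrix

# Decomposition: M = H + S with H Hermitian and S skew-Hermitian.
H = (M + M.conj().T) / 2
S = (M - M.conj().T) / 2
assert np.allclose(H, H.conj().T)
assert is_skew_hermitian(S)
assert np.allclose(H + S, M)

# Eigenvalues of a skew-Hermitian matrix are purely imaginary (real parts ~ 0).
assert np.allclose(np.linalg.eigvals(S).real, 0.0, atol=1e-10)

# exp(S) is unitary: its conjugate transpose is its inverse.
U = expm(S)
assert np.allclose(U @ U.conj().T, np.eye(4), atol=1e-10)
```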
https://en.wikipedia.org/wiki/Zariski%20tangent%20space
In algebraic geometry, the Zariski tangent space is a construction that defines a tangent space at a point P on an algebraic variety V (and more generally). It does not use differential calculus, being based directly on abstract algebra, and in the most concrete cases just the theory of a system of linear equations. Motivation For example, suppose given a plane curve C defined by a polynomial equation F(X,Y) = 0 and take P to be the origin (0,0). Erasing terms of higher order than 1 would produce a 'linearised' equation reading L(X,Y) = 0 in which all terms XaYb have been discarded if a + b > 1. We have two cases: L may be 0, or it may be the equation of a line. In the first case the (Zariski) tangent space to C at (0,0) is the whole plane, considered as a two-dimensional affine space. In the second case, the tangent space is that line, considered as affine space. (The question of the origin comes up, when we take P as a general point on C; it is better to say 'affine space' and then note that P is a natural origin, rather than insist directly that it is a vector space.) It is easy to see that over the real field we can obtain L in terms of the first partial derivatives of F. When those both are 0 at P, we have a singular point (double point, cusp or something more complicated). The general definition is that singular points of C are the cases when the tangent space has dimension 2. Definition The cotangent space of a local ring R, with maximal ideal is defined to be where 2 is given by the product of ideals. It is a vector space over the residue field k:= R/. Its dual (as a k-vector space) is called tangent space of R. This definition is a generalization of the above example to higher dimensions: suppose given an affine algebraic variety V and a point v of V. Morally, modding out 2 corresponds to dropping the non-linear terms from the equations defining V inside some affine space, therefore giving a system of linear equations that define the tangent space. The tangent space and cotangent space to a scheme X at a point P is the (co)tangent space of . Due to the functoriality of Spec, the natural quotient map induces a homomorphism for X=Spec(R), P a point in Y=Spec(R/I). This is used to embed in . Since morphisms of fields are injective, the surjection of the residue fields induced by g is an isomorphism. Then a morphism k of the cotangent spaces is induced by g, given by Since this is a surjection, the transpose is an injection. (One often defines the tangent and cotangent spaces for a manifold in the analogous manner.) Analytic functions If V is a subvariety of an n-dimensional vector space, defined by an ideal I, then R = Fn / I, where Fn is the ring of smooth/analytic/holomorphic functions on this vector space. The Zariski tangent space at x is mn / (I+mn2), where mn is the maximal ideal consisting of those functions in Fn vanishing at x. In the planar example above, I = (F(X,Y)), and I+m2 = (L(X,Y))+m2. Properties
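A SymPy sketch of the plane-curve motivation above: discard all terms of total degree greater than 1 from F(X, Y) at the origin and read off the dimension of the (Zariski) tangent space. The two example curves (a parabola and a cuspidal cubic) are illustrative choices, not taken from the article.

```python
import sympy as sp

X, Y = sp.symbols('X Y')

def zariski_tangent_dimension_at_origin(F: sp.Expr) -> int:
    """Linearise F(X, Y) = 0 at the origin by erasing all terms of degree > 1.

    If the linear part L is nonzero, the tangent space is the line L = 0
    (dimension 1); if L vanishes, the tangent space is the whole plane
    (dimension 2) and the origin is a singular point of the curve.
    """
    poly = sp.Poly(sp.expand(F), X, Y)
    linear_part = sum(c * X ** a * Y ** b for (a, b), c in poly.terms() if a + b <= 1)
    return 1 if sp.simplify(linear_part) != 0 else 2

# Smooth point: F = Y - X**2 (a parabola); the linear part is Y, the tangent space is the line Y = 0.
assert zariski_tangent_dimension_at_origin(Y - X ** 2) == 1

# Singular point: F = Y**2 - X**3 (a cusp); both partial derivatives vanish at (0, 0),
# the linear part is zero, and the tangent space is the whole plane.
assert zariski_tangent_dimension_at_origin(Y ** 2 - X ** 3) == 2
```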
https://en.wikipedia.org/wiki/First%20fundamental%20form
In differential geometry, the first fundamental form is the inner product on the tangent space of a surface in three-dimensional Euclidean space which is induced canonically from the dot product of . It permits the calculation of curvature and metric properties of a surface such as length and area in a manner consistent with the ambient space. The first fundamental form is denoted by the Roman numeral , Definition Let be a parametric surface. Then the inner product of two tangent vectors is where , , and are the coefficients of the first fundamental form. The first fundamental form may be represented as a symmetric matrix. Further notation When the first fundamental form is written with only one argument, it denotes the inner product of that vector with itself. The first fundamental form is often written in the modern notation of the metric tensor. The coefficients may then be written as : The components of this tensor are calculated as the scalar product of tangent vectors and : for . See example below. Calculating lengths and areas The first fundamental form completely describes the metric properties of a surface. Thus, it enables one to calculate the lengths of curves on the surface and the areas of regions on the surface. The line element may be expressed in terms of the coefficients of the first fundamental form as The classical area element given by can be expressed in terms of the first fundamental form with the assistance of Lagrange's identity, Example: curve on a sphere A spherical curve on the unit sphere in may be parametrized as Differentiating with respect to and yields The coefficients of the first fundamental form may be found by taking the dot product of the partial derivatives. so: Length of a curve on the sphere The equator of the unit sphere is a parametrized curve given by with ranging from 0 to 2. The line element may be used to calculate the length of this curve. Area of a region on the sphere The area element may be used to calculate the area of the unit sphere. Gaussian curvature The Gaussian curvature of a surface is given by where , , and are the coefficients of the second fundamental form. Theorema egregium of Gauss states that the Gaussian curvature of a surface can be expressed solely in terms of the first fundamental form and its derivatives, so that is in fact an intrinsic invariant of the surface. An explicit expression for the Gaussian curvature in terms of the first fundamental form is provided by the Brioschi formula. See also Metric tensor Second fundamental form Third fundamental form Tautological one-form External links First Fundamental Form — from Wolfram MathWorld Differential geometry of surfaces Differential geometry Surfaces
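A SymPy sketch (added for illustration) computing the coefficients E, F, G as dot products of the partial derivatives, as described above, for one common parametrization of the unit sphere; the specific parametrization is an assumption, since the article's own formulas are stripped from the excerpt. It then recovers the length of the equator, 2π.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# One common parametrization of the unit sphere (assumed): v is the polar angle, u the azimuth.
r = sp.Matrix([sp.cos(u) * sp.sin(v), sp.sin(u) * sp.sin(v), sp.cos(v)])

r_u = r.diff(u)
r_v = r.diff(v)

# Coefficients of the first fundamental form: dot products of the tangent vectors.
E = sp.simplify(r_u.dot(r_u))   # sin(v)**2
F = sp.simplify(r_u.dot(r_v))   # 0
G = sp.simplify(r_v.dot(r_v))   # 1
print(E, F, G)

# Length of the equator v = pi/2, u from 0 to 2*pi:
# ds = sqrt(E du^2 + 2F du dv + G dv^2) = sqrt(E) du along the curve.
t = sp.symbols('t', real=True)
integrand = sp.sqrt(E.subs({u: t, v: sp.pi / 2}))
equator_length = sp.integrate(integrand, (t, 0, 2 * sp.pi))
assert sp.simplify(equator_length - 2 * sp.pi) == 0
```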
https://en.wikipedia.org/wiki/Group%20scheme
In mathematics, a group scheme is a type of object from algebraic geometry equipped with a composition law. Group schemes arise naturally as symmetries of schemes, and they generalize algebraic groups, in the sense that all algebraic groups have group scheme structure, but group schemes are not necessarily connected, smooth, or defined over a field. This extra generality allows one to study richer infinitesimal structures, and this can help one to understand and answer questions of arithmetic significance. The category of group schemes is somewhat better behaved than that of group varieties, since all homomorphisms have kernels, and there is a well-behaved deformation theory. Group schemes that are not algebraic groups play a significant role in arithmetic geometry and algebraic topology, since they come up in contexts of Galois representations and moduli problems. The initial development of the theory of group schemes was due to Alexander Grothendieck, Michel Raynaud and Michel Demazure in the early 1960s. Definition A group scheme is a group object in a category of schemes that has fiber products and some final object S. That is, it is an S-scheme G equipped with one of the equivalent sets of data a triple of morphisms μ: G ×S G → G, e: S → G, and ι: G → G, satisfying the usual compatibilities of groups (namely associativity of μ, identity, and inverse axioms) a functor from schemes over S to the category of groups, such that composition with the forgetful functor to sets is equivalent to the presheaf corresponding to G under the Yoneda embedding. (See also: group functor.) A homomorphism of group schemes is a map of schemes that respects multiplication. This can be precisely phrased either by saying that a map f satisfies the equation fμ = μ(f × f), or by saying that f is a natural transformation of functors from schemes to groups (rather than just sets). A left action of a group scheme G on a scheme X is a morphism G ×S X→ X that induces a left action of the group G(T) on the set X(T) for any S-scheme T. Right actions are defined similarly. Any group scheme admits natural left and right actions on its underlying scheme by multiplication and conjugation. Conjugation is an action by automorphisms, i.e., it commutes with the group structure, and this induces linear actions on naturally derived objects, such as its Lie algebra, and the algebra of left-invariant differential operators. An S-group scheme G is commutative if the group G(T) is an abelian group for all S-schemes T. There are several other equivalent conditions, such as conjugation inducing a trivial action, or inversion map ι being a group scheme automorphism. Constructions Given a group G, one can form the constant group scheme GS. As a scheme, it is a disjoint union of copies of S, and by choosing an identification of these copies with elements of G, one can define the multiplication, unit, and inverse maps by transport of structure. As a functor, it takes
https://en.wikipedia.org/wiki/Representativeness%20heuristic
The representativeness heuristic is used when making judgments about the probability of an event by assessing how representative it is, in character and essence, of a known prototypical event. It is one of a group of heuristics (simple rules governing judgment or decision-making) proposed by psychologists Amos Tversky and Daniel Kahneman in the early 1970s, who defined representativeness as "the degree to which [an event] (i) is similar in essential characteristics to its parent population, and (ii) reflects the salient features of the process by which it is generated". The representativeness heuristic works by comparing an event to a prototype or stereotype that we already have in mind. For example, if we see a person who is dressed in eccentric clothes and reading a poetry book, we might be more likely to think that they are a poet than an accountant. This is because the person's appearance and behavior are more representative of the stereotype of a poet than an accountant. The representativeness heuristic can be a useful shortcut in some cases, but it can also lead to errors in judgment. For example, if we only see a small sample of people from a particular group, we might overestimate the degree to which they are representative of the entire group. Heuristics are described as "judgmental shortcuts that generally get us where we need to go – and quickly – but at the cost of occasionally sending us off course." Heuristics are useful because they use effort-reduction and simplification in decision-making. When people rely on representativeness to make judgments, they are likely to judge wrongly because the fact that something is more representative does not actually make it more likely. The representativeness heuristic is simply described as assessing similarity of objects and organizing them based around the category prototype (e.g., like goes with like, and causes and effects should resemble each other). This heuristic is used because it is an easy computation. The problem is that people overestimate its ability to accurately predict the likelihood of an event. Thus, it can result in neglect of relevant base rates and other cognitive biases. Determinants of representativeness The representativeness heuristic is more likely to be used when the judgement or decision to be made involves certain factors. Similarity When judging the representativeness of a new stimulus/event, people usually pay attention to the degree of similarity between the stimulus/event and a standard/process. It is also important that those features be salient. Nilsson, Juslin, and Olsson (2008) found this to be influenced by the exemplar account of memory (concrete examples of a category are stored in memory) so that new instances were classified as representative if highly similar to a category as well as if frequently encountered. Several examples of similarity have been described in the representativeness heuristic literature. This research has focused on medical beliefs. People often believe that medical symptoms should r
https://en.wikipedia.org/wiki/RSA%20numbers
In mathematics, the RSA numbers are a set of large semiprimes (numbers with exactly two prime factors) that were part of the RSA Factoring Challenge. The challenge was to find the prime factors of each number. It was created by RSA Laboratories in March 1991 to encourage research into computational number theory and the practical difficulty of factoring large integers. The challenge was ended in 2007. RSA Laboratories (whose name is an acronym of the surnames of the creators of the technique: Rivest, Shamir and Adleman) published a number of semiprimes with 100 to 617 decimal digits. Cash prizes of varying size, up to US$200,000 (and prizes up to $20,000 awarded), were offered for factorization of some of them. The smallest RSA number was factored in a few days. Most of the numbers have still not been factored and many of them are expected to remain unfactored for many years to come. To date, the smallest 23 of the 54 listed numbers have been factored. While the RSA challenge officially ended in 2007, people are still attempting to find the factorizations. According to RSA Laboratories, "Now that the industry has a considerably more advanced understanding of the cryptanalytic strength of common symmetric-key and public-key algorithms, these challenges are no longer active." Some of the smaller prizes had been awarded at the time. The remaining prizes were retracted. The first RSA numbers generated, from RSA-100 to RSA-500, were labeled according to their number of decimal digits. Later, beginning with RSA-576, binary digits are counted instead. An exception to this is RSA-617, which was created before the change in the numbering scheme. The numbers are listed in increasing order below. RSA-100 RSA-100 has 100 decimal digits (330 bits). Its factorization was announced on April 1, 1991, by Arjen K. Lenstra. Reportedly, the factorization took a few days using the multiple-polynomial quadratic sieve algorithm on a MasPar parallel computer. The value and factorization of RSA-100 are as follows: RSA-100 = 1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139 RSA-100 = 37975227936943673922808872755445627854565536638199 × 40094690950920881030683735292761468389214899724061 It takes four hours to repeat this factorization using the program Msieve on a 2200 MHz Athlon 64 processor. The number can be factorized in 72 minutes on an Intel Core 2 Quad Q9300 overclocked to 3.5 GHz, using GGNFS and Msieve binaries run by a distributed version of the factmsieve Perl script. RSA-110 RSA-110 has 110 decimal digits (364 bits), and was factored in April 1992 by Arjen K. Lenstra and Mark S. Manasse in approximately one month. The number can be factorized in less than four hours on an Intel Core 2 Quad Q9300 overclocked to 3.5 GHz, using GGNFS and Msieve binaries run by a distributed
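The published factorization of RSA-100 quoted above can be checked directly with arbitrary-precision integer arithmetic. A minimal Python sketch; the numeric literals are copied from the values listed above:

```python
# Check the published factorization of RSA-100 (values as quoted above).
rsa_100 = int(
    "15226050279225333605356183781326374297180681149613"
    "80688657908494580122963258952897654000350692006139"
)
p = 37975227936943673922808872755445627854565536638199
q = 40094690950920881030683735292761468389214899724061

print(p * q == rsa_100)   # True: the two quoted factors multiply to RSA-100
print(len(str(rsa_100)))  # 100 decimal digits
```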
https://en.wikipedia.org/wiki/Computational%20number%20theory
In mathematics and computer science, computational number theory, also known as algorithmic number theory, is the study of computational methods for investigating and solving problems in number theory and arithmetic geometry, including algorithms for primality testing and integer factorization, finding solutions to diophantine equations, and explicit methods in arithmetic geometry. Computational number theory has applications to cryptography, including RSA, elliptic curve cryptography and post-quantum cryptography, and is used to investigate conjectures and open problems in number theory, including the Riemann hypothesis, the Birch and Swinnerton-Dyer conjecture, the ABC conjecture, the modularity conjecture, the Sato-Tate conjecture, and explicit aspects of the Langlands program. Software packages Magma computer algebra system SageMath Number Theory Library PARI/GP Fast Library for Number Theory Further reading References External links Number theory Number theory
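As a toy illustration of the kind of algorithm studied in the field — not how the packages listed above implement primality testing — here is a minimal Python sketch of the Miller–Rabin probabilistic primality test:

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test (toy illustration only)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29):   # quick trial division
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness that n is composite
    return True                   # n is probably prime

print(is_probable_prime(2**61 - 1))  # True: a Mersenne prime
print(is_probable_prime(2**61 + 1))  # False: divisible by 3
```

Production libraries use far more sophisticated variants (deterministic witness sets, BPSW, elliptic-curve methods); this sketch only shows the shape of the algorithm.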
https://en.wikipedia.org/wiki/Isosceles%20trapezoid
In Euclidean geometry, an isosceles trapezoid (isosceles trapezium in British English) is a convex quadrilateral with a line of symmetry bisecting one pair of opposite sides. It is a special case of a trapezoid. Alternatively, it can be defined as a trapezoid in which both legs and both base angles are of equal measure, or as a trapezoid whose diagonals have equal length. Note that a non-rectangular parallelogram is not an isosceles trapezoid because of the second condition, or because it has no line of symmetry. In any isosceles trapezoid, two opposite sides (the bases) are parallel, and the two other sides (the legs) are of equal length (properties shared with the parallelogram), and the diagonals have equal length. The base angles of an isosceles trapezoid are equal in measure (there are in fact two pairs of equal base angles, where one base angle is the supplementary angle of a base angle at the other base). Special cases Rectangles and squares are usually considered to be special cases of isosceles trapezoids though some sources would exclude them. Another special case is a 3-equal side trapezoid, sometimes known as a trilateral trapezoid or a trisosceles trapezoid. They can also be seen dissected from regular polygons of 5 sides or more as a truncation of 4 sequential vertices. Self-intersections Any non-self-crossing quadrilateral with exactly one axis of symmetry must be either an isosceles trapezoid or a kite. However, if crossings are allowed, the set of symmetric quadrilaterals must be expanded to include also the crossed isosceles trapezoids, crossed quadrilaterals in which the crossed sides are of equal length and the other sides are parallel, and the antiparallelograms, crossed quadrilaterals in which opposite sides have equal length. Every antiparallelogram has an isosceles trapezoid as its convex hull, and may be formed from the diagonals and non-parallel sides (or either pair of opposite sides in the case of a rectangle) of an isosceles trapezoid. Characterizations If a quadrilateral is known to be a trapezoid, it is not sufficient just to check that the legs have the same length in order to know that it is an isosceles trapezoid, since a rhombus is a special case of a trapezoid with legs of equal length, but is not an isosceles trapezoid as it lacks a line of symmetry through the midpoints of opposite sides. Any one of the following properties distinguishes an isosceles trapezoid from other trapezoids: The diagonals have the same length. The base angles have the same measure. The segment that joins the midpoints of the parallel sides is perpendicular to them. Opposite angles are supplementary, which in turn implies that isosceles trapezoids are cyclic quadrilaterals. The diagonals divide each other into segments with lengths that are pairwise equal; in terms of the picture below, , (and if one wishes to exclude rectangles). Angles In an isosceles trapezoid, the base angles have the same measure pairwise. In the picture
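The characterizations above (equal legs, equal diagonals) can be illustrated numerically. A minimal Python sketch on example coordinates chosen for this purpose; the specific vertices are an assumption, not taken from the article:

```python
from math import dist

# A hypothetical isosceles trapezoid with parallel sides of length 4 (AB) and 2 (DC).
A, B, C, D = (0, 0), (4, 0), (3, 2), (1, 2)

legs_equal = abs(dist(A, D) - dist(B, C)) < 1e-12       # legs AD and BC
diagonals_equal = abs(dist(A, C) - dist(B, D)) < 1e-12   # diagonals AC and BD
print(legs_equal, diagonals_equal)                       # True True
```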
https://en.wikipedia.org/wiki/Exotic%20probability
Exotic probability is a branch of probability theory that deals with probabilities which are outside the normal range of [0, 1]. According to the author of various papers on exotic probability, Saul Youssef, the valid possible alternatives for probability values are the real numbers, the complex numbers and the quaternions. Youssef also cites the work of Richard Feynman, P. A. M. Dirac, Stanley Gudder and S. K. Srinivasan as relevant to exotic probability theories. Of the application of such theories to quantum mechanics, Bill Jefferys has said: "Such approaches are also not necessary and in my opinion they confuse more than they illuminate." See also Negative probability Signed measure Complex measure References External links http://physics.bu.edu/~youssef/quantum/quantum_refs.html https://web.archive.org/web/20040327004613/http://fnalpubs.fnal.gov/library/colloq/colloqyoussef.html Measuring Negative Probabilities, Demystifying Schroedinger's Cat and Exploring Other Quantum Peculiarities With Trapped Atoms MathPages - The Complex Domain of Probability Probability theory
https://en.wikipedia.org/wiki/A5
A5 and variants may refer to: Science and mathematics A5 regulatory sequence in biochemistry A5, the abbreviation for the androgen Androstenediol Annexin A5, a human cellular protein ATC code A05 Bile and liver therapy, a subgroup of the Anatomical Therapeutic Chemical Classification System British NVC community A5 (Ceratophyllum demersum community), a British Isles plants community Subfamily A5, a Rhodopsin-like receptors subfamily Noradrenergic cell group A5, a noradrenergic cell group located in the Pons A5 pod, a name given to a group of orcas (Orcinus orca) found off the coast of British Columbia, Canada A5, the strain at fracture of a material as measured with a load test on a cylindrical body of length 5 times its diameter A5, the alternating group on five elements Technology Apple A5, the Apple mobile microprocessor ARM Cortex-A5, ARM applications processor Sport and recreation A5 (classification), an amputee sport classification A5 grade (climbing) A5, an aid climbing gear manufacturer - absorbed by The North Face A05, Réti Opening Encyclopaedia of Chess Openings code A-5, a common shorthand name for the Browning Auto-5 shotgun Gibson A-5 mandolin, a Gibson mandolin Tippmann A-5, a semi-automatic pneumatic marker for playing paintball A5, an Atlanta-based volleyball club Transport Automobiles Arrows A5, a 1982 British Formula One racing car Audi A5, a 2007–present German compact executive car Chery A5, a 2006–2010 Chinese compact sedan Soueast A5, a 2019–present Chinese compact sedan Sehol A5, a 2019–present Chinese compact sedan, formerly JAC Jiayue A5 Other uses in transportation A5 road, in several countries Hall-Scott A-5, an engine powering the 1916 Standard H-2 aircraft Prussian A 5, a 1913 German railbus Route A5 (WMATA), a bus route operated by the Washington Metropolitan Area Transit Authority Airlinair, by IATA code Bhutan, by aircraft registration code ICON A5, an American amphibious aircraft Pennsylvania Railroad class A5s, an American locomotive Finnish Steam Locomotive Class A5 LNER Class A5, a class of 4-6-2T steam locomotives Military Aircraft A-5, an export version of Nanchang Q-5, a Chinese-built jet fighter bomber Curtiss Falcon or A-5 Falcon, an attack aircraft manufactured by the Curtiss Aircraft Company Mitsubishi A5M, a 1930s Japanese fighter plane A-5 Vigilante, a carrier-based supersonic bomber designed for the United States Navy Focke-Wulf A 5, a World War I German Focke-Wulf aircraft Sturzkampfgeschwader 1, from its historic Geschwaderkennung code with the Luftwaffe in World War II Other uses in military or USS Pike (SS-6), a 1903 United States Navy Plunger-class submarine , an A-class submarine of the Royal Navy Aggregate 5, a German rocket design, scaled down precursor to the V-2, in World War II A 5, a Swedish regiment designation, see list of Swedish artillery regiments A5, the staff designation for air force headquarters staff concerned with plans or
https://en.wikipedia.org/wiki/1729%20%28number%29
1729 is the natural number following 1728 and preceding 1730. It is notably the first nontrivial taxicab number. In mathematics 1729 is the smallest nontrivial taxicab number, and is variously known as Ramanujan's number or the Ramanujan–Hardy number, after an anecdote of the British mathematician G. H. Hardy when he visited Indian mathematician Srinivasa Ramanujan in hospital. He related their conversation: The two different ways are: 1729 = 1³ + 12³ = 9³ + 10³ The quotation is sometimes expressed using the term "positive cubes", since allowing negative perfect cubes (the cube of a negative integer) gives the smallest solution as 91 (which is a divisor of 1729; 19 × 91 = 1729). 91 = 6³ + (−5)³ = 4³ + 3³ 1729 was also found in one of Ramanujan's notebooks dated years before the incident, and was noted by Frénicle de Bessy in 1657. A commemorative plaque now appears at the site of the Ramanujan-Hardy incident, at 2 Colinette Road in Putney. The same expression defines 1729 as the first in the sequence of "Fermat near misses" defined, in reference to Fermat's Last Theorem, as numbers of the form 1 + z³ which are also expressible as the sum of two other cubes. Other properties 1729 is a sphenic number. It is the third Carmichael number, the first Chernick–Carmichael number, the first absolute Euler pseudoprime, and the third Zeisel number. It is a centered cube number, as well as a dodecagonal number, a 24-gonal and 84-gonal number. Investigating pairs of distinct integer-valued quadratic forms that represent every integer the same number of times, Schiemann found that such quadratic forms must be in four or more variables, and the least possible discriminant of a four-variable pair is 1729. 1729 is the lowest number which can be represented by the Loeschian quadratic form a² + ab + b² in four different ways with a and b positive integers. The integer pairs are (25,23), (32,15), (37,8) and (40,3). 1729 is also the smallest integer side of an equilateral triangle for which there are three sets of non-equivalent points at integer distances from its vertices: {211, 1541, 1560}, {195, 1544, 1591}, and {824, 915, 1591}. 1729 is the dimension of the Fourier transform on which the fastest known algorithm for multiplying two numbers is based. This is an example of a galactic algorithm. See also A Disappearing Number, a March 2007 play about Ramanujan in England during World War I. Interesting number paradox 4104, the second positive integer which can be expressed as the sum of two positive cubes in two different ways. References External links Why does the number 1729 show up in so many Futurama episodes?, io9.com Integers Srinivasa Ramanujan
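The cube-sum identities quoted above are easy to verify directly; a quick check in Python:

```python
# Verify the identities quoted above.
print(1**3 + 12**3, 9**3 + 10**3)   # 1729 1729
print(6**3 + (-5)**3, 4**3 + 3**3)  # 91 91
print(19 * 91)                      # 1729
```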
https://en.wikipedia.org/wiki/Gyrocar
A gyrocar is a two-wheeled automobile. The difference between a bicycle or motorcycle and a gyrocar is that in a bike, dynamic balance is provided by the rider, and in some cases by the geometry and mass distribution of the bike itself, and the gyroscopic effects from the wheels. Steering a motorcycle is done by precessing the front wheel. In a gyrocar, balance was provided by one or more gyroscopes, and in one example, connected to two pendulums by a rack and pinion. The concept was originally described in fiction in 1911 "Two Boys in a Gyrocar: The story of a New York to Paris Motor Race" by Kenneth Brown, (Houghton Mifflin Co). However the first prototype Gyrocar, The Shilovski Gyrocar, was commissioned in 1912 by the Russian Count Pyotr Shilovsky, a lawyer and member of the Russian royal family. It was manufactured to his design by the Wolseley Tool and Motorcar Company in England in 1914 and demonstrated in London the same year. The gyrocar was powered by a modified Wolseley C5 engine of , with a bore of 90 mm and a stroke of 121 mm. It was mounted ahead of the radiator, driving the rear wheel through a conventional clutch and gear box. A transmission brake was fitted after the gearbox – there were no brakes on the wheels themselves. The weight of the vehicle was 2.75 tons and it had a very large turning radius. In 1927, Louis Brennan, funded to the tune of £12,000 (plus a £2000 per year) by John Cortauld built a rather more successful gyrocar. Two contra-rotating gyros were housed under the front seats, spun in a horizontal plane at 3500 rpm by 24V electric motors powered from standard car batteries. This was the greatest speed obtainable with the electric motors available, and meant that each rotor had to weigh to generate sufficient forces. Precession was in the vertical fore-aft plane. The car had a Morris Oxford engine, engine mountings, and gearbox. Two sidewheels (light aircraft tailwheels were used) were manually lowered on stopping; if the driver forgot and switched off the gyros and walked away, the car would continue to balance itself using the gyro momentum for a few minutes, and then the wheels would automatically be dropped to stop tipping. See also Ford Gyron Gyro monorail Segway PT Bicycle and motorcycle dynamics Bi-Autogo, another 2-wheeled car Self-balancing unicycle Lit Motors References External links The Schilovski Gyrocar The Schilovski Gyrocar (better resolution image) The Schilovski Gyrocar (more detailed article) Motorcycle technology Russian inventions Experimental and prototype gyroscopic vehicles Two-wheeled motor vehicles
https://en.wikipedia.org/wiki/Taxicab%20number
In mathematics, the nth taxicab number, typically denoted Ta(n) or Taxicab(n), also called the nth Ramanujan–Hardy number, is defined as the smallest integer that can be expressed as a sum of two positive integer cubes in n distinct ways. The most famous taxicab number is 1729 = Ta(2) = 1³ + 12³ = 9³ + 10³. The name is derived from a conversation in about 1919 involving mathematicians G. H. Hardy and Srinivasa Ramanujan. As told by Hardy: History and definition The concept was first mentioned in 1657 by Bernard Frénicle de Bessy, who published the Hardy–Ramanujan number Ta(2) = 1729. This particular example of 1729 was made famous in the early 20th century by a story involving Srinivasa Ramanujan. In 1938, G. H. Hardy and E. M. Wright proved that such numbers exist for all positive integers n, and their proof is easily converted into a program to generate such numbers. However, the proof makes no claims at all about whether the thus-generated numbers are the smallest possible and so it cannot be used to find the actual value of Ta(n). The taxicab numbers subsequent to 1729 were found with the help of computers. John Leech obtained Ta(3) in 1957. E. Rosenstiel, J. A. Dardis and C. R. Rosenstiel found Ta(4) in 1989. J. A. Dardis found Ta(5) in 1994 and it was confirmed by David W. Wilson in 1999. Ta(6) was announced by Uwe Hollerbach on the NMBRTHRY mailing list on March 9, 2008, following a 2003 paper by Calude et al. that gave a 99% probability that the number was actually Ta(6). Upper bounds for Ta(7) to Ta(12) were found by Christian Boyer in 2006. The restriction of the summands to positive numbers is necessary, because allowing negative numbers allows for more (and smaller) instances of numbers that can be expressed as sums of cubes in n distinct ways. The concept of a cabtaxi number has been introduced to allow for alternative, less restrictive definitions of this nature. In a sense, the specification of two summands and powers of three is also restrictive; a generalized taxicab number allows for these values to be other than two and three, respectively. Known taxicab numbers So far, the following 6 taxicab numbers are known: Upper bounds for taxicab numbers For the following taxicab numbers upper bounds are known: Cubefree taxicab numbers A more restrictive taxicab problem requires that the taxicab number be cubefree, which means that it is not divisible by any cube other than 1³. When a cubefree taxicab number is written as T = x³ + y³, the numbers x and y must be relatively prime. Among the taxicab numbers listed above, only Ta(1) and Ta(2) are cubefree taxicab numbers. The smallest cubefree taxicab number with three representations was discovered by Paul Vojta (unpublished) in 1981 while he was a graduate student: The smallest cubefree taxicab number with four representations was discovered by Stuart Gascoigne and independently by Duncan Moore in 2003: . See also , a list of related conjectures and theorems Notes References
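Ta(2) can be recovered by a brute-force search over pairs of cubes, in the spirit of the existence argument mentioned above. A minimal Python sketch (the search bound of 100 is an arbitrary choice, more than enough since any representation of a number up to 1729 uses cubes at most 12³):

```python
from collections import defaultdict
from itertools import combinations_with_replacement

def taxicab2(limit: int = 100) -> int:
    """Smallest n = a^3 + b^3 (0 < a <= b <= limit) with two distinct representations."""
    sums = defaultdict(list)
    for a, b in combinations_with_replacement(range(1, limit + 1), 2):
        sums[a**3 + b**3].append((a, b))
    return min(n for n, reps in sums.items() if len(reps) >= 2)

print(taxicab2())  # 1729
```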
https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes%20theory
In mathematics and computer science, the Krohn–Rhodes theory (or algebraic automata theory) is an approach to the study of finite semigroups and automata that seeks to decompose them in terms of elementary components. These components correspond to finite aperiodic semigroups and finite simple groups that are combined in a feedback-free manner (called a "wreath product" or "cascade"). Krohn and Rhodes found a general decomposition for finite automata. The authors discovered and proved an unexpected major result in finite semigroup theory, revealing a deep connection between finite automata and semigroups. Definitions and description of the Krohn–Rhodes theorem Let T be a semigroup. A semigroup S that is a homomorphic image of a subsemigroup of T is said to be a divisor of T. The Krohn–Rhodes theorem for finite semigroups states that every finite semigroup S is a divisor of a finite alternating wreath product of finite simple groups, each a divisor of S, and finite aperiodic semigroups (which contain no nontrivial subgroups). In the automata formulation, the Krohn–Rhodes theorem for finite automata states that given a finite automaton A with states Q and input set I, output alphabet U, then one can expand the states to Q' such that the new automaton A' embeds into a cascade of "simple", irreducible automata: In particular, A is emulated by a feed-forward cascade of (1) automata whose transformation semigroups are finite simple groups and (2) automata that are banks of flip-flops running in parallel. The new automaton A' has the same input and output symbols as A. Here, both the states and inputs of the cascaded automata have a very special hierarchical coordinate form. Moreover, each simple group (prime) or non-group irreducible semigroup (subsemigroup of the flip-flop monoid) that divides the transformation semigroup of A must divide the transformation semigroup of some component of the cascade, and only the primes that must occur as divisors of the components are those that divide A's transformation semigroup. Group complexity The Krohn–Rhodes complexity (also called group complexity or just complexity) of a finite semigroup S is the least number of groups in a wreath product of finite groups and finite aperiodic semigroups of which S is a divisor. All finite aperiodic semigroups have complexity 0, while non-trivial finite groups have complexity 1. In fact, there are semigroups of every non-negative integer complexity. For example, for any n greater than 1, the multiplicative semigroup of all (n+1) × (n+1) upper-triangular matrices over any fixed finite field has complexity n (Kambites, 2007). A major open problem in finite semigroup theory is the decidability of complexity: is there an algorithm that will compute the Krohn–Rhodes complexity of a finite semigroup, given its multiplication table? Upper bounds and ever more precise lower bounds on complexity have been obtained (see, e.g. Rhodes & Steinberg, 2009). Rhodes has co
https://en.wikipedia.org/wiki/Casimir%20element
In mathematics, a Casimir element (also known as a Casimir invariant or Casimir operator) is a distinguished element of the center of the universal enveloping algebra of a Lie algebra. A prototypical example is the squared angular momentum operator, which is a Casimir element of the three-dimensional rotation group. More generally, Casimir elements can be used to refer to any element of the center of the universal enveloping algebra. The algebra of these elements is known to be isomorphic to a polynomial algebra through the Harish-Chandra isomorphism. The Casimir element is named after Hendrik Casimir, who identified them in his description of rigid body dynamics in 1931. Definition The most commonly-used Casimir invariant is the quadratic invariant. It is the simplest to define, and so is given first. However, one may also have Casimir invariants of higher order, which correspond to homogeneous symmetric polynomials of higher order. Quadratic Casimir element Suppose that is an -dimensional Lie algebra. Let B be a nondegenerate bilinear form on that is invariant under the adjoint action of on itself, meaning that for all X, Y, Z in . (The most typical choice of B is the Killing form if is semisimple.) Let be any basis of , and be the dual basis of with respect to B. The Casimir element for B is the element of the universal enveloping algebra given by the formula Although the definition relies on a choice of basis for the Lie algebra, it is easy to show that Ω is independent of this choice. On the other hand, Ω does depend on the bilinear form B. The invariance of B implies that the Casimir element commutes with all elements of the Lie algebra , and hence lies in the center of the universal enveloping algebra . Quadratic Casimir invariant of a linear representation and of a smooth action Given a representation ρ of on a vector space V, possibly infinite-dimensional, the Casimir invariant of ρ is defined to be ρ(Ω), the linear operator on V given by the formula A specific form of this construction plays an important role in differential geometry and global analysis. Suppose that a connected Lie group G with Lie algebra acts on a differentiable manifold M. Consider the corresponding representation ρ of G on the space of smooth functions on M. Then elements of are represented by first order differential operators on M. In this situation, the Casimir invariant of ρ is the G-invariant second order differential operator on M defined by the above formula. Specializing further, if it happens that M has a Riemannian metric on which G acts transitively by isometries, and the stabilizer subgroup Gx of a point acts irreducibly on the tangent space of M at x, then the Casimir invariant of ρ is a scalar multiple of the Laplacian operator coming from the metric. More general Casimir invariants may also be defined, commonly occurring in the study of pseudo-differential operators in Fredholm theory. Casimir elements of higher order T
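For the prototypical example mentioned above, the quadratic Casimir element of the three-dimensional rotation group can be written out explicitly in terms of the standard angular momentum generators L_x, L_y, L_z (a standard formula, stated here for concreteness):

\[
\Omega = L_x^2 + L_y^2 + L_z^2, \qquad [\Omega, L_x] = [\Omega, L_y] = [\Omega, L_z] = 0,
\]

so Ω is the squared angular momentum operator and lies in the center of the universal enveloping algebra of the Lie algebra of the rotation group.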
https://en.wikipedia.org/wiki/5-cell
In geometry, the 5-cell is the convex 4-polytope with Schläfli symbol {3,3,3}. It is a 5-vertex four-dimensional object bounded by five tetrahedral cells. It is also known as a C5, pentachoron, pentatope, pentahedroid, or tetrahedral pyramid. It is the 4-simplex (Coxeter's polytope), the simplest possible convex 4-polytope, and is analogous to the tetrahedron in three dimensions and the triangle in two dimensions. The 5-cell is a 4-dimensional pyramid with a tetrahedral base and four tetrahedral sides. The regular 5-cell is bounded by five regular tetrahedra, and is one of the six regular convex 4-polytopes (the four-dimensional analogues of the Platonic solids). A regular 5-cell can be constructed from a regular tetrahedron by adding a fifth vertex one edge length distant from all the vertices of the tetrahedron. This cannot be done in 3-dimensional space. The regular 5-cell is a solution to the problem: Make 10 equilateral triangles, all of the same size, using 10 matchsticks, where each side of every triangle is exactly one matchstick, and none of the triangles and match sticks intersect one another. No solution exists in three dimensions. Alternative names Pentachoron (5-point 4-polytope) Hypertetrahedron (4-dimensional analogue of the tetrahedron) 4-simplex (4-dimensional simplex) Tetrahedral pyramid (4-dimensional hyperpyramid with a tetrahedral base) Pentatope Pentahedroid (Henry Parker Manning) Pen (Jonathan Bowers: for pentachoron) Geometry The 5-cell is the 4-dimensional simplex, the simplest possible 4-polytope. As such it is the first in the sequence of 6 convex regular 4-polytopes (in order of size and complexity). A 5-cell is formed by any five points which are not all in the same hyperplane (as a tetrahedron is formed by any four points which are not all in the same plane, and a triangle is formed by any three points which are not all in the same line). Any five vertices in five different hyperplanes constitute a 5-cell, though not usually a regular 5-cell. The regular 5-cell is not found within any of the other regular convex 4-polytopes except one: the 600-vertex 120-cell is a compound of 120 regular 5-cells. Structure When a net of five tetrahedra is folded up in 4-dimensional space such that each tetrahedron is face bonded to the other four, the resulting 5-cell has a total of 5 vertices, 10 edges and 10 faces. Four edges meet at each vertex, and three tetrahedral cells meet at each edge. The 5-cell is self-dual (as are all simplexes), and its vertex figure is the tetrahedron. Its maximal intersection with 3-dimensional space is the triangular prism. Its dihedral angle is cos−1(), or approximately 75.52°. The convex hull of two 5-cells in dual configuration is the disphenoidal 30-cell, dual of the bitruncated 5-cell. As a configuration This configuration matrix represents the 5-cell. The rows and columns correspond to vertices, edges, faces, and cells. The diagonal numbers say how many of each element occur
https://en.wikipedia.org/wiki/Almost%20complex%20manifold
In mathematics, an almost complex manifold is a smooth manifold equipped with a smooth linear complex structure on each tangent space. Every complex manifold is an almost complex manifold, but there are almost complex manifolds that are not complex manifolds. Almost complex structures have important applications in symplectic geometry. The concept is due to Charles Ehresmann and Heinz Hopf in the 1940s. Formal definition Let M be a smooth manifold. An almost complex structure J on M is a linear complex structure (that is, a linear map which squares to −1) on each tangent space of the manifold, which varies smoothly on the manifold. In other words, we have a smooth tensor field J of degree such that when regarded as a vector bundle isomorphism on the tangent bundle. A manifold equipped with an almost complex structure is called an almost complex manifold. If M admits an almost complex structure, it must be even-dimensional. This can be seen as follows. Suppose M is n-dimensional, and let be an almost complex structure. If then . But if M is a real manifold, then is a real number – thus n must be even if M has an almost complex structure. One can show that it must be orientable as well. An easy exercise in linear algebra shows that any even dimensional vector space admits a linear complex structure. Therefore, an even dimensional manifold always admits a -rank tensor pointwise (which is just a linear transformation on each tangent space) such that at each point p. Only when this local tensor can be patched together to be defined globally does the pointwise linear complex structure yield an almost complex structure, which is then uniquely determined. The possibility of this patching, and therefore existence of an almost complex structure on a manifold M is equivalent to a reduction of the structure group of the tangent bundle from to . The existence question is then a purely algebraic topological one and is fairly well understood. Examples For every integer n, the flat space R2n admits an almost complex structure. An example for such an almost complex structure is (1 ≤ i, j ≤ 2n): for even i, for odd i. The only spheres which admit almost complex structures are S2 and S6 (). In particular, S4 cannot be given an almost complex structure (Ehresmann and Hopf). In the case of S2, the almost complex structure comes from an honest complex structure on the Riemann sphere. The 6-sphere, S6, when considered as the set of unit norm imaginary octonions, inherits an almost complex structure from the octonion multiplication; the question of whether it has a complex structure is known as the Hopf problem, after Heinz Hopf. Differential topology of almost complex manifolds Just as a complex structure on a vector space V allows a decomposition of VC into V+ and V− (the eigenspaces of J corresponding to +i and −i, respectively), so an almost complex structure on M allows a decomposition of the complexified tangent bundle TMC (which is the vector
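In the flat example above, one common convention takes J to act on each pair of coordinate directions as a rotation by 90°; for R² this reads (written here for concreteness, the sign convention being a choice):

\[
J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad J^2 = -I_2,
\]

and the almost complex structure on R^{2n} is the direct sum of n copies of this 2×2 block.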
https://en.wikipedia.org/wiki/Canonical%20form
In mathematics and computer science, a canonical, normal, or standard form of a mathematical object is a standard way of presenting that object as a mathematical expression. Often, it is one which provides the simplest representation of an object and allows it to be identified in a unique way. The distinction between "canonical" and "normal" forms varies from subfield to subfield. In most fields, a canonical form specifies a unique representation for every object, while a normal form simply specifies its form, without the requirement of uniqueness. The canonical form of a positive integer in decimal representation is a finite sequence of digits that does not begin with zero. More generally, for a class of objects on which an equivalence relation is defined, a canonical form consists in the choice of a specific object in each class. For example: Jordan normal form is a canonical form for matrix similarity. The row echelon form is a canonical form, when one considers as equivalent a matrix and its left product by an invertible matrix. In computer science, and more specifically in computer algebra, when representing mathematical objects in a computer, there are usually many different ways to represent the same object. In this context, a canonical form is a representation such that every object has a unique representation (with canonicalization being the process through which a representation is put into its canonical form). Thus, the equality of two objects can easily be tested by testing the equality of their canonical forms. Despite this advantage, canonical forms frequently depend on arbitrary choices (like ordering the variables), which introduce difficulties for testing the equality of two objects resulting on independent computations. Therefore, in computer algebra, normal form is a weaker notion: A normal form is a representation such that zero is uniquely represented. This allows testing for equality by putting the difference of two objects in normal form. Canonical form can also mean a differential form that is defined in a natural (canonical) way. Definition Given a set S of objects with an equivalence relation R on S, a canonical form is given by designating some objects of S to be "in canonical form", such that every object under consideration is equivalent to exactly one object in canonical form. In other words, the canonical forms in S represent the equivalence classes, once and only once. To test whether two objects are equivalent, it then suffices to test equality on their canonical forms. A canonical form thus provides a classification theorem and more, in that it not only classifies every class, but also gives a distinguished (canonical) representative for each object in the class. Formally, a canonicalization with respect to an equivalence relation R on a set S is a mapping c:S→S such that for all s, s1, s2 ∈ S: c(s) = c(c(s))   (idempotence), s1 R s2 if and only if c(s1) = c(s2)   (decisiveness), and s R c(s)   (representa
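As a concrete illustration of the three conditions (idempotence, decisiveness, representativeness): reducing fractions to lowest terms with a positive denominator is a canonicalization for the equivalence "represents the same rational number". A minimal Python sketch; the fraction example is illustrative, not taken from the article:

```python
from math import gcd

def canon(frac):
    """Canonical form of a fraction (p, q): lowest terms, positive denominator."""
    p, q = frac
    if q < 0:
        p, q = -p, -q
    g = gcd(abs(p), q) or 1
    return (p // g, q // g)

a, b = (2, -4), (-1, 2)              # both represent -1/2
print(canon(a) == canon(b))          # True: equivalence tested via canonical forms
print(canon(canon(a)) == canon(a))   # True: idempotence
```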
https://en.wikipedia.org/wiki/Gorenstein%20ring
In commutative algebra, a Gorenstein local ring is a commutative Noetherian local ring R with finite injective dimension as an R-module. There are many equivalent conditions, some of them listed below, often saying that a Gorenstein ring is self-dual in some sense. Gorenstein rings were introduced by Grothendieck in his 1961 seminar (published in ). The name comes from a duality property of singular plane curves studied by (who was fond of claiming that he did not understand the definition of a Gorenstein ring). The zero-dimensional case had been studied by . and publicized the concept of Gorenstein rings. Frobenius rings are noncommutative analogs of zero-dimensional Gorenstein rings. Gorenstein schemes are the geometric version of Gorenstein rings. For Noetherian local rings, there is the following chain of inclusions. Definitions A Gorenstein ring is a commutative Noetherian ring such that each localization at a prime ideal is a Gorenstein local ring, as defined below. A Gorenstein ring is in particular Cohen–Macaulay. One elementary characterization is: a Noetherian local ring R of dimension zero (equivalently, with R of finite length as an R-module) is Gorenstein if and only if HomR(k, R) has dimension 1 as a k-vector space, where k is the residue field of R. Equivalently, R has simple socle as an R-module. More generally, a Noetherian local ring R is Gorenstein if and only if there is a regular sequence a1,...,an in the maximal ideal of R such that the quotient ring R/( a1,...,an) is Gorenstein of dimension zero. For example, if R is a commutative graded algebra over a field k such that R has finite dimension as a k-vector space, R = k ⊕ R1 ⊕ ... ⊕ Rm, then R is Gorenstein if and only if it satisfies Poincaré duality, meaning that the top graded piece Rm has dimension 1 and the product Ra × Rm−a → Rm is a perfect pairing for every a. Another interpretation of the Gorenstein property as a type of duality, for not necessarily graded rings, is: for a field F, a commutative F-algebra R of finite dimension as an F-vector space (hence of dimension zero as a ring) is Gorenstein if and only if there is an F-linear map e: R → F such that the symmetric bilinear form (x, y) := e(xy) on R (as an F-vector space) is nondegenerate. For a commutative Noetherian local ring (R, m, k) of Krull dimension n, the following are equivalent: R has finite injective dimension as an R-module; R has injective dimension n as an R-module; The Ext group for i ≠ n while for some i > n; for all i < n and R is an n-dimensional Gorenstein ring. A (not necessarily commutative) ring R is called Gorenstein if R has finite injective dimension both as a left R-module and as a right R-module. If R is a local ring, R is said to be a local Gorenstein ring. Examples Every local complete intersection ring, in particular every regular local ring, is Gorenstein. The ring R = k[x,y,z]/(x2, y2, xz, yz, z2−xy) is a 0-dimensional Gorenstein ring that is not a com
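For the zero-dimensional example above, the Gorenstein property can be checked with the "simple socle" criterion; a routine verification, spelled out here for convenience. In R = k[x,y,z]/(x², y², xz, yz, z² − xy) a k-basis is {1, x, y, z, z²} (with xy = z² in R) and the maximal ideal is m = (x, y, z). Since

\[
x \cdot z^2 = x^2 y = 0, \qquad y \cdot z^2 = x y^2 = 0, \qquad z \cdot z^2 = (xz)\,y = 0,
\]

while x·y = z² ≠ 0 and z·z = z² ≠ 0, the socle {r ∈ R : m·r = 0} is the one-dimensional space spanned by z², so HomR(k, R) has dimension 1 and R is Gorenstein.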
https://en.wikipedia.org/wiki/Quantitative%20analysis
Quantitative analysis may refer to: Quantitative research, application of mathematics and statistics in economics and marketing Quantitative analysis (chemistry), the determination of the absolute or relative abundance of one or more substances present in a sample Quantitative analysis (finance), the use of mathematical and statistical methods in finance and investment management Quantitative analysis of behavior, quantitative models in the experimental analysis of behavior Mathematical psychology, an approach to psychological research using mathematical modeling of perceptual, cognitive and motor processes Statistics, the collection, organization, analysis, interpretation and presentation of data See also QA (disambiguation)
https://en.wikipedia.org/wiki/Map%20%28mathematics%29
In mathematics, a map or mapping is a function in its general sense. These terms may have originated as from the process of making a geographical map: mapping the Earth surface to a sheet of paper. The term map may be used to distinguish some special types of functions, such as homomorphisms. For example, a linear map is a homomorphism of vector spaces, while the term linear function may have this meaning or it may mean a linear polynomial. In category theory, a map may refer to a morphism. The term transformation can be used interchangeably, but transformation often refers to a function from a set to itself. There are also a few less common uses in logic and graph theory. Maps as functions In many branches of mathematics, the term map is used to mean a function, sometimes with a specific property of particular importance to that branch. For instance, a "map" is a "continuous function" in topology, a "linear transformation" in linear algebra, etc. Some authors, such as Serge Lang, use "function" only to refer to maps in which the codomain is a set of numbers (i.e. a subset of R or C), and reserve the term mapping for more general functions. Maps of certain kinds are the subjects of many important theories. These include homomorphisms in abstract algebra, isometries in geometry, operators in analysis and representations in group theory. In the theory of dynamical systems, a map denotes an evolution function used to create discrete dynamical systems. A partial map is a partial function. Related terms such as domain, codomain, injective, and continuous can be applied equally to maps and functions, with the same meaning. All these usages can be applied to "maps" as general functions or as functions with special properties. As morphisms In category theory, "map" is often used as a synonym for "morphism" or "arrow", which is a structure-respecting function and thus may imply more structure than "function" does. For example, a morphism in a concrete category (i.e. a morphism that can be viewed as a function) carries with it the information of its domain (the source of the morphism) and its codomain (the target ). In the widely used definition of a function , is a subset of consisting of all the pairs for . In this sense, the function does not capture the set that is used as the codomain; only the range is determined by the function. See also Arrow notation – e.g., , also known as map List of chaotic maps Maplet arrow (↦) – commonly pronounced "maps to" References Works cited External links Basic concepts in set theory
https://en.wikipedia.org/wiki/Zero-dimensional%20space
In mathematics, a zero-dimensional topological space (or nildimensional space) is a topological space that has dimension zero with respect to one of several inequivalent notions of assigning a dimension to a given topological space. A graphical illustration of a nildimensional space is a point. Definition Specifically: A topological space is zero-dimensional with respect to the Lebesgue covering dimension if every open cover of the space has a refinement which is a cover by disjoint open sets. A topological space is zero-dimensional with respect to the finite-to-finite covering dimension if every finite open cover of the space has a refinement that is a finite open cover such that any point in the space is contained in exactly one open set of this refinement. A topological space is zero-dimensional with respect to the small inductive dimension if it has a base consisting of clopen sets. The three notions above agree for separable, metrisable spaces. Properties of spaces with small inductive dimension zero A zero-dimensional Hausdorff space is necessarily totally disconnected, but the converse fails. However, a locally compact Hausdorff space is zero-dimensional if and only if it is totally disconnected. (See for the non-trivial direction.) Zero-dimensional Polish spaces are a particularly convenient setting for descriptive set theory. Examples of such spaces include the Cantor space and Baire space. Hausdorff zero-dimensional spaces are precisely the subspaces of topological powers where is given the discrete topology. Such a space is sometimes called a Cantor cube. If is countably infinite, is the Cantor space. Manifolds All points of a zero-dimensional manifold are isolated. In particular, the zero-dimensional hypersphere is a pair of points, and the zero-dimensional ball is a single point. Notes References Dimension 0 Descriptive set theory Properties of topological spaces Space, topological
https://en.wikipedia.org/wiki/Jakob%20Rosanes
Jakob Rosanes (also Jacob; 16 August 1842 – 6 January 1922) was a German mathematician who worked on algebraic geometry and invariant theory. He was also a chess master. Rosanes was a grandson of Rabbi Akiva Eiger, one of the most revered Jewish religious scholars of the Talmud and halachic decisors of the 18th century. Eiger's daughter Baila was Rosanes' mother. Rosanes grew up during a period when the Enlightenment and increasing opportunities for social, academic and economic advancement for culturally assimilated Jews influenced large numbers of Jews to reconsider their faith. He was not religiously observant, and his children converted to Christianity. Rosanes studied at University of Berlin and the University of Breslau. He obtained his doctorate from Breslau (Wrocław) in 1865 and taught there for the rest of his working life. He became professor in 1876 and rector of the university during the years 1903–1904. Rosanes made significant contributions in Cremona transformations. Notable chess games Jakob Rosanes vs Adolf Anderssen, Breslau 1862, Spanish Game: Berlin Defense. Rio Gambit Accepted (C67), 1-0 Sometimes, Rosanes was able to beat even one of the best masters of his time, Adolf Anderssen... Jakob Rosanes vs Adolf Anderssen, Breslau, 1863, King's Gambit: Accepted. Kieseritzky Gambit Anderssen Defense (C39), 0-1 ...but as shows this beautiful game, the opposite result was probably quite usual in their games. References External links 1842 births 1922 deaths People from Brody 19th-century Austrian Jews 19th-century German mathematicians 20th-century German mathematicians 19th-century Polish mathematicians 20th-century Polish mathematicians Jewish scientists Algebraic geometers Jewish chess players Austrian chess players Jews from Galicia (Eastern Europe) 19th-century chess players
https://en.wikipedia.org/wiki/Semi-locally%20simply%20connected
In mathematics, specifically algebraic topology, semi-locally simply connected is a certain local connectedness condition that arises in the theory of covering spaces. Roughly speaking, a topological space X is semi-locally simply connected if there is a lower bound on the sizes of the “holes” in X. This condition is necessary for most of the theory of covering spaces, including the existence of a universal cover and the Galois correspondence between covering spaces and subgroups of the fundamental group. Most “nice” spaces such as manifolds and CW complexes are semi-locally simply connected, and topological spaces that do not satisfy this condition are considered somewhat pathological. The standard example of a non-semi-locally simply connected space is the Hawaiian earring. Definition A space X is called semi-locally simply connected if every point in X has a neighborhood U with the property that every loop in U can be contracted to a single point within X (i.e. every loop in U is nullhomotopic in X). The neighborhood U need not be simply connected: though every loop in U must be contractible within X, the contraction is not required to take place inside of U. For this reason, a space can be semi-locally simply connected without being locally simply connected. Equivalent to this definition, a space X is semi-locally simply connected if every point in X has a neighborhood U for which the homomorphism from the fundamental group of U to the fundamental group of X, induced by the inclusion map of U into X, is trivial. Most of the main theorems about covering spaces, including the existence of a universal cover and the Galois correspondence, require a space to be path-connected, locally path-connected, and semi-locally simply connected, a condition known as unloopable (délaçable in French). In particular, this condition is necessary for a space to have a simply connected covering space. Examples A simple example of a space that is not semi-locally simply connected is the Hawaiian earring: the union of the circles in the Euclidean plane with centers (1/n, 0) and radii 1/n, for n a natural number. Give this space the subspace topology. Then all neighborhoods of the origin contain circles that are not nullhomotopic. The Hawaiian earring can also be used to construct a semi-locally simply connected space that is not locally simply connected. In particular, the cone on the Hawaiian earring is contractible and therefore semi-locally simply connected, but it is clearly not locally simply connected. Topology of fundamental group In terms of the natural topology on the fundamental group, a locally path-connected space is semi-locally simply connected if and only if its quasitopological fundamental group is discrete. References J.S. Calcut, J.D. McCarthy Discreteness and homogeneity of the topological fundamental group Topology Proceedings, Vol. 34,(2009), pp. 339–349 Algebraic topology Homotopy theory Properties of topological spaces
https://en.wikipedia.org/wiki/Regular%20polytope
In mathematics, a regular polytope is a polytope whose symmetry group acts transitively on its flags, thus giving it the highest degree of symmetry. All its elements or -faces (for all , where is the dimension of the polytope) — cells, faces and so on — are also transitive on the symmetries of the polytope, and are regular polytopes of dimension . Regular polytopes are the generalized analog in any number of dimensions of regular polygons (for example, the square or the regular pentagon) and regular polyhedra (for example, the cube). The strong symmetry of the regular polytopes gives them an aesthetic quality that interests both non-mathematicians and mathematicians. Classically, a regular polytope in dimensions may be defined as having regular facets (-faces) and regular vertex figures. These two conditions are sufficient to ensure that all faces are alike and all vertices are alike. Note, however, that this definition does not work for abstract polytopes. A regular polytope can be represented by a Schläfli symbol of the form with regular facets as and regular vertex figures as Classification and description Regular polytopes are classified primarily according to their dimensionality. They can be further classified according to symmetry. For example, the cube and the regular octahedron share the same symmetry, as do the regular dodecahedron and icosahedron. Indeed, symmetry groups are sometimes named after regular polytopes, for example the tetrahedral and icosahedral symmetries. Three special classes of regular polytope exist in every dimension: Regular simplex Measure polytope (Hypercube) Cross polytope (Orthoplex) In two dimensions, there are infinitely many regular polygons. In three and four dimensions, there are several more regular polyhedra and 4-polytopes besides these three. In five dimensions and above, these are the only ones. See also the list of regular polytopes. In one dimension, the Line Segment simultaneously serves as all of these polytopes, and in two dimensions, the square can act as both the Measure Polytope and Cross Polytope at the same time. The idea of a polytope is sometimes generalised to include related kinds of geometrical object. Some of these have regular examples, as discussed in the section on historical discovery below. Schläfli symbols A concise symbolic representation for regular polytopes was developed by Ludwig Schläfli in the 19th century, and a slightly modified form has become standard. The notation is best explained by adding one dimension at a time. A convex regular polygon having n sides is denoted by {n}. So an equilateral triangle is {3}, a square {4}, and so on indefinitely. A regular star polygon which winds m times around its centre is denoted by the fractional value {n/m}, where n and m are co-prime, so a regular pentagram is {5/2}. A regular polyhedron having faces {n} with p faces joining around a vertex is denoted by {n, p}. The nine regular polyhedra are {3, 3} {3, 4} {4, 3}
https://en.wikipedia.org/wiki/Complex%20structure
A complex structure may refer to: In mathematics Almost complex manifold Complex manifold Linear complex structure Generalized complex structure Complex structure deformation Complex vector bundle#Complex structure In law Complex structure theory in English law See also Real structure
https://en.wikipedia.org/wiki/Locally%20convex%20topological%20vector%20space
In functional analysis and related areas of mathematics, locally convex topological vector spaces (LCTVS) or locally convex spaces are examples of topological vector spaces (TVS) that generalize normed spaces. They can be defined as topological vector spaces whose topology is generated by translations of balanced, absorbent, convex sets. Alternatively they can be defined as a vector space with a family of seminorms, and a topology can be defined in terms of that family. Although in general such spaces are not necessarily normable, the existence of a convex local base for the zero vector is strong enough for the Hahn–Banach theorem to hold, yielding a sufficiently rich theory of continuous linear functionals. Fréchet spaces are locally convex spaces that are completely metrizable (with a choice of complete metric). They are generalizations of Banach spaces, which are complete vector spaces with respect to a metric generated by a norm. History Metrizable topologies on vector spaces have been studied since their introduction in Maurice Fréchet's 1902 PhD thesis Sur quelques points du calcul fonctionnel (wherein the notion of a metric was first introduced). After the notion of a general topological space was defined by Felix Hausdorff in 1914, although locally convex topologies were implicitly used by some mathematicians, up to 1934 only John von Neumann would seem to have explicitly defined the weak topology on Hilbert spaces and strong operator topology on operators on Hilbert spaces. Finally, in 1935 von Neumann introduced the general definition of a locally convex space (called a convex space by him). A notable example of a result which had to wait for the development and dissemination of general locally convex spaces (amongst other notions and results, like nets, the product topology and Tychonoff's theorem) to be proven in its full generality, is the Banach–Alaoglu theorem which Stefan Banach first established in 1932 by an elementary diagonal argument for the case of separable normed spaces (in which case the unit ball of the dual is metrizable). Definition Suppose is a vector space over a subfield of the complex numbers (normally itself or ). A locally convex space is defined either in terms of convex sets, or equivalently in terms of seminorms. Definition via convex sets A topological vector space (TVS) is called if it has a neighborhood basis (that is, a local base) at the origin consisting of balanced, convex sets. The term is sometimes shortened to or . A subset in is called Convex if for all and In other words, contains all line segments between points in Circled if for all and scalars if then If this means that is equal to its reflection through the origin. For it means for any contains the circle through centred on the origin, in the one-dimensional complex subspace generated by Balanced if for all and scalars if then If this means that if then contains the line segment between and For
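In symbols, the three conditions just described read as follows, for a subset C of the vector space X and scalars s from the base field (standard definitions, written out here for reference):

\[
\text{convex:}\quad t x + (1-t) y \in C \ \text{ for all } x, y \in C \text{ and } t \in [0,1];
\]
\[
\text{circled:}\quad s x \in C \ \text{ for all } x \in C \text{ and all scalars } s \text{ with } |s| = 1;
\]
\[
\text{balanced:}\quad s x \in C \ \text{ for all } x \in C \text{ and all scalars } s \text{ with } |s| \le 1.
\]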
https://en.wikipedia.org/wiki/Quadratic%20integral
In mathematics, a quadratic integral is an integral of the form It can be evaluated by completing the square in the denominator. Positive-discriminant case Assume that the discriminant q = b2 − 4ac is positive. In that case, define u and A by and The quadratic integral can now be written as The partial fraction decomposition allows us to evaluate the integral: The final result for the original integral, under the assumption that q > 0, is Negative-discriminant case In case the discriminant q = b2 − 4ac is negative, the second term in the denominator in is positive. Then the integral becomes References Weisstein, Eric W. "Quadratic Integral." From MathWorld--A Wolfram Web Resource, wherein the following is referenced: Integral calculus
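Carrying out the substitutions described above, with the integrand 1/(a + bx + cx²) and discriminant q = b² − 4ac, gives the standard closed forms; in LaTeX notation:

\[
\int \frac{dx}{a+bx+cx^2}
= \frac{1}{\sqrt{q}}\,\ln\left|\frac{2cx+b-\sqrt{q}}{2cx+b+\sqrt{q}}\right| + C
\qquad (q = b^2-4ac > 0),
\]
\[
\int \frac{dx}{a+bx+cx^2}
= \frac{2}{\sqrt{-q}}\,\arctan\!\left(\frac{2cx+b}{\sqrt{-q}}\right) + C
\qquad (q < 0).
\]

These agree with the familiar special cases ∫ dx/(x² − 1) = ½ ln|(x − 1)/(x + 1)| + C and ∫ dx/(x² + 1) = arctan x + C.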
https://en.wikipedia.org/wiki/Projection%20%28linear%20algebra%29
In linear algebra and functional analysis, a projection is a linear transformation from a vector space to itself (an endomorphism) such that . That is, whenever is applied twice to any vector, it gives the same result as if it were applied once (i.e. is idempotent). It leaves its image unchanged. This definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object. Definitions A projection on a vector space is a linear operator such that . When has an inner product and is complete, i.e. when is a Hilbert space, the concept of orthogonality can be used. A projection on a Hilbert space is called an orthogonal projection if it satisfies for all . A projection on a Hilbert space that is not orthogonal is called an oblique projection. Projection matrix A square matrix is called a projection matrix if it is equal to its square, i.e. if . A square matrix is called an orthogonal projection matrix if for a real matrix, and respectively for a complex matrix, where denotes the transpose of and denotes the adjoint or Hermitian transpose of . A projection matrix that is not an orthogonal projection matrix is called an oblique projection matrix. The eigenvalues of a projection matrix must be 0 or 1. Examples Orthogonal projection For example, the function which maps the point in three-dimensional space to the point is an orthogonal projection onto the xy-plane. This function is represented by the matrix The action of this matrix on an arbitrary vector is To see that is indeed a projection, i.e., , we compute Observing that shows that the projection is an orthogonal projection. Oblique projection A simple example of a non-orthogonal (oblique) projection is Via matrix multiplication, one sees that showing that is indeed a projection. The projection is orthogonal if and only if because only then Properties and classification Idempotence By definition, a projection is idempotent (i.e. ). Open map Every projection is an open map, meaning that it maps each open set in the domain to an open set in the subspace topology of the image. That is, for any vector and any ball (with positive radius) centered on , there exists a ball (with positive radius) centered on that is wholly contained in the image . Complementarity of image and kernel Let be a finite-dimensional vector space and be a projection on . Suppose the subspaces and are the image and kernel of respectively. Then has the following properties: is the identity operator on : We have a direct sum . Every vector may be decomposed uniquely as with and , and where The image and kernel of a projection are complementary, as are and . The operator is also a projection as the image and kernel of become the kernel and image of and vice versa. We say is a projection along onto (kernel/ima
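The defining computations above — idempotence, and symmetry in the real orthogonal case — are easy to check numerically. A minimal NumPy sketch: the 3×3 matrix is the xy-plane projection described above, while the 2×2 oblique matrix is an example of my own choosing, not necessarily the one used in the article:

```python
import numpy as np

P_orth = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 0]])   # orthogonal projection onto the xy-plane
P_obl = np.array([[1, 0],
                  [1, 0]])       # a non-orthogonal (oblique) projection

for P in (P_orth, P_obl):
    print(np.array_equal(P @ P, P),   # idempotent: P^2 = P (True for both)
          np.array_equal(P, P.T))     # symmetric, i.e. orthogonal (True, then False)
```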
https://en.wikipedia.org/wiki/Hessenberg%20matrix
In linear algebra, a Hessenberg matrix is a special kind of square matrix, one that is "almost" triangular. To be exact, an upper Hessenberg matrix has zero entries below the first subdiagonal, and a lower Hessenberg matrix has zero entries above the first superdiagonal. They are named after Karl Hessenberg. A Hessenberg decomposition is a matrix decomposition of a matrix into a unitary matrix and a Hessenberg matrix such that where denotes the conjugate transpose. Definitions Upper Hessenberg matrix A square matrix is said to be in upper Hessenberg form or to be an upper Hessenberg matrix if for all with . An upper Hessenberg matrix is called unreduced if all subdiagonal entries are nonzero, i.e. if for all . Lower Hessenberg matrix A square matrix is said to be in lower Hessenberg form or to be a lower Hessenberg matrix if its transpose is an upper Hessenberg matrix or equivalently if for all with . A lower Hessenberg matrix is called unreduced if all superdiagonal entries are nonzero, i.e. if for all . Examples Consider the following matrices. The matrix is an upper unreduced Hessenberg matrix, is a lower unreduced Hessenberg matrix and is a lower Hessenberg matrix but is not unreduced. Computer programming Many linear algebra algorithms require significantly less computational effort when applied to triangular matrices, and this improvement often carries over to Hessenberg matrices as well. If the constraints of a linear algebra problem do not allow a general matrix to be conveniently reduced to a triangular one, reduction to Hessenberg form is often the next best thing. In fact, reduction of any matrix to a Hessenberg form can be achieved in a finite number of steps (for example, through Householder's transformation of unitary similarity transforms). Subsequent reduction of Hessenberg matrix to a triangular matrix can be achieved through iterative procedures, such as shifted QR-factorization. In eigenvalue algorithms, the Hessenberg matrix can be further reduced to a triangular matrix through Shifted QR-factorization combined with deflation steps. Reducing a general matrix to a Hessenberg matrix and then reducing further to a triangular matrix, instead of directly reducing a general matrix to a triangular matrix, often economizes the arithmetic involved in the QR algorithm for eigenvalue problems. Reduction to Hessenberg matrix Any matrix can be transformed into a Hessenberg matrix by a similarity transformation using Householder transformations. The following procedure for such a transformation is adapted from A Second Course In Linear Algebra by Garcia & Roger. Let be any real or complex matrix, then let be the submatrix of constructed by removing the first row in and let be the first column of . Construct the householder matrix where This householder matrix will map to and as such, the block matrix will map the matrix to the matrix which has only zeros below the second entry of the first colum
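As a concrete illustration of the reduction described above, the following sketch assumes NumPy and SciPy (neither is part of the article). It computes a Hessenberg decomposition A = Q H Q* of a random matrix via Householder reflections and checks the claimed zero pattern below the first subdiagonal, as well as the fact that a similarity transform leaves the spectrum unchanged.

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))

H, Q = hessenberg(A, calc_q=True)        # A = Q H Q^H with Q unitary, H upper Hessenberg

assert np.allclose(Q @ H @ Q.conj().T, A)   # the decomposition reproduces A
assert np.allclose(np.tril(H, -2), 0)       # zero entries below the first subdiagonal

# similarity preserves eigenvalues: both prints show the same spectrum
print(np.sort_complex(np.linalg.eigvals(A)))
print(np.sort_complex(np.linalg.eigvals(H)))
```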
https://en.wikipedia.org/wiki/Tridiagonal%20matrix
In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal/lower diagonal (the first diagonal below this), and the supradiagonal/upper diagonal (the first diagonal above the main diagonal). For example, the following matrix is tridiagonal: The determinant of a tridiagonal matrix is given by the continuant of its elements. An orthogonal transformation of a symmetric (or Hermitian) matrix to tridiagonal form can be done with the Lanczos algorithm. Properties A tridiagonal matrix is a matrix that is both upper and lower Hessenberg matrix. In particular, a tridiagonal matrix is a direct sum of p 1-by-1 and q 2-by-2 matrices such that — the dimension of the tridiagonal. Although a general tridiagonal matrix is not necessarily symmetric or Hermitian, many of those that arise when solving linear algebra problems have one of these properties. Furthermore, if a real tridiagonal matrix A satisfies ak,k+1 ak+1,k > 0 for all k, so that the signs of its entries are symmetric, then it is similar to a Hermitian matrix, by a diagonal change of basis matrix. Hence, its eigenvalues are real. If we replace the strict inequality by ak,k+1 ak+1,k ≥ 0, then by continuity, the eigenvalues are still guaranteed to be real, but the matrix need no longer be similar to a Hermitian matrix. The set of all n × n tridiagonal matrices forms a 3n-2 dimensional vector space. Many linear algebra algorithms require significantly less computational effort when applied to diagonal matrices, and this improvement often carries over to tridiagonal matrices as well. Determinant The determinant of a tridiagonal matrix A of order n can be computed from a three-term recurrence relation. Write f1 = |a1| = a1 (i.e., f1 is the determinant of the 1 by 1 matrix consisting only of a1), and let The sequence (fi) is called the continuant and satisfies the recurrence relation with initial values f0 = 1 and f−1 = 0. The cost of computing the determinant of a tridiagonal matrix using this formula is linear in n, while the cost is cubic for a general matrix. Inversion The inverse of a non-singular tridiagonal matrix T is given by where the θi satisfy the recurrence relation with initial conditions θ0 = 1, θ1 = a1 and the ϕi satisfy with initial conditions ϕn+1 = 1 and ϕn = an. Closed form solutions can be computed for special cases such as symmetric matrices with all diagonal and off-diagonal elements equal or Toeplitz matrices and for the general case as well. In general, the inverse of a tridiagonal matrix is a semiseparable matrix and vice versa. Solution of linear system A system of equations Ax = b for  can be solved by an efficient form of Gaussian elimination when A is tridiagonal called tridiagonal matrix algorithm, requiring O(n) operations. Eigenvalues When a tridiagonal matrix is also Toeplitz, there is a simple closed-form solution for its eigenvalues, namely: A real symmetric tridiagonal matrix has rea
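The three-term recurrence for the determinant is short enough to implement directly. The sketch below uses plain Python for the recurrence and NumPy (not part of the article) only for the dense comparison; the test matrix is the classic second-difference matrix, chosen here for illustration.

```python
import numpy as np

def tridiag_det(main, upper, lower):
    """Determinant of a tridiagonal matrix via the continuant recurrence
    f_i = a_i * f_{i-1} - c_{i-1} * b_{i-1} * f_{i-2}, with f_0 = 1, f_{-1} = 0."""
    f_prev, f_curr = 0.0, 1.0                       # f_{-1}, f_0
    for i, a_i in enumerate(main):
        f_next = a_i * f_curr - (lower[i - 1] * upper[i - 1] * f_prev if i > 0 else 0.0)
        f_prev, f_curr = f_curr, f_next
    return f_curr

main  = [2.0, 2.0, 2.0, 2.0]     # main diagonal
upper = [-1.0, -1.0, -1.0]       # superdiagonal
lower = [-1.0, -1.0, -1.0]       # subdiagonal

A = np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)
assert np.isclose(tridiag_det(main, upper, lower), np.linalg.det(A))
print(tridiag_det(main, upper, lower))   # 5.0: the n-by-n second-difference matrix has determinant n + 1
```

The recurrence costs O(n) arithmetic operations, against O(n³) for a general dense determinant, which is the point made in the text.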
https://en.wikipedia.org/wiki/Pisot%E2%80%93Vijayaraghavan%20number
In mathematics, a Pisot–Vijayaraghavan number, also called simply a Pisot number or a PV number, is a real algebraic integer greater than 1, all of whose Galois conjugates are less than 1 in absolute value. These numbers were discovered by Axel Thue in 1912 and rediscovered by G. H. Hardy in 1919 within the context of diophantine approximation. They became widely known after the publication of Charles Pisot's dissertation in 1938. They also occur in the uniqueness problem for Fourier series. Tirukkannapuram Vijayaraghavan and Raphael Salem continued their study in the 1940s. Salem numbers are a closely related set of numbers. A characteristic property of PV numbers is that their powers approach integers at an exponential rate. Pisot proved a remarkable converse: if α > 1 is a real number such that the sequence measuring the distance from its consecutive powers to the nearest integer is square-summable, or ℓ 2, then α is a Pisot number (and, in particular, algebraic). Building on this characterization of PV numbers, Salem showed that the set S of all PV numbers is closed. Its minimal element is a cubic irrationality known as the plastic number. Much is known about the accumulation points of S. The smallest of them is the golden ratio. Definition and properties An algebraic integer of degree n is a root α of an irreducible monic polynomial P(x) of degree n with integer coefficients, its minimal polynomial. The other roots of P(x) are called the conjugates of α. If α > 1 but all other roots of P(x) are real or complex numbers of absolute value less than 1, so that they lie strictly inside the circle |x| = 1 in the complex plane, then α is called a Pisot number, Pisot–Vijayaraghavan number, or simply PV number. For example, the golden ratio, φ ≈ 1.618, is a real quadratic integer that is greater than 1, while the absolute value of its conjugate, −φ−1 ≈ −0.618, is less than 1. Therefore, φ is a Pisot number. Its minimal polynomial Elementary properties Every integer greater than 1 is a PV number. Conversely, every rational PV number is an integer greater than 1. If α is an irrational PV number whose minimal polynomial ends in k then α is greater than |k|. If α is a PV number then so are its powers αk, for all natural number exponents k. Every real algebraic number field K of degree n contains a PV number of degree n. This number is a field generator. The set of all PV numbers of degree n in K is closed under multiplication. Given an upper bound M and degree n, there are only a finite number of PV numbers of degree n that are less than M. Every PV number is a Perron number (a real algebraic number greater than one all of whose conjugates have smaller absolute value). Diophantine properties The main interest in PV numbers is due to the fact that their powers have a very "biased" distribution (mod 1). If α is a PV number and λ is any algebraic integer in the field then the sequence where ||x|| denotes the distance from the real numbe
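The defining behaviour, powers approaching integers exponentially fast, is easy to observe for the golden ratio. The sketch below uses only the standard-library decimal module; the identity φⁿ + ψⁿ = Lₙ (a Lucas number, hence an integer), where ψ is the Galois conjugate, is a standard fact not stated in the excerpt, and it is what makes the distance to the nearest integer equal |ψ|ⁿ.

```python
from decimal import Decimal, getcontext

getcontext().prec = 60
sqrt5 = Decimal(5).sqrt()
phi = (1 + sqrt5) / 2          # golden ratio, a Pisot number
psi = (1 - sqrt5) / 2          # its Galois conjugate, |psi| < 1

for n in (5, 10, 20, 40):
    power = phi ** n
    dist = abs(power - power.to_integral_value())   # distance to the nearest integer
    print(n, dist, abs(psi) ** n)                   # the last two columns agree: dist = |psi|^n
```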
https://en.wikipedia.org/wiki/Transcendental%20curve
In analytical geometry, a transcendental curve is a curve that is not an algebraic curve. Here for a curve, C, what matters is the point set (typically in the plane) underlying C, not a given parametrisation. For example, the unit circle is an algebraic curve (pedantically, the real points of such a curve); the usual parametrisation by trigonometric functions may involve those transcendental functions, but certainly the unit circle is defined by a polynomial equation. (The same remark applies to elliptic curves and elliptic functions, and in fact to curves of genus > 1 and automorphic functions.) The properties of algebraic curves, such as Bézout's theorem, give rise to criteria for showing that curves actually are transcendental. For example, an algebraic curve C either meets a given line L in a finite number of points, or else contains all of L. Thus a curve that intersects some line in infinitely many points, while not containing that line, must be transcendental. This applies, therefore, not just to sinusoidal curves but to large classes of curves showing oscillations. The term is originally attributed to Leibniz. Further examples Cycloid Trigonometric functions Logarithmic and exponential functions Archimedes' spiral Logarithmic spiral Catenary Tricomplex cosexponential References Curves
https://en.wikipedia.org/wiki/List%20of%20mathematics%20history%20topics
This is a list of mathematics history topics, by Wikipedia page. See also list of mathematicians, timeline of mathematics, history of mathematics, list of publications in mathematics. 1729 (anecdote) Adequality Archimedes Palimpsest Archimedes' use of infinitesimals Arithmetization of analysis Brachistochrone curve Chinese mathematics Cours d'Analyse Edinburgh Mathematical Society Erlangen programme Fermat's Last Theorem Greek mathematics Thomas Little Heath Hilbert's problems History of topos theory Hyperbolic quaternion Indian mathematics Islamic mathematics Italian school of algebraic geometry Kraków School of Mathematics Law of Continuity Lwów School of Mathematics Nicolas Bourbaki Non-Euclidean geometry Scottish Café Seven bridges of Königsberg Spectral theory Synthetic geometry Tautochrone curve Unifying theories in mathematics Waring's problem Warsaw School of Mathematics Academic positions Lowndean Professor of Astronomy and Geometry Lucasian professor Rouse Ball Professor of Mathematics Sadleirian Chair See also History
https://en.wikipedia.org/wiki/Koszul%20complex
The Koszul complex is a concept in mathematics introduced by Jean-Louis Koszul. Definition Let A be a commutative ring and s: Ar → A an A-linear map. Its Koszul complex Ks is where the maps send where means the term is omitted and means the wedge product. One may replace Ar with any A-module. Motivating example Let M be a manifold, variety, scheme, ..., and A be the ring of functions on it, denoted . The map s : Ar → A corresponds to picking r functions . When r = 1, the Koszul complex is whose cokernel is the ring of functions on the zero locus f = 0. In general, the Koszul complex is The cokernel of the last map is again functions on the zero locus f1 = ... = fr = 0. It is the tensor product of the r many Koszul complexes for fi = 0, so its dimensions are given by binomial coefficients. In pictures: given functions si, how do we define the locus where they all vanish? In algebraic geometry, the ring of functions of the zero locus is A/(s1, ..., sr). In derived algebraic geometry, the dg ring of functions is the Koszul complex. If the loci si = 0 intersect transversely, these are equivalent. Thus: Koszul complexes are derived intersections of zero loci. Properties Algebra structure First, the Koszul complex Ks of (A,s) is a chain complex: the composition of any two maps is zero. Second, the map makes it into a dg algebra. As a tensor product The Koszul complex is a tensor product: if s = (s1, ..., sr), then where ⊗ denotes the derived tensor product of chain complexes of A-modules. Vanishing in regular case When s1, ..., sr form a regular sequence, Ks → A is a quasi-isomorphism, i.e. and as for any s, H0(Ks) = A. History The Koszul complex was first introduced to define a cohomology theory for Lie algebras, by Jean-Louis Koszul (see Lie algebra cohomology). It turned out to be a useful general construction in homological algebra. As a tool, its homology can be used to tell when a set of elements of a (local) ring is an M-regular sequence, and hence it can be used to prove basic facts about the depth of a module or ideal which is an algebraic notion of dimension that is related to but different from the geometric notion of Krull dimension. Moreover, in certain circumstances, the complex is the complex of syzygies, that is, it tells you the relations between generators of a module, the relations between these relations, and so forth. Detailed Definition Let R be a commutative ring and E a free module of finite rank r over R. We write for the i-th exterior power of E. Then, given an R-linear map , the Koszul complex associated to s is the chain complex of R-modules: , where the differential is given by: for any in E, . The superscript means the term is omitted. To show that , use the self-duality of a Koszul complex. Note that and . Note also that ; this isomorphism is not canonical (for example, a choice of a volume form in differential geometry provides an example of such an isomorphism.) If (i.e., an
https://en.wikipedia.org/wiki/Sampling%20distribution
In statistics, a sampling distribution or finite-sample distribution is the probability distribution of a given random-sample-based statistic. If an arbitrarily large number of samples, each involving multiple observations (data points), were separately used in order to compute one value of a statistic (such as, for example, the sample mean or sample variance) for each sample, then the sampling distribution is the probability distribution of the values that the statistic takes on. In many contexts, only one sample is observed, but the sampling distribution can be found theoretically. Sampling distributions are important in statistics because they provide a major simplification en route to statistical inference. More specifically, they allow analytical considerations to be based on the probability distribution of a statistic, rather than on the joint probability distribution of all the individual sample values. Introduction The sampling distribution of a statistic is the distribution of that statistic, considered as a random variable, when derived from a random sample of size . It may be considered as the distribution of the statistic for all possible samples from the same population of a given sample size. The sampling distribution depends on the underlying distribution of the population, the statistic being considered, the sampling procedure employed, and the sample size used. There is often considerable interest in whether the sampling distribution can be approximated by an asymptotic distribution, which corresponds to the limiting case either as the number of random samples of finite size, taken from an infinite population and used to produce the distribution, tends to infinity, or when just one equally-infinite-size "sample" is taken of that same population. For example, consider a normal population with mean and variance . Assume we repeatedly take samples of a given size from this population and calculate the arithmetic mean for each sample – this statistic is called the sample mean. The distribution of these means, or averages, is called the "sampling distribution of the sample mean". This distribution is normal (n is the sample size) since the underlying population is normal, although sampling distributions may also often be close to normal even when the population distribution is not (see central limit theorem). An alternative to the sample mean is the sample median. When calculated from the same population, it has a different sampling distribution to that of the mean and is generally not normal (but it may be close for large sample sizes). The mean of a sample from a population having a normal distribution is an example of a simple statistic taken from one of the simplest statistical populations. For other statistics and other populations the formulas are more complicated, and often they do not exist in closed-form. In such cases the sampling distributions may be approximated through Monte-Carlo simulations, bootstrap methods, or
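The sampling distribution of the sample mean described above is straightforward to approximate by simulation. The sketch below assumes NumPy (not part of the article), and the population parameters and sample size are made-up values for illustration: it draws many samples of size n from a normal population and compares the spread of the resulting means with σ/√n.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n = 10.0, 2.0, 25          # hypothetical population parameters and sample size
num_samples = 200_000

# one sample mean per row of a (num_samples x n) array of draws
means = rng.normal(mu, sigma, size=(num_samples, n)).mean(axis=1)

print(means.mean())                   # close to mu = 10.0
print(means.std())                    # close to sigma / sqrt(n) = 0.4
```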
https://en.wikipedia.org/wiki/Schur%27s%20lemma
In mathematics, Schur's lemma is an elementary but extremely useful statement in representation theory of groups and algebras. In the group case it says that if M and N are two finite-dimensional irreducible representations of a group G and φ is a linear map from M to N that commutes with the action of the group, then either φ is invertible, or φ = 0. An important special case occurs when M = N, i.e. φ is a self-map; in particular, any element of the center of a group must act as a scalar operator (a scalar multiple of the identity) on M. The lemma is named after Issai Schur who used it to prove the Schur orthogonality relations and develop the basics of the representation theory of finite groups. Schur's lemma admits generalisations to Lie groups and Lie algebras, the most common of which are due to Jacques Dixmier and Daniel Quillen. Representation theory of groups Representation theory is the study of homomorphisms from a group, G, into the general linear group GL(V) of a vector space V; i.e., into the group of automorphisms of V. (Let us here restrict ourselves to the case when the underlying field of V is , the field of complex numbers.) Such a homomorphism is called a representation of G on V. A representation on V is a special case of a group action on V, but rather than permit any arbitrary bijections (permutations) of the underlying set of V, we restrict ourselves to invertible linear transformations. Let ρ be a representation of G on V. It may be the case that V has a subspace, W, such that for every element g of G, the invertible linear map ρ(g) preserves or fixes W, so that (ρ(g))(w) is in W for every w in W, and (ρ(g))(v) is not in W for any v not in W. In other words, every linear map ρ(g): V→V is also an automorphism of W, ρ(g): W→W, when its domain is restricted to W. We say W is stable under G, or stable under the action of G. It is clear that if we consider W on its own as a vector space, then there is an obvious representation of G on W—the representation we get by restricting each map ρ(g) to W. When W has this property, we call W with the given representation a subrepresentation of V. Every representation of G has itself and the zero vector space as trivial subrepresentations. A representation of G with no non-trivial subrepresentations is called an irreducible representation. Irreducible representations – like the prime numbers, or like the simple groups in group theory – are the building blocks of representation theory. Many of the initial questions and theorems of representation theory deal with the properties of irreducible representations. As we are interested in homomorphisms between groups, or continuous maps between topological spaces, we are interested in certain functions between representations of G. Let V and W be vector spaces, and let and be representations of G on V and W respectively. Then we define a map f from V to W to be a linear map from V to W that is equivariant under the action of G; that is,
https://en.wikipedia.org/wiki/Iwahori%E2%80%93Hecke%20algebra
In mathematics, the Iwahori–Hecke algebra, or Hecke algebra, named for Erich Hecke and Nagayoshi Iwahori, is a deformation of the group algebra of a Coxeter group. Hecke algebras are quotients of the group rings of Artin braid groups. This connection found a spectacular application in Vaughan Jones' construction of new invariants of knots. Representations of Hecke algebras led to discovery of quantum groups by Michio Jimbo. Michael Freedman proposed Hecke algebras as a foundation for topological quantum computation. Hecke algebras of Coxeter groups Start with the following data: (W, S) is a Coxeter system with the Coxeter matrix M = (mst), R is a commutative ring with identity. {qs | s ∈ S} is a family of units of R such that qs = qt whenever s and t are conjugate in W A is the ring of Laurent polynomials over Z with indeterminates qs (and the above restriction that qs = qt whenever s and t are conjugated), that is A = Z [q] Multiparameter Hecke Algebras The multiparameter Hecke algebra HR(W,S,q) is a unital, associative R-algebra with generators Ts for all s ∈ S and relations: Braid Relations: Ts Tt Ts ... = Tt Ts Tt ..., where each side has mst < ∞ factors and s,t belong to S. Quadratic Relation: For all s in S we have: (Ts - qs)(Ts + 1) = 0. Warning: in later books and papers, Lusztig used a modified form of the quadratic relation that reads After extending the scalars to include the half integer powers q the resulting Hecke algebra is isomorphic to the previously defined one (but the Ts here corresponds to q Ts in our notation). While this does not change the general theory, many formulae look different. Generic Multiparameter Hecke Algebras HA(W,S,q) is the generic multiparameter Hecke algebra. This algebra is universal in the sense that every other multiparameter Hecke algebra can be obtained from it via the (unique) ring homomorphism A → R which maps the indeterminate qs ∈ A to the unit qs ∈ R. This homomorphism turns R into a A-algebra and the scalar extension HA(W,S) ⊗A R is canonically isomorphic to the Hecke algebra HR(W,S,q) as constructed above. One calls this process specialization of the generic algebra. One-parameter Hecke Algebras If one specializes every indeterminate qs to a single indeterminate q over the integers (or q to q½ respectively), then one obtains the so-called generic one-parameter Hecke algebra of (W,S). Since in Coxeter groups with single laced Dynkin diagrams (for example groups of type A and D) every pair of Coxeter generators is conjugated, the above-mentioned restriction of qs being equal qt whenever s and t are conjugated in W forces the multiparameter and the one-parameter Hecke algebras to be equal. Therefore, it is also very common to only look at one-parameter Hecke algebras. Coxeter groups with weights If an integral weight function is defined on W (i.e. a map L:W → Z with L(vw)=L(v)+L(w) for all v,w ∈ W with l(vw)=l(v)+l(w)), then a common specialization to look at is the one induced
https://en.wikipedia.org/wiki/Hecke%20operator
In mathematics, in particular in the theory of modular forms, a Hecke operator, studied by , is a certain kind of "averaging" operator that plays a significant role in the structure of vector spaces of modular forms and more general automorphic representations. History used Hecke operators on modular forms in a paper on the special cusp form of Ramanujan, ahead of the general theory given by . Mordell proved that the Ramanujan tau function, expressing the coefficients of the Ramanujan form, is a multiplicative function: The idea goes back to earlier work of Adolf Hurwitz, who treated algebraic correspondences between modular curves which realise some individual Hecke operators. Mathematical description Hecke operators can be realized in a number of contexts. The simplest meaning is combinatorial, namely as taking for a given integer some function defined on the lattices of fixed rank to with the sum taken over all the that are subgroups of of index . For example, with and two dimensions, there are three such . Modular forms are particular kinds of functions of a lattice, subject to conditions making them analytic functions and homogeneous with respect to homotheties, as well as moderate growth at infinity; these conditions are preserved by the summation, and so Hecke operators preserve the space of modular forms of a given weight. Another way to express Hecke operators is by means of double cosets in the modular group. In the contemporary adelic approach, this translates to double cosets with respect to some compact subgroups. Explicit formula Let be the set of integral matrices with determinant and be the full modular group . Given a modular form of weight , the th Hecke operator acts by the formula where is in the upper half-plane and the normalization constant assures that the image of a form with integer Fourier coefficients has integer Fourier coefficients. This can be rewritten in the form which leads to the formula for the Fourier coefficients of in terms of the Fourier coefficients of : One can see from this explicit formula that Hecke operators with different indices commute and that if then , so the subspace of cusp forms of weight is preserved by the Hecke operators. If a (non-zero) cusp form is a simultaneous eigenform of all Hecke operators with eigenvalues then and . Hecke eigenforms are normalized so that , then Thus for normalized cuspidal Hecke eigenforms of integer weight, their Fourier coefficients coincide with their Hecke eigenvalues. Hecke algebras Algebras of Hecke operators are called "Hecke algebras", and are commutative rings. In the classical elliptic modular form theory, the Hecke operators with coprime to the level acting on the space of cusp forms of a given weight are self-adjoint with respect to the Petersson inner product. Therefore, the spectral theorem implies that there is a basis of modular forms that are eigenfunctions for these Hecke operators. E
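For the weight-12 cusp form Δ, the normalized Hecke eigenvalues are the Ramanujan tau values, so the multiplicativity mentioned in the History section can be checked directly. The explicit operator formula in the article was lost in extraction; the plain-Python sketch below instead uses the standard product expansion Δ = q ∏(1 − qⁿ)²⁴ (a well-known fact, not stated in the excerpt), with an arbitrary truncation bound, and verifies τ(6) = τ(2)τ(3), τ(10) = τ(2)τ(5) and the prime-power relation τ(4) = τ(2)² − 2¹¹.

```python
N = 12   # compute tau(n) for n up to N

# coefficients of prod_{k>=1} (1 - q^k)^24, truncated at degree N
poly = [0] * (N + 1)
poly[0] = 1
for k in range(1, N + 1):
    for _ in range(24):                      # multiply by (1 - q^k) twenty-four times
        for i in range(N - k, -1, -1):       # downward so each pass reads untouched entries
            poly[i + k] -= poly[i]

# Delta = q * prod(1 - q^k)^24, so tau(n) is the coefficient of q^(n-1) above
tau = [0] + poly[:N]

assert tau[1] == 1 and tau[2] == -24 and tau[3] == 252
assert tau[6] == tau[2] * tau[3]             # multiplicative on coprime arguments
assert tau[10] == tau[2] * tau[5]
assert tau[4] == tau[2] ** 2 - 2 ** 11       # Hecke relation at the prime 2
print(tau[1:])
```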
https://en.wikipedia.org/wiki/Radon%20measure
In mathematics (specifically in measure theory), a Radon measure, named after Johann Radon, is a measure on the -algebra of Borel sets of a Hausdorff topological space that is finite on all compact sets, outer regular on all Borel sets, and inner regular on open sets. These conditions guarantee that the measure is "compatible" with the topology of the space, and most measures used in mathematical analysis and in number theory are indeed Radon measures. Motivation A common problem is to find a good notion of a measure on a topological space that is compatible with the topology in some sense. One way to do this is to define a measure on the Borel sets of the topological space. In general there are several problems with this: for example, such a measure may not have a well defined support. Another approach to measure theory is to restrict to locally compact Hausdorff spaces, and only consider the measures that correspond to positive linear functionals on the space of continuous functions with compact support (some authors use this as the definition of a Radon measure). This produces a good theory with no pathological problems, but does not apply to spaces that are not locally compact. If there is no restriction to non-negative measures and complex measures are allowed, then Radon measures can be defined as the continuous dual space on the space of continuous functions with compact support. If such a Radon measure is real then it can be decomposed into the difference of two positive measures. Furthermore, an arbitrary Radon measure can be decomposed into four positive Radon measures, where the real and imaginary parts of the functional are each the differences of two positive Radon measures. The theory of Radon measures has most of the good properties of the usual theory for locally compact spaces, but applies to all Hausdorff topological spaces. The idea of the definition of a Radon measure is to find some properties that characterize the measures on locally compact spaces corresponding to positive functionals, and use these properties as the definition of a Radon measure on an arbitrary Hausdorff space. Definitions Let be a measure on the -algebra of Borel sets of a Hausdorff topological space . The measure is called inner regular or tight if, for any open set , is the supremum of over all compact subsets of . The measure is called outer regular if, for any Borel set , is the infimum of over all open sets containing . The measure is called locally finite if every point of has a neighborhood for which is finite. If is locally finite, then it follows that is finite on compact sets, and for locally compact Hausdorff spaces, the converse holds, too. Thus, in this case, local finiteness may be equivalently replaced by finiteness on compact subsets. The measure is called a Radon measure if it is inner regular and locally finite. In many situations, such as finite measures on locally compact spaces, this also implies outer re
https://en.wikipedia.org/wiki/Heegner%20number
In number theory, a Heegner number (as termed by Conway and Guy) is a square-free positive integer d such that the imaginary quadratic field has class number 1. Equivalently, the ring of algebraic integers of has unique factorization. The determination of such numbers is a special case of the class number problem, and they underlie several striking results in number theory. According to the (Baker–)Stark–Heegner theorem there are precisely nine Heegner numbers: This result was conjectured by Gauss and proved up to minor flaws by Kurt Heegner in 1952. Alan Baker and Harold Stark independently proved the result in 1966, and Stark further indicated the gap in Heegner's proof was minor. Euler's prime-generating polynomial Euler's prime-generating polynomial which gives (distinct) primes for n = 0, ..., 39, is related to the Heegner number 163 = 4 · 41 − 1. Rabinowitz proved that gives primes for if and only if this quadratic's discriminant is the negative of a Heegner number. (Note that yields , so is maximal.) 1, 2, and 3 are not of the required form, so the Heegner numbers that work are 7, 11, 19, 43, 67, 163, yielding prime generating functions of Euler's form for 2, 3, 5, 11, 17, 41; these latter numbers are called lucky numbers of Euler by F. Le Lionnais. Almost integers and Ramanujan's constant Ramanujan's constant is the transcendental number , which is an almost integer, in that it is very close to an integer: This number was discovered in 1859 by the mathematician Charles Hermite. In a 1975 April Fool article in Scientific American magazine, "Mathematical Games" columnist Martin Gardner made the hoax claim that the number was in fact an integer, and that the Indian mathematical genius Srinivasa Ramanujan had predicted it—hence its name. This coincidence is explained by complex multiplication and the q-expansion of the j-invariant. Detail In what follows, j(z) denotes the j-invariant of the complex number z. Briefly, is an integer for d a Heegner number, and via the q-expansion. If is a quadratic irrational, then the j-invariant is an algebraic integer of degree , the class number of and the minimal (monic integral) polynomial it satisfies is called the 'Hilbert class polynomial'. Thus if the imaginary quadratic extension has class number 1 (so d is a Heegner number), the j-invariant is an integer. The q-expansion of j, with its Fourier series expansion written as a Laurent series in terms of , begins as: The coefficients asymptotically grow as and the low order coefficients grow more slowly than , so for , j is very well approximated by its first two terms. Setting yields Now so, Or, where the linear term of the error is, explaining why is within approximately the above of being an integer. Pi formulas The Chudnovsky brothers found in 1987 that a proof of which uses the fact that For similar formulas, see the Ramanujan–Sato series. Other Heegner numbers For the four largest Heegner numbers, the appr
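Both phenomena described above, the prime-generating polynomial and the almost-integer, can be checked numerically. The sketch below assumes SymPy for primality testing and mpmath for high-precision arithmetic; neither library is part of the article.

```python
from sympy import isprime
from mpmath import mp, exp, pi, sqrt, nint, nstr

# Euler's polynomial n^2 + n + 41 is prime for n = 0, ..., 39 (tied to the Heegner number 163 = 4*41 - 1)
assert all(isprime(n * n + n + 41) for n in range(40))
assert not isprime(40 * 40 + 40 + 41)        # 1681 = 41^2, the first failure

# Ramanujan's constant exp(pi*sqrt(163)) is an almost integer
mp.dps = 40
r = exp(pi * sqrt(163))
print(nstr(r, 35))                           # 262537412640768743.99999999999925...
print(nstr(nint(r) - r, 5))                  # roughly 7.5e-13 short of an integer
```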
https://en.wikipedia.org/wiki/Homotopy%20lifting%20property
In mathematics, in particular in homotopy theory within algebraic topology, the homotopy lifting property (also known as an instance of the right lifting property or the covering homotopy axiom) is a technical condition on a continuous function from a topological space E to another one, B. It is designed to support the picture of E "above" B by allowing a homotopy taking place in B to be moved "upstairs" to E. For example, a covering map has a property of unique local lifting of paths to a given sheet; the uniqueness is because the fibers of a covering map are discrete spaces. The homotopy lifting property will hold in many situations, such as the projection in a vector bundle, fiber bundle or fibration, where there need be no unique way of lifting. Formal definition Assume all maps are continuous functions between topological spaces. Given a map , and a space , one says that has the homotopy lifting property, or that has the homotopy lifting property with respect to , if: for any homotopy , and for any map lifting (i.e., so that ), there exists a homotopy lifting (i.e., so that ) which also satisfies . The following diagram depicts this situation: The outer square (without the dotted arrow) commutes if and only if the hypotheses of the lifting property are true. A lifting corresponds to a dotted arrow making the diagram commute. This diagram is dual to that of the homotopy extension property; this duality is loosely referred to as Eckmann–Hilton duality. If the map satisfies the homotopy lifting property with respect to all spaces , then is called a fibration, or one sometimes simply says that has the homotopy lifting property. A weaker notion of fibration is Serre fibration, for which homotopy lifting is only required for all CW complexes . Generalization: homotopy lifting extension property There is a common generalization of the homotopy lifting property and the homotopy extension property. Given a pair of spaces , for simplicity we denote . Given additionally a map , one says that has the homotopy lifting extension property if: For any homotopy , and For any lifting of , there exists a homotopy which covers (i.e., such that ) and extends (i.e., such that ). The homotopy lifting property of is obtained by taking , so that above is simply . The homotopy extension property of is obtained by taking to be a constant map, so that is irrelevant in that every map to E is trivially the lift of a constant map to the image point of . See also Covering space Fibration Notes References . Jean-Pierre Marquis (2006) "A path to Epistemology of Mathematics: Homotopy theory", pages 239 to 260 in The Architecture of Modern Mathematics, J. Ferreiros & J.J. Gray, editors, Oxford University Press External links Homotopy theory Algebraic topology
https://en.wikipedia.org/wiki/Deterministic%20system
In mathematics, computer science and physics, a deterministic system is a system in which no randomness is involved in the development of future states of the system. A deterministic model will thus always produce the same output from a given starting condition or initial state. In physics Physical laws that are described by differential equations represent deterministic systems, even though the state of the system at a given point in time may be difficult to describe explicitly. In quantum mechanics, the Schrödinger equation, which describes the continuous time evolution of a system's wave function, is deterministic. However, the relationship between a system's wave function and the observable properties of the system appears to be non-deterministic. In mathematics The systems studied in chaos theory are deterministic. If the initial state were known exactly, then the future state of such a system could theoretically be predicted. However, in practice, knowledge about the future state is limited by the precision with which the initial state can be measured, and chaotic systems are characterized by a strong dependence on the initial conditions. This sensitivity to initial conditions can be measured with Lyapunov exponents. Markov chains and other random walks are not deterministic systems, because their development depends on random choices. In computer science A deterministic model of computation, for example a deterministic Turing machine, is a model of computation such that the successive states of the machine and the operations to be performed are completely determined by the preceding state. A deterministic algorithm is an algorithm which, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states. There may be non-deterministic algorithms that run on a deterministic machine, for example, an algorithm that relies on random choices. Generally, for such random choices, one uses a pseudorandom number generator, but one may also use some external physical process, such as the last digits of the time given by the computer clock. A pseudorandom number generator is a deterministic algorithm, that is designed to produce sequences of numbers that behave as random sequences. A hardware random number generator, however, may be non-deterministic. Others In economics, the Ramsey–Cass–Koopmans model is deterministic. The stochastic equivalent is known as real business-cycle theory. See also Deterministic system (philosophy) Dynamical system Scientific modelling Statistical model Stochastic process References System Dynamical systems
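As a small illustration of the last point, the sketch below implements a linear congruential generator, a common pseudorandom number generator design; the particular constants are a familiar textbook choice rather than anything specified in the article. Because the next state is a pure function of the current one, the same seed always reproduces the same sequence.

```python
from itertools import islice

def lcg(seed, a=1664525, c=1013904223, m=2 ** 32):
    """Linear congruential generator: each state is completely determined
    by the previous one, so the whole sequence follows from the seed."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

run1 = list(islice(lcg(42), 5))
run2 = list(islice(lcg(42), 5))
assert run1 == run2                 # deterministic: same seed, same output
print(run1)
```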
https://en.wikipedia.org/wiki/Centerpoint
Centerpoint (alternatively spelled centrepoint) may refer to: Centerpoint (geometry), a generalization of the median to two or more dimensions Organizations CenterPoint Energy, an electric and natural gas utility in the U.S.A. CenterPoint Properties, Chicago industrial real estate developer Centrepoint (charity), a UK charitable trust for homeless young people Centrepoint Theatre, a theatre and theatre company in Palmerston North, New Zealand Places Sydney Tower, also known as Centrepoint Tower, in Sydney, New South Wales, Australia Westfield Sydney, a shopping centre under the Sydney Tower Centerpoint Mall (Toronto), Ontario, Canada Centrepointe, a neighbourhood in Ottawa, Ontario, Canada Centre Point Sabah, in Kota Kinabalu, Sabah, Malaysia Centrepoint (commune), a former commune in Albany, New Zealand SM City Sta. Mesa, Manila, Philippines, formerly known as SM Centerpoint The Centrepoint, a shopping centre in Singapore Centre Point, an office building in central London, England Centerpoint, Ohio, United States, an unincorporated community Centerpointe Mall, Grand Rapids, Michigan, United States Centerpoint Medical Center, Independence, Missouri, United States CentrePointe, Lexington, Kentucky, United States, a building See also Center Point (disambiguation)
https://en.wikipedia.org/wiki/Free%20monoid
In abstract algebra, the free monoid on a set is the monoid whose elements are all the finite sequences (or strings) of zero or more elements from that set, with string concatenation as the monoid operation and with the unique sequence of zero elements, often called the empty string and denoted by ε or λ, as the identity element. The free monoid on a set A is usually denoted A∗. The free semigroup on A is the subsemigroup of A∗ containing all elements except the empty string. It is usually denoted A+. More generally, an abstract monoid (or semigroup) S is described as free if it is isomorphic to the free monoid (or semigroup) on some set. As the name implies, free monoids and semigroups are those objects which satisfy the usual universal property defining free objects, in the respective categories of monoids and semigroups. It follows that every monoid (or semigroup) arises as a homomorphic image of a free monoid (or semigroup). The study of semigroups as images of free semigroups is called combinatorial semigroup theory. Free monoids (and monoids in general) are associative, by definition; that is, they are written without any parenthesis to show grouping or order of operation. The non-associative equivalent is the free magma. Examples Natural numbers The monoid (N0,+) of natural numbers (including zero) under addition is a free monoid on a singleton free generator, in this case the natural number 1. According to the formal definition, this monoid consists of all sequences like "1", "1+1", "1+1+1", "1+1+1+1", and so on, including the empty sequence. Mapping each such sequence to its evaluation result and the empty sequence to zero establishes an isomorphism from the set of such sequences to N0. This isomorphism is compatible with "+", that is, for any two sequences s and t, if s is mapped (i.e. evaluated) to a number m and t to n, then their concatenation s+t is mapped to the sum m+n. Kleene star In formal language theory, usually a finite set of "symbols" A (sometimes called the alphabet) is considered. A finite sequence of symbols is called a "word over A", and the free monoid A∗ is called the "Kleene star of A". Thus, the abstract study of formal languages can be thought of as the study of subsets of finitely generated free monoids. For example, assuming an alphabet A = {a, b, c}, its Kleene star A∗ contains all concatenations of a, b, and c: {ε, a, ab, ba, caa, , ...}. If A is any set, the word length function on A∗ is the unique monoid homomorphism from A∗ to (N0,+) that maps each element of A to 1. A free monoid is thus a graded monoid. (A graded monoid is a monoid that can be written as . Each is a grade; the grading here is just the length of the string. That is, contains those strings of length The symbol here can be taken to mean "set union"; it is used instead of the symbol because, in general, set unions might not be monoids, and so a distinct symbol is used. By convention, gradations are always written with the
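The Kleene-star and word-length statements above can be illustrated with ordinary Python strings, which form exactly this structure under concatenation; the particular words below are arbitrary.

```python
s, t, u = "ab", "ba", "caa"          # words over the alphabet {a, b, c}

assert (s + t) + u == s + (t + u)    # concatenation is associative
assert s + "" == "" + s == s         # the empty string is the identity element

# len is the monoid homomorphism A* -> (N, +) sending each letter to 1
assert len(s + t) == len(s) + len(t)
assert len("") == 0                  # the identity is sent to 0
```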
https://en.wikipedia.org/wiki/Linear%20separability
In Euclidean geometry, linear separability is a property of two sets of points. This is most easily visualized in two dimensions (the Euclidean plane) by thinking of one set of points as being colored blue and the other set of points as being colored red. These two sets are linearly separable if there exists at least one line in the plane with all of the blue points on one side of the line and all the red points on the other side. This idea immediately generalizes to higher-dimensional Euclidean spaces if the line is replaced by a hyperplane. The problem of determining if a pair of sets is linearly separable and finding a separating hyperplane if they are, arises in several areas. In statistics and machine learning, classifying certain types of data is a problem for which good algorithms exist that are based on this concept. Mathematical definition Let and be two sets of points in an n-dimensional Euclidean space. Then and are linearly separable if there exist n + 1 real numbers , such that every point satisfies and every point satisfies , where is the -th component of . Equivalently, two sets are linearly separable precisely when their respective convex hulls are disjoint (colloquially, do not overlap). In simple 2D, it can also be imagined that the set of points under a linear transformation collapses into a line, on which there exists a value, k, greater than which one set of points will fall into, and lesser than which the other set of points fall. Examples Three non-collinear points in two classes ('+' and '-') are always linearly separable in two dimensions. This is illustrated by the three examples in the following figure (the all '+' case is not shown, but is similar to the all '-' case): However, not all sets of four points, no three collinear, are linearly separable in two dimensions. The following example would need two straight lines and thus is not linearly separable: Notice that three points which are collinear and of the form "+ ⋅⋅⋅ — ⋅⋅⋅ +" are also not linearly separable. Number of linear separations Let be the number of ways to linearly separate N points (in general position) in K dimensions, thenWhen K is large, is very close to one when , but very close to zero when . In words, one perceptron unit can almost certainly memorize a random assignment of binary labels on N points when , but almost certainly not when . Linear separability of Boolean functions in n variables A Boolean function in n variables can be thought of as an assignment of 0 or 1 to each vertex of a Boolean hypercube in n dimensions. This gives a natural division of the vertices into two sets. The Boolean function is said to be linearly separable provided these two sets of points are linearly separable. The number of distinct Boolean functions is where n is the number of variables passed into the function. Such functions are also called linear threshold logic, or perceptrons. The classical theory is summarized in , as Knuth claims. T
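The counting formula referred to above was lost in extraction; in the standard statement of this function-counting result it reads C(N, K) = 2 Σ_{k=0}^{K−1} C(N−1, k), and the sketch below assumes that form. It tabulates the fraction of the 2^N possible binary labelings that are linearly separable and shows the sharp transition around N = 2K described in the text.

```python
from math import comb

def num_separations(N, K):
    """Number of linearly separable labelings of N points in general position
    in K dimensions, assuming C(N, K) = 2 * sum_{k < K} C(N - 1, k)."""
    return 2 * sum(comb(N - 1, k) for k in range(K))

K = 20
for N in (10, 20, 40, 60, 80):
    frac = num_separations(N, K) / 2 ** N
    print(N, round(frac, 6))     # ~1 for N << 2K, exactly 0.5 at N = 2K, ~0 for N >> 2K
```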
https://en.wikipedia.org/wiki/Constructible%20polygon
In mathematics, a constructible polygon is a regular polygon that can be constructed with compass and straightedge. For example, a regular pentagon is constructible with compass and straightedge while a regular heptagon is not. There are infinitely many constructible polygons, but only 31 with an odd number of sides are known. Conditions for constructibility Some regular polygons are easy to construct with compass and straightedge; others are not. The ancient Greek mathematicians knew how to construct a regular polygon with 3, 4, or 5 sides, and they knew how to construct a regular polygon with double the number of sides of a given regular polygon. This led to the question being posed: is it possible to construct all regular polygons with compass and straightedge? If not, which n-gons (that is, polygons with n edges) are constructible and which are not? Carl Friedrich Gauss proved the constructibility of the regular 17-gon in 1796. Five years later, he developed the theory of Gaussian periods in his Disquisitiones Arithmeticae. This theory allowed him to formulate a sufficient condition for the constructibility of regular polygons. Gauss stated without proof that this condition was also necessary, but never published his proof. A full proof of necessity was given by Pierre Wantzel in 1837. The result is known as the Gauss–Wantzel theorem: A regular n-gon can be constructed with compass and straightedge if and only if n is a power of 2 or the product of a power of 2 and any number of distinct Fermat primes. A Fermat prime is a prime number of the form In order to reduce a geometric problem to a problem of pure number theory, the proof uses the fact that a regular n-gon is constructible if and only if the cosine is a constructible number—that is, can be written in terms of the four basic arithmetic operations and the extraction of square roots. Equivalently, a regular n-gon is constructible if any root of the nth cyclotomic polynomial is constructible. Detailed results by Gauss's theory Restating the Gauss–Wantzel theorem: A regular n-gon is constructible with straightedge and compass if and only if n = 2kp1p2...pt where k and t are non-negative integers, and the pi's (when t > 0) are distinct Fermat primes. The five known Fermat primes are: F0 = 3, F1 = 5, F2 = 17, F3 = 257, and F4 = 65537 . Since there are 31 combinations of anywhere from one to five Fermat primes, there are 31 known constructible polygons with an odd number of sides. The next twenty-eight Fermat numbers, F5 through F32, are known to be composite. Thus a regular n-gon is constructible if n = 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48, 51, 60, 64, 68, 80, 85, 96, 102, 120, 128, 136, 160, 170, 192, 204, 240, 255, 256, 257, 272, 320, 340, 384, 408, 480, 510, 512, 514, 544, 640, 680, 768, 771, 816, 960, 1020, 1024, 1028, 1088, 1280, 1285, 1360, 1536, 1542, 1632, 1920, 2040, 2048, ... , while a regular n-gon is not constructible with compass and str
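The Gauss–Wantzel condition is easy to turn into a test. The sketch below checks the odd part of n against the five known Fermat primes (which is sufficient for the small range printed here) and reproduces the start of the list of constructible n given above.

```python
FERMAT_PRIMES = [3, 5, 17, 257, 65537]     # the five known Fermat primes

def is_constructible(n):
    """Gauss-Wantzel: a regular n-gon (n >= 3) is constructible iff n is a
    power of 2 times a product of distinct Fermat primes."""
    if n < 3:
        return False
    while n % 2 == 0:                       # strip the power of 2
        n //= 2
    for p in FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:                  # a repeated Fermat prime is not allowed
                return False
    return n == 1

print([n for n in range(3, 61) if is_constructible(n)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48, 51, 60]
```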
https://en.wikipedia.org/wiki/Chiliagon
In geometry, a chiliagon () or 1,000-gon is a polygon with 1,000 sides. Philosophers commonly refer to chiliagons to illustrate ideas about the nature and workings of thought, meaning, and mental representation. Regular chiliagon A regular chiliagon is represented by Schläfli symbol {1,000} and can be constructed as a truncated 500-gon, t{500}, or a twice-truncated 250-gon, tt{250}, or a thrice-truncated 125-gon, ttt{125}. The measure of each internal angle in a regular chiliagon is 179°38'24"/rad. The area of a regular chiliagon with sides of length a is given by This result differs from the area of its circumscribed circle by less than 4 parts per million. Because 1,000 = 23 × 53, the number of sides is neither a product of distinct Fermat primes nor a power of two. Thus the regular chiliagon is not a constructible polygon. Indeed, it is not even constructible with the use of an angle trisector, as the number of sides is neither a product of distinct Pierpont primes, nor a product of powers of two and three. Therefore, construction of a chiliagon requires other techniques such as the quadratrix of Hippias, Archimedean spiral, or other auxiliary curves. For example, a 9° angle can first be constructed with compass and straightedge, which can then be quintisected (divided into five equal parts) twice using an auxiliary curve to produce the 21'36" internal angle required. Philosophical application René Descartes uses the chiliagon as an example in his Sixth Meditation to demonstrate the difference between pure intellection and imagination. He says that, when one thinks of a chiliagon, he "does not imagine the thousand sides or see them as if they were present" before him – as he does when one imagines a triangle, for example. The imagination constructs a "confused representation," which is no different from that which it constructs of a myriagon (a polygon with ten thousand sides). However, he does clearly understand what a chiliagon is, just as he understands what a triangle is, and he is able to distinguish it from a myriagon. Therefore, the intellect is not dependent on imagination, Descartes claims, as it is able to entertain clear and distinct ideas when imagination is unable to. Philosopher Pierre Gassendi, a contemporary of Descartes, was critical of this interpretation, believing that while Descartes could imagine a chiliagon, he could not understand it: one could "perceive that the word 'chiliagon' signifies a figure with a thousand angles [but] that is just the meaning of the term, and it does not follow that you understand the thousand angles of the figure any better than you imagine them." The example of a chiliagon is also referenced by other philosophers. David Hume points out that it is "impossible for the eye to determine the angles of a chiliagon to be equal to 1.996 right angles, or make any conjecture, that approaches this proportion." Gottfried Leibniz comments on a use of the chiliagon by John Locke, noting that one can
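The internal-angle figure quoted above follows from the general interior-angle formula 180°(n − 2)/n; the short computation below converts the value for n = 1000 into degrees, minutes and seconds.

```python
n = 1000
angle = 180 * (n - 2) / n            # 179.64 degrees
degrees = int(angle)
minutes = int((angle - degrees) * 60)
seconds = round(((angle - degrees) * 60 - minutes) * 60)
print(degrees, minutes, seconds)     # 179 38 24, i.e. 179 deg 38 min 24 s
```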
https://en.wikipedia.org/wiki/Myriagon
In geometry, a myriagon or 10000-gon is a polygon with 10000 sides. Several philosophers have used the regular myriagon to illustrate issues regarding thought. Regular myriagon A regular myriagon is represented by Schläfli symbol {10,000} and can be constructed as a truncated 5000-gon, t{5000}, or a twice-truncated 2500-gon, tt{2500}, or a thrice-truncated 1250-gon, ttt{1250), or a four-fold-truncated 625-gon, tttt{625}. The measure of each internal angle in a regular myriagon is 179.964°. The area of a regular myriagon with sides of length a is given by The result differs from the area of its circumscribed circle by up to 40 parts per billion. Because 10,000 = 24 × 54, the number of sides is neither a product of distinct Fermat primes nor a power of two. Thus the regular myriagon is not a constructible polygon. Indeed, it is not even constructible with the use of an angle trisector, as the number of sides is neither a product of distinct Pierpont primes, nor a product of powers of two and three. Symmetry The regular myriagon has Dih10000 dihedral symmetry, order 20000, represented by 10000 lines of reflection. Dih10000 has 24 dihedral subgroups: (Dih5000, Dih2500, Dih1250, Dih625), (Dih2000, Dih1000, Dih500, Dih250, Dih125), (Dih400, Dih200, Dih100, Dih50, Dih25), (Dih80, Dih40, Dih20, Dih10, Dih5), and (Dih16, Dih8, Dih4, Dih2, Dih1). It also has 25 more cyclic symmetries as subgroups: (Z10000, Z5000, Z2500, Z1250, Z625), (Z2000, Z1000, Z500, Z250, Z125), (Z400, Z200, Z100, Z50, Z25), (Z80, Z40, Z20, Z10), and (Z16, Z8, Z4, Z2, Z1), with Zn representing π/n radian rotational symmetry. John Conway labels these lower symmetries with a letter and order of the symmetry follows the letter. r20000 represents full symmetry, and a1 labels no symmetry. He gives d (diagonal) with mirror lines through vertices, p with mirror lines through edges (perpendicular), i with mirror lines through both vertices and edges, and g for rotational symmetry. These lower symmetries allows degrees of freedom in defining irregular myriagons. Only the g10000 subgroup has no degrees of freedom but can seen as directed edges. Myriagram A myriagram is a 10,000-sided star polygon. There are 1999 regular forms given by Schläfli symbols of the form {10000/n}, where n is an integer between 2 and 5,000 that is coprime to 10,000. There are also 3000 regular star figures in the remaining cases. In popular culture In the novella Flatland, the Chief Circle is assumed to have ten thousand sides, making him a myriagon. See also Chiliagon Megagon Notes References Polygons by the number of sides
https://en.wikipedia.org/wiki/256%20%28number%29
256 (two hundred [and] fifty-six) is the natural number following 255 and preceding 257. In mathematics 256 is a composite number, with the factorization 256 = 2^8, which makes it a power of two. 256 is 4 raised to the 4th power, so in tetration notation, 256 is ²4. 256 is the value of the expression , where . 256 is a perfect square (16^2). 256 is the only 3-digit number that is zenzizenzizenzic. It is 2 to the 8th power or ((2^2)^2)^2. 256 is the lowest number that is a product of eight prime factors. 256 is the number of parts in all compositions of 7. In computing One octet (in most cases one byte) is equal to eight bits and has 2^8 or 256 possible values, counting from 0 to 255. The number 256 often appears in computer applications (especially on 8-bit systems) such as: The typical number of different values in each color channel of a digital color image (256 values for red, 256 values for green, and 256 values for blue used for 24-bit color) (see color space or Web colors). The number of colors available in a GIF or a 256-color (8-bit) bitmap. The number of characters in extended ASCII and Latin-1. The number of columns available in a Microsoft Excel worksheet until Excel 2007. The split-screen level in Pac-Man, which results from the use of a single byte to store the internal level counter. A 256-bit integer can represent up to 115,792,089,237,316,195,423,570,985,008,687,907,853,269,984,665,640,564,039,457,584,007,913,129,639,936 values. The number of bits in the SHA-256 cryptographic hash. The branding number of Nvidia's GeForce 256. References Integers
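The claim that 256 is the number of parts in all compositions of 7 can be checked by brute force. The sketch below enumerates compositions by deciding, for each of the six gaps between seven unit blocks, whether to cut there, and then counts the parts.

```python
from itertools import product

def compositions(n):
    """Yield every composition of n as a tuple of positive parts."""
    for cuts in product((0, 1), repeat=n - 1):   # cut or not in each of the n-1 gaps
        parts, size = [], 1
        for cut in cuts:
            if cut:
                parts.append(size)
                size = 1
            else:
                size += 1
        parts.append(size)
        yield tuple(parts)

comps = list(compositions(7))
print(len(comps))                        # 64 = 2^6 compositions of 7
print(sum(len(c) for c in comps))        # 256 parts in total, as stated
```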
https://en.wikipedia.org/wiki/Simply%20connected%20space
In topology, a topological space is called simply connected (or 1-connected, or 1-simply connected) if it is path-connected and every path between two points can be continuously transformed (intuitively for embedded spaces, staying within the space) into any other such path while preserving the two endpoints in question. The fundamental group of a topological space is an indicator of the failure for the space to be simply connected: a path-connected topological space is simply connected if and only if its fundamental group is trivial. Definition and equivalent formulations A topological space is called if it is path-connected and any loop in defined by can be contracted to a point: there exists a continuous map such that restricted to is Here, and denotes the unit circle and closed unit disk in the Euclidean plane respectively. An equivalent formulation is this: is simply connected if and only if it is path-connected, and whenever and are two paths (that is, continuous maps) with the same start and endpoint ( and ), then can be continuously deformed into while keeping both endpoints fixed. Explicitly, there exists a homotopy such that and A topological space is simply connected if and only if is path-connected and the fundamental group of at each point is trivial, i.e. consists only of the identity element. Similarly, is simply connected if and only if for all points the set of morphisms in the fundamental groupoid of has only one element. In complex analysis: an open subset is simply connected if and only if both and its complement in the Riemann sphere are connected. The set of complex numbers with imaginary part strictly greater than zero and less than one furnishes a nice example of an unbounded, connected, open subset of the plane whose complement is not connected. It is nevertheless simply connected. It might also be worth pointing out that a relaxation of the requirement that be connected leads to an interesting exploration of open subsets of the plane with connected extended complement. For example, a (not necessarily connected) open set has a connected extended complement exactly when each of its connected components are simply connected. Informal discussion Informally, an object in our space is simply connected if it consists of one piece and does not have any "holes" that pass all the way through it. For example, neither a doughnut nor a coffee cup (with a handle) is simply connected, but a hollow rubber ball is simply connected. In two dimensions, a circle is not simply connected, but a disk and a line are. Spaces that are connected but not simply connected are called non-simply connected or multiply connected. The definition rules out only handle-shaped holes. A sphere (or, equivalently, a rubber ball with a hollow center) is simply connected, because any loop on the surface of a sphere can contract to a point even though it has a "hole" in the hollow center. The stronger condition, that the object
https://en.wikipedia.org/wiki/Ideal%20number
In number theory an ideal number is an algebraic integer which represents an ideal in the ring of integers of a number field; the idea was developed by Ernst Kummer, and led to Richard Dedekind's definition of ideals for rings. An ideal in the ring of integers of an algebraic number field is principal if it consists of multiples of a single element of the ring, and nonprincipal otherwise. By the principal ideal theorem any nonprincipal ideal becomes principal when extended to an ideal of the Hilbert class field. This means that there is an element of the ring of integers of the Hilbert class field, which is an ideal number, such that the original nonprincipal ideal is equal to the collection of all multiples of this ideal number by elements of this ring of integers that lie in the original field's ring of integers. Example For instance, let be a root of , then the ring of integers of the field is , which means all with and integers form the ring of integers. An example of a nonprincipal ideal in this ring is the set of all where and are integers; the cube of this ideal is principal, and in fact the class group is cyclic of order three. The corresponding class field is obtained by adjoining an element satisfying to , giving . An ideal number for the nonprincipal ideal is . Since this satisfies the equation it is an algebraic integer. All elements of the ring of integers of the class field which when multiplied by give a result in are of the form , where and The coefficients α and β are also algebraic integers, satisfying and respectively. Multiplying by the ideal number gives , which is the nonprincipal ideal. History Kummer first published the failure of unique factorization in cyclotomic fields in 1844 in an obscure journal; it was reprinted in 1847 in Liouville's journal. In subsequent papers in 1846 and 1847 he published his main theorem, the unique factorization into (actual and ideal) primes. It is widely believed that Kummer was led to his "ideal complex numbers" by his interest in Fermat's Last Theorem; there is even a story often told that Kummer, like Lamé, believed he had proven Fermat's Last Theorem until Lejeune Dirichlet told him his argument relied on unique factorization; but the story was first told by Kurt Hensel in 1910 and the evidence indicates it likely derives from a confusion by one of Hensel's sources. Harold Edwards says the belief that Kummer was mainly interested in Fermat's Last Theorem "is surely mistaken" (Edwards 1977, p. 79). Kummer's use of the letter λ to represent a prime number, α to denote a λth root of unity, and his study of the factorization of prime number into "complex numbers composed of th roots of unity" all derive directly from a paper of Jacobi which is concerned with higher reciprocity laws. Kummer's 1844 memoir was in honor of the jubilee celebration of the University of Königsberg and was meant as a tribute to Jacobi. Although Kummer had studied Fermat's Last Theorem in th
https://en.wikipedia.org/wiki/Internal%20and%20external%20angles
In geometry, an angle of a polygon is formed by two adjacent sides. For a simple (non-self-intersecting) polygon, regardless of whether it is convex or non-convex, this angle is called an (or interior angle) if a point within the angle is in the interior of the polygon. A polygon has exactly one internal angle per vertex. If every internal angle of a simple polygon is less than a straight angle ( radians or 180°), then the polygon is called convex. In contrast, an (also called a turning angle or exterior angle) is an angle formed by one side of a simple polygon and a line extended from an adjacent side. Properties The sum of the internal angle and the external angle on the same vertex is π radians (180°). The sum of all the internal angles of a simple polygon is π(n−2) radians or 180(n–2) degrees, where n is the number of sides. The formula can be proved by using mathematical induction: starting with a triangle, for which the angle sum is 180°, then replacing one side with two sides connected at another vertex, and so on. The sum of the external angles of any simple convex or non-convex polygon, if only one of the two external angles is assumed at each vertex, is 2π radians (360°). The measure of the exterior angle at a vertex is unaffected by which side is extended: the two exterior angles that can be formed at a vertex by extending alternately one side or the other are vertical angles and thus are equal. Extension to crossed polygons The interior angle concept can be extended in a consistent way to crossed polygons such as star polygons by using the concept of directed angles. In general, the interior angle sum in degrees of any closed polygon, including crossed (self-intersecting) ones, is then given by 180(n–2k)°, where n is the number of vertices, and the strictly positive integer k is the number of total (360°) revolutions one undergoes by walking around the perimeter of the polygon. In other words, the sum of all the exterior angles is 2πk radians or 360k degrees. Example: for ordinary convex polygons and concave polygons, k = 1, since the exterior angle sum is 360°, and one undergoes only one full revolution by walking around the perimeter. References External links Internal angles of a triangle Interior angle sum of polygons: a general formula - Provides an interactive Java activity that extends the interior angle sum formula for simple closed polygons to include crossed (complex) polygons. Angle Euclidean plane geometry Elementary geometry Polygons
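As a quick check of the angle-sum formulas above, here is a small Python sketch (the function name is ours): the interior angles of a simple polygon total 180(n−2)°, and since interior and exterior angles sum to 180° at each vertex, the n exterior angles always total 360°.

def interior_angle_sum(n):
    # Sum of the interior angles of a simple polygon with n sides, in degrees.
    return 180 * (n - 2)

for n in range(3, 13):
    exterior_sum = 180 * n - interior_angle_sum(n)   # one exterior angle per vertex
    assert exterior_sum == 360                       # 2π radians, independent of n

print(interior_angle_sum(3), interior_angle_sum(4))  # 180 and 360 degrees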
https://en.wikipedia.org/wiki/Bisect
Bisect, or similar, may refer to: Mathematics Bisection, in geometry, dividing something into two equal parts Bisection method, a root-finding algorithm Equidistant set Other uses Bisect (philately), the use of postage stamp halves Bisector (music), a half octave in diatonic set theory Bisection (software engineering), for finding code changes bisection of earthworms to study regeneration
https://en.wikipedia.org/wiki/Ernest%20Esclangon
Ernest Benjamin Esclangon (17 March 1876 – 28 January 1954) was a French astronomer and mathematician. Born in Mison, Alpes-de-Haute-Provence, in 1895 he started to study mathematics at the École Normale Supérieure, graduating in 1898. Looking for some means of financial support while he completed his doctorate on quasi-periodic functions, he took a post at the Bordeaux Observatory, teaching some mathematics at the university. During World War I, he worked on ballistics and developed a novel method for precisely locating enemy artillery. When a gun is fired, it initiates a spherical shock wave but the projectile also generates a conical wave. By using the sound of distant guns to compare the two waves, Esclangon was able to make accurate predictions of gun locations. After the armistice in 1919, Esclangon became director of the Strasbourg Observatory and professor of astronomy at the university the following year. In 1929, he was appointed director of the Paris Observatory and of the International Time Bureau, and elected to the Bureau des Longitudes in 1932. He is perhaps best remembered for initiating in 1933 the first speaking clock service, reportedly to relieve the observatory staff from the numerous telephone calls requesting the exact time. He was elected to the Académie des Sciences in 1939. Esclangon was the President of the Société astronomique de France (SAF), the French astronomical society, from 1933 to 1935. In 1935, he received the Prix Jules Janssen, the society's highest award. Serving as director of the Paris Observatory throughout World War II and the German occupation of Paris, he retired in 1944. He died in Eyrenville, France. The binary asteroid 1509 Esclangona is named after him. The lunar crater Esclangon is named after him. His doctoral students include Daniel Barbier, Édmée Chandon, Louis Couffignal, André-Louis Danjon, and Nicolas Stoyko. References External links 20th-century French astronomers 1876 births 1954 deaths Members of the French Academy of Sciences École Normale Supérieure alumni Academic staff of the University of Strasbourg People from Alpes-de-Haute-Provence 20th-century French mathematicians Presidents of the International Astronomical Union
https://en.wikipedia.org/wiki/Dan%20Grimaldi
Dan Grimaldi (born March 7, 1946) is an American actor and mathematics professor who is known for his roles as twins Philly and Patsy Parisi on the HBO television series The Sopranos, various characters on Law & Order (1991-2001), Don't Go in the House (1979), The Junkman (1983), Men of Respect (1990), and The Yards (2000). Education Grimaldi has a bachelor's degree in mathematics from Fordham University, a master's degree in operations research from New York University, and a PhD in data processing from the City University of New York, and teaches in the Department of Mathematics and Computer Science at Kingsborough Community College in Brooklyn, New York. Career In addition to his role on The Sopranos, he has also had some minor film credits, most notably as mother-fixated pyromaniac Donny Kohler in the 1980 slasher film Don't Go in the House, and some guest TV appearances, including several episodes on Law & Order as well as appearing in 2011 as Tommy Barrone Sr. in "Moonlighting", the 9th episode of the 2nd season of the CBS show Blue Bloods. He appeared as an executive in the 2000 film The Yards and Grimaldi also voices "Frank" for the video game Mafia. Filmography Film Television Video games References External links HBO.com 1946 births Living people American male television actors American male voice actors CUNY Graduate Center alumni Fordham University alumni American people of Italian descent Mathematics educators Male actors from New York City New York University alumni American operations researchers
https://en.wikipedia.org/wiki/Submanifold
In mathematics, a submanifold of a manifold M is a subset S which itself has the structure of a manifold, and for which the inclusion map satisfies certain properties. There are different types of submanifolds depending on exactly which properties are required. Different authors often have different definitions. Formal definition In the following we assume all manifolds are differentiable manifolds of class Cr for a fixed , and all morphisms are differentiable of class Cr. Immersed submanifolds An immersed submanifold of a manifold M is the image S of an immersion map ; in general this image will not be a submanifold as a subset, and an immersion map need not even be injective (one-to-one) – it can have self-intersections. More narrowly, one can require that the map be an injection (one-to-one), in which we call it an injective immersion, and define an immersed submanifold to be the image subset S together with a topology and differential structure such that S is a manifold and the inclusion f is a diffeomorphism: this is just the topology on N, which in general will not agree with the subset topology: in general the subset S is not a submanifold of M, in the subset topology. Given any injective immersion the image of N in M can be uniquely given the structure of an immersed submanifold so that is a diffeomorphism. It follows that immersed submanifolds are precisely the images of injective immersions. The submanifold topology on an immersed submanifold need not be the subspace topology inherited from M. In general, it will be finer than the subspace topology (i.e. have more open sets). Immersed submanifolds occur in the theory of Lie groups where Lie subgroups are naturally immersed submanifolds. They also appear in the study of foliations where immersed submanifolds provide the right context to prove the Frobenius theorem. Embedded submanifolds An embedded submanifold (also called a regular submanifold), is an immersed submanifold for which the inclusion map is a topological embedding. That is, the submanifold topology on S is the same as the subspace topology. Given any embedding of a manifold N in M the image f(N) naturally has the structure of an embedded submanifold. That is, embedded submanifolds are precisely the images of embeddings. There is an intrinsic definition of an embedded submanifold which is often useful. Let M be an n-dimensional manifold, and let k be an integer such that . A k-dimensional embedded submanifold of M is a subset such that for every point there exists a chart containing p such that is the intersection of a k-dimensional plane with φ(U). The pairs form an atlas for the differential structure on S. Alexander's theorem and the Jordan–Schoenflies theorem are good examples of smooth embeddings. Other variations There are some other variations of submanifolds used in the literature. A neat submanifold is a manifold whose boundary agrees with the boundary of the entire manifold. Sharpe (1997) d
https://en.wikipedia.org/wiki/Hypersurface
In geometry, a hypersurface is a generalization of the concepts of hyperplane, plane curve, and surface. A hypersurface is a manifold or an algebraic variety of dimension , which is embedded in an ambient space of dimension , generally a Euclidean space, an affine space or a projective space. Hypersurfaces share, with surfaces in a three-dimensional space, the property of being defined by a single implicit equation, at least locally (near every point), and sometimes globally. A hypersurface in a (Euclidean, affine, or projective) space of dimension two is a plane curve. In a space of dimension three, it is a surface. For example, the equation defines an algebraic hypersurface of dimension in the Euclidean space of dimension . This hypersurface is also a smooth manifold, and is called a hypersphere or an -sphere. Smooth hypersurface A hypersurface that is a smooth manifold is called a smooth hypersurface. In , a smooth hypersurface is orientable. Every connected compact smooth hypersurface is a level set, and separates Rn into two connected components; this is related to the Jordan–Brouwer separation theorem. Affine algebraic hypersurface An algebraic hypersurface is an algebraic variety that may be defined by a single implicit equation of the form where is a multivariate polynomial. Generally the polynomial is supposed to be irreducible. When this is not the case, the hypersurface is not an algebraic variety, but only an algebraic set. It may depend on the authors or the context whether a reducible polynomial defines a hypersurface. For avoiding ambiguity, the term irreducible hypersurface is often used. As for algebraic varieties, the coefficients of the defining polynomial may belong to any fixed field , and the points of the hypersurface are the zeros of in the affine space where is an algebraically closed extension of . A hypersurface may have singularities, which are the common zeros, if any, of the defining polynomial and its partial derivatives. In particular, a real algebraic hypersurface is not necessarily a manifold. Properties Hypersurfaces have some specific properties that are not shared with other algebraic varieties. One of the main such properties is Hilbert's Nullstellensatz, which asserts that a hypersurface contains a given algebraic set if and only if the defining polynomial of the hypersurface has a power that belongs to the ideal generated by the defining polynomials of the algebraic set. A corollary of this theorem is that, if two irreducible polynomials (or more generally two square-free polynomials) define the same hypersurface, then one is the product of the other by a nonzero constant. Hypersurfaces are exactly the subvarieties of dimension of an affine space of dimension of . This is the geometric interpretation of the fact that, in a polynomial ring over a field, the height of an ideal is 1 if and only if the ideal is a principal ideal. In the case of possibly reducible hypersurfaces, this re
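The article's criterion for singular points (the common zeros of the defining polynomial and its partial derivatives) can be checked symbolically. A minimal SymPy sketch of our own, taking the unit sphere in R³ as the hypersurface:

import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + z**2 - 1                        # defining polynomial of the unit 2-sphere in R^3

# Singular points are common zeros of f and all of its partial derivatives.
equations = [f] + [sp.diff(f, v) for v in (x, y, z)]
print(sp.solve(equations, [x, y, z], dict=True))  # [] -- no singular points, so this
                                                  # real algebraic hypersurface is a smooth manifold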
https://en.wikipedia.org/wiki/Codimension
In mathematics, codimension is a basic geometric idea that applies to subspaces in vector spaces, to submanifolds in manifolds, and suitable subsets of algebraic varieties. For affine and projective algebraic varieties, the codimension equals the height of the defining ideal. For this reason, the height of an ideal is often called its codimension. The dual concept is relative dimension. Definition Codimension is a relative concept: it is only defined for one object inside another. There is no “codimension of a vector space (in isolation)”, only the codimension of a vector subspace. If W is a linear subspace of a finite-dimensional vector space V, then the codimension of W in V is the difference between the dimensions: It is the complement of the dimension of W, in that, with the dimension of W, it adds up to the dimension of the ambient space V: Similarly, if N is a submanifold or subvariety in M, then the codimension of N in M is Just as the dimension of a submanifold is the dimension of the tangent bundle (the number of dimensions that you can move on the submanifold), the codimension is the dimension of the normal bundle (the number of dimensions you can move off the submanifold). More generally, if W is a linear subspace of a (possibly infinite dimensional) vector space V then the codimension of W in V is the dimension (possibly infinite) of the quotient space V/W, which is more abstractly known as the cokernel of the inclusion. For finite-dimensional vector spaces, this agrees with the previous definition and is dual to the relative dimension as the dimension of the kernel. Finite-codimensional subspaces of infinite-dimensional spaces are often useful in the study of topological vector spaces. Additivity of codimension and dimension counting The fundamental property of codimension lies in its relation to intersection: if W1 has codimension k1, and W2 has codimension k2, then if U is their intersection with codimension j we have max (k1, k2) ≤ j ≤ k1 + k2. In fact j may take any integer value in this range. This statement is more perspicuous than the translation in terms of dimensions, because the RHS is just the sum of the codimensions. In words codimensions (at most) add. If the subspaces or submanifolds intersect transversally (which occurs generically), codimensions add exactly. This statement is called dimension counting, particularly in intersection theory. Dual interpretation In terms of the dual space, it is quite evident why dimensions add. The subspaces can be defined by the vanishing of a certain number of linear functionals, which if we take to be linearly independent, their number is the codimension. Therefore, we see that U is defined by taking the union of the sets of linear functionals defining the Wi. That union may introduce some degree of linear dependence: the possible values of j express that dependence, with the RHS sum being the case where there is no dependence. This definition of codimension in terms
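The dual interpretation above lends itself to a concrete finite-dimensional check: describe each subspace of R⁵ by the linear functionals that vanish on it, so the codimension is the number of independent functionals, and for a pair with no dependence among those functionals the codimensions add exactly. A small NumPy sketch of our own:

import numpy as np

# Subspaces of R^5 given by vanishing linear functionals (one functional per row).
A1 = np.array([[0, 0, 0, 1, 0], [0, 0, 0, 0, 1]])   # W1 = {x : x4 = x5 = 0}, codimension 2
A2 = np.array([[1, 0, 0, 0, 0], [0, 1, 0, 0, 0]])   # W2 = {x : x1 = x2 = 0}, codimension 2

codim = lambda A: np.linalg.matrix_rank(A)           # number of independent functionals
codim_meet = np.linalg.matrix_rank(np.vstack([A1, A2]))

assert codim(A1) == 2 and codim(A2) == 2
assert codim_meet == codim(A1) + codim(A2)           # no linear dependence, so codimensions add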
https://en.wikipedia.org/wiki/Singular%20perturbation
In mathematics, a singular perturbation problem is a problem containing a small parameter that cannot be approximated by setting the parameter value to zero. More precisely, the solution cannot be uniformly approximated by an asymptotic expansion as . Here is the small parameter of the problem and are a sequence of functions of of increasing order, such as . This is in contrast to regular perturbation problems, for which a uniform approximation of this form can be obtained. Singularly perturbed problems are generally characterized by dynamics operating on multiple scales. Several classes of singular perturbations are outlined below. The term "singular perturbation" was coined in the 1940s by Kurt Otto Friedrichs and Wolfgang R. Wasow. Methods of analysis A perturbed problem whose solution can be approximated on the whole problem domain, whether space or time, by a single asymptotic expansion has a regular perturbation. Most often in applications, an acceptable approximation to a regularly perturbed problem is found by simply replacing the small parameter by zero everywhere in the problem statement. This corresponds to taking only the first term of the expansion, yielding an approximation that converges, perhaps slowly, to the true solution as decreases. The solution to a singularly perturbed problem cannot be approximated in this way: As seen in the examples below, a singular perturbation generally occurs when a problem's small parameter multiplies its highest operator. Thus naively taking the parameter to be zero changes the very nature of the problem. In the case of differential equations, boundary conditions cannot be satisfied; in algebraic equations, the possible number of solutions is decreased. Singular perturbation theory is a rich and ongoing area of exploration for mathematicians, physicists, and other researchers. The methods used to tackle problems in this field are many. The more basic of these include the method of matched asymptotic expansions and WKB approximation for spatial problems, and in time, the Poincaré–Lindstedt method, the method of multiple scales and periodic averaging. The numerical methods for solving singular perturbation problems are also very popular. For books on singular perturbation in ODE and PDE's, see for example Holmes, Introduction to Perturbation Methods, Hinch, Perturbation methods or Bender and Orszag, Advanced Mathematical Methods for Scientists and Engineers. Examples of singular perturbative problems Each of the examples described below shows how a naive perturbation analysis, which assumes that the problem is regular instead of singular, will fail. Some show how the problem may be solved by more sophisticated singular methods. Vanishing coefficients in ordinary differential equations Differential equations that contain a small parameter that premultiplies the highest order term typically exhibit boundary layers, so that the solution evolves in two different scales. For example, consi
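The algebraic case mentioned above, where the number of solutions drops when the small parameter is naively set to zero, is the easiest to see numerically. A minimal sketch (our own example, the perturbed quadratic εx² + x − 1 = 0):

import numpy as np

eps = 1e-3
# For eps > 0 the quadratic eps*x**2 + x - 1 = 0 has two roots; setting eps = 0
# leaves the single root x = 1, so one solution is lost in the naive limit.
roots = np.roots([eps, 1, -1])
print(sorted(roots))                # one root near 1, one "singular" root near -1/eps

# Leading-order asymptotics: x ≈ 1 - eps (regular root) and x ≈ -1/eps - 1 (singular root)
print(1 - eps, -1 / eps - 1)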
https://en.wikipedia.org/wiki/Brazilian%20Institute%20of%20Geography%20and%20Statistics
The Brazilian Institute of Geography and Statistics (; IBGE) is the agency responsible for official collection of statistical, geographic, cartographic, geodetic and environmental information in Brazil. IBGE performs a decennial national census; questionnaires account for information such as age, household income, literacy, education, occupation and hygiene levels. IBGE is a public institute created in 1936 under the name National Institute of Statistics. Its founder and chief proponent was statistician Mário Augusto Teixeira de Freitas. The current name dates from 1938. Its headquarters are located in Rio de Janeiro, and its current president is Eduardo Rios Neto. It was made a federal agency by Decree-Law No. 161 on February 13, 1967, and is linked to the Ministry of the Economy, inside the Secretariat of Planning, Budget and Management. Structure IBGE has a network of national research and dissemination components, comprising: 27 state units (26 in state capitals and one in the Federal District); 27 centres for documentation and dissemination of information (26 in the capital and one in the Federal District); 27 units for the supervision of territorial mapping (26 in the capital and one in the Federal District); 585 data collection agencies in major cities. Headquarters in Rio de Janeiro (the capital of the Republic when the Office was established). Also in Rio are five boards and a school: Executive Directors (ED), Directorate of Research (DPE), Department of Geosciences (DGC), Department of Informatics (DI), Center for Documentation and Information Dissemination (CDDI) and the , a degree-granting institution. The Directorate of Research is responsible for planning and coordinating the research of nature and processing of statistical data collected by the state units; the Department of Geosciences is responsible for basic cartography, the national geodetic system, with a survey of natural resources and environment and by survey and geographical studies. The Center for Documentation and Information Dissemination is responsible for documentation and dissemination of information produced by the institute as well as coordinating the 27 CDDIs in the country, and the National School of Statistical Sciences, besides being responsible for training the institute's employees, is a federal institution of higher learning that offers the following courses: BA in statistics; specialization in Environmental Analysis and Management Planning, and Masters in Population Studies and Social Research. The IBGE also maintains the Roncador Ecological Reserve, situated 35 km south of Brasília. System of national accounts Gives an overview of the economy and describes the phenomena of economic life: production, consumption and wealth accumulation, providing a comprehensive and simplified representation of these data. The System of National Accounts IBGE follows the most recent UN recommendations expressed in the Handbook of National Accounts (System of N
https://en.wikipedia.org/wiki/Ordered%20ring
In abstract algebra, an ordered ring is a (usually commutative) ring R with a total order ≤ such that for all a, b, and c in R: if a ≤ b then a + c ≤ b + c. if 0 ≤ a and 0 ≤ b then 0 ≤ ab. Examples Ordered rings are familiar from arithmetic. Examples include the integers, the rationals and the real numbers. (The rationals and reals in fact form ordered fields.) The complex numbers, in contrast, do not form an ordered ring or field, because there is no inherent order relationship between the elements 1 and i. Positive elements In analogy with the real numbers, we call an element c of an ordered ring R positive if 0 < c, and negative if c < 0. 0 is considered to be neither positive nor negative. The set of positive elements of an ordered ring R is often denoted by R+. An alternative notation, favored in some disciplines, is to use R+ for the set of nonnegative elements, and R++ for the set of positive elements. Absolute value If a is an element of an ordered ring R, then the absolute value of a, denoted |a|, is defined thus: |a| = a if 0 ≤ a, and |a| = −a otherwise, where −a is the additive inverse of a and 0 is the additive identity element. Discrete ordered rings A discrete ordered ring or discretely ordered ring is an ordered ring in which there is no element between 0 and 1. The integers are a discrete ordered ring, but the rational numbers are not. Basic properties For all a, b and c in R: If a ≤ b and 0 ≤ c, then ac ≤ bc. This property is sometimes used to define ordered rings instead of the second property in the definition above. |ab| = |a||b|. An ordered ring that is not trivial is infinite. Exactly one of the following is true: a is positive, −a is positive, or a = 0. This property follows from the fact that ordered rings are abelian, linearly ordered groups with respect to addition. In an ordered ring, no negative element is a square: firstly, 0 is a square; now if a ≠ 0 and a = b², then b ≠ 0 and a = (−b)²; as either b or −b is positive, a must be nonnegative. See also Riesz space, also called vector lattice Ordered semirings Notes The list below includes references to theorems formally verified by the IsarMathLib project. Ordered groups Real algebraic geometry
https://en.wikipedia.org/wiki/Cubic%20surface
In mathematics, a cubic surface is a surface in 3-dimensional space defined by one polynomial equation of degree 3. Cubic surfaces are fundamental examples in algebraic geometry. The theory is simplified by working in projective space rather than affine space, and so cubic surfaces are generally considered in projective 3-space . The theory also becomes more uniform by focusing on surfaces over the complex numbers rather than the real numbers; note that a complex surface has real dimension 4. A simple example is the Fermat cubic surface in . Many properties of cubic surfaces hold more generally for del Pezzo surfaces. Rationality of cubic surfaces A central feature of smooth cubic surfaces X over an algebraically closed field is that they are all rational, as shown by Alfred Clebsch in 1866. That is, there is a one-to-one correspondence defined by rational functions between the projective plane minus a lower-dimensional subset and X minus a lower-dimensional subset. More generally, every irreducible cubic surface (possibly singular) over an algebraically closed field is rational unless it is the projective cone over a cubic curve. In this respect, cubic surfaces are much simpler than smooth surfaces of degree at least 4 in , which are never rational. In characteristic zero, smooth surfaces of degree at least 4 in are not even uniruled. More strongly, Clebsch showed that every smooth cubic surface in over an algebraically closed field is isomorphic to the blow-up of at 6 points. As a result, every smooth cubic surface over the complex numbers is diffeomorphic to the connected sum , where the minus sign refers to a change of orientation. Conversely, the blow-up of at 6 points is isomorphic to a cubic surface if and only if the points are in general position, meaning that no three points lie on a line and all 6 do not lie on a conic. As a complex manifold (or an algebraic variety), the surface depends on the arrangement of those 6 points. 27 lines on a cubic surface Most proofs of rationality for cubic surfaces start by finding a line on the surface. (In the context of projective geometry, a line in is isomorphic to .) More precisely, Arthur Cayley and George Salmon showed in 1849 that every smooth cubic surface over an algebraically closed field contains exactly 27 lines. This is a distinctive feature of cubics: a smooth quadric (degree 2) surface is covered by a continuous family of lines, while most surfaces of degree at least 4 in contain no lines. Another useful technique for finding the 27 lines involves Schubert calculus which computes the number of lines using the intersection theory of the Grassmannian of lines on . As the coefficients of a smooth complex cubic surface are varied, the 27 lines move continuously. As a result, a closed loop in the family of smooth cubic surfaces determines a permutation of the 27 lines. The group of permutations of the 27 lines arising this way is called the monodromy group of the family of cubic
https://en.wikipedia.org/wiki/Karl%20L.%20Littrow
Karl Ludwig Edler von Littrow (18 July 1811 – 16 November 1877) was an Austrian astronomer. Born in Kazan, Russian Empire, he was the son of astronomer Joseph Johann Littrow. He studied mathematics and astronomy at the universities of Vienna and Berlin, receiving his doctorate at the University of Krakow in 1832. In 1842 he succeeded his father as director of the Vienna Observatory. Under his leadership, construction of a new observatory began in Währing in 1872; he died, however, prior to its completion. He was the husband of Auguste von Littrow. He died in Venice, Italy. He is the great-great-grandfather of Roman Catholic Cardinal Christoph Schönborn. Publications Beitrag zu einer Monographie des Halleyschen Cometen, (1834) – Monograph on Halley's comet. Verzeichnis geographischer Ortsbestimmungen, (1844) – Directory of geographical localizations. Die Wunder des Himmels : gemeinverständliche Darstellung des astronomischen Weltbildes, (1854) – The wonders of the heavens; a common understanding of the astronomical world image. Physische Zusammenkünfte der Planeten, (1859). He made contributions to a new edition of Johann Samuel Traugott Gehler's Physikalisches wörterbunch. References 1811 births 1877 deaths 19th-century Austrian astronomers Edlers of Austria University of Vienna alumni
https://en.wikipedia.org/wiki/Lists%20of%20mathematics%20topics
Lists of mathematics topics cover a variety of topics related to mathematics. Some of these lists link to hundreds of articles; some link only to a few. The template to the right includes links to alphabetical lists of all mathematical articles. This article brings together the same content organized in a manner better suited for browsing. Lists cover aspects of basic and advanced mathematics, methodology, mathematical statements, integrals, general concepts, mathematical objects, and reference tables. They also cover equations named after people, societies, mathematicians, journals, and meta-lists. The purpose of this list is not similar to that of the Mathematics Subject Classification formulated by the American Mathematical Society. Many mathematics journals ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The subject codes so listed are used by the two major reviewing databases, Mathematical Reviews and Zentralblatt MATH. This list has some items that would not fit in such a classification, such as list of exponential topics and list of factorial and binomial topics, which may surprise the reader with the diversity of their coverage. Basic mathematics This branch is typically taught in secondary education or in the first year of university. Outline of arithmetic Outline of discrete mathematics List of calculus topics List of geometry topics Outline of geometry List of trigonometry topics Outline of trigonometry List of trigonometric identities List of logarithmic identities List of integrals of logarithmic functions List of set identities and relations List of topics in logic Areas of advanced mathematics As a rough guide, this list is divided into pure and applied sections although in reality, these branches are overlapping and intertwined. Pure mathematics Algebra Algebra includes the study of algebraic structures, which are sets and operations defined on these sets satisfying certain axioms. The field of algebra is further divided according to which structure is studied; for instance, group theory concerns an algebraic structure called group. Outline of algebra Glossary of field theory Glossary of group theory Glossary of linear algebra Glossary of ring theory List of abstract algebra topics List of algebraic structures List of Boolean algebra topics List of category theory topics List of cohomology theories List of commutative algebra topics List of homological algebra topics List of group theory topics List of representation theory topics List of linear algebra topics List of reciprocity laws Calculus and analysis Calculus studies the computation of limits, derivatives, and integrals of functions of real numbers, and in particular studies instantaneous rates of change. Analysis evolved from calculus. Glossary of tensor theory List of complex analysis topics List of functional analysis topics List of vector spa
https://en.wikipedia.org/wiki/Hilbert%27s%20eighth%20problem
Hilbert's eighth problem is one of David Hilbert's list of open mathematical problems posed in 1900. It concerns number theory, and in particular the Riemann hypothesis, although it is also concerned with the Goldbach Conjecture. The problem as stated asked for more work on the distribution of primes and generalizations of Riemann hypothesis to other rings where prime ideals take the place of primes. Subtopics Riemann hypothesis and generalizations Hilbert calls for a solution to the Riemann hypothesis, which has long been regarded as the deepest open problem in mathematics. Given the solution, he calls for more thorough investigation into Riemann's zeta function and the prime number theorem. Goldbach conjecture He calls for a solution to the Goldbach conjecture, as well as more general problems, such as finding infinitely many pairs of primes solving a fixed linear diophantine equation. Twin prime conjecture Generalized Riemann conjecture Finally, he calls for mathematicians to generalize the ideas of the Riemann hypothesis to counting prime ideals in a number field. External links English translation of Hilbert's original address 08 References
https://en.wikipedia.org/wiki/Indian%20Agricultural%20Statistics%20Research%20Institute
The Indian Agricultural Statistics Research Institute is an institute under the Indian Council of Agricultural Research (ICAR) with the mandate for developing new techniques for the design of agricultural experiments as well as to analyze data in agriculture. The institute is affiliated with and is located in the campus of the Indian Agricultural Research Institute, a deemed university, at Pusa in New Delhi. The institute includes sections that specialize in statistical techniques for animal and plant breeding, bioinformatics, sampling, experimental design, modelling and forecasting. Origin and history In 1930 the, then, Imperial Council of Agricultural Research, started a statistical unit to assist the State Departments of Agriculture and Animal Husbandry in planning their experiments, analysis of experimental data, interpretation of results and rendering advice on the formulation of the technical programmes of the Council. This unit was established on the recommendation of Leslie Coleman. This unit was headed from 1940 by the statistician Dr. P.V.Sukhatme who had studied with Jerzy Neyman in London. The early research was on reliable methods for collecting yield statistics of principal food crops. Further research in sampling and statistics was initiated and this became a Statistical Branch in 1945. The branch soon acquired international recognition as a centre for research and training in the field of Agricultural Statistics. In 1949 it was named as Statistical Wing of the Indian Council of Agricultural Research (ICAR). In 1952, at the recommendations of Food and Agriculture Organization (FAO) experts Dr Frank Yates and Dr D. J. Finney it was expanded and in 1955 it moved to the Pusa campus. On 2 July 1959 it was renamed as the Institute of Agricultural Research Statistics (IARS). In 1964, a Memorandum of Understanding was signed with the Indian Agricultural Research Institute (IARI), New Delhi and courses in M.Sc. and Ph.D. degrees were offered. In 1964, it was one of the few institutes with a computer, an IBM 1620 Model-II Electronic Computer. In 1970, it became a full institute under the ICAR and the name was changed to Indian Agricultural Statistics Research Institute (IASRI) on 1 January 1978. In 1977, a third generation computer Burroughs B-4700 system was installed in a new building. In 1991–95, the old computers were replaced by new networked PC systems. References Indian education P. V. Sukhatme (1966) Major Developments in Sampling Theory and Practice, in F. N. David (ed.) Research Papers in Statistics, New York: Wiley. Shashikala Sukhatme (2002) Pandurang V. Sukhatme 1911–1997, Journal of Statistical Planning and Inference, 102(1), 3-24. External links Official site Universities and colleges in Delhi Indian Council of Agricultural Research Research institutes in Delhi
https://en.wikipedia.org/wiki/Abraham%20Fraenkel
Abraham Fraenkel (; February 17, 1891 – October 15, 1965) was a German-born Israeli mathematician. He was an early Zionist and the first Dean of Mathematics at the Hebrew University of Jerusalem. He is known for his contributions to axiomatic set theory, especially his additions to Ernst Zermelo's axioms, which resulted in the Zermelo–Fraenkel set theory. Biography Abraham Adolf Halevi Fraenkel studied mathematics at the Universities of Munich, Berlin, Marburg and Breslau. After graduating, he lectured at the University of Marburg from 1916, and was promoted to professor in 1922. In 1919 he married Wilhelmina Malka A. Prins (1892–1983). Due to the severe housing shortage in post-First World war Germany, for a few years the couple lived as subtenants at professor Hensel's place. After leaving Marburg in 1928, Fraenkel taught at the University of Kiel for a year. He then made the choice of accepting a position at the Hebrew University of Jerusalem, which had been founded four years earlier, where he spent the rest of his career. He became the first dean of the faculty of mathematics, and for a while served as rector of the university. Fraenkel was a fervent Zionist and as such was a member of Jewish National Council and the Jewish Assembly of Representatives under the British mandate. He also belonged to the Mizrachi religious wing of Zionism, which promoted Jewish religious education and schools, and which advocated giving the Chief Rabbinate authority over marriage and divorce. Mathematician Fraenkel's early work was on Kurt Hensel's p-adic numbers and on the theory of rings. He is best known for his work on axiomatic set theory, publishing his first major work on the topic Einleitung in die Mengenlehre (Introduction to set theory) in 1919. In 1922 and 1925, he published two papers that sought to improve Zermelo's axiomatic system; the result is the Zermelo–Fraenkel axioms. Fraenkel worked in set theory and foundational mathematics. Fraenkel also was interested in the history of mathematics, writing in 1920 and 1930 about Gauss's works in algebra, and he published a biography of Georg Cantor. After retiring from the Hebrew University and being succeeded by his former student Abraham Robinson, Fraenkel continued teaching at the Bar Ilan University in Ramat Gan (near Tel Aviv). Awards In 1956, Fraenkel was awarded the Israel Prize, for exact sciences. Published works 1908. "Bestimmung des Datums des jüdischen Osterfestes für die Zeitrechnung der Mohammedaner". In Zeitschrift für Mathematik und naturwissenschaft Unterricht (39). 1909. "Eine Formel zur Verwandlung jüdischer Daten in mohammedanische". In Monatsschrift für Geschichte und Wissenschaft des Judentums, vol. 53, issue 11–12. 1910. "Die Berechnung des Osterfestes". Journal für die reine und angewandte Mathematik, vol 138. 1914. "Über die Teiler der Null und die Zerlegung von Ringen". J. Reine Angew. Math. 145: 139–176. 1918. "Praktisches zur Universitätsgründung in Jerusalem". Der
https://en.wikipedia.org/wiki/Von%20Neumann%E2%80%93Bernays%E2%80%93G%C3%B6del%20set%20theory
In the foundations of mathematics, von Neumann–Bernays–Gödel set theory (NBG) is an axiomatic set theory that is a conservative extension of Zermelo–Fraenkel–choice set theory (ZFC). NBG introduces the notion of class, which is a collection of sets defined by a formula whose quantifiers range only over sets. NBG can define classes that are larger than sets, such as the class of all sets and the class of all ordinals. Morse–Kelley set theory (MK) allows classes to be defined by formulas whose quantifiers range over classes. NBG is finitely axiomatizable, while ZFC and MK are not. A key theorem of NBG is the class existence theorem, which states that for every formula whose quantifiers range only over sets, there is a class consisting of the sets satisfying the formula. This class is built by mirroring the step-by-step construction of the formula with classes. Since all set-theoretic formulas are constructed from two kinds of atomic formulas (membership and equality) and finitely many logical symbols, only finitely many axioms are needed to build the classes satisfying them. This is why NBG is finitely axiomatizable. Classes are also used for other constructions, for handling the set-theoretic paradoxes, and for stating the axiom of global choice, which is stronger than ZFC's axiom of choice. John von Neumann introduced classes into set theory in 1925. The primitive notions of his theory were function and argument. Using these notions, he defined class and set. Paul Bernays reformulated von Neumann's theory by taking class and set as primitive notions. Kurt Gödel simplified Bernays' theory for his relative consistency proof of the axiom of choice and the generalized continuum hypothesis. Classes in set theory The uses of classes Classes have several uses in NBG: They produce a finite axiomatization of set theory. They are used to state a "very strong form of the axiom of choice"—namely, the axiom of global choice: There exists a global choice function defined on the class of all nonempty sets such that for every nonempty set This is stronger than ZFC's axiom of choice: For every set of nonempty sets, there exists a choice function defined on such that for all The set-theoretic paradoxes are handled by recognizing that some classes cannot be sets. For example, assume that the class of all ordinals is a set. Then is a transitive set well-ordered by . So, by definition, is an ordinal. Hence, , which contradicts being a well-ordering of Therefore, is not a set. A class that is not a set is called a proper class, is a proper class. Proper classes are useful in constructions. In his proof of the relative consistency of the axiom of global choice and the generalized continuum hypothesis, Gödel used proper classes to build the constructible universe. He constructed a function on the class of all ordinals that, for each ordinal, builds a constructible set by applying a set-building operation to previously constructed sets. The constr
https://en.wikipedia.org/wiki/Semigroupoid
In mathematics, a semigroupoid (also called semicategory, naked category or precategory) is a partial algebra that satisfies the axioms for a small category, except possibly for the requirement that there be an identity at each object. Semigroupoids generalise semigroups in the same way that small categories generalise monoids and groupoids generalise groups. Semigroupoids have applications in the structural theory of semigroups. Formally, a semigroupoid consists of: a set of things called objects. for every two objects A and B a set Mor(A,B) of things called morphisms from A to B. If f is in Mor(A,B), we write f : A → B. for every three objects A, B and C a binary operation Mor(A,B) × Mor(B,C) → Mor(A,C) called composition of morphisms. The composition of f : A → B and g : B → C is written as g ∘ f or gf. (Some authors write it as fg.) such that the following axiom holds: (associativity) if f : A → B, g : B → C and h : C → D then h ∘ (g ∘ f) = (h ∘ g) ∘ f. References Algebraic structures Category theory
https://en.wikipedia.org/wiki/Chess%20symbols%20in%20Unicode
Chess symbols are part of Unicode. Instead of using images, one can represent chess pieces by characters that are defined in the Unicode character set. This makes it possible to: Use figurine algebraic notation, which replaces the letter that stands for a piece by its symbol, e.g. ♘c6 instead of Nc6. This enables the moves to be read independent of language (the letter abbreviations of pieces in algebraic notation vary from language to language). Produce the symbols using a text editor or word processor rather than a graphics editor. In order to display or print these symbols, a device must have one or more fonts with good Unicode support installed, and the document (Web page, word processor document, etc.) it is displaying must use one of these fonts. Unicode version 12.0 has allocated a whole character block at 0x1FA00 for inclusion of extra chess piece representations. This standard points to several new characters being created in this block,including rotated pieces and neutral (neither white nor black) pieces. Unicode characters In Unicode, chess symbols are in two groups: Regular chess symbols, the basic six pieces in black and white (as part of Unicode block Miscellaneous Symbols), and Uncommon and fairy chess pieces and xiangqi pieces, in a block named Chess Symbols. The basic 12 chess pieces Fairy chess pieces and xiangqi pieces Chessboard using Unicode References Chess notation Chess Chess
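Because the basic pieces occupy a contiguous run of code points (U+2654 through U+265F in the Miscellaneous Symbols block), they are easy to emit programmatically. A small Python illustration of ours:

# White pieces occupy U+2654..U+2659, black pieces U+265A..U+265F.
white = "".join(chr(cp) for cp in range(0x2654, 0x265A))   # ♔♕♖♗♘♙
black = "".join(chr(cp) for cp in range(0x265A, 0x2660))   # ♚♛♜♝♞♟
print(white, black)

# Figurine algebraic notation: the white knight symbol replaces the letter N.
print("\u2658c6")                                           # prints ♘c6 instead of Nc6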
https://en.wikipedia.org/wiki/Independence%20%28disambiguation%29
Independence generally refers to the self-government of a nation, country, or state by its residents and population. Independence may also refer to: Mathematics Algebraic independence Independence (graph theory), edge-wise non-connectedness Independence (mathematical logic), logical independence Independence (probability theory), statistical independence Linear independence Films Independence (1976 film), a docudrama directed by John Huston Independence (1999 film), an Indian film in Malayalam Music Independence (Lulu album), 1993 Independence (Kosheen album), 2012 Naval ships Independence class (disambiguation), several classes of ships USS Independence, any of seven US Navy ships Texan schooner Independence, an 1832 ship in the Texas Navy during the Texas Revolution Places United States Independence County, Arkansas Independence, California, a census-designated place in Inyo County Independence, Calaveras County, California, an unincorporated community Independence, Pitkin County, Colorado, a ghost town Independence, Indiana, an unincorporated community Independence, Iowa, a city Independence, Kansas, a city Independence, Kentucky, a home rule-class city Independence, Louisiana, a town Independence, Minnesota, a city in Hennepin County Independence, St. Louis County, Minnesota, an unincorporated community Independence, Mississippi, an unincorporated community Independence, Missouri, a city Independence, New York, a town Independence, Ohio, a city in Cuyahoga County Independence, Defiance County, Ohio, an unincorporated community in Defiance County, Ohio Independence, Oklahoma, a ghost town Independence, Oregon, a city Independence, Tennessee, an unincorporated community Independence, Texas, an unincorporated community Independence, Utah, a town Independence, Virginia, a town Independence, Barbour County, West Virginia, an unincorporated community Independence, Clay County, West Virginia, an unincorporated community Independence, Jackson County, West Virginia, an unincorporated community Independence, Preston County, West Virginia, an unincorporated community Independence, Washington, an unincorporated community Independence, Wisconsin, a city Independence County, Washington, a proposed county Independence Township (disambiguation) Lake Independence (disambiguation), various American lakes (and one constituency in Belize) Independence River, a tributary of the Black River in New York Independence Lake (Colorado) Independence Lakes, a number of lakes in Idaho Lake Independence (Michigan) Lake Independence (Jackson County, Minnesota) Independence National Historical Park, in Philadelphia, Pennsylvania Independence Rock (Wyoming) Fort Independence (disambiguation) Mount Independence (disambiguation) Elsewhere Independence and Mango Creek, adjacent villages (considered as one community) in Belize Independence, a former name of Niulakita, Tuvalu Independence Fjord, Greenland Transportation SS Independence, an American passenger ship (buil
https://en.wikipedia.org/wiki/David%20van%20Dantzig
David van Dantzig (September 23, 1900 – July 22, 1959) was a Dutch mathematician, well known for the construction in topology of the dyadic solenoid. He was a member of the Significs Group. Biography Born to a Jewish family in Amsterdam in 1900, Van Dantzig started to study Chemistry at the University of Amsterdam in 1917, where Gerrit Mannoury lectured. He received his PhD at the University of Groningen in 1931 with a thesis entitled "" under supervision of Bartel Leendert van der Waerden. He was appointed professor at the Delft University of Technology in 1938, and at the University of Amsterdam in 1946. Among his doctoral students were Jan Hemelrijk (1950), Johan Kemperman (1950), David Johannes Stoker (1955), and Constance van Eeden (1958). In Amsterdam he was one of the founders of the Mathematisch Centrum. At the University of Amsterdam he was succeeded by Jan Hemelrijk. Originally working on topics in differential geometry and topology, after World War II he focused on probability, emphasizing the applicability to statistical hypothesis testing. In 1949 he became member of the Royal Netherlands Academy of Arts and Sciences. In response to the North Sea flood of 1953, the Dutch Government established the Delta Committee, and asked Van Dantzig to develop a mathematical approach to formulate and solve the economic cost-benefit decision model concerning optimal dike height problems in connection with the Delta Works. The work of the Delta Committee, including the work by Van Dantzig, finally resulted in statutory minimal safety standards. Publications Books, a selection: 1931. Studien over topologische algebra. Doctoral thesis University of Groningen. 1932. Over de elementen van het wiskundig denken : voordracht. Rede Delft. Groningen : Noordhoff. 1938. Vragen en schijnvragen over ruimte en tijd : een toepassing van den wiskundigen denkvorm. Inaugurale rede Technische Hogeschool te Delft 1948. De functie der wetenschap : drie voordrachten, met discussie. With E.W. Beth and C.F.P. Stutterheim. 's-Gravenhage : Leopold Articles, a selection: D. van Dantzig, C. Scheffer "On hereditary time discrete stochastic processes, considered as stationary Markov chains, and the corresponding general form of Wald’s fundamental identity," Indag. Math. (16), No.4, (1954), p. 377–388 Dantzig, D. van. 1956. Economic decision problems for flood prevention. Econometrica 24(3) 276–287. References External links 1900 births 1959 deaths Dutch Jews Dutch statisticians 20th-century Dutch mathematicians University of Groningen alumni Academic staff of the Delft University of Technology Academic staff of the University of Amsterdam Fellows of the American Statistical Association Scientists from Amsterdam Members of the Royal Netherlands Academy of Arts and Sciences Mathematical statisticians University of Amsterdam alumni
https://en.wikipedia.org/wiki/Surface%20integral
In mathematics, particularly multivariable calculus, a surface integral is a generalization of multiple integrals to integration over surfaces. It can be thought of as the double integral analogue of the line integral. Given a surface, one may integrate a scalar field (that is, a function of position which returns a scalar as a value) over the surface, or a vector field (that is, a function which returns a vector as value). If a region R is not flat, then it is called a surface as shown in the illustration. Surface integrals have applications in physics, particularly with the theories of classical electromagnetism. Surface integrals of scalar fields Assume that f is a scalar, vector, or tensor field defined on a surface S. To find an explicit formula for the surface integral of f over S, we need to parameterize S by defining a system of curvilinear coordinates on S, like the latitude and longitude on a sphere. Let such a parameterization be , where varies in some region in the plane. Then, the surface integral is given by where the expression between bars on the right-hand side is the magnitude of the cross product of the partial derivatives of , and is known as the surface element (which would, for example, yield a smaller value near the poles of a sphere. where the lines of longitude converge more dramatically, and latitudinal coordinates are more compactly spaced). The surface integral can also be expressed in the equivalent form where is the determinant of the first fundamental form of the surface mapping . For example, if we want to find the surface area of the graph of some scalar function, say , we have where . So that , and . So, which is the standard formula for the area of a surface described this way. One can recognize the vector in the second-last line above as the normal vector to the surface. Because of the presence of the cross product, the above formulas only work for surfaces embedded in three-dimensional space. This can be seen as integrating a Riemannian volume form on the parameterized surface, where the metric tensor is given by the first fundamental form of the surface. Surface integrals of vector fields Consider a vector field v on a surface S, that is, for each in S, v(r) is a vector. The integral of v on S was defined in the previous section. Suppose now that it is desired to integrate only the normal component of the vector field over the surface, the result being a scalar, usually called the flux passing through the surface. For example, imagine that we have a fluid flowing through S, such that v(r) determines the velocity of the fluid at r. The flux is defined as the quantity of fluid flowing through S per unit time. This illustration implies that if the vector field is tangent to S at each point, then the flux is zero because the fluid just flows in parallel to S, and neither in nor out. This also implies that if v does not just flow along S, that is, if v has both a tangential and a normal co
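The surface-element formula above can be checked numerically for a familiar surface. In this NumPy sketch of our own, integrating the magnitude of the cross product of the partial derivatives over the parameter domain of the standard latitude–longitude parameterization of the unit sphere recovers the area 4π.

import numpy as np

# Numerical check of the scalar surface integral A = ∬ |r_s × r_t| ds dt for the
# unit sphere, parameterized by r(s, t) with s in [0, π] (colatitude), t in [0, 2π].

def partials(s, t):
    r_s = np.array([np.cos(s) * np.cos(t), np.cos(s) * np.sin(t), -np.sin(s)])
    r_t = np.array([-np.sin(s) * np.sin(t), np.sin(s) * np.cos(t), 0.0])
    return r_s, r_t

n = 200
ds, dt = np.pi / n, 2 * np.pi / n
area = 0.0
for i in range(n):
    for j in range(n):
        s, t = (i + 0.5) * ds, (j + 0.5) * dt            # midpoint rule
        r_s, r_t = partials(s, t)
        area += np.linalg.norm(np.cross(r_s, r_t)) * ds * dt

print(area, 4 * np.pi)   # both are approximately 12.566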
https://en.wikipedia.org/wiki/Definition%20%28disambiguation%29
A definition is a statement of the meaning of a term. Definition may also refer to: Science, mathematics and computing In computer programming languages, a declaration that reserves memory for a variable or gives the body of a subroutine Defining equation (physical chemistry), physico-chemical quantities defined in terms of others, in the form of an equation Dynamical system (definition), description of a mathematical model, determined by a system of coupled differential equations Circular definition, lexicographic, linguistic and logical aspects Mathematics: Intensional definition Elementary definition Algebraic definition Recursive definition Field of definition A continuous function A well-defined function Music and TV High-definition television, a television format with higher resolution Definition (album), a 1992 studio album by American crossover thrash band Dirty Rotten Imbeciles Definition (TV series), a long-running Canadian game show of the 1970s and 1980s Definition (Jersey EP), 2001 Definition (Diaura EP), 2019 "Definition" (song), a 1998 song by Black Star "Definition", a song by Mabel from About Last Night..., 2022 "Definitions" (How I Met Your Mother), a 2009 television episode Other #define, a macro in the C programming language Defined (album), a 2005 operatic pop album Definitions (Plato), a dictionary of about 185 philosophical terms sometimes included in the corpus of Plato's works Dogmatic definition, the pronunciation of religious doctrine by a Pope or an ecumenical council
https://en.wikipedia.org/wiki/Togliatti%20%28disambiguation%29
Togliatti, or Tolyatti, is a city in Russia. Togliatti may also refer to:

Eugenio Giuseppe Togliatti (1890–1977), Italian mathematician
Togliatti surface, an algebraic surface discovered by him
Palmiro Togliatti (1893–1964), leader of the Italian Communist Party
The Togliatti amnesty, drafted by Palmiro Togliatti in 1946

See also
Lada Togliatti (disambiguation)
FC Togliatti, a Russian football club
FC Akademiya Tolyatti, a Russian football club
TogliattiAzot, a Russian chemical company
https://en.wikipedia.org/wiki/Category%20of%20medial%20magmas
In mathematics, the category of medial magmas, also known as the medial category, and denoted Med, is the category whose objects are medial magmas (that is, sets with a medial binary operation), and whose morphisms are magma homomorphisms (which are equivalent to homomorphisms in the sense of universal algebra). The category Med has direct products, so the concept of a medial magma object (internal binary operation) makes sense. As a result, Med has all its objects as medial objects, and this characterizes it. There is an inclusion functor from Set to Med as trivial magmas, with operations being the right projections (x, y) → y. An injective endomorphism can be extended to an automorphism of a magma extension—the colimit of the constant sequence of the endomorphism.

See also
Eckmann–Hilton argument

Medial magmas
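As a quick illustration of the medial identity satisfied by the objects of Med, the following Python sketch (an editorial addition, not part of the article; the finite carrier set and the sample operations are assumptions chosen for the example) checks (x*y)*(u*v) = (x*u)*(y*v) by brute force, including for the right-projection operation (x, y) → y mentioned above.

# A small brute-force check of the medial identity for a finite magma.
from itertools import product

def is_medial(elements, op):
    return all(op(op(x, y), op(u, v)) == op(op(x, u), op(y, v))
               for x, y, u, v in product(elements, repeat=4))

elements = [0, 1, 2]
right_projection = lambda x, y: y           # the trivial-magma operation from the text
addition_mod_3 = lambda x, y: (x + y) % 3   # any commutative group operation is medial

print(is_medial(elements, right_projection))  # True
print(is_medial(elements, addition_mod_3))    # True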
https://en.wikipedia.org/wiki/Shelah%20cardinal
In axiomatic set theory, Shelah cardinals are a kind of large cardinal. A cardinal is called Shelah iff for every , there exists a transitive class and an elementary embedding with critical point ; and . A Shelah cardinal has a normal ultrafilter containing the set of weakly hyper-Woodin cardinals below it.

References
Ernest Schimmerling, Woodin cardinals, Shelah cardinals and the Mitchell–Steel core model, Proceedings of the American Mathematical Society 130/11, pp. 3385–3391, 2002, online

Large cardinals
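The inline formulas in the definition above appear to have been lost in extraction. As a hedged reconstruction, the following LaTeX restates the formulation of Shelah cardinals commonly found in the literature; it is supplied from standard references rather than recovered from this text, so it should be treated as an assumption.

% Common formulation (a reconstruction, not taken verbatim from this article); requires amsmath.
\[
\kappa \text{ is Shelah} \iff \forall f\colon \kappa \to \kappa \ \exists N \text{ transitive},\ \exists j\colon V \to N \text{ elementary with } \operatorname{crit}(j) = \kappa,\ \text{such that } V_{j(f)(\kappa)} \subseteq N .
\]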
https://en.wikipedia.org/wiki/Remarkable%20cardinal
In mathematics, a remarkable cardinal is a certain kind of large cardinal number.

A cardinal κ is called remarkable if for all regular cardinals θ > κ, there exist π, M, λ, σ, N and ρ such that
π : M → Hθ is an elementary embedding
M is countable and transitive
π(λ) = κ
σ : M → N is an elementary embedding with critical point λ
N is countable and transitive
ρ = M ∩ Ord is a regular cardinal in N
σ(λ) > ρ
M = HρN, i.e., M ∈ N and N ⊨ "M is the set of all sets that are hereditarily smaller than ρ"

Equivalently, is remarkable if and only if for every there is such that in some forcing extension , there is an elementary embedding satisfying . Although the definition is similar to one of the definitions of supercompact cardinals, the elementary embedding here only has to exist in , not in .

See also
Hereditarily countable set

References

Large cardinals
https://en.wikipedia.org/wiki/Extremal%20graph%20theory
Extremal graph theory is a branch of combinatorics, itself an area of mathematics, that lies at the intersection of extremal combinatorics and graph theory. In essence, extremal graph theory studies how global properties of a graph influence local substructure. Results in extremal graph theory deal with quantitative connections between various graph properties, both global (such as the number of vertices and edges) and local (such as the existence of specific subgraphs), and problems in extremal graph theory can often be formulated as optimization problems: how big or small can a parameter of a graph be, given some constraints that the graph has to satisfy? A graph that is an optimal solution to such an optimization problem is called an extremal graph, and extremal graphs are important objects of study in extremal graph theory. Extremal graph theory is closely related to fields such as Ramsey theory, spectral graph theory, computational complexity theory, and additive combinatorics, and frequently employs the probabilistic method.

History

Mantel's Theorem (1907) and Turán's Theorem (1941) were some of the first milestones in the study of extremal graph theory. In particular, Turán's theorem would later become a motivation for results such as the Erdős–Stone theorem (1946). This result is surprising because it connects the chromatic number with the maximal number of edges in an -free graph. An alternative proof of Erdős–Stone was given in 1975, and utilised the Szemerédi regularity lemma, an essential technique in the resolution of extremal graph theory problems.

Topics and concepts

Graph coloring

A proper (vertex) coloring of a graph is a coloring of the vertices of such that no two adjacent vertices have the same color. The minimum number of colors needed to properly color is called the chromatic number of , denoted . Determining the chromatic number of specific graphs is a fundamental question in extremal graph theory, because many problems in the area and related areas can be formulated in terms of graph coloring. Two simple lower bounds to the chromatic number of a graph are given by the clique number—all vertices of a clique must have distinct colors—and by , where is the independence number, because the set of vertices with a given color must form an independent set. A greedy coloring gives the upper bound , where is the maximum degree of . When is not an odd cycle or a clique, Brooks' theorem states that the upper bound can be reduced to . When is a planar graph, the four-color theorem states that has chromatic number at most four. In general, determining whether a given graph has a coloring with a prescribed number of colors is known to be NP-hard. In addition to vertex coloring, other types of coloring are also studied, such as edge colorings. The chromatic index of a graph is the minimum number of colors in a proper edge-coloring, and Vizing's theorem states that the chromatic index
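Mantel's theorem, mentioned in the history above, is small enough to check by exhaustive search. The Python sketch below is an editorial illustration, not from the article; the brute-force strategy and the tiny vertex counts are assumptions made only to keep the search feasible.

# Brute-force check of Mantel's theorem: a triangle-free graph on n vertices has at
# most floor(n^2 / 4) edges, attained by the complete bipartite graph K_{n/2, n/2}.
from itertools import combinations

def max_triangle_free_edges(n):
    vertices = range(n)
    all_edges = list(combinations(vertices, 2))
    # try edge sets from largest to smallest and return the first triangle-free size
    for k in range(len(all_edges), -1, -1):
        for edge_set in combinations(all_edges, k):
            edges = set(edge_set)
            has_triangle = any({(a, b), (a, c), (b, c)} <= edges
                               for a, b, c in combinations(vertices, 3))
            if not has_triangle:
                return k
    return 0

for n in range(2, 6):
    print(n, max_triangle_free_edges(n), n * n // 4)  # the last two columns agree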
https://en.wikipedia.org/wiki/Mathcounts
Mathcounts, stylized as MATHCOUNTS, is a non-profit organization that provides grades 6-8 extracurricular mathematics programs in all U.S. states, plus the District of Columbia, Puerto Rico, Guam and U.S. Virgin Islands. Its mission is to provide engaging math programs for middle school students of all ability levels to build confidence and improve attitudes about math and problem solving. Mathcounts also provides numerous math resources for schools and the general public. Topics covered include geometry, counting, probability, number theory, and algebra.

History

Mathcounts was started in 1983 by the National Society of Professional Engineers, the National Council of Teachers of Mathematics, and CNA Insurance to increase middle school interest in mathematics. The first national-level competition was held in 1984. The Mathcounts Competition Series spread quickly in middle schools, and today it is the best-known middle school mathematics competition. In 2007 Mathcounts launched the National Math Club as a non-competitive alternative to the Competition Series. In 2011 Mathcounts launched the Math Video Challenge Program, which was discontinued in 2023. 2020 was the only year since 1984 in which a national competition was not held, due to the COVID-19 pandemic. The "MATHCOUNTS Week" event featuring problems from the 2020 State Competition was held on the Art of Problem Solving website as a replacement. The 2021 National Competition was held online. Current sponsors include RTX Corporation, U.S. Department of Defense STEM, BAE Systems, Northrop Grumman, National Society of Professional Engineers, 3M, Texas Instruments, Art of Problem Solving, Bentley Systems, Carina Initiatives, National Council of Examiners for Engineering and Surveying, CNA Financial, Google, Brilliant, and Mouser Electronics.

Competition Series

The Competition Series is divided into four levels: school, chapter, state, and national. Students progress to each level in the competition based on performance at the previous level. As the levels progress, the problems become more challenging. Each level has many rounds, always including a Sprint Round (30 questions, 40 minutes) and a Target Round (4 pairs of harder problems with calculator use, 6 minutes each pair). All students are either school-based competitors or non-school competitors ("NSCs"). Most students participate through their schools, starting with a school-level competition. A student whose school is not participating in the Competition Series starts at the chapter level as an NSC, competing individually.

School level

Coaches of each school select up to 12 students from their school to advance to the chapter competition, with 4 of them competing on the official school team. The rest compete individually.

Chapter level

All qualifying students compete individually. Students on an official school team also compete as a team. The Countdown Round is optional and can either be used to determine top individuals or as
https://en.wikipedia.org/wiki/Pfaffian
In mathematics, the determinant of an m×m skew-symmetric matrix can always be written as the square of a polynomial in the matrix entries, a polynomial with integer coefficients that only depends on m. When m is odd, the polynomial is zero. When m=2n is even, it is a nonzero polynomial of degree n, and is unique up to multiplication by ±1. The convention on skew-symmetric tridiagonal matrices, given below in the examples, then determines one specific polynomial, called the Pfaffian polynomial. The value of this polynomial, when applied to the entries of a skew-symmetric matrix, is called the Pfaffian of that matrix. The term Pfaffian was introduced by Cayley, who indirectly named them after Johann Friedrich Pfaff. Explicitly, for a skew-symmetric matrix A, pf(A)^2 = det(A), which was first proved by Cayley, who cites Jacobi for introducing these polynomials in work on Pfaffian systems of differential equations. Cayley obtains this relation by specialising a more general result on matrices which deviate from skew symmetry only in the first row and the first column. The determinant of such a matrix is the product of the Pfaffians of the two matrices obtained by first setting in the original matrix the upper left entry to zero and then copying, respectively, the negative transpose of the first row to the first column and the negative transpose of the first column to the first row. This is proved by induction by expanding the determinant on minors and employing the recursion formula below.

Examples

(3 is odd, so the Pfaffian of B is 0) The Pfaffian of a 2n × 2n skew-symmetric tridiagonal matrix is given as (Note that any skew-symmetric matrix can be reduced to this form; see Spectral theory of a skew-symmetric matrix.)

Formal definition

Let A = (ai,j) be a 2n × 2n skew-symmetric matrix. The Pfaffian of A is explicitly defined by the formula where S2n is the symmetric group of order (2n)! and sgn(σ) is the signature of σ. One can make use of the skew-symmetry of A to avoid summing over all possible permutations. Let Π be the set of all partitions of {1, 2, ..., 2n} into pairs without regard to order. There are (2n)!/(2^n n!) = (2n - 1)!! such partitions. An element α ∈ Π can be written as with ik < jk and . Let be the corresponding permutation. Given a partition α as above, define The Pfaffian of A is then given by The Pfaffian of an n×n skew-symmetric matrix for n odd is defined to be zero, as the determinant of an odd-dimensional skew-symmetric matrix is zero, since for a skew-symmetric matrix, and for n odd, this implies .

Recursive definition

By convention, the Pfaffian of the 0×0 matrix is equal to one. The Pfaffian of a skew-symmetric 2n×2n matrix A with n>0 can be computed recursively as where the index i can be selected arbitrarily, is the Heaviside step function, and denotes the matrix A with both the i-th and j-th rows and columns removed. Note how for the special choice this reduces to the simpler expression:

Alternative definitions

One can associate to any s
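The recursive definition above translates directly into code. The following Python sketch is an editorial illustration rather than part of the article; it fixes the expansion row to the first row (the simpler "special choice" mentioned in the text) instead of keeping the row arbitrary, and uses NumPy only for extracting minors and for the determinant check of Pf(A)^2 = det(A).

# Recursive Pfaffian via expansion along the first row, plus a numeric check
# that the square of the Pfaffian equals the determinant.
import numpy as np

def pfaffian(A):
    n = A.shape[0]
    if n == 0:
        return 1.0        # the Pfaffian of the 0x0 matrix is 1 by convention
    if n % 2 == 1:
        return 0.0        # odd-dimensional skew-symmetric matrices have Pfaffian 0
    total = 0.0
    for j in range(1, n):
        # delete row/column 0 and row/column j, then recurse on the minor
        keep = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j + 1) * A[0, j] * pfaffian(A[np.ix_(keep, keep)])
    return total

rng = np.random.default_rng(0)
B = rng.normal(size=(6, 6))
A = B - B.T               # a random 6x6 skew-symmetric matrix
print(pfaffian(A) ** 2, np.linalg.det(A))   # these agree up to rounding error

The sign (-1)**(j+1) here is the first-row specialization of the general formula; for the 2×2 case it reduces to Pf(A) = a12, matching the convention stated above.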
https://en.wikipedia.org/wiki/Ideal%20%28order%20theory%29
In mathematical order theory, an ideal is a special subset of a partially ordered set (poset). Although this term historically was derived from the notion of a ring ideal of abstract algebra, it has subsequently been generalized to a different notion. Ideals are of great importance for many constructions in order and lattice theory.

Definitions

A subset of a partially ordered set is an ideal if the following conditions hold:
is non-empty,
for every x in and y in P, implies that y is in ( is a lower set),
for every x, y in , there is some element z in , such that and ( is a directed set).
While this is the most general way to define an ideal for arbitrary posets, it was originally defined for lattices only. In this case, the following equivalent definition can be given: a subset of a lattice is an ideal if and only if it is a lower set that is closed under finite joins (suprema); that is, it is nonempty and for all x, y in , the element of P is also in . A weaker notion of order ideal is defined to be a subset of a poset that satisfies the above conditions 1 and 2. In other words, an order ideal is simply a lower set. Similarly, an ideal can also be defined as a "directed lower set". The dual notion of an ideal, i.e., the concept obtained by reversing all ≤ and exchanging with , is a filter. Frink ideals, pseudoideals and Doyle pseudoideals are different generalizations of the notion of a lattice ideal. An ideal or filter is said to be proper if it is not equal to the whole set P. The smallest ideal that contains a given element p is a principal ideal, and p is said to be a principal element of the ideal in this situation. The principal ideal for a principal p is thus given by .

Terminology confusion

The above definitions of "ideal" and "order ideal" are the standard ones, but there is some confusion in terminology. Terms such as "ideal", "order ideal", "Frink ideal", and "partial order ideal" are sometimes used interchangeably.

Prime ideals

An important special case of an ideal is constituted by those ideals whose set-theoretic complements are filters, i.e. ideals in the inverse order. Such ideals are called prime ideals. Also note that, since we require ideals and filters to be non-empty, every prime ideal is necessarily proper. For lattices, prime ideals can be characterized as follows: A subset of a lattice is a prime ideal if and only if is a proper ideal of P, and for all elements x and y of P, in implies that or . It is easily checked that this is indeed equivalent to stating that is a filter (which is then also prime, in the dual sense). For a complete lattice the further notion of a completely prime ideal is meaningful. It is defined to be a proper ideal with the additional property that, whenever the meet (infimum) of some arbitrary set A is in , some element of A is also in . So this is just a specific prime ideal that extends the above conditions to infinite meets. The existence of prime ideals is in general not obvious, and often a satisfac
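The three defining conditions of an ideal (non-empty, lower set, directed) are easy to test on a finite poset. The Python sketch below is an editorial illustration, not part of the article; the divisibility order on {1, ..., 12} and the sample subsets are assumptions chosen only for the example.

# Check the three defining conditions of an ideal for a subset I of a finite poset P
# with order relation leq: non-empty, downward closed, and directed.
from itertools import product

def is_ideal(P, leq, I):
    if not I:
        return False                                              # must be non-empty
    lower = all(y in I for x, y in product(I, P) if leq(y, x))    # lower set
    directed = all(any(leq(x, z) and leq(y, z) for z in I)        # common upper bound in I
                   for x, y in product(I, I))
    return lower and directed

P = set(range(1, 13))
divides = lambda a, b: b % a == 0      # the divisibility order on P

print(is_ideal(P, divides, {1, 2, 3, 6}))   # True: the principal ideal of 6
print(is_ideal(P, divides, {2, 3}))         # False: 1 divides 2 but is missing, so not a lower set
print(is_ideal(P, divides, {1, 2, 4, 8}))   # True: the principal ideal of 8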
https://en.wikipedia.org/wiki/MBD
MBD or MBd may refer to:

Man bites dog (journalism), a shortened version of an aphorism in journalism
Maxwell–Boltzmann distribution, a probability distribution in physics and chemistry
Megabaud (MBd), equal to one million baud, symbol rate in telecommunications
Member Board of Directors
Metabolic bone disease
Methyl-CpG-binding domain protein 2, a protein which binds DNA at methyl-CpG sites
Microsoft Business Division, responsible for making Microsoft Office
Minimal brain dysfunction or minimal brain damage, obsolete terms for attention deficit hyperactivity disorder, dyslexia and other learning disabilities
Model-based definition, a method of using 3D CAD information to provide product specifications
Model-based design, a mathematical and visual method of addressing problems associated with designing complex control, signal processing and communication systems
Mordechai Ben David, a Jewish singer and recording artist
Motherboard, a computer component
Multibody dynamics
Murder by Death (band), an indie rock band
My Brightest Diamond, chamber rock band of Shara Worden