https://en.wikipedia.org/wiki/Descartes%27%20theorem
In geometry, Descartes' theorem states that for every four kissing, or mutually tangent, circles, the radii of the circles satisfy a certain quadratic equation. By solving this equation, one can construct a fourth circle tangent to three given, mutually tangent circles. The theorem is named after René Descartes, who stated it in 1643. Frederick Soddy's 1936 poem The Kiss Precise summarizes the theorem in terms of the bends (inverse radii) of the four circles. Special cases of the theorem apply when one or two of the circles are replaced by a straight line (with zero bend) or when the bends are integers or square numbers. A version of the theorem using complex numbers allows the centers of the circles, and not just their radii, to be calculated. With an appropriate definition of curvature, the theorem also applies in spherical geometry and hyperbolic geometry. In higher dimensions, an analogous quadratic equation applies to systems of pairwise tangent spheres or hyperspheres. History Geometrical problems involving tangent circles have been pondered for millennia. In ancient Greece of the third century BC, Apollonius of Perga devoted an entire book to the topic, Tangencies. It has been lost, and is known largely through a description of its contents by Pappus of Alexandria and through fragmentary references to it in medieval Islamic mathematics. However, Greek geometry was largely focused on straightedge and compass construction. For instance, the problem of Apollonius, closely related to Descartes' theorem, asks for the construction of a circle tangent to three given circles, which need not themselves be tangent. Instead, Descartes' theorem is formulated using algebraic relations between numbers describing geometric forms. This is characteristic of analytic geometry, a field pioneered by René Descartes and Pierre de Fermat in the first half of the 17th century. Descartes discussed the tangent circle problem briefly in 1643, in two letters to Princess Elisabeth of the Palatinate. Descartes initially posed to the princess the problem of Apollonius. After Elisabeth's partial results revealed that solving the full problem analytically would be too tedious, he simplified the problem to the case in which the three given circles are mutually tangent, and in solving this simplified problem he came up with the equation describing the relation between the radii, or curvatures, of four pairwise tangent circles. This result became known as Descartes' theorem. Unfortunately, the reasoning through which Descartes found this relation has been lost. Japanese mathematics frequently concerned problems involving circles and their tangencies, and Japanese mathematician Yamaji Nushizumi stated a form of Descartes' circle theorem in 1751. Like Descartes, he expressed it as a polynomial equation on the radii rather than their curvatures. The special case of this theorem for one straight line and three circles was recorded on a Japanese sangaku tablet from 1824.
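In bend form the quadratic relation reads (k1 + k2 + k3 + k4)^2 = 2(k1^2 + k2^2 + k3^2 + k4^2), which can be solved for the fourth bend as k4 = k1 + k2 + k3 ± 2·sqrt(k1·k2 + k2·k3 + k3·k1). A minimal Python sketch of that solution step (the function name and the radius-1 example are illustrative, not from the article):

import math

def fourth_bends(k1, k2, k3):
    # Descartes' theorem: the two possible bends of a fourth circle
    # tangent to three mutually tangent circles with bends k1, k2, k3.
    s = k1 + k2 + k3
    root = 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return s + root, s - root

# Three unit circles (bends 1, 1, 1): the inner Soddy circle has bend
# 3 + 2*sqrt(3) ~ 6.46, the outer enclosing circle has negative bend
# 3 - 2*sqrt(3) ~ -0.46 (a circle containing the others has negative bend).
print(fourth_bends(1, 1, 1))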
https://en.wikipedia.org/wiki/Medical%20astrology
Medical astrology (traditionally known as iatromathematics) is an ancient applied branch of astrology based mostly on melothesia (Gr. μελοθεσία), the association of various parts of the body, diseases, and drugs with the nature of the sun, moon, planets, and the twelve astrological signs. The underlying basis for medical astrology, astrology itself, is considered to be a pseudoscience, as there is no scientific basis for its core beliefs. List of works Medical astrology was mentioned by Marcus Manilius (1st century AD) in his epic poem (8,000 verses) Astronomica. Ficino, Marsilio, Three Books on Life [De vita libri tres] (1489), translated by Carol V. Kaske and John R. Clark, Center for Medieval and Early Renaissance Studies, State University of New York at Binghamton, and The Renaissance Society of America (1989). Lilly, William, Christian Astrology (1647) Culpeper, Nicholas, Astrological Judgement of Diseases from the Decumbiture of the Sick (1655) Saunders, Richard, The Astrological Judgment and Practice of Physick (1677) Cornell, H.L., M.D., The Encyclopaedia of Medical Astrology (1933), Astrology Classics (Abington, MD, 2010).
https://en.wikipedia.org/wiki/Similarity
Similarity may refer to: In mathematics and computing Similarity (geometry), the property of sharing the same shape Matrix similarity, a relation between matrices Similarity measure, a function that quantifies the similarity of two objects Cosine similarity, which uses the angle between vectors String metric, also called string similarity Semantic similarity, in computational linguistics In linguistics Lexical similarity Semantic similarity In signal processing Similarity between two different signals is also important in the field of signal processing. Below are some common methods for calculating similarity. For instance, consider two signals x(n) and y(n), where n = 0, 1, …, N − 1. Maximum error (ME) Measures the maximum magnitude of the difference between two signals. Maximum error is useful for assessing the worst-case scenario of prediction accuracy. Mean squared error (MSE) Measures the average squared difference between two signals. Unlike the maximum error, mean squared error takes into account the overall magnitude and spread of errors, offering a comprehensive assessment of the difference between the two signals. Normalized mean square error (NMSE) NMSE is an extension of MSE. It is calculated by normalizing the MSE with the signal power, enabling fair comparisons across different datasets and scales. Root-mean-square deviation (RMSE) RMSE is derived from MSE by taking the square root of the MSE. It rescales the MSE back to the units of the original signal, providing a more interpretable and comparable measure of the error. Normalized root-mean-square error (NRMSE) An extension of RMSE, which allows for signal comparisons between different datasets and models with varying scales. Signal-to-noise ratio (SNR) In signal processing, SNR is calculated as the ratio of signal power to noise power, typically expressed in decibels. A high SNR indicates a clear signal, while a low SNR suggests that the signal is corrupted by noise. In this context, the MSE between the two signals can be treated as the noise power, so the similarity between two signals can be viewed as SNR = 10 log10(Psignal / MSE). Peak signal-to-noise ratio (PSNR) A metric measuring the ratio between the maximum possible power of a signal and the power of the noise. It is commonly used for image signals because the pixel intensity in an image does not directly represent the actual signal value. Instead, the pixel intensity corresponds to color values, such as white being represented as 255 and black as 0. Lp-norm A mathematical concept used to measure the distance between two vectors. In signal processing, the Lp-norm is employed to quantify the difference between two signals. The L1-norm corresponds to the Manhattan distance, while the L2-norm corresponds to the Euclidean distance. Structural similarity (SSIM) SSIM is a similarity metric specifically designed for measuring the similarity between two image signals. Unlike other similarity measures, SSIM leverages the strong interdependencies between
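A minimal numpy sketch of a few of the metrics above (function names are mine; the PSNR helper assumes 8-bit pixel values with peak 255):

import numpy as np

def max_error(x, y):
    return np.max(np.abs(x - y))          # worst-case deviation (ME)

def mse(x, y):
    return np.mean((x - y) ** 2)          # mean squared error

def rmse(x, y):
    return np.sqrt(mse(x, y))             # root-mean-square error

def psnr(x, y, peak=255.0):
    # peak signal-to-noise ratio in decibels, with the MSE playing the
    # role of the noise power and `peak` the maximum possible pixel value
    return 10 * np.log10(peak ** 2 / mse(x, y))

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8])
print(max_error(x, y), mse(x, y), rmse(x, y))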
https://en.wikipedia.org/wiki/Companion%20matrix
In linear algebra, the Frobenius companion matrix of the monic polynomial $p(x) = x^n + c_{n-1}x^{n-1} + \cdots + c_1 x + c_0$ is the square matrix defined as $$C(p) = \begin{pmatrix} 0 & 0 & \cdots & 0 & -c_0 \\ 1 & 0 & \cdots & 0 & -c_1 \\ 0 & 1 & \cdots & 0 & -c_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & -c_{n-1} \end{pmatrix}.$$ Some authors use the transpose of this matrix, $C(p)^T$, which is more convenient for some purposes such as linear recurrence relations (see below). $C(p)$ is defined from the coefficients of $p(x)$, while the characteristic polynomial as well as the minimal polynomial of $C(p)$ are equal to $p(x)$. In this sense, the matrix $C(p)$ and the polynomial $p(x)$ are "companions". Similarity to companion matrix Any $n \times n$ matrix $A$ with entries in a field $F$ has characteristic polynomial $p(x) = \det(xI - A)$, which in turn has companion matrix $C(p)$. These matrices are related as follows. The following statements are equivalent: A is similar over F to $C(p)$, i.e. A can be conjugated to its companion matrix by matrices in GLn(F); the characteristic polynomial $p(x)$ coincides with the minimal polynomial of A, i.e. the minimal polynomial has degree n; the linear mapping $v \mapsto Av$ makes $F^n$ a cyclic $F[x]$-module, having a basis of the form $\{v, Av, \ldots, A^{n-1}v\}$; or equivalently $F^n \cong F[x]/(p(x))$ as $F[x]$-modules. If the above hold, one says that A is non-derogatory. Not every square matrix is similar to a companion matrix, but every square matrix is similar to a block diagonal matrix made of companion matrices. If we also demand that the polynomial of each diagonal block divides the next one, they are uniquely determined by A, and this gives the rational canonical form of A. Diagonalizability The roots of the characteristic polynomial $p(x)$ are the eigenvalues of $C(p)$. If there are n distinct eigenvalues $\lambda_1, \ldots, \lambda_n$, then $C(p)$ is diagonalizable as $C(p) = V^{-1}DV$, where D is the diagonal matrix and V is the Vandermonde matrix corresponding to the $\lambda_i$'s: Indeed, an easy computation shows that the transpose $C(p)^T$ has eigenvectors $v_i = (1, \lambda_i, \ldots, \lambda_i^{n-1})$ with $C(p)^T v_i = \lambda_i v_i$, which follows from $p(\lambda_i) = 0$. Thus, its diagonalizing change of basis matrix is $V^T$, meaning $C(p)^T = V^T D (V^T)^{-1}$, and taking the transpose of both sides gives $C(p) = V^{-1}DV$. We can read the eigenvectors of $C(p)$ with $C(p)w_i = \lambda_i w_i$ from the equation $C(p) = V^{-1}DV$: they are the column vectors of the inverse Vandermonde matrix $V^{-1}$. This matrix is known explicitly, giving the eigenvectors $w_i$, with coordinates equal to the coefficients of the Lagrange polynomials. Alternatively, the scaled eigenvectors have simpler coefficients. If $p(x)$ has multiple roots, then $C(p)$ is not diagonalizable. Rather, the Jordan canonical form of $C(p)$ contains one diagonal block for each distinct root, an m × m block with $\lambda$ on the diagonal if the root $\lambda$ has multiplicity m. Linear recursive sequences A linear recursive sequence defined by $a_{k+n} = -c_0 a_k - c_1 a_{k+1} - \cdots - c_{n-1} a_{k+n-1}$ for $k \geq 0$ has the characteristic polynomial $p(x) = x^n + c_{n-1}x^{n-1} + \cdots + c_0$, whose transpose companion matrix $C(p)^T$ generates the sequence: The vector $v = (1, \lambda, \lambda^2, \ldots, \lambda^{n-1})$ is an eigenvector of this matrix, where the eigenvalue $\lambda$ is a root of $p(x)$. Setting the initial values of the sequence equal to this vector produces a geometric sequence which satisfies the recurrence. In the case of n distinct eigenvalues, an arbitrary solution can be written as a linear combination of such geometric solutions, and the eigenvalues of largest complex norm give an asymptotic approximation. From linear ODE to first-order linear ODE system Similarly to th
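A short numpy sketch, assuming the subdiagonal convention written above: build C(p) for p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3) and check that its eigenvalues are the roots of p:

import numpy as np

def companion(coeffs):
    # Companion matrix of the monic polynomial
    # x^n + coeffs[-1]*x^(n-1) + ... + coeffs[0],
    # with ones on the subdiagonal and -coeffs in the last column.
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)
    C[:, -1] = -np.asarray(coeffs)
    return C

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
C = companion([-6.0, 11.0, -6.0])
print(np.sort(np.linalg.eigvals(C).real))   # -> approximately [1, 2, 3]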
https://en.wikipedia.org/wiki/Seki%20Takakazu
Seki Takakazu, also known as Seki Kōwa, was a Japanese mathematician and author of the Edo period. Seki laid foundations for the subsequent development of Japanese mathematics, known as wasan. He has been described as "Japan's Newton". He created a new algebraic notation system and, motivated by astronomical computations, did work on infinitesimal calculus and Diophantine equations. Although he was a contemporary of German polymath mathematician and philosopher Gottfried Leibniz and British polymath physicist and mathematician Isaac Newton, Seki's work was independent. His successors later developed a school dominant in Japanese mathematics until the end of the Edo period. While it is not clear how much of the achievements of wasan are Seki's, since many of them appear only in writings of his pupils, some of the results parallel or anticipate those discovered in Europe. For example, he is credited with the discovery of Bernoulli numbers. The resultant and determinant (the first in 1683, the complete version no later than 1710) are attributed to him. Seki also calculated the value of pi correct to the 10th decimal place, using what is now called Aitken's delta-squared process, rediscovered later by Alexander Aitken. Seki was influenced by Japanese mathematics books such as the Jinkōki. Biography Not much is known about Seki's personal life. His birthplace has been indicated as either Fujioka in Gunma Prefecture, or Edo. His birth date ranges from 1635 to 1643. He was born into the Uchiyama clan, a subject of Ko-shu han, and adopted into the Seki family, a subject of the shōgun. While in Ko-shu han, he was involved in a surveying project to produce a reliable map of his employer's land. He spent many years in studying 13th-century Chinese calendars to replace the less accurate one used in Japan at that time. Career Chinese mathematical roots His mathematics (and wasan as a whole) was based on mathematical knowledge accumulated from the 13th to 15th centuries. The material in these works consisted of algebra with numerical methods, polynomial interpolation and its applications, and indeterminate integer equations. Seki's work is more or less based on and related to these known methods. Chinese algebraists discovered numerical evaluation (Horner's method, re-established by William George Horner in the 19th century) of arbitrary-degree algebraic equations with real coefficients. By using the Pythagorean theorem, they reduced geometric problems to algebra systematically. The number of unknowns in an equation was, however, quite limited. They used notations of an array of numbers to represent a formula. Later, they developed a method that uses two-dimensional arrays, representing four variables at most, but the scope of this method was limited. Accordingly, a target of Seki and his contemporary Japanese mathematicians was the development of general multivariable algebraic equations and elimination theory. In the Chinese approach
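Seki's own pi computation used perimeters of inscribed polygons; purely as an illustration of the Aitken delta-squared idea credited to him above, here is a sketch applying it to a different slowly converging sequence, the partial sums of the Leibniz series for pi (this example choice is mine, not Seki's method):

import math

def aitken(seq):
    # Aitken's delta-squared acceleration of a convergent sequence.
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        denom = x2 - 2 * x1 + x0
        out.append(x2 - (x2 - x1) ** 2 / denom)
    return out

# Partial sums of the Leibniz series pi/4 = 1 - 1/3 + 1/5 - ...
partial, s = [], 0.0
for k in range(12):
    s += (-1) ** k / (2 * k + 1)
    partial.append(4 * s)

print(partial[-1] - math.pi)          # slow: error ~ 0.08 after 12 terms
print(aitken(partial)[-1] - math.pi)  # accelerated: much smaller error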
https://en.wikipedia.org/wiki/Ankeny%E2%80%93Artin%E2%80%93Chowla%20congruence
In number theory, the Ankeny–Artin–Chowla congruence is a result published in 1953 by N. C. Ankeny, Emil Artin and S. Chowla. It concerns the class number h of a real quadratic field of discriminant d > 0. If the fundamental unit of the field is $\varepsilon = \frac{t + u\sqrt{d}}{2}$ with integers t and u, it expresses in another form $\frac{ht}{u} \bmod p$ for any prime number p > 2 that divides d. In case p > 3 it states that $$-2\frac{mht}{u} \equiv \sum_{0 < k < d} \frac{\chi(k)}{k} \left\lfloor \frac{k}{p} \right\rfloor \pmod{p}$$ where $m = \frac{d}{p}$ and $\chi$ is the Dirichlet character for the quadratic field. For p = 3 there is a factor (1 + m) multiplying the LHS. Here $\lfloor x \rfloor$ represents the floor function of x. A related result is that if d = p is congruent to one mod four, then $$\frac{u}{t} h \equiv B_{\frac{p-1}{2}} \pmod{p}$$ where Bn is the nth Bernoulli number. There are some generalisations of these basic results in the papers of the authors.
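A numerical spot-check of the Bernoulli-number form as reconstructed above, hedged accordingly. The (p, t, u, h) tuples come from standard tables: Q(√5), Q(√13) and Q(√29) have fundamental units (1 + √5)/2, (3 + √13)/2 and (5 + √29)/2 respectively, each with class number 1. The sketch assumes SymPy and Python 3.8+ (for the three-argument pow giving modular inverses):

from sympy import bernoulli

def check_aac(p, t, u, h):
    # Check (u/t)*h == B_{(p-1)/2} (mod p) for a real quadratic field of
    # prime discriminant p = 1 (mod 4), fundamental unit (t + u*sqrt(p))/2,
    # class number h. B.p / B.q are the numerator/denominator of the
    # sympy Rational returned by bernoulli().
    B = bernoulli((p - 1) // 2)
    lhs = u * pow(t, -1, p) * h % p
    rhs = B.p * pow(B.q, -1, p) % p
    return lhs == rhs

# (p, t, u, h) from standard tables of real quadratic fields
for args in [(5, 1, 1, 1), (13, 3, 1, 1), (29, 5, 1, 1)]:
    print(args, check_aac(*args))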
https://en.wikipedia.org/wiki/List%20of%20municipalities%20of%20Sweden%20by%20wealth
This is a list of the municipalities of Sweden by average net wealth of their inhabitants in 2007, according to Statistics Sweden.
https://en.wikipedia.org/wiki/Hyperreal
Hyperreal may refer to: Hyperreal numbers, an extension of the real numbers in mathematics that are used in non-standard analysis Hyperreal.org, a rave culture website based in San Francisco, US Hyperreality, a term used in semiotics and postmodern philosophy Hyperrealism (visual arts), a school of painting "Hyperreal" (The Shamen song), 1990 "Hyperreal" (Flume song), 2017 "Hyperreal", a song by My Ticket Home "Hyper Real", a song by Negativland from Dispepsi See also Hypernumber Superreal number
https://en.wikipedia.org/wiki/Spectral%20graph%20theory
In mathematics, spectral graph theory is the study of the properties of a graph in relationship to the characteristic polynomial, eigenvalues, and eigenvectors of matrices associated with the graph, such as its adjacency matrix or Laplacian matrix. The adjacency matrix of a simple undirected graph is a real symmetric matrix and is therefore orthogonally diagonalizable; its eigenvalues are real algebraic integers. While the adjacency matrix depends on the vertex labeling, its spectrum is a graph invariant, although not a complete one. Spectral graph theory is also concerned with graph parameters that are defined via multiplicities of eigenvalues of matrices associated to the graph, such as the Colin de Verdière number. Cospectral graphs Two graphs are called cospectral or isospectral if the adjacency matrices of the graphs are isospectral, that is, if the adjacency matrices have equal multisets of eigenvalues. Cospectral graphs need not be isomorphic, but isomorphic graphs are always cospectral. Graphs determined by their spectrum A graph is said to be determined by its spectrum if any other graph with the same spectrum as is isomorphic to . Some first examples of families of graphs that are determined by their spectrum include: The complete graphs. The finite starlike trees. Cospectral mates A pair of graphs are said to be cospectral mates if they have the same spectrum, but are non-isomorphic. The smallest pair of cospectral mates is {K1,4, C4 ∪ K1}, comprising the 5-vertex star and the graph union of the 4-vertex cycle and the single-vertex graph, as reported by Collatz and Sinogowitz in 1957. The smallest pair of polyhedral cospectral mates are enneahedra with eight vertices each. Finding cospectral graphs Almost all trees are cospectral, i.e., as the number of vertices grows, the fraction of trees for which there exists a cospectral tree goes to 1. A pair of regular graphs are cospectral if and only if their complements are cospectral. A pair of distance-regular graphs are cospectral if and only if they have the same intersection array. Cospectral graphs can also be constructed by means of the Sunada method. Another important source of cospectral graphs are the point-collinearity graphs and the line-intersection graphs of point-line geometries. These graphs are always cospectral but are often non-isomorphic. Cheeger inequality The famous Cheeger's inequality from Riemannian geometry has a discrete analogue involving the Laplacian matrix; this is perhaps the most important theorem in spectral graph theory and one of the most useful facts in algorithmic applications. It approximates the sparsest cut of a graph through the second eigenvalue of its Laplacian. Cheeger constant The Cheeger constant (also Cheeger number or isoperimetric number) of a graph is a numerical measure of whether or not a graph has a "bottleneck". The Cheeger constant as a measure of "bottleneckedness" is of great interest in many areas: for example
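A quick numpy check of the smallest cospectral pair named above (the vertex numbering is mine):

import numpy as np

# Adjacency matrix of the star K_{1,4}: vertex 0 joined to vertices 1..4.
star = np.zeros((5, 5), dtype=int)
star[0, 1:] = star[1:, 0] = 1

# Adjacency matrix of C_4 union K_1: a 4-cycle on vertices 0..3
# plus the isolated vertex 4.
c4k1 = np.zeros((5, 5), dtype=int)
for a, b in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    c4k1[a, b] = c4k1[b, a] = 1

# Equal spectra, {-2, 0, 0, 0, 2}, despite the graphs being non-isomorphic.
print(np.round(np.linalg.eigvalsh(star), 6))
print(np.round(np.linalg.eigvalsh(c4k1), 6))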
https://en.wikipedia.org/wiki/List%20of%20multivariable%20calculus%20topics
This is a list of multivariable calculus topics. See also multivariable calculus, vector calculus, list of real analysis topics, list of calculus topics. Closed and exact differential forms Contact (mathematics) Contour integral Contour line Critical point (mathematics) Curl (mathematics) Current (mathematics) Curvature Curvilinear coordinates Del Differential form Differential operator Directional derivative Divergence Divergence theorem Double integral Equipotential surface Euler's theorem on homogeneous functions Exterior derivative Flux Frenet–Serret formulas Gauss's law Gradient Green's theorem Green's identities Harmonic function Helmholtz decomposition Hessian matrix Hodge star operator Inverse function theorem Irrotational vector field Isoperimetry Jacobian matrix Lagrange multiplier Lamellar vector field Laplacian Laplacian vector field Level set Line integral Matrix calculus Mixed derivatives Monkey saddle Multiple integral Newtonian potential Parametric equation Parametric surface Partial derivative Partial differential equation Potential Real coordinate space Saddle point Scalar field Solenoidal vector field Stokes' theorem Submersion Surface integral Symmetry of second derivatives Taylor's theorem Total derivative Vector field Vector operator Vector potential
https://en.wikipedia.org/wiki/Chebotarev%27s%20density%20theorem
Chebotarev's density theorem in algebraic number theory describes statistically the splitting of primes in a given Galois extension K of the field of rational numbers. Generally speaking, a prime integer will factor into several ideal primes in the ring of algebraic integers of K. There are only finitely many patterns of splitting that may occur. Although the full description of the splitting of every prime p in a general Galois extension is a major unsolved problem, the Chebotarev density theorem says that the frequency of the occurrence of a given pattern, for all primes p less than a large integer N, tends to a certain limit as N goes to infinity. It was proved by Nikolai Chebotaryov in his thesis in 1922, published in 1926. A special case that is easier to state says that if K is an algebraic number field which is a Galois extension of the rationals of degree n, then the prime numbers that completely split in K have density 1/n among all primes. More generally, splitting behavior can be specified by assigning to (almost) every prime number an invariant, its Frobenius element, which is a representative of a well-defined conjugacy class in the Galois group Gal(K/Q). Then the theorem says that the asymptotic distribution of these invariants is uniform over the group, so that a conjugacy class with k elements occurs with frequency asymptotic to k/n. History and motivation When Carl Friedrich Gauss first introduced the notion of complex integers Z[i], he observed that the ordinary prime numbers may factor further in this new set of integers. In fact, if a prime p is congruent to 1 mod 4, then it factors into a product of two distinct prime gaussian integers, or "splits completely"; if p is congruent to 3 mod 4, then it remains prime, or is "inert"; and if p is 2 then it becomes a product of the square of the prime (1+i) and the invertible gaussian integer -i; we say that 2 "ramifies". For instance, 5 = (2 + i)(2 − i) splits completely; 3 is inert; 2 = −i(1 + i)² ramifies. From this description, it appears that as one considers larger and larger primes, the frequency of a prime splitting completely approaches 1/2, and likewise for the primes that remain primes in Z[i]. Dirichlet's theorem on arithmetic progressions demonstrates that this is indeed the case. Even though the prime numbers themselves appear rather erratically, splitting of the primes in the extension follows a simple statistical law. Similar statistical laws also hold for splitting of primes in the cyclotomic extensions, obtained from the field of rational numbers by adjoining a primitive root of unity of a given order. For example, the ordinary integer primes group into four classes, each with probability 1/4, according to their pattern of splitting in the ring of integers corresponding to the 8th roots of unity. In this case, the field extension has degree 4 and is abelian, with the Galois group isomorphic to the Klein four-group. It turned out that the Galois group of the extension plays a key role in t
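A small empirical check of the density-1/2 claim for Z[i], counting odd primes by residue mod 4 (uses SymPy's primerange; the bound 100000 is an arbitrary choice):

from sympy import primerange

# p = 1 (mod 4) splits in Z[i]; p = 3 (mod 4) stays inert; p = 2 ramifies.
split = inert = 0
for p in primerange(3, 100000):
    if p % 4 == 1:
        split += 1
    else:
        inert += 1

total = split + inert
print(split / total, inert / total)   # both frequencies tend to 1/2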
https://en.wikipedia.org/wiki/List%20of%20commutative%20algebra%20topics
Commutative algebra is the branch of abstract algebra that studies commutative rings, their ideals, and modules over such rings. Both algebraic geometry and algebraic number theory build on commutative algebra. Prominent examples of commutative rings include polynomial rings, rings of algebraic integers, including the ordinary integers , and p-adic integers. Research fields Combinatorial commutative algebra Invariant theory Active research areas Serre's multiplicity conjectures Homological conjectures Basic notions Commutative ring Module (mathematics) Ring ideal, maximal ideal, prime ideal Ring homomorphism Ring monomorphism Ring epimorphism Ring isomorphism Zero divisor Chinese remainder theorem Classes of rings Field (mathematics) Algebraic number field Polynomial ring Integral domain Boolean algebra (structure) Principal ideal domain Euclidean domain Unique factorization domain Dedekind domain Nilpotent elements and reduced rings Dual numbers Tensor product of fields Tensor product of R-algebras Constructions with commutative rings Quotient ring Field of fractions Product of rings Annihilator (ring theory) Integral closure Localization and completion Completion (ring theory) Formal power series Localization of a ring Local ring Regular local ring Localization of a module Valuation (mathematics) Discrete valuation Discrete valuation ring I-adic topology Weierstrass preparation theorem Finiteness properties Noetherian ring Hilbert's basis theorem Artinian ring Ascending chain condition (ACC) and descending chain condition (DCC) Ideal theory Fractional ideal Ideal class group Radical of an ideal Hilbert's Nullstellensatz Homological properties Flat module Flat map Flat map (ring theory) Projective module Injective module Cohen-Macaulay ring Gorenstein ring Complete intersection ring Koszul complex Hilbert's syzygy theorem Quillen–Suslin theorem Dimension theory Height (ring theory) Depth (ring theory) Hilbert polynomial Regular local ring Discrete valuation ring Global dimension Regular sequence (algebra) Krull dimension Krull's principal ideal theorem Ring extensions, primary decomposition Primary ideal Primary decomposition and the Lasker–Noether theorem Noether normalization lemma Going up and going down Relation with algebraic geometry Spectrum of a ring Zariski tangent space Kähler differential Computational and algorithmic aspects Elimination theory Gröbner basis Buchberger's algorithm Related disciplines Algebraic number theory Algebraic geometry Ring theory Field theory (mathematics) Differential algebra Homological algebra
https://en.wikipedia.org/wiki/Prediction%20interval
In statistical inference, specifically predictive inference, a prediction interval is an estimate of an interval in which a future observation will fall, with a certain probability, given what has already been observed. Prediction intervals are often used in regression analysis. Prediction intervals are used in both frequentist statistics and Bayesian statistics: a prediction interval bears the same relationship to a future observation that a frequentist confidence interval or Bayesian credible interval bears to an unobservable population parameter: prediction intervals predict the distribution of individual future points, whereas confidence intervals and credible intervals of parameters predict the distribution of estimates of the true population mean or other quantity of interest that cannot be observed. Introduction For example, if one makes the parametric assumption that the underlying distribution is a normal distribution, and has a sample set {X1, ..., Xn}, then confidence intervals and credible intervals may be used to estimate the population mean μ and population standard deviation σ of the underlying population, while prediction intervals may be used to estimate the value of the next sample variable, Xn+1. Alternatively, in Bayesian terms, a prediction interval can be described as a credible interval for the variable itself, rather than for a parameter of the distribution thereof. The concept of prediction intervals need not be restricted to inference about a single future sample value but can be extended to more complicated cases. For example, in the context of river flooding where analyses are often based on annual values of the largest flow within the year, there may be interest in making inferences about the largest flood likely to be experienced within the next 50 years. Since prediction intervals are only concerned with past and future observations, rather than unobservable population parameters, they are advocated as a better method than confidence intervals by some statisticians, such as Seymour Geisser, following the focus on observables by Bruno de Finetti. Simple Example Given a six-sided die with face values ranging from 1 to 6, the confidence interval for the estimated expected value of the face value will be around 3.5 and will become narrower with more samples. However, the prediction interval for the next roll will approximately range from 1 to 6, even with any number of samples seen so far. Normal distribution Given a sample from a normal distribution, whose parameters are unknown, it is possible to give prediction intervals in the frequentist sense, i.e., an interval [a, b] based on statistics of the sample such that on repeated experiments, Xn+1 falls in the interval the desired percentage of the time; one may call these "predictive confidence intervals". A general technique of frequentist prediction intervals is to find and compute a pivotal quantity of the observables X1, ..., Xn, Xn+1 – meaning a function
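A minimal sketch of the standard normal-theory prediction interval, xbar ± t_{n−1, 1−α/2} · s · sqrt(1 + 1/n), for the next observation from a normal sample (assumes numpy and scipy; the sample here is simulated):

import numpy as np
from scipy import stats

def prediction_interval(sample, alpha=0.05):
    # Two-sided (1 - alpha) prediction interval for the next draw from a
    # normal distribution with unknown mean and variance.
    x = np.asarray(sample, dtype=float)
    n = x.size
    xbar, s = x.mean(), x.std(ddof=1)
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)
    half = t * s * np.sqrt(1 + 1 / n)
    return xbar - half, xbar + half

rng = np.random.default_rng(0)
print(prediction_interval(rng.normal(10.0, 2.0, size=30)))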
https://en.wikipedia.org/wiki/Equation%20solving
In mathematics, to solve an equation is to find its solutions, which are the values (numbers, functions, sets, etc.) that fulfill the condition stated by the equation, consisting generally of two expressions related by an equals sign. When seeking a solution, one or more variables are designated as unknowns. A solution is an assignment of values to the unknown variables that makes the equality in the equation true. In other words, a solution is a value or a collection of values (one for each unknown) such that, when substituted for the unknowns, the equation becomes an equality. A solution of an equation is often called a root of the equation, particularly but not only for polynomial equations. The set of all solutions of an equation is its solution set. An equation may be solved either numerically or symbolically. Solving an equation numerically means that only numbers are admitted as solutions. Solving an equation symbolically means that expressions can be used for representing the solutions. For example, the equation x + y = 2x − 1 is solved for the unknown x by the expression x = y + 1, because substituting y + 1 for x in the equation results in (y + 1) + y = 2(y + 1) − 1, a true statement. It is also possible to take the variable y to be the unknown, and then the equation is solved by y = x − 1. Or x and y can both be treated as unknowns, and then there are many solutions to the equation; a symbolic solution is (x, y) = (a + 1, a), where the variable a may take any value. Instantiating a symbolic solution with specific numbers gives a numerical solution; for example, a = 0 gives (x, y) = (1, 0) (that is, x = 1, y = 0), and a = 1 gives (x, y) = (2, 1). The distinction between known variables and unknown variables is generally made in the statement of the problem, by phrases such as "an equation in x and y", or "solve for x and y", which indicate the unknowns, here x and y. However, it is common to reserve x, y, z, ... to denote the unknowns, and to use a, b, c, ... to denote the known variables, which are often called parameters. This is typically the case when considering polynomial equations, such as quadratic equations. However, for some problems, all variables may assume either role. Depending on the context, solving an equation may consist of finding either any solution (finding a single solution is enough), all solutions, or a solution that satisfies further properties, such as belonging to a given interval. When the task is to find the solution that is the best under some criterion, this is an optimization problem. Solving an optimization problem is generally not referred to as "equation solving", as, generally, solving methods start from a particular solution for finding a better solution, and repeating the process until finding eventually the best solution. Overview One general form of an equation is $f(x_1, \ldots, x_n) = c$, where f is a function, $x_1, \ldots, x_n$ are the unknowns, and c is a constant. Its solutions are the elements of the inverse image $f^{-1}(c)$, where D is the domain of the function f. The set of solutions can be the empty set (there are no solutions), a singleton (there is exactly one solution), finite, or infinite.
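The worked example above can be reproduced symbolically; a short SymPy sketch (the exact output shapes may vary slightly across SymPy versions):

from sympy import symbols, Eq, solve

x, y = symbols('x y')
eq = Eq(x + y, 2 * x - 1)

print(solve(eq, x))       # [y + 1]  : x treated as the unknown
print(solve(eq, y))       # [x - 1]  : y treated as the unknown
print(solve(eq, [x, y]))  # [(y + 1, y)] : a symbolic solution with y free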
https://en.wikipedia.org/wiki/Complex%20multiplication
In mathematics, complex multiplication (CM) is the theory of elliptic curves E that have an endomorphism ring larger than the integers. Put another way, it contains the theory of elliptic functions with extra symmetries, such as are visible when the period lattice is the Gaussian integer lattice or Eisenstein integer lattice. It has an aspect belonging to the theory of special functions, because such elliptic functions, or abelian functions of several complex variables, are then 'very special' functions satisfying extra identities and taking explicitly calculable special values at particular points. It has also turned out to be a central theme in algebraic number theory, allowing some features of the theory of cyclotomic fields to be carried over to wider areas of application. David Hilbert is said to have remarked that the theory of complex multiplication of elliptic curves was not only the most beautiful part of mathematics but of all science. There is also the higher-dimensional complex multiplication theory of abelian varieties A having enough endomorphisms in a certain precise sense, roughly that the action on the tangent space at the identity element of A is a direct sum of one-dimensional modules. Example of the imaginary quadratic field extension Consider an imaginary quadratic field $K = \mathbb{Q}(\sqrt{-d})$, $d > 0$. An elliptic function $f$ is said to have complex multiplication if there is an algebraic relation between $f(z)$ and $f(\lambda z)$ for all $\lambda$ in $K$. Conversely, Kronecker conjectured – in what became known as the Kronecker Jugendtraum – that every abelian extension of $K$ could be obtained by the (roots of the) equation of a suitable elliptic curve with complex multiplication. To this day this remains one of the few cases of Hilbert's twelfth problem which has actually been solved. An example of an elliptic curve with complex multiplication is $\mathbb{C}/\mathbb{Z}[i]\theta$, where Z[i] is the Gaussian integer ring, and θ is any non-zero complex number. Any such complex torus has the Gaussian integers as endomorphism ring. It is known that the corresponding curves can all be written as $y^2 = 4x^3 - ax$ for some $a \in \mathbb{C}$, which demonstrably has two conjugate order-4 automorphisms sending $y \mapsto \pm iy$, $x \mapsto -x$, in line with the action of i on the Weierstrass elliptic functions. More generally, consider the lattice Λ, an additive group in the complex plane, generated by $\omega_1, \omega_2$. Then we define the Weierstrass function of the variable $z$ in $\mathbb{C}$ as follows: $$\wp(z; \Lambda) = \frac{1}{z^2} + \sum_{\lambda \in \Lambda \setminus \{0\}} \left( \frac{1}{(z - \lambda)^2} - \frac{1}{\lambda^2} \right)$$ and $$g_2 = 60 \sum_{\lambda \in \Lambda \setminus \{0\}} \lambda^{-4}, \qquad g_3 = 140 \sum_{\lambda \in \Lambda \setminus \{0\}} \lambda^{-6}.$$ Let $\wp'$ be the derivative of $\wp$. Then we obtain an isomorphism of complex Lie groups: $$z \mapsto [\wp(z) : \wp'(z) : 1]$$ from the complex torus group $\mathbb{C}/\Lambda$ to the projective elliptic curve defined in homogeneous coordinates by $$E = \{ [x : y : t] : y^2 t = 4x^3 - g_2 x t^2 - g_3 t^3 \}$$ and where the point at infinity, the zero element of the group law of the elliptic curve, is by convention taken to be $[0:1:0]$. If the lattice defining the elliptic curve is actually preserved under multiplication by (possibly a proper subring of) the ring of integers of $K$, then the ring of analytic automorphisms of $E = \mathbb{C}/\Lambda$ turns out to be isomorphic to this (sub)ring. If we rewrite $\tau = \omega_2/\omega_1$ where $\operatorname{Im} \tau > 0$, then This means that the j-invariant of $E$ is
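A tiny symbolic check, under my normalization y^2 = x^3 - x rather than whatever scaling the article uses, that the order-4 map (x, y) -> (-x, iy) preserves the curve, illustrating the extra automorphisms that complex multiplication by Z[i] provides:

from sympy import symbols, I, expand

x, y = symbols('x y')

# The curve y^2 = x^3 - x has CM by Z[i]: the order-4 substitution
# (x, y) -> (-x, I*y) sends the defining polynomial f to -f,
# so it maps the zero set of f to itself.
f = y**2 - (x**3 - x)
g = (I*y)**2 - ((-x)**3 - (-x))

print(expand(g + f))   # 0: the transformed equation is exactly -f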
https://en.wikipedia.org/wiki/Quillen%E2%80%93Suslin%20theorem
The Quillen–Suslin theorem, also known as Serre's problem or Serre's conjecture, is a theorem in commutative algebra concerning the relationship between free modules and projective modules over polynomial rings. In the geometric setting it is a statement about the triviality of vector bundles on affine space. The theorem states that every finitely generated projective module over a polynomial ring is free. History Background Geometrically, finitely generated projective modules over the ring correspond to vector bundles over affine space , where free modules correspond to trivial vector bundles. This correspondence (from modules to (algebraic) vector bundles) is given by the 'globalisation' or 'twiddlification' functor, sending $M \mapsto \widetilde{M}$ (see Hartshorne, Algebraic Geometry, II.5, p. 110). Affine space is topologically contractible, so it admits no non-trivial topological vector bundles. A simple argument using the exponential exact sequence and the d-bar Poincaré lemma shows that it also admits no non-trivial holomorphic vector bundles. Jean-Pierre Serre, in his 1955 paper Faisceaux algébriques cohérents, remarked that the corresponding question was not known for algebraic vector bundles: "It is not known whether there exist projective A-modules of finite type which are not free." Here $A$ is a polynomial ring over a field, that is, $A = k[x_1, \ldots, x_n]$. To Serre's dismay, this problem quickly became known as Serre's conjecture. (Serre wrote, "I objected as often as I could [to the name].") The statement does not immediately follow from the proofs given in the topological or holomorphic case. These cases only guarantee that there is a continuous or holomorphic trivialization, not an algebraic trivialization. Serre made some progress towards a solution in 1957 when he proved that every finitely generated projective module over a polynomial ring over a field was stably free, meaning that after forming its direct sum with a finitely generated free module, it became free. The problem remained open until 1976, when Daniel Quillen and Andrei Suslin independently proved the result. Quillen was awarded the Fields Medal in 1978 in part for his proof of the Serre conjecture. Leonid Vaseršteĭn later gave a simpler and much shorter proof of the theorem which can be found in Serge Lang's Algebra. Generalization A generalization relating projective modules over regular Noetherian rings A and their polynomial rings is known as the Bass–Quillen conjecture. Note that although $\mathrm{GL}_n$-bundles on affine space are all trivial, this is not true for G-bundles where G is a general reductive algebraic group.
https://en.wikipedia.org/wiki/Reduced%20ring
In ring theory, a branch of mathematics, a ring is called a reduced ring if it has no non-zero nilpotent elements. Equivalently, a ring is reduced if it has no non-zero elements with square zero, that is, x² = 0 implies x = 0. A commutative algebra over a commutative ring is called a reduced algebra if its underlying ring is reduced. The nilpotent elements of a commutative ring R form an ideal of R, called the nilradical of R; therefore a commutative ring is reduced if and only if its nilradical is zero. Moreover, a commutative ring is reduced if and only if the only element contained in all prime ideals is zero. A quotient ring R/I is reduced if and only if I is a radical ideal. Let $N(R)$ denote the nilradical of a commutative ring $R$. The assignment $R \mapsto R/N(R)$ is a natural functor from the category of commutative rings into the category of reduced rings, and it is left adjoint to the inclusion functor of the latter into the former. The bijection $\operatorname{Hom}(R/N(R), S) \cong \operatorname{Hom}(R, S)$, for $S$ reduced, is induced from the universal property of quotient rings. Let D be the set of all zero-divisors in a reduced ring R. Then D is the union of all minimal prime ideals. Over a Noetherian ring R, we say a finitely generated module M has locally constant rank if $\mathfrak{p} \mapsto \dim_{k(\mathfrak{p})}(M \otimes_R k(\mathfrak{p}))$ is a locally constant (or equivalently continuous) function on Spec R. Then R is reduced if and only if every finitely generated module of locally constant rank is projective. Examples and non-examples Subrings, products, and localizations of reduced rings are again reduced rings. The ring of integers Z is a reduced ring. Every field and every polynomial ring over a field (in arbitrarily many variables) is a reduced ring. More generally, every integral domain is a reduced ring since a nilpotent element is a fortiori a zero-divisor. On the other hand, not every reduced ring is an integral domain. For example, the ring Z[x, y]/(xy) contains x + (xy) and y + (xy) as zero-divisors, but no non-zero nilpotent elements. As another example, the ring Z × Z contains (1, 0) and (0, 1) as zero-divisors, but contains no non-zero nilpotent elements. The ring Z/6Z is reduced, however Z/4Z is not reduced: The class 2 + 4Z is nilpotent. In general, Z/nZ is reduced if and only if n = 0 or n is a square-free integer. If R is a commutative ring and N is the nilradical of R, then the quotient ring R/N is reduced. A commutative ring R of characteristic p for some prime number p is reduced if and only if its Frobenius endomorphism is injective (cf. Perfect field.) Generalizations Reduced rings play an elementary role in algebraic geometry, where this concept is generalized to the concept of a reduced scheme.
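A brute-force sketch of the Z/nZ criterion above, using the stated equivalence that reducedness only requires checking elements of square zero:

def is_reduced_Zn(n):
    # Z/nZ is reduced iff it has no nonzero x with x^2 = 0 (mod n).
    return all(x * x % n != 0 for x in range(1, n))

for n in [4, 6, 8, 12, 30]:
    print(n, is_reduced_Zn(n))   # reduced exactly when n is square-free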
https://en.wikipedia.org/wiki/Duality%20%28projective%20geometry%29
In geometry, a striking feature of projective planes is the symmetry of the roles played by points and lines in the definitions and theorems, and (plane) duality is the formalization of this concept. There are two approaches to the subject of duality, one through language and the other a more functional approach through special mappings. These are completely equivalent and either treatment has as its starting point the axiomatic version of the geometries under consideration. In the functional approach there is a map between related geometries that is called a duality. Such a map can be constructed in many ways. The concept of plane duality readily extends to space duality and beyond that to duality in any finite-dimensional projective geometry. Principle of duality A projective plane may be defined axiomatically as an incidence structure, in terms of a set of points, a set of lines, and an incidence relation that determines which points lie on which lines. These sets can be used to define a plane dual structure. Interchange the role of "points" and "lines" in to obtain the dual structure , where is the converse relation of . is also a projective plane, called the dual plane of . If and are isomorphic, then is called self-dual. The projective planes PG(2, K) for any field (or, more generally, for every division ring (skewfield) K isomorphic to its dual) are self-dual. In particular, Desarguesian planes of finite order are always self-dual. However, there are non-Desarguesian planes which are not self-dual, such as the Hall planes and some that are, such as the Hughes planes. In a projective plane a statement involving points, lines and incidence between them that is obtained from another such statement by interchanging the words "point" and "line" and making whatever grammatical adjustments that are necessary, is called the plane dual statement of the first. The plane dual statement of "Two points are on a unique line" is "Two lines meet at a unique point". Forming the plane dual of a statement is known as dualizing the statement. If a statement is true in a projective plane , then the plane dual of that statement must be true in the dual plane . This follows since dualizing each statement in the proof "in " gives a corresponding statement of the proof "in ". The principle of plane duality says that dualizing any theorem in a self-dual projective plane produces another theorem valid in . The above concepts can be generalized to talk about space duality, where the terms "points" and "planes" are interchanged (and lines remain lines). This leads to the principle of space duality. These principles provide a good reason for preferring to use a "symmetric" term for the incidence relation. Thus instead of saying "a point lies on a line" one should say "a point is incident with a line" since dualizing the latter only involves interchanging point and line ("a line is incident with a point"). The validity of the principle of plane duality
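A concrete plain-Python check of plane duality in the smallest projective plane, the Fano plane PG(2, 2); the labelling of its seven lines below is one standard choice:

from itertools import combinations

# The Fano plane: 7 points (1..7) and 7 lines, 3 points per line.
lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
         {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

def two_points_on_unique_line(lines, points):
    return all(sum(p in L and q in L for L in lines) == 1
               for p, q in combinations(points, 2))

points = range(1, 8)
# Axiom: two points lie on a unique line.
print(two_points_on_unique_line(lines, points))

# Dualize: each point becomes the set of indices of lines through it.
dual_lines = [{i for i, L in enumerate(lines) if p in L} for p in points]
# Dual statement: two lines meet at a unique point. Same check, dual data.
print(two_points_on_unique_line(dual_lines, range(7)))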
https://en.wikipedia.org/wiki/Subobject
In category theory, a branch of mathematics, a subobject is, roughly speaking, an object that sits inside another object in the same category. The notion is a generalization of concepts such as subsets from set theory, subgroups from group theory, and subspaces from topology. Since the detailed structure of objects is immaterial in category theory, the definition of subobject relies on a morphism that describes how one object sits inside another, rather than relying on the use of elements. The dual concept to a subobject is a quotient object. This generalizes concepts such as quotient sets, quotient groups, quotient spaces, quotient graphs, etc. Definitions An appropriate categorical definition of "subobject" may vary with context, depending on the goal. One common definition is as follows. In detail, let $A$ be an object of some category. Given two monomorphisms $u: S \to A$ and $v: T \to A$ with codomain $A$, we define an equivalence relation by $u \equiv v$ if there exists an isomorphism $\varphi: S \to T$ with $v \circ \varphi = u$. Equivalently, we write $u \leq v$ if $u$ factors through $v$—that is, if there exists $\varphi: S \to T$ such that $v \circ \varphi = u$. The binary relation defined by $u \equiv v$ is an equivalence relation on the monomorphisms with codomain $A$, and the corresponding equivalence classes of these monomorphisms are the subobjects of $A$. The relation ≤ induces a partial order on the collection of subobjects of $A$. The collection of subobjects of an object may in fact be a proper class; this means that the discussion given is somewhat loose. If the subobject-collection of every object is a set, the category is called well-powered or, rarely, locally small (this clashes with a different usage of the term locally small, namely that there is a set of morphisms between any two objects). To get the dual concept of quotient object, replace "monomorphism" by "epimorphism" above and reverse arrows. A quotient object of A is then an equivalence class of epimorphisms with domain A. However, in some contexts these definitions are inadequate as they do not concord with well-established notions of subobject or quotient object. In the category of topological spaces, monomorphisms are precisely the injective continuous functions; but not all injective continuous functions are subspace embeddings. In the category of rings, the inclusion $\mathbb{Z} \hookrightarrow \mathbb{Q}$ is an epimorphism but $\mathbb{Q}$ is not the quotient of $\mathbb{Z}$ by a two-sided ideal. To get maps which truly behave like subobject embeddings or quotients, rather than as arbitrary injective functions or maps with dense image, one must restrict to monomorphisms and epimorphisms satisfying additional hypotheses. Therefore one might define a "subobject" to be an equivalence class of so-called "regular monomorphisms" (monomorphisms which can be expressed as an equalizer of two morphisms) and a "quotient object" to be any equivalence class of "regular epimorphisms" (morphisms which can be expressed as a coequalizer of two morphisms). Interpretation This definition corresponds to the ordinary understanding of a subobject outside category theory. When the category's objects a
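In Set, monomorphisms are the injections, and two injections into C represent the same subobject exactly when each factors through the other, i.e. when they have the same image; a tiny illustrative sketch (all names are mine):

# Two injections into C with the same image represent the same subobject,
# even though their domains are different sets.

def image(f, dom):
    return {f(a) for a in dom}

A = {0, 1}
B = {'x', 'y'}
C = {0, 1, 2, 3}

f = lambda a: a                       # injection A -> C
g = lambda b: {'x': 0, 'y': 1}[b]     # injection B -> C

# Same image {0, 1}: f and g are equivalent monomorphisms into C,
# hence the same subobject (the subset {0, 1} of C).
print(image(f, A) == image(g, B))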
https://en.wikipedia.org/wiki/Frattini%20subgroup
In mathematics, particularly in group theory, the Frattini subgroup $\Phi(G)$ of a group G is the intersection of all maximal subgroups of G. For the case that G has no maximal subgroups, for example the trivial group {e} or a Prüfer group, it is defined by $\Phi(G) = G$. It is analogous to the Jacobson radical in the theory of rings, and intuitively can be thought of as the subgroup of "small elements" (see the "non-generator" characterization below). It is named after Giovanni Frattini, who defined the concept in a paper published in 1885. Some facts $\Phi(G)$ is equal to the set of all non-generators or non-generating elements of G. A non-generating element of G is an element that can always be removed from a generating set; that is, an element a of G such that whenever X is a generating set of G containing a, X ∖ {a} is also a generating set of G. $\Phi(G)$ is always a characteristic subgroup of G; in particular, it is always a normal subgroup of G. If G is finite, then $\Phi(G)$ is nilpotent. If G is a finite p-group, then $\Phi(G) = G^p[G, G]$. Thus the Frattini subgroup is the smallest (with respect to inclusion) normal subgroup N such that the quotient group G/N is an elementary abelian group, i.e., isomorphic to a direct sum of cyclic groups of order p. Moreover, if the quotient group $G/\Phi(G)$ (also called the Frattini quotient of G) has order $p^k$, then k is the smallest number of generators for G (that is, the smallest cardinality of a generating set for G). In particular a finite p-group is cyclic if and only if its Frattini quotient is cyclic (of order p). A finite p-group is elementary abelian if and only if its Frattini subgroup is the trivial group, $\Phi(G) = \{e\}$. If G and H are finite, then $\Phi(G \times H) = \Phi(G) \times \Phi(H)$. An example of a group with nontrivial Frattini subgroup is the cyclic group G of order $p^2$, where p is prime, generated by a, say; here, $\Phi(G) = \langle a^p \rangle$. See also Fitting subgroup Frattini's argument Socle
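A brute-force sketch for finite cyclic groups Z/nZ, whose subgroups are exactly the sets of multiples of each divisor of n; it recovers the unique maximal subgroup of C_8 (consistent with the p-group example above) and the intersection of the two maximal subgroups of C_12:

def subgroups(n):
    # Subgroups of Z/nZ, one per divisor d: the multiples of d.
    return [frozenset(range(0, n, d)) for d in range(1, n + 1) if n % d == 0]

def frattini(n):
    # Intersection of all maximal subgroups of Z/nZ.
    proper = [H for H in subgroups(n) if len(H) < n]
    maximal = [H for H in proper if not any(H < K for K in proper)]
    phi = set(range(n))
    for H in maximal:
        phi &= H
    return sorted(phi)

print(frattini(8))    # [0, 2, 4, 6] : the unique maximal subgroup of C_8
print(frattini(12))   # [0, 6]       : <2> intersected with <3> in C_12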
https://en.wikipedia.org/wiki/Poincar%C3%A9%20duality
In mathematics, the Poincaré duality theorem, named after Henri Poincaré, is a basic result on the structure of the homology and cohomology groups of manifolds. It states that if M is an n-dimensional oriented closed manifold (compact and without boundary), then the kth cohomology group of M is isomorphic to the (n − k)th homology group of M, for all integers k. Poincaré duality holds for any coefficient ring, so long as one has taken an orientation with respect to that coefficient ring; in particular, since every manifold has a unique orientation mod 2, Poincaré duality holds mod 2 without any assumption of orientation. History A form of Poincaré duality was first stated, without proof, by Henri Poincaré in 1893. It was stated in terms of Betti numbers: The kth and (n − k)th Betti numbers of a closed (i.e., compact and without boundary) orientable n-manifold are equal. The cohomology concept was at that time about 40 years from being clarified. In his 1895 paper Analysis Situs, Poincaré tried to prove the theorem using topological intersection theory, which he had invented. Criticism of his work by Poul Heegaard led him to realize that his proof was seriously flawed. In the first two complements to Analysis Situs, Poincaré gave a new proof in terms of dual triangulations. Poincaré duality did not take on its modern form until the advent of cohomology in the 1930s, when Eduard Čech and Hassler Whitney invented the cup and cap products and formulated Poincaré duality in these new terms. Modern formulation The modern statement of the Poincaré duality theorem is in terms of homology and cohomology: if M is a closed oriented n-manifold, then there is a canonically defined isomorphism $H^k(M) \cong H_{n-k}(M)$ for any integer k. To define such an isomorphism, one chooses a fixed fundamental class [M] of M, which will exist if M is oriented. Then the isomorphism is defined by mapping an element $\alpha \in H^k(M)$ to the cap product $[M] \frown \alpha$. Homology and cohomology groups are defined to be zero for negative degrees, so Poincaré duality in particular implies that the homology and cohomology groups of orientable closed n-manifolds are zero for degrees bigger than n. Here, homology and cohomology are integral, but the isomorphism remains valid over any coefficient ring. In the case where an oriented manifold is not compact, one has to replace homology by Borel–Moore homology or replace cohomology by cohomology with compact support. Dual cell structures Given a triangulated manifold, there is a corresponding dual polyhedral decomposition. The dual polyhedral decomposition is a cell decomposition of the manifold such that the k-cells of the dual polyhedral decomposition are in bijective correspondence with the (n − k)-cells of the triangulation, generalizing the notion of dual polyhedra. Precisely, let T be a triangulation of an n-manifold M. Let S be a simplex of T. Let be a top-dimensional simplex of T containing S, so we can think of S as a subset of the vertices of . Define the dual cell DS correspondi
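A small numpy computation of Betti numbers from simplicial boundary matrices for the boundary of a tetrahedron, a triangulation of the 2-sphere; the palindromic result b0 = b2 = 1, b1 = 0 is the duality statement in its original Betti-number form (the triangulation choice is mine):

import numpy as np
from itertools import combinations

verts = range(4)
edges = list(combinations(verts, 2))   # 6 edges
faces = list(combinations(verts, 3))   # 4 triangular faces

# boundary of an edge (a, b) is (b) - (a)
d1 = np.zeros((4, len(edges)))
for j, (a, b) in enumerate(edges):
    d1[b, j], d1[a, j] = 1, -1

# boundary of a face (a, b, c) is (b,c) - (a,c) + (a,b)
d2 = np.zeros((len(edges), len(faces)))
for j, (a, b, c) in enumerate(faces):
    d2[edges.index((b, c)), j] = 1
    d2[edges.index((a, c)), j] = -1
    d2[edges.index((a, b)), j] = 1

r1, r2 = np.linalg.matrix_rank(d1), np.linalg.matrix_rank(d2)
b0 = 4 - r1                      # number of connected components
b1 = (len(edges) - r1) - r2      # dim ker d1 - rank d2
b2 = len(faces) - r2             # dim ker d2
print(b0, b1, b2)                # 1 0 1 : palindromic, as duality predicts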
https://en.wikipedia.org/wiki/Random%20field
In physics and mathematics, a random field is a random function over an arbitrary domain (usually a multi-dimensional space such as $\mathbb{R}^n$). That is, it is a function $f(x)$ that takes on a random value at each point $x \in \mathbb{R}^n$ (or some other domain). It is also sometimes thought of as a synonym for a stochastic process with some restriction on its index set. That is, by modern definitions, a random field is a generalization of a stochastic process where the underlying parameter need no longer be real or integer valued "time" but can instead take values that are multidimensional vectors or points on some manifold. Formal definition Given a probability space $(\Omega, \mathcal{F}, P)$, an X-valued random field is a collection of X-valued random variables indexed by elements in a topological space T. That is, a random field F is a collection $\{F_t : t \in T\}$ where each $F_t$ is an X-valued random variable. Examples In its discrete version, a random field is a list of random numbers whose indices are identified with a discrete set of points in a space (for example, n-dimensional Euclidean space). Suppose there are four random variables, $X_1$, $X_2$, $X_3$, and $X_4$, located in a 2D grid at (0,0), (0,2), (2,2), and (2,0), respectively. Suppose each random variable can take on the value of -1 or 1, and the probability of each random variable's value depends on its immediately adjacent neighbours. This is a simple example of a discrete random field. More generally, the values each can take on might be defined over a continuous domain. In larger grids, it can also be useful to think of the random field as a "function valued" random variable as described above. In quantum field theory the notion is generalized to a random functional, one that takes on random values over a space of functions (see Feynman integral). Several kinds of random fields exist, among them the Markov random field (MRF), Gibbs random field, conditional random field (CRF), and Gaussian random field. In 1974, Julian Besag proposed an approximation method relying on the relation between MRFs and Gibbs RFs. Example properties An MRF exhibits the Markov property $$P(X_i = x_i \mid X_j = x_j,\ j \neq i) = P(X_i = x_i \mid X_j = x_j,\ j \in \partial_i)$$ for each choice of values. Here each $\partial_i$ is the set of neighbors of $i$. In other words, the probability that a random variable assumes a value depends on its immediate neighboring random variables. The probability of a random variable in an MRF is given by where the sum (can be an integral) is over the possible values of k. It is sometimes difficult to compute this quantity exactly. Applications When used in the natural sciences, values in a random field are often spatially correlated. For example, adjacent values (i.e. values with adjacent indices) do not differ as much as values that are further apart. This is an example of a covariance structure, many different types of which may be modeled in a random field. One example is the Ising model where sometimes nearest neighbor interactions are only included as a simplification to better understand the model. A common use of random fields is in the g
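A sketch of the four-spin example above as an Ising-type Gibbs field, computing the exact joint distribution by enumerating all 16 configurations (the coupling strength beta is an arbitrary choice of mine):

from itertools import product
import math

# Four +/-1 spins at the corners of the 2x2 grid, each interacting with
# its two adjacent neighbours around the square: X0-X1-X2-X3-X0.
neighbours = [(0, 1), (1, 2), (2, 3), (3, 0)]
beta = 0.5   # coupling strength (assumed for illustration)

def weight(state):
    energy = -sum(state[i] * state[j] for i, j in neighbours)
    return math.exp(-beta * energy)

states = list(product([-1, 1], repeat=4))
Z = sum(weight(s) for s in states)   # normalizing constant

probs = {s: weight(s) / Z for s in states}
print(probs[(1, 1, 1, 1)], probs[(1, -1, 1, -1)])  # aligned vs alternating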
https://en.wikipedia.org/wiki/Category%20of%20preordered%20sets
In mathematics, the category Ord has preordered sets as objects and order-preserving functions as morphisms. This is a category because the composition of two order-preserving functions is order preserving and the identity map is order preserving. The monomorphisms in Ord are the injective order-preserving functions. The empty set (considered as a preordered set) is the initial object of Ord, and the terminal objects are precisely the singleton preordered sets. There are thus no zero objects in Ord. The categorical product in Ord is given by the product order on the cartesian product. We have a forgetful functor Ord → Set that assigns to each preordered set the underlying set, and to each order-preserving function the underlying function. This functor is faithful, and therefore Ord is a concrete category. This functor has a left adjoint (sending every set to that set equipped with the equality relation) and a right adjoint (sending every set to that set equipped with the total relation). 2-category structure The set of morphisms (order-preserving functions) between two preorders actually has more structure than that of a set. It can be made into a preordered set itself by the pointwise relation: (f ≤ g) ⇔ (∀x f(x) ≤ g(x)) This preordered set can in turn be considered as a category, which makes Ord a 2-category (the additional axioms of a 2-category trivially hold because any equation of parallel morphisms is true in a posetal category). With this 2-category structure, a pseudofunctor F from a category C to Ord is given by the same data as a 2-functor, but has the relaxed properties: ∀x ∈ F(A), F(idA)(x) ≃ x, ∀x ∈ F(A), F(g∘f)(x) ≃ F(g)(F(f)(x)), where x ≃ y means x ≤ y and y ≤ x. See also FinOrd Simplex category
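A finite illustration: enumerating the order-preserving maps between two small chains and comparing two of them in the pointwise preorder described above (all encodings are mine):

from itertools import product

def monotone_maps(dom, cod, leq_dom, leq_cod):
    # All order-preserving functions from (dom, leq_dom) to (cod, leq_cod).
    maps = []
    for values in product(cod, repeat=len(dom)):
        f = dict(zip(dom, values))
        if all(leq_cod(f[a], f[b]) for a in dom for b in dom if leq_dom(a, b)):
            maps.append(f)
    return maps

dom = [0, 1]          # the 2-chain 0 <= 1
cod = [0, 1, 2]       # the 3-chain
leq = lambda a, b: a <= b

maps = monotone_maps(dom, cod, leq, leq)
print(len(maps))      # 6 monotone maps from the 2-chain to the 3-chain

# Pointwise preorder on the hom-set: f <= g iff f(x) <= g(x) for all x.
f, g = maps[0], maps[-1]
print(all(f[x] <= g[x] for x in dom))   # True: constant 0 <= constant 2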
https://en.wikipedia.org/wiki/Champernowne%20constant
In mathematics, the Champernowne constant is a transcendental real constant whose decimal expansion has important properties. It is named after economist and mathematician D. G. Champernowne, who published it as an undergraduate in 1933. For base 10, the number is defined by concatenating representations of successive integers: C10 = 0.12345678910111213141516…. Champernowne constants can also be constructed similarly in other bases, for example: C2 = 0.11011100101110111… in base 2. The Champernowne word or Barbier word is the sequence of digits of C10 obtained by writing it in base 10 and juxtaposing the digits: 12345678910111213141516… More generally, a Champernowne sequence (sometimes also called a Champernowne word) is any sequence of digits obtained by concatenating all finite digit-strings (in any given base) in some recursive order. For instance, the binary Champernowne sequence in shortlex order is 0 1 00 01 10 11 000 001 … where spaces (otherwise to be ignored) have been inserted just to show the strings being concatenated. Properties A real number x is said to be normal if its digits in every base follow a uniform distribution: all digits being equally likely, all pairs of digits equally likely, all triplets of digits equally likely, etc. x is said to be normal in base b if its digits in base b follow a uniform distribution. If we denote a digit string as [a0, a1, …], then, in base 10, we would expect strings [0], [1], [2], …, [9] to occur 1/10 of the time, strings [0,0], [0,1], …, [9,8], [9,9] to occur 1/100 of the time, and so on, in a normal number. Champernowne proved that C10 is normal in base 10, while Nakai and Shiokawa proved a more general theorem, a corollary of which is that Cb is normal in base b for any b. It is an open problem whether Cb is normal in other bases. Kurt Mahler showed that the constant is transcendental. The irrationality measure of C10 is 10, and more generally the irrationality measure of Cb is b for any base b. The Champernowne word is a disjunctive sequence. Series The definition of the Champernowne constant immediately gives rise to an infinite series representation involving a double sum, where is the number of digits between the decimal point and the first contribution from an -digit base-10 number; these expressions generalize to an arbitrary base b by replacing 10 and 9 with b and b − 1 respectively. Alternative forms are and where and denote the floor and ceiling functions. Returning to the first of these series, both the summand of the outer sum and the expression for can be simplified using the closed form for the two-dimensional geometric series: The resulting expression for is while the summand of the outer sum becomes Summing over all gives Observe that in the summand, the expression in parentheses is approximately for and rapidly approaches that value as grows, while the exponent grows exponentially with . As a consequence, each additional term provides an exponentially growing number of correct digits even though the number of digits in the numerators and denominators of the fractions comprising these terms grows only linearly.
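A quick empirical look at the base-10 digit frequencies of the Champernowne word, consistent with (though of course not proving) normality:

from collections import Counter

def champernowne_digits(n_ints):
    # The base-10 Champernowne word: concatenate 1, 2, 3, ..., n_ints.
    return ''.join(str(k) for k in range(1, n_ints + 1))

digits = champernowne_digits(100000)
freq = Counter(digits)
total = len(digits)

# Single-digit frequencies approach 1/10, as normality in base 10 requires.
for d in '0123456789':
    print(d, freq[d] / total)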
https://en.wikipedia.org/wiki/Stoneham%20number
In mathematics, the Stoneham numbers are a certain class of real numbers, named after mathematician Richard G. Stoneham (1920–1996). For coprime numbers b, c > 1, the Stoneham number αb,c is defined as αb,c = Σn≥1 1/(c^n b^(c^n)). It was shown by Stoneham in 1973 that αb,c is b-normal whenever c is an odd prime and b is a primitive root of c2. In 2002, Bailey & Crandall showed that coprimality of b, c > 1 is sufficient for b-normality of αb,c. References Eponymous numbers in mathematics Number theory Sets of real numbers
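A sketch, assuming the series definition restored above: it approximates αb,c with exact rationals and prints leading base-b digits; the helper names (stoneham, base_digits) are ours.

```python
# Sketch: approximate alpha_{b,c} = sum_{n>=1} 1/(c^n * b^(c^n)) exactly
# with fractions, then extract base-b digits of the fractional part.
from fractions import Fraction

def stoneham(b, c, terms):
    return sum(Fraction(1, c**n * b**(c**n)) for n in range(1, terms + 1))

def base_digits(x, b, n_digits):
    """First n_digits base-b digits of the fractional part of x."""
    digits = []
    for _ in range(n_digits):
        x *= b
        d = int(x)        # integer part is the next digit
        digits.append(d)
        x -= d
    return digits

alpha = stoneham(2, 3, terms=4)   # alpha_{2,3}, proven normal in base 2
print(base_digits(alpha, 2, 40))
```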
https://en.wikipedia.org/wiki/Public%20health%20journal
A public health journal is a scientific journal devoted to the field of public health, including epidemiology, biostatistics, and health care (including medicine, nursing and related fields). Public health journals, like most scientific journals, are peer-reviewed. Public health journals are commonly published by health organizations and societies, such as the Bulletin of the World Health Organization or the Journal of Epidemiology and Community Health (published by the British Medical Association). Many others are published by a handful of large publishing corporations that includes Elsevier, Wolters Kluwer, Wiley-Blackwell, Springer Science+Business Media, and Informa, each of which has many imprints (which are brands named after former independent publishers that were merged or acquired). Many societies partner with such corporations to handle the work of producing their journals. The increase in public health research in recent decades has seen a rapid increase in the number of articles and journals. As such, many public health journals have emerged with a specialized focus, such as in the area of policy (e.g. Journal of Public Health Policy), a specific region or country of the world (e.g. Asia-Pacific Journal of Public Health, Pan American Journal of Public Health or Eastern Mediterranean Health Journal), a specific intervention/practice area (e.g. Cancer Epidemiology, Biomarkers & Prevention), or other particular focus (e.g. Human Resources for Health). Scope Public health journals often indicate their target audience as being interdisciplinary, including health care professionals, public health decision-makers and researchers. A main objective is to support evidence-based policy and evidence-based practice in public health. In contrast, medical journals (e.g. The Lancet) typically focus on reaching medical professionals as their main audience, although the boundaries between these two categories are increasingly blurry. It has been argued that some medical and public health journals are "filled with increasingly complex science" which depends upon advanced statistics and research methods that health care providers may have difficulty understanding. In response they have turned towards publishing "articles that are more journalism than science" such as reviews, news, and educational material. However, science is what attracts major attention and leads institutions and libraries to purchase subscriptions. Review process For an article to be accepted for publication in a public health or medical journal it must typically undergo a review process. Each journal creates its own process, but they have certain common characteristics in general. There are various general "levels" of scrutiny, which have some effect on the respect given to articles published in the journals. Some broad categories might be editorial review, peer review, and blind peer review. Richard Smith, former editor of the British Medical Journal, stated in 2006 that stu
https://en.wikipedia.org/wiki/Alexandrov%20topology
In topology, an Alexandrov topology is a topology in which the intersection of every family of open sets is open. It is an axiom of topology that the intersection of every finite family of open sets is open; in Alexandrov topologies the finite restriction is dropped. A set together with an Alexandrov topology is known as an Alexandrov-discrete space or finitely generated space. Alexandrov topologies are uniquely determined by their specialization preorders. Indeed, given any preorder ≤ on a set X, there is a unique Alexandrov topology on X for which the specialization preorder is ≤. The open sets are just the upper sets with respect to ≤. Thus, Alexandrov topologies on X are in one-to-one correspondence with preorders on X. Alexandrov-discrete spaces are also called finitely generated spaces since their topology is uniquely determined by the family of all finite subspaces. Alexandrov-discrete spaces can thus be viewed as a generalization of finite topological spaces. Due to the fact that inverse images commute with arbitrary unions and intersections, the property of being an Alexandrov-discrete space is preserved under quotients. Alexandrov-discrete spaces are named after the Russian topologist Pavel Alexandrov. They should not be confused with the more geometrical Alexandrov spaces introduced by the Russian mathematician Aleksandr Danilovich Aleksandrov. Characterizations of Alexandrov topologies Alexandrov topologies have numerous characterizations. Let X = <X, T> be a topological space. Then the following are equivalent: Open and closed set characterizations: Open set. An arbitrary intersection of open sets in X is open. Closed set. An arbitrary union of closed sets in X is closed. Neighbourhood characterizations: Smallest neighbourhood. Every point of X has a smallest neighbourhood. Neighbourhood filter. The neighbourhood filter of every point in X is closed under arbitrary intersections. Interior and closure algebraic characterizations: Interior operator. The interior operator of X distributes over arbitrary intersections of subsets. Closure operator. The closure operator of X distributes over arbitrary unions of subsets. Preorder characterizations: Specialization preorder. T is the finest topology consistent with the specialization preorder of X i.e. the finest topology giving the preorder ≤ satisfying x ≤ y if and only if x is in the closure of {y} in X. Open up-set. There is a preorder ≤ such that the open sets of X are precisely those that are upward closed i.e. if x is in the set and x ≤ y then y is in the set. (This preorder will be precisely the specialization preorder.) Closed down-set. There is a preorder ≤ such that the closed sets of X are precisely those that are downward closed i.e. if x is in the set and y ≤ x then y is in the set. (This preorder will be precisely the specialization preorder.) Downward closure. A point x lies in the closure of a subset S of X if and only if there is a point y in S such t
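A small sketch of the preorder-to-topology correspondence described above: given a preorder on a three-point set (our own example), the open sets of the induced Alexandrov topology are exactly the up-sets, and the resulting family is closed under arbitrary unions and intersections.

```python
# Sketch: list the open sets (up-sets) of the Alexandrov topology induced
# by the preorder a <= b, a <= c on X = {a, b, c}.
from itertools import chain, combinations

X = ["a", "b", "c"]
leq = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "b"), ("a", "c")}

def is_up_set(S):
    """S is open iff x in S and x <= y imply y in S."""
    return all(y in S for x in S for (x2, y) in leq if x2 == x)

subsets = chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))
opens = [set(S) for S in subsets if is_up_set(set(S))]
print(opens)
```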
https://en.wikipedia.org/wiki/Borel%20regular%20measure
In mathematics, an outer measure μ on n-dimensional Euclidean space Rn is called a Borel regular measure if the following two conditions hold: Every Borel set B ⊆ Rn is μ-measurable in the sense of Carathéodory's criterion: for every A ⊆ Rn, μ(A) = μ(A ∩ B) + μ(A \ B). For every set A ⊆ Rn there exists a Borel set B ⊆ Rn such that A ⊆ B and μ(A) = μ(B). Notice that the set A need not be μ-measurable: μ(A) is however well defined as μ is an outer measure. An outer measure satisfying only the first of these two requirements is called a Borel measure, while an outer measure satisfying only the second requirement (with the Borel set B replaced by a measurable set B) is called a regular measure. The Lebesgue outer measure on Rn is an example of a Borel regular measure. It can be proved that a Borel regular measure, although introduced here as an outer measure (only countably subadditive), becomes a full measure (countably additive) if restricted to the Borel sets. References Measures (measure theory)
https://en.wikipedia.org/wiki/Artinian%20module
In mathematics, specifically abstract algebra, an Artinian module is a module that satisfies the descending chain condition on its poset of submodules. They are for modules what Artinian rings are for rings, and a ring is Artinian if and only if it is an Artinian module over itself (with left or right multiplication). Both concepts are named for Emil Artin. In the presence of the axiom of (dependent) choice, the descending chain condition becomes equivalent to the minimum condition, and so that may be used in the definition instead. Like Noetherian modules, Artinian modules enjoy the following heredity property: If M is an Artinian R-module, then so is any submodule and any quotient of M. The converse also holds: If M is any R-module and N any Artinian submodule such that M/N is Artinian, then M is Artinian. As a consequence, any finitely-generated module over an Artinian ring is Artinian. Since an Artinian ring is also a Noetherian ring, and finitely-generated modules over a Noetherian ring are Noetherian, it is true that for an Artinian ring R, any finitely-generated R-module is both Noetherian and Artinian, and is said to be of finite length. It also follows that any finitely generated Artinian module is Noetherian even without the assumption of R being Artinian. However, if R is not Artinian and M is not finitely-generated, there are counterexamples. Left and right Artinian rings, modules and bimodules The ring R can be considered as a right module, where the action is the natural one given by the ring multiplication on the right. R is called right Artinian when this right module R is an Artinian module. The definition of "left Artinian ring" is done analogously. For noncommutative rings this distinction is necessary, because it is possible for a ring to be Artinian on one side but not the other. The left-right adjectives are not normally necessary for modules, because the module M is usually given as a left or right R-module at the outset. However, it is possible that M may have both a left and right R-module structure, and then calling M Artinian is ambiguous, and it becomes necessary to clarify which module structure is Artinian. To separate the properties of the two structures, one can abuse terminology and refer to M as left Artinian or right Artinian when, strictly speaking, it is correct to say that M, with its left R-module structure, is Artinian. The occurrence of modules with a left and right structure is not unusual: for example R itself has a left and right R-module structure. In fact this is an example of a bimodule, and it may be possible for an abelian group M to be made into a left-R, right-S bimodule for a different ring S. Indeed, for any right module M, it is automatically a left module over the ring of integers Z, and moreover is a Z-R-bimodule. For example, consider the rational numbers Q as a Z-Q-bimodule in the natural way. Then Q is not Artinian as a left Z-module, but it is Artinian as a right Q-module
https://en.wikipedia.org/wiki/Square%20pyramidal%20number
In mathematics, a pyramid number, or square pyramidal number, is a natural number that counts the number of stacked spheres in a pyramid with a square base. The study of these numbers goes back to Archimedes and Fibonacci. They are part of a broader topic of figurate numbers representing the numbers of points forming regular patterns within different shapes. As well as counting spheres in a pyramid, these numbers can be described algebraically as a sum of the first positive square numbers, or as the values of a cubic polynomial. They can be used to solve several other counting problems, including counting squares in a square grid and counting acute triangles formed from the vertices of an odd regular polygon. They equal the sums of consecutive tetrahedral numbers, and are one-fourth of a larger tetrahedral number. The sum of two consecutive square pyramidal numbers is an octahedral number. History The pyramidal numbers were one of the few types of three-dimensional figurate numbers studied in Greek mathematics, in works by Nicomachus, Theon of Smyrna, and Iamblichus. Formulas for summing consecutive squares to give a cubic polynomial, whose values are the square pyramidal numbers, are given by Archimedes, who used this sum as a lemma as part of a study of the volume of a cone, and by Fibonacci, as part of a more general solution to the problem of finding formulas for sums of progressions of squares. The square pyramidal numbers were also one of the families of figurate numbers studied by Japanese mathematicians of the wasan period, who named them "kirei saijō suida" (with modern kanji, 奇零 再乗 蓑深). The same problem, formulated as one of counting the cannonballs in a square pyramid, was posed by Walter Raleigh to mathematician Thomas Harriot in the late 1500s, while both were on a sea voyage. The cannonball problem, asking whether there are any square pyramidal numbers that are also square numbers other than 1 and 4900, is said to have developed out of this exchange. Édouard Lucas found the 4900-ball pyramid with a square number of balls, and in making the cannonball problem more widely known, suggested that it was the only nontrivial solution. After incomplete proofs by Lucas and Claude-Séraphin Moret-Blanc, the first complete proof that no other such numbers exist was given by G. N. Watson in 1918. Formula If spheres are packed into square pyramids whose number of layers is 1, 2, 3, etc., then the square pyramidal numbers giving the numbers of spheres in each pyramid are: 1, 5, 14, 30, 55, 91, 140, 204, 285, 385, … These numbers can be calculated algebraically, as follows. If a pyramid of spheres is decomposed into its square layers with a square number of spheres in each, then the total number of spheres can be counted as the sum of the number of spheres in each square, and this summation can be solved to give a cubic polynomial, which can be written in several equivalent ways: Pn = 1² + 2² + 3² + ⋯ + n² = n(n + 1)(2n + 1)/6 = (2n³ + 3n² + n)/6. This equation for a sum of squares is a special case of Faulhaber's formula for sums of powers, and ma
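A sketch of the closed form restored above, checked against the direct sum of squares, together with the cannonball coincidence P(24) = 70² = 4900 mentioned in the History section; the function name is ours.

```python
# Sketch: square pyramidal numbers via P(n) = n(n+1)(2n+1)/6.
def square_pyramidal(n):
    return n * (n + 1) * (2 * n + 1) // 6

# The closed form agrees with the direct sum of the first n squares.
assert all(square_pyramidal(n) == sum(k * k for k in range(1, n + 1))
           for n in range(1, 200))

print(square_pyramidal(4))    # 30 spheres in a 4-layer pyramid
print(square_pyramidal(24))   # 4900 = 70**2, the only nontrivial square value
```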
https://en.wikipedia.org/wiki/Specialization%20%28pre%29order
In the branch of mathematics known as topology, the specialization (or canonical) preorder is a natural preorder on the set of the points of a topological space. For most spaces that are considered in practice, namely for all those that satisfy the T0 separation axiom, this preorder is even a partial order (called the specialization order). On the other hand, for T1 spaces the order becomes trivial and is of little interest. The specialization order is often considered in applications in computer science, where T0 spaces occur in denotational semantics. The specialization order is also important for identifying suitable topologies on partially ordered sets, as is done in order theory. Definition and motivation Consider any topological space X. The specialization preorder ≤ on X relates two points of X when one lies in the closure of the other. However, various authors disagree on which 'direction' the order should go. What is agreed is that if x is contained in cl{y}, (where cl{y} denotes the closure of the singleton set {y}, i.e. the intersection of all closed sets containing {y}), we say that x is a specialization of y and that y is a generalization of x; this is commonly written y ⤳ x. Unfortunately, the property "x is a specialization of y" is alternatively written as "x ≤ y" and as "y ≤ x" by various authors (the conventions differ from author to author). Both definitions have intuitive justifications: in the case of the former, we have x ≤ y if and only if cl{x} ⊆ cl{y}. However, in the case where our space X is the prime spectrum Spec R of a commutative ring R (which is the motivational situation in applications related to algebraic geometry), then under our second definition of the order, we have y ≤ x if and only if y ⊆ x as prime ideals of the ring R. For the sake of consistency, for the remainder of this article we will take the first definition, that "x is a specialization of y" be written as x ≤ y. We then see that: x ≤ y if and only if x is contained in all closed sets that contain y. x ≤ y if and only if y is contained in all open sets that contain x. These restatements help to explain why one speaks of a "specialization": y is more general than x, since it is contained in more open sets. This is particularly intuitive if one views closed sets as properties that a point x may or may not have. The more closed sets contain a point, the more properties the point has, and the more special it is. The usage is consistent with the classical logical notions of genus and species; and also with the traditional use of generic points in algebraic geometry, in which closed points are the most specific, while a generic point of a space is one contained in every nonempty open subset. Specialization as an idea is applied also in valuation theory. The intuition of upper elements being more specific is typically found in domain theory, a branch of order theory that has ample applications in computer science. Upper and lower sets Let X be a topological spa
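A small sketch of the second restatement above, computed on the Sierpiński space (our own example): x ≤ y exactly when every open set containing x also contains y.

```python
# Sketch: recover the specialization preorder from a finite topology.
X = {"a", "b"}
opens = [set(), {"b"}, {"a", "b"}]   # the Sierpinski space

def specializes(x, y):
    """x <= y in the convention chosen above (x lies in cl{y})."""
    return all(y in U for U in opens if x in U)

for x in sorted(X):
    for y in sorted(X):
        if specializes(x, y):
            print(f"{x} <= {y}")   # prints a <= a, a <= b, b <= b
```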
https://en.wikipedia.org/wiki/Binomial%20%28polynomial%29
In algebra, a binomial is a polynomial that is the sum of two terms, each of which is a monomial. It is the simplest kind of a sparse polynomial after the monomials. Definition A binomial is a polynomial which is the sum of two monomials. A binomial in a single indeterminate (also known as a univariate binomial) can be written in the form ax^m + bx^n, where a and b are numbers, and m and n are distinct non-negative integers and x is a symbol which is called an indeterminate or, for historical reasons, a variable. In the context of Laurent polynomials, a Laurent binomial, often simply called a binomial, is similarly defined, but the exponents m and n may be negative. More generally, a binomial may be written as: a x1^{n1} ⋯ xk^{nk} + b x1^{m1} ⋯ xk^{mk}. Examples Simple binomials include 3x − 2x², x² + y², and 2x³y² + 7xy. Operations on simple binomials The binomial x² − y² can be factored as the product of two other binomials: x² − y² = (x + y)(x − y). This is a special case of the more general formula: x^{n+1} − y^{n+1} = (x − y)(x^n + x^{n−1}y + ⋯ + y^n). When working over the complex numbers, this can also be extended to: x² + y² = (x + yi)(x − yi). The product of a pair of linear binomials ax + b and cx + d is a trinomial: (ax + b)(cx + d) = acx² + (ad + bc)x + bd. A binomial raised to the nth power, represented as (x + y)^n, can be expanded by means of the binomial theorem or, equivalently, using Pascal's triangle. For example, the square of the binomial x + y is equal to the sum of the squares of the two terms and twice the product of the terms, that is: (x + y)² = x² + 2xy + y². The numbers (1, 2, 1) appearing as multipliers for the terms in this expansion are the binomial coefficients two rows down from the top of Pascal's triangle. The expansion of the nth power uses the numbers n rows down from the top of the triangle. An application of the above formula for the square of a binomial is the "(m, n)-formula" for generating Pythagorean triples: For m < n, let a = n² − m², b = 2mn, and c = n² + m²; then a² + b² = c². Binomials that are sums or differences of cubes can be factored into smaller-degree polynomials as follows: x³ + y³ = (x + y)(x² − xy + y²) and x³ − y³ = (x − y)(x² + xy + y²). See also Completing the square Binomial distribution List of factorial and binomial topics (which contains a large number of related links) Notes References Algebra Factorial and binomial topics
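A sketch of two of the operations above: expanding a power of a binomial with Pascal's-triangle coefficients, and the (m, n)-formula for Pythagorean triples; the function names are ours.

```python
# Sketch: binomial expansion coefficients and the (m, n) Pythagorean formula.
from math import comb

def expand_binomial(n):
    """Coefficients of (x + y)**n, i.e. row n of Pascal's triangle."""
    return [comb(n, k) for k in range(n + 1)]

print(expand_binomial(2))   # [1, 2, 1]: x^2 + 2xy + y^2

def pythagorean_triple(m, n):
    """For m < n, returns (n^2 - m^2, 2mn, n^2 + m^2)."""
    a, b, c = n * n - m * m, 2 * m * n, n * n + m * m
    assert a * a + b * b == c * c
    return a, b, c

print(pythagorean_triple(1, 2))  # (3, 4, 5)
```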
https://en.wikipedia.org/wiki/Krull%27s%20principal%20ideal%20theorem
In commutative algebra, Krull's principal ideal theorem, named after Wolfgang Krull (1899–1971), gives a bound on the height of a principal ideal in a commutative Noetherian ring. The theorem is sometimes referred to by its German name, Krulls Hauptidealsatz (from Haupt ("principal") + Ideal + Satz ("theorem")). Precisely, if R is a Noetherian ring and I is a principal, proper ideal of R, then each minimal prime ideal over I has height at most one. This theorem can be generalized to ideals that are not principal, and the result is often called Krull's height theorem. This says that if R is a Noetherian ring and I is a proper ideal generated by n elements of R, then each minimal prime over I has height at most n. The converse is also true: if a prime ideal has height n, then it is a minimal prime ideal over an ideal generated by n elements. The principal ideal theorem and the generalization, the height theorem, both follow from the fundamental theorem of dimension theory in commutative algebra (see also below for the direct proofs). Bourbaki's Commutative Algebra gives a direct proof. Kaplansky's Commutative Rings includes a proof due to David Rees. Proofs Proof of the principal ideal theorem Let A be a Noetherian ring, x an element of it and 𝔭 a minimal prime over x. Replacing A by the localization A𝔭, we can assume A is local with the maximal ideal 𝔭. Let 𝔮 ⊊ 𝔭 be a strictly smaller prime ideal and let 𝔮^(n) = 𝔮^n A𝔮 ∩ A, which is a 𝔮-primary ideal called the n-th symbolic power of 𝔮. It forms a descending chain of ideals A ⊃ 𝔮^(1) ⊃ 𝔮^(2) ⊃ 𝔮^(3) ⊃ ⋯. Thus, there is the descending chain of ideals (𝔮^(n) + (x))/(x) in the ring Ā = A/(x). Now, the radical √(x) is the intersection of all minimal prime ideals containing x; 𝔭 is among them. But 𝔭 is the unique maximal ideal and thus √(x) = 𝔭. Since (x) contains some power of its radical, it follows that Ā is an Artinian ring and thus the chain stabilizes, and so there is some n such that 𝔮^(n) + (x) = 𝔮^(n+1) + (x). It implies: 𝔮^(n) = 𝔮^(n+1) + x𝔮^(n), from the fact 𝔮^(n) is 𝔮-primary (if y is in 𝔮^(n), then y = z + ax with z in 𝔮^(n+1) and a in A. Since 𝔭 is minimal over x, x is not in 𝔮 and so ax in 𝔮^(n) implies a is in 𝔮^(n).) Now, quotienting out both sides by 𝔮^(n+1) yields 𝔮^(n)/𝔮^(n+1) = x𝔮^(n)/𝔮^(n+1). Then, by Nakayama's lemma (which says a finitely generated module M is zero if M = IM for some ideal I contained in the radical), we get 𝔮^(n)/𝔮^(n+1) = 0; i.e., 𝔮^(n) = 𝔮^(n+1) and thus 𝔮^n A𝔮 = 𝔮^(n+1) A𝔮. Using Nakayama's lemma again, 𝔮^n A𝔮 = 0 and A𝔮 is an Artinian ring; thus, the height of 𝔮 is zero. Proof of the height theorem Krull's height theorem can be proved as a consequence of the principal ideal theorem by induction on the number of elements. Let x1, …, xn be elements in A, 𝔭 a minimal prime over (x1, …, xn) and 𝔮 ⊊ 𝔭 a prime ideal such that there is no prime strictly between them. Replacing A by the localization A𝔭 we can assume (A, 𝔭) is a local ring; note we then have 𝔭 = √(x1, …, xn). By minimality, 𝔮 cannot contain all the xi; relabeling the subscripts, say, x1 is not in 𝔮. Since every prime ideal containing (𝔮, x1) is between 𝔮 and 𝔭, √(𝔮, x1) = 𝔭 and thus we can write for each i ≥ 2, xi^(ri) = yi + ai x1 with yi in 𝔮 and ai in A. Now we consider the ring Ā = A/(y2, …, yn) and the corresponding chain of the images of 𝔮 and 𝔭 in it. If 𝔯 is a prime of A whose image in Ā is a minimal prime over the image of x1, then 𝔯 contains x1, x2^(r2), …, xn^(rn) and thus 𝔯 = 𝔭; that is to say, the image of 𝔭 is a minimal prime over the image of x1 and so, by Krull's principal ideal the
https://en.wikipedia.org/wiki/Noetherian%20module
In abstract algebra, a Noetherian module is a module that satisfies the ascending chain condition on its submodules, where the submodules are partially ordered by inclusion. Historically, Hilbert was the first mathematician to work with the properties of finitely generated submodules. He proved an important theorem known as Hilbert's basis theorem which says that any ideal in the multivariate polynomial ring of an arbitrary field is finitely generated. However, the property is named after Emmy Noether who was the first one to discover the true importance of the property. Characterizations and properties In the presence of the axiom of choice, two other characterizations are possible: Any nonempty set S of submodules of the module has a maximal element (with respect to set inclusion). This is known as the maximum condition. All of the submodules of the module are finitely generated. If M is a module and K a submodule, then M is Noetherian if and only if K and M/K are Noetherian. This is in contrast to the general situation with finitely generated modules: a submodule of a finitely generated module need not be finitely generated. Examples The integers, considered as a module over the ring of integers, form a Noetherian module. If R = Mn(F) is the full matrix ring over a field, and M = Mn×1(F) is the set of column vectors over F, then M can be made into a module using matrix multiplication by elements of R on the left of elements of M. This is a Noetherian module. Any module that is finite as a set is Noetherian. Any finitely generated right module over a right Noetherian ring is a Noetherian module. Use in other structures A right Noetherian ring R is, by definition, a Noetherian right R-module over itself using multiplication on the right. Likewise a ring is called left Noetherian ring when R is Noetherian considered as a left R-module. When R is a commutative ring the left-right adjectives may be dropped as they are unnecessary. Also, if R is Noetherian on both sides, it is customary to call it Noetherian and not "left and right Noetherian". The Noetherian condition can also be defined on bimodule structures as well: a Noetherian bimodule is a bimodule whose poset of sub-bimodules satisfies the ascending chain condition. Since a sub-bimodule of an R-S bimodule M is in particular a left R-module, if M considered as a left R-module were Noetherian, then M is automatically a Noetherian bimodule. It may happen, however, that a bimodule is Noetherian without its left or right structures being Noetherian. See also Artinian module Ascending/descending chain condition Composition series Finitely generated module Krull dimension References Eisenbud, Commutative Algebra with a View Toward Algebraic Geometry, Springer-Verlag, 1995. Module theory Commutative algebra
https://en.wikipedia.org/wiki/Information%20content
In information theory, the information content, self-information, surprisal, or Shannon information is a basic quantity derived from the probability of a particular event occurring from a random variable. It can be thought of as an alternative way of expressing probability, much like odds or log-odds, but which has particular mathematical advantages in the setting of information theory. The Shannon information can be interpreted as quantifying the level of "surprise" of a particular outcome. As it is such a basic quantity, it also appears in several other settings, such as the length of a message needed to transmit the event given an optimal source coding of the random variable. The Shannon information is closely related to entropy, which is the expected value of the self-information of a random variable, quantifying how surprising the random variable is "on average". This is the average amount of self-information an observer would expect to gain about a random variable when measuring it. The information content can be expressed in various units of information, of which the most common is the "bit" (more correctly called the shannon), as explained below. Definition Claude Shannon's definition of self-information was chosen to meet several axioms: An event with probability 100% is perfectly unsurprising and yields no information. The less probable an event is, the more surprising it is and the more information it yields. If two independent events are measured separately, the total amount of information is the sum of the self-informations of the individual events. The detailed derivation is below, but it can be shown that there is a unique function of probability that meets these three axioms, up to a multiplicative scaling factor. Broadly, given a real number b > 1 and an event x with probability P, the information content is defined as I(x) = −logb(P). The base b corresponds to the scaling factor above. Different choices of b correspond to different units of information: when b = 2, the unit is the shannon (symbol Sh), often called a 'bit'; when b = e, the unit is the natural unit of information (symbol nat); and when b = 10, the unit is the hartley (symbol Hart). Formally, given a random variable X with probability mass function pX(x), the self-information of measuring X as outcome x is defined as IX(x) = −log[pX(x)]. The use of the notation IX(x) for self-information above is not universal. Since the notation I(X;Y) is also often used for the related quantity of mutual information, many authors use a lowercase hX(x) for self-entropy instead, mirroring the use of the capital H(X) for the entropy. Properties Monotonically decreasing function of probability For a given probability space, the measurement of rarer events is intuitively more "surprising", and yields more information content, than that of more common values. Thus, self-information is a strictly decreasing monotonic function of the probability, sometimes called an "antitonic" function. While standard probabilities are represented by real numbers in the
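A minimal sketch of the definition above, computing I(x) = −logb(P) in the three units just listed; the function name is ours.

```python
# Sketch: self-information of an event with probability p, in a chosen base.
from math import log

def self_information(p, base=2):
    """I(x) = -log_b(p): bits (base 2), nats (base e), or hartleys (base 10)."""
    if not 0 < p <= 1:
        raise ValueError("probability must be in (0, 1]")
    return -log(p, base)

print(self_information(1.0))    # 0.0 -- a certain event carries no surprise
print(self_information(0.5))    # 1.0 bit, e.g. one fair coin flip
print(self_information(0.25))   # 2.0 bits; and independent events add:
print(self_information(0.5) + self_information(0.5))  # also 2.0
```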
https://en.wikipedia.org/wiki/Locus%20%28mathematics%29
In geometry, a locus (plural: loci) (Latin word for "place", "location") is a set of all points (commonly, a line, a line segment, a curve or a surface), whose location satisfies or is determined by one or more specified conditions. The set of the points that satisfy some property is often called the locus of a point satisfying this property. The use of the singular in this formulation reflects the fact that, until the end of the 19th century, mathematicians did not consider infinite sets. Instead of viewing lines and curves as sets of points, they viewed them as places where a point may be located or may move. History and philosophy Until the beginning of the 20th century, a geometrical shape (for example a curve) was not considered as an infinite set of points; rather, it was considered as an entity on which a point may be located or on which it moves. Thus a circle in the Euclidean plane was defined as the locus of a point that is at a given distance from a fixed point, the center of the circle. In modern mathematics, similar concepts are more frequently reformulated by describing shapes as sets; for instance, one says that the circle is the set of points that are at a given distance from the center. In contrast to the set-theoretic view, the old formulation avoids considering infinite collections, as avoiding the actual infinite was an important philosophical position of earlier mathematicians. Once set theory became the universal basis over which the whole of mathematics is built, the term locus became rather old-fashioned. Nevertheless, the word is still widely used, mainly for a concise formulation, for example: Critical locus, the set of the critical points of a differentiable function. Zero locus or vanishing locus, the set of points where a function vanishes, in that it takes the value zero. Singular locus, the set of the singular points of an algebraic variety. Connectedness locus, the subset of the parameter set of a family of rational functions for which the Julia set of the function is connected. More recently, techniques such as the theory of schemes, and the use of category theory instead of set theory to give a foundation to mathematics, have returned to notions more like the original definition of a locus as an object in itself rather than as a set of points. Examples in plane geometry Examples from plane geometry include: The set of points equidistant from two points is the perpendicular bisector of the line segment connecting the two points. The set of points equidistant from two intersecting lines is the union of their two angle bisectors. All conic sections are loci: Circle: the set of points at constant distance (the radius) from a fixed point (the center). Parabola: the set of points equidistant from a fixed point (the focus) and a line (the directrix). Hyperbola: the set of points for each of which the absolute value of the difference between the distances to two given foci is a constant. Ellipse: the set of point
https://en.wikipedia.org/wiki/Flat%20morphism
In mathematics, in particular in the theory of schemes in algebraic geometry, a flat morphism f from a scheme X to a scheme Y is a morphism such that the induced map on every stalk is a flat map of rings, i.e., fP: OY,f(P) → OX,P is a flat map for all P in X. A map of rings A → B is called flat if it is a homomorphism that makes B a flat A-module. A morphism of schemes is called faithfully flat if it is both surjective and flat. Two basic intuitions regarding flat morphisms are: flatness is a generic property; and the failure of flatness occurs on the jumping set of the morphism. The first of these comes from commutative algebra: subject to some finiteness conditions on f, it can be shown that there is a non-empty open subscheme Y′ of Y, such that f restricted to Y′ is a flat morphism (generic flatness). Here 'restriction' is interpreted by means of the fiber product of schemes, applied to f and the inclusion map of Y′ into Y. For the second, the idea is that morphisms in algebraic geometry can exhibit discontinuities of a kind that are detected by flatness. For instance, the operation of blowing down in the birational geometry of an algebraic surface can give a single fiber that is of dimension 1 when all the others have dimension 0. It turns out (retrospectively) that flatness in morphisms is directly related to controlling this sort of semicontinuity, or one-sided jumping. Flat morphisms are used to define (more than one version of) the flat topos, and flat cohomology of sheaves from it. This is a deep-lying theory, and has not been found easy to handle. The concept of étale morphism (and so étale cohomology) depends on the flat morphism concept: an étale morphism being flat, of finite type, and unramified. Examples/non-examples Consider the affine scheme induced from the obvious morphism of algebras. Since proving flatness for this morphism amounts to computing the relevant Tor group, we resolve the complex numbers and tensor by the module representing our scheme, giving a sequence of modules. Because the element in question is not a zero divisor, we have a trivial kernel, hence the homology group vanishes. Miracle flatness Other examples of flat morphisms can be found using "miracle flatness", which states that if you have a morphism from a Cohen–Macaulay scheme to a regular scheme with equidimensional fibers, then it is flat. Easy examples of this are elliptic fibrations, smooth morphisms, and morphisms to stratified varieties which satisfy miracle flatness on each of the strata. Hilbert schemes The universal examples of flat morphisms of schemes are given by Hilbert schemes. This is because Hilbert schemes parameterize universal classes of flat morphisms, and every flat morphism is the pullback from some Hilbert scheme. I.e., if a morphism is flat, there exists a commutative diagram through the Hilbert scheme of all flat morphisms to the base. Since the morphism is flat, the fibers all have the same Hilbert polynomial, hence we could have similarly written the Hilbert scheme above with that polynomial fixed. Non-examples Blowup One class of
https://en.wikipedia.org/wiki/Flat%20map
In differential geometry, the flat map is the mapping that converts a tangent vector into the corresponding 1-form, given a non-degenerate (0,2)-tensor (such as a Riemannian metric). See also Flat morphism Sharp map, the mapping that converts 1-forms into corresponding vectors bind, another name for flatMap in functional programming Differential geometry
https://en.wikipedia.org/wiki/Coaxial
In geometry, coaxial means that several three-dimensional linear or planar forms share a common axis. The two-dimensional analog is concentric. Common examples: A coaxial cable is a three-dimensional linear structure. It has a wire conductor in the centre (D), a circumferential outer conductor (B), and an insulating medium called the dielectric (C) separating these two conductors. The outer conductor is usually sheathed in a protective PVC outer jacket (A). All these have a common axis. The dimension and material of the conductors and insulation determine the cable's characteristic impedance and attenuation at various frequencies. Coaxial rotors are a three-dimensional planar structure: a pair of helicopter rotors (wings) mounted one above the other on concentric shafts, with the same axis of rotation (but turning in opposite directions). In loudspeaker design, coaxial speakers are a loudspeaker system in which the individual drivers are mounted close to one another on the same axis, and thus radiate sound along the same axis and roughly from the same point. A coaxial weapon mount places two weapons on roughly the same axis – as the weapons are usually side-by-side or one on top of the other, and thus oriented in parallel directions – they are technically par-axial rather than coaxial, however the distances involved mean that they are effectively coaxial as far as the operator is concerned. See also Orthogonal Perpendicular References External links Geometry
https://en.wikipedia.org/wiki/Logistic%20distribution
In probability theory and statistics, the logistic distribution is a continuous probability distribution. Its cumulative distribution function is the logistic function, which appears in logistic regression and feedforward neural networks. It resembles the normal distribution in shape but has heavier tails (higher kurtosis). The logistic distribution is a special case of the Tukey lambda distribution. Specification Probability density function When the location parameter μ is 0 and the scale parameter s is 1, then the probability density function of the logistic distribution is given by f(x) = e^(−x) / (1 + e^(−x))². Thus in general the density is: f(x; μ, s) = e^(−(x−μ)/s) / (s(1 + e^(−(x−μ)/s))²). Because this function can be expressed in terms of the square of the hyperbolic secant function "sech", it is sometimes referred to as the sech-square(d) distribution. (See also: hyperbolic secant distribution). Cumulative distribution function The logistic distribution receives its name from its cumulative distribution function, which is an instance of the family of logistic functions. The cumulative distribution function of the logistic distribution is also a scaled version of the hyperbolic tangent: F(x; μ, s) = 1 / (1 + e^(−(x−μ)/s)) = 1/2 + (1/2) tanh((x − μ)/(2s)). In this equation μ is the mean, and s is a scale parameter proportional to the standard deviation. Quantile function The inverse cumulative distribution function (quantile function) of the logistic distribution is a generalization of the logit function. Its derivative is called the quantile density function. They are defined as follows: Q(p; μ, s) = μ + s ln(p/(1 − p)) and Q′(p; s) = s/(p(1 − p)). Alternative parameterization An alternative parameterization of the logistic distribution can be derived by expressing the scale parameter, s, in terms of the standard deviation, σ, using the substitution s = qσ, where q = √3/π. The alternative forms of the above functions are reasonably straightforward. Applications The logistic distribution—and the S-shaped pattern of its cumulative distribution function (the logistic function) and quantile function (the logit function)—have been extensively used in many different areas. Logistic regression One of the most common applications is in logistic regression, which is used for modeling categorical dependent variables (e.g., yes-no choices or a choice of 3 or 4 possibilities), much as standard linear regression is used for modeling continuous variables (e.g., income or population). Specifically, logistic regression models can be phrased as latent variable models with error variables following a logistic distribution. This phrasing is common in the theory of discrete choice models, where the logistic distribution plays the same role in logistic regression as the normal distribution does in probit regression. Indeed, the logistic and normal distributions have a quite similar shape. However, the logistic distribution has heavier tails, which often increases the robustness of analyses based on it compared with using the normal distribution. Physics The PDF of this distribution has the same functional form as the derivative of the Fermi function. In the t
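A sketch of the formulas restored above (density, CDF in its tanh form, and quantile), with the σ-to-s substitution from the alternative parameterization; the function names are ours.

```python
# Sketch: logistic distribution with location mu and scale s.
from math import exp, log, tanh, pi, sqrt

def logistic_pdf(x, mu=0.0, s=1.0):
    z = exp(-(x - mu) / s)
    return z / (s * (1.0 + z) ** 2)

def logistic_cdf(x, mu=0.0, s=1.0):
    # Equal to 1 / (1 + exp(-(x - mu)/s)).
    return 0.5 + 0.5 * tanh((x - mu) / (2.0 * s))

def logistic_quantile(p, mu=0.0, s=1.0):
    return mu + s * log(p / (1.0 - p))   # a scaled and shifted logit

# Scale giving standard deviation 1: s = sqrt(3)/pi, i.e. q = sqrt(3)/pi.
s = sqrt(3) / pi
print(logistic_cdf(0.0), logistic_quantile(0.5))   # 0.5 and 0.0
print(logistic_pdf(0.0, s=s))                      # peak density for sigma = 1
```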
https://en.wikipedia.org/wiki/Florian%20Cajori
Florian Cajori (February 28, 1859 – August 14 or 15, 1930) was a Swiss-American historian of mathematics. Biography Florian Cajori was born in Zillis, Switzerland, as the son of Georg Cajori and Catherine Camenisch. He attended schools first in Zillis and later in Chur. In 1875, Florian Cajori emigrated to the United States at the age of sixteen, and attended the State Normal school in Whitewater, Wisconsin. After graduating in 1878, he taught in a country school, and then later began studying mathematics at the University of Wisconsin–Madison. In 1883, Cajori received both his bachelor's and master's degrees from the University of Wisconsin–Madison, having briefly attended Johns Hopkins University for eight months in between the two degrees. He taught for a few years at Tulane University, before being appointed professor of applied mathematics there in 1887. He was then driven north by tuberculosis. He founded the Colorado College Scientific Society and taught at Colorado College, where he held the chair in physics from 1889 to 1898 and the chair in mathematics from 1898 to 1918, and also served as dean of the engineering department. While at Colorado, he received his doctorate from Tulane in 1894; he married Elizabeth G. Edwards in 1890, and they had one son. Cajori's A History of Mathematics (1894) was the first popular presentation of the history of mathematics in the United States. Based upon his reputation in the history of mathematics (even today his 1928–1929 History of Mathematical Notations has been described as "unsurpassed") he was appointed in 1918 to the first history of mathematics chair in the U.S., created especially for him, at the University of California, Berkeley. He remained in Berkeley, California until his death in 1930. Cajori did no original mathematical research unrelated to the history of mathematics. In addition to his numerous books, he also contributed highly recognized and popular historical articles to the American Mathematical Monthly. His last work was a revision of Andrew Motte's 1729 translation of Newton's Principia, vol.1 The Motion of Bodies, but he died before it was completed. The work was finished by R.T. Crawford of Berkeley, California. Societies and honors 1917–1918, Mathematical Association of America president 1923, American Association for the Advancement of Science vice-president 1924, invited speaker at the International Congress of Mathematicians in Toronto 1924–1925, History of Science Society vice-president 1929–1930, Comité International d'Histoire des Sciences vice-president The Cajori crater on the Moon was named in his honour Publications Books 1890: The Teaching and History of Mathematics in the United States U.S. Government Printing Office. 1893: A History of Mathematics, Macmillan & Company. 1898: A History of Elementary Mathematics, Macmillan. 1899: A History of Physics in its Elementary Branches: Including the Evolution of Physical Laboratories, The Macmillan Company, 1899. A
https://en.wikipedia.org/wiki/Happy%20number
In number theory, a happy number is a number which eventually reaches 1 when replaced by the sum of the square of each digit. For instance, 13 is a happy number because 1² + 3² = 10, and 1² + 0² = 1. On the other hand, 4 is not a happy number because the sequence starting with 4² = 16 eventually reaches 4 again (16, 37, 58, 89, 145, 42, 20, 4), the number that started the sequence, and so the process continues in an infinite cycle without ever reaching 1. A number which is not happy is called sad or unhappy. More generally, a b-happy number is a natural number in a given number base b that eventually reaches 1 when iterated over the perfect digital invariant function for p = 2. The origin of happy numbers is not clear. Happy numbers were brought to the attention of Reg Allenby (a British author and senior lecturer in pure mathematics at Leeds University) by his daughter, who had learned of them at school. However, they "may have originated in Russia". Happy numbers and perfect digital invariants Formally, let n be a natural number. Given the perfect digital invariant function F2,b(n), the sum of the squares of the base-b digits of n, a number n is b-happy if there exists a j such that Fj2,b(n) = 1, where Fj2,b represents the j-th iteration of F2,b, and b-unhappy otherwise. If a number is a nontrivial perfect digital invariant of F2,b, then it is b-unhappy. For example, 19 is 10-happy, as 1² + 9² = 82, 8² + 2² = 68, 6² + 8² = 100, 1² + 0² + 0² = 1. For example, 347 is 6-happy, as 347 = 1335 in base 6, and 1² + 3² + 3² + 5² = 44 = 112 in base 6, 1² + 1² + 2² = 6 = 10 in base 6, 1² + 0² = 1. There are infinitely many b-happy numbers, as 1 is a b-happy number, and for every n, the number b^n (written as 1 followed by n zeros in base b) is b-happy, since its digit sum is 1. The happiness of a number is preserved by removing or inserting zeroes at will, since they do not contribute to the cross sum. Natural density of b-happy numbers By inspection of the first million or so 10-happy numbers, it appears that they have a natural density of around 0.15. Perhaps surprisingly, then, the 10-happy numbers do not have an asymptotic density. The upper density of the happy numbers is greater than 0.18577, and the lower density is less than 0.1138. Happy bases A happy base is a number base b where every number is b-happy. The only happy integer bases less than 5 × 10⁸ are base 2 and base 4. Specific b-happy numbers 4-happy numbers For b = 4, the only positive perfect digital invariant for F2,4 is the trivial perfect digital invariant 1, and there are no other cycles. Because all numbers are preperiodic points for F2,4, all numbers lead to 1 and are happy. As a result, base 4 is a happy base. 6-happy numbers For b = 6, the only positive perfect digital invariant for F2,6 is the trivial perfect digital invariant 1, and the only cycle is the eight-number cycle 5 → 41 → 25 → 45 → 105 → 42 → 32 → 21 → 5 → ... and because all numbers are preperiodic points for F2,6, all numbers either lead to 1 and are happy, or lead to the cycle and are unhappy. Because base 6 has no other perfect digital invariants except for 1, no positive integer other than 1 is the sum of the squares of its own digits. In base 10, the 74 6-happy numbers up to 1296 = 6⁴ are (written in base 10): 1, 6, 36, 44, 49, 79, 100, 160, 170, 216, 224, 229, 254, 264, 275, 285, 28
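A sketch of the iteration just described: it repeats the sum-of-squared-digits map and detects the known unhappy cycle by remembering visited values; the function names are ours, and the base parameter follows the b-happy generalization above.

```python
# Sketch: test b-happiness by iterating the sum of squared base-b digits.
def digit_square_sum(n, base=10):
    s = 0
    while n:
        n, d = divmod(n, base)
        s += d * d
    return s

def is_happy(n, base=10):
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)          # any unhappy number eventually revisits a value
        n = digit_square_sum(n, base)
    return n == 1

print([n for n in range(1, 50) if is_happy(n)])
# [1, 7, 10, 13, 19, 23, 28, 31, 32, 44, 49]
```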
https://en.wikipedia.org/wiki/Heinz%20Hopf
Heinz Hopf (19 November 1894 – 3 June 1971) was a German mathematician who worked in the fields of topology and geometry. Early life and education Hopf was born in Gräbschen, German Empire (now Grabiszyn, part of Wrocław, Poland), the son of Elizabeth (née Kirchner) and Wilhelm Hopf. His father was born Jewish and converted to Protestantism a year after Heinz was born; his mother was from a Protestant family. Hopf attended Karl Mittelhaus higher boys' school from 1901 to 1904, and then entered the König-Wilhelm-Gymnasium in Breslau. He showed mathematical talent from an early age. In 1913 he entered the Silesian Friedrich Wilhelm University where he attended lectures by Ernst Steinitz, Adolf Kneser, Max Dehn, Erhard Schmidt, and Rudolf Sturm. When World War I broke out in 1914, Hopf eagerly enlisted. He was wounded twice and received the Iron Cross (first class) in 1918. After the war Hopf continued his mathematical education in Heidelberg (winter 1919/20 and summer 1920) and Berlin (starting in winter 1920/21). He studied under Ludwig Bieberbach and received his doctorate in 1925. Career In his dissertation, Connections between topology and metric of manifolds (German: Über Zusammenhänge zwischen Topologie und Metrik von Mannigfaltigkeiten), he proved that any simply connected complete Riemannian 3-manifold of constant sectional curvature is globally isometric to Euclidean, spherical, or hyperbolic space. He also studied the indices of zeros of vector fields on hypersurfaces, and connected their sum to curvature. Some six months later he gave a new proof that the sum of the indices of the zeros of a vector field on a manifold is independent of the choice of vector field and equal to the Euler characteristic of the manifold. This theorem is now called the Poincaré–Hopf theorem. Hopf spent the year after his doctorate at the University of Göttingen, where David Hilbert, Richard Courant, Carl Runge, and Emmy Noether were working. While there he met Pavel Alexandrov and began a lifelong friendship. In 1926 Hopf moved back to Berlin, where he gave a course in combinatorial topology. He spent the academic year 1927/28 at Princeton University on a Rockefeller fellowship with Alexandrov. Solomon Lefschetz, Oswald Veblen and J. W. Alexander were all at Princeton at the time. At this time Hopf discovered the Hopf invariant of maps and proved that the Hopf fibration has invariant 1. In the summer of 1928 Hopf returned to Berlin and began working with Pavel Alexandrov, at the suggestion of Courant, on a book on topology. Three volumes were planned, but only one was finished. It was published in 1935. In 1929, he declined a job offer from Princeton University. In 1931 Hopf took Hermann Weyl's position at ETH, in Zürich. Hopf received another invitation to Princeton in 1940, but he declined it. Two years later, however, he was forced to file for Swiss citizenship after his property was confiscated by the Nazis, his father's conversion to Christianity having failed
https://en.wikipedia.org/wiki/Rankit
In statistics, rankits of a set of data are the expected values of the order statistics of a sample from the standard normal distribution the same size as the data. They are primarily used in the normal probability plot, a graphical technique for normality testing. Example This is perhaps most readily understood by means of an example. If an i.i.d. sample of six items is taken from a normally distributed population with expected value 0 and variance 1 (the standard normal distribution) and then sorted into increasing order, the expected values of the resulting order statistics are: −1.2672,   −0.6418,   −0.2016,   0.2016,   0.6418,   1.2672. Suppose the numbers in a data set are 65, 75, 16, 22, 43, 40. Then one may sort these and line them up with the corresponding rankits; in order they are 16, 22, 40, 43, 65, 75, which yields the points: (−1.2672, 16), (−0.6418, 22), (−0.2016, 40), (0.2016, 43), (0.6418, 65), (1.2672, 75). These points are then plotted as the vertical and horizontal coordinates of a scatter plot. Alternative method Alternatively, rather than sort the data points, one may rank them, and rearrange the rankits accordingly. This yields the same pairs of numbers, but in a different order. For: 65, 75, 16, 22, 43, 40, the corresponding ranks are: 5, 6, 1, 2, 4, 3, i.e., the number appearing first is the 5th-smallest, the number appearing second is 6th-smallest, the number appearing third is smallest, the number appearing fourth is 2nd-smallest, etc. One rearranges the expected normal order statistics accordingly, getting the rankits of this data set: 0.6418, 1.2672, −1.2672, −0.6418, 0.2016, −0.2016. Rankit plot A graph plotting the rankits on the horizontal axis and the data points on the vertical axis is called a rankit plot or a normal probability plot. Such a plot is necessarily nondecreasing. In large samples from a normally distributed population, such a plot will approximate a straight line. Substantial deviations from straightness are considered evidence against normality of the distribution. Rankit plots are usually used to visually demonstrate whether data are from a specified probability distribution. A rankit plot is a kind of Q–Q plot – it plots the order statistics (quantiles) of the sample against certain quantiles (the rankits) of the assumed normal distribution. Q–Q plots may use other quantiles for the normal distribution, however. History The rankit plot and the word rankit was introduced by the biologist and statistician Chester Ittner Bliss (1899–1979). See also Probit analysis developed by C. I. Bliss in 1934. External links Engineering Statistics Handbook Statistical charts and diagrams Normal distribution
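A sketch estimating rankits by Monte Carlo rather than exact integration: averaging many sorted standard-normal samples approximates the expected order statistics quoted above for n = 6. The function name and the trial count are our own choices.

```python
# Sketch: Monte Carlo estimate of the rankits of a sample of size n.
import random

def rankits(n, trials=200_000, seed=0):
    rng = random.Random(seed)
    sums = [0.0] * n
    for _ in range(trials):
        sample = sorted(rng.gauss(0.0, 1.0) for _ in range(n))
        for i, v in enumerate(sample):
            sums[i] += v
    return [s / trials for s in sums]

print([round(r, 3) for r in rankits(6)])
# approximately [-1.267, -0.642, -0.202, 0.202, 0.642, 1.267]
```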
https://en.wikipedia.org/wiki/Normal%20probability%20plot
The normal probability plot is a graphical technique to identify substantive departures from normality. This includes identifying outliers, skewness, kurtosis, a need for transformations, and mixtures. Normal probability plots are made of raw data, residuals from model fits, and estimated parameters. In a normal probability plot (also called a "normal plot"), the sorted data are plotted vs. values selected to make the resulting image look close to a straight line if the data are approximately normally distributed. Deviations from a straight line suggest departures from normality. The plotting can be manually performed by using a special graph paper, called normal probability paper. With modern computers normal plots are commonly made with software. The normal probability plot is a special case of the Q–Q probability plot for a normal distribution. The theoretical quantiles are generally chosen to approximate either the mean or the median of the corresponding order statistics. Definition The normal probability plot is formed by plotting the sorted data vs. an approximation to the means or medians of the corresponding order statistics; see rankit. Some plot the data on the vertical axis; others plot the data on the horizontal axis. Different sources use slightly different approximations for rankits. The formula used by the "qqnorm" function in the basic "stats" package in R (programming language) is as follows: the i-th theoretical quantile is Φ⁻¹((i − a)/(n + 1 − 2a)) for i = 1, …, n, where a = 3/8 if n ≤ 10 and 0.5 for n > 10, and Φ⁻¹ is the standard normal quantile function. If the data are consistent with a sample from a normal distribution, the points should lie close to a straight line. As a reference, a straight line can be fit to the points. The further the points vary from this line, the greater the indication of departure from normality. If the sample has mean 0, standard deviation 1 then a line through 0 with slope 1 could be used. With more points, random deviations from a line will be less pronounced. Normal plots are often used with as few as 7 points, e.g., with plotting the effects in a saturated model from a 2-level fractional factorial experiment. With fewer points, it becomes harder to distinguish between random variability and a substantive deviation from normality. Other distributions Probability plots for distributions other than the normal are computed in exactly the same way. The normal quantile function is simply replaced by the quantile function of the desired distribution. In this way, a probability plot can easily be generated for any distribution for which one has the quantile function. With a location-scale family of distributions, the location and scale parameters of the distribution can be estimated from the intercept and the slope of the line. For other distributions the parameters must first be estimated before a probability plot can be made. Plot types This is a sample of size 50 from a normal distribution, plotted as both a histogram, and a normal probabil
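A sketch of the approximation restored above (the convention used by R's qqnorm), producing the (theoretical quantile, sorted observation) pairs of a normal probability plot; the helper names are ours.

```python
# Sketch: plotting positions p_i = (i - a)/(n + 1 - 2a) and the resulting
# normal probability plot coordinates for a small data set.
from statistics import NormalDist

def plotting_positions(n):
    a = 3.0 / 8.0 if n <= 10 else 0.5
    return [(i - a) / (n + 1 - 2 * a) for i in range(1, n + 1)]

def normal_plot_points(data):
    """Pairs (theoretical quantile, sorted observation)."""
    q = [NormalDist().inv_cdf(p) for p in plotting_positions(len(data))]
    return list(zip(q, sorted(data)))

for point in normal_plot_points([65, 75, 16, 22, 43, 40]):
    print(point)   # near-linear pattern suggests approximate normality
```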
https://en.wikipedia.org/wiki/Eugenio%20Beltrami
Eugenio Beltrami (16 November 1835 – 18 February 1900) was an Italian mathematician notable for his work concerning differential geometry and mathematical physics. His work was noted especially for clarity of exposition. He was the first to prove consistency of non-Euclidean geometry by modeling it on a surface of constant curvature, the pseudosphere, and in the interior of an n-dimensional unit sphere, the so-called Beltrami–Klein model. He also developed singular value decomposition for matrices, which has been subsequently rediscovered several times. Beltrami's use of differential calculus for problems of mathematical physics indirectly influenced development of tensor calculus by Gregorio Ricci-Curbastro and Tullio Levi-Civita. Life Beltrami was born in 1835 in Cremona (Lombardy), then a part of the Austrian Empire, and now part of Italy. Both of his parents were artists: Giovanni Beltrami and the Venetian Elisa Barozzi. He began studying mathematics at the University of Pavia in 1853, but was expelled from Ghislieri College in 1856 due to his political opinions—he was sympathetic with the Risorgimento. During this time he was taught and influenced by Francesco Brioschi. He had to discontinue his studies because of financial hardship and spent the next several years as a secretary working for the Lombardy–Venice railroad company. He was appointed to the University of Bologna as a professor in 1862, the year he published his first research paper. Throughout his life, Beltrami had various professorial jobs at the universities of Pisa, Rome and Pavia. From 1891 until the end of his life, Beltrami lived in Rome. He became the president of the Accademia dei Lincei in 1898 and a senator of the Kingdom of Italy in 1899. Contributions to non-Euclidean geometry In 1868 Beltrami published two memoirs (written in Italian; French translations by J. Hoüel appeared in 1869) dealing with consistency and interpretations of non-Euclidean geometry of János Bolyai and Nikolai Lobachevsky. In his "Essay on an interpretation of non-Euclidean geometry", Beltrami proposed that this geometry could be realized on a surface of constant negative curvature, a pseudosphere. In Beltrami's conception, lines of the geometry are represented by geodesics on the pseudosphere, and theorems of non-Euclidean geometry can be proved within ordinary three-dimensional Euclidean space, and not derived in an axiomatic fashion, as Lobachevsky and Bolyai had done previously. In 1840, Ferdinand Minding had already considered geodesic triangles on the pseudosphere and remarked that the corresponding "trigonometric formulas" are obtained from the corresponding formulas of spherical trigonometry by replacing the usual trigonometric functions with hyperbolic functions; this was further developed by Delfino Codazzi in 1857, but apparently neither of them noticed the association with Lobachevsky's work. In this way, Beltrami attempted to demonstrate that two-dimensional non-Euclidean geometry is as valid a
https://en.wikipedia.org/wiki/Spectral%20space
In mathematics, a spectral space is a topological space that is homeomorphic to the spectrum of a commutative ring. It is sometimes also called a coherent space because of the connection to coherent topoi. Definition Let X be a topological space and let K(X) be the set of all compact open subsets of X. Then X is said to be spectral if it satisfies all of the following conditions: X is compact and T0. K(X) is a basis of open subsets of X. K(X) is closed under finite intersections. X is sober, i.e., every nonempty irreducible closed subset of X has a (necessarily unique) generic point. Equivalent descriptions Let X be a topological space. Each of the following properties is equivalent to the property of X being spectral: X is homeomorphic to a projective limit of finite T0-spaces. X is homeomorphic to the spectrum of a bounded distributive lattice L. In this case, L is isomorphic (as a bounded lattice) to the lattice K(X) (this is called Stone representation of distributive lattices). X is homeomorphic to the spectrum of a commutative ring. X is the topological space determined by a Priestley space. X is a T0 space whose frame of open sets is coherent (and every coherent frame comes from a unique spectral space in this way). Properties Let X be a spectral space and let K(X) be as before. Then: K(X) is a bounded sublattice of subsets of X. Every closed subspace of X is spectral. An arbitrary intersection of compact and open subsets of X (hence of elements from K(X)) is again spectral. X is T0 by definition, but in general not T1. In fact a spectral space is T1 if and only if it is Hausdorff (or T2) if and only if it is a Boolean space if and only if K(X) is a Boolean algebra. X can be seen as a pairwise Stone space. Spectral maps A spectral map f: X → Y between spectral spaces X and Y is a continuous map such that the preimage of every open and compact subset of Y under f is again compact. The category of spectral spaces, which has spectral maps as morphisms, is dually equivalent to the category of bounded distributive lattices (together with morphisms of such lattices). In this anti-equivalence, a spectral space X corresponds to the lattice K(X). Citations References M. Hochster (1969). Prime ideal structure in commutative rings. Trans. Amer. Math. Soc., 142: 43–60. General topology Algebraic geometry Lattice theory
https://en.wikipedia.org/wiki/Carl%20Hindenburg
Carl Friedrich Hindenburg (13 July 1741 – 17 March 1808) was a German mathematician born in Dresden. His work centered mostly on combinatorics and probability. Education Hindenburg did not attend school but was educated at home by a private tutor, as arranged by his merchant father. He went to the University of Leipzig in 1757 and took courses in medicine, philosophy, Latin, Greek, physics, mathematics, and aesthetics. Hindenburg later published on philology in 1763 and 1769. Hindenburg was mentored by Christian Fürchtegott Gellert, a popular lecturer in Leipzig who introduced Hindenburg to a student named Schönborn. Schönborn's interest in mathematics influenced Hindenburg to go into the field as well. He obtained a master's degree from the University of Leipzig in 1771. Career On obtaining his master's degree at Leipzig in 1771, Hindenburg was made Privatdozent there. In 1781, Hindenburg was appointed as professor of philosophy in the University of Leipzig. He would be appointed professor of physics in 1786 after presenting a dissertation on water pumps. Hindenburg served as academic dean at the University of Leipzig, where he was also Rector in 1792. He became a member of the Berlin Academy of Sciences on 5 August 1806. Research contributions Hindenburg's first mathematical publication, Beschreibung einer ganz neuen art, nach einem bekannten Gesetze fortgehende Zahlen, durch Abzahlen oder Abmessen bequem und sicher zu finden, originated as a project to extend then-existing prime tables up to 5 million. In the book, he mechanically realizes, independently of the work done by Felkel, the sieve of Eratosthenes, which he then optimizes and organizes with explicit rules. The book also contained results in linear diophantine analysis, decimal periods, combinations, and gave combinatorial significance to the digits of numbers written in decimal notation. In 1778, he started publishing a series of works on combinatorics, particularly on probability, series and formulae for higher differentials. He worked on a generalization of the binomial theorem and was a major influence in Gudermann's work on the expansion of functions into power series. Editorial work Hindenburg co-founded the first German mathematical journals. Between 1780 and 1800, he was involved at different times with the publishing of four different journals all relating to mathematics and its applications. Two of the journals, the Leipziger Magazin für reine und angewandte Mathematik (1786–1789) and the Archiv für reine und angewandte Mathematik (1795–1799), published Johann Heinrich Lambert's Nachlass as edited by Johann Bernoulli. In 1796, he edited the Sammlung combinatorisch-analytischer Abhandlungen, which contained a claim that de Moivre's multinomial theorem was "the most important proposition in all of mathematical analysis". Students One of Hindenburg's best students, according to Donald Knuth, was Heinrich August Rothe. Another student, Johann Karl Burckhardt
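Hindenburg's prime-table project mechanized the sieve of Eratosthenes mentioned above. For comparison, here is a minimal sketch of the same sieve in modern Python; the function name and the limit are illustrative choices, not taken from Hindenburg's text:

```python
def sieve_of_eratosthenes(limit):
    """Return all primes up to `limit` by striking out multiples."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # strike out every multiple of p, starting at p*p
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    return [n for n, flag in enumerate(is_prime) if flag]

print(len(sieve_of_eratosthenes(100)))  # 25 primes up to 100
```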
https://en.wikipedia.org/wiki/Bicategory
In mathematics, a bicategory (or a weak 2-category) is a concept in category theory used to extend the notion of category to handle the cases where the composition of morphisms is not (strictly) associative, but only associative up to an isomorphism. The notion was introduced in 1967 by Jean Bénabou. Bicategories may be considered as a weakening of the definition of 2-categories. A similar process for 3-categories leads to tricategories, and more generally to weak n-categories. Definition Formally, a bicategory B consists of: objects a, b, ... called 0-cells; morphisms f, g, ... with fixed source and target objects, called 1-cells; "morphisms between morphisms" ρ, σ, ... with fixed source and target morphisms (which should themselves have the same source and the same target), called 2-cells; with some more structure: given two objects a and b there is a category B(a, b) whose objects are the 1-cells and whose morphisms are the 2-cells. The composition in this category is called vertical composition; given three objects a, b and c, there is a bifunctor ∗ : B(b, c) × B(a, b) → B(a, c) called horizontal composition. The horizontal composition is required to be associative up to a natural isomorphism α between the morphisms (h ∗ g) ∗ f and h ∗ (g ∗ f). Some more coherence axioms, similar to those needed for monoidal categories, are moreover required to hold: a monoidal category is the same as a bicategory with one 0-cell. Example: Boolean monoidal category Consider a simple monoidal category, such as the monoidal preorder Bool based on the monoid M = ({T, F}, ∧, T). As a category this is presented with two objects {T, F} and a single morphism g: F → T. We can reinterpret this monoid as a bicategory with a single object x (one 0-cell); this construction is analogous to the construction of a small category from a monoid. The objects {T, F} become morphisms, and the morphism g becomes a natural transformation (forming a functor category for the single hom-category B(x, x)). References J. Bénabou. "Introduction to bicategories, part I". In Reports of the Midwest Category Seminar, Lecture Notes in Mathematics 47, pages 1–77. Springer, 1967. External links Higher category theory
https://en.wikipedia.org/wiki/Recurrence%20plot
In descriptive statistics and chaos theory, a recurrence plot (RP) is a plot showing, for each moment i in time, the times j at which the state of a dynamical system returns to the previous state at time i, i.e., when the phase space trajectory visits roughly the same area in the phase space as at time i. In other words, it is a plot of the pairs (i, j) with x(i) ≈ x(j), showing i on a horizontal axis and j on a vertical axis, where x is the state of the system (or its phase space trajectory). Background Natural processes can have a distinct recurrent behaviour, e.g. periodicities (as seasonal or Milankovich cycles), but also irregular cyclicities (as El Niño Southern Oscillation, heart beat intervals). Moreover, the recurrence of states, in the meaning that states are again arbitrarily close after some time of divergence, is a fundamental property of deterministic dynamical systems and is typical for nonlinear or chaotic systems (cf. Poincaré recurrence theorem). The recurrence of states in nature has been known for a long time and has also been discussed in early work (e.g. Henri Poincaré 1890). Detailed description One way to visualize the recurring nature of states by their trajectory through a phase space is the recurrence plot, introduced by Eckmann et al. (1987). Often, the phase space does not have a low enough dimension (two or three) to be pictured, since higher-dimensional phase spaces can only be visualized by projection into the two or three-dimensional sub-spaces. However, making a recurrence plot enables us to investigate certain aspects of the m-dimensional phase space trajectory through a two-dimensional representation. At a recurrence the trajectory returns to a location in phase space it has visited before up to a small error (i.e., the system returns to a state that it has had before). The recurrence plot represents the collection of pairs of times of such recurrences, i.e., the set of pairs (i, j) with x(i) ≈ x(j), where i and j are discrete points of time and x(i) is the state of the system at time i (the location of the trajectory at time i). Mathematically, this can be expressed by the binary recurrence matrix R(i, j) = Θ(ε − ‖x(i) − x(j)‖), where ‖·‖ is a norm, Θ is the Heaviside step function, and ε is the recurrence threshold. The recurrence plot visualises this matrix with a coloured (mostly black) dot at coordinates (i, j) if R(i, j) = 1, with time on the i- and j-axes. If only a time series u(i) is available, the phase space can be reconstructed by using a time delay embedding (see Takens' theorem): x(i) = (u(i), u(i + τ), …, u(i + (m − 1)τ)), where u(i) is the time series, m the embedding dimension and τ the time delay. Phase space reconstruction is not an essential part of the recurrence plot (although it is often stated in the literature), because the plot is based on phase space trajectories, which could be derived from the system's variables directly (e.g., from the three variables of the Lorenz system). The visual appearance of a recurrence plot gives hints about the dynamics of the system. Caused by characteristic behaviour of the phase space trajectory, a recurrence plot contains typical small-scale structures, as single dots, diagonal lines and vertical/horizontal lines
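The recurrence matrix and the delay embedding defined above take only a few lines of NumPy. A minimal sketch follows; the logistic-map series, the threshold ε = 0.1, and the embedding parameters are illustrative choices, not prescribed by the definition:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R(i, j) = 1 if ||x(i) - x(j)|| <= eps."""
    # pairwise Euclidean distances between states (rows of x)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d <= eps).astype(np.uint8)

def delay_embed(u, m, tau):
    """Time-delay embedding: x(i) = (u(i), u(i+tau), ..., u(i+(m-1)tau))."""
    n = len(u) - (m - 1) * tau
    return np.column_stack([u[k * tau : k * tau + n] for k in range(m)])

# Example series: the chaotic logistic map u(i+1) = 4 u(i) (1 - u(i))
u = np.empty(500)
u[0] = 0.4
for i in range(499):
    u[i + 1] = 4 * u[i] * (1 - u[i])

R = recurrence_matrix(delay_embed(u, m=2, tau=1), eps=0.1)
print(R.shape, R.mean())  # matrix size and the recurrence rate
```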
https://en.wikipedia.org/wiki/Central%20Statistics%20Office%20%28Ireland%29
The Central Statistics Office (CSO; An Phríomh-Oifig Staidrimh) is the statistical agency responsible for the gathering of "information relating to economic, social and general activities and conditions" in Ireland, in particular the census which is held every five years. The office is answerable to the Taoiseach and has its main offices in Cork. The Director General of the CSO is Pádraig Dalton. History The CSO was established on a statutory basis in 1994 to reduce the number of separate offices responsible for collecting statistics for the state. The CSO had existed as an independent office within the Department of the Taoiseach from June 1949, and its work greatly increased in the following decades, particularly from 1973 with Ireland joining the European Community. Prior to the 1949 reforms, statistics were collected by the Statistics Branch of the Department of Industry and Commerce, beginning with the creation of the Irish Free State in 1922. The Statistics Branch amalgamated a number of statistics-gathering organisations that had existed in Ireland since 1841, when the first comprehensive census was undertaken by the Royal Irish Constabulary. On 15 September 2020, on the advice of the Central Statistics Office, the Government postponed the quinquennial population census, originally scheduled for 18 April 2021, until 3 April 2022 because of health and logistical obstacles caused by the COVID-19 pandemic. Head of the Office The current Director-General of the Central Statistics Office is Pádraig Dalton. Household Finance and Consumption Survey In 2013 the first ever Household Finance and Consumption Survey (HFCS) was conducted in Ireland by the Central Statistics Office on behalf of the Central Bank of Ireland as part of the European Central Bank (ECB) HFCS scheme/network. See also NUTS statistical regions of Ireland Leprechaun economics Irish modified GNI (or GNI star) Northern Ireland Statistics and Research Agency References External links Population of each Province, County and City, 2002 National Statistics Board, Ireland Government agencies of the Republic of Ireland Ireland Department of the Taoiseach
https://en.wikipedia.org/wiki/Extrapolation
In mathematics, extrapolation is a type of estimation, beyond the original observation range, of the value of a variable on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results. Extrapolation may also mean extension of a method, assuming similar methods will be applicable. Extrapolation may also apply to human experience to project, extend, or expand known experience into an area not known or previously experienced so as to arrive at a (usually conjectural) knowledge of the unknown (e.g. a driver extrapolates road conditions beyond his sight while driving). The extrapolation method can be applied in the interior reconstruction problem. Methods A sound choice of which extrapolation method to apply relies on a priori knowledge of the process that created the existing data points. Some experts have proposed the use of causal forces in the evaluation of extrapolation methods. Crucial questions are, for example, if the data can be assumed to be continuous, smooth, possibly periodic, etc. Linear Linear extrapolation means creating a tangent line at the end of the known data and extending it beyond that limit. Linear extrapolation will only provide good results when used to extend the graph of an approximately linear function or not too far beyond the known data. If the two data points nearest the point x to be extrapolated are (x_{k-1}, y_{k-1}) and (x_k, y_k), linear extrapolation gives the function y(x) = y_{k-1} + ((x − x_{k-1}) / (x_k − x_{k-1})) · (y_k − y_{k-1}) (which is identical to linear interpolation if x_{k-1} < x < x_k). It is possible to include more than two points, averaging the slope of the linear interpolant by regression-like techniques on the data points chosen to be included. This is similar to linear prediction. Polynomial A polynomial curve can be created through the entire known data or just near the end (two points for linear extrapolation, three points for quadratic extrapolation, etc.). The resulting curve can then be extended beyond the end of the known data. Polynomial extrapolation is typically done by means of Lagrange interpolation or using Newton's method of finite differences to create a Newton series that fits the data. The resulting polynomial may be used to extrapolate the data. High-order polynomial extrapolation must be used with due care. For the example data set and problem in the figure above, anything above order 1 (linear extrapolation) will possibly yield unusable values; an error estimate of the extrapolated value will grow with the degree of the polynomial extrapolation. This is related to Runge's phenomenon. Conic A conic section can be created using five points near the end of the known data. If the conic section created is an ellipse or circle, when extrapolated it will loop back and rejoin itself. An extrapolated parabola or hyperbola will not rejoin itself, but may curve back relative to the X-axis. This
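A short sketch of the two-point linear extrapolation formula above; the data values are illustrative:

```python
def linear_extrapolate(x_prev, y_prev, x_last, y_last, x):
    """Extend the line through the last two known points to the query point x."""
    slope = (y_last - y_prev) / (x_last - x_prev)
    return y_prev + slope * (x - x_prev)

# Known data ends at (2.0, 4.1) and (3.0, 6.2); extrapolate to x = 4.5.
print(linear_extrapolate(2.0, 4.1, 3.0, 6.2, 4.5))  # 9.35
```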
https://en.wikipedia.org/wiki/126%20%28number%29
126 (one hundred [and] twenty-six) is the natural number following 125 and preceding 127. In mathematics As the binomial coefficient C(9, 4), 126 is a central binomial coefficient, and in Pascal's triangle, it is a pentatope number. 126 is a sum of two cubes, and since 125 + 1 is σ3(5), 126 is the fifth value of the sum of cubed divisors function. 126 is the fifth S-perfect Granville number, and the third such number not to be a perfect number. Also, it is known to be the smallest Granville number with three distinct prime factors, and perhaps the only such Granville number. 126 is a pentagonal pyramidal number and a decagonal number. 126 is also the number of different ways to partition a decagon into even polygons by diagonals, and the number of crossing points among the diagonals of a regular nonagon. There are exactly 126 binary strings of length seven that are not repetitions of a shorter string, and 126 different semigroups on four elements (up to isomorphism and reversal). There are exactly 126 positive integers that are not solutions of the equation x = abc + abd + acd + bcd, where a, b, c, and d must themselves all be positive integers. 126 is the number of root vectors of the simple Lie group E7. In physics 126 is the seventh magic number in nuclear physics. For each of these numbers, 2, 8, 20, 28, 50, 82, and 126, an atomic nucleus with this many protons is or is predicted to be more stable than for other numbers. Thus, although there has been no experimental discovery of element 126, tentatively called unbihexium, it is predicted to belong to an island of stability that might allow it to exist with a long enough half-life that its existence could be detected. See also 126th (disambiguation) The years 126 AD and 126 BC 126 Velleda, a main belt asteroid List of highways numbered 126 126 film and 126 film (roll format) photographic film formats References Integers
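Most of the elementary properties listed above can be verified mechanically; a sketch in Python using only the standard library:

```python
from math import comb

assert comb(9, 4) == 126                      # central binomial coefficient of row 9
assert comb(9, 4) == comb(8, 3) + comb(8, 4)  # its place in Pascal's triangle
assert 5**3 + 1**3 == 126                     # sum of two cubes
assert sum(d**3 for d in (1, 5)) == 126       # sigma_3(5): cubes of the divisors of 5
n = 6
assert n * n * (n + 1) // 2 == 126            # 6th pentagonal pyramidal number
assert 4 * n * n - 3 * n == 126               # 6th decagonal number
assert 2**7 - 2 == 126                        # length-7 binary strings that are not
                                              # repetitions of a shorter string
print("all checks pass")
```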
https://en.wikipedia.org/wiki/List%20of%20districts%20in%20Northern%20Ireland%20%28pre-2015%29
This is a list of the former local government districts in Northern Ireland showing statistics for population, population density and area. The figures are from the 2011 Census. These districts officially dissolved on 1 April 2015 when they were merged into eleven larger districts, statistics for which are listed at Local government in Northern Ireland. See also Local government in Northern Ireland Local Councils in Northern Ireland by area Local Councils in Northern Ireland by population density List of districts in Northern Ireland by religion or religion brought up in List of districts in Northern Ireland by national identity Notes Government of Northern Ireland Districts of Northern Ireland, 1972–2015 Demographics of Northern Ireland Lists of places in Northern Ireland Northern Ireland
https://en.wikipedia.org/wiki/Feuerbach%20point
In the geometry of triangles, the incircle and nine-point circle of a triangle are internally tangent to each other at the Feuerbach point of the triangle. The Feuerbach point is a triangle center, meaning that its definition does not depend on the placement and scale of the triangle. It is listed as X(11) in Clark Kimberling's Encyclopedia of Triangle Centers, and is named after Karl Wilhelm Feuerbach. Feuerbach's theorem, published by Feuerbach in 1822, states more generally that the nine-point circle is tangent to the three excircles of the triangle as well as its incircle. A very short proof of this theorem based on Casey's theorem on the bitangents of four circles tangent to a fifth circle was published by John Casey in 1866; Feuerbach's theorem has also been used as a test case for automated theorem proving. The three points of tangency with the excircles form the Feuerbach triangle of the given triangle. Construction The incircle of a triangle ABC is a circle that is tangent to all three sides of the triangle. Its center, the incenter of the triangle, lies at the point where the three internal angle bisectors of the triangle cross each other. The nine-point circle is another circle defined from a triangle. It is so called because it passes through nine significant points of the triangle, among which the simplest to construct are the midpoints of the triangle's sides. The nine-point circle passes through these three midpoints; thus, it is the circumcircle of the medial triangle. These two circles meet in a single point, where they are tangent to each other. That point of tangency is the Feuerbach point of the triangle. Associated with the incircle of a triangle are three more circles, the excircles. These are circles that are each tangent to the three lines through the triangle's sides. Each excircle touches one of these lines from the opposite side of the triangle, and is on the same side as the triangle for the other two lines. Like the incircle, the excircles are all tangent to the nine-point circle. Their points of tangency with the nine-point circle form a triangle, the Feuerbach triangle. Properties The Feuerbach point lies on the line through the centers of the two tangent circles that define it. These centers are the incenter and nine-point center of the triangle. Let dA, dB, and dC be the three distances of the Feuerbach point to the vertices of the medial triangle (the midpoints of the sides BC = a, CA = b, and AB = c respectively of the original triangle). Then the largest of these three distances equals the sum of the other two. Specifically, there is an explicit formula for these distances in terms of the reference triangle's circumcenter O and incenter I. The latter property also holds for the tangency point of any of the excircles with the nine-point circle: the greatest distance from this tangency to one of the original triangle's side midpoints equals the sum of the distances to the other two side midpoints. If the incircle of triangle ABC to
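Feuerbach's tangency can be checked numerically: internal tangency means the distance between the incenter and the nine-point center equals R/2 − r, where R is the circumradius and r the inradius. A sketch in Python with NumPy; the triangle coordinates are an arbitrary example:

```python
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)

area = abs((B - A)[0] * (C - A)[1] - (B - A)[1] * (C - A)[0]) / 2
s = (a + b + c) / 2
r = area / s                                # inradius
R = a * b * c / (4 * area)                  # circumradius
I = (a * A + b * B + c * C) / (a + b + c)   # incenter (barycentric weights a:b:c)

# circumcenter from the standard determinant formula
d = 2 * (A[0] * (B[1] - C[1]) + B[0] * (C[1] - A[1]) + C[0] * (A[1] - B[1]))
O = np.array([
    (A @ A * (B[1] - C[1]) + B @ B * (C[1] - A[1]) + C @ C * (A[1] - B[1])) / d,
    (A @ A * (C[0] - B[0]) + B @ B * (A[0] - C[0]) + C @ C * (B[0] - A[0])) / d,
])
H = A + B + C - 2 * O                       # orthocenter
N = (O + H) / 2                             # nine-point center, radius R/2

print(np.isclose(np.linalg.norm(N - I), R / 2 - r))  # True: internal tangency
```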
https://en.wikipedia.org/wiki/The%20Calculus%20Affair
The Calculus Affair () is the eighteenth volume of The Adventures of Tintin, the comics series by the Belgian cartoonist Hergé. It was serialised weekly in Belgium's Tintin magazine from December 1954 to February 1956 before being published in a single volume by Casterman in 1956. The story follows the attempts of the young reporter Tintin, his dog Snowy, and his friend Captain Haddock to rescue their friend Professor Calculus, who has developed a machine capable of destroying objects with sound waves, from kidnapping attempts by the competing European countries of Borduria and Syldavia. Like the previous volume, Explorers on the Moon, The Calculus Affair was created with the aid of Hergé's team of artists at Studios Hergé. The story reflected the Cold War tensions that Europe was experiencing during the 1950s, and introduced three recurring characters into the series: Jolyon Wagg, Cutts the Butcher, and Colonel Sponsz. Hergé continued The Adventures of Tintin with The Red Sea Sharks, and the series as a whole became a defining part of the Franco-Belgian comics tradition. The Calculus Affair was critically well received, with various commentators having described it as one of the best Tintin adventures. The story was adapted for the 1957 Belvision animated series Hergé's Adventures of Tintin, the 1991 Ellipse/Nelvana animated series The Adventures of Tintin, and the 1992–93 BBC Radio 5 dramatisation of the Adventures. Synopsis During a thunderstorm, glass and porcelain items at Marlinspike Hall shatter inexplicably. Insurance salesman Jolyon Wagg arrives at the house to take shelter, annoying Captain Haddock. Gunshots are heard in the Hall's grounds, and Tintin and Haddock discover an unconscious man with a foreign accent who soon disappears with an accomplice. The next morning, Professor Calculus leaves for Geneva to attend a conference on nuclear physics. Tintin and Haddock use the opportunity to investigate Calculus' laboratory, discovering that his experiments were responsible for the glass-shattering of the previous night. While exploring, they are attacked by a masked stranger, who then takes off. As the stranger escapes, Snowy rips his coat, causing a cigarette packet to fall out. Written on the packet is the name of the Hotel Cornavin, Geneva, where Calculus is staying. Fearing that Calculus is in danger, Tintin, Haddock, and Snowy head for Geneva. In Geneva, they learn that Calculus has gone to Nyon to meet Professor Topolino, an expert in ultrasonics. The group travel there in a taxi, but their car is attacked by two men in another car, who force the taxi into Lake Geneva. Surviving the attack, Tintin, Haddock and Snowy continue to Nyon, where they find Topolino bound and gagged in his cellar. As Tintin questions the professor, the house blows up, but they all survive. Tintin and Haddock meet the detectives Thomson and Thompson, who reveal that the man at Marlinspike was Syldavian. Tintin surmises that Calculus had inv
https://en.wikipedia.org/wiki/Bound
Bound or bounds may refer to: Mathematics Bound variable Upper and lower bounds, observed limits of mathematical functions Physics Bound state, a particle that has a tendency to remain localized in one or more regions of space Geography Bound Brook (Raritan River), a tributary of the Raritan River in New Jersey Bound Brook, New Jersey, a borough in Somerset County People Bound (surname) Bounds (surname) Arts, entertainment, and media Films Bound (1996 film), an American neo-noir film by the Wachowskis Bound (2015 film), an American erotic thriller film by Jared Cohn Bound (2018 film), a Nigerian romantic drama film by Frank Rajah Arase Television "Bound" (Fringe), an episode of Fringe "Bound" (The Secret Circle), an episode of The Secret Circle "Bound" (Star Trek: Enterprise), an episode of Star Trek: Enterprise Other arts, entertainment, and media Bound (video game), a PlayStation 4 game "Bound", a song by Darkane from their 1999 album Rusted Angel "Bound", a song by Suzanne Vega from her 2007 album Beauty & Crime Bount or Bound, a fictional race in the anime Bleach Other uses Bound (car), a British 4-wheeled cyclecar made in 1920 Legally bound, see Contract Boundary (sports), the edges of a field Butts and bounds, delineation of property bounds See also Bind (disambiguation) Bond Bondage (disambiguation) Bound & Gagged (disambiguation) Boundary (disambiguation) Bound FX (Business)
https://en.wikipedia.org/wiki/Welch%27s%20method
Welch's method, named after Peter D. Welch, is an approach for spectral density estimation. It is used in physics, engineering, and applied mathematics for estimating the power of a signal at different frequencies. The method is based on the concept of using periodogram spectrum estimates, which are the result of converting a signal from the time domain to the frequency domain. Welch's method is an improvement on the standard periodogram spectrum estimating method and on Bartlett's method, in that it reduces noise in the estimated power spectra in exchange for reducing the frequency resolution. Due to the noise caused by imperfect and finite data, the noise reduction from Welch's method is often desired. Definition and procedure The Welch method is based on Bartlett's method and differs in two ways: The signal is split up into overlapping segments: the original data segment is split up into L data segments of length M, overlapping by D points. If D = M / 2, the overlap is said to be 50%. If D = 0, the overlap is said to be 0%; this is the same situation as in Bartlett's method. The overlapping segments are then windowed: After the data is split up into overlapping segments, each of the L data segments has a window applied to it (in the time domain). Most window functions afford more influence to the data at the center of the set than to data at the edges, which represents a loss of information. To mitigate that loss, the individual data sets are commonly overlapped in time (as in the above step). The windowing of the segments is what makes the Welch method a "modified" periodogram. After doing the above, the periodogram is calculated by computing the discrete Fourier transform, and then computing the squared magnitude of the result. The individual periodograms are then averaged, which reduces the variance of the individual power measurements. The end result is an array of power measurements vs. frequency "bin". Related approaches Other overlapping windowed Fourier transforms include: Modified discrete cosine transform Short-time Fourier transform See also Fast Fourier transform Power spectrum Spectral density estimation References Frequency-domain analysis Digital signal processing Waves
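A minimal sketch of the procedure using SciPy's implementation; the sampling rate, the segment length M = 1024, and the 50% overlap (D = 512) are illustrative choices:

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                               # sampling frequency in Hz
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)  # 50 Hz tone + noise

# windowed, overlapping segments -> averaged periodograms
f, Pxx = welch(x, fs=fs, window="hann", nperseg=1024, noverlap=512)
print(f[np.argmax(Pxx)])                  # spectral peak near 50 Hz
```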
https://en.wikipedia.org/wiki/Braid%20group
In mathematics, the braid group on n strands (denoted B_n), also known as the Artin braid group, is the group whose elements are equivalence classes of n-braids (e.g. under ambient isotopy), and whose group operation is composition of braids (see below). Example applications of braid groups include knot theory, where any knot may be represented as the closure of certain braids (a result known as Alexander's theorem); in mathematical physics where Artin's canonical presentation of the braid group corresponds to the Yang–Baxter equation (see below); and in monodromy invariants of algebraic geometry. Introduction In this introduction let n = 4; the generalization to other values of n will be straightforward. Consider two sets of four items lying on a table, with the items in each set being arranged in a vertical line, and such that one set sits next to the other. Using four strands, each item of the first set is connected with an item of the second set so that a one-to-one correspondence results. Such a connection is called a braid. Often some strands will have to pass over or under others, and this is crucial: two connections that differ in which strand crosses over which are different braids. On the other hand, two such connections which can be made to look the same by "pulling the strands" are considered the same braid. All strands are required to move from left to right; connections whose strands double back are not considered braids. Any two braids can be composed by drawing the first next to the second, identifying the four items in the middle, and connecting corresponding strands. The composition of two braids σ and τ is written as στ. The set of all braids on four strands is denoted by B_4. The above composition of braids is indeed a group operation. The identity element is the braid consisting of four parallel horizontal strands, and the inverse of a braid consists of that braid which "undoes" whatever the first braid did, and is obtained by flipping the braid's diagram across a vertical line going through its centre. Applications Braid theory has recently been applied to fluid mechanics, specifically to the field of chaotic mixing in fluid flows. The braiding of (2 + 1)-dimensional space-time trajectories formed by motion of physical rods, periodic orbits or "ghost rods", and almost-invariant sets has been used to estimate the topological entropy of several engineered and naturally occurring fluid systems, via the use of Nielsen–Thurston classification. Another field of intense investigation involving braid groups and related topological concepts in the context of quantum physics is in the theory and (conject
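The composition just described is easy to model with braid words: a braid on n strands can be written as a sequence of Artin generators σi (the strands in positions i and i + 1 cross, with i crossing over) and their inverses. A sketch in Python, representing σi as the integer i and its inverse as −i; the encoding and function names are illustrative, and only free cancellations are performed (deciding full equality in B_n would additionally require the braid relations):

```python
def compose(b1, b2):
    """Compose two braid words by concatenation, cancelling inverse pairs."""
    word = []
    for g in b1 + b2:
        if word and word[-1] == -g:
            word.pop()          # free reduction: sigma_i sigma_i^-1 = identity
        else:
            word.append(g)
    return word

def inverse(b):
    """Reverse the word and invert each crossing."""
    return [-g for g in reversed(b)]

def underlying_permutation(b, n):
    """Forgetting over/under information maps B_n onto the symmetric group S_n."""
    perm = list(range(n))
    for g in b:
        i = abs(g) - 1
        perm[i], perm[i + 1] = perm[i + 1], perm[i]
    return perm

sigma1, sigma2 = [1], [2]
assert compose(sigma1, inverse(sigma1)) == []              # a braid times its inverse
print(underlying_permutation(compose(sigma1, sigma2), 4))  # [1, 2, 0, 3]
```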
https://en.wikipedia.org/wiki/Prime%20geodesic
In mathematics, a prime geodesic on a hyperbolic surface is a primitive closed geodesic, i.e. a geodesic which is a closed curve that traces out its image exactly once. Such geodesics are called prime geodesics because, among other things, they obey an asymptotic distribution law similar to the prime number theorem. Technical background We briefly present some facts from hyperbolic geometry which are helpful in understanding prime geodesics. Hyperbolic isometries Consider the Poincaré half-plane model H of 2-dimensional hyperbolic geometry. Given a Fuchsian group, that is, a discrete subgroup Γ of PSL(2, R), Γ acts on H via linear fractional transformations. Each element of PSL(2, R) in fact defines an isometry of H, so Γ is a group of isometries of H. There are then three types of transformations: hyperbolic, elliptic, and parabolic. (The loxodromic transformations are not present because we are working with real numbers.) An element γ of Γ has two distinct real fixed points if and only if γ is hyperbolic. See Classification of isometries and Fixed points of isometries for more details. Closed geodesics Now consider the quotient surface M = Γ\H. The following description refers to the upper half-plane model of the hyperbolic plane. This is a hyperbolic surface, in fact, a Riemann surface. Each hyperbolic element h of Γ determines a closed geodesic of Γ\H: first, by joining the fixed points of h with a geodesic semicircle, we get a geodesic on H called the axis of h, and by projecting this geodesic to M, we get a geodesic on Γ\H. This geodesic is closed because two points which are in the same orbit under the action of Γ project to the same point on the quotient, by definition. It can be shown that this gives a one-to-one correspondence between closed geodesics on Γ\H and hyperbolic conjugacy classes in Γ. The prime geodesics are then those geodesics that trace out their image exactly once — algebraically, they correspond to primitive hyperbolic conjugacy classes, that is, conjugacy classes {γ} such that γ cannot be written as a nontrivial power of another element of Γ. Applications of prime geodesics The importance of prime geodesics comes from their relationship to other branches of mathematics, especially dynamical systems, ergodic theory, and number theory, as well as Riemann surfaces themselves. These applications often overlap among several different research fields. Dynamical systems and ergodic theory In dynamical systems, the closed geodesics represent the periodic orbits of the geodesic flow. Number theory In number theory, various "prime geodesic theorems" have been proved which are very similar in spirit to the prime number theorem. To be specific, we let π(x) denote the number of closed geodesics whose norm (a function related to length) is less than or equal to x; then π(x) ∼ x/ln(x). This result is usually credited to Atle Selberg. In his 1970 Ph.D. thesis, Grigory Margulis proved a similar result for surfaces of variable nega
https://en.wikipedia.org/wiki/Composition%20%28combinatorics%29
In mathematics, a composition of an integer n is a way of writing n as the sum of a sequence of (strictly) positive integers. Two sequences that differ in the order of their terms define different compositions of their sum, while they are considered to define the same partition of that number. Every integer has finitely many distinct compositions. Negative numbers do not have any compositions, but 0 has one composition, the empty sequence. Each positive integer n has 2^(n−1) distinct compositions. A weak composition of an integer n is similar to a composition of n, but allowing terms of the sequence to be zero: it is a way of writing n as the sum of a sequence of non-negative integers. As a consequence every positive integer admits infinitely many weak compositions (if their length is not bounded). Adding a number of terms 0 to the end of a weak composition is usually not considered to define a different weak composition; in other words, weak compositions are assumed to be implicitly extended indefinitely by terms 0. To further generalize, an A-restricted composition of an integer n, for a subset A of the (nonnegative or positive) integers, is an ordered collection of one or more elements in A whose sum is n. Examples The sixteen compositions of 5 are: 5 4 + 1 3 + 2 3 + 1 + 1 2 + 3 2 + 2 + 1 2 + 1 + 2 2 + 1 + 1 + 1 1 + 4 1 + 3 + 1 1 + 2 + 2 1 + 2 + 1 + 1 1 + 1 + 3 1 + 1 + 2 + 1 1 + 1 + 1 + 2 1 + 1 + 1 + 1 + 1. Compare this with the seven partitions of 5: 5 4 + 1 3 + 2 3 + 1 + 1 2 + 2 + 1 2 + 1 + 1 + 1 1 + 1 + 1 + 1 + 1. It is possible to put constraints on the parts of the compositions. For example the five compositions of 5 into distinct terms are: 5 4 + 1 3 + 2 2 + 3 1 + 4. Compare this with the three partitions of 5 into distinct terms: 5 4 + 1 3 + 2. Number of compositions Conventionally the empty composition is counted as the sole composition of 0, and there are no compositions of negative integers. There are 2^(n−1) compositions of n ≥ 1; here is a proof: Write n as a row of n ones. Placing either a plus sign or a comma in each of the n − 1 gaps between consecutive ones produces a unique composition of n: the plus signs add adjacent ones into a single part, while the commas separate parts. Conversely, every composition of n determines an assignment of pluses and commas. Since there are n − 1 binary choices, the result follows. The same argument shows that the number of compositions of n into exactly k parts (a k-composition) is given by the binomial coefficient C(n − 1, k − 1). Note that by summing over all possible numbers of parts we recover 2^(n−1) as the total number of compositions of n: C(n − 1, 0) + C(n − 1, 1) + … + C(n − 1, n − 1) = 2^(n−1). For weak compositions, the number is C(n + k − 1, k − 1), since each k-composition of n + k corresponds to a weak one of n by the rule of subtracting 1 from each part. It follows from this formula that the number of weak compositions of n into exactly k parts equals the number of weak compositions of k − 1 into exactly n + 1 parts. For A-restricted compositions, the number of compositions of n into exactly k parts is given by the extended binomial (or polynomial) coefficient , where the square brackets indicate the extraction of
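The plus-or-comma proof above translates directly into code that generates every composition; a sketch in Python (the function name is illustrative):

```python
from itertools import product

def compositions(n):
    """Generate all compositions of n via the plus/comma argument."""
    for gaps in product("+,", repeat=n - 1):
        parts, run = [], 1
        for gap in gaps:            # read the row of n ones left to right
            if gap == "+":
                run += 1            # a plus merges this 1 into the current part
            else:
                parts.append(run)   # a comma closes the current part
                run = 1
        yield parts + [run]

all5 = list(compositions(5))
print(len(all5))                        # 16 == 2**(5 - 1)
print(sum(len(c) == 2 for c in all5))   # 4 == C(4, 1) compositions into 2 parts
```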
https://en.wikipedia.org/wiki/Thomas%20E.%20Kurtz
Thomas Eugene Kurtz (born February 22, 1928) is a retired Dartmouth professor of mathematics and computer scientist, who along with his colleague John G. Kemeny set in motion the then revolutionary concept of making computers as freely available to college students as library books were, by implementing the concept of time-sharing at Dartmouth College. In his mission to allow non-expert users to interact with the computer, he co-developed the BASIC programming language (Beginners All-purpose Symbolic Instruction Code) and the Dartmouth Time Sharing System during 1963 to 1964. A native of Oak Park, Illinois, United States, Kurtz graduated from Knox College in 1950, and was awarded a Ph.D. degree from Princeton University in 1956, where his advisor was John Tukey, and joined the Mathematics Department of Dartmouth College that same year, where he taught statistics and numerical analysis. In 1983, Kurtz and Kemeny co-founded a company called True BASIC, Inc. to market True BASIC, an updated version of the language. Kurtz has also served as Council Chairman and Trustee of EDUCOM, as well as Trustee and Chairman of NERComP, and on the Pierce Panel of the President's Scientific Advisory Committee. Kurtz also served on the steering committees for the CONDUIT project and the CCUC conferences on instructional computing. In 1974, the American Federation of Information Processing Societies gave an award to Kurtz and Kemeny at the National Computer Conference for their work on BASIC and time-sharing. In 1991, the Computer Society honored Kurtz with the IEEE Computer Pioneer Award, and in 1994, he was inducted as a Fellow of the Association for Computing Machinery. Early life and education Kurtz's first experience with computing came in 1951 at the Summer Session of the Institute for Numerical Analysis at the University of California, Los Angeles. His interests have included numerical analysis, statistics, and computer science ever since. He graduated in 1950 with a bachelor's degree in mathematics, and in 1956, at the age of 28, he was awarded his PhD by Princeton University; his thesis was on a problem of multiple comparisons in mathematical statistics. Kurtz wrote his first computer program in 1951, one year into his graduate studies at Princeton, while working with computers at UCLA's Institute for Numerical Analysis. Dartmouth In 1963 to 1964, Kurtz and Kemeny developed the first version of the Dartmouth Time-Sharing System, a time-sharing system for university use, and the BASIC language. From 1966 to 1975, Kurtz served as Director of the Kiewit Computation Center at Dartmouth, and from 1975 to 1978, Director of the Office of Academic Computing. From 1980 to 1988 Kurtz was Director of the Computer and Information Systems program at Dartmouth, a ground-breaking multidisciplinary graduate program to develop information system (IS) leaders for industry. Su
https://en.wikipedia.org/wiki/United%20Kingdom%20Mathematics%20Trust
The United Kingdom Mathematics Trust (UKMT) is a charity founded in 1996 to help with the education of children in mathematics within the UK. History The national mathematics competitions existed prior to the formation of the UKMT, but the foundation of the UKMT in the summer of 1996 enabled them to be run collectively. The Senior Mathematical Challenge was formerly the National Mathematics Contest. Founded in 1961, it was run by the Mathematical Association from 1975 until its adoption by the UKMT in 1996. The Junior and Intermediate Mathematical Challenges were the initiative of Dr Tony Gardiner in 1987 and were run by him under the name of the United Kingdom Mathematics Foundation until 1996. The popularity of the UK national mathematics competitions is largely due to the publicising efforts of Dr Gardiner in the years 1987–1995. Hence, in 1995, he advertised for the formation of a committee and for a host institution that would lead to the establishment of the UKMT, enabling the challenges to be run effectively together under one organisation. Mathematical Challenges The UKMT runs a series of mathematics challenges to encourage children's interest in mathematics and develop their skills in secondary schools. The three main challenges are: Junior Mathematical Challenge (UK year 8/S2 and below) Intermediate Mathematical Challenge (UK year 11/S4 and below) Senior Mathematical Challenge (UK year 13/S6 and below) Certificates In the Junior and Intermediate Challenges the top-scoring 50% of the entrants receive bronze, silver or gold certificates based on their mark in the paper. In the Senior Mathematical Challenge these certificates are awarded to the top-scoring 66% of the entries. In each case bronze, silver and gold certificates are awarded in the ratio 3 : 2 : 1. So in the Junior and Intermediate Challenges: The Gold award is achieved by the top 8–9% of the entrants. The Silver award is achieved by 16–17% of the entrants. The Bronze award is achieved by 25% of the entrants. In the past, only the top 40% of participants received a certificate in the Junior and Intermediate Challenges, and only the top 60% of participants received a certificate in the Senior Challenge. The ratio of bronze, silver, and gold has not changed, still being 3 : 2 : 1. Junior Mathematical Challenge The Junior Mathematical Challenge (JMC) is an introductory challenge for pupils in Year 8 or below (aged 13 or below), taking place in spring each year. This takes the form of twenty-five multiple choice questions to be sat in exam conditions, to be completed within one hour. The first fifteen questions are designed to be easier, and a pupil will gain 5 marks for getting a question in this section correct. Questions 16–20 are more difficult and are worth 6 marks, with a penalty of 1 mark for a wrong answer, which discourages guessing. The last five questions are intended to be the most challenging and so are also worth 6 marks, but with a 2-mark penalty for an incor
https://en.wikipedia.org/wiki/Royal%20Statistical%20Society
The Royal Statistical Society (RSS) is an established statistical society. It has three main roles: a British learned society for statistics, a professional body for statisticians and a charity which promotes statistics for the public good. History The society was founded in 1834 as the Statistical Society of London, though a perhaps unrelated London Statistical Society was in existence at least as early as 1824. At that time there were many provincial statistics societies throughout Britain, but most have not survived. The Manchester Statistical Society (which is older than the LSS) is a notable exception. The associations were formed with the object of gathering information about society. The idea of statistics referred more to political knowledge rather than a series of methods. The members called themselves "statists" and the original aim was "...procuring, arranging and publishing facts to illustrate the condition and prospects of society" and the idea of interpreting data, or having opinions, was explicitly excluded. The original seal had the motto Aliis Exterendum (for others to thresh out, i.e. interpret) but this separation was found to be a hindrance and the motto was dropped in later logos. It was many decades before mathematics was regarded as part of the statistical project. Fellows Fellowship of the Royal Statistical Society is open to anyone with an interest in statistics. It is not restricted to only those with high achievement within the discipline. This distinguishes it from other learned societies, where usually the fellow grade is the highest grade in that discipline. Key figures Instrumental in founding the Statistical Society of London were Richard Jones, Charles Babbage, Adolphe Quetelet, William Whewell, Thomas Malthus, and William Henry Sykes. Among its famous members was Florence Nightingale, who was the society's first female member in 1858. Stella Cunliffe was the first female president. Other notable RSS presidents have included William Beveridge, Ronald Fisher, Harold Wilson, and David Cox. Honorary Secretaries include Gerald Goodhardt (1982–88). Royal Charter The LSS became the RSS (Royal Statistical Society) by Royal Charter in 1887, and merged with the Institute of Statisticians in 1993. The merger enabled the society to take on the role of a professional body as well as that of a learned society. As of 2019 the society claims more than 10,000 members around the world, of whom some 1,500 are professionally qualified, with the status of Chartered Statistician (CStat). In January 2009, the RSS received Licensed Body status from the UK Science Council to award Chartered Scientist status. Since February 2009 the society has awarded Chartered Scientist status to suitably qualified members. Unusually among professional societies, all members of the RSS are known as "Fellows". Fellowship is nowadays not usually used by post-merger members as a post-nominal mark of distinction. However, before the 1993 merger wi
https://en.wikipedia.org/wiki/Oval%20%28disambiguation%29
An oval is a curve resembling an egg or an ellipse. Oval, The Oval, or variations may also refer to: Mathematics Cassini oval Oval (projective plane) Places Singapore The Oval, Singapore, a road within Seletar Aerospace Park off Seletar Aerospace Drive United Kingdom Oval, London, a district in South London United States Oval, North Carolina, an unincorporated community Oval, Pennsylvania, a census-designated place Oval City, Ohio, an unincorporated community Oval Park, Visalia, California, a neighborhood Sports Cricket Adelaide Oval, in Australia Cricket oval, a type of sporting ground Kensington Oval, in Barbados Kensington Oval, Dunedin, a cricket ground in New Zealand The Oval, in London The Oval (Llandudno), a cricket ground in Llandudno, Conwy, Wales University of Otago Oval, a cricket ground in New Zealand Football Australian rules football playing field Perth Oval, in Australia The Oval (Belfast), in Northern Ireland The Oval (Eastbourne), in England The Oval (Wednesbury) (defunct), in England Ice skating Guidant John Rose Minnesota Oval, a multi-use ice facility in Minnesota, United States Olympic Oval, a speed skating rink in Calgary, Alberta, Canada Speed skating rink Utah Olympic Oval, a speed skating rink in Salt Lake City, Utah, United States Other uses in sports The Oval (Prestwick), a public park and sports facility in Scotland The Oval (Caernarfon), a multi-use stadium in Caernarfon, Wales The Oval (Wirral), an athletics stadium in Bebington, Merseyside, England Oval track, in automobile racing Other Old Oval, also known as the Kenneth A. Shaw Quadrangle since 2010, a central lawn on the Syracuse University campus Open Vulnerability and Assessment Language, in computing Oval (musical project), German electronic music project Oval (Stanford University), an oval-shaped sunken lawn on the Stanford University campus, Stanford, California, United States Oval (novel), a 2019 novel by Elvia Wilk Oval Office, the official office of the President of the United States Oval tube station, situated near the Oval cricket ground in London, England The Oval (Limassol), a commercial use high-rise building in Limassol, Cyprus The Oval (Ohio State University), a large green area in the center of the university The Oval (TV series), a 2019 TV series on BET created by Tyler Perry See also Ellipse (disambiguation)
https://en.wikipedia.org/wiki/Subsequential%20limit
In mathematics, a subsequential limit of a sequence is the limit of some subsequence. Every subsequential limit is a cluster point, but not conversely. In first-countable spaces, the two concepts coincide. In a topological space, if every subsequence of a sequence has itself a subsequence converging to the same point x, then the original sequence also converges to x. This need not hold in more generalized notions of convergence, such as almost everywhere convergence. The supremum of the set of all subsequential limits of some sequence is called the limit superior, or limsup. Similarly, the infimum of such a set is called the limit inferior, or liminf. See limit superior and limit inferior. If X is a metric space and there is a Cauchy sequence with a subsequence converging to some point x, then the whole sequence also converges to x. See also References Limits (mathematics) Sequences and series
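A worked example may help; the following LaTeX fragment (the sequence is an illustrative choice) exhibits a sequence whose set of subsequential limits is {−1, 1}:

```latex
% The sequence a_n = (-1)^n \left(1 + \tfrac{1}{n}\right) does not converge:
% its even-indexed subsequence a_{2k} = 1 + \tfrac{1}{2k} \to 1, while its
% odd-indexed subsequence a_{2k+1} = -\left(1 + \tfrac{1}{2k+1}\right) \to -1.
% The set of subsequential limits is therefore \{-1, 1\}, and
\[
  \limsup_{n \to \infty} a_n = 1,
  \qquad
  \liminf_{n \to \infty} a_n = -1.
\]
```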
https://en.wikipedia.org/wiki/Nonholonomic%20system
A nonholonomic system in physics and mathematics is a physical system whose state depends on the path taken in order to achieve it. Such a system is described by a set of parameters subject to differential constraints and non-linear constraints, such that when the system evolves along a path in its parameter space (the parameters varying continuously in values) and finally returns to the original set of parameter values at the start of the path, the system itself may not have returned to its original state. Nonholonomic mechanics is an autonomous division of Newtonian mechanics. Details More precisely, a nonholonomic system, also called an anholonomic system, is one in which there is a continuous closed circuit of the governing parameters, by which the system may be transformed from any given state to any other state. Because the final state of the system depends on the intermediate values of its trajectory through parameter space, the system cannot be represented by a conservative potential function as can, for example, the inverse square law of the gravitational force. This latter is an example of a holonomic system: path integrals in the system depend only upon the initial and final states of the system (positions in the potential), completely independent of the trajectory of transition between those states. The system is therefore said to be integrable, while the nonholonomic system is said to be nonintegrable. When a path integral is computed in a nonholonomic system, the value represents a deviation within some range of admissible values and this deviation is said to be an anholonomy produced by the specific path under consideration. This term was introduced by Heinrich Hertz in 1894. The general character of anholonomic systems is that of implicitly dependent parameters. If the implicit dependency can be removed, for example by raising the dimension of the space, thereby adding at least one additional parameter, the system is not truly nonholonomic, but is simply incompletely modeled by the lower-dimensional space. In contrast, if the system intrinsically cannot be represented by independent coordinates (parameters), then it is truly an anholonomic system. Some authors make much of this by creating a distinction between so-called internal and external states of the system, but in truth, all parameters are necessary to characterize the system, be they representative of "internal" or "external" processes, so the distinction is in fact artificial. However, there is a very real and irreconcilable difference between physical systems that obey conservation principles and those that do not. In the case of parallel transport on a sphere, the distinction is clear: a Riemannian manifold has a metric fundamentally distinct from that of a Euclidean space. For parallel transport on a sphere, the implicit dependence is intrinsic to the non-euclidean metric. The surface of a sphere is a two-dimensional space. By raising the dimension, we can
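The sphere example can be made quantitative: parallel-transporting a tangent vector once around a circle of colatitude θ on the unit sphere rotates it by the enclosed solid angle 2π(1 − cos θ), even though the path returns to its starting point — exactly the path dependence described above. A numerical sketch in Python; the projection scheme is a standard first-order approximation of parallel transport, and the step count and colatitude are illustrative choices:

```python
import numpy as np

theta = np.pi / 3                      # colatitude of the transport loop
phis = np.linspace(0.0, 2 * np.pi, 20000)
pts = np.stack([np.sin(theta) * np.cos(phis),
                np.sin(theta) * np.sin(phis),
                np.full_like(phis, np.cos(theta))], axis=1)

v = np.array([np.cos(theta), 0.0, -np.sin(theta)])  # tangent vector at the start
for p in pts[1:]:
    v = v - np.dot(v, p) * p           # project onto the new tangent plane
    v /= np.linalg.norm(v)             # first-order parallel transport step

e_theta = np.array([np.cos(theta), 0.0, -np.sin(theta)])  # frame at the start point
angle = np.arccos(np.clip(np.dot(v, e_theta), -1.0, 1.0))
print(angle, 2 * np.pi * (1 - np.cos(theta)))  # both approximately pi for theta = 60 deg
```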
https://en.wikipedia.org/wiki/Ferryland
Ferryland is a town in Newfoundland and Labrador on the Avalon Peninsula. According to the 2021 Statistics Canada census, its population is 371. Seventeenth century settlement Ferryland was originally established as a station for migratory fishermen in the late 16th century but had earlier been used by the French, Spanish, and Portuguese. By the 1590s it was one of the most popular fishing harbours in Newfoundland and acclaimed by Sir Walter Raleigh. Ferryland was called "Farilham" by the Portuguese fishermen and "Forillon" by the French—it later became anglicized to its current name "Ferryland." (This should not be confused with the Forillon National Park in Quebec, which still keeps its French name.) The land was granted by charter to the London and Bristol Company in the 1610s and the vicinity became the location of a number of short-lived English colonies at Cuper's Cove, Bristol's Hope, and Renews and adjoined the colony of South Falkland. In 1620 the territory was granted to George Calvert, 1st Baron Baltimore, who had obtained the holdings from William Vaughan. Calvert appointed Edward Wynne to establish a colony, which became the first successful permanent colony in Newfoundland, growing to a population of 100 by 1625. In 1623, Calvert's grant was confirmed and expanded. The Charter of Avalon was granted to Lord Baltimore by James I. Dated 7 April 1623, it created the Province of Avalon on the island of Newfoundland and gave Baltimore complete authority over all matters in the territory. That same year Baltimore chose Ferryland as the principal area of settlement. In the 1660s, the colony was attacked by the Dutch. The town was destroyed by New France in 1696 during the Avalon Peninsula Campaign of King William's War. Virtually forgotten for centuries, excavations of the original settlement began in earnest in the late 1980s and continue to this day. Historic designations The site of the 17th-century Colony of Avalon was designated a National Historic Site of Canada in 1953. It was also designated a Municipal Heritage District in 1998. The Historic Ferryland Museum was designated a Municipal Heritage Site in 2006. Demographics In the 2021 Census of Population conducted by Statistics Canada, Ferryland had a population of 371. Gallery See also List of lighthouses in Canada British colonization of the Americas List of cities and towns in Newfoundland and Labrador Sara Kirke Erasmus Stourton Ron Hynes James Tuck (archaeologist) References External links Official Newfoundland and Labrador Tourism Website - Ferryland Ferryland - Encyclopedia of Newfoundland and Labrador, vol. 2, pp. 50–60. Populated coastal places in Canada Towns in Newfoundland and Labrador Former English colonies
https://en.wikipedia.org/wiki/Density%20estimation
In statistics, probability density estimation or simply density estimation is the construction of an estimate, based on observed data, of an unobservable underlying probability density function. The unobservable density function is thought of as the density according to which a large population is distributed; the data are usually thought of as a random sample from that population. A variety of approaches to density estimation are used, including Parzen windows and a range of data clustering techniques, including vector quantization. The most basic form of density estimation is a rescaled histogram. Example We will consider records of the incidence of diabetes. The following is quoted verbatim from the data set description: A population of women who were at least 21 years old, of Pima Indian heritage and living near Phoenix, Arizona, was tested for diabetes mellitus according to World Health Organization criteria. The data were collected by the US National Institute of Diabetes and Digestive and Kidney Diseases. We used the 532 complete records. In this example, we construct three density estimates for "glu" (plasma glucose concentration), one conditional on the presence of diabetes, the second conditional on the absence of diabetes, and the third not conditional on diabetes. The conditional density estimates are then used to construct the probability of diabetes conditional on "glu". The "glu" data were obtained from the MASS package of the R programming language. Within R, ?Pima.tr and ?Pima.te give a fuller account of the data. The mean of "glu" in the diabetes cases is 143.1 and the standard deviation is 31.26. The mean of "glu" in the non-diabetes cases is 110.0 and the standard deviation is 24.29. From this we see that, in this data set, diabetes cases are associated with greater levels of "glu". This will be made clearer by plots of the estimated density functions. The first figure shows density estimates of p(glu | diabetes=1), p(glu | diabetes=0), and p(glu). The density estimates are kernel density estimates using a Gaussian kernel. That is, a Gaussian density function is placed at each data point, and the sum of the density functions is computed over the range of the data. From the density of "glu" conditional on diabetes, we can obtain the probability of diabetes conditional on "glu" via Bayes' rule. For brevity, "diabetes" is abbreviated "db." in this formula. The second figure shows the estimated posterior probability p(diabetes=1 | glu). From these data, it appears that an increased level of "glu" is associated with diabetes. Application and purpose A very natural use of density estimates is in the informal investigation of the properties of a given set of data. Density estimates can give a valuable indication of such features as skewness and multimodality in the data. In some cases they will yield conclusions that may then be regarded as self-evidently true, while in others all they will do is to point the way to f
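The kernel density estimate described above ("a Gaussian density function is placed at each data point, and the sum of the density functions is computed") takes only a few lines of NumPy. In the sketch below, the synthetic sample and the bandwidth are illustrative stand-ins for the "glu" data:

```python
import numpy as np

def gaussian_kde(data, grid, bandwidth):
    """Average of Gaussian bumps centred at each observation."""
    z = (grid[:, None] - data[None, :]) / bandwidth
    kernels = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return kernels.sum(axis=1) / (len(data) * bandwidth)

rng = np.random.default_rng(1)
glu = rng.normal(143.1, 31.26, size=200)   # stand-in for the diabetic "glu" sample
grid = np.linspace(50, 250, 400)
density = gaussian_kde(glu, grid, bandwidth=10.0)
print(np.trapz(density, grid))             # integrates to approximately 1
```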
https://en.wikipedia.org/wiki/P-value
In null-hypothesis significance testing, the p-value is the probability of obtaining test results at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct. A very small p-value means that such an extreme observed outcome would be very unlikely under the null hypothesis. Even though reporting p-values of statistical tests is common practice in academic publications of many quantitative fields, misinterpretation and misuse of p-values is widespread and has been a major topic in mathematics and metascience. In 2016, the American Statistical Association (ASA) made a formal statement that "p-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone" and that "a p-value, or statistical significance, does not measure the size of an effect or the importance of a result" or "evidence regarding a model or hypothesis." That said, a 2019 ASA task force issued a statement on statistical significance and replicability, concluding: "p-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data." Basic concepts In statistics, every conjecture concerning the unknown probability distribution of a collection of random variables representing the observed data in some study is called a statistical hypothesis. If we state one hypothesis only and the aim of the statistical test is to see whether this hypothesis is tenable, but not to investigate other specific hypotheses, then such a test is called a null hypothesis test. As our statistical hypothesis will, by definition, state some property of the distribution, the null hypothesis is the default hypothesis under which that property does not exist. The null hypothesis is typically that some parameter (such as a correlation or a difference between means) in the populations of interest is zero. Our hypothesis might specify the probability distribution of the data precisely, or it might only specify that it belongs to some class of distributions. Often, we reduce the data to a single numerical statistic, e.g., T, whose marginal probability distribution is closely connected to a main question of interest in the study. The p-value is used in the context of null hypothesis testing in order to quantify the statistical significance of a result, the result being the observed value of the chosen statistic T. The lower the p-value is, the lower the probability of getting that result if the null hypothesis were true. A result is said to be statistically significant if it allows us to reject the null hypothesis. All other things being equal, smaller p-values are taken as stronger evidence against the null hypothesis. Loosely speaking, rejection of the null hypothesis implies that there is sufficient evidence against it. As a particular example, if a null hypothesis states that a certain summary statistic follows the stand
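The definition can be illustrated by simulation: under the null hypothesis that a coin is fair, the p-value of an observed 60 heads in 100 flips is the probability of a result at least as extreme. A sketch in Python; the observed count and the simulation size are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
observed_heads = 60
sims = rng.binomial(n=100, p=0.5, size=200_000)   # resample under the null

# two-sided p-value: results at least as far from the expected 50 as observed
p_value = np.mean(np.abs(sims - 50) >= abs(observed_heads - 50))
print(p_value)   # close to 0.057, the exact two-sided binomial tail probability
```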
https://en.wikipedia.org/wiki/Schwarzian%20derivative
In mathematics, the Schwarzian derivative is an operator similar to the derivative which is invariant under Möbius transformations. Thus, it occurs in the theory of the complex projective line, and in particular, in the theory of modular forms and hypergeometric functions. It plays an important role in the theory of univalent functions, conformal mapping and Teichmüller spaces. It is named after the German mathematician Hermann Schwarz. Definition The Schwarzian derivative of a holomorphic function f of one complex variable z is defined by (Sf)(z) = (f″(z)/f′(z))′ − (1/2)(f″(z)/f′(z))² = f‴(z)/f′(z) − (3/2)(f″(z)/f′(z))². The same formula also defines the Schwarzian derivative of a function of one real variable. The alternative notation {f, z} is frequently used. Properties The Schwarzian derivative of any Möbius transformation is zero. Conversely, the Möbius transformations are the only functions with this property. Thus, the Schwarzian derivative precisely measures the degree to which a function fails to be a Möbius transformation. If g is a Möbius transformation, then the composition g ∘ f has the same Schwarzian derivative as f; and on the other hand, the Schwarzian derivative of f ∘ g is given by the chain rule S(f ∘ g)(z) = S(f)(g(z)) · g′(z)². More generally, for any sufficiently differentiable functions f and g, S(f ∘ g)(z) = S(f)(g(z)) · g′(z)² + S(g)(z). When f and g are smooth real-valued functions, this implies that all iterations of a function with negative (or positive) Schwarzian will remain negative (resp. positive), a fact of use in the study of one-dimensional dynamics. Introducing the function of two complex variables F(z, w) = log((f(z) − f(w))/(z − w)), its second mixed partial derivative is given by ∂²F/(∂z ∂w) = f′(z)f′(w)/(f(z) − f(w))² − 1/(z − w)², and the Schwarzian derivative is given by the formula: S(f)(z) = 6 · lim (w→z) ∂²F/(∂z ∂w). The Schwarzian derivative has a simple inversion formula, exchanging the dependent and the independent variables. One has S(w; v) = −(dw/dv)² S(v; w). This follows from the chain rule above. Geometric interpretation William Thurston interprets the Schwarzian derivative as a measure of how much a conformal map deviates from a Möbius transformation. Let f be a conformal mapping in a neighborhood of z₀. Then there exists a unique Möbius transformation M that has the same 0, 1, 2-th order derivatives as f at z₀. Now (M⁻¹ ∘ f)(z) = z + (1/6)S(f)(z₀)(z − z₀)³ + higher-order terms. To explicitly solve for it suffices to solve the case of Let and solve for the that make the first three coefficients of equal to Plugging it into the fourth coefficient, we get . After a translation, rotation, and scaling of the complex plane, in a neighborhood of zero. Up to third order this function maps the circle of radius to the parametric curve defined by where This curve is, up to fourth order, an ellipse with semiaxes and as Since Möbius transformations always map circles to circles or lines, the eccentricity measures the deviation of from a Möbius transform. Differential equation Consider the linear second-order ordinary differential equation where is a real-valued function of a real parameter . Let denote the two-dimensional space of solutions. For , let be the evaluation functional . The map gives, for each point of the domain of , a one-di
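An added sanity check (not from the article): the definition above can be verified symbolically with SymPy, confirming that a general Möbius transformation has vanishing Schwarzian derivative:

from sympy import symbols, diff, simplify

z, a, b, c, d = symbols('z a b c d')

def schwarzian(f, z):
    # S(f) = f'''/f' - (3/2) * (f''/f')**2
    f1, f2, f3 = diff(f, z), diff(f, z, 2), diff(f, z, 3)
    return simplify(f3 / f1 - 3 * f2**2 / (2 * f1**2))

mobius = (a * z + b) / (c * z + d)
print(schwarzian(mobius, z))   # 0, as the invariance property requires
print(schwarzian(z**2, z))     # -3/(2*z**2), a non-Möbius example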
https://en.wikipedia.org/wiki/Mean-field%20theory
In physics and probability theory, mean-field theory (MFT), or self-consistent field theory, studies the behavior of high-dimensional random (stochastic) models by studying a simpler model that approximates the original by averaging over degrees of freedom (the number of values in the final calculation of a statistic that are free to vary). Such models consider many individual components that interact with each other. The main idea of MFT is to replace all interactions to any one body with an average or effective interaction, sometimes called a molecular field. This reduces any many-body problem to an effective one-body problem. The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at a lower computational cost. MFT has since been applied to a wide range of fields outside of physics, including statistical inference, graphical models, neuroscience, artificial intelligence, epidemic models, queueing theory, computer-network performance and game theory, as in the quantal response equilibrium. Origins The idea first appeared in physics (statistical mechanics) in the work of Pierre Curie and Pierre Weiss to describe phase transitions. MFT has been used in the Bragg–Williams approximation, models on the Bethe lattice, Landau theory, the Pierre–Weiss approximation, Flory–Huggins solution theory, and Scheutjens–Fleer theory. Systems with many (sometimes infinite) degrees of freedom are generally hard to solve exactly or compute in closed, analytic form, except for some simple cases (e.g. certain Gaussian random-field theories, the 1D Ising model). Often combinatorial problems arise that make things like computing the partition function of a system difficult. MFT is an approximation method that often makes the original problem solvable and open to calculation, and in some cases MFT may give very accurate approximations. In field theory, the Hamiltonian may be expanded in terms of the magnitude of fluctuations around the mean of the field. In this context, MFT can be viewed as the "zeroth-order" expansion of the Hamiltonian in fluctuations. Physically, this means that an MFT system has no fluctuations, but this coincides with the idea that one is replacing all interactions with a "mean field". Quite often, MFT provides a convenient launch point for studying higher-order fluctuations. For example, when computing the partition function, studying the combinatorics of the interaction terms in the Hamiltonian can sometimes at best produce perturbation results or Feynman diagrams that correct the mean-field approximation. Validity In general, dimensionality plays an active role in determining whether a mean-field approach will work for any particular problem. There is sometimes a critical dimension above which MFT is valid and below which it is not. Heuristically, many interactions are replaced in MFT by one effective interaction. So if the field or particle exhibits many random interactions in the original system,
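To make the self-consistency idea concrete, here is an added sketch (not from the article) of the mean-field Ising magnetization, which solves m = tanh(β(Jzm + h)) by fixed-point iteration; the parameter values are arbitrary assumptions:

import math

J, z, h = 1.0, 4.0, 0.0   # coupling, coordination number, external field (assumed values)
beta = 0.5                # inverse temperature; here beta*J*z = 2 > 1, the ordered regime

m = 0.5                   # initial guess (m = 0 is an unstable fixed point below T_c)
for _ in range(200):
    m = math.tanh(beta * (J * z * m + h))   # self-consistency condition
print(m)                  # converges to a nonzero spontaneous magnetization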
https://en.wikipedia.org/wiki/Paul%20L%C3%A9vy%20%28mathematician%29
Paul Pierre Lévy (15 September 1886 – 15 December 1971) was a French mathematician who was active especially in probability theory, introducing fundamental concepts such as local time, stable distributions and characteristic functions. Lévy processes, Lévy flights, Lévy measures, Lévy's constant, the Lévy distribution, the Lévy area, the Lévy arcsine law, and the fractal Lévy C curve are named after him. Biography Lévy was born in Paris to a Jewish family which already included several mathematicians. His father Lucien Lévy was an examiner at the École Polytechnique. Lévy attended the École Polytechnique and published his first paper in 1905, at the age of nineteen, while still an undergraduate, in which he introduced the Lévy–Steinitz theorem. His teacher and advisor was Jacques Hadamard. After graduation, he spent a year in military service and then studied for three years at the École des Mines, where he became a professor in 1913. During World War I Lévy conducted mathematical analysis work for the French Artillery. In 1920 he was appointed Professor of Analysis at the École Polytechnique, where his students included Benoît Mandelbrot and Georges Matheron. He remained at the École Polytechnique until his retirement in 1959, with a gap during World War II after his 1940 firing because of the Vichy anti-Jewish legislation. Lévy made many fundamental contributions to probability theory and the nascent theory of stochastic processes. He introduced the notion of 'stable distributions', which share the property of stability under addition of independent variables, and proved a general version of the central limit theorem, recorded in his 1937 book Théorie de l'addition des variables aléatoires, using the notion of characteristic function. He also introduced, independently from Ya. Khinchine, the notion of infinitely divisible laws and derived their characterization through the Lévy–Khintchine representation. His 1948 monograph on Brownian motion, Processus stochastiques et mouvement brownien, contains a wealth of new concepts and results, including the Lévy area, the Lévy arcsine law, the local time of a Brownian path, and many other results. Lévy received a number of honours, including membership at the French Academy of Sciences and honorary membership at the London Mathematical Society. His daughter Marie-Hélène Schwartz and son-in-law Laurent Schwartz were also notable mathematicians. Works 1922 – Lecons d'analyse Fonctionnelle 1925 – Calcul des probabilités 1937 – Théorie de l'addition des variables aléatoires 1948 – Processus stochastiques et mouvement brownien 1954 – Le mouvement brownien See also Cramér's decomposition theorem Lévy distribution Lévy metric Lévy's modulus of continuity Lévy–Prokhorov metric Lévy's continuity theorem Lévy's zero-one law Concentration of measure Lévy process Lévy–Khintchine representation Lévy–Itô decomposition Lévy flight local time Isoperimetric inequality on a sphere Lé
https://en.wikipedia.org/wiki/Graph%20%28abstract%20data%20type%29
In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from the field of graph theory within mathematics. A graph data structure consists of a finite (and possibly mutable) set of vertices (also called nodes or points), together with a set of unordered pairs of these vertices for an undirected graph or a set of ordered pairs for a directed graph. These pairs are known as edges (also called links or lines), and for a directed graph are also sometimes known as arrows or arcs. The vertices may be part of the graph structure, or may be external entities represented by integer indices or references. A graph data structure may also associate to each edge some edge value, such as a symbolic label or a numeric attribute (cost, capacity, length, etc.). Operations The basic operations provided by a graph data structure G usually include: adjacent(G, x, y): tests whether there is an edge from the vertex x to the vertex y; neighbors(G, x): lists all vertices y such that there is an edge from the vertex x to the vertex y; add_vertex(G, x): adds the vertex x, if it is not there; remove_vertex(G, x): removes the vertex x, if it is there; add_edge(G, x, y, z): adds the edge z from the vertex x to the vertex y, if it is not there; remove_edge(G, x, y): removes the edge from the vertex x to the vertex y, if it is there; get_vertex_value(G, x): returns the value associated with the vertex x; set_vertex_value(G, x, v): sets the value associated with the vertex x to v. Structures that associate values to the edges usually also provide: get_edge_value(G, x, y): returns the value associated with the edge (x, y); set_edge_value(G, x, y, v): sets the value associated with the edge (x, y) to v. Common data structures for graph representation Adjacency list Vertices are stored as records or objects, and every vertex stores a list of adjacent vertices. This data structure allows the storage of additional data on the vertices. Additional data can be stored if edges are also stored as objects, in which case each vertex stores its incident edges and each edge stores its incident vertices. Adjacency matrix A two-dimensional matrix, in which the rows represent source vertices and columns represent destination vertices. Data on edges and vertices must be stored externally. Only the cost for one edge can be stored between each pair of vertices. Incidence matrix A two-dimensional matrix, in which the rows represent the vertices and columns represent the edges. The entries indicate the incidence relation between the vertex at a row and edge at a column. The following table gives the time complexity cost of performing various operations on graphs, for each of these representations, with |V| the number of vertices and |E| the number of edges. In the matrix representations, the entries encode the cost of following an edge. The cost of edges that are not present are assumed to be ∞. Adjacency lists are generally preferred for the representation of sparse graphs, while an adjacency matrix is preferred if the graph is dense; that is, the number of edges |E| is close to the nu
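An added minimal sketch (not from the article) of these operations for a directed graph stored as an adjacency list in Python; the class and method names simply mirror the operation list above:

class Graph:
    """Directed graph stored as an adjacency list (dict of sets)."""
    def __init__(self):
        self.adj = {}          # vertex -> set of successor vertices
        self.edge_value = {}   # (x, y) -> value associated with that edge

    def add_vertex(self, x):
        self.adj.setdefault(x, set())

    def add_edge(self, x, y):
        self.add_vertex(x)
        self.add_vertex(y)
        self.adj[x].add(y)

    def adjacent(self, x, y):
        return y in self.adj.get(x, set())

    def neighbors(self, x):
        return list(self.adj.get(x, set()))

    def set_edge_value(self, x, y, v):
        self.edge_value[(x, y)] = v

    def get_edge_value(self, x, y):
        return self.edge_value.get((x, y))

g = Graph()
g.add_edge('a', 'b')
g.set_edge_value('a', 'b', 3)
print(g.adjacent('a', 'b'), g.neighbors('a'), g.get_edge_value('a', 'b'))  # True ['b'] 3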
https://en.wikipedia.org/wiki/Level%20set
In mathematics, a level set of a real-valued function f of n real variables is a set where the function takes on a given constant value c, that is: L_c(f) = {(x1, …, xn) : f(x1, …, xn) = c}. When the number of independent variables is two, a level set is called a level curve, also known as contour line or isoline; so a level curve is the set of all real-valued solutions of an equation in two variables x1 and x2. When n = 3, a level set is called a level surface (or isosurface); so a level surface is the set of all real-valued roots of an equation in three variables x1, x2 and x3. For higher values of n, the level set is a level hypersurface, the set of all real-valued roots of an equation in n variables. A level set is a special case of a fiber. Alternative names Level sets show up in many applications, often under different names. For example, an implicit curve is a level curve, which is considered independently of its neighbor curves, emphasizing that such a curve is defined by an implicit equation. Analogously, a level surface is sometimes called an implicit surface or an isosurface. The name isocontour is also used, which means a contour of equal height. In various application areas, isocontours have received specific names, which indicate often the nature of the values of the considered function, such as isobar, isotherm, isogon, isochrone, isoquant and indifference curve. Examples Consider the 2-dimensional Euclidean distance: f(x, y) = √(x² + y²). A level set L_r(f) of this function consists of those points that lie at a distance of r from the origin, that is, a circle. For example, f(3, 4) = 5, because √(3² + 4²) = 5. Geometrically, this means that the point (3, 4) lies on the circle of radius 5 centered at the origin. More generally, a sphere in a metric space (M, d) with radius r centered at x₀ can be defined as the level set {x : d(x, x₀) = r}. A second example is the plot of Himmelblau's function shown in the figure to the right. Each curve shown is a level curve of the function, and they are spaced logarithmically: each curve represents a value a constant factor larger than that of the curve directly "within" it. Level sets versus the gradient Theorem: If the function f is differentiable, the gradient of f at a point is either zero, or perpendicular to the level set of f at that point. To understand what this means, imagine that two hikers are at the same location on a mountain. One of them is bold, and decides to go in the direction where the slope is steepest. The other one is more cautious and does not want to either climb or descend, choosing a path which stays at the same height. In our analogy, the above theorem says that the two hikers will depart in directions perpendicular to each other. A consequence of this theorem (and its proof) is that if f is differentiable, a level set is a hypersurface and a manifold outside the critical points of f. At a critical point, a level set may be reduced to a point (for example at a local extremum of f) or may have a singularity such as a self-intersection point or a cusp. Sublevel and superlevel sets A
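An added numerical illustration (not from the article) of the gradient theorem, for f(x, y) = x² + y², whose level set f = 25 is the circle of radius 5:

import numpy as np

t = 0.7                                            # arbitrary parameter value
p = np.array([5 * np.cos(t), 5 * np.sin(t)])       # point on the level curve f = 25
grad = 2 * p                                       # gradient of f at p
tangent = np.array([-5 * np.sin(t), 5 * np.cos(t)])  # tangent to the level curve at p
print(np.dot(grad, tangent))                       # 0 (up to rounding): gradient is perpendicular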
https://en.wikipedia.org/wiki/Kim%20Maltman
Kim Maltman (born 1951) is a Canadian poet and physicist who lives in Toronto, Ontario. He is a professor of applied mathematics at York University and pursues research in theoretical nuclear/particle physics. He is serving as a judge for the 2019 Griffin Poetry Prize. Works The Country of the Mapmakers (1977), The Sicknesses of Hats (1982), Branch Lines (1982), Softened Violence (1984), The Transparence of November / Snow (1985), (with Roo Borson) Technologies/Installations (1990), Introduction to the Introduction to Wang Wei (2000), (by Pain Not Bread) External links Archives of Kim Maltman (Roo Robson and Kim Maltman fonds, (R12759) are held at Library and Archives Canada 1951 births Living people 20th-century Canadian poets Scientists from Toronto Poets from Toronto Particle physicists Academic staff of York University Canadian male poets 20th-century Canadian male writers 21st-century Canadian male writers 21st-century Canadian poets Canadian nuclear physicists Canadian particle physicists 20th-century Canadian scientists 21st-century Canadian scientists
https://en.wikipedia.org/wiki/Brillouin%20zone
In mathematics and solid state physics, the first Brillouin zone (named after Léon Brillouin) is a uniquely defined primitive cell in reciprocal space. In the same way the Bravais lattice is divided up into Wigner–Seitz cells in the real lattice, the reciprocal lattice is broken up into Brillouin zones. The boundaries of this cell are given by planes related to points on the reciprocal lattice. The importance of the Brillouin zone stems from the description of waves in a periodic medium given by Bloch's theorem, in which it is found that the solutions can be completely characterized by their behavior in a single Brillouin zone. The first Brillouin zone is the locus of points in reciprocal space that are closer to the origin of the reciprocal lattice than they are to any other reciprocal lattice points (see the derivation of the Wigner–Seitz cell). Another definition is as the set of points in k-space that can be reached from the origin without crossing any Bragg plane. Equivalently, this is the Voronoi cell around the origin of the reciprocal lattice. There are also second, third, etc., Brillouin zones, corresponding to a sequence of disjoint regions (all with the same volume) at increasing distances from the origin, but these are used less frequently. As a result, the first Brillouin zone is often called simply the Brillouin zone. In general, the n-th Brillouin zone consists of the set of points that can be reached from the origin by crossing exactly n − 1 distinct Bragg planes. A related concept is that of the irreducible Brillouin zone, which is the first Brillouin zone reduced by all of the symmetries in the point group of the lattice (point group of the crystal). The concept of a Brillouin zone was developed by Léon Brillouin (1889–1969), a French physicist. Within the Brillouin zone, a constant-energy surface represents the loci of all the k-points (that is, all the electron momentum values) that have the same energy. The Fermi surface is a special constant-energy surface that separates the unfilled orbitals from the filled ones at zero kelvin. Critical points Several points of high symmetry are of special interest – these are called critical points. Other lattices have different types of high-symmetry points. They can be found in the illustrations below. See also Fundamental pair of periods Fundamental domain References Bibliography External links Brillouin Zone simple lattice diagrams by Thayer Watkins Brillouin Zone 3d lattice diagrams by Technion. DoITPoMS Teaching and Learning Package – "Brillouin Zones" Aflowlib.org consortium database (Duke University) AFLOW Standardization of VASP/QUANTUM ESPRESSO input files (Duke University) Crystallography Electronic band structures Vibrational spectroscopy
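As an added illustration (not from the article): the reciprocal lattice underlying this construction can be computed from the real-space primitive vectors via b1 = 2π(a2 × a3)/(a1 · (a2 × a3)) and cyclic permutations; the FCC lattice with unit lattice constant below is an assumed example:

import numpy as np

# Real-space primitive vectors of an FCC lattice (lattice constant 1, assumed).
a1, a2, a3 = np.array([[0, .5, .5], [.5, 0, .5], [.5, .5, 0]])
vol = np.dot(a1, np.cross(a2, a3))
b1 = 2 * np.pi * np.cross(a2, a3) / vol
b2 = 2 * np.pi * np.cross(a3, a1) / vol
b3 = 2 * np.pi * np.cross(a1, a2) / vol
# b_i . a_j = 2*pi*delta_ij; these b_i span a BCC reciprocal lattice, whose
# Wigner-Seitz cell is the first Brillouin zone (a truncated octahedron).
print(np.round([b1 @ a1, b1 @ a2], 6))   # [6.283185 0.]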
https://en.wikipedia.org/wiki/128%20%28number%29
128 (one hundred [and] twenty-eight) is the natural number following 127 and preceding 129. In mathematics 128 is the seventh power of 2. It is the largest number which cannot be expressed as the sum of any number of distinct squares. However, it is divisible by the total number of its divisors, making it a refactorable number. The sum of Euler's totient function φ(x) over the first twenty integers is 128. 128 can be expressed by a combination of its digits with mathematical operators, thus 128 = 2^(8 − 1), making it a Friedman number in base 10. A hepteract has 128 vertices. 128 is the only 3-digit number that is a 7th power (2^7). In bar codes Code 128 is a Uniform Symbology Specification (USS Code 128) alphanumeric bar code that encodes text, numbers, numerous functions, and designed to encode all 128 ASCII characters (ASCII 0 to ASCII 127), as used in the shipping industry. Subdivisions include: 128A (0–9, A–Z, ASCII control codes, special characters) 128B (0–9, A–Z, a–z, special characters) 128C (00–99 numeric characters) GS1-128 application standard of the GS1 implementation using the Code 128 barcode specification ISBT 128 system for blood product labeling for the International Society of Blood Transfusion In computing 128-bit key size encryption for secure communications over the Internet IPv6 uses 128-bit (16-byte) addresses Any quantity of bits with a given binary prefix equals 128 bytes of the next smaller binary prefix; for example, 1 gibibit is 128 mebibytes 128-bit integers, memory addresses, or other data units are those that are at most 128 bits (16 octets) wide Seven-segment displays have 128 possible states. ASCII includes definitions for 128 characters (33 non-printing characters, mostly obsolete control characters that affect how text is processed, and 94 printable) A 128-bit integer can represent up to 3.40282366...e+38 values (2^128 = 340,282,366,920,938,463,463,374,607,431,768,211,456). CAST-128 is a block cipher used in a number of products, notably as the default cipher in some versions of GPG and PGP. Graphics cards have a 128-bit, 256-bit, or 512-bit data bus to memory. Atari 2600 consoles have 128 bytes of memory Sony's PlayStation 2 Emotion Engine CPU has two 128-bit vector units Macintosh 128K, the original Apple Macintosh personal computer released in 1984 Laser 128, a clone of the Apple II released in 1984 Commodore 128, a home/personal computer which had a 128 KB of memory released in 1985 Enterprise 128 Zilog Z80, a home computer released in 1985 Jane 128, a GUI-based integrated software package for the Commodore 128 personal computer released in 1985 RIVA 128 (Real-time Interactive Video and Animation accelerator), one of the first consumer graphics chips to integrate 3D and video acceleration in 1997 Super Mario 128, a cancelled Nintendo game, though many elements were included in Super Mario Galaxy and Pikmin In the military , a United States Navy Mission Buenaventura-class fleet oilers during World War II , a Un
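The claim about distinct squares can be checked by brute force; an added Python sketch (not from the article), enumerating all subsets of squares up to a chosen bound:

from itertools import combinations

def distinct_square_sums(limit):
    squares = [k * k for k in range(1, int(limit**0.5) + 1)]
    sums = set()
    for r in range(1, len(squares) + 1):
        for combo in combinations(squares, r):
            s = sum(combo)
            if s <= limit:
                sums.add(s)
    return sums

reachable = distinct_square_sums(300)
print(128 in reachable)                                      # False
print(max(n for n in range(1, 301) if n not in reachable))   # 128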
https://en.wikipedia.org/wiki/175%20%28number%29
175 (one hundred [and] seventy-five) is the natural number following 174 and preceding 176. In mathematics Raising the decimal digits of 175 to the powers of successive integers produces 175 back again: 1^1 + 7^2 + 5^3 = 1 + 49 + 125 = 175. 175 is a figurate number for a rhombic dodecahedron, the difference of two consecutive fourth powers: 175 = 4^4 − 3^4. It is also a decagonal number and a decagonal pyramid number, the smallest number after 1 that has both properties. In other fields In the Book of Genesis 25:7-8, Abraham is said to have lived to be 175 years old. 175 is the fire emergency number in Lebanon. See also The year AD 175 or 175 BC List of highways numbered 175 References Integers
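Both identities are easy to verify; an added one-line check in Python (not from the article):

print(1**1 + 7**2 + 5**3, 4**4 - 3**4)   # 175 175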
https://en.wikipedia.org/wiki/Higher-order%20logic
In mathematics and logic, a higher-order logic (abbreviated HOL) is a form of predicate logic that is distinguished from first-order logic by additional quantifiers and, sometimes, stronger semantics. Higher-order logics with their standard semantics are more expressive, but their model-theoretic properties are less well-behaved than those of first-order logic. The term "higher-order logic" is commonly used to mean higher-order simple predicate logic. Here "simple" indicates that the underlying type theory is the theory of simple types, also called the simple theory of types. Leon Chwistek and Frank P. Ramsey proposed this as a simplification of the complicated and clumsy ramified theory of types specified in the Principia Mathematica by Alfred North Whitehead and Bertrand Russell. "Simple types" is sometimes also meant to exclude polymorphic and dependent types. Quantification scope First-order logic quantifies only variables that range over individuals; second-order logic also quantifies over sets; third-order logic also quantifies over sets of sets, and so on. Higher-order logic is the union of first-, second-, third-, ..., nth-order logic; i.e., higher-order logic admits quantification over sets that are nested arbitrarily deeply. Semantics There are two possible semantics for higher-order logic. In the standard or full semantics, quantifiers over higher-type objects range over all possible objects of that type. For example, a quantifier over sets of individuals ranges over the entire powerset of the set of individuals. Thus, in standard semantics, once the set of individuals is specified, this is enough to specify all the quantifiers. HOL with standard semantics is more expressive than first-order logic. For example, HOL admits categorical axiomatizations of the natural numbers, and of the real numbers, which are impossible with first-order logic. However, by a result of Kurt Gödel, HOL with standard semantics does not admit an effective, sound, and complete proof calculus. The model-theoretic properties of HOL with standard semantics are also more complex than those of first-order logic. For example, the Löwenheim number of second-order logic is already larger than the first measurable cardinal, if such a cardinal exists. The Löwenheim number of first-order logic, in contrast, is ℵ0, the smallest infinite cardinal. In Henkin semantics, a separate domain is included in each interpretation for each higher-order type. Thus, for example, quantifiers over sets of individuals may range over only a subset of the powerset of the set of individuals. HOL with these semantics is equivalent to many-sorted first-order logic, rather than being stronger than first-order logic. In particular, HOL with Henkin semantics has all the model-theoretic properties of first-order logic, and has a complete, sound, effective proof system inherited from first-order logic. Properties Higher-order logics include the offshoots of Church's simple theory of typ
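As an added illustration (not from the article) of this extra expressive power, the second-order induction axiom quantifies over all properties P of individuals, which is what makes second-order Peano arithmetic categorical under standard semantics:

\forall P\,\bigl( P(0) \land \forall n\,(P(n) \rightarrow P(n+1)) \rightarrow \forall n\, P(n) \bigr)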
https://en.wikipedia.org/wiki/Excitatory%20synapse
An excitatory synapse is a synapse in which an action potential in a presynaptic neuron increases the probability of an action potential occurring in a postsynaptic cell. Neurons form networks through which nerve impulses travel, each neuron often making numerous connections with other cells. These electrical signals may be excitatory or inhibitory, and, if the total of excitatory influences exceeds that of the inhibitory influences, the neuron will generate a new action potential at its axon hillock, thus transmitting the information to yet another cell. This phenomenon is known as an excitatory postsynaptic potential (EPSP). It may occur via direct contact between cells (i.e., via gap junctions), as in an electrical synapse, but most commonly occurs via the vesicular release of neurotransmitters from the presynaptic axon terminal into the synaptic cleft, as in a chemical synapse. The excitatory neurotransmitters, the most common of which is glutamate, then migrate via diffusion to the dendritic spine of the postsynaptic neuron and bind a specific transmembrane receptor protein that triggers the depolarization of that cell. Depolarization, a deviation from a neuron's resting membrane potential towards its threshold potential, increases the likelihood of an action potential and normally occurs with the influx of positively charged sodium (Na+) ions into the postsynaptic cell through ion channels activated by neurotransmitter binding. Chemical vs electrical synapses There are two different kinds of synapses present within the human brain: chemical and electrical. Chemical synapses are by far the most prevalent and are the main player involved in excitatory synapses. Electrical synapses, the minority, allow direct, passive flow of electric current through special intercellular connections called gap junctions. These gap junctions allow for virtually instantaneous transmission of electrical signals through direct passive flow of ions between neurons (transmission can be bidirectional). The main goal of electrical synapses is to synchronize electrical activity among populations of neurons. The first electrical synapse was discovered in a crayfish nervous system. Chemical synaptic transmission is the transfer of neurotransmitters or neuropeptides from a presynaptic axon to a postsynaptic dendrite. Unlike an electrical synapse, the chemical synapses are separated by a space called the synaptic cleft, typically measured between 15 and 25 nm. Transmission of an excitatory signal involves several steps outlined below. Synaptic transmission In neurons that are involved in chemical synaptic transmission, neurotransmitters are synthesized either in the neuronal cell body, or within the presynaptic terminal, depending on the type of neurotransmitter being synthesized and the location of enzymes involved in its synthesis. These neurotransmitters are stored in synaptic vesicles that remain bound near the membrane by calcium-influenced pro
https://en.wikipedia.org/wiki/Torus-based%20cryptography
Torus-based cryptography involves using algebraic tori to construct a group for use in ciphers based on the discrete logarithm problem. This idea was first introduced by Alice Silverberg and Karl Rubin in 2003 in the form of a public key algorithm by the name of CEILIDH. It improves on conventional cryptosystems by representing some elements of large finite fields compactly and therefore transmitting fewer bits. See also Torus References Karl Rubin, Alice Silverberg: Torus-Based Cryptography. CRYPTO 2003: 349–365 External links Torus-Based Cryptography — the paper introducing the concept (in PDF). Public-key cryptography
https://en.wikipedia.org/wiki/369%20%28number%29
Three hundred sixty-nine is the natural number following three hundred sixty-eight and preceding three hundred seventy. In mathematics 369 is the magic constant of the 9 × 9 magic square and the n-Queens Problem for n = 9. There are 369 free octominoes (polyominoes of order 8). 369 and 370 form a Ruth–Aaron pair: the sums of their distinct prime factors are equal (369 = 3² × 41 and 370 = 2 × 5 × 37, with 3 + 41 = 2 + 5 + 37 = 44). References Integers
https://en.wikipedia.org/wiki/Global%20optimization
Global optimization is a branch of applied mathematics and numerical analysis that attempts to find the global minima or maxima of a function or a set of functions on a given set. It is usually described as a minimization problem because the maximization of the real-valued function g(x) is equivalent to the minimization of the function f(x) := −g(x). Given a possibly nonlinear and non-convex continuous function f: Ω → ℝ with the global minimum f* and the set X* of all global minimizers in Ω, the standard minimization problem can be given as min over x ∈ Ω of f(x), that is, finding f* and a global minimizer in X*; where Ω is a (not necessarily convex) compact set defined by inequalities g_i(x) ≥ 0, i = 1, …, r. Global optimization is distinguished from local optimization by its focus on finding the minimum or maximum over the given set, as opposed to finding local minima or maxima. Finding an arbitrary local minimum is relatively straightforward by using classical local optimization methods. Finding the global minimum of a function is far more difficult: analytical methods are frequently not applicable, and the use of numerical solution strategies often leads to very hard challenges. Applications Typical examples of global optimization applications include: Protein structure prediction (minimize the energy/free energy function) Computational phylogenetics (e.g., minimize the number of character transformations in the tree) Traveling salesman problem and electrical circuit design (minimize the path length) Chemical engineering (e.g., analyzing the Gibbs energy) Safety verification, safety engineering (e.g., of mechanical structures, buildings) Worst-case analysis Mathematical problems (e.g., the Kepler conjecture) Object packing (configuration design) problems The starting point of several molecular dynamics simulations consists of an initial optimization of the energy of the system to be simulated. Spin glasses Calibration of radio propagation models and of many other models in the sciences and engineering Curve fitting like non-linear least squares analysis and other generalizations, used in fitting model parameters to experimental data in chemistry, physics, biology, economics, finance, medicine, astronomy, engineering. IMRT radiation therapy planning Deterministic methods The most successful general exact strategies are: Inner and outer approximation In both of these strategies, the set over which a function is to be optimized is approximated by polyhedra. In inner approximation, the polyhedra are contained in the set, while in outer approximation, the polyhedra contain the set. Cutting-plane methods The cutting-plane method is an umbrella term for optimization methods which iteratively refine a feasible set or objective function by means of linear inequalities, termed cuts. Such procedures are popularly used to find integer solutions to mixed integer linear programming (MILP) problems, as well as to solve general, not necessarily differentiable convex optimization problems. The use of cutting plane
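An added toy sketch (not from the article) of the multistart heuristic that motivates the field: run a naive local descent from many random starting points of an arbitrarily chosen multimodal function and keep the best result; nothing here guarantees the global minimum:

import math
import random

def f(x):
    return x * x + 10 * math.sin(3 * x)   # multimodal test function (arbitrary choice)

def local_descent(x, step=0.01, iters=5000):
    # crude derivative-free descent: move to a neighbor while it improves
    for _ in range(iters):
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            break
    return x

random.seed(0)
candidates = [local_descent(random.uniform(-10, 10)) for _ in range(20)]
best = min(candidates, key=f)
print(best, f(best))   # the best of many local minima; here it lands near the global one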
https://en.wikipedia.org/wiki/BSGS
The initialism BSGS has two meanings, both related to group theory in mathematics: Baby-step giant-step, an algorithm for solving the discrete logarithm problem The combination of a base and strong generating set (SGS) for a permutation group
https://en.wikipedia.org/wiki/Baby-step%20giant-step
In group theory, a branch of mathematics, the baby-step giant-step is a meet-in-the-middle algorithm for computing the discrete logarithm or order of an element in a finite abelian group by Daniel Shanks. The discrete log problem is of fundamental importance to the area of public key cryptography. Many of the most commonly used cryptography systems are based on the assumption that the discrete log is extremely difficult to compute; the more difficult it is, the more security it provides a data transfer. One way to increase the difficulty of the discrete log problem is to base the cryptosystem on a larger group. Theory The algorithm is based on a space–time tradeoff. It is a fairly simple modification of trial multiplication, the naive method of finding discrete logarithms. Given a cyclic group G of order n, a generator α of the group and a group element β, the problem is to find an integer x such that α^x = β. The baby-step giant-step algorithm is based on rewriting x as x = im + j, with m = ⌈√n⌉, 0 ≤ i < m, and 0 ≤ j < m. Therefore, we have: α^j = β(α^(−m))^i. The algorithm precomputes α^j for several values of j. Then it fixes an m and tries values of i in the right-hand side of the congruence above, in the manner of trial multiplication. It tests to see if the congruence is satisfied for any value of j, using the precomputed values of α^j. The algorithm Input: A cyclic group G of order n, having a generator α and an element β. Output: A value x satisfying α^x = β. m ← Ceiling(√n) For all j where 0 ≤ j < m: Compute α^j and store the pair (j, α^j) in a table. Compute α^(−m). γ ← β. (set γ = β) For all i where 0 ≤ i < m: Check to see if γ is the second component (α^j) of any pair in the table. If so, return im + j. If not, γ ← γ • α^(−m). In practice The best way to speed up the baby-step giant-step algorithm is to use an efficient table lookup scheme. The best in this case is a hash table. The hashing is done on the second component, and to perform the check in step 1 of the main loop, γ is hashed and the resulting memory address checked. Since hash tables can retrieve and add elements in O(1) time (constant time), this does not slow down the overall baby-step giant-step algorithm. The space complexity of the algorithm is O(√n), while the time complexity of the algorithm is O(√n). This running time is better than the O(n) running time of the naive brute force calculation. The baby-step giant-step algorithm could be used by an eavesdropper to derive the private key generated in the Diffie Hellman key exchange, when the modulus is a prime number that is not too large. If the modulus is not prime, the Pohlig–Hellman algorithm has a smaller algorithmic complexity, and potentially solves the same problem. Notes The baby-step giant-step algorithm is a generic algorithm. It works for every finite cyclic group. It is not necessary to know the order of the group G in advance. The algorithm still works if n is merely an upper bound on the group order. Usually the baby-step giant-step algorithm is used for groups whose order is prime. If the order o
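The pseudocode above translates directly to Python; an added sketch (not from the article) for the multiplicative group modulo a prime, with arbitrary example parameters (pow with a negative exponent needs Python 3.8+):

import math

def bsgs(alpha, beta, n, p):
    """Solve alpha**x == beta (mod p) for x, where n bounds the group order."""
    m = math.isqrt(n - 1) + 1                          # m = ceil(sqrt(n))
    table = {pow(alpha, j, p): j for j in range(m)}    # baby steps: alpha^j -> j
    step = pow(alpha, -m, p)                           # alpha^(-m) mod p
    gamma = beta % p
    for i in range(m):                                 # giant steps
        if gamma in table:
            return i * m + table[gamma]
        gamma = (gamma * step) % p
    return None

# 2 generates the multiplicative group mod 101 (order 100); recover x = 73.
print(bsgs(2, pow(2, 73, 101), 100, 101))   # 73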
https://en.wikipedia.org/wiki/Look-and-say%20sequence
In mathematics, the look-and-say sequence is the sequence of integers beginning as follows: 1, 11, 21, 1211, 111221, 312211, 13112221, 1113213211, 31131211131221, ... . To generate a member of the sequence from the previous member, read off the digits of the previous member, counting the number of digits in groups of the same digit. For example: 1 is read off as "one 1" or 11. 11 is read off as "two 1s" or 21. 21 is read off as "one 2, one 1" or 1211. 1211 is read off as "one 1, one 2, two 1s" or 111221. 111221 is read off as "three 1s, two 2s, one 1" or 312211. The look-and-say sequence was analyzed by John Conway after he was introduced to it by one of his students at a party. The idea of the look-and-say sequence is similar to that of run-length encoding. If started with any digit d from 0 to 9 then d will remain indefinitely as the last digit of the sequence. For any d other than 1, the sequence starts as follows: d, 1d, 111d, 311d, 13211d, 111312211d, 31131122211d, … Ilan Vardi has called this sequence, starting with d = 3, the Conway sequence. Basic properties Growth The sequence grows indefinitely. In fact, any variant defined by starting with a different integer seed number will (eventually) also grow indefinitely, except for the degenerate sequence: 22, 22, 22, 22, ... Digits presence limitation No digits other than 1, 2, and 3 appear in the sequence, unless the seed number contains such a digit or a run of more than three of the same digit. Cosmological decay Conway's cosmological theorem asserts that every sequence eventually splits ("decays") into a sequence of "atomic elements", which are finite subsequences that never again interact with their neighbors. There are 92 elements containing the digits 1, 2, and 3 only, which John Conway named after the 92 naturally-occurring chemical elements up to uranium, calling the sequence audioactive. There are also two "transuranic" elements (Np and Pu) for each digit other than 1, 2, and 3. Below is a table of all such elements: Growth in length The terms eventually grow in length by about 30% per generation. In particular, if Ln denotes the number of digits of the n-th member of the sequence, then the limit of the ratio Ln+1/Ln exists and is given by lim (n→∞) Ln+1/Ln = λ, where λ = 1.303577269034... is an algebraic number of degree 71. This fact was proven by Conway, and the constant λ is known as Conway's constant. The same result also holds for every variant of the sequence starting with any seed other than 22. Conway's constant as a polynomial root Conway's constant is the unique positive real root of the following polynomial : This polynomial was correctly given in Conway's original Eureka article, but in the reprinted version in the book edited by Cover and Gopinath the term was incorrectly printed with a minus sign in front. Popularization The look-and-say sequence is also popularly known as the Morris Number Sequence, after cryptographer Robert Morris, and the puzzl
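The generation rule is one line with itertools.groupby; an added Python sketch (not from the article):

from itertools import groupby

def look_and_say(term):
    # read off each run of identical digits as "<count><digit>"
    return ''.join(str(len(list(g))) + d for d, g in groupby(term))

seq = ['1']
for _ in range(7):
    seq.append(look_and_say(seq[-1]))
print(seq)   # ['1', '11', '21', '1211', '111221', '312211', '13112221', '1113213211']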
https://en.wikipedia.org/wiki/Speak%20%26%20Math
The Speak & Math (or Speak & Maths in some countries) was a popular electronic toy created by Texas Instruments in 1980. Speak & Math was one of a three-part talking educational toy series that also included Speak & Spell and Speak & Read. The Speak & Math was sold worldwide. It was advertised as a tool for helping young children to become better at mathematics. The Speak & Math had a distinct gray with blue and orange color scheme. The unit could utilize either 4 "C" batteries or a 6-volt DC power adapter. The display was a 9-character, 14-segment vacuum fluorescent display. The Speak & Math used a TI TMS5110 chip for voice synthesis. The Speak & Math, like the earlier Speak & Spell, also had the ability to expand its memory using expansion modules that plugged into a slot inside the battery compartment. No expansion modules are known to have been produced for the Speak & Math, however. Like some models of the Speak & Spell, the Speak & Math had a mono headphone port. Speak & Math had five distinct learning games: Solve It, Word Problems, Greater Than/Less Than, Write It, and Number Stumper, all playable at three levels of difficulty. Solve It is the classic math problem-solving game where the participant must solve five math problems to the best of their ability. Number Stumper is a game of Bulls and Cows, whereby one is told the "number [of digits] right" and the "number in wrong place." Write It involves the participant typing the number they hear. Greater Than/Less Than involves identifying whether the number on the left is greater than or less than the number on the right. External links The Texas Instruments Speak & Math page at 99er.net The Chip Collection at the Smithsonian – TI Speak & Math Educational toys Texas Instruments hardware Electronic toys
https://en.wikipedia.org/wiki/Catalan%20solid
In mathematics, a Catalan solid, or Archimedean dual, is a polyhedron that is dual to an Archimedean solid. There are 13 Catalan solids. They are named for the Belgian mathematician Eugène Catalan, who first described them in 1865. The Catalan solids are all convex. They are face-transitive but not vertex-transitive. This is because the dual Archimedean solids are vertex-transitive and not face-transitive. Note that unlike Platonic solids and Archimedean solids, the faces of Catalan solids are not regular polygons. However, the vertex figures of Catalan solids are regular, and they have constant dihedral angles. Being face-transitive, Catalan solids are isohedra. Additionally, two of the Catalan solids are edge-transitive: the rhombic dodecahedron and the rhombic triacontahedron. These are the duals of the two quasi-regular Archimedean solids. Just as prisms and antiprisms are generally not considered Archimedean solids, bipyramids and trapezohedra are generally not considered Catalan solids, despite being face-transitive. Two of the Catalan solids are chiral: the pentagonal icositetrahedron and the pentagonal hexecontahedron, dual to the chiral snub cube and snub dodecahedron. These each come in two enantiomorphs. Not counting the enantiomorphs, bipyramids, and trapezohedra, there are a total of 13 Catalan solids. List of Catalan solids and their duals Symmetry The Catalan solids, along with their dual Archimedean solids, can be grouped in those with tetrahedral, octahedral and icosahedral symmetry. For both octahedral and icosahedral symmetry there are six forms. The only Catalan solid with genuine tetrahedral symmetry is the triakis tetrahedron (dual of the truncated tetrahedron). The rhombic dodecahedron and tetrakis hexahedron have octahedral symmetry, but they can be colored to have only tetrahedral symmetry. Rectification and snub also exist with tetrahedral symmetry, but they are Platonic instead of Archimedean, so their duals are Platonic instead of Catalan. (They are shown with brown background in the table below.) Geometry All dihedral angles of a Catalan solid are equal. Denoting their value by , and denoting the face angle at the vertices where faces meet by , we have . This can be used to compute and , , ... , from , ... only. Triangular faces Of the 13 Catalan solids, 7 have triangular faces. These are of the form Vp.q.r, where p, q and r take their values among 3, 4, 5, 6, 8 and 10. The angles , and can be computed in the following way. Put , , and put . Then , . For and the expressions are similar of course. The dihedral angle can be computed from . Applying this, for example, to the disdyakis triacontahedron (, and , hence , and , where is the golden ratio) gives and . Quadrilateral faces Of the 13 Catalan solids, 4 have quadrilateral faces. These are of the form Vp.q.p.r, where p, q and r take their values among 3, 4, and 5. The angle can be computed by the following formula: . From this, , and the dih
https://en.wikipedia.org/wiki/Bernard%20Fr%C3%A9nicle%20de%20Bessy
Bernard Frénicle de Bessy (c. 1604 – 1674), was a French mathematician born in Paris, who wrote numerous mathematical papers, mainly in number theory and combinatorics. He is best remembered for a treatise on magic squares published posthumously in 1693, in which he described all 880 essentially different normal magic squares of order 4. The Frénicle standard form, a standard representation of magic squares, is named after him. He solved many problems created by Fermat and also discovered the cube property of the number 1729 (Ramanujan number), later referred to as a taxicab number. He is also remembered for his treatise Traité des triangles rectangles en nombres published (posthumously) in 1676 and reprinted in 1729. Bessy was a member of many of the scientific circles of his day, including the French Academy of Sciences, and corresponded with many prominent mathematicians, such as Mersenne and Pascal. Bessy was also particularly close to Fermat, Descartes and Wallis, and was best known for his insights into number theory. In 1661 he proposed to John Wallis a problem of what amounted to the following system of equations in integers: x² + y² = z², x² = u² + v², x − y = u − v > 0. A solution was given by Théophile Pépin in 1880. La Méthode des exclusions Frénicle's La Méthode des exclusions was published (posthumously) in 1693, appearing in the fifth volume of a collection printed at Paris in 1729, though the work appears to have been written around 1640. The book contains a short introduction followed by ten rules, intended to serve as a "method" or general rules one should apply in order to solve mathematical problems. During the Renaissance, "method" was primarily used for educational purposes, rather than for professional mathematicians (or natural philosophers). However, Frénicle's rules imply slight methodological preferences which suggests a turn towards explorational purposes. Frénicle's text provided a number of examples on how his rules ought to be applied. He proposed the problem of determining whether or not a given integer can be the hypotenuse of a right-angled triangle (it is not clear if Frénicle initially intended the other two sides of the triangle to have integral length). He considers the case where the integer is 221 and promptly applies his second rule, which states that "if you do not know, even generally, what is proposed, find its properties by systematically constructing similar numbers." He then goes on and exploits the Pythagorean Theorem. Next, the third rule is applied, which states that "in order not to omit any necessary number, establish the order of investigation as simple as possible." Frénicle then takes increasing sums of perfect squares. He produces tables of computations and is able to reduce computations by rules four to six, which all deal with simplifying matters. He eventually arrives at the conclusion that it is possible for 221 to satisfy the property under certain conditions and checks his assertion by exper
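Frénicle's question about 221 is easy to settle by exhaustive search today; an added Python sketch (not from the article):

import math

hyp = 221
for a in range(1, hyp):
    b2 = hyp * hyp - a * a
    b = math.isqrt(b2)
    if b > a and b * b == b2:
        print(a, b)   # (21, 220), (85, 204), (104, 195), (140, 171)
# So 221 is indeed the hypotenuse of integer right triangles.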
https://en.wikipedia.org/wiki/Symbolic%20method
In mathematics, the symbolic method in invariant theory is an algorithm developed by Arthur Cayley, Siegfried Heinrich Aronhold, Alfred Clebsch, and Paul Gordan in the 19th century for computing invariants of algebraic forms. It is based on treating the form as if it were a power of a degree one form, which corresponds to embedding a symmetric power of a vector space into the symmetric elements of a tensor product of copies of it. Symbolic notation The symbolic method uses a compact, but rather confusing and mysterious notation for invariants, depending on the introduction of new symbols a, b, c, ... (from which the symbolic method gets its name) with apparently contradictory properties. Example: the discriminant of a binary quadratic form These symbols can be explained by the following example from Gordan. Suppose that is a binary quadratic form with an invariant given by the discriminant The symbolic representation of the discriminant is where a and b are the symbols. The meaning of the expression (ab)2 is as follows. First of all, (ab) is a shorthand form for the determinant of a matrix whose rows are a1, a2 and b1, b2, so Squaring this we get Next we pretend that so that and we ignore the fact that this does not seem to make sense if f is not a power of a linear form. Substituting these values gives Higher degrees More generally if is a binary form of higher degree, then one introduces new variables a1, a2, b1, b2, c1, c2, with the properties What this means is that the following two vector spaces are naturally isomorphic: The vector space of homogeneous polynomials in A0,...An of degree m The vector space of polynomials in 2m variables a1, a2, b1, b2, c1, c2, ... that have degree n in each of the m pairs of variables (a1, a2), (b1, b2), (c1, c2), ... and are symmetric under permutations of the m symbols a, b, ...., The isomorphism is given by mapping aa, bb, .... to Aj. This mapping does not preserve products of polynomials. More variables The extension to a form f in more than two variables x1, x2, x3,... is similar: one introduces symbols a1, a2, a3 and so on with the properties Symmetric products The rather mysterious formalism of the symbolic method corresponds to embedding a symmetric product Sn(V) of a vector space V into a tensor product of n copies of V, as the elements preserved by the action of the symmetric group. In fact this is done twice, because the invariants of degree n of a quantic of degree m are the invariant elements of SnSm(V), which gets embedded into a tensor product of mn copies of V, as the elements invariant under a wreath product of the two symmetric groups. The brackets of the symbolic method are really invariant linear forms on this tensor product, which give invariants of SnSm(V) by restriction. See also Umbral calculus References Footnotes Further reading pp. 32–7, "Invariants of n-ary forms: the symbolic method. Reprinted as Algebra Invariant theory
https://en.wikipedia.org/wiki/Jean-Marie%20Souriau
Jean-Marie Souriau (3 June 1922, Paris – 15 March 2012, Aix-en-Provence) was a French mathematician. He was one of the pioneers of modern symplectic geometry. Education and career Souriau started studying mathematics in 1942 at École Normale Supérieure in Paris. In 1946 he was a research fellow of CNRS and an engineer at ONERA. His PhD thesis, defended in 1952 under the supervision of Joseph Pérès and André Lichnerowicz, was entitled "Sur la stabilité des avions" (On the stability of aircraft). Between 1952 and 1958 he worked at the Institut des Hautes Études in Tunis, and from 1958 he was Professor of Mathematics at the University of Provence in Marseille. In 1981 he was awarded the Prix Jaffé of the French Academy of Sciences. Research Souriau contributed to the introduction and the development of many important concepts in symplectic geometry, arising from classical and quantum mechanics. In particular, he introduced the notion of moment map, gave a classification of the homogeneous symplectic manifolds (now known as the Kirillov-Kostant-Souriau theorem), and investigated the coadjoint action of a Lie group, which led to the first geometric interpretation of spin at a classical level. He also suggested a program of geometric quantization and developed a more general approach to differentiable manifolds by means of diffeologies. Souriau published more than 50 papers in peer-review scientific journals, as well as three monographs, on linear algebra, on relativity and on geometric mechanics. He supervised 9 PhD students. References External links Jean-Marie Souriau official website (the website hosts copies of many of Souriau's works) Ray F. Streater: Souriau Patrick Iglesias-Zemmour: Souriau (in French) In Memoriam Conference 2012: Web site "Structure des Systèmes Dynamiques" Anniversary Conference 2019 Interview of Jean-Marie Souriau by Laurence Honnorat on YouTube (in French) 1922 births 2012 deaths French mathematicians École Normale Supérieure alumni Academic staff of the University of Provence
https://en.wikipedia.org/wiki/Tilting
Tilting may refer to: Tilt (camera), a cinematographic technique Tilting at windmills, an English idiom Tilting theory, an algebra theory Exponential tilting, a probability distribution shifting technique Tilting three-wheeler, a vehicle which leans when cornering while keeping all of its three wheels on the ground Tilting train, a train with a mechanism enabling increased speed on regular railroad tracks Tilting, Newfoundland and Labrador, a town on Fogo Island, Canada Tilting, a type of jousting
https://en.wikipedia.org/wiki/Quasi-arithmetic%20mean
In mathematics and statistics, the quasi-arithmetic mean or generalised f-mean or Kolmogorov-Nagumo-de Finetti mean is one generalisation of the more familiar means such as the arithmetic mean and the geometric mean, using a function f. It is also called Kolmogorov mean after Soviet mathematician Andrey Kolmogorov. It is a broader generalization than the regular generalized mean. Definition If f is a function which maps an interval I of the real line to the real numbers, and is both continuous and injective, the f-mean of n numbers x1, …, xn in I is defined as Mf(x1, …, xn) = f⁻¹((f(x1) + ⋯ + f(xn))/n), which can also be written Mf(x) = f⁻¹((1/n) Σ f(xi)). We require f to be injective in order for the inverse function f⁻¹ to exist. Since f is defined over an interval, (f(x1) + ⋯ + f(xn))/n lies within the domain of f⁻¹. Since f is injective and continuous, it follows that f is a strictly monotonic function, and therefore that the f-mean is neither larger than the largest number of the tuple x nor smaller than the smallest number in x. Examples If I = ℝ, the real line, and f(x) = x (or indeed any linear function f(x) = a·x + b, a not equal to 0), then the f-mean corresponds to the arithmetic mean. If I = ℝ+, the positive real numbers, and f(x) = log(x), then the f-mean corresponds to the geometric mean. According to the f-mean properties, the result does not depend on the base of the logarithm as long as it is positive and not 1. If I = ℝ+ and f(x) = 1/x, then the f-mean corresponds to the harmonic mean. If I = ℝ+ and f(x) = x^p, then the f-mean corresponds to the power mean with exponent p. If I = ℝ and f(x) = exp(x), then the f-mean is the mean in the log semiring, which is a constant shifted version of the LogSumExp (LSE) function (which is the logarithmic sum): Mf(x1, …, xn) = LSE(x1, …, xn) − log(n). The −log(n) corresponds to dividing by n, since logarithmic division is linear subtraction. The LogSumExp function is a smooth maximum: a smooth approximation to the maximum function. Properties The following properties hold for Mf for any single function f: Symmetry: The value of Mf is unchanged if its arguments are permuted. Idempotency: for all x, Mf(x, …, x) = x. Monotonicity: Mf is monotonic in each of its arguments (since f is monotonic). Continuity: Mf is continuous in each of its arguments (since f is continuous). Replacement: Subsets of elements can be averaged a priori, without altering the mean, given that the multiplicity of elements is maintained. With m = Mf(x1, …, xk) it holds: Mf(x1, …, xk, xk+1, …, xn) = Mf(m, …, m, xk+1, …, xn), where m appears k times. Partitioning: The computation of the mean can be split into computations of equal sized sub-blocks: Mf(x1, …, xnk) = Mf(Mf(x1, …, xk), Mf(xk+1, …, x2k), …, Mf(x(n−1)k+1, …, xnk)). Self-distributivity: For any quasi-arithmetic mean M of two variables: M(x, M(y, z)) = M(M(x, y), M(x, z)). Mediality: For any quasi-arithmetic mean M of two variables: M(M(x, y), M(z, w)) = M(M(x, z), M(y, w)). Balancing: For any quasi-arithmetic mean M of two variables: M(M(x, M(x, y)), M(y, M(x, y))) = M(x, y). Central limit theorem: Under regularity conditions, for a sufficiently large sample, a suitably normalized version of Mf(X1, …, Xn) is approximately normal. A similar result is available for Bajraktarević means, which are generalizations of quasi-arithmetic means. Scale-invariance: The quasi-arithmetic mean is invariant with respect to offsets and scaling of f: if g(t) = a + b·f(t) with b ≠ 0, then Mg = Mf. Characterization There are several different sets of properties that characterize the quasi-arit
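The definition translates directly to code; an added Python sketch (not from the article) recovering the arithmetic, geometric and harmonic means as special cases:

import math

def f_mean(xs, f, f_inv):
    # M_f(x1, ..., xn) = f^{-1}( (f(x1) + ... + f(xn)) / n )
    return f_inv(sum(map(f, xs)) / len(xs))

data = [1.0, 2.0, 4.0]
print(f_mean(data, lambda x: x, lambda y: y))        # arithmetic mean: 2.333...
print(f_mean(data, math.log, math.exp))              # geometric mean: 2.0
print(f_mean(data, lambda x: 1/x, lambda y: 1/y))    # harmonic mean: ~1.714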
https://en.wikipedia.org/wiki/Jakob%20Steiner
Jakob Steiner (18 March 1796 – 1 April 1863) was a Swiss mathematician who worked primarily in geometry. Life Steiner was born in the village of Utzenstorf, Canton of Bern. At 18, he became a pupil of Heinrich Pestalozzi and afterwards studied at Heidelberg. Then, he went to Berlin, earning a livelihood there, as in Heidelberg, by tutoring. Here he became acquainted with A. L. Crelle, who, encouraged by his ability and by that of Niels Henrik Abel, then also staying at Berlin, founded his famous Journal (1826). After Steiner's publication (1832) of his Systematische Entwickelungen he received, through Carl Gustav Jacob Jacobi, who was then professor at Königsberg University, an honorary degree from that university; and through the influence of Jacobi and of the brothers Alexander and Wilhelm von Humboldt a new chair of geometry was founded for him at Berlin (1834). This he occupied until his death in Bern on 1 April 1863. He was described by Thomas Hirst as follows: "He is a middle-aged man, of pretty stout proportions, has a long intellectual face, with beard and moustache and a fine prominent forehead, hair dark rather inclining to turn grey. The first thing that strikes you on his face is a dash of care and anxiety, almost pain, as if arising from physical suffering—he has rheumatism. He never prepares his lectures beforehand. He thus often stumbles or fails to prove what he wishes at the moment, and at every such failure he is sure to make some characteristic remark." Mathematical contributions Steiner's mathematical work was mainly confined to geometry. This he treated synthetically, to the total exclusion of analysis, which he hated, and he is said to have considered it a disgrace to synthetic geometry if equal or higher results were obtained by analytical geometry methods. In his own field he surpassed all his contemporaries. His investigations are distinguished by their great generality, by the fertility of his resources, and by the rigour in his proofs. He has been considered the greatest pure geometer since Apollonius of Perga. In his Systematische Entwickelung der Abhängigkeit geometrischer Gestalten von einander he laid the foundation of modern synthetic geometry. In projective geometry even parallel lines have a point in common: a point at infinity. Thus two points determine a line and two lines determine a point. The symmetry of point and line is expressed as projective duality. Starting with perspectivities, the transformations of projective geometry are formed by composition, producing projectivities. Steiner identified sets preserved by projectivities such as a projective range and pencils. He is particularly remembered for his approach to a conic section by way of projectivity called the Steiner conic. In a second little volume, Die geometrischen Constructionen ausgeführt mittels der geraden Linie und eines festen Kreises (1833), republished in 1895 by Ottingen, he shows, what had been already suggested by J. V. Poncele
https://en.wikipedia.org/wiki/Lexicographic%20order
In mathematics, the lexicographic or lexicographical order (also known as lexical order, or dictionary order) is a generalization of the alphabetical order of the dictionaries to sequences of ordered symbols or, more generally, of elements of a totally ordered set.

There are several variants and generalizations of the lexicographical ordering. One variant applies to sequences of different lengths by comparing the lengths of the sequences before considering their elements. Another variant, widely used in combinatorics, orders subsets of a given finite set by assigning a total order to the finite set, and converting subsets into increasing sequences, to which the lexicographical order is applied. A generalization defines an order on an n-ary Cartesian product of partially ordered sets; this order is a total order if and only if all factors of the Cartesian product are totally ordered.

Definition
The words in a lexicon (the set of words used in some language) have a conventional ordering, used in dictionaries and encyclopedias, that depends on the underlying ordering of the alphabet of symbols used to build the words. The lexicographical order is one way of formalizing word order given the order of the underlying symbols.

The formal notion starts with a finite set $A$, often called the alphabet, which is totally ordered. That is, for any two symbols $a$ and $b$ in $A$ that are not the same symbol, either $a < b$ or $b < a$. The words of $A$ are the finite sequences of symbols from $A$, including words of length 1 containing a single symbol, words of length 2 with 2 symbols, and so on, even including the empty sequence $\varepsilon$ with no symbols at all. The lexicographical order on the set of all these finite words orders the words as follows:

Given two different words of the same length, say $a = a_1 a_2 \dots a_k$ and $b = b_1 b_2 \dots b_k$, the order of the two words depends on the alphabetic order of the symbols in the first place $i$ where the two words differ (counting from the beginning of the words): $a < b$ if and only if $a_i < b_i$ in the underlying order of the alphabet $A$.

If two words have different lengths, the usual lexicographical order pads the shorter one with "blanks" (a special symbol that is treated as smaller than every element of $A$) at the end until the words are the same length, and then the words are compared as in the previous case.

However, in combinatorics, another convention is frequently used for the second case, whereby a shorter sequence is always smaller than a longer sequence. This variant of the lexicographical order is sometimes called shortlex order.

In lexicographical order, the word "Thomas" appears before "Thompson" because they first differ at the fifth letter ('a' and 'p'), and the letter 'a' comes before the letter 'p' in the alphabet. Because it is the first difference, in this case the 5th letter is the "most significant difference" for alphabetical ordering.

An important property of the lexicographical order is that for each $n$, the set of words of length $n$ is well-ordered by the lexicographical order (provided the alphabet is finite); that is, every decreasing sequence of words of length $n$ is finite.
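To make the two conventions for words of different lengths concrete, here is a short Python sketch (an illustration, not from the article). Python's built-in string comparison already implements the dictionary-style order in which a proper prefix counts as smaller, which has the same effect as padding the shorter word with a blank smaller than every symbol:

def shortlex_less(a, b):
    # Shortlex: any shorter word precedes any longer word;
    # ties in length are broken lexicographically.
    return (len(a), a) < (len(b), b)

print("Thomas" < "Thompson")        # True: they first differ at 'a' < 'p'
print("Thom" < "Thomas")            # True: a proper prefix is smaller
print(shortlex_less("zz", "aaa"))   # True: the shorter word always wins in shortlex
print("zz" < "aaa")                 # False: plain lexicographic order compares 'z' > 'a'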
https://en.wikipedia.org/wiki/Riemann%E2%80%93Liouville%20integral
In mathematics, the Riemann–Liouville integral associates with a real function $f$ another function $I^\alpha f$ of the same kind for each value of the parameter $\alpha > 0$. The integral is a manner of generalization of the repeated antiderivative of $f$ in the sense that for positive integer values of $\alpha$, $I^\alpha f$ is an iterated antiderivative of $f$ of order $\alpha$. The Riemann–Liouville integral is named for Bernhard Riemann and Joseph Liouville, the latter of whom was the first to consider the possibility of fractional calculus in 1832. The operator agrees with the Euler transform, after Leonhard Euler, when applied to analytic functions. It was generalized to arbitrary dimensions by Marcel Riesz, who introduced the Riesz potential.

Definition
The Riemann–Liouville integral is defined by
$I^\alpha f(x) = \frac{1}{\Gamma(\alpha)} \int_a^x f(t)\,(x-t)^{\alpha-1}\,dt,$
where $\Gamma$ is the gamma function and $a$ is an arbitrary but fixed base point. The integral is well-defined provided $f$ is a locally integrable function, and $\alpha$ is a complex number in the half-plane $\operatorname{Re}(\alpha) > 0$. The dependence on the base-point $a$ is often suppressed, and represents a freedom in the constant of integration. Clearly $I^1 f$ is an antiderivative of $f$ (of first order), and for positive integer values of $\alpha$, $I^\alpha f$ is an antiderivative of order $\alpha$ by the Cauchy formula for repeated integration. Another notation, which emphasizes the base point, is
$_a D_x^{-\alpha} f(x) = \frac{1}{\Gamma(\alpha)} \int_a^x f(t)\,(x-t)^{\alpha-1}\,dt.$
This also makes sense if $a = -\infty$, with suitable restrictions on $f$. The fundamental relations
$\frac{d}{dx} I^{\alpha+1} f(x) = I^\alpha f(x), \qquad I^\alpha (I^\beta f) = I^{\alpha+\beta} f$
hold, the latter of which is a semigroup property. These properties make possible not only the definition of fractional integration, but also of fractional differentiation, by taking enough derivatives of $I^\alpha f$.

Properties
Fix a bounded interval $(a,b)$. The operator $I^\alpha$ associates to each integrable function $f$ on $(a,b)$ the function $I^\alpha f$ on $(a,b)$, which is also integrable by Fubini's theorem. Thus $I^\alpha$ defines a linear operator on $L^1(a,b)$. Fubini's theorem also shows that this operator is continuous with respect to the Banach space structure on $L^1$, and that the following inequality holds:
$\|I^\alpha f\|_1 \le \frac{|b-a|^{\operatorname{Re}(\alpha)}}{\operatorname{Re}(\alpha)\,|\Gamma(\alpha)|} \|f\|_1.$
Here $\|\cdot\|_1$ denotes the norm on $L^1(a,b)$. More generally, by Hölder's inequality, it follows that if $f \in L^p(a,b)$, then $I^\alpha f \in L^p(a,b)$ as well, and the analogous inequality holds
$\|I^\alpha f\|_p \le \frac{|b-a|^{\operatorname{Re}(\alpha)}}{\operatorname{Re}(\alpha)\,|\Gamma(\alpha)|} \|f\|_p,$
where $\|\cdot\|_p$ is the $L^p$ norm on the interval $(a,b)$. Thus we have a bounded linear operator $I^\alpha : L^p(a,b) \to L^p(a,b)$. Furthermore, $I^\alpha f \to f$ in the $L^p$ sense as $\alpha \to 0$ along the real axis; that is,
$\lim_{\alpha \to 0^+} \|I^\alpha f - f\|_p = 0$
for all $f \in L^p(a,b)$. Moreover, by estimating the maximal function of $I^\alpha f$, one can show that the limit $I^\alpha f \to f$ holds pointwise almost everywhere.

The operator $I^\alpha$ is well-defined on the set of locally integrable functions on the whole real line $\mathbb{R}$. It defines a bounded transformation on any of the Banach spaces of functions of exponential type $X_\sigma = L^1(e^{-\sigma|t|}\,dt)$, consisting of locally integrable functions for which the norm
$\|f\| = \int_{-\infty}^{\infty} |f(t)|\, e^{-\sigma|t|}\,dt$
is finite. For $f \in X_\sigma$, the Laplace transform of $I^\alpha f$ takes the particularly simple form
$(\mathcal{L} I^\alpha f)(s) = s^{-\alpha} F(s)$
for $\operatorname{Re}(s) > \sigma$. Here $F(s)$ denotes the Laplace transform of $f$, and this property expresses that $I^\alpha$ is a Fourier multiplier.

Fractional derivatives
One can define fractional-order derivatives of $f$ as well by
$\frac{d^\alpha}{dx^\alpha} f = \frac{d^{\lceil \alpha \rceil}}{dx^{\lceil \alpha \rceil}} I^{\lceil \alpha \rceil - \alpha} f,$
where $\lceil \cdot \rceil$ denotes the ceiling function. One also obtains a differintegral interpolating between differentiation and integration by defining
$D_x^\alpha f(x) = \begin{cases} \dfrac{d^{\lceil \alpha \rceil}}{dx^{\lceil \alpha \rceil}} I^{\lceil \alpha \rceil - \alpha} f(x), & \alpha > 0, \\ f(x), & \alpha = 0, \\ I^{-\alpha} f(x), & \alpha < 0. \end{cases}$
An alternative fractional derivative was introduced by Caputo in 1967.
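The defining integral is straightforward to approximate numerically. The sketch below is my own illustration (it assumes SciPy, which the article does not mention); it uses quadrature with an algebraic weight to absorb the integrable singularity of $(x-t)^{\alpha-1}$ at $t = x$, and checks the result against the closed form $I^\alpha[t](x) = x^{\alpha+1}/\Gamma(\alpha+2)$ with base point $a = 0$:

import math
from scipy.integrate import quad

def rl_integral(f, alpha, x, a=0.0):
    # I^alpha f(x) = (1/Gamma(alpha)) * integral_a^x f(t) (x - t)^(alpha - 1) dt.
    # weight='alg' with wvar=(0, alpha - 1) tells quad that the integrand
    # carries the factor (t - a)^0 * (x - t)^(alpha - 1), so the endpoint
    # singularity is handled by the quadrature rule itself.
    val, _ = quad(f, a, x, weight='alg', wvar=(0.0, alpha - 1.0))
    return val / math.gamma(alpha)

alpha, x = 0.5, 2.0
print(rl_integral(lambda t: t, alpha, x))        # numerical approximation
print(x**(alpha + 1) / math.gamma(alpha + 2))    # closed form; the two should agree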
https://en.wikipedia.org/wiki/Vedic%20Mathematics
Vedic Mathematics is a book written by the Indian monk Bharati Krishna Tirtha, and first published in 1965. It contains a list of mathematical techniques which were falsely claimed to have been retrieved from the Vedas and to contain advanced mathematical knowledge. Krishna Tirtha failed to produce the claimed sources, and scholars unanimously note the book to be a mere compendium of tricks for increasing the speed of elementary mathematical calculations, sharing no overlap with historical mathematical developments during the Vedic period. Nonetheless, there has been a proliferation of publications in this area and multiple attempts to integrate the subject into mainstream education by right-wing Hindu nationalist governments.

Contents
The book contains metaphorical aphorisms in the form of sixteen sutras and thirteen sub-sutras, which Krishna Tirtha states allude to significant mathematical tools. The range of their asserted applications spans topics as diverse as statics and pneumatics, astronomy, and financial domains. Tirtha stated that no part of advanced mathematics lay beyond the realms of his book, and propounded that studying it for a couple of hours every day for a year equated to spending about two decades in any standardized education system to become professionally trained in the discipline of mathematics.

STS scholar S. G. Dani, in 'Vedic Mathematics': Myth and Reality, states that the book is primarily a compendium of tricks that can be applied in elementary, middle and high school arithmetic and algebra to gain faster results. The sutras and sub-sutras are abstract literary expressions (for example, "as much less" or "one less than previous one") prone to creative interpretations; Krishna Tirtha exploited this to the extent of manipulating the same shloka to generate widely different mathematical equivalencies across a multitude of contexts.

Source and relation with the Vedas
According to Krishna Tirtha, the sutras and other accessory content were found after years of solitary study of the Vedas—a set of sacred ancient Hindu scriptures—in a forest. They were supposedly contained in the pariśiṣṭa—a supplementary text/appendix—of the Atharvaveda. He does not provide any further bibliographic clarification on the sourcing. The book's editor, Professor V. S. Agrawala, argues that since the Vedas are defined as the traditional repositories of all knowledge, any knowledge can be de facto assumed to be in the Vedas, irrespective of whether it may be physically located in them; he even went to the extent of deeming Krishna Tirtha's work a pariśiṣṭa in itself. However, numerous mathematicians and STS scholars (Dani, Kim Plofker, K. S. Shukla, Jan Hogendijk et al.) note that the Vedas do not contain any of those sutras and sub-sutras. When challenged by Shukla, a mathematician and a historiographer of ancient Indian mathematics, to locate the sutras in the pariśiṣṭa of a standard edition of the Atharvaveda, Krishna Tirtha stated that they were not included in the standard editions and were instead to be found only in a version not yet discovered.
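To give a flavour of the kind of elementary shortcut the book collects, here is one widely cited example (my choice of illustration, not something quoted from the article): squaring a number ending in 5, commonly attributed to the sutra "by one more than the previous one". The underlying identity is ordinary algebra, $(10a + 5)^2 = 100\,a(a+1) + 25$:

def square_ending_in_5(n):
    # For n = 10*a + 5: compute a*(a + 1), then append 25.
    assert n % 10 == 5
    a = n // 10
    return 100 * a * (a + 1) + 25

print(square_ending_in_5(85))    # 7225, since 8*9 = 72
print(square_ending_in_5(115))   # 13225, since 11*12 = 132

This is exactly the sort of trick Dani describes: a genuine but entirely elementary speed-up, with no connection to Vedic-period mathematics.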
https://en.wikipedia.org/wiki/Solution%20set
In mathematics, a solution set is the set of values that satisfy a given set of equations or inequalities. For example, for a set $P$ of polynomials over a ring $R$, the solution set is the subset of $R$ on which the polynomials all vanish (evaluate to 0), formally
$\{x \in R : \forall f \in P,\ f(x) = 0\}.$
The feasible region of a constrained optimization problem is the solution set of the constraints.

Examples
The solution set of the single equation $x = 0$ is the set $\{0\}$.
For any non-zero polynomial over the complex numbers in one variable, the solution set is made up of finitely many points. However, for a complex polynomial in more than one variable the solution set has no isolated points.

Remarks
In algebraic geometry, solution sets are called algebraic sets if there are no inequalities. Over the reals, and with inequalities, they are called semialgebraic sets.

Other meanings
More generally, the solution set to an arbitrary collection $E$ of relations $(E_i)$ ($i$ varying in some index set $I$) for a collection of unknowns $(x_j)_{j \in J}$, supposed to take values in respective spaces $(X_j)_{j \in J}$, is the set $S$ of all solutions to the relations $E$, where a solution is a family of values $(s_j)_{j \in J} \in \prod_{j \in J} X_j$ such that substituting $(x_j)_{j \in J}$ by $(s_j)_{j \in J}$ in the collection $E$ makes all relations "true". (Instead of relations depending on unknowns, one should speak more correctly of predicates: the collection $E$ is their logical conjunction, and the solution set is the inverse image of the boolean value true by the associated boolean-valued function.) The above meaning is a special case of this one, if the set of polynomials $f_i$ is interpreted as the set of equations $f_i(x) = 0$.

Examples
The solution set for $E = \{x + y = 0\}$ with respect to $(x, y)$ is $S = \{(a, -a) : a \in \mathbb{R}\}$.
The solution set for $E = \{x + y = 0\}$ with respect to $x$ is $S = \{-y\}$. (Here, $y$ is not "declared" as an unknown, and thus is to be seen as a parameter on which the equation, and therefore the solution set, depends.)
The solution set for $E = \{\sqrt{x} \le \sqrt{2}\}$ with respect to $x \in \mathbb{R}$ is the interval $S = [0, 2]$ (since $\sqrt{x}$ is undefined for negative values of $x$).
The solution set for $E = \{e^{ix} = 1\}$ with respect to $x \in \mathbb{R}$ is $S = 2\pi\mathbb{Z}$ (see Euler's identity).

See also
Equation solving
Extraneous and missing solutions
Equations
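Solution sets of this kind can also be computed symbolically. A minimal Python sketch using the SymPy library (an assumption of this illustration; the article references no software):

from sympy import symbols, Eq, solveset, S

x, y = symbols('x y')

# With respect to x alone, y acts as a parameter, giving the singleton {-y}.
print(solveset(Eq(x + y, 0), x))               # {-y}

# An inequality over the reals yields an interval as its solution set.
print(solveset(x**2 <= 4, x, domain=S.Reals))  # Interval(-2, 2)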
https://en.wikipedia.org/wiki/Weakly%20harmonic%20function
In mathematics, a function $f$ is weakly harmonic in a domain $D$ if
$\int_D f\, \Delta g = 0$
for all $g$ with compact support in $D$ and continuous second derivatives, where $\Delta$ is the Laplacian. This is the same notion as a weak derivative; however, a function can have a weak derivative without being differentiable. In this case, we have the somewhat surprising result that a function is weakly harmonic if and only if it is harmonic. Thus weakly harmonic is actually equivalent to the seemingly stronger harmonic condition.

See also
Weak solution
Weyl's lemma

References

Harmonic functions
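The defining condition can be checked numerically for a concrete harmonic function. The sketch below is my own illustration (using NumPy): it takes $f(x,y) = x^2 - y^2$, which is harmonic, and a $C^2$ test function $g$ supported in the unit disk, and verifies that $\int_D f\,\Delta g$ vanishes up to discretization error:

import numpy as np

n = 401
xs = np.linspace(-1.5, 1.5, n)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)

f = X**2 - Y**2                             # a harmonic function: Laplacian is 2 - 2 = 0
r2 = X**2 + Y**2
g = np.where(r2 < 1.0, (1.0 - r2)**3, 0.0)  # C^2, compactly supported in the unit disk

# Five-point finite-difference Laplacian of g; g vanishes near the grid
# edges, so the periodic wrap-around of np.roll contributes nothing.
lap_g = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
         np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4.0 * g) / h**2

print(np.sum(f * lap_g) * h**2)             # approximately 0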