https://en.wikipedia.org/wiki/CO-OPN
The CO-OPN (Concurrent Object-Oriented Petri Nets) specification language is based on two formalisms: algebraic specifications and algebraic Petri nets. The former captures the data-structure aspects of a system, while the latter captures its behavioral and concurrent aspects. To cope with large specifications, some structuring capabilities have been introduced. The object-oriented paradigm has been adopted, meaning that a CO-OPN specification is a collection of objects that interact concurrently. Cooperation between the objects is achieved by means of a synchronization mechanism: each object event may request to be synchronized with some methods (parameterized events) of one or a group of partners by means of a synchronization expression. A CO-OPN specification consists of two kinds of modules: abstract data type modules and object modules. Abstract data type modules describe the data-structure component of a specification, using many-sorted algebraic specifications. Object modules represent encapsulated entities that possess an internal state and provide the exterior with various services; for this second kind of module, an algebraic net formalism has been adopted. Algebraic Petri nets, a kind of high-level net, improve on classical Petri nets by replacing tokens with data structures described by means of algebraic abstract data types. To manage visibility, both abstract data type modules and object modules consist of an interface (which makes some operations visible from the outside) and a body (which mainly encapsulates the properties of the operations, together with some operations used only for building the model). In the case of object modules, the state and the behavior of the objects remain concealed within the body.
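The idea of an algebraic Petri net, where tokens are data structures rather than indistinguishable marks, can be illustrated by a minimal sketch. This is not CO-OPN syntax; the class and function names here are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    """A structured token: an algebraic data value rather than a plain mark."""
    kind: str
    value: int

class AlgebraicPetriNet:
    def __init__(self):
        self.places = {}                # place name -> multiset (list) of tokens

    def add_place(self, name):
        self.places[name] = []

    def put(self, place, token):
        self.places[place].append(token)

    def fire(self, src, dst, guard, action):
        """Fire a transition: consume the first token in `src` that satisfies
        `guard`, and produce `action(token)` in `dst`. Returns True on success,
        False when the transition is not enabled."""
        for tok in self.places[src]:
            if guard(tok):
                self.places[src].remove(tok)
                self.places[dst].append(action(tok))
                return True
        return False

net = AlgebraicPetriNet()
net.add_place("pending")
net.add_place("done")
net.put("pending", Token("job", 3))
fired = net.fire("pending", "done",
                 guard=lambda t: t.value > 0,                  # guard over token data
                 action=lambda t: Token(t.kind, t.value - 1))  # transform the token
```

The guard and action operate on the token's data, which is the essential difference from a classical Petri net, where a transition only counts tokens.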
To develop models in the CO-OPN language, one can use the COOPNBuilder framework, an environment composed of a set of tools intended to support concurrent software development based on the CO-OPN language. References External links Software Modeling and Verification Group of the University of Geneva. Specification languages Petri nets
https://en.wikipedia.org/wiki/Lemniscate%20constant
In mathematics, the lemniscate constant ϖ is a transcendental mathematical constant that is the ratio of the perimeter of Bernoulli's lemniscate to its diameter, analogous to the definition of π for the circle. Equivalently, the perimeter of the lemniscate (x² + y²)² = x² − y² is 2ϖ. The lemniscate constant is closely related to the lemniscate elliptic functions and is approximately equal to 2.62205755. The symbol ϖ is a cursive variant of π; see Pi § Variant pi. Gauss's constant, denoted by G, is equal to ϖ/π ≈ 0.8346. John Todd named two more lemniscate constants, the first lemniscate constant A = ϖ/2 and the second lemniscate constant B = π/(2ϖ). Sometimes the quantities 2ϖ or ϖ/2 are referred to as the lemniscate constant. History Gauss's constant is named after Carl Friedrich Gauss, who calculated it via the arithmetic–geometric mean as G = 1/M(1, √2). By 1799, Gauss had two proofs of the theorem that M(1, √2) = π/ϖ, where ϖ is the lemniscate constant. The lemniscate constant ϖ and the first lemniscate constant A were proven transcendental by Theodor Schneider in 1937, and the second lemniscate constant B and Gauss's constant G were proven transcendental by Theodor Schneider in 1941. In 1975, Gregory Chudnovsky proved that the set {π, ϖ} is algebraically independent over ℚ, which implies that G and A are algebraically independent as well. But the set {π, M(1, 1/√2), M′(1, 1/√2)} (where the prime denotes the derivative with respect to the second variable) is not algebraically independent over ℚ. Forms Usually, ϖ is defined by the integral ϖ = 2∫₀¹ dt/√(1 − t⁴); equivalent expressions exist in terms of the complete elliptic integral of the first kind K, the beta function Β, the gamma function Γ and the Riemann zeta function ζ. The lemniscate constant can also be computed by the arithmetic–geometric mean: ϖ = π/M(1, √2). Moreover, there is a series identity for ϖ analogous to one for π, involving the Dirichlet beta function β and the Riemann zeta function ζ. Gauss's constant is typically defined as the reciprocal of the arithmetic–geometric mean of 1 and the square root of 2, after his calculation of M(1, √2) published in 1800: G = 1/M(1, √2). Gauss's constant can also be expressed in terms of the beta function Β.
A formula for G in terms of Jacobi theta functions is also known. Gauss's constant may be computed from the gamma function at argument 1/4: G = Γ(1/4)²/(2√2 π^(3/2)). John Todd's lemniscate constants may be given in terms of the beta function Β: A = (1/4)Β(1/4, 1/2) and B = (1/4)Β(3/4, 1/2). Series Viète's formula for π can be written as an infinite product of nested radicals, and an analogous formula exists for ϖ. The Wallis product for π likewise has an analogue for ϖ, and a related infinite product exists for Gauss's constant G = ϖ/π. An infinite series for Gauss's constant was discovered by Gauss himself. The Machin formula for π is (1/4)π = 4 arctan(1/5) − arctan(1/239), and several similar formulas for π can be developed using trigonometric angle-sum identities, e.g. Euler's formula (1/4)π = arctan(1/2) + arctan(1/3). Analogous formulas can be developed for ϖ, including the following found by Gauss: (1/2)ϖ = 2 arcsl(1/2) + arcsl(7/23), where arcsl is the lemniscate arcsine. The lemniscate constant can be rapidly computed by a theta-like series whose exponents run over the generalized pentagonal numbers n(3n − 1)/2. In a spirit similar to that of the Basel problem, the sum of 1/z⁴ over the nonzero Gaussian integers z ∈ ℤ[i] can be evaluated in closed form involving ϖ; it is related to the Eisenstein series of
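Gauss's arithmetic–geometric mean relation M(1, √2) = π/ϖ described above gives a very fast way to compute the lemniscate constant. A short sketch (the function name is our own):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean M(a, b): iterate the arithmetic and
    geometric means until they agree to within `tol` (quadratic convergence)."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

# Gauss's relation M(1, sqrt(2)) = pi / varpi gives the lemniscate constant,
# and Gauss's constant is G = 1 / M(1, sqrt(2)) = varpi / pi.
varpi = math.pi / agm(1.0, math.sqrt(2.0))
gauss_G = 1.0 / agm(1.0, math.sqrt(2.0))
```

Only a handful of AGM iterations are needed for full double precision, which is why the AGM also underlies record computations of π.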
https://en.wikipedia.org/wiki/Chord%20%28concurrency%29
A chord is a concurrency construct available in Polyphonic C♯ and Cω inspired by the join pattern of the join-calculus. A chord is a function body that is associated with multiple function headers and cannot execute until all function headers are called. Synchronicity Cω defines two types of functions: synchronous and asynchronous. A synchronous function acts like a normal function in typical imperative languages: upon invocation the function body is executed, and a return value may or may not be returned to the caller. An asynchronous function acts similarly to a synchronous function that immediately returns void, but also triggers execution of the actual code in another thread/execution context. References Concurrency (computer science)
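Python has no chord syntax, but the behavior of the classic buffer chord (a synchronous Get joined with an asynchronous Put) can be approximated with a condition variable. This is a rough analogue for illustration, not Cω code; the class and method names are invented here.

```python
import threading
from collections import deque

class Buffer:
    """Rough analogue of the Cω buffer chord `string Get() & async Put(string)`:
    the chord body runs only once both headers have been called, pairing each
    Get with exactly one Put."""
    def __init__(self):
        self._items = deque()
        self._cond = threading.Condition()

    def put(self, item):
        # Asynchronous header: returns immediately to the caller.
        with self._cond:
            self._items.append(item)
            self._cond.notify()

    def get(self):
        # Synchronous header: blocks until a Put arrives to complete the chord.
        with self._cond:
            while not self._items:
                self._cond.wait()
            return self._items.popleft()

buf = Buffer()
threading.Thread(target=lambda: buf.put("hello")).start()
result = buf.get()   # blocks until the Put joins it
```

The condition variable plays the role of the join: neither call alone triggers the body, and the pairing of calls is what releases a value.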
https://en.wikipedia.org/wiki/Statistica
Statistica is an advanced analytics software package originally developed by StatSoft and currently maintained by TIBCO Software Inc. Statistica provides data analysis, data management, statistics, data mining, machine learning, text analytics and data visualization procedures. Overview Statistica is a suite of analytics software products and solutions originally developed by StatSoft and acquired by Dell in March 2014. The software includes an array of data analysis, data management, data visualization, and data mining procedures, as well as a variety of predictive modeling, clustering, classification, and exploratory techniques. Additional techniques are available through integration with the free, open-source R programming environment. Different packages of analytical techniques are available in six product lines. History Statistica originally derived from a set of software packages and add-ons that were initially developed during the mid-1980s by StatSoft. Following the 1986 release of Complete Statistical System (CSS) and the 1988 release of Macintosh Statistical System (MacSS), the first DOS version (trademarked in capitals as STATISTICA) was released in 1991. In 1992, the Macintosh version of Statistica was released. Statistica 5.0 was released in 1995. It ran on both the new 32-bit Windows 95/NT and the older Windows 3.1. It featured many new statistics and graphics procedures, a word-processor-style output editor (combining tables and graphs), and a built-in development environment that enabled the user to easily design new procedures (e.g., via the included Statistica Basic language) and integrate them with the Statistica system. Statistica 5.1 was released in 1996, followed by the Statistica CA '97 and Statistica '98 editions. Statistica 6, released in 2001, was based on the COM architecture and included multithreading and support for distributed computing. Statistica 9 was released in 2009, supporting both 32-bit and 64-bit computing.
Statistica 10 was released in November 2010. This release featured further performance optimizations for the 64-bit CPU architecture, as well as multithreading technologies, integration with Microsoft SharePoint, Microsoft Office 2010 and other applications, the ability to generate Java and C# code, and other GUI and kernel improvements. Statistica 12 was released in April 2013 and featured a new GUI, performance improvements when handling large amounts of data, a new visual analytic workspace, a new database query tool, as well as several analytics enhancements. Localized versions of Statistica (including the entire family of products) are available in Chinese (both Traditional and Simplified), Czech, English, French, German, Italian, Japanese, Polish, Russian, and Spanish. Documentation is available in Arabic, Chinese, Czech, English, French, German, Hungarian, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, and other languages. Acquisition history Statistica was acquired by Dell in March 2014.
https://en.wikipedia.org/wiki/World%20Drug%20Report
The World Drug Report is a United Nations Office on Drugs and Crime annual publication that analyzes market trends and compiles detailed statistics on drug markets. Its data help draw conclusions about drug use as an issue requiring intervention by government agencies around the world. UNAIDS stated on its website: "The use of illicit drugs needs to be understood as a social and health condition requiring sustained prevention, treatment, and care. This is one of the major conclusions emerging from the 2015 World Drug Report, published on 26 June by the United Nations Office on Drugs and Crime." History The World Drug Report is published annually by the United Nations Office on Drugs and Crime. The first report was published in 1997, the same year the agency was established. The agency was tasked with the responsibility of crime prevention, criminal justice and criminal law reform. The World Drug Report is utilized as an annual overview of the major developments of global drug markets and as a tool to publish evidence-based drug prevention plans. There have been 19 World Drug Reports published since the original report was made public. Leader of the United Nations Office on Drugs and Crime On July 9, 2010, United Nations Secretary-General Ban Ki-moon appointed Yury Fedotov of the Russian Federation as executive director of the United Nations Office on Drugs and Crime. Mr. Fedotov is also an Under-Secretary-General of the United Nations as a whole. Mr. Fedotov has been active in the UN since 1972, when he was a member of the USSR delegation to the United Nations Disarmament Committee in Geneva. Since then, he has championed international issues around global human and drug trafficking. In that vein, he strongly advocates for supporting counter-drug-trafficking efforts that build upon regional initiatives by providing technical assistance. Mr.
Fedotov also promotes the idea that successful drug policies that stem from the World Drug Report have the ability to develop entire economies. On June 26, 2015, he gave remarks to announce the release of the 2015 World Drug Report. In those remarks, he said, "The report also shows that successful projects can foster a sustainable licit economy, including the transfer of skills and access to land, credit and infrastructure, as well as marketing support and access to markets." He uses his position as executive director of the United Nations Office on Drugs and Crime to encourage UN members to support these programs and initiatives through funding under the premise that this funding will grow their domestic economies. Research methodology The World Drug Report relies mainly on data submitted by member states through the Annual Reports Questionnaire, which is revised and monitored by the Commission on Narcotic Drugs. These Member States are all required to submit national drug control related information to the United Nations Office on Drugs and Crime annually, although historically the United Nations does n
https://en.wikipedia.org/wiki/Jason%20John%20Nassau
Jason John Nassau (1893–1965) was an American astronomer. He performed his doctoral studies at Syracuse, gaining his Ph.D. in mathematics in 1920. (His thesis was Some Theorems in Alternants.) He then became an assistant professor at the Case Institute of Technology in 1921, teaching astronomy. He continued to instruct at that institution, serving as its first chair of astronomy from 1924 until 1959 and as chairman of the graduate division from 1936 until 1940. After 1959 he was professor emeritus. From 1924 until 1959 he was also the director of the Case Western Reserve University (CWRU) Warner and Swasey Observatory in Cleveland, Ohio. He was a pioneer in the study of galactic structure. He also discovered a new star cluster, co-discovered two novae in 1961, and developed a technique for studying the distribution of red (M-class or cooler) stars. In 1922, Nassau led the formation of the Cleveland Astronomical Society, "a club among those citizens of Cleveland and vicinity who were interested in astronomy." He served as the still-extant organization's first president for 41 years. Bibliography Nassau, Jason John, A Textbook of Practical Astronomy, 1934, New York. Honors The Nassau Astronomical Station at the Warner and Swasey Observatory, Observatory Park, Geauga Park District, is named for him. The Jason J. Nassau Prize was established by the Cleveland Astronomical Society in 1965. It is awarded annually to an outstanding senior student in the CWRU Department of Astronomy. The Jason J. Nassau Service Award was established by the Cleveland Astronomical Society in 2007 to recognize a person who has shown exemplary leadership and contributions in the local, national and international astronomy community. The crater Nassau on the Moon is named after him. Asteroid 9240 Nassau is named for him. It was discovered May 31, 1997. External links NASA Biographies of Aerospace Officials and Policymakers, K-N Encyclopedia of Cleveland History: Nassau, Jason J.
Astrophysical Journal, vol. 65, p. 73, March 1927: "A Study of Solar Motion by Harmonic Analysis", Nassau, J. J. & Morse, P. M. Quarterly Journal of the Royal Astronomical Society, vol. 7, p. 79, 1966: Jason John Nassau (obituary). References 1893 births 1965 deaths American astronomers Case Western Reserve University faculty
https://en.wikipedia.org/wiki/B%C3%A9zout%20matrix
In mathematics, a Bézout matrix (or Bézoutian or Bezoutiant) is a special square matrix associated with two polynomials, introduced by James Joseph Sylvester and Arthur Cayley and named after Étienne Bézout. Bézoutian may also refer to the determinant of this matrix, which is equal to the resultant of the two polynomials. Bézout matrices are sometimes used to test the stability of a given polynomial. Definition Let f(z) and g(z) be two complex polynomials of degree at most n. (Note that any coefficient of f or g could be zero.) The Bézout matrix of order n associated with the polynomials f and g is the matrix Bn(f, g) = (bij) whose entries result from the identity (f(x)g(y) − f(y)g(x))/(x − y) = Σ bij xi yj, the sum running over 0 ≤ i, j ≤ n − 1. It is an n × n complex matrix, and its entries are bilinear expressions in the coefficients of f and g. To each Bézout matrix, one can associate a bilinear form, called the Bézoutian. Examples For n = 3, one can write out the Bézout matrix explicitly for any polynomials f and g of degree at most 3. As a further example, take two polynomials f and g of degree at most 3 and form their Bézout matrix of order n = 4. Then the last row and column are all zero, as f and g have degree strictly less than n (which is 4); the other zero entries occur because, for each such position, one of the relevant coefficients of f or g is zero. Properties Bn(f, g) is symmetric (as a matrix); Bn(f, g) = −Bn(g, f); Bn(f, f) = 0; (f, g) ↦ Bn(f, g) is a bilinear function; Bn(f, g) is a real matrix if f and g have real coefficients; Bn(f, g) with n = max(deg f, deg g) is nonsingular if and only if f and g have no common roots, and in that normalization its determinant is the resultant of f and g. Applications An important application of Bézout matrices can be found in control theory. To see this, let f(z) be a complex polynomial of degree n and denote by q and p the real polynomials such that f(iy) = q(y) + ip(y) (where y is real). We also denote r for the rank and σ for the signature of the associated Bézout matrix. Then, we have the following statements: f(z) has n − r roots in common with its conjugate; the remaining r roots of f(z) are located in such a way that (r + σ)/2 of them lie in the open left half-plane and (r − σ)/2 lie in the open right half-plane; f is Hurwitz stable if and only if the Bézout matrix is positive definite. The third statement gives a necessary and sufficient condition concerning stability.
Besides, the first statement exhibits some similarities with a result concerning Sylvester matrices, while the second one can be related to the Routh–Hurwitz theorem. References Polynomials Matrices
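The defining identity (f(x)g(y) − f(y)g(x))/(x − y) = Σ bij xi yj can be turned into code: expand the numerator as a polynomial in x with coefficients that are polynomials in y, then do synthetic division by (x − y). This is a sketch with our own function name, not a standard library routine.

```python
def bezout_matrix(f, g):
    """Bezout matrix of order n for coefficient lists f, g (constant term
    first). Row i holds the y-coefficients of the x^i coefficient of
    (f(x)g(y) - f(y)g(x)) / (x - y), computed by synthetic division."""
    n = max(len(f), len(g)) - 1
    a = f + [0] * (n + 1 - len(f))
    b = g + [0] * (n + 1 - len(g))
    # p[k] = coefficient (a polynomial in y, stored as a list) of x^k in
    # f(x)g(y) - g(x)f(y).
    p = [[a[k] * b[j] - b[k] * a[j] for j in range(n + 1)]
         for k in range(n + 1)]
    # Synthetic division by (x - y): q[n-1] = p[n]; q[k] = p[k+1] + y*q[k+1].
    q = [None] * n
    q[n - 1] = p[n]
    for k in range(n - 2, -1, -1):
        shifted = [0] + q[k + 1][:-1]          # multiply q[k+1] by y
        q[k] = [p[k + 1][j] + shifted[j] for j in range(n + 1)]
    return [q[i][:n] for i in range(n)]

# f = x^2 - 1, g = x: Bezoutian is xy + 1, so the matrix is the identity.
B = bezout_matrix([-1, 0, 1], [0, 1, 0])       # [[1, 0], [0, 1]]
```

As the properties above predict, the matrix is symmetric, and it becomes singular exactly when f and g share a root (e.g. f = x² − 1 and g = x − 1 share the root 1).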
https://en.wikipedia.org/wiki/Beck%27s%20monadicity%20theorem
In category theory, a branch of mathematics, Beck's monadicity theorem gives a criterion that characterises monadic functors; it was introduced by Jonathan Mock Beck in about 1964. It is often stated in dual form for comonads. It is sometimes called the Beck tripleability theorem because of the older term triple for a monad. Beck's monadicity theorem asserts that a functor U : C → D is monadic if and only if U has a left adjoint; U reflects isomorphisms (if U(f) is an isomorphism then so is f); and C has coequalizers of U-split parallel pairs (those parallel pairs of morphisms in C which U sends to pairs having a split coequalizer in D), and U preserves those coequalizers. There are several variations of Beck's theorem: if U has a left adjoint, then any of the following conditions ensures that U is monadic: U reflects isomorphisms and C has coequalizers of reflexive pairs (those with a common right inverse) and U preserves those coequalizers. (This gives the crude monadicity theorem.) Every diagram in C which is sent by U to a split coequalizer sequence in D is itself a coequalizer sequence in C. In other words, U creates (preserves and reflects) U-split coequalizer sequences. Another variation of Beck's theorem characterizes strictly monadic functors: those for which the comparison functor is an isomorphism rather than just an equivalence of categories. For this version the definition of what it means to create coequalizers is changed slightly: the coequalizer has to be unique rather than just unique up to isomorphism. Beck's theorem is particularly important in its relation with descent theory, which plays a role in sheaf and stack theory, as well as in Alexander Grothendieck's approach to algebraic geometry. Most cases of faithfully flat descent of algebraic structures (e.g. those in FGA and in SGA1) are special cases of Beck's theorem. The theorem gives an exact categorical description of the process of 'descent', at this level.
In 1970 the Grothendieck approach via fibered categories and descent data was shown (by Jean Bénabou and Jacques Roubaud) to be equivalent (under some conditions) to the comonad approach. In a later work, Pierre Deligne applied Beck's theorem to Tannakian category theory, greatly simplifying the basic developments. Examples The forgetful functor from topological spaces to sets is not monadic, as it does not reflect isomorphisms: continuous bijections between (non-compact or non-Hausdorff) topological spaces need not be homeomorphisms. Negrepontis shows that the functor from commutative C*-algebras to sets sending such an algebra A to its unit ball, i.e., the set {a ∈ A : ‖a‖ ≤ 1}, is monadic. Negrepontis also deduces Gelfand duality, i.e., the equivalence of categories between the opposite category of compact Hausdorff spaces and commutative C*-algebras, from this. The powerset functor from Setop to Set is monadic, where Set is the category of sets. More generally Beck's theorem can be used to show that the powerset functor from Top to T is
https://en.wikipedia.org/wiki/Beck%27s%20theorem%20%28geometry%29
In discrete geometry, Beck's theorem is any of several different results, two of which are given below. Both appeared, alongside several other important theorems, in a well-known paper by József Beck. The two results described below primarily concern lower bounds on the number of lines determined by a set of points in the plane. (Any line containing at least two points of a point set is said to be determined by that point set.) Erdős–Beck theorem The Erdős–Beck theorem is a variation of a classical result by L. M. Kelly and W. O. J. Moser involving configurations of n points of which at most n − k are collinear, for some 0 < k < O(√n). They showed that if n is sufficiently large, relative to k, then the configuration spans at least kn − (1/2)(3k + 2)(k − 1) lines. Elekes and Csaba Tóth noted that the Erdős–Beck theorem does not easily extend to higher dimensions. Take for example a set of 2n points in R3 all lying on two skew lines. Assume that these two lines are each incident to n points. Such a configuration of points spans only 2n planes. Thus, a trivial extension of the hypothesis to point sets in Rd is not sufficient to obtain the desired result. This result was first conjectured by Erdős, and proven by Beck. (See Theorem 5.2 in Beck's paper.) Statement Let S be a set of n points in the plane. If no more than n − k points lie on any line for some 0 ≤ k < n − 2, then there exist Ω(nk) lines determined by the points of S. Proof Beck's theorem Beck's theorem says that finite collections of points in the plane fall into one of two extremes; one where a large fraction of points lie on a single line, and one where a large number of lines are needed to connect all the points. Although not mentioned in Beck's paper, this result is implied by the Erdős–Beck theorem.
Statement The theorem asserts the existence of positive constants C, K such that given any n points in the plane, at least one of the following statements is true: There is a line which contains at least n/C of the points. There exist at least n^2/K lines, each of which contains at least two of the points. In Beck's original argument, C is 100 and K is an unspecified constant; it is not known what the optimal values of C and K are. Proof A proof of Beck's theorem can be given as follows. Consider a set P of n points in the plane. Let j be a positive integer. Let us say that a pair of points A, B in the set P is j-connected if the line connecting A and B contains between 2^j and 2^(j+1) points of P (including A and B). From the Szemerédi–Trotter theorem, the number of such lines can be bounded, as follows: Consider the set P of n points, and the set L of all those lines spanned by pairs of points of P that contain at least 2^j points of P. Since no two points can lie on two distinct lines, |L| ≤ n(n − 1)/2. Now using the Szemerédi–Trotter theorem, it follows that the number of incidences between P and L is bounded. All the lines connecting j-connected points also lie in L, and each contributes at least 2^j incidences. Therefore, the tot
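The quantity at the heart of both theorems, the number of lines determined by a point set, is easy to compute directly for small examples. A sketch (function name is our own): each line through two integer points is normalized to a canonical triple (a, b, c) with ax + by = c.

```python
from math import gcd
from itertools import combinations

def determined_lines(points):
    """Number of distinct lines determined by a set of integer points
    (a line is determined if it contains at least two of the points)."""
    lines = set()
    for (x1, y1), (x2, y2) in combinations(points, 2):
        # Line through the two points: a*x + b*y = c.
        a, b = y2 - y1, x1 - x2
        c = a * x1 + b * y1
        g = gcd(gcd(abs(a), abs(b)), abs(c))
        a, b, c = a // g, b // g, c // g
        if a < 0 or (a == 0 and b < 0):        # canonical sign
            a, b, c = -a, -b, -c
        lines.add((a, b, c))
    return len(lines)
```

The two extremes of Beck's dichotomy appear immediately: n collinear points determine a single line, while the 3 × 3 grid, with no 4 points collinear, determines 20 lines (8 lines of three points plus 12 lines of two).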
https://en.wikipedia.org/wiki/Isogonal%20conjugate
In geometry, the isogonal conjugate of a point P with respect to a triangle ABC is constructed by reflecting the lines PA, PB, PC about the angle bisectors of A, B, C, respectively. These three reflected lines concur at the isogonal conjugate of P. (This definition applies only to points not on a sideline of triangle ABC.) This is a direct result of the trigonometric form of Ceva's theorem. The isogonal conjugate of a point P is sometimes denoted by P*. The isogonal conjugate of P* is P. The isogonal conjugate of the incentre I is itself. The isogonal conjugate of the orthocentre H is the circumcentre O. The isogonal conjugate of the centroid G is (by definition) the symmedian point K. The isogonal conjugates of the Fermat points are the isodynamic points and vice versa. The Brocard points are isogonal conjugates of each other. In trilinear coordinates, if X = x : y : z is a point not on a sideline of triangle ABC, then its isogonal conjugate is 1/x : 1/y : 1/z. For this reason, the isogonal conjugate of X is sometimes denoted by X−1. The set S of triangle centers under the trilinear product, defined by (p : q : r) * (u : v : w) = pu : qv : rw, is a commutative group, and the inverse of each X in S is X−1. As isogonal conjugation is a function, it makes sense to speak of the isogonal conjugate of sets of points, such as lines and circles. For example, the isogonal conjugate of a line is a circumconic; specifically, an ellipse, parabola, or hyperbola according as the line intersects the circumcircle in 0, 1, or 2 points. The isogonal conjugate of the circumcircle is the line at infinity. Several well-known cubics (e.g., Thompson cubic, Darboux cubic, Neuberg cubic) are self-isogonal-conjugate, in the sense that if X is on the cubic, then X−1 is also on the cubic. Another construction for the isogonal conjugate of a point For a given point P in the plane of triangle ABC, let the reflections of P in the sidelines be Pa, Pb, Pc. Then the center of the circle through Pa, Pb, Pc is the isogonal conjugate of P.
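The trilinear rule, that (x : y : z) maps to (1/x : 1/y : 1/z), gives a direct numerical construction: read off the signed distances of P to the three sidelines, invert them, and convert back to Cartesian coordinates via barycentric weights. A sketch with our own function names:

```python
import math

def signed_dist(P, Q, R, opp):
    """Distance from P to line QR, signed positive on the side of `opp`."""
    ux, uy = R[0] - Q[0], R[1] - Q[1]
    val = ux * (P[1] - Q[1]) - uy * (P[0] - Q[0])
    ref = ux * (opp[1] - Q[1]) - uy * (opp[0] - Q[0])
    return (val / math.hypot(ux, uy)) * (1 if ref > 0 else -1)

def isogonal_conjugate(P, A, B, C):
    """Isogonal conjugate of P w.r.t. triangle ABC via trilinears:
    (x : y : z) -> (1/x : 1/y : 1/z), then back to Cartesian coordinates."""
    a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
    x = signed_dist(P, B, C, A)          # exact trilinear coordinates of P
    y = signed_dist(P, C, A, B)
    z = signed_dist(P, A, B, C)
    u, v, w = a / x, b / y, c / z        # barycentrics of the conjugate
    s = u + v + w
    return ((u * A[0] + v * B[0] + w * C[0]) / s,
            (u * A[1] + v * B[1] + w * C[1]) / s)

# For the triangle (0,0), (4,0), (1,3): the orthocenter (1,1) maps to the
# circumcenter (2,1), as the list of conjugate pairs above predicts.
O = isogonal_conjugate((1, 1), (0, 0), (4, 0), (1, 3))
```

The conversion back to Cartesian uses the standard fact that a point with trilinears (α : β : γ) has barycentric weights (aα : bβ : cγ).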
See also Isotomic conjugate Central line (geometry) Triangle center References External links Interactive Java Applet illustrating isogonal conjugate and its properties MathWorld Pedal Triangle and Isogonal Conjugacy Triangle geometry
https://en.wikipedia.org/wiki/Going%20up%20and%20going%20down
In commutative algebra, a branch of mathematics, going up and going down are terms which refer to certain properties of chains of prime ideals in integral extensions. The phrase going up refers to the case when a chain can be extended by "upward inclusion", while going down refers to the case when a chain can be extended by "downward inclusion". The major results are the Cohen–Seidenberg theorems, which were proved by Irvin S. Cohen and Abraham Seidenberg. These are known as the going-up and going-down theorems. Going up and going down Let A ⊆ B be an extension of commutative rings. The going-up and going-down theorems give sufficient conditions under which a chain of prime ideals in B, each member of which lies over a member of a longer chain of prime ideals in A, can be extended to the full length of the chain of prime ideals in A. Lying over and incomparability First, we fix some terminology. If p and q are prime ideals of A and B, respectively, such that q ∩ A = p (note that q ∩ A is automatically a prime ideal of A), then we say that p lies under q and that q lies over p. In general, a ring extension A ⊆ B of commutative rings is said to satisfy the lying over property if every prime ideal of A lies under some prime ideal of B. The extension A ⊆ B is said to satisfy the incomparability property if whenever q and q′ are distinct primes of B lying over a prime p in A, then q ⊄ q′ and q′ ⊄ q. Going-up The ring extension A ⊆ B is said to satisfy the going-up property if whenever p1 ⊆ p2 ⊆ ... ⊆ pn is a chain of prime ideals of A and q1 ⊆ q2 ⊆ ... ⊆ qm is a chain of prime ideals of B with m < n and such that qi lies over pi for 1 ≤ i ≤ m, then the latter chain can be extended to a chain q1 ⊆ ... ⊆ qn such that qi lies over pi for each 1 ≤ i ≤ n. It can be shown that if an extension A ⊆ B satisfies the going-up property, then it also satisfies the lying-over property.
Going-down The ring extension A ⊆ B is said to satisfy the going-down property if whenever p1 ⊇ p2 ⊇ ... ⊇ pn is a chain of prime ideals of A and q1 ⊇ q2 ⊇ ... ⊇ qm is a chain of prime ideals of B with m < n and such that qi lies over pi for 1 ≤ i ≤ m, then the latter chain can be extended to a chain q1 ⊇ ... ⊇ qn such that qi lies over pi for each 1 ≤ i ≤ n. There is a generalization of the ring extension case with ring morphisms. Let f : A → B be a (unital) ring homomorphism so that B is a ring extension of f(A). Then f is said to satisfy the going-up property if the going-up property holds for f(A) in B. Similarly, if B is a ring extension of f(A), then f is said to satisfy the going-down property if the going-down property holds for f(A) in B. In the case of ordinary ring extensions such as A ⊆ B, the inclusion map is the pertinent map. Going-up and going-down theorems The usual statements of going-up and going-down theorems refer to a ring extension A ⊆ B: (Going up) If B is an integral extension of A, then the extension satisfies the going-up property (and hence the lying over property), and the incomparability property. (Going down) If B is an integral extension of A, B is a domain, and A is integrally closed in its field of fractions, then the extension satisfies the going-down property.
https://en.wikipedia.org/wiki/Going%20Up
Going Up may refer to: Going up and going down, terms in commutative algebra which refer to certain properties of chains of prime ideals in integral extensions Going Up (musical), a musical comedy that opened in New York in 1917 and in London in 1918 Going Up (film), a 1923 film starring Douglas MacLean "Going Up" (TV episode), an episode of PBS's POV series Going Up (2007 film), starring Nandita Chandra "Going Up", a song by Echo & the Bunnymen from their 1980 album Crocodiles "Going Up", a common announcement played in elevators
https://en.wikipedia.org/wiki/MCMC
MCMC may refer to: Malaysian Communications and Multimedia Commission, a regulatory agency of the Malaysian government Markov chain Monte Carlo, a class of algorithms and methods in statistics See also MC (disambiguation) MC2 (disambiguation)
https://en.wikipedia.org/wiki/Direct%20sum
The direct sum is an operation between structures in abstract algebra, a branch of mathematics. It is defined differently, but analogously, for different kinds of structures. To see how the direct sum is used in abstract algebra, consider a more elementary kind of structure, the abelian group. The direct sum of two abelian groups A and B is another abelian group A ⊕ B consisting of the ordered pairs (a, b) where a ∈ A and b ∈ B. To add ordered pairs, we define the sum (a, b) + (c, d) to be (a + c, b + d); in other words, addition is defined coordinate-wise. For example, the direct sum R ⊕ R, where R is the real line, is the Cartesian plane R2. A similar process can be used to form the direct sum of two vector spaces or two modules. We can also form direct sums with any finite number of summands, for example A ⊕ B ⊕ C, provided A, B, and C are the same kinds of algebraic structures (e.g., all abelian groups, or all vector spaces). This relies on the fact that the direct sum is associative up to isomorphism. That is, (A ⊕ B) ⊕ C ≅ A ⊕ (B ⊕ C) for any algebraic structures A, B, and C of the same kind. The direct sum is also commutative up to isomorphism, i.e. A ⊕ B ≅ B ⊕ A for any algebraic structures A and B of the same kind. The direct sum of finitely many abelian groups, vector spaces, or modules is canonically isomorphic to the corresponding direct product. This is false, however, for some algebraic objects, like nonabelian groups. In the case where infinitely many objects are combined, the direct sum and direct product are not isomorphic, even for abelian groups, vector spaces, or modules. As an example, consider the direct sum and direct product of (countably) infinitely many copies of the integers. An element in the direct product is an infinite sequence, such as (1,2,3,...), but in the direct sum, there is a requirement that all but finitely many coordinates be zero, so the sequence (1,2,3,...) would be an element of the direct product but not of the direct sum, while (1,2,0,0,0,...) would be an element of both.
Often, if a + sign is used, all but finitely many coordinates must be zero, while if some form of multiplication is used, all but finitely many coordinates must be 1. In more technical language, if the summands are Ai, the direct sum is defined to be the set of tuples (ai) with ai ∈ Ai such that ai = 0 for all but finitely many i. The direct sum is contained in the direct product, but is strictly smaller when the index set is infinite, because an element of the direct product can have infinitely many nonzero coordinates. Examples The xy-plane, a two-dimensional vector space, can be thought of as the direct sum of two one-dimensional vector spaces, namely the x and y axes. In this direct sum, the x and y axes intersect only at the origin (the zero vector). Addition is defined coordinate-wise, that is (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2), which is the same as vector addition. Given two structures A and B, their direct sum is written as A ⊕ B. Given an indexed family of structures Ai, indexed with i ∈ I, the direct sum may be written A = ⊕i∈I Ai. Each Ai is called a direct summand of A. If the index set is
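The finite-support condition that separates the direct sum from the direct product can be made concrete. A minimal sketch (class name is our own): an element of the direct sum of countably many copies of the integers, stored sparsely, so that all but finitely many coordinates are zero by construction.

```python
class DirectSumZ:
    """Element of the direct sum of countably many copies of Z: an integer
    sequence with all but finitely many coordinates zero, stored sparsely
    as {index: value} with zero values dropped."""
    def __init__(self, coords=None):
        self.coords = {i: v for i, v in (coords or {}).items() if v != 0}

    def __add__(self, other):
        # Coordinate-wise addition; the support of the sum is still finite.
        keys = set(self.coords) | set(other.coords)
        return DirectSumZ({i: self.coords.get(i, 0) + other.coords.get(i, 0)
                           for i in keys})

    def __eq__(self, other):
        return self.coords == other.coords

x = DirectSumZ({0: 1, 1: 2})         # the sequence (1, 2, 0, 0, ...)
y = DirectSumZ({1: -2, 3: 5})        # the sequence (0, -2, 0, 5, 0, ...)
s = x + y                            # (1, 0, 0, 5, 0, ...)
```

An element of the direct product like (1, 2, 3, ...) has no such sparse representation: it would need infinitely many nonzero entries, which is exactly why it fails to lie in the direct sum.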
https://en.wikipedia.org/wiki/Artin%27s%20conjecture%20on%20primitive%20roots
In number theory, Artin's conjecture on primitive roots states that a given integer a that is neither a square number nor −1 is a primitive root modulo infinitely many primes p. The conjecture also ascribes an asymptotic density to these primes. This conjectural density equals Artin's constant or a rational multiple thereof. The conjecture was made by Emil Artin to Helmut Hasse on September 27, 1927, according to the latter's diary. The conjecture is still unresolved as of 2023. In fact, there is no single value of a for which Artin's conjecture is proved. Formulation Let a be an integer that is not a square number and not −1. Write a = a₀b² with a₀ square-free. Denote by S(a) the set of prime numbers p such that a is a primitive root modulo p. Then the conjecture states that S(a) has a positive asymptotic density inside the set of primes. In particular, S(a) is infinite. Under the conditions that a is not a perfect power and that a₀ is not congruent to 1 modulo 4, this density is independent of a and equals Artin's constant, which can be expressed as the infinite product CArtin = ∏p (1 − 1/(p(p − 1))) ≈ 0.3739558136, taken over all primes p. Similar conjectural product formulas exist for the density when a does not satisfy the above conditions. In these cases, the conjectural density is always a rational multiple of CArtin. Example For example, take a = 2. The conjecture claims that the set of primes p for which 2 is a primitive root has the above density CArtin. The set of such primes is S(2) = {3, 5, 11, 13, 19, 29, 37, 53, 59, 61, 67, 83, 101, 107, 131, 139, 149, 163, 173, 179, 181, 197, 211, 227, 269, 293, 317, 347, 349, 373, 379, 389, 419, 421, 443, 461, 467, 491, ...}. It has 38 elements smaller than 500, and there are 95 primes smaller than 500. The ratio (which conjecturally tends to CArtin) is 38/95 = 2/5 = 0.4. Partial results In 1967, Christopher Hooley published a conditional proof for the conjecture, assuming certain cases of the generalized Riemann hypothesis.
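The example above (a = 2, primes below 500) can be checked directly: 2 is a primitive root modulo an odd prime p exactly when its multiplicative order modulo p is p − 1. A brute-force sketch:

```python
def primes_below(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * n
    sieve[:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def is_primitive_root(a, p):
    """True iff the multiplicative order of a mod p is p - 1."""
    power = 1
    for k in range(1, p):
        power = (power * a) % p
        if power == 1:
            return k == p - 1
    return False  # a has no order mod p (e.g. p divides a)

primes = primes_below(500)
s2 = [p for p in primes if is_primitive_root(2, p)]
print(len(s2), len(primes), len(s2) / len(primes))  # 38 95 0.4
```

The computed ratio 38/95 = 0.4 is only loosely close to CArtin ≈ 0.374; the conjectured convergence is asymptotic.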
Without the generalized Riemann hypothesis, there is no single value of a for which Artin's conjecture is proved. D. R. Heath-Brown proved (Corollary 1) that at least one of 2, 3, or 5 is a primitive root modulo infinitely many primes p. He also proved (Corollary 2) that there are at most two primes for which Artin's conjecture fails. Some variations of Artin's problem Elliptic curve For an elliptic curve E given by y² = x³ + ax + b, Lang and Trotter gave a conjecture for rational points on E analogous to Artin's primitive root conjecture. Specifically, they said that for a given point of infinite order P in the set of rational points E(Q), there exists a constant such that the number of primes p for which the reduction of the point, denoted by P mod p, generates the whole group of points of the reduction of E modulo p is governed asymptotically by that constant. Here we exclude the primes which divide the denominators of the coordinates of P. Moreover, Lang and Trotter gave a conjectural expression for this constant. Gupta and Murty proved the Lang and Trotter conjecture for elliptic curves with complex multiplication under the Generalized Riemann Hypothesis. More
https://en.wikipedia.org/wiki/Isobel%20Lang
Isobel Dinah Lang (born 16 July 1970 in Lincoln) is a weather presenter for Sky News. Early life Lang grew up in Sussex and Hertfordshire. She graduated with a BSc degree in mathematics in 1991 from the University of Exeter, before joining the Met Office in 1991, where she prepared forecasts for the press and presented the weather on local radio. Broadcasting She joined the BBC on 31 May 1995, becoming one of the better known faces of BBC Weather, in part due to her distinctive red hair. She appeared across all of the BBC's TV and radio news outlets, including regular primetime forecasts on BBC One and appearances on BBC World and BBC News 24. Lang left the BBC Weather Centre in August 2006 and joined Sky News in September 2006, where she also presented forecasts for Channel 5 until February 2012. Personal life She married Christopher Clarke in September 1997 in Hatfield, Hertfordshire. She had a son in September 2002 and a daughter in January 2004. They live in south west London.
https://en.wikipedia.org/wiki/Men%20of%20Mathematics
Men of Mathematics: The Lives and Achievements of the Great Mathematicians from Zeno to Poincaré is a book on the history of mathematics published in 1937 by Scottish-born American mathematician and science fiction writer E. T. Bell (1883–1960). After a brief chapter on three ancient mathematicians, it covers the lives of about forty mathematicians who flourished in the seventeenth through nineteenth centuries. The book is illustrated by mathematical discussions, with emphasis on mainstream mathematics. To keep the interest of readers, the book typically focuses on unusual or dramatic aspects of its subjects' lives. Men of Mathematics has inspired many young people, including John Forbes Nash Jr., Julia Robinson, and Freeman Dyson, to become mathematicians. It is not intended as a rigorous history, and includes many anecdotal accounts. Publication In July 1935, Bell signed a contract with Simon and Schuster, for a book to be titled The Lives of Mathematicians. He delivered the manuscript at the beginning of November 1935 as promised, but was unhappy when the publishers made him cut about a third of it (125,000 words), and, in order to tie in with their book Men of Art (by Thomas Craven), gave it the title Men of Mathematics which he did not like. He was also unhappy with how long they took to print it: even before he had received his first printed copy in March 1937, he had written and got into print another book, The Handmaiden of the Sciences. 
Contents Eudoxus (408–355 BC) Archimedes (287?–212 BC) Descartes (1596–1650) Fermat (1601–1665) Pascal (1623–1662) Newton (1642–1727) Leibniz (1646–1716) The Bernoullis (17th and 18th century) Euler (1707–1783) Lagrange (1736–1813) Laplace (1749–1827) Monge (1746–1818) Fourier (1768–1830) Poncelet (1788–1867) Gauss (1777–1855) Cauchy (1789–1857) Lobachevsky (1793–1856) Abel (1802–1829) Hamilton (1805–1865) Galois (1811–1832) Sylvester (1814–1897) Cayley (1821–1895) Weierstrass (1815–1897) Sonja Kowalewski (1850–1891) Boole (1815–1864) Hermite (1822–1901) Kronecker (1823–1891) Riemann (1826–1866) Kummer (1810–1893) Dedekind (1831–1916) Poincaré (1854–1912) Cantor (1845–1918) Reception Men of Mathematics remains widely read. It has received general praise and some criticism. In the opinion of Ivor Grattan-Guinness, the mathematics profession was poorly served by Bell's book: ...perhaps the most widely read modern book on the history of mathematics. As it is also one of the worst, it can be said to have done a considerable disservice to the profession. Eric Bell was criticized in 1983 for incorrectly ascribing the origin of spacetime to Joseph Lagrange: There is a general impression based on the widely read book of E. T. Bell that Lagrange, in his Méchanique Analytique, was the first to have connected time to space as a fourth coordinate. ...However, Lagrange did not express these thoughts quite as precisely as Bell seems to imply. ...Thus, it is far from certain after consulting the original text whether or not Lagra
https://en.wikipedia.org/wiki/Small%20group
Small group can mean: In psychology, a group of 3 to 9 individuals, see communication in small groups In mathematics, a group of small order, see list of small groups In connection with churches, a cell group In jazz, a small ensemble also known as a combo
https://en.wikipedia.org/wiki/Fibred%20category
Fibred categories (or fibered categories) are abstract entities in mathematics used to provide a general framework for descent theory. They formalise the various situations in geometry and algebra in which inverse images (or pull-backs) of objects such as vector bundles can be defined. As an example, for each topological space there is the category of vector bundles on the space, and for every continuous map from a topological space X to another topological space Y is associated the pullback functor taking bundles on Y to bundles on X. Fibred categories formalise the system consisting of these categories and inverse image functors. Similar setups appear in various guises in mathematics, in particular in algebraic geometry, which is the context in which fibred categories originally appeared. Fibered categories are used to define stacks, which are fibered categories (over a site) with "descent". Fibrations also play an important role in categorical semantics of type theory, and in particular that of dependent type theories. Fibred categories were introduced by , and developed in more detail by . Background and motivations There are many examples in topology and geometry where some types of objects are considered to exist on or above or over some underlying base space. The classical examples include vector bundles, principal bundles, and sheaves over topological spaces. Another example is given by "families" of algebraic varieties parametrised by another variety. Typical to these situations is that to a suitable type of a map between base spaces, there is a corresponding inverse image (also called pull-back) operation taking the considered objects defined on to the same type of objects on . This is indeed the case in the examples above: for example, the inverse image of a vector bundle on is a vector bundle on . Moreover, it is often the case that the considered "objects on a base space" form a category, or in other words have maps (morphisms) between them. 
In such cases the inverse image operation is often compatible with composition of these maps between objects, or in more technical terms is a functor. Again, this is the case in the examples listed above. However, it is often the case that if g : Y → Z is another map, the inverse image functors are not strictly compatible with composed maps: if z is an object over Z (a vector bundle, say), it may well be that f*(g*z) ≠ (g ∘ f)*z. Instead, these inverse images are only naturally isomorphic. This introduction of some "slack" in the system of inverse images causes some delicate issues to appear, and it is this set-up that fibred categories formalise. The main application of fibred categories is in descent theory, concerned with a vast generalisation of "glueing" techniques used in topology. In order to support descent theory of sufficient generality to be applied in non-trivial situations in algebraic geometry, the definition of fibred categories is quite general and abstract. However, the underlying intuition is quite s
https://en.wikipedia.org/wiki/Kruskal%E2%80%93Szekeres%20coordinates
In general relativity, Kruskal–Szekeres coordinates, named after Martin Kruskal and George Szekeres, are a coordinate system for the Schwarzschild geometry for a black hole. These coordinates have the advantage that they cover the entire spacetime manifold of the maximally extended Schwarzschild solution and are well-behaved everywhere outside the physical singularity. There is no misleading coordinate singularity at the horizon. The Kruskal–Szekeres coordinates also apply to space-time around a spherical object, but in that case do not give a description of space-time inside the radius of the object. Space-time in a region where a star is collapsing into a black hole is approximated by the Kruskal–Szekeres coordinates (or by the Schwarzschild coordinates). The surface of the star remains outside the event horizon in the Schwarzschild coordinates, but crosses it in the Kruskal–Szekeres coordinates. (In any "black hole" which we observe, we see it at a time when its matter has not yet finished collapsing, so it is not really a black hole yet.) Similarly, objects falling into a black hole remain outside the event horizon in Schwarzschild coordinates, but cross it in Kruskal–Szekeres coordinates. Definition Kruskal–Szekeres coordinates on a black hole geometry are defined, from the Schwarzschild coordinates , by replacing t and r by a new timelike coordinate T and a new spacelike coordinate : for the exterior region outside the event horizon and: for the interior region . Here is the gravitational constant multiplied by the Schwarzschild mass parameter, and this article is using units where = 1. It follows that on the union of the exterior region, the event horizon and the interior region the Schwarzschild radial coordinate (not to be confused with the Schwarzschild radius ), is determined in terms of Kruskal–Szekeres coordinates as the (unique) solution of the equation: Using the Lambert W function the solution is written as: . 
Moreover one sees immediately that in the region external to the black hole whereas in the region internal to the black hole In these new coordinates the metric of the Schwarzschild black hole manifold is given by written using the (− + + +) metric signature convention and where the angular component of the metric (the Riemannian metric of the 2-sphere) is: . Expressing the metric in this form shows clearly that radial null geodesics i.e. with constant are parallel to one of the lines . In the Schwarzschild coordinates, the Schwarzschild radius is the radial coordinate of the event horizon . In the Kruskal–Szekeres coordinates the event horizon is given by . Note that the metric is perfectly well defined and non-singular at the event horizon. The curvature singularity is located at . The maximally extended Schwarzschild solution The transformation between Schwarzschild coordinates and Kruskal–Szekeres coordinates defined for r > 2GM and can be extended, as an analytic function, at least to the first s
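For reference, the standard form of the definitions discussed above (a reconstruction of the usual formulas in units with c = 1 and G retained, matching the text's conventions; not quoted verbatim from the extract):

```latex
% Exterior region (r > 2GM):
T = \left(\tfrac{r}{2GM}-1\right)^{1/2} e^{r/4GM}\sinh\!\left(\tfrac{t}{4GM}\right),\qquad
X = \left(\tfrac{r}{2GM}-1\right)^{1/2} e^{r/4GM}\cosh\!\left(\tfrac{t}{4GM}\right).
% Interior region (0 < r < 2GM):
T = \left(1-\tfrac{r}{2GM}\right)^{1/2} e^{r/4GM}\cosh\!\left(\tfrac{t}{4GM}\right),\qquad
X = \left(1-\tfrac{r}{2GM}\right)^{1/2} e^{r/4GM}\sinh\!\left(\tfrac{t}{4GM}\right).
% Implicit relation determining r, and its solution via the Lambert W function:
T^2 - X^2 = \left(1-\tfrac{r}{2GM}\right) e^{r/2GM},\qquad
r = 2GM\left(1 + W_0\!\left(\tfrac{X^2 - T^2}{e}\right)\right).
% Metric in Kruskal–Szekeres coordinates, signature (- + + +):
ds^2 = \frac{32\,G^3M^3}{r}\, e^{-r/2GM}\left(-dT^2 + dX^2\right) + r^2\, d\Omega^2,
\qquad d\Omega^2 = d\theta^2 + \sin^2\!\theta\, d\varphi^2 .
```

With these expressions, T² − X² > 0 holds inside the horizon and T² − X² < 0 outside, and the horizon itself is T = ±X, consistent with the surrounding discussion.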
https://en.wikipedia.org/wiki/Kernel%20density%20estimation
In statistics, kernel density estimation (KDE) is the application of kernel smoothing for probability density estimation, i.e., a non-parametric method to estimate the probability density function of a random variable based on kernels as weights. KDE answers a fundamental data smoothing problem where inferences about the population are made, based on a finite data sample. In some fields such as signal processing and econometrics it is also termed the Parzen–Rosenblatt window method, after Emanuel Parzen and Murray Rosenblatt, who are usually credited with independently creating it in its current form. One of the famous applications of kernel density estimation is in estimating the class-conditional marginal densities of data when using a naive Bayes classifier, which can improve its prediction accuracy. Definition Let (x1, x2, ..., xn) be independent and identically distributed samples drawn from some univariate distribution with an unknown density ƒ at any given point x. We are interested in estimating the shape of this function ƒ. Its kernel density estimator is where K is the kernel — a non-negative function — and is a smoothing parameter called the bandwidth. A kernel with subscript h is called the scaled kernel and defined as . Intuitively one wants to choose h as small as the data will allow; however, there is always a trade-off between the bias of the estimator and its variance. The choice of bandwidth is discussed in more detail below. A range of kernel functions are commonly used: uniform, triangular, biweight, triweight, Epanechnikov, normal, and others. The Epanechnikov kernel is optimal in a mean square error sense, though the loss of efficiency is small for the kernels listed previously. Due to its convenient mathematical properties, the normal kernel is often used, which means , where ϕ is the standard normal density function. The construction of a kernel density estimate finds interpretations in fields outside of density estimation. 
For example, in thermodynamics, this is equivalent to the amount of heat generated when heat kernels (the fundamental solution to the heat equation) are placed at each data point locations xi. Similar methods are used to construct discrete Laplace operators on point clouds for manifold learning (e.g. diffusion map). Example Kernel density estimates are closely related to histograms, but can be endowed with properties such as smoothness or continuity by using a suitable kernel. The diagram below based on these 6 data points illustrates this relationship: For the histogram, first, the horizontal axis is divided into sub-intervals or bins which cover the range of the data: In this case, six bins each of width 2. Whenever a data point falls inside this interval, a box of height 1/12 is placed there. If more than one data point falls inside the same bin, the boxes are stacked on top of each other. For the kernel density estimate, normal kernels with variance 2.25 (indicated by the red dashed lines)
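The estimator described above is straightforward to implement. The sketch below is illustrative only: the six sample values and the bandwidth are assumptions, not read from the article's diagram. It evaluates the Gaussian-kernel estimate f̂ₕ(x) = (1/(nh)) Σᵢ K((x − xᵢ)/h) and checks numerically that it is a genuine probability density:

```python
import math

def gaussian_kernel(u):
    # Standard normal density phi(u)
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def kde(x, data, h):
    # f_hat(x) = (1 / (n*h)) * sum_i K((x - x_i) / h)
    n = len(data)
    return sum(gaussian_kernel((x - xi) / h) for xi in data) / (n * h)

data = [-2.1, -1.3, -0.4, 1.9, 5.1, 6.2]  # assumed sample points
h = 1.5                                    # assumed bandwidth

# A KDE is non-negative and integrates to 1 (checked by a Riemann sum):
grid = [-15 + 0.01 * k for k in range(3001)]
mass = sum(kde(x, data, h) for x in grid) * 0.01
print(round(mass, 3))
```

Shrinking h makes the estimate spikier (lower bias, higher variance); enlarging it oversmooths, which is the bias–variance trade-off mentioned above.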
https://en.wikipedia.org/wiki/Ubay%2C%20Bohol
Ubay, officially the Municipality of Ubay, is a fast-growing 1st class municipality in the province of Bohol, Philippines. Based on the 2020 Philippine Statistics Authority census, it has a population of 81,799 people, which is projected to grow to 100,000 in 2030. Ubay is in the northeast of the province, and has an uncontested area of 258.132847 square kilometers (25,813.2847 hectares) and a contested area of 5.87 square kilometers (587.8688 hectares) with another municipality, per a certification issued by the Land Management Bureau (LMB) of the DENR. It has a of coastline. It is the largest (estimated to be eight times (8x) larger than the capital city of Tagbilaran) and most populated municipality in Bohol. Etymology One derivation is that the town's name is a contraction of the term ubay-ubay, meaning "alongside". According to Kaufmann's Visayan-English dictionary, the Visayan word "ubay" means: The flow of seawater between the mainland and the island of Lapinig Grande (now Pres. C.P. Garcia town) could justify the second definition of Ubay. It is a situation that is permanent, and the constant reference to the flow of water could cause the term ubay to become attached as the name of the place. An alternative derivation is that the term 'ubay-ubay', or 'alongside', became the byword of seafarers who used to travel close to the shorelines of Ubay to avoid the strong current of the Canigao Channel. There was a single path to follow to reach the island trading centres. This trail was located alongside (ubay) the sandy beach. Later on, the term Ubay became the name of the community. History Historically, Ubay was part of Talibon, when the latter was established as a town in the civil aspect in 1722. During the Spanish period, a town had two aspects: religious, headed by the parish priest, and civil, headed by a gobernadorcillo. The religious aspect was then superior to the civil aspect, since it was commonly headed by a Spanish priest.
In 1744, the Dagohoy Revolution started, controlling the entire northeastern part of the province, stretching from Duero to Inabanga. As the revolution progressed, the Jesuits were replaced by the Augustinian Recollects in Bohol in 1768, led by Fr. Pedro de Santa Barbara, who travelled through the mountains with proposals of peace and resettlement for Dagohoy and his followers. Through his untiring efforts, pacified patriots, together with their cluster chieftains, later chose to resettle in the southern coastal towns. Later, in 1794, Fr. Manuel de la Consolacion, then parish priest of Inabanga, successfully brought hundreds of followers and resettled them in the towns of Talibon and Inabanga, as well as in the barangays (villages) of San Pedro (Talibon), Pangpang (Buenavista), and Ubay. Thus, many of the early residents of Ubay were followers of Dagohoy. In 1821, Ubay became a town independent from Talibon in the civil aspect. Its religious aspect was still administered by the parish of Inabanga, until Talibon was able to esta
https://en.wikipedia.org/wiki/Volterra%20integral%20equation
In mathematics, the Volterra integral equations are a special type of integral equations. They are divided into two groups referred to as the first and the second kind. A linear Volterra equation of the first kind is where f is a given function and x is an unknown function to be solved for. A linear Volterra equation of the second kind is In operator theory, and in Fredholm theory, the corresponding operators are called Volterra operators. A useful method to solve such equations, the Adomian decomposition method, is due to George Adomian. A linear Volterra integral equation is a convolution equation if The function in the integral is called the kernel. Such equations can be analyzed and solved by means of Laplace transform techniques. For a weakly singular kernel of the form with , Volterra integral equation of the first kind can conveniently be transformed into a classical Abel integral equation. The Volterra integral equations were introduced by Vito Volterra and then studied by Traian Lalescu in his 1908 thesis, Sur les équations de Volterra, written under the direction of Émile Picard. In 1911, Lalescu wrote the first book ever on integral equations. Volterra integral equations find application in demography as Lotka's integral equation, the study of viscoelastic materials, in actuarial science through the renewal equation, and in fluid mechanics to describe the flow behavior near finite-sized boundaries. Conversion of Volterra equation of the first kind to the second kind A linear Volterra equation of the first kind can always be reduced to a linear Volterra equation of the second kind, assuming that . Taking the derivative of the first kind Volterra equation gives us:Dividing through by yields:Defining and completes the transformation of the first kind equation into a linear Volterra equation of the second kind. 
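In standard notation (a reconstruction of the usual formulas, not quoted verbatim from the extract), the two kinds of linear Volterra equation and the first-to-second-kind conversion described above read:

```latex
% Linear Volterra equation of the first kind (x unknown, f and K given):
f(t) = \int_a^t K(t,s)\, x(s)\, ds .
% Linear Volterra equation of the second kind:
x(t) = f(t) + \int_a^t K(t,s)\, x(s)\, ds .
% Conversion: differentiate the first-kind equation in t (Leibniz rule),
% assuming K(t,t) \neq 0:
f'(t) = K(t,t)\, x(t) + \int_a^t \frac{\partial K}{\partial t}(t,s)\, x(s)\, ds .
% Dividing by K(t,t) and defining
\tilde f(t) = \frac{f'(t)}{K(t,t)}, \qquad
\tilde K(t,s) = -\frac{1}{K(t,t)}\,\frac{\partial K}{\partial t}(t,s),
% yields a linear Volterra equation of the second kind:
x(t) = \tilde f(t) + \int_a^t \tilde K(t,s)\, x(s)\, ds .
```

The convolution case mentioned above is K(t, s) = K(t − s), which is what makes Laplace-transform techniques applicable.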
Numerical solution using trapezoidal rule A standard method for computing the numerical solution of a linear Volterra equation of the second kind is the trapezoidal rule, which for equally-spaced subintervals is given by:Assuming equal spacing for the subintervals, the integral component of the Volterra equation may be approximated by:Defining , , and , we have the system of linear equations:This is equivalent to the matrix equation:For well-behaved kernels, the trapezoidal rule tends to work well. Application: Ruin theory One area where Volterra integral equations appear is in ruin theory, the study of the risk of insolvency in actuarial science. The objective is to quantify the probability of ruin , where is the initial surplus and is the time of ruin. In the classical model of ruin theory, the net cash position is a function of the initial surplus, premium income earned at rate , and outgoing claims :where is a Poisson process for the number of claims with intensity . Under these circumstances, the ruin probability may be represented by a Volterra integral equation of the form:where is the survival
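The trapezoidal scheme just described can be sketched in a few lines. This is an illustrative implementation under the assumptions stated in the comments (uniform grid, second-kind equation x(t) = f(t) + ∫ₐᵗ K(t,s)x(s)ds); the helper name is hypothetical:

```python
import math

def solve_volterra2(f, K, a, b, n):
    """Trapezoidal-rule solution of x(t) = f(t) + int_a^t K(t,s) x(s) ds
    on n equally-spaced subintervals of [a, b]."""
    h = (b - a) / n
    t = [a + i * h for i in range(n + 1)]
    x = [f(t[0])]                  # at t = a the integral term vanishes
    for i in range(1, n + 1):
        # trapezoid weights: 1/2 at the endpoints s = t_0 and s = t_i
        s = 0.5 * K(t[i], t[0]) * x[0]
        s += sum(K(t[i], t[j]) * x[j] for j in range(1, i))
        # solve x_i = f_i + h*(s + 0.5*K(t_i, t_i)*x_i) for x_i
        x.append((f(t[i]) + h * s) / (1 - 0.5 * h * K(t[i], t[i])))
    return t, x

# Check on a problem with known solution:
# x(t) = 1 + int_0^t x(s) ds  has the exact solution x(t) = e^t.
t, x = solve_volterra2(lambda t: 1.0, lambda t, s: 1.0, 0.0, 1.0, 100)
print(abs(x[-1] - math.e))  # O(h^2) trapezoidal error
```

Because each x_i depends only on earlier values, the "matrix equation" mentioned above is lower-triangular and can be solved by this simple forward sweep.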
https://en.wikipedia.org/wiki/Stiff%20equation
In mathematics, a stiff equation is a differential equation for which certain numerical methods for solving the equation are numerically unstable, unless the step size is taken to be extremely small. It has proven difficult to formulate a precise definition of stiffness, but the main idea is that the equation includes some terms that can lead to rapid variation in the solution. When integrating a differential equation numerically, one would expect the requisite step size to be relatively small in a region where the solution curve displays much variation and to be relatively large where the solution curve straightens out to approach a line with slope nearly zero. For some problems this is not the case. In order for a numerical method to give a reliable solution to the differential system sometimes the step size is required to be at an unacceptably small level in a region where the solution curve is very smooth. The phenomenon is known as stiffness. In some cases there may be two different problems with the same solution, yet one is not stiff and the other is. The phenomenon cannot therefore be a property of the exact solution, since this is the same for both problems, and must be a property of the differential system itself. Such systems are thus known as stiff systems. Motivating example Consider the initial value problem The exact solution (shown in cyan) is We seek a numerical solution that exhibits the same behavior. The figure (right) illustrates the numerical issues for various numerical integrators applied on the equation. One of the most prominent examples of the stiff ordinary differential equations (ODEs) is a system that describes the chemical reaction of Robertson: If one treats this system on a short interval, for example, there is no problem in numerical integration. However, if the interval is very large (1011 say), then many standard codes fail to integrate it correctly. 
Additional examples are the sets of ODEs resulting from the temporal integration of large chemical reaction mechanisms. Here, the stiffness arises from the coexistence of very slow and very fast reactions. To solve them, the software packages KPP and Autochem can be used. Stiffness ratio Consider the linear constant coefficient inhomogeneous system where and is a constant, diagonalizable, matrix with eigenvalues (assumed distinct) and corresponding eigenvectors . The general solution of () takes the form where the are arbitrary constants and is a particular integral. Now let us suppose that which implies that each of the terms as , so that the solution approaches asymptotically as ; the term will decay monotonically if is real and sinusoidally if is complex. Interpreting to be time (as it often is in physical problems), is called the transient solution and the steady-state solution. If is large, then the corresponding term will decay quickly as increases and is thus called a fast transient; if is small, the correspondin
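A minimal numerical illustration of stiffness (using the common test problem y′ = −15y, y(0) = 1, assumed here for the sketch; its exact solution e^(−15t) decays smoothly to zero): forward Euler is stable only when |1 − 15h| ≤ 1, i.e. h ≤ 2/15, while backward Euler is stable for any step size.

```python
def forward_euler(lam, y0, h, steps):
    y = y0
    for _ in range(steps):
        y = y + h * lam * y      # y_{n+1} = (1 + h*lam) * y_n
    return y

def backward_euler(lam, y0, h, steps):
    y = y0
    for _ in range(steps):
        y = y / (1 - h * lam)    # y_{n+1} = y_n / (1 - h*lam)
    return y

lam = -15.0  # fast decay rate; exact solution exp(-15 t)
h = 0.25     # too large for forward Euler: |1 + h*lam| = 2.75 > 1
print(forward_euler(lam, 1.0, h, 8))   # oscillates and grows in magnitude
print(backward_euler(lam, 1.0, h, 8))  # decays monotonically toward 0
```

The exact solution is essentially flat over most of the interval, yet the explicit method still requires a tiny step to remain stable; that mismatch between solution smoothness and required step size is precisely the stiffness phenomenon described above.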
https://en.wikipedia.org/wiki/Dym%20equation
In mathematics, and in particular in the theory of solitons, the Dym equation (HD) is the third-order partial differential equation u_t = u³u_xxx. It is often written in an equivalent form for some function v of one space variable and time. The Dym equation first appeared in work of Kruskal and is attributed to an unpublished paper by Harry Dym. The Dym equation represents a system in which dispersion and nonlinearity are coupled together. HD is a completely integrable nonlinear evolution equation that may be solved by means of the inverse scattering transform. It obeys an infinite number of conservation laws; it does not possess the Painlevé property. The Dym equation has strong links to the Korteweg–de Vries equation. C. S. Gardner, J. M. Greene, Kruskal and R. M. Miura applied the Dym equation to the solution of the corresponding problem in the Korteweg–de Vries equation. The Lax pair of the Harry Dym equation is associated with the Sturm–Liouville operator. The Liouville transformation transforms this operator isospectrally into the Schrödinger operator. Thus, by the inverse Liouville transformation, solutions of the Korteweg–de Vries equation are transformed into solutions of the Dym equation. An explicit solution of the Dym equation, valid in a finite interval, is found by an auto-Bäcklund transform
https://en.wikipedia.org/wiki/Radical%20axis
In Euclidean geometry, the radical axis of two non-concentric circles is the set of points whose power with respect to the circles are equal. For this reason the radical axis is also called the power line or power bisector of the two circles. In detail: For two circles with centers and radii the powers of a point with respect to the circles are Point belongs to the radical axis, if If the circles have two points in common, the radical axis is the common secant line of the circles. If point is outside the circles, has equal tangential distance to both the circles. If the radii are equal, the radical axis is the line segment bisector of . In any case the radical axis is a line perpendicular to On notations The notation radical axis was used by the French mathematician M. Chasles as axe radical. J.V. Poncelet used . J. Plücker introduced the term . J. Steiner called the radical axis line of equal powers () which led to power line (). Properties Geometric shape and its position Let be the position vectors of the points . Then the defining equation of the radical line can be written as: From the right equation one gets The pointset of the radical axis is indeed a line and is perpendicular to the line through the circle centers. ( is a normal vector to the radical axis !) Dividing the equation by , one gets the Hessian normal form. Inserting the position vectors of the centers yields the distances of the centers to the radical axis: , with . ( may be negative if is not between .) If the circles are intersecting at two points, the radical line runs through the common points. If they only touch each other, the radical line is the common tangent line. Special positions The radical axis of two intersecting circles is their common secant line. The radical axis of two touching circles is their common tangent. The radical axis of two non intersecting circles is the common secant of two convenient equipower circles (see below). 
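The defining condition d₁² − r₁² = d₂² − r₂² is linear in the coordinates of the point once the quadratic terms cancel, which is why the radical axis is a line perpendicular to the line of centers. A small sketch (illustrative; circles are assumed to be given as (center, radius) pairs):

```python
def power(p, center, r):
    # Power of point p with respect to the circle (center, r): d^2 - r^2
    return (p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2 - r ** 2

def radical_axis(c1, r1, c2, r2):
    """Coefficients (a, b, c) of the equal-power line a*x + b*y = c,
    obtained by subtracting the two power expressions (x^2+y^2 cancels)."""
    a = 2 * (c2[0] - c1[0])
    b = 2 * (c2[1] - c1[1])
    c = (c2[0]**2 + c2[1]**2 - r2**2) - (c1[0]**2 + c1[1]**2 - r1**2)
    return a, b, c

c1, r1 = (0.0, 0.0), 2.0
c2, r2 = (6.0, 0.0), 3.0
a, b, c = radical_axis(c1, r1, c2, r2)
p = (c / a, 5.0)  # centers lie on the x-axis, so the axis is vertical: x = c/a
print(power(p, c1, r1), power(p, c2, r2))  # equal powers
```

Note that (a, b) is a multiple of the vector between the centers, confirming that the axis is perpendicular to the line through them.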
Orthogonal circles For a point outside a circle and the two tangent points the equation holds and lie on the circle with center and radius . Circle intersects orthogonal. Hence: If is a point of the radical axis, then the four points lie on circle , which intersects the given circles orthogonally. The radical axis consists of all centers of circles, which intersect the given circles orthogonally. System of orthogonal circles The method described in the previous section for the construction of a pencil of circles, which intersect two given circles orthogonally, can be extended to the construction of two orthogonally intersecting systems of circles: Let be two apart lying circles (as in the previous section), their centers and radii and their radical axis. Now, all circles will be determined with centers on line , which have together with line as radical axis, too. If is such a circle, whose center has distance to the center and radius . From the result in the previous section o
https://en.wikipedia.org/wiki/Power%20of%20a%20point
In elementary plane geometry, the power of a point is a real number that reflects the relative distance of a given point from a given circle. It was introduced by Jakob Steiner in 1826. Specifically, the power of a point with respect to a circle with center and radius is defined by If is outside the circle, then , if is on the circle, then and if is inside the circle, then . Due to the Pythagorean theorem the number has the simple geometric meanings shown in the diagram: For a point outside the circle is the squared tangential distance of point to the circle . Points with equal power, isolines of , are circles concentric to circle . Steiner used the power of a point for proofs of several statements on circles, for example: Determination of a circle, that intersects four circles by the same angle. Solving the Problem of Apollonius Construction of the Malfatti circles: For a given triangle determine three circles, which touch each other and two sides of the triangle each. Spherical version of Malfatti's problem: The triangle is a spherical one. Essential tools for investigations on circles are the radical axis of two circles and the radical center of three circles. The power diagram of a set of circles divides the plane into regions within which the circle minimizing the power is constant. More generally, French mathematician Edmond Laguerre defined the power of a point with respect to any algebraic curve in a similar way. Geometric properties Besides the properties mentioned in the lead there are further properties: Orthogonal circle For any point outside of the circle there are two tangent points on circle , which have equal distance to . Hence the circle with center through passes , too, and intersects orthogonal: The circle with center and radius intersects circle orthogonal. 
If the radius of the circle centered at is different from , one gets the angle of intersection between the two circles by applying the law of cosines (see the diagram): ( and are normals to the circle tangents.) If lies inside the blue circle, then the angle is always different from . If the angle is given, then one gets the radius by solving a quadratic equation. Intersecting secants theorem, intersecting chords theorem For the intersecting secants theorem and the chord theorem, the power of a point plays the role of an invariant: Intersecting secants theorem: For a point outside a circle and the intersection points of a secant line with the circle, the product of the distances from the point to the two intersection points equals the power of the point, and hence is independent of the line. If the line is tangent, the two intersection points coincide and the statement becomes the tangent-secant theorem. Intersecting chords theorem: For a point inside a circle and the intersection points of a secant line with the circle, the analogous statement is true: the product of the distances is independent of the line. Radical axis Let be a point and two non-concentric circles with centers and radii . Point has the power with respect to circle . The set
https://en.wikipedia.org/wiki/Lie%20algebra%20cohomology
In mathematics, Lie algebra cohomology is a cohomology theory for Lie algebras. It was first introduced in 1929 by Élie Cartan to study the topology of Lie groups and homogeneous spaces by relating cohomological methods of Georges de Rham to properties of the Lie algebra. It was later extended by Claude Chevalley and Samuel Eilenberg to coefficients in an arbitrary Lie module. Motivation If is a compact simply connected Lie group, then it is determined by its Lie algebra, so it should be possible to calculate its cohomology from the Lie algebra. This can be done as follows. Its cohomology is the de Rham cohomology of the complex of differential forms on . Using an averaging process, this complex can be replaced by the complex of left-invariant differential forms. The left-invariant forms, meanwhile, are determined by their values at the identity, so that the space of left-invariant differential forms can be identified with the exterior algebra of the Lie algebra, with a suitable differential. The construction of this differential on an exterior algebra makes sense for any Lie algebra, so it is used to define Lie algebra cohomology for all Lie algebras. More generally one uses a similar construction to define Lie algebra cohomology with coefficients in a module. If is a simply connected noncompact Lie group, the Lie algebra cohomology of the associated Lie algebra does not necessarily reproduce the de Rham cohomology of . The reason for this is that the passage from the complex of all differential forms to the complex of left-invariant differential forms uses an averaging process that only makes sense for compact groups. Definition Let be a Lie algebra over a commutative ring R with universal enveloping algebra , and let M be a representation of (equivalently, a -module). Considering R as a trivial representation of , one defines the cohomology groups (see Ext functor for the definition of Ext).
Equivalently, these are the right derived functors of the left exact invariant submodule functor. Analogously, one can define Lie algebra homology as (see Tor functor for the definition of Tor), which is equivalent to the left derived functors of the right exact coinvariants functor. Some important basic results about the cohomology of Lie algebras include Whitehead's lemmas, Weyl's theorem, and the Levi decomposition theorem. Chevalley–Eilenberg complex Let be a Lie algebra over a field , with a left action on the -module . The elements of the Chevalley–Eilenberg complex are called cochains from to . A homogeneous -cochain from to is thus an alternating -multilinear function . When is finitely generated as a vector space, the Chevalley–Eilenberg complex is canonically isomorphic to the tensor product , where denotes the dual vector space of . The Lie bracket on induces a transpose map by duality. The latter is sufficient to define a derivation of the complex of cochains from to by extending according to the graded Leibniz rule. It follows from the Jacobi iden
https://en.wikipedia.org/wiki/Keith%20R.%20Thompson
Keith Thompson (1951 – 2022) was a professor at Dalhousie University with a joint appointment in the Department of Oceanography and the Department of Mathematics and Statistics. Thompson was trained in the UK and obtained his Ph.D. from the University of Liverpool in 1979. His research interests focused on ocean and shelf circulation, 4D data assimilation, extremal analysis and applied time series analysis. Thompson held a Tier I Canada Research Chair in Marine Prediction and Environmental Statistics, the highest level of chair under the Canada Research Chairs Program, a national strategy to make Canada one of the world's top five countries for research and development; chair holders are recognized leaders in their fields, selected to advance the frontiers of knowledge not only through research, but also by teaching and supervising students and coordinating the work of other researchers. Thompson was awarded the President's Prize of the Canadian Meteorological and Oceanographic Society in 1990, and Reviewer of the Year by the same organization. He wrote over 50 scientific publications and sat on international committees including the Coastal Ocean Observations Panel of the Global Ocean Observing System. Thompson was awarded the J.P. Tully Medal in Oceanography from the Canadian Meteorological and Oceanographic Society in 2016. Thompson died aged 71 on 11 July 2022. See also References External links Keith Thompson Academic staff of the Dalhousie University Canada Research Chairs 1951 births 2022 deaths
https://en.wikipedia.org/wiki/Totally%20positive%20matrix
In mathematics, a totally positive matrix is a square matrix in which all the minors are positive: that is, the determinant of every square submatrix is a positive number. A totally positive matrix has all entries positive, so it is also a positive matrix; and it has all principal minors positive (and positive eigenvalues). A symmetric totally positive matrix is therefore also positive-definite. A totally non-negative matrix is defined similarly, except that all the minors must be non-negative (positive or zero). Some authors use "totally positive" to include all totally non-negative matrices. Definition Let A be an n × n matrix. Consider any p ∈ {1, …, n} and any p × p submatrix of the form B = (A_{i_k, j_l}) where 1 ≤ i_1 < … < i_p ≤ n and 1 ≤ j_1 < … < j_p ≤ n. Then A is a totally positive matrix if det(B) > 0 for all submatrices B that can be formed this way. History Topics which historically led to the development of the theory of total positivity include the study of: the spectral properties of kernels and matrices which are totally positive, ordinary differential equations whose Green's function is totally positive (by M. G. Krein and some colleagues in the mid-1930s), the variation diminishing properties (started by I. J. Schoenberg in 1930), Pólya frequency functions (by I. J. Schoenberg in the late 1940s and early 1950s). Examples For example, a Vandermonde matrix whose nodes are positive and increasing is a totally positive matrix. See also Compound matrix References Further reading External links Spectral Properties of Totally Positive Kernels and Matrices, Allan Pinkus Parametrizations of Canonical Bases and Totally Positive Matrices, Arkady Berenstein Tensor Product Multiplicities, Canonical Bases And Totally Positive Varieties (2001), A. Berenstein , A. Zelevinsky Matrix theory Determinants
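The definition can be tested directly by enumerating every square submatrix, which is exponential in the matrix size and therefore only a sketch for small examples (function names are illustrative). It confirms the Vandermonde example from the article:

```python
from itertools import combinations

def det(M):
    """Determinant by Laplace expansion along the first row
    (fine for the small matrices used here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def is_totally_positive(A):
    """True if the determinant of every square submatrix is positive."""
    n, m = len(A), len(A[0])
    for p in range(1, min(n, m) + 1):
        for rows in combinations(range(n), p):
            for cols in combinations(range(m), p):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det(sub) <= 0:
                    return False
    return True

# A Vandermonde matrix with positive, increasing nodes is totally positive.
nodes = [1, 2, 3]
V = [[x ** k for k in range(3)] for x in nodes]
print(is_totally_positive(V))                 # True

# A positive matrix need not be totally positive: here det = -2.
print(is_totally_positive([[1, 2], [3, 4]]))  # False
```

The second example illustrates the distinction drawn in the lead: all entries positive (a positive matrix) is strictly weaker than all minors positive.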
https://en.wikipedia.org/wiki/T-norm
In mathematics, a t-norm (also T-norm or, unabbreviated, triangular norm) is a kind of binary operation used in the framework of probabilistic metric spaces and in multi-valued logic, specifically in fuzzy logic. A t-norm generalizes intersection in a lattice and conjunction in logic. The name triangular norm refers to the fact that in the framework of probabilistic metric spaces t-norms are used to generalize the triangle inequality of ordinary metric spaces. Definition A t-norm is a function T: [0, 1] × [0, 1] → [0, 1] that satisfies the following properties: Commutativity: T(a, b) = T(b, a) Monotonicity: T(a, b) ≤ T(c, d) if a ≤ c and b ≤ d Associativity: T(a, T(b, c)) = T(T(a, b), c) The number 1 acts as identity element: T(a, 1) = a Since a t-norm is a binary algebraic operation on the interval [0, 1], infix algebraic notation is also common, with the t-norm usually denoted by . The defining conditions of the t-norm are exactly those of a partially ordered abelian monoid on the real unit interval [0, 1]. (Cf. ordered group.) The monoidal operation of any partially ordered abelian monoid L is therefore by some authors called a triangular norm on L. Classification of t-norms A t-norm is called continuous if it is continuous as a function, in the usual interval topology on [0, 1]2. (Similarly for left- and right-continuity.) A t-norm is called strict if it is continuous and strictly monotone. A t-norm is called nilpotent if it is continuous and each x in the open interval (0, 1) is nilpotent, that is, there is a natural number n such that x ... x (n times) equals 0. A t-norm is called Archimedean if it has the Archimedean property, that is, if for each x, y in the open interval (0, 1) there is a natural number n such that x ... x (n times) is less than or equal to y. The usual partial ordering of t-norms is pointwise, that is, T1 ≤ T2   if   T1(a, b) ≤ T2(a, b) for all a, b in [0, 1]. 
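The defining axioms can be spot-checked numerically for the prominent t-norms named later in the article. The sketch below verifies commutativity, associativity, and the identity element on a sample grid only (not a proof, and monotonicity is omitted for brevity), along with the pointwise ordering between the four examples:

```python
import itertools

def t_min(a, b):          # minimum (Gödel) t-norm
    return min(a, b)

def t_prod(a, b):         # product t-norm
    return a * b

def t_luk(a, b):          # Łukasiewicz t-norm
    return max(0.0, a + b - 1.0)

def t_drastic(a, b):      # drastic t-norm
    return min(a, b) if max(a, b) == 1.0 else 0.0

EPS = 1e-9
grid = [i / 10 for i in range(11)]

for T in (t_min, t_prod, t_luk, t_drastic):
    for a, b, c in itertools.product(grid, repeat=3):
        assert abs(T(a, b) - T(b, a)) < EPS              # commutativity
        assert abs(T(a, T(b, c)) - T(T(a, b), c)) < EPS  # associativity
    for a in grid:
        assert abs(T(a, 1.0) - a) < EPS                  # 1 is the identity

# Pointwise ordering: drastic <= Łukasiewicz <= product <= minimum.
for a, b in itertools.product(grid, repeat=2):
    assert t_drastic(a, b) <= t_luk(a, b) + EPS
    assert t_luk(a, b) <= t_prod(a, b) + EPS
    assert t_prod(a, b) <= t_min(a, b) + EPS

print("t-norm axioms and pointwise ordering hold on the grid")
```

The final loop illustrates the pointwise partial ordering just defined: the minimum t-norm is the pointwise largest and the drastic t-norm the pointwise smallest of the four.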
As functions, pointwise larger t-norms are sometimes called stronger than pointwise smaller ones. In the semantics of fuzzy logic, however, the larger a t-norm, the weaker (in terms of logical strength) the conjunction it represents. Prominent examples Minimum t-norm, also called the Gödel t-norm, as it is the standard semantics for conjunction in Gödel fuzzy logic. Besides that, it occurs in most t-norm based fuzzy logics as the standard semantics for weak conjunction. It is the pointwise largest t-norm (see the properties of t-norms below). Product t-norm (the ordinary product of real numbers). Besides other uses, the product t-norm is the standard semantics for strong conjunction in product fuzzy logic. It is a strict Archimedean t-norm. Łukasiewicz t-norm The name comes from the fact that the t-norm is the standard semantics for strong conjunction in Łukasiewicz fuzzy logic. It is a nilpotent Archimedean t-norm, pointwise smaller than the product t-norm. Drastic t-norm The name reflects the fact that the drastic t-norm is the pointw
https://en.wikipedia.org/wiki/Mimic%20function
A mimic function changes a file A so that it assumes the statistical properties of another file B. That is, if p(t, A) is the probability of some substring t occurring in A, then a mimic function f recodes A so that p(t, f(A)) approximates p(t, B) for all strings t of length less than some n. It is commonly considered to be one of the basic techniques for hiding information, often called steganography. The simplest mimic functions use simple statistical models to pick the symbols in the output. If the statistical model says that item x occurs with probability p(x) and item y occurs with probability p(y), then a random number is used to choose between outputting x or y with probability p(x) or p(y) respectively. Even more sophisticated models use reversible Turing machines. References Steganography
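The simplest case, drawing output symbols from the target's statistical model, can be sketched in a few lines. This only reproduces order-0 (single-symbol) statistics; a real mimic function would additionally encode hidden data into the random choices, which is omitted here, and all names are illustrative:

```python
import random

def mimic_stream(freq, length, seed=0):
    """Emit `length` symbols whose empirical statistics approximate the
    target unigram model `freq` (a map from symbol to probability)."""
    rng = random.Random(seed)
    symbols = list(freq)
    weights = [freq[s] for s in symbols]
    return "".join(rng.choices(symbols, weights=weights, k=length))

# Target statistics: 'a' with probability 0.7, 'b' with probability 0.3.
out = mimic_stream({"a": 0.7, "b": 0.3}, 10000)
print(out.count("a") / len(out))  # close to 0.7
```

By the law of large numbers the observed frequency converges to the model probability, which is exactly the sense in which p(t, f(A)) "approximates" p(t, B) in the definition.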
https://en.wikipedia.org/wiki/Multi-adjoint%20logic%20programming
Multi-adjoint logic programming defines the syntax and semantics of a logic program in such a way that the underlying mathematical structure justifying the results is a residuated lattice and/or an MV-algebra. The definition of a multi-adjoint logic program is given, as usual in fuzzy logic programming, as a set of weighted rules and facts of a given formal language F. Notice that different implications may be used in different rules. Definition: A multi-adjoint logic program is a set P of rules of the form <(A ←i B), δ> such that: 1. The rule (A ←i B) is a formula of F; 2. The confidence factor δ is an element (a truth-value) of L; 3. The head A is an atom; 4. The body B is a formula built from atoms B1, …, Bn (n ≥ 0) by the use of conjunctors, disjunctors, and aggregators. 5. Facts are rules with body ⊤. 6. A query (or goal) is an atom intended as a question ?A prompting the system. Implementations Implementations of multi-adjoint logic programming include Rfuzzy and Floper. Programming languages
https://en.wikipedia.org/wiki/How%20to%20Lie%20with%20Statistics
How to Lie with Statistics is a book written by Darrell Huff in 1954, presenting an introduction to statistics for the general reader. Not a statistician, Huff was a journalist who wrote many how-to articles as a freelancer. The book is a brief, breezy illustrated volume outlining the misuse of statistics and errors in the interpretation of statistics, and how errors create incorrect conclusions. In the 1960s and 1970s, it became a standard textbook introduction to the subject of statistics for many college students. It has become one of the best-selling statistics books in history, with over one and a half million copies sold in the English-language edition. It has also been widely translated. Themes of the book include "Correlation does not imply causation" and "Using random sampling". It also shows how statistical graphs can be used to distort reality, for example by truncating the bottom of a line or bar chart, so that differences seem larger than they are, or by representing one-dimensional quantities on a pictogram by two- or three-dimensional objects to compare their sizes, so that the reader forgets that the images do not scale the same way the quantities do. The original edition contained illustrations by artist Irving Geis. In a UK edition, Geis' illustrations were replaced by cartoons by Mel Calman. See also Lies, damned lies, and statistics Notes References Darrell Huff, (1954) How to Lie with Statistics (illust. I. Geis), Norton, New York, External links 1954 non-fiction books Statistics books Misuse of statistics
https://en.wikipedia.org/wiki/CPMP
CPMP may refer to: Committee for Proprietary Medicinal Products Certified Project Management Professional Core-Plus Mathematics Project Cyclic pyranopterin monophosphate or fosdenopterin Commissioning Process Management Professional (ASHRAE)
https://en.wikipedia.org/wiki/Peter%20Swinnerton-Dyer
Sir Henry Peter Francis Swinnerton-Dyer, 16th Baronet, (2 August 1927 – 26 December 2018) was an English mathematician specialising in number theory at the University of Cambridge. As a mathematician he was best known for his part in the Birch and Swinnerton-Dyer conjecture relating algebraic properties of elliptic curves to special values of L-functions, which was developed with Bryan Birch during the first half of the 1960s with the help of machine computation, and for his work on the Titan operating system. Biography Swinnerton-Dyer was the son of Sir Leonard Schroeder Swinnerton Dyer, 15th Baronet, and his wife Barbara, daughter of Hereward Brackenbury. He was educated at the Dragon School in Oxford, Eton College and Trinity College, Cambridge, where he was supervised by J. E. Littlewood, and spent a year abroad as a Commonwealth Fund Fellow at the University of Chicago. He was later made a Fellow of Trinity, and was Master of St Catharine's College from 1973 to 1983 and vice-chancellor of the University of Cambridge from 1979 to 1981. In 1983 he was made an Honorary Fellow of St Catharine's. In that same year, he became Chairman of the University Grants Committee and then from 1989, Chief Executive of its successor, the Universities Funding Council. He was elected Fellow of the Royal Society in 1967 and was made a KBE in 1987. In 1981, he was awarded an Honorary Degree (Doctor of Science) by the University of Bath. In 2006 he was awarded the Sylvester Medal, and also the Pólya Prize (LMS). Swinnerton-Dyer was, in his younger days, an international bridge player, representing the British team twice in the European Open teams championship. In 1953 at Helsinki he was partnered by Dimmie Fleming: the team came second out of fifteen teams. In 1962 he was partnered by Ken Barbour; the team came fourth out of twelve teams at Beirut. He married Dr Harriet Crawford in 1983. Death Swinnerton-Dyer died on 26 December 2018 at the age of 91.
See also List of Masters of St Catharine's College, Cambridge List of Vice-Chancellors of the University of Cambridge Littlewood conjecture Rank of a partition Swinnerton-Dyer polynomials Notes External links Number Theory and Algebraic Geometry -- to Peter Swinnerton-Dyer on his 75th birthday, edited by Miles Reid and Alexei Skorobogatov, LMS Lecture Notes 303, Cambridge University Press, 2004 Interviewed by Alan Macfarlane 12 May 2008 (video) 1927 births 2018 deaths Alumni of Trinity College, Cambridge Dyer, Henry Peter Francis Swinnerton-Dyer, 16th Baronet British and Irish contract bridge players Cambridge mathematicians English mathematicians Fellows of the Royal Society Fellows of Trinity College, Cambridge Knights Commander of the Order of the British Empire Masters of St Catharine's College, Cambridge Vice-Chancellors of the University of Cambridge
https://en.wikipedia.org/wiki/Cyclically%20reduced%20word
In mathematics, a cyclically reduced word is a concept of combinatorial group theory. Let F(X) be a free group. Then a word w in F(X) is said to be cyclically reduced if and only if every cyclic permutation of the word is reduced. Properties Every cyclic shift and the inverse of a cyclically reduced word are cyclically reduced again. Every word is conjugate to a cyclically reduced word. The cyclically reduced words are minimal-length representatives of the conjugacy classes in the free group. This representative is not uniquely determined, but it is unique up to cyclic shifts (since every cyclic shift is a conjugate element). References Combinatorial group theory Combinatorics on words
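The definition is easy to mechanize with the common convention of writing a generator as a lowercase letter and its inverse as the corresponding uppercase letter (the convention and function names are ours, not from the article):

```python
def is_reduced(word):
    """A word is reduced if no letter stands next to its own inverse
    (here: a lowercase letter next to its uppercase twin, or vice versa)."""
    return all(a != b.swapcase() for a, b in zip(word, word[1:]))

def is_cyclically_reduced(word):
    """Reduced, and additionally the first letter is not the inverse of
    the last one; equivalently, every cyclic permutation is reduced."""
    if not is_reduced(word):
        return False
    return not word or word[0] != word[-1].swapcase()

def cyclic_reduction(word):
    """Conjugate a reduced word to a cyclically reduced one by stripping
    mutually inverse letters from the two ends."""
    while len(word) >= 2 and word[0] == word[-1].swapcase():
        word = word[1:-1]
    return word

print(is_cyclically_reduced("abAB"))  # True  (the commutator, with A = a^-1)
print(is_cyclically_reduced("abA"))   # False: the cyclic shift "Aab" is not reduced
print(cyclic_reduction("abA"))        # b
```

The last call illustrates the property stated above: "abA" is conjugate (by a) to the cyclically reduced word "b".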
https://en.wikipedia.org/wiki/P%C3%A9ter%20Frankl
Péter Frankl (born 26 March 1953 in Kaposvár, Somogy County, Hungary) is a mathematician, street performer, columnist and educator, active in Japan. Frankl studied mathematics at Eötvös Loránd University in Budapest and submitted his PhD thesis while still an undergraduate. He holds a PhD degree from the University Paris Diderot as well. He has lived in Japan since 1988, where he is a well-known personality and often appears in the media. He travels around Japan performing (juggling and giving public lectures on various topics). Frankl won a gold medal at the International Mathematical Olympiad in 1971. He has seven joint papers with Paul Erdős, and eleven joint papers with Ronald Graham. His research is in combinatorics, especially in extremal combinatorics. He proposed the union-closed sets conjecture. Personality Both of his parents were survivors of concentration camps and taught him "The only things you own are in your heart and brain". So he became a mathematician. Frankl often lectures about racial discrimination. Adolescence and abilities He could multiply two-digit numbers when he was four years old. Frankl speaks 12 languages (Hungarian, English, Russian, Swedish, French, Spanish, Polish, German, Japanese, Chinese, Thai, Korean) and has lectured on mathematics in many countries in these languages. He has travelled to more than 100 countries. Activities Frankl learnt juggling from Ronald Graham. He and Vojtěch Rödl solved a $1000 problem of Paul Erdős. Zsolt Baranyai helped Frankl to get a scholarship in France, where he became a CNRS research fellow. From 1984 to 1990, Frankl and Akiyama worked on organizing a Japanese mathematical Olympiad team, and as a consequence the Japanese team is now a regular participant in the International Mathematical Olympiad. Since 1998, he has been an external member of the Hungarian Academy of Sciences.
He authored more than thirty books in Japanese, and with László Babai, he wrote the manuscript of a book on "Linear Algebra Methods in Combinatorics". With Norihide Tokushige he is the coauthor of the book Extremal Problems For Finite Sets (American Mathematical Society, 2018). Frankl conjecture For any finite union-closed family of finite sets, other than the family consisting only of the empty set, there exists an element that belongs to at least half of the sets in the family. See also Frankl–Rödl graph References External links Timeline in Japanese 1953 births Living people 20th-century Hungarian mathematicians 20th-century Hungarian people 21st-century Hungarian mathematicians 21st-century Hungarian people Graph theorists Jugglers Members of the Hungarian Academy of Sciences Hungarian Jews Expatriate television personalities in Japan Hungarian expatriates in Japan People from Kaposvár International Mathematical Olympiad participants
https://en.wikipedia.org/wiki/Master%20of%20Mathematics
A Master of Mathematics (or MMath) degree is a specific advanced integrated Master's degree for courses in the field of mathematics. United Kingdom In the United Kingdom, the MMath is the internationally recognized standard qualification after a four-year course in mathematics at a university. The MMath programme was set up by most leading universities after the Neumann Report in 1992. It is classed as a level 7 qualification in the Frameworks of Higher Education Qualifications of UK Degree-Awarding Bodies. The UCAS course codes for the MMath degrees start at G100 upwards, most courses taking the codes G101 - G104. Universities which offer MMath degrees include: Aberystwyth University University of Bath University of Bristol (MSci) Brunel University University of Birmingham (MSci) Cardiff University University of Cambridge City University London University of Central Lancashire University of Dundee University of Durham University of East Anglia University of Edinburgh University of Essex University of Exeter University of Glasgow Heriot-Watt University University of Hull University of Keele University of Kent Lancaster University University of Leeds University of Leicester University of Lincoln University of Liverpool Liverpool Hope University Loughborough University University of Manchester Manchester Metropolitan University Middlesex University (from 2014) Newcastle University Northumbria University University of Nottingham Nottingham Trent University Open University (until 2007) Oxford Brookes University University of Oxford University of Plymouth University of Portsmouth University of Reading University of St Andrews University of Sheffield University of Southampton University of Strathclyde University of Surrey University of Sussex Swansea University University of Warwick University of York Notes Canada In Canada, the MMath is a graduate degree offered by the University of Waterloo. 
The length of the MMath degree program is typically between one and two years, and consists of course work along with a research component. The first Waterloo MMath degrees were awarded in 1967. The MMath is also the master's degree offered by the David R. Cheriton School of Computer Science, since that school is within the Faculty of Mathematics at Waterloo. India In India, the M. Math. is a graduate degree offered by the Indian Statistical Institute. The course lasts two years and is taught at the Bangalore and Kolkata centres in alternating years. Participants are selected through two screening tests (one objective and one subjective) followed by an interview, and it is regarded as one of the strongest graduate courses in mathematics offered in India. See also Bachelor of Mathematics British degree abbreviations Bachelor's degrees Master's degrees References Mathematics Mathematics education
https://en.wikipedia.org/wiki/Phase%20plane
In applied mathematics, in particular the context of nonlinear system analysis, a phase plane is a visual display of certain characteristics of certain kinds of differential equations; a coordinate plane with axes being the values of the two state variables, say (x, y), or (q, p) etc. (any pair of variables). It is a two-dimensional case of the general n-dimensional phase space. The phase plane method refers to graphically determining the existence of limit cycles in the solutions of the differential equation. The solutions to the differential equation are a family of functions. Graphically, this can be plotted in the phase plane like a two-dimensional vector field. Vectors representing the derivatives of the points with respect to a parameter (say time t), that is (dx/dt, dy/dt), at representative points are drawn. With enough of these arrows in place the system behaviour over the regions of plane in analysis can be visualized and limit cycles can be easily identified. The entire field is the phase portrait, a particular path taken along a flow line (i.e. a path always tangent to the vectors) is a phase path. The flows in the vector field indicate the time-evolution of the system the differential equation describes. In this way, phase planes are useful in visualizing the behaviour of physical systems; in particular, of oscillatory systems such as predator-prey models (see Lotka–Volterra equations). In these models the phase paths can "spiral in" towards zero, "spiral out" towards infinity, or reach neutrally stable situations called centres where the path traced out can be either circular, elliptical, or ovoid, or some variant thereof. This is useful in determining if the dynamics are stable or not. Other examples of oscillatory systems are certain chemical reactions with multiple steps, some of which involve dynamic equilibria rather than reactions that go to completion. 
In such cases one can model the rise and fall of reactant and product concentration (or mass, or amount of substance) with the correct differential equations and a good understanding of chemical kinetics. Example of a linear system A two-dimensional system of linear differential equations can be written in the form: which can be organized into a matrix equation: where A is the 2 × 2 coefficient matrix above, and v = (x, y) is a coordinate vector of two independent variables. Such systems may be solved analytically, for this case by integrating: although the solutions are implicit functions in x and y, and are difficult to interpret. Solving using eigenvalues More commonly they are solved with the coefficients of the right hand side written in matrix form using eigenvalues λ, given by the determinant: and eigenvectors: The eigenvalues represent the powers of the exponential components and the eigenvectors are coefficients. If the solutions are written in algebraic form, they express the fundamental multiplicative factor of the exponential term. Due to the nonuni
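For the 2 × 2 linear case the eigenvalues follow directly from the trace and determinant via the characteristic polynomial λ² − (tr A)λ + det A = 0, and their signs give the qualitative phase-plane picture. The sketch below is a rough classifier (degenerate borderline cases are folded into the nearest generic type, and the names are ours):

```python
import math

def classify_fixed_point(a, b, c, d):
    """Eigenvalues of v' = A v for A = [[a, b], [c, d]], plus a rough
    classification of the origin (saddle, node, spiral, centre)."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:                      # real eigenvalues
        l1 = (tr + math.sqrt(disc)) / 2
        l2 = (tr - math.sqrt(disc)) / 2
        eig = (l1, l2)
        if det < 0:
            kind = "saddle"            # eigenvalues of opposite sign
        elif tr < 0:
            kind = "stable node"
        else:
            kind = "unstable node"
    else:                              # complex-conjugate eigenvalues
        re, im = tr / 2, math.sqrt(-disc) / 2
        eig = (complex(re, im), complex(re, -im))
        kind = "centre" if re == 0 else (
            "stable spiral" if re < 0 else "unstable spiral")
    return eig, kind

# Undamped oscillator x' = y, y' = -x: purely imaginary eigenvalues, a centre.
print(classify_fixed_point(0, 1, -1, 0))   # ((1j, -1j), 'centre')
# Damped oscillator x' = y, y' = -x - y: trajectories spiral in.
print(classify_fixed_point(0, 1, -1, -1))  # stable spiral
```

The two examples match the behaviour described above: closed orbits around a centre for the undamped oscillator, and paths that "spiral in" towards zero once damping makes the real part of the eigenvalues negative.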
https://en.wikipedia.org/wiki/New%20York%20State%20Mathematics%20League
The New York State Mathematics League (NYSML) competition was originally held in 1973 and has been held annually in a different location each year since. It was founded by Alfred Kalfus. The American Regions Math League competition is based on the format of the NYSML competition. The current iteration contains four sections: the team round, power question, individual round, and relay round, held in that order. All of these rounds are done without a calculator. Each team can have up to fifteen students, which is the usual number per team. Like ARML, it has banned the use of calculators beginning with the 2009 contest. Competition Format There are four sections in the current iteration, completed in a single day: A team round where a team collaborates to solve ten questions in twenty minutes, for a possible 50 points. A power question where a team has an hour to complete ten questions requiring proofs and explanations, for a possible 50 points. An individual round, where each team member answers five groups of two questions, each group taking ten minutes (fifty minutes for ten questions in total), for a possible 150 points. A relay round, where teams are broken into five groups of three if possible. There are three problems, with each member passing their answer to the next member until it reaches the third member, who may submit at the 3-minute mark for 5 points if correct, or at the 6-minute time limit for 3 points if correct. The maximum is fifty points. This brings the maximum total score to 300. Past NYSML Competition Sites Past NYSML Winners Past NYSML Individual Winners (a.k.a. the Curt Boddie Award) External links NYSML Homepage https://artofproblemsolving.com/wiki/index.php/New_York_State_Math_League https://web.archive.org/web/20230000000000*/NYSML.com Mathematics competitions
https://en.wikipedia.org/wiki/Cellular%20homology
In mathematics, cellular homology in algebraic topology is a homology theory for the category of CW-complexes. It agrees with singular homology, and can provide an effective means of computing homology modules. Definition If is a CW-complex with n-skeleton , the cellular-homology modules are defined as the homology groups Hi of the cellular chain complex where is taken to be the empty set. The group is free abelian, with generators that can be identified with the -cells of . Let be an -cell of , and let be the attaching map. Then consider the composition where the first map identifies with via the characteristic map of , the object is an -cell of X, the third map is the quotient map that collapses to a point (thus wrapping into a sphere ), and the last map identifies with via the characteristic map of . The boundary map is then given by the formula where is the degree of and the sum is taken over all -cells of , considered as generators of . Examples The following examples illustrate why computations done with cellular homology are often more efficient than those calculated by using singular homology alone. The n-sphere The n-dimensional sphere Sn admits a CW structure with two cells, one 0-cell and one n-cell. Here the n-cell is attached by the constant mapping from to 0-cell. Since the generators of the cellular chain groups can be identified with the k-cells of Sn, we have that for and is otherwise trivial. Hence for , the resulting chain complex is but then as all the boundary maps are either to or from trivial groups, they must all be zero, meaning that the cellular homology groups are equal to When , it is possible to verify that the boundary map is zero, meaning the above formula holds for all positive . Genus g surface Cellular homology can also be used to calculate the homology of the genus g surface . The fundamental polygon of is a -gon which gives a CW-structure with one 2-cell, 1-cells, and one 0-cell. 
The 2-cell is attached along the boundary of the -gon, which contains every 1-cell twice, once forwards and once backwards. This means the attaching map is zero, since the forwards and backwards directions of each 1-cell cancel out. Similarly, the attaching map for each 1-cell is also zero, since it is the constant mapping from to the 0-cell. Therefore, the resulting chain complex is where all the boundary maps are zero. Therefore, this means the cellular homology of the genus g surface is given by Similarly, one can construct the genus g surface with a crosscap attached as a CW complex with 1 0-cell, g 1-cells, and 1 2-cell. Its homology groups are Torus The n-torus can be constructed as the CW complex with 1 0-cell, n 1-cells, ..., and 1 n-cell. The chain complex is and all the boundary maps are zero. This can be understood by explicitly constructing the cases for , then see the pattern. Thus, . Complex projective space If has no adjacent-dimensional cells, (so if it has n-
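When every boundary map vanishes, as in the torus and orientable genus-g examples above, the homology groups are simply the chain groups, so the Betti numbers can be read off from the cell counts alone. A minimal sketch (applicable only to these zero-boundary cases; function names are ours):

```python
from math import comb

def betti_from_cell_counts(cells):
    """If every boundary map in the cellular chain complex is zero, the
    homology in degree k is free of rank equal to the number of k-cells."""
    return list(cells)

def genus_g_surface_cells(g):
    # CW structure from the fundamental 4g-gon:
    # one 0-cell, 2g 1-cells, one 2-cell.
    return [1, 2 * g, 1]

def n_torus_cells(n):
    # The n-torus has C(n, k) cells in dimension k.
    return [comb(n, k) for k in range(n + 1)]

print(betti_from_cell_counts(genus_g_surface_cells(2)))  # [1, 4, 1]
print(betti_from_cell_counts(n_torus_cells(3)))          # [1, 3, 3, 1]
```

This does not apply to the non-orientable surfaces mentioned above, whose chain complexes have a nonzero boundary map and hence torsion in homology.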
https://en.wikipedia.org/wiki/Duoprism
In geometry of 4 dimensions or higher, a double prism or duoprism is a polytope resulting from the Cartesian product of two polytopes, each of two dimensions or higher. The Cartesian product of an n-polytope and an m-polytope is an (n + m)-polytope, where n and m are dimensions of 2 (polygon) or higher. The lowest-dimensional duoprisms exist in 4-dimensional space as 4-polytopes, being the Cartesian product of two polygons in 2-dimensional Euclidean space. More precisely, it is the set of points: where and are the sets of the points contained in the respective polygons. Such a duoprism is convex if both bases are convex, and is bounded by prismatic cells. Nomenclature Four-dimensional duoprisms are considered to be prismatic 4-polytopes. A duoprism constructed from two regular polygons of the same edge length is a uniform duoprism. A duoprism made of n-polygons and m-polygons is named by prefixing 'duoprism' with the names of the base polygons, for example: a triangular-pentagonal duoprism is the Cartesian product of a triangle and a pentagon. An alternative, more concise way of specifying a particular duoprism is by prefixing with numbers denoting the base polygons, for example: 3,5-duoprism for the triangular-pentagonal duoprism. Other alternative names: q-gonal-p-gonal prism q-gonal-p-gonal double prism q-gonal-p-gonal hyperprism The term duoprism was coined by George Olshevsky, shortened from double prism. John Horton Conway proposed a similar name, proprism, for product prism, a Cartesian product of two or more polytopes of dimension at least two. The duoprisms are proprisms formed from exactly two polytopes. Example 16-16 duoprism Geometry of 4-dimensional duoprisms A 4-dimensional uniform duoprism is created by the product of a regular n-sided polygon and a regular m-sided polygon with the same edge length. It is bounded by n m-gonal prisms and m n-gonal prisms.
For example, the Cartesian product of a triangle and a hexagon is a duoprism bounded by 6 triangular prisms and 3 hexagonal prisms. When m and n are identical, the resulting duoprism is bounded by 2n identical n-gonal prisms. For example, the Cartesian product of two triangles is a duoprism bounded by 6 triangular prisms. When m and n are both 4, the resulting duoprism is bounded by 8 square prisms (cubes), and is identical to the tesseract. The m-gonal prisms are attached to each other via their m-gonal faces, and form a closed loop. Similarly, the n-gonal prisms are attached to each other via their n-gonal faces, and form a second loop perpendicular to the first. These two loops are attached to each other via their square faces, and are mutually perpendicular. As m and n approach infinity, the corresponding duoprisms approach the duocylinder. As such, duoprisms are useful as non-quadric approximations of the duocylinder. Nets Perspective projections A cell-centered perspective projection makes a duoprism look like a torus, with two sets of orthogonal cells, p-gonal
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Mathematics%20in%20the%20Sciences
The Max Planck Institute for Mathematics in the Sciences (MPI MiS) in Leipzig is a research institute of the Max Planck Society. Founded on March 1, 1996, the institute works on projects which apply mathematics in various areas of natural sciences, in particular physics, biology, chemistry and material science. Research groups Nonlinear algebra (Bernd Sturmfels), Pattern formation, energy landscapes and scaling laws (Felix Otto), Riemannian, Kählerian and algebraic geometry, and neuronal networks (Jürgen Jost), Applied Analysis () Geometry, Groups and Dynamics (Anna Wienhard) Information Theory of Cognitive Systems (Nihat Ay) Stochastic partial differential equations (Benjamin Gess) Mathematical Software (Michael Joswig) Combinatorial Algebraic Geometry (Mateusz Michałek) Deep Learning Theory (Guido Montúfar) Structure of Evolution (Matteo Smerlak) Tensors and Optimization (André Uschmajew) The institute has an extensive visitors programme which has made Leipzig a main place for research in applied mathematics. The MPI MiS is a member of ERCOM (European Research Centres in Mathematics). External links Homepage of the institute Mathematical institutes Mathematics in the Sciences
https://en.wikipedia.org/wiki/Meyer%27s%20theorem
In number theory, Meyer's theorem on quadratic forms states that an indefinite quadratic form Q in five or more variables over the field of rational numbers nontrivially represents zero. In other words, if the equation: Q(x) = 0 has a non-zero real solution, then it has a non-zero rational solution (the converse is obvious). By clearing the denominators, an integral solution x may also be found. Meyer's theorem is usually deduced from the Hasse–Minkowski theorem (which was proved later) and the following statement: A rational quadratic form in five or more variables represents zero over the field Qp of the p-adic numbers for all p. Meyer's theorem is best possible with respect to the number of variables: there are indefinite rational quadratic forms Q in four variables which do not represent zero. One family of examples is given by: Q(x1,x2,x3,x4) = x12 + x22 − p(x32 + x42), where p is a prime number that is congruent to 3 modulo 4. This can be proved by the method of infinite descent using the fact that if the sum of two perfect squares is divisible by such a p then each summand is divisible by p. See also Lattice (group) Oppenheim conjecture References Quadratic forms Theorems in number theory
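The four-variable counterexample can be checked empirically by brute force over a small box of integers (an exhaustive search cannot prove the general statement, which needs the descent argument above, but it illustrates it; the function name and search bound are choices of this sketch):

```python
def search_zero(p, bound):
    """Look for a nontrivial integer zero of
    Q(x) = x1^2 + x2^2 - p*(x3^2 + x4^2) with all |xi| <= bound."""
    r = range(-bound, bound + 1)
    for x1 in r:
        for x2 in r:
            s = x1 * x1 + x2 * x2
            for x3 in r:
                for x4 in r:
                    if (x1, x2, x3, x4) != (0, 0, 0, 0) and s == p * (x3 * x3 + x4 * x4):
                        return (x1, x2, x3, x4)
    return None

print(search_zero(3, 8))  # None: for p = 3 (congruent to 3 mod 4), Q represents zero only trivially
print(search_zero(2, 8))  # a nontrivial zero exists, since 2 is not 3 mod 4 (e.g. 1 + 1 = 2*(1 + 0))
```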
https://en.wikipedia.org/wiki/Karl%20Shell
Karl Shell (born May 10, 1938) is an American theoretical economist, specializing in macroeconomics and monetary economics. Shell received an A.B. in mathematics from Princeton University in 1960. He earned his Ph.D. in economics in 1965 at Stanford University, where he studied under Nobel Prize in Economics winner Kenneth Arrow and Hirofumi Uzawa. Shell is currently Robert Julius Thorne Professor of Economics at Cornell University (succeeding notable economist and airline deregulator Alfred E. Kahn in the Thorne chair). He previously served on the economics faculty at MIT and the University of Pennsylvania. Shell has been editor of the Journal of Economic Theory, generally regarded as the leading journal in theoretical economics, since its inception in 1968. Contributions to economics While Shell has published academic articles on numerous topics in economics, he is primarily known for his contributions in three areas. Between 1966 and 1973, Shell published three papers on inventive activity, increasing returns to scale, industrial organization, and economic growth. This contribution was important in its day, and later influenced the development of "new growth theory." Among others, Paul Romer cited and heavily built upon Shell's work in his seminal papers on endogenous growth theory. Shell also made important contributions to the overlapping generations literature (and was perhaps the first to refer to the overlapping generations model by its modern name). The overlapping generations model is now a workhorse in modern macroeconomics and monetary economics. Karl Shell is also co-inventor (with David Cass) of the concept of sunspot equilibrium (and sunspots). Sunspot equilibrium provides a model for excess market volatility, including bank runs. References Publications Karl Shell, "Toward a Theory of Inventive Activity and Capital Accumulation", American Economic Review, Vol. 56(2), May 1966, 62–68. 
Karl Shell, "A Model of Inventive Activity and Capital Accumulation" in Essays on the Theory of Optimal Economic Growth (K. Shell, ed.), Cambridge, Massachusetts: MIT Press, 1967, Chapter IV, 67–85. Karl Shell (Editor), Essays on the Theory of Optimal Economic Growth, Cambridge, Massachusetts: MIT Press, 1967, (hardcover), 9780262690133(paperback). Karl Shell, "Notes on the Economics of Infinity", Journal of Political Economy, Vol. 79(5), September/October 1971, 1002–1011. Karl Shell and Giorgio P. Szegö (Editor), Mathematical Methods in Investment and Finance, Amsterdam: North-Holland, 1972. (North- Holland), 044410395 (American Elsevier). Karl Shell and Franklin M. Fisher, The Economic Theory of Price Indices: Two Essays on the Effect of Taste, Quality, and Technological Change , New York: Academic Press, 1972. . Karl Shell, "Inventive Activity, Industrial Organization and Economic Growth" in Models of Economic Growth (J.A. Mirrlees and N. Stern, eds.), London: Macmillan, and New York: Halsted (John Wiley & Sons), 1973, 77–100. Karl
https://en.wikipedia.org/wiki/Topological%20manifold
In topology, a branch of mathematics, a topological manifold is a topological space that locally resembles real n-dimensional Euclidean space. Topological manifolds are an important class of topological spaces, with applications throughout mathematics. All manifolds are topological manifolds by definition. Other types of manifolds are formed by adding structure to a topological manifold (e.g. differentiable manifolds are topological manifolds equipped with a differential structure). Every manifold has an "underlying" topological manifold, obtained by simply "forgetting" the added structure. However, not every topological manifold can be endowed with a particular additional structure. For example, the E8 manifold is a topological manifold which cannot be endowed with a differentiable structure. Formal definition A topological space X is called locally Euclidean if there is a non-negative integer n such that every point in X has a neighborhood which is homeomorphic to real n-space Rn. A topological manifold is a locally Euclidean Hausdorff space. It is common to place additional requirements on topological manifolds. In particular, many authors define them to be paracompact or second-countable. In the remainder of this article a manifold will mean a topological manifold. An n-manifold will mean a topological manifold such that every point has a neighborhood homeomorphic to Rn. Examples n-Manifolds The real coordinate space Rn is an n-manifold. Any discrete space is a 0-dimensional manifold. A circle is a compact 1-manifold. A torus and a Klein bottle are compact 2-manifolds (or surfaces). The n-dimensional sphere Sn is a compact n-manifold. The n-dimensional torus Tn (the product of n circles) is a compact n-manifold. Projective manifolds Projective spaces over the reals, complexes, or quaternions are compact manifolds. Real projective space RPn is an n-dimensional manifold. Complex projective space CPn is a 2n-dimensional manifold.
Quaternionic projective space HPn is a 4n-dimensional manifold. Manifolds related to projective space include Grassmannians, flag manifolds, and Stiefel manifolds. Other manifolds Differentiable manifolds are a class of topological manifolds equipped with a differential structure. Lens spaces are a class of differentiable manifolds that are quotients of odd-dimensional spheres. Lie groups are a class of differentiable manifolds equipped with a compatible group structure. The E8 manifold is a topological manifold which cannot be given a differentiable structure. Properties The property of being locally Euclidean is preserved by local homeomorphisms. That is, if X is locally Euclidean of dimension n and f : Y → X is a local homeomorphism, then Y is locally Euclidean of dimension n. In particular, being locally Euclidean is a topological property. Manifolds inherit many of the local properties of Euclidean space. In particular, they are locally compact, locally connected, first countable, loca
https://en.wikipedia.org/wiki/Differentiable%20manifold
In mathematics, a differentiable manifold (also differential manifold) is a type of manifold that is locally similar enough to a vector space to allow one to apply calculus. Any manifold can be described by a collection of charts (atlas). One may then apply ideas from calculus while working within the individual charts, since each chart lies within a vector space to which the usual rules of calculus apply. If the charts are suitably compatible (namely, the transition from one chart to another is differentiable), then computations done in one chart are valid in any other differentiable chart. In formal terms, a differentiable manifold is a topological manifold with a globally defined differential structure. Any topological manifold can be given a differential structure locally by using the homeomorphisms in its atlas and the standard differential structure on a vector space. To induce a global differential structure on the local coordinate systems induced by the homeomorphisms, their compositions on chart intersections in the atlas must be differentiable functions on the corresponding vector space. In other words, where the domains of charts overlap, the coordinates defined by each chart are required to be differentiable with respect to the coordinates defined by every chart in the atlas. The maps that relate the coordinates defined by the various charts to one another are called transition maps. The ability to define such a local differential structure on an abstract space allows one to extend the definition of differentiability to spaces without global coordinate systems. A locally differential structure allows one to define the globally differentiable tangent space, differentiable functions, and differentiable tensor and vector fields. Differentiable manifolds are very important in physics. Special kinds of differentiable manifolds form the basis for physical theories such as classical mechanics, general relativity, and Yang–Mills theory. 
It is possible to develop a calculus for differentiable manifolds. This leads to such mathematical machinery as the exterior calculus. The study of calculus on differentiable manifolds is known as differential geometry. "Differentiability" of a manifold has been given several meanings, including: continuously differentiable, k-times differentiable, smooth (which itself has many meanings), and analytic. History The emergence of differential geometry as a distinct discipline is generally credited to Carl Friedrich Gauss and Bernhard Riemann. Riemann first described manifolds in his famous habilitation lecture before the faculty at Göttingen. He motivated the idea of a manifold by an intuitive process of varying a given object in a new direction, and presciently described the role of coordinate systems and charts in subsequent formal developments: Having constructed the notion of a manifoldness of n dimensions, and found that its true character consists in the property that the determination of position i
https://en.wikipedia.org/wiki/Weak%20convergence%20%28Hilbert%20space%29
In mathematics, weak convergence in a Hilbert space is convergence of a sequence of points in the weak topology. Definition A sequence of points in a Hilbert space H is said to converge weakly to a point x in H if for all y in H. Here, is understood to be the inner product on the Hilbert space. The notation is sometimes used to denote this kind of convergence. Properties If a sequence converges strongly (that is, if it converges in norm), then it converges weakly as well. Since every closed and bounded set is weakly relatively compact (its closure in the weak topology is compact), every bounded sequence in a Hilbert space H contains a weakly convergent subsequence. Note that closed and bounded sets are not in general weakly compact in Hilbert spaces (consider the set consisting of an orthonormal basis in an infinite-dimensional Hilbert space, which is closed and bounded but not weakly compact since it doesn't contain 0). However, bounded and weakly closed sets are weakly compact, so as a consequence every convex bounded closed set is weakly compact. As a consequence of the principle of uniform boundedness, every weakly convergent sequence is bounded. The norm is (sequentially) weakly lower-semicontinuous: if converges weakly to x, then and this inequality is strict whenever the convergence is not strong. For example, infinite orthonormal sequences converge weakly to zero, as demonstrated below. If weakly and , then strongly: If the Hilbert space is finite-dimensional, i.e. a Euclidean space, then weak and strong convergence are equivalent. Example The Hilbert space is the space of the square-integrable functions on the interval equipped with the inner product defined by (see Lp space). The sequence of functions defined by converges weakly to the zero function in , as the integral tends to zero for any square-integrable function on when goes to infinity, by the Riemann–Lebesgue lemma.
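This weak-but-not-strong convergence can be observed numerically by integrating against a fixed test function. The sketch below uses the oscillating sequence f_n(x) = sin(nπx) on [0, 1], a standard instance of such a sequence (this particular choice, the test function, and the midpoint-rule helper are illustrative assumptions, not taken from the article):

```python
import math

def inner(f, g, m=20000):
    # Midpoint-rule approximation of the L^2([0, 1]) inner product.
    h = 1.0 / m
    return sum(f((k + 0.5) * h) * g((k + 0.5) * h) for k in range(m)) * h

g = lambda x: x * (1 - x)          # an arbitrary square-integrable test function
for n in (1, 11, 101, 1001):
    f_n = lambda x, n=n: math.sin(n * math.pi * x)
    print(n, inner(f_n, g))        # the inner products shrink toward 0 (Riemann–Lebesgue)

# The norms do NOT go to zero: the integral of sin(n*pi*x)^2 over [0, 1] is 1/2 for all n >= 1,
# so the convergence is weak but not strong.
print(inner(lambda x: math.sin(math.pi * x), lambda x: math.sin(math.pi * x)))  # ≈ 0.5
```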
Although has an increasing number of 0's in as goes to infinity, it is of course not equal to the zero function for any . Note that does not converge to 0 in the or norms. This dissimilarity is one of the reasons why this type of convergence is considered to be "weak." Weak convergence of orthonormal sequences Consider a sequence which was constructed to be orthonormal, that is, where equals one if m = n and zero otherwise. We claim that if the sequence is infinite, then it converges weakly to zero. A simple proof is as follows. For x ∈ H, we have (Bessel's inequality) where equality holds when {en} is a Hilbert space basis. Therefore (since the series above converges, its corresponding sequence must go to zero) i.e. Banach–Saks theorem The Banach–Saks theorem states that every bounded sequence contains a subsequence and a point x such that converges strongly to x as N goes to infinity. Generalizations The definition of weak convergence can be extended to Banach spaces. A sequence of points in a Banach spac
https://en.wikipedia.org/wiki/Retraction%20%28topology%29
In topology, a branch of mathematics, a retraction is a continuous mapping from a topological space into a subspace that preserves the position of all points in that subspace. The subspace is then called a retract of the original space. A deformation retraction is a mapping that captures the idea of continuously shrinking a space into a subspace. An absolute neighborhood retract (ANR) is a particularly well-behaved type of topological space. For example, every topological manifold is an ANR. Every ANR has the homotopy type of a very simple topological space, a CW complex. Definitions Retract Let X be a topological space and A a subspace of X. Then a continuous map is a retraction if the restriction of r to A is the identity map on A; that is, for all a in A. Equivalently, denoting by the inclusion, a retraction is a continuous map r such that that is, the composition of r with the inclusion is the identity of A. Note that, by definition, a retraction maps X onto A. A subspace A is called a retract of X if such a retraction exists. For instance, any non-empty space retracts to a point in the obvious way (the constant map yields a retraction). If X is Hausdorff, then A must be a closed subset of X. If is a retraction, then the composition ι∘r is an idempotent continuous map from X to X. Conversely, given any idempotent continuous map we obtain a retraction onto the image of s by restricting the codomain. Deformation retract and strong deformation retract A continuous map is a deformation retraction of a space X onto a subspace A if, for every x in X and a in A, In other words, a deformation retraction is a homotopy between a retraction and the identity map on X. The subspace A is called a deformation retract of X. A deformation retraction is a special case of a homotopy equivalence. A retract need not be a deformation retract. 
For instance, having a single point as a deformation retract of a space X would imply that X is path connected (and in fact that X is contractible). Note: An equivalent definition of deformation retraction is the following. A continuous map is a deformation retraction if it is a retraction and its composition with the inclusion is homotopic to the identity map on X. In this formulation, a deformation retraction carries with it a homotopy between the identity map on X and itself. If, in the definition of a deformation retraction, we add the requirement that for all t in [0, 1] and a in A, then F is called a strong deformation retraction. In other words, a strong deformation retraction leaves points in A fixed throughout the homotopy. (Some authors, such as Hatcher, take this as the definition of deformation retraction.) As an example, the n-sphere is a strong deformation retract of as strong deformation retraction one can choose the map Cofibration and neighborhood deformation retract A map f: A → X of topological spaces is a (Hurewicz) cofibration if it has the homotopy extension property for maps to an
https://en.wikipedia.org/wiki/Space%20mathematics
Space mathematics may refer to: Orbital mechanics Newton's laws of motion Newton's law of universal gravitation Space (mathematics)
https://en.wikipedia.org/wiki/Polish%20Mathematical%20Society
The Polish Mathematical Society () is the main professional society of Polish mathematicians and represents Polish mathematics within the European Mathematical Society (EMS) and the International Mathematical Union (IMU). History The society was established in Kraków, Poland, on 2 April 1919. It was originally called the Mathematical Society in Kraków; the name was changed to the Polish Mathematical Society on 21 April 1920. It was founded by 16 mathematicians, among them Stanisław Zaremba, Franciszek Leja, Alfred Rosenblatt, Stefan Banach and Otto Nikodym. Ever since its foundation, the society's main activity has been to bring mathematicians together by organizing conferences and lectures. The second main activity is the publication of its annals Annales Societatis Mathematicae Polonae, consisting of: Series 1: Commentationes Mathematicae Series 2: Wiadomości Matematyczne ("Mathematical News"), in Polish Series 3: Mathematica Applicanda (formerly Matematyka Stosowana until 2012) Series 4: Fundamenta Informaticae Series 5: Didactica Mathematicae Series 6: Antiquitates Mathematicae Series 7: Delta, in Polish The annals are also known under the Polish name Roczniki Polskiego Towarzystwa Matematycznego and under the English name Polish Mathematical Society Annals. Stefan Banach Prize The Polish Mathematical Society has awarded the Stefan Banach Prize to the following recipients: International Stefan Banach Prize The International Stefan Banach Prize () is an annual award presented by the society to mathematicians for the best doctoral dissertations in the mathematical sciences. Its aim is to "promote and financially support the most promising young researchers" in the field of mathematics. It was founded in 2009 and is named in honour of the renowned Polish mathematician Stefan Banach (1892–1945). The laureates of the award also receive a cash prize of zł 25,000 (c. $6,500).
List of laureates: 2009: Tomasz Elsner 2010: Jakub Gismatullin 2011: Łukasz Pańkowski 2012: Andras Mathe 2013: Marcin Pilipczuk 2014: Dan Petersen 2015: Joonas Ilmavirta 2016: Adam Kanigowski 2017: Anna Szymusiak Presidents of the Polish Mathematical Society See also European Mathematical Society Polish Chemical Society Polish Physical Society References Website of the journal Wiadomości Matematyczne, edited by the Polish Mathematical Society. Retrieved on January 9, 2009. External links Polish Mathematical Society website (in Polish) Scientific societies based in Poland Mathematical societies 1917 establishments in Poland Polish awards
https://en.wikipedia.org/wiki/Gegenbauer%20polynomials
In mathematics, Gegenbauer polynomials or ultraspherical polynomials C_n^(α)(x) are orthogonal polynomials on the interval [−1,1] with respect to the weight function (1 − x^2)^(α−1/2). They generalize Legendre polynomials and Chebyshev polynomials, and are special cases of Jacobi polynomials. They are named after Leopold Gegenbauer. Characterizations A variety of characterizations of the Gegenbauer polynomials are available. The polynomials can be defined in terms of their generating function: The polynomials satisfy the recurrence relation: Gegenbauer polynomials are particular solutions of the Gegenbauer differential equation: When α = 1/2, the equation reduces to the Legendre equation, and the Gegenbauer polynomials reduce to the Legendre polynomials. When α = 1, the equation reduces to the Chebyshev differential equation, and the Gegenbauer polynomials reduce to the Chebyshev polynomials of the second kind. They are given as Gaussian hypergeometric series in certain cases where the series is in fact finite: (Abramowitz & Stegun p. 561). Here (2α)n is the rising factorial. Explicitly, From this it is also easy to obtain the value at unit argument: They are special cases of the Jacobi polynomials: in which represents the rising factorial of . One therefore also has the Rodrigues formula Orthogonality and normalization For a fixed α > −1/2, the polynomials are orthogonal on [−1, 1] with respect to the weight function (Abramowitz & Stegun p. 774) To wit, for n ≠ m, They are normalized by Applications The Gegenbauer polynomials appear naturally as extensions of Legendre polynomials in the context of potential theory and harmonic analysis. The Newtonian potential in Rn has the expansion, valid with α = (n − 2)/2, When n = 3, this gives the Legendre polynomial expansion of the gravitational potential. Similar expressions are available for the expansion of the Poisson kernel in a ball.
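The three-term recurrence mentioned above gives a direct way to evaluate the polynomials. A pure-Python sketch (the helper name is an assumption of this example) using the standard recurrence n C_n = 2x(n + α − 1) C_{n−1} − (n + 2α − 2) C_{n−2}, with the Legendre special case α = 1/2 as a sanity check:

```python
def gegenbauer(n, alpha, x):
    """Evaluate the Gegenbauer polynomial C_n^alpha(x) via the recurrence
    n C_n = 2x(n + alpha - 1) C_{n-1} - (n + 2 alpha - 2) C_{n-2},
    with C_0 = 1 and C_1 = 2 alpha x."""
    if n == 0:
        return 1.0
    c_prev, c_curr = 1.0, 2.0 * alpha * x
    for k in range(2, n + 1):
        c_prev, c_curr = c_curr, (2.0 * x * (k + alpha - 1) * c_curr
                                  - (k + 2.0 * alpha - 2) * c_prev) / k
    return c_curr

# alpha = 1/2 recovers the Legendre polynomials: P_2(x) = (3x^2 - 1)/2
print(gegenbauer(2, 0.5, 0.6))   # 0.04 = (3*0.36 - 1)/2
```

Setting alpha = 1 instead recovers the Chebyshev polynomials of the second kind, e.g. U_3(x) = 8x^3 − 4x.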
It follows that the quantities are spherical harmonics, when regarded as a function of x only. They are, in fact, exactly the zonal spherical harmonics, up to a normalizing constant. Gegenbauer polynomials also appear in the theory of positive-definite functions. The Askey–Gasper inequality reads In spectral methods for solving differential equations, if a function is expanded in the basis of Chebyshev polynomials and its derivative is represented in a Gegenbauer/ultraspherical basis, then the derivative operator becomes a diagonal matrix, leading to fast banded matrix methods for large problems. See also Rogers polynomials, the q-analogue of Gegenbauer polynomials Chebyshev polynomials Romanovski polynomials References Specific Orthogonal polynomials Special hypergeometric functions
https://en.wikipedia.org/wiki/Axiomatic%20foundations%20of%20topological%20spaces
In the mathematical field of topology, a topological space is usually defined by declaring its open sets. However, this is not necessary, as there are many equivalent axiomatic foundations, each leading to exactly the same concept. For instance, a topological space determines a class of closed sets, of closure and interior operators, and of convergence of various types of objects. Each of these can instead be taken as the primary class of objects, with all of the others (including the class of open sets) directly determined from that new starting point. For example, in Kazimierz Kuratowski's well-known textbook on point-set topology, a topological space is defined as a set together with a certain type of "closure operator," and all other concepts are derived therefrom. Likewise, the neighborhood-based axioms (in the context of Hausdorff spaces) can be retraced to Felix Hausdorff's original definition of a topological space in Grundzüge der Mengenlehre. Many different textbooks use many different inter-dependences of concepts to develop point-set topology. The result is always the same collection of objects: open sets, closed sets, and so on. For many practical purposes, the question of which foundation is chosen is irrelevant, as long as the meaning and interrelation between objects (many of which are given in this article), which are the same regardless of choice of development, are understood. However, there are cases where it can be useful to have flexibility. For instance, there are various natural notions of convergence of measures, and it is not immediately clear whether they arise from a topological structure or not. Such questions are greatly clarified by the topological axioms based on convergence. 
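On a finite carrier set, the open-set axioms given in the next section can be checked mechanically, which makes the "topology as a distinguished family of subsets" viewpoint concrete. A small sketch (the function and variable names are assumptions of this example; for a finite family, closure under pairwise unions and intersections already gives closure under all finite ones):

```python
from itertools import combinations

def is_topology(X, T):
    """Check the open-set axioms for a family T of subsets of X:
    the empty set and X belong to T, and T is closed under unions
    and pairwise intersections (arbitrary unions reduce to pairwise
    ones here, since T is finite)."""
    T = {frozenset(s) for s in T}
    if frozenset() not in T or frozenset(X) not in T:
        return False
    for a, b in combinations(T, 2):
        if a | b not in T or a & b not in T:
            return False
    return True

print(is_topology({1, 2, 3}, [set(), {1}, {1, 2}, {1, 2, 3}]))  # True
print(is_topology({1, 2, 3}, [set(), {1}, {2}, {1, 2, 3}]))     # False: {1} ∪ {2} is missing
```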
Standard definitions via open sets A topological space is a set together with a collection of subsets of satisfying: The empty set and are in The union of any collection of sets in is also in The intersection of any pair of sets in is also in Equivalently, the intersection of any finite collection of sets in is also in Given a topological space one refers to the elements of as the open sets of and it is common only to refer to in this way, or by the label topology. Then one makes the following secondary definitions: Given a second topological space a function is said to be continuous if and only if for every open subset of one has that is an open subset of A subset of is closed if and only if its complement is open. Given a subset of the closure is the set of all points such that any open set containing such a point must intersect Given a subset of the interior is the union of all open sets contained in Given an element of one says that a subset is a neighborhood of if and only if is contained in an open subset of which is also a subset of Some textbooks use "neighborhood of " to instead refer to an open set containing One says that a net converges to a point of if for any open set containi
https://en.wikipedia.org/wiki/Borel%20equivalence%20relation
In mathematics, a Borel equivalence relation on a Polish space X is an equivalence relation on X that is a Borel subset of X × X (in the product topology). Given Borel equivalence relations E and F on Polish spaces X and Y respectively, one says that E is Borel reducible to F, in symbols E ≤B F, if and only if there is a Borel function Θ : X → Y such that for all x,x' ∈ X, one has x E x' ⇔ Θ(x) F Θ(x'). Conceptually, if E is Borel reducible to F, then E is "not more complicated" than F, and the quotient space X/E has a lesser or equal "Borel cardinality" than Y/F, where "Borel cardinality" is like cardinality except for a definability restriction on the witnessing mapping. Kuratowski's theorem A measure space X is called a standard Borel space if it is Borel-isomorphic to a Borel subset of a Polish space. Kuratowski's theorem then states that two standard Borel spaces X and Y are Borel-isomorphic iff |X| = |Y|. See also References Kanovei, Vladimir; Borel equivalence relations. Structure and classification. University Lecture Series, 44. American Mathematical Society, Providence, RI, 2008. x+240 pp. Descriptive set theory Equivalence (mathematics)
https://en.wikipedia.org/wiki/Abstrakt%20Algebra
Abstrakt Algebra was a Swedish experimental metal band with influences from power metal and doom metal. It was founded by bassist Leif Edling in 1994, shortly after his main project Candlemass split up. They made one album, but Edling had already started working on a second album with a different line-up. However, due to the commercial failure of Abstrakt Algebra, Edling reformed Candlemass while taking with him some of the ideas for that second album, as well as drummer Jejo Perkovic. These ideas materialised on the Dactylis Glomerata album, and with that Abstrakt Algebra came to an end. That second album, called Abstrakt Algebra II, was later included as a bonus disc in the 2006 re-release of Dactylis Glomerata. Mats Levén later appeared as the singer of Krux, another band by Edling, which has a similar take on the experimentation Edling started with Abstrakt Algebra. Members Current members Mats Levén – vocals Patrik Instedt – guitar Leif Edling – bass Jejo Perkovic – drums Carl Westholm – keyboards Former members Mike Wead – guitar Simon Johansson – guitar Discography Abstrakt Algebra (1995) Abstrakt Algebra II (2008) References Swedish heavy metal musical groups Musical groups established in 1994 Swedish musical quintets 1994 establishments in Sweden
https://en.wikipedia.org/wiki/Moving-knife%20procedure
In the mathematics of social science, and especially game theory, a moving-knife procedure is a type of solution to the fair division problem. The canonical example is the division of a cake using a knife. The simplest example is a moving-knife equivalent of the I cut, you choose scheme, first described by A. K. Austin as a prelude to his own procedure: One player moves the knife across the cake, conventionally from left to right. The cake is cut when either player calls "stop". If each player calls stop when he or she perceives the knife to be at the 50-50 point, then the first player to call stop will produce an envy-free division if the caller gets the left piece and the other player gets the right piece. (This procedure is not necessarily efficient.) Generalizing this scheme to more than two players cannot be done by a discrete procedure without sacrificing envy-freeness. Examples of moving-knife procedures include The Stromquist moving-knives procedure The Austin moving-knife procedures The Levmore–Cook moving-knives procedure The Robertson–Webb rotating-knife procedure The Dubins–Spanier moving-knife procedure The Webb moving-knife procedure References Cake-cutting
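The two-player scheme above can be simulated with explicit valuation measures. In the sketch below the two players' valuations of the cake [0, 1] are hypothetical (densities 1 and 2x, chosen for this example); each player would call "stop" at the point where the piece to the left is worth exactly half to them, and the earlier call fixes the cut:

```python
def halfway_point(cdf, lo=0.0, hi=1.0, tol=1e-12):
    # Bisection for the point c with cdf(c) = 1/2, i.e. where the player
    # values the left piece at exactly half of the whole cake.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cdf(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

cdf1 = lambda x: x            # player 1: uniform valuation (density 1)
cdf2 = lambda x: x * x        # player 2: valuation with density 2x
c1, c2 = halfway_point(cdf1), halfway_point(cdf2)
cut = min(c1, c2)             # the first player to call "stop" fixes the cut
caller = 1 if c1 <= c2 else 2
print(cut, caller)            # ≈ 0.5, player 1 calls first

# Envy-freeness check: the caller takes [0, cut], the other player takes [cut, 1].
print(cdf1(cut))              # ≈ 0.5: the caller values their piece at exactly half
print(1 - cdf2(cut))          # ≈ 0.75: the other player values their piece at more than half
```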
https://en.wikipedia.org/wiki/Central%20binomial%20coefficient
In mathematics, the nth central binomial coefficient is the particular binomial coefficient They are called central since they show up exactly in the middle of the even-numbered rows in Pascal's triangle. The first few central binomial coefficients starting at n = 0 are: 1, 2, 6, 20, 70, 252, 924, 3432, 12870, 48620, ...; Combinatorial interpretations and other properties The central binomial coefficient is the number of arrangements where there are an equal number of two types of objects. For example, when , the binomial coefficient is equal to 6, and there are six arrangements of two copies of A and two copies of B: AABB, ABAB, ABBA, BAAB, BABA, BBAA. The same central binomial coefficient is also the number of words of length 2n made up of A and B where there are never more B than A at any point as one reads from left to right. For example, when , there are six words of length 4 in which each prefix has at least as many copies of A as of B: AAAA, AAAB, AABA, AABB, ABAA, ABAB. The number of factors of 2 in is equal to the number of 1s in the binary representation of n. As a consequence, 1 is the only odd central binomial coefficient. Generating function The ordinary generating function for the central binomial coefficients is This can be proved using the binomial series and the relation where is a generalized binomial coefficient. The central binomial coefficients have exponential generating function where I0 is a modified Bessel function of the first kind. The generating function of the squares of the central binomial coefficients can be written in terms of the complete elliptic integral of the first kind: Asymptotic growth Simple bounds that immediately follow from are The asymptotic behavior can be described more precisely: Related sequences The closely related Catalan numbers Cn are given by: A slight generalization of central binomial coefficients is to take them as , with appropriate real numbers n, where is the gamma function and is the beta function.
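Both the opening list of values and the divisibility fact (the number of factors of 2 in the central binomial coefficient equals the popcount of n, a consequence of Kummer's theorem) are easy to check with the standard library; the helper name v2 is an assumption of this sketch:

```python
from math import comb

def v2(m):
    # 2-adic valuation: the exponent of 2 dividing m
    e = 0
    while m % 2 == 0:
        m //= 2
        e += 1
    return e

# The number of factors of 2 in C(2n, n) equals the number of 1 bits of n;
# in particular C(2n, n) is even for every n >= 1, so 1 is the only odd value.
for n in range(1, 64):
    assert v2(comb(2 * n, n)) == bin(n).count("1")

print([comb(2 * n, n) for n in range(6)])  # [1, 2, 6, 20, 70, 252]
```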
The powers of two that divide the central binomial coefficients are given by Gould's sequence, whose nth element is the number of odd integers in row n of Pascal's triangle. Squaring the generating function gives 1/(1 - 4x) = (sum_{n>=0} C(2n, n) x^n)^2. Comparing the coefficients of x^n gives sum_{k=0}^{n} C(2k, k) C(2n - 2k, n - k) = 4^n. For example, for n = 2, this is C(0, 0) C(4, 2) + C(2, 1) C(2, 1) + C(4, 2) C(0, 0) = 6 + 4 + 6 = 16 = 4^2. Other information Half the central binomial coefficient, C(2n, n)/2 = C(2n - 1, n - 1) (for n >= 1), is seen in Wolstenholme's theorem. By the Erdős squarefree conjecture, proved in 1996, no central binomial coefficient with n > 4 is squarefree. C(2n, n) is the sum of the squares of the n-th row of Pascal's Triangle: C(2n, n) = sum_{k=0}^{n} C(n, k)^2. For example, C(8, 4) = 70 = 1 + 16 + 36 + 16 + 1. Erdős uses central binomial coefficients extensively in his proof of Bertrand's postulate. Another noteworthy fact is that the power of 2 dividing (n + 1)(n + 2) ... (2n) is exactly n. See also Central trinomial coefficient References External links Factorial and binomial topics
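Several of the properties above are easy to check numerically. The following sketch (plain Python, relying only on the standard library's math.comb; the helper names are ours, not from the article) verifies the first few values, the binary-digit rule for the power of 2, and the row-of-squares identity:

```python
from math import comb

def central_binomial(n):
    """Central binomial coefficient C(2n, n)."""
    return comb(2 * n, n)

# First few values starting at n = 0.
values = [central_binomial(n) for n in range(10)]

def two_adic_valuation(m):
    """Number of factors of 2 in the positive integer m."""
    v = 0
    while m % 2 == 0:
        v += 1
        m //= 2
    return v

# The power of 2 dividing C(2n, n) equals the number of 1s in binary n.
checks = all(two_adic_valuation(central_binomial(n)) == bin(n).count("1")
             for n in range(1, 200))

# C(2n, n) is the sum of squares of row n of Pascal's triangle (n = 4).
row_square_sum = sum(comb(4, k) ** 2 for k in range(5))
```

Running this confirms the sequence values listed above and that row_square_sum equals C(8, 4) = 70.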
https://en.wikipedia.org/wiki/Pascal%27s%20rule
In mathematics, Pascal's rule (or Pascal's formula) is a combinatorial identity about binomial coefficients. It states that for positive natural numbers n and k, C(n - 1, k - 1) + C(n - 1, k) = C(n, k), where C(n, k) is a binomial coefficient; one interpretation of C(n, k) is as the coefficient of the x^k term in the expansion of (1 + x)^n. There is no restriction on the relative sizes of n and k, since, if k > n, the value of the binomial coefficient C(n, k) is zero and the identity remains valid. Pascal's rule can also be viewed as a statement that the formula (n, k) -> C(n, k) solves the linear two-dimensional difference equation N(n, k) = N(n - 1, k) + N(n - 1, k - 1), N(n, 0) = N(n, n) = 1, over the natural numbers. Thus, Pascal's rule is also a statement about a formula for the numbers appearing in Pascal's triangle. Pascal's rule can also be generalized to apply to multinomial coefficients. Combinatorial proof Pascal's rule has an intuitive combinatorial meaning, that is clearly expressed in this counting proof. Proof. Recall that C(n, k) equals the number of subsets with k elements from a set with n elements. Suppose one particular element is uniquely labeled X in a set with n elements. To construct a subset of k elements containing X, include X and choose k − 1 elements from the remaining n − 1 elements in the set. There are C(n - 1, k - 1) such subsets. To construct a subset of k elements not containing X, choose k elements from the remaining n − 1 elements in the set. There are C(n - 1, k) such subsets. Every subset of k elements either contains X or not. The total number of subsets with k elements in a set of n elements is the sum of the number of subsets containing X and the number of subsets that do not contain X, C(n - 1, k - 1) + C(n - 1, k). This equals C(n, k); therefore, C(n, k) = C(n - 1, k - 1) + C(n - 1, k). Algebraic proof Alternatively, the algebraic derivation of the binomial case follows: C(n - 1, k) + C(n - 1, k - 1) = (n - 1)!/(k! (n - 1 - k)!) + (n - 1)!/((k - 1)! (n - k)!) = (n - 1)! [(n - k) + k]/(k! (n - k)!) = n!/(k! (n - k)!) = C(n, k). Generalization Pascal's rule can be generalized to multinomial coefficients. For any integer p such that p >= 2 and nonnegative integers k_1, ..., k_p with k_1 + k_2 + ... + k_p = n >= 1, sum_{i=1}^{p} C(n - 1; k_1, ..., k_i - 1, ..., k_p) = C(n; k_1, ..., k_p), where C(n; k_1, ..., k_p) = n!/(k_1! k_2! ... k_p!) is the coefficient of the x_1^(k_1) ... x_p^(k_p) term in the expansion of (x_1 + x_2 + ... + x_p)^n. The algebraic derivation for this general case is as follows. Let p be an integer such that p >= 2, and let k_1, ..., k_p be nonnegative integers with k_1 + k_2 + ... + k_p = n >= 1.
Then sum_{i=1}^{p} C(n - 1; k_1, ..., k_i - 1, ..., k_p) = sum_{i=1}^{p} (n - 1)!/(k_1! ... (k_i - 1)! ... k_p!) = (n - 1)!/(k_1! k_2! ... k_p!) sum_{i=1}^{p} k_i = n (n - 1)!/(k_1! k_2! ... k_p!) = n!/(k_1! k_2! ... k_p!) = C(n; k_1, ..., k_p). See also Pascal's triangle Hockey-stick identity References Bibliography Merris, Russell. Combinatorics. John Wiley & Sons. 2003 External links Combinatorics Mathematical identities Articles containing proofs
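As a quick sanity check, the identity and its multinomial generalization can be verified computationally; this is an illustrative sketch (the function names are ours, not part of the article):

```python
from math import comb, factorial

def multinomial(ks):
    """Multinomial coefficient n!/(k1! k2! ... kp!) with n = sum(ks)."""
    out = factorial(sum(ks))
    for k in ks:
        out //= factorial(k)
    return out

# Pascal's rule: C(n-1, k-1) + C(n-1, k) == C(n, k) for 1 <= k <= n.
pascal_ok = all(comb(n - 1, k - 1) + comb(n - 1, k) == comb(n, k)
                for n in range(1, 30) for k in range(1, n + 1))

# Multinomial generalization for (k1, k2, k3) = (2, 3, 1), n = 6:
# the three reduced coefficients should sum to multinomial((2, 3, 1)).
ks = (2, 3, 1)
terms = []
for i in range(len(ks)):
    reduced = list(ks)
    reduced[i] -= 1
    terms.append(multinomial(reduced) if reduced[i] >= 0 else 0)
multi_ok = sum(terms) == multinomial(ks)
```

For the chosen example the three reduced terms are 20, 30 and 10, which indeed sum to 6!/(2! 3! 1!) = 60.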
https://en.wikipedia.org/wiki/Degree%20distribution
In the study of graphs and networks, the degree of a node in a network is the number of connections it has to other nodes and the degree distribution is the probability distribution of these degrees over the whole network. Definition The degree of a node in a network (sometimes referred to incorrectly as the connectivity) is the number of connections or edges the node has to other nodes. If a network is directed, meaning that edges point in one direction from one node to another node, then nodes have two different degrees, the in-degree, which is the number of incoming edges, and the out-degree, which is the number of outgoing edges. The degree distribution P(k) of a network is then defined to be the fraction of nodes in the network with degree k. Thus if there are n nodes in total in a network and nk of them have degree k, we have P(k) = nk/n. The same information is also sometimes presented in the form of a cumulative degree distribution C(k), the fraction of nodes with degree smaller than k, or the complementary cumulative degree distribution 1 − C(k), the fraction of nodes with degree greater than or equal to k. Observed degree distributions The degree distribution is very important in studying both real networks, such as the Internet and social networks, and theoretical networks. The simplest network model, the Erdős–Rényi random graph, in which each pair of n nodes is independently connected with probability p (or not, with probability 1 − p), has a binomial distribution of degrees: P(k) = (n − 1 choose k) p^k (1 − p)^(n − 1 − k) (or Poisson in the limit of large n, if the average degree is held fixed). Most networks in the real world, however, have degree distributions very different from this. Most are highly right-skewed, meaning that a large majority of nodes have low degree but a small number, known as "hubs", have high degree.
Some networks, notably the Internet, the World Wide Web, and some social networks were argued to have degree distributions that approximately follow a power law: P(k) ~ k^(−γ), where γ is a constant. Such networks are called scale-free networks and have attracted particular attention for their structural and dynamical properties. However, a survey of a wide range of real world networks suggests that scale-free networks are rare when assessed using statistically rigorous measures. Some researchers have disputed these findings, arguing that the definitions used in the study are inappropriately strict, while others have argued that the precise functional form of the degree distribution is less important than knowing whether the degree distribution is fat-tailed or not. The over-interpretation of specific forms of the degree distribution has also been criticised for failing to consider how networks may evolve over time. Excess degree distribution Excess degree distribution is the probability distribution, for a node reached by following an edge, of the number of other edges attached to that
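To make the definitions concrete, here is a small, self-contained sketch that samples an Erdős–Rényi graph, computes its empirical degree distribution P(k) = nk/n, and compares the mean degree against the expected (n − 1)p; the parameter values are purely illustrative:

```python
import random
from collections import Counter
from math import comb

def erdos_renyi_degrees(n, p, rng):
    """Degree sequence of a G(n, p) random graph."""
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:   # each pair connected with probability p
                deg[i] += 1
                deg[j] += 1
    return deg

rng = random.Random(0)
n, p = 2000, 0.003
deg = erdos_renyi_degrees(n, p, rng)
counts = Counter(deg)
P = {k: counts[k] / n for k in counts}   # empirical P(k) = n_k / n

def binom_pmf(k):
    """Theoretical degree distribution of G(n, p)."""
    return comb(n - 1, k) * p**k * (1 - p) ** (n - 1 - k)
```

The empirical mean degree should lie close to (n − 1)p ≈ 6, and the empirical distribution close to the binomial pmf.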
https://en.wikipedia.org/wiki/Fixed-point%20index
In mathematics, the fixed-point index is a concept in topological fixed-point theory, and in particular Nielsen theory. The fixed-point index can be thought of as a multiplicity measurement for fixed points. The index can be easily defined in the setting of complex analysis: Let f(z) be a holomorphic mapping on the complex plane, and let z0 be a fixed point of f. Then the function f(z) − z is holomorphic, and has an isolated zero at z0. We define the fixed-point index of f at z0, denoted i(f, z0), to be the multiplicity of the zero of the function f(z) − z at the point z0. In real Euclidean space, the fixed-point index is defined as follows: If x0 is an isolated fixed point of f, then let g be the function defined by g(x) = (x − f(x))/‖x − f(x)‖. Then g has an isolated singularity at x0, and maps the boundary of some deleted neighborhood of x0 to the unit sphere. We define i(f, x0) to be the Brouwer degree of the mapping induced by g on some suitably chosen small sphere around x0. The Lefschetz–Hopf theorem The importance of the fixed-point index is largely due to its role in the Lefschetz–Hopf theorem, which states: sum over x in Fix(f) of i(f, x) = Λf, where Fix(f) is the set of fixed points of f, and Λf is the Lefschetz number of f. Since the quantity on the left-hand side of the above is clearly zero when f has no fixed points, the Lefschetz–Hopf theorem trivially implies the Lefschetz fixed-point theorem. Notes References Robert F. Brown: Fixed Point Theory, in: I. M. James, History of Topology, Amsterdam 1999, , 271–299. Fixed points (mathematics) Topology
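In the holomorphic setting, the index is just the multiplicity of the zero of f(z) − z, which for polynomial maps can be read off the coefficients. A minimal sketch (the helper function is ours, not standard API):

```python
def zero_multiplicity(coeffs):
    """Multiplicity of the zero at 0 of the polynomial
    c0 + c1*z + c2*z^2 + ... given by coeffs = [c0, c1, c2, ...]."""
    for m, c in enumerate(coeffs):
        if c != 0:
            return m
    raise ValueError("zero polynomial")

# The fixed-point index of f at a fixed point z0 is the multiplicity
# of the zero of g(z) = f(z) - z at z0; shift coordinates so z0 = 0.
# Example: f(z) = z + z^3 fixes 0, g(z) = z^3, so the index is 3.
index_cubic = zero_multiplicity([0, 0, 0, 1])
# Example: f(z) = 2z fixes 0, g(z) = z, a simple zero of index 1.
index_simple = zero_multiplicity([0, 1])
```
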
https://en.wikipedia.org/wiki/Fermat%27s%20factorization%20method
Fermat's factorization method, named after Pierre de Fermat, is based on the representation of an odd integer as the difference of two squares: N = a^2 − b^2. That difference is algebraically factorable as (a + b)(a − b); if neither factor equals one, it is a proper factorization of N. Each odd number has such a representation. Indeed, if N = cd is a factorization of N, then N = ((c + d)/2)^2 − ((c − d)/2)^2. Since N is odd, c and d are also odd, so those halves are integers. (A multiple of four is also a difference of squares: let c and d be even.) In its simplest form, Fermat's method might be even slower than trial division (worst case). Nonetheless, the combination of trial division and Fermat's is more effective than either by itself. Basic method One tries various values of a, hoping that a^2 − N = b^2, a square. FermatFactor(N): // N should be odd a ← ceiling(sqrt(N)) b2 ← a*a - N repeat until b2 is a square: a ← a + 1 b2 ← a*a - N // equivalently: // b2 ← b2 + 2*a + 1 // a ← a + 1 return a - sqrt(b2) // or a + sqrt(b2) For example, to factor N = 5959, the first try for a is the square root of 5959 rounded up to the next integer, which is 78. Then, b2 = 78^2 − 5959 = 125. Since 125 is not a square, a second try is made by increasing the value of a by 1. The second attempt also fails, because 282 is again not a square. The third try produces the perfect square of 441. So, a = 80, b = 21, and the factors of 5959 are a − b = 59 and a + b = 101. Suppose N has more than two prime factors. That procedure first finds the factorization with the least values of a and b. That is, a + b is the smallest factor ≥ the square-root of N, and so a − b is the largest factor ≤ root-N. If the procedure finds N = 1 · N, that shows that N is prime. For N = cd, let c be the largest subroot factor, and write d = N/c. Then a = (c + d)/2, so the number of steps is approximately (c + d)/2 − sqrt(N) = (sqrt(d) − sqrt(c))^2/2. If N is prime (so that c = 1, d = N), one needs (N + 1)/2 − sqrt(N) steps. This is a bad way to prove primality. But if N has a factor close to its square root, the method works quickly. More precisely, if c differs less than (4N)^(1/4) from sqrt(N), the method requires only one step; this is independent of the size of N.
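The pseudocode above translates directly to Python; the following sketch uses the standard library's integer square root (math.isqrt) and reproduces the worked example:

```python
from math import isqrt

def fermat_factor(n):
    """Fermat's method for odd n: search for a with a^2 - n a perfect
    square, then return the factor pair (a - b, a + b)."""
    assert n % 2 == 1
    a = isqrt(n)
    if a * a < n:
        a += 1              # ceiling of sqrt(n)
    b2 = a * a - n
    while isqrt(b2) ** 2 != b2:   # repeat until b2 is a square
        a += 1
        b2 = a * a - n
    b = isqrt(b2)
    return a - b, a + b

# The worked example from the text: 5959 = 59 * 101 after three tries.
factors = fermat_factor(5959)
```
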
Fermat's and trial division Consider trying to factor the prime number N = 2345678917, but also compute b and a − b throughout. Going up from sqrt(N) ≈ 48432.1, we can tabulate: a: 48433, 48434, 48435, 48436; b2 = a^2 − N: 76572, 173439, 270308, 367179; b = sqrt(b2): 276.7, 416.5, 519.9, 606.0; a − b: 48156.3, 48017.5, 47915.1, 47830.0. In practice, one wouldn't bother with that last row until b is an integer. But observe that if N had a subroot factor above a − b ≈ 47830, Fermat's method would have found it already. Trial division would normally try up to 48,432; but after only four Fermat steps, we need only divide up to 47830, to find a factor or prove primality. This all suggests a combined factoring method. Choose some bound amax > sqrt(N); use Fermat's method to search for factors between amax − sqrt(amax^2 − N) and sqrt(N). This gives a bound for trial division which is amax − sqrt(amax^2 − N). In the above example, with amax = 48436 the bound for trial division is 47830. A reasonable choice could be amax = 55000, giving a bound of 28937. In this regard, Fermat's method gives diminishing returns. One would surely stop before this point: Sieve improvement When considering the table for N = 2345678917, one can quickly tell that none of the values of b2 are squares: It is not necessary to compute
https://en.wikipedia.org/wiki/Evolutionary%20graph%20theory
Evolutionary graph theory is an area of research lying at the intersection of graph theory, probability theory, and mathematical biology. Evolutionary graph theory is an approach to studying how topology affects evolution of a population. That the underlying topology can substantially affect the results of the evolutionary process is seen most clearly in a paper by Erez Lieberman, Christoph Hauert and Martin Nowak. In evolutionary graph theory, individuals occupy vertices of a weighted directed graph and the weight wij of an edge from vertex i to vertex j denotes the probability of i replacing j. The weight corresponds to the biological notion of fitness where fitter types propagate more readily. One property studied on graphs with two types of individuals is the fixation probability, which is defined as the probability that a single, randomly placed mutant of type A will replace a population of type B. According to the isothermal theorem, a graph has the same fixation probability as the corresponding Moran process if and only if it is isothermal, that is, the sum of all weights that lead into a vertex is the same for all vertices. Thus, for example, a complete graph with equal weights describes a Moran process. The fixation probability is ρ = (1 − r^(−1))/(1 − r^(−N)), where r is the relative fitness of the invading type and N is the population size. Graphs can be classified into amplifiers of selection and suppressors of selection. If the fixation probability of a single advantageous mutation is higher than the fixation probability of the corresponding Moran process then the graph is an amplifier, otherwise a suppressor of selection. One example of a suppressor of selection is a linear process where only vertex i − 1 can replace vertex i (but not the other way around). In this case the fixation probability is 1/N (where N is the number of vertices) since this is the probability that the mutation arises in the first vertex, which will eventually replace all the other ones.
Since 1/N < (1 − r^(−1))/(1 − r^(−N)) for all r greater than 1, this graph is by definition a suppressor of selection. Evolutionary graph theory may also be studied in a dual formulation, as a coalescing random walk, or as a stochastic process. We may consider the mutant population on a graph as a random walk between absorbing barriers representing mutant extinction and mutant fixation. For highly symmetric graphs, we can then use martingales to find the fixation probability as illustrated by Monk (2018). Also evolutionary games can be studied on graphs where again an edge between i and j means that these two individuals will play a game against each other. Closely related stochastic processes include the voter model, which was introduced by Clifford and Sudbury (1973) and independently by Holley and Liggett (1975), and which has been studied extensively. Bibliography References External links A virtual laboratory for studying evolution on graphs: Further reading Evolution Application-specific graphs Evolutionary dynamics
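These two fixation probabilities are easy to compare numerically; the following sketch (with illustrative parameter values) checks that the linear chain's 1/N lies below the Moran probability whenever r > 1:

```python
def moran_fixation(r, N):
    """Fixation probability of a single mutant with relative fitness r
    in the Moran process on a population of size N (r != 1)."""
    return (1 - 1 / r) / (1 - 1 / r**N)

def chain_fixation(N):
    """Linear chain: the mutant fixes only if it arises at the head."""
    return 1 / N

r, N = 2.0, 10
rho_moran = moran_fixation(r, N)
rho_chain = chain_fixation(N)
suppressor = rho_chain < rho_moran   # holds for every r > 1
```

For r = 2 and N = 10 the Moran probability is 512/1023 ≈ 0.5005, far above the chain's 1/10.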
https://en.wikipedia.org/wiki/Emacs%20Speaks%20Statistics
Emacs Speaks Statistics (ESS) is an Emacs package for programming in statistical languages. It adds two types of modes to emacs: (1) ESS modes for editing statistical languages like R, SAS and Julia; and (2) inferior ESS (iESS) modes for interacting with statistical processes like R and SAS. Modes of types (1) and (2) work seamlessly together. In addition, modes of type (1) provide the capability to submit a batch job for statistical packages like SAS, BUGS or JAGS when an interactive session is unwanted due to the potentially lengthy time required for the task to complete. With Emacs Speaks Statistics, the user can conveniently edit statistical language commands in one emacs buffer, and execute the code in a second. There are a number of advantages of doing data analysis using Emacs/ESS in this way, rather than interacting with R, S-PLUS or other software directly. First, as indicated above, ESS provides a convenient way of writing and executing code without frequently switching between programs. Second, it encourages the good practice of keeping a record of one's data analysis, equivalent to working from do-files in Stata. Third, since emacs is also an able editor of LaTeX files, it facilitates the integration of data analysis and written text with Sweave. External links ESS is freely available for download from the ESS website, which also contains documentation and links to a mailing list. Emacs modes Free R (programming language) software Free statistical software
https://en.wikipedia.org/wiki/Prewellordering
In set theory, a prewellordering on a set X is a preorder ≤ on X (a transitive and reflexive relation on X) that is strongly connected (meaning that any two points are comparable) and well-founded in the sense that the induced relation x < y defined by "x ≤ y and not y ≤ x" is a well-founded relation. Prewellordering on a set A prewellordering ≤ on a set X is a homogeneous binary relation on X that satisfies the following conditions: Reflexivity: x ≤ x for all x in X. Transitivity: if x ≤ y and y ≤ z then x ≤ z, for all x, y, z in X. Total/Strongly connected: x ≤ y or y ≤ x for all x, y in X. Well-foundedness: for every non-empty subset S of X there exists some m in S such that m ≤ s for all s in S. This condition is equivalent to the induced strict preorder x < y, defined by "x ≤ y and not y ≤ x", being a well-founded relation. A homogeneous binary relation ≤ on X is a prewellordering if and only if there exists a surjection π from X into a well-ordered set (W, ⊑) such that for all x, y in X, x ≤ y if and only if π(x) ⊑ π(y). Examples Given a set A, the binary relation on the set X of all finite subsets of A defined by S ≤ T if and only if |S| ≤ |T| (where |·| denotes the set's cardinality) is a prewellordering. Properties If ≤ is a prewellordering on X, then the relation ~ defined by "x ~ y if and only if x ≤ y and y ≤ x" is an equivalence relation on X, and ≤ induces a wellordering on the quotient X/~. The order-type of this induced wellordering is an ordinal, referred to as the length of the prewellordering. A norm on a set X is a map φ from X into the ordinals. Every norm induces a prewellordering; if φ is a norm, the associated prewellordering is given by x ≤ y if and only if φ(x) ≤ φ(y). Conversely, every prewellordering is induced by a unique regular norm (a norm φ is regular if, for any x in X and any ordinal α < φ(x), there is y in X such that φ(y) = α). Prewellordering property If Γ is a pointclass of subsets of some collection F of Polish spaces, F closed under Cartesian product, and if ≤ is a prewellordering of some subset P of some element X of F, then ≤ is said to be a Γ-prewellordering of P if the relations <* and ≤* are elements of Γ, where for x, y in X: x <* y if and only if x is in P and (y is not in P, or both x ≤ y and not y ≤ x); x ≤* y if and only if x is in P and (y is not in P or x ≤ y). Γ is said to have the prewellordering property if every set in Γ admits a Γ-prewellordering.
The prewellordering property is related to the stronger scale property; in practice, many pointclasses having the prewellordering property also have the scale property, which allows drawing stronger conclusions. Examples Π^1_1 and Σ^1_2 both have the prewellordering property; this is provable in ZFC alone. Assuming sufficient large cardinals, for every n, Σ^1_(2n) and Π^1_(2n+1) have the prewellordering property. Consequences Reduction If Γ is an adequate pointclass with the prewellordering property, then it also has the reduction property: For any space X in F and any sets A and B, both in Γ, the union A ∪ B may be partitioned into sets A* and B*, both in Γ, such that A* is a subset of A and B* is a subset of B. Separation If Γ is an adequate pointclass whose dual pointclass has the prewellordering property, then Γ has the separation property: For any space X in F and any disjoint sets A and B, both in Γ, there is a set C such that both C and its complement X \ C are in Γ, with A a subset of C and B disjoint from C. For example, Π^1_1 has the prewellordering property, so Σ^1_1 has the separation property. This means that if A and B are disjoint analytic subsets of some Polish space
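The finite-subsets example above can be checked directly on a small instance; the sketch below (helper names are ours) verifies totality and transitivity of the cardinality prewellordering and exhibits its regular norm, which simply sends each set to its cardinality:

```python
from itertools import combinations

def finite_subsets(xs):
    """All subsets of a finite iterable, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

# The prewellordering from the text: S <= T iff |S| <= |T|.
def leq(A, B):
    return len(A) <= len(B)

subsets = finite_subsets(range(3))

# The associated regular norm is the cardinality map; equivalent sets
# (same norm) collapse to one point of the well-ordered quotient.
norms = sorted({len(A) for A in subsets})

# Check totality and transitivity on this finite instance.
total = all(leq(A, B) or leq(B, A) for A in subsets for B in subsets)
transitive = all(not (leq(A, B) and leq(B, C)) or leq(A, C)
                 for A in subsets for B in subsets for C in subsets)
```
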
https://en.wikipedia.org/wiki/Ordinal%20arithmetic
In the mathematical field of set theory, ordinal arithmetic describes the three usual operations on ordinal numbers: addition, multiplication, and exponentiation. Each can be defined in essentially two different ways: either by constructing an explicit well-ordered set that represents the result of the operation or by using transfinite recursion. Cantor normal form provides a standardized way of writing ordinals. In addition to these usual ordinal operations, there are also the "natural" arithmetic of ordinals and the nimber operations. Addition The union of two disjoint well-ordered sets S and T can be well-ordered. The order-type of that union is the ordinal that results from adding the order-types of S and T. If two well-ordered sets are not already disjoint, then they can be replaced by order-isomorphic disjoint sets, e.g. replace S by {0} × S and T by {1} × T. This way, the well-ordered set S is written "to the left" of the well-ordered set T, meaning one defines an order on S ∪ T in which every element of S is smaller than every element of T. The sets S and T themselves keep the ordering they already have. The definition of addition α + β can also be given by transfinite recursion on β: α + 0 = α; α + S(β) = S(α + β), where S denotes the successor function; and α + β = sup{α + γ : γ < β} when β is a limit ordinal. Ordinal addition on the natural numbers is the same as standard addition. The first transfinite ordinal is ω, the set of all natural numbers, followed by ω + 1, ω + 2, etc. The ordinal ω + ω is obtained by two copies of the natural numbers ordered in the usual fashion and the second copy completely to the right of the first. Writing 0' < 1' < 2' < ... for the second copy, ω + ω looks like 0 < 1 < 2 < 3 < ... < 0' < 1' < 2' < ... This is different from ω because in ω only 0 does not have a direct predecessor while in ω + ω the two elements 0 and 0' do not have direct predecessors. Properties Ordinal addition is, in general, not commutative.
For example, 3 + ω = ω, since the order relation for 3 + ω is 0 < 1 < 2 < 0' < 1' < 2' < ..., which can be relabeled to ω. In contrast ω + 3 is not equal to ω since the order relation 0 < 1 < 2 < ... < 0' < 1' < 2' has a largest element (namely, 2') and ω does not (ω and ω + 3 are equipotent, but not order-isomorphic). Ordinal addition is still associative; one can see for example that (ω + 4) + ω = ω + (4 + ω) = ω + ω. Addition is strictly increasing and continuous in the right argument: γ < β implies α + γ < α + β, but the analogous relation does not hold for the left argument; instead we only have: α < β implies α + γ ≤ β + γ. Ordinal addition is left-cancellative: if α + β = α + γ, then β = γ. Furthermore, one can define left subtraction for ordinals β ≤ α: there is a unique γ such that α = β + γ. On the other hand, right cancellation does not work: 1 + ω = 2 + ω = ω, but 1 ≠ 2. Nor does right subtraction, even when β ≤ α: for example, there does not exist any γ such that γ + 42 = ω. If the ordinals less than α are closed under addition and contain 0 then α is occasionally called a γ-number (see additively indecomposable ordinal).
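The failure of commutativity can be demonstrated mechanically for ordinals below ω^ω written in Cantor normal form; the following is an illustrative sketch of the addition rule (absorb the terms of the left summand below the leading exponent of the right summand), not a standard library:

```python
# Ordinals below omega^omega in Cantor normal form: a list of
# (exponent, coefficient) pairs with strictly decreasing exponents
# and positive coefficients; the empty list represents 0.
def ordinal_add(alpha, beta):
    if not beta:
        return list(alpha)
    e = beta[0][0]                           # leading exponent of beta
    kept = [t for t in alpha if t[0] > e]    # survives the addition
    mid = [t for t in alpha if t[0] == e]    # merges with beta's head
    if mid:
        return kept + [(e, mid[0][1] + beta[0][1])] + list(beta[1:])
    return kept + list(beta)

omega = [(1, 1)]   # the ordinal omega
three = [(0, 3)]   # the finite ordinal 3

left = ordinal_add(three, omega)    # 3 + omega, absorbed to omega
right = ordinal_add(omega, three)   # omega + 3, strictly larger
assoc = (ordinal_add(ordinal_add(omega, [(0, 4)]), omega)
         == ordinal_add(omega, ordinal_add([(0, 4)], omega)))
```

Here `left` comes out as ω while `right` keeps its finite tail, matching 3 + ω = ω ≠ ω + 3, and `assoc` checks the associativity example (ω + 4) + ω = ω + (4 + ω).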
https://en.wikipedia.org/wiki/Finite%20morphism
In algebraic geometry, a finite morphism between two affine varieties X and Y is a dense regular map f : X → Y which induces an injective map f* : k[Y] → k[X] between their coordinate rings, such that k[X] is integral over f*(k[Y]). This definition can be extended to quasi-projective varieties: a regular map f : X → Y between quasiprojective varieties is finite if any point y in Y has an affine neighbourhood V such that U = f^(−1)(V) is affine and f : U → V is a finite map (in view of the previous definition, because it is between affine varieties). Definition by schemes A morphism f: X → Y of schemes is a finite morphism if Y has an open cover by affine schemes Vi = Spec Bi such that for each i, f^(−1)(Vi) is an open affine subscheme Ui = Spec Ai, and the restriction of f to Ui, which induces a ring homomorphism Bi → Ai, makes Ai a finitely generated module over Bi. One also says that X is finite over Y. In fact, f is finite if and only if for every open affine subscheme V = Spec B in Y, the inverse image of V in X is affine, of the form Spec A, with A a finitely generated B-module. For example, for any field k, the morphism Spec k[x] → Spec k[y] given by y ↦ x^n is a finite morphism since k[x] = k[y]·1 ⊕ k[y]·x ⊕ ... ⊕ k[y]·x^(n−1) as k[y]-modules. Geometrically, this is obviously finite since this is a ramified n-sheeted cover of the affine line which degenerates at the origin. By contrast, the inclusion of A1 − 0 into A1 is not finite. (Indeed, the Laurent polynomial ring k[y, y−1] is not finitely generated as a module over k[y].) This restricts our geometric intuition to surjective families with finite fibers. Properties of finite morphisms The composition of two finite morphisms is finite. Any base change of a finite morphism f: X → Y is finite. That is, if g: Z → Y is any morphism of schemes, then the resulting morphism X ×Y Z → Z is finite. This corresponds to the following algebraic statement: if A and C are (commutative) B-algebras, and A is finitely generated as a B-module, then the tensor product A ⊗B C is finitely generated as a C-module.
Indeed, the generators can be taken to be the elements ai ⊗ 1, where ai are the given generators of A as a B-module. Closed immersions are finite, as they are locally given by A → A/I, where I is the ideal corresponding to the closed subscheme. Finite morphisms are closed, hence (because of their stability under base change) proper. This follows from the going up theorem of Cohen-Seidenberg in commutative algebra. Finite morphisms have finite fibers (that is, they are quasi-finite). This follows from the fact that for a field k, every finite k-algebra is an Artinian ring. A related statement is that for a finite surjective morphism f: X → Y, X and Y have the same dimension. By Deligne, a morphism of schemes is finite if and only if it is proper and quasi-finite. This had been shown by Grothendieck if the morphism f: X → Y is locally of finite presentation, which follows from the other assumptions if Y is Noetherian. Finite morphisms are both projective and affine. See also Glossary of algebraic geometry Finite algebra Notes References External links Algebraic geometry
https://en.wikipedia.org/wiki/Moore%20space%20%28algebraic%20topology%29
In algebraic topology, a branch of mathematics, Moore space is the name given to a particular type of topological space that is the homology analogue of the Eilenberg–MacLane spaces of homotopy theory, in the sense that it has only one nonzero homology (rather than homotopy) group. Formal definition Given an abelian group G and an integer n ≥ 1, let X be a CW complex such that Hn(X) is isomorphic to G and H̃i(X) = 0 for i ≠ n, where Hn(X) denotes the n-th singular homology group of X and H̃i(X) is the i-th reduced homology group. Then X is said to be a Moore space of type (G, n), often written M(G, n). Also, X is by definition simply-connected if n > 1. Examples The sphere S^n is a Moore space of G = Z for any n ≥ 1. The real projective plane RP^2 is a Moore space of G = Z/2Z for n = 1. See also Eilenberg–MacLane space, the homotopy analog. Homology sphere References Hatcher, Allen. Algebraic topology, Cambridge University Press (2002), . For further discussion of Moore spaces, see Chapter 2, Example 2.40. A free electronic version of this book is available on the author's homepage. Algebraic topology
https://en.wikipedia.org/wiki/Ordination%20%28disambiguation%29
Ordination is the process of consecrating clergy. Ordination may also refer to: Ordination (statistics), a multivariate statistical analysis procedure Ordination (1640), a painting in Nicolas Poussin's first Seven Sacraments series See also Ordination of women Ordination mill
https://en.wikipedia.org/wiki/Ordination%20%28statistics%29
Ordination or gradient analysis, in multivariate analysis, is a method complementary to data clustering, and used mainly in exploratory data analysis (rather than in hypothesis testing). In contrast to cluster analysis, ordination orders quantities in a (usually lower-dimensional) latent space. In the ordination space, quantities that are near each other share attributes (i.e., are similar to some degree), and dissimilar objects are farther from each other. Such relationships between the objects, on each of several axes or latent variables, are then characterized numerically and/or graphically in a biplot. The first ordination method, principal components analysis, was suggested by Karl Pearson in 1901. Methods Ordination methods can broadly be categorized in eigenvector-, algorithm-, or model-based methods. Many classical ordination techniques, including principal components analysis, correspondence analysis (CA) and its derivatives (detrended correspondence analysis, canonical correspondence analysis, and redundancy analysis), belong to the first group. The second group includes some distance-based methods such as non-metric multidimensional scaling, and machine learning methods such as t-distributed stochastic neighbor embedding and nonlinear dimensionality reduction. The third group includes model-based ordination methods, which can be considered as multivariate extensions of Generalized Linear Models. Model-based ordination methods are more flexible in their application than classical ordination methods, so that it is for example possible to include random effects. Unlike in the aforementioned two groups, there is no (implicit or explicit) distance measure in the ordination. Instead, a distribution needs to be specified for the responses as is typical for statistical models. These and other assumptions, such as the assumed mean-variance relationship, can be validated with the use of residual diagnostics, unlike in other ordination methods.
Applications Ordination can be used in the analysis of any set of multivariate objects. It is frequently used in several environmental or ecological sciences, particularly plant community ecology. It is also used in genetics and systems biology for microarray data analysis and in psychometrics. See also Multivariate statistics Principal components analysis Correspondence analysis Multiple correspondence analysis Detrended correspondence analysis Intrinsic dimension References Further reading , 1998. An Annotated Bibliography Of Canonical Correspondence Analysis And Related Constrained Ordination Methods 1986–1996. Botanical Institute, University of Bergen. World Wide Web: http://www.bio.umontreal.ca/Casgrain/cca_bib/index.html 1988 A theory of gradient analysis. Adv. Ecol. Res. 18:271-313. , Jr. 1982. Multivariate Analysis in Community Ecology. Cambridge University Press, Cambridge. , 1995. Data Analysis in Community and Landscape Ecology. Cambridge University Press, Cambridge. , 2015. Me
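A minimal eigenvector-based ordination (classical principal components analysis) can be sketched with NumPy alone; the data matrix here is synthetic, standing in for a site-by-species abundance table:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical site-by-species abundance matrix (rows = sites).
X = rng.poisson(lam=3.0, size=(20, 6)).astype(float)

# Eigenvector-based ordination: diagonalize the covariance matrix
# and project the sites onto the two leading principal axes.
Xc = X - X.mean(axis=0)                  # center each variable
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
order = np.argsort(eigvals)[::-1]        # reorder descending
scores = Xc @ eigvecs[:, order[:2]]      # 2-D ordination of the sites
explained = eigvals[order] / eigvals.sum()
```

In the resulting 2-D ordination space, sites with similar species profiles plot near one another; `explained` gives the fraction of variance carried by each axis.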
https://en.wikipedia.org/wiki/Resampling
Resampling may refer to: Resampling (audio), several related audio processes Resampling (statistics), resampling methods in statistics Resampling (bitmap), scaling of bitmap images See also Sample-rate conversion Downsampling Upsampling Oversampling Sampling (information theory) Signal (information theory) Data conversion Interpolation Multivariate interpolation Subsampling (disambiguation)
https://en.wikipedia.org/wiki/Functional%20data%20analysis
Functional data analysis (FDA) is a branch of statistics that analyses data providing information about curves, surfaces or anything else varying over a continuum. In its most general form, under an FDA framework, each sample element of functional data is considered to be a random function. The physical continuum over which these functions are defined is often time, but may also be spatial location, wavelength, probability, etc. Intrinsically, functional data are infinite dimensional. The high intrinsic dimensionality of these data brings challenges for theory as well as computation, where these challenges vary with how the functional data were sampled. However, the high or infinite dimensional structure of the data is a rich source of information and there are many interesting challenges for research and data analysis. History Functional data analysis has roots going back to work by Grenander and Karhunen in the 1940s and 1950s. They considered the decomposition of square-integrable continuous time stochastic process into eigencomponents, now known as the Karhunen-Loève decomposition. A rigorous analysis of functional principal components analysis was done in the 1970s by Kleffe, Dauxois and Pousse including results about the asymptotic distribution of the eigenvalues. More recently in the 1990s and 2000s the field has focused more on applications and understanding the effects of dense and sparse observations schemes. The term "Functional Data Analysis" was coined by James O. Ramsay. Mathematical formalism Random functions can be viewed as random elements taking values in a Hilbert space, or as a stochastic process. The former is mathematically convenient, whereas the latter is somewhat more suitable from an applied perspective. These two approaches coincide if the random functions are continuous and a condition called mean-squared continuity is satisfied. 
Hilbertian random variables In the Hilbert space viewpoint, one considers an H-valued random element X, where H is a separable Hilbert space such as the space of square-integrable functions L^2[0, 1]. Under the integrability condition that E‖X‖ < ∞, one can define the mean of X as the unique element μ in H satisfying E⟨X, h⟩ = ⟨μ, h⟩ for all h in H. This formulation is the Pettis integral, but the mean can also be defined as the Bochner integral μ = EX. Under the integrability condition that E‖X‖^2 is finite, the covariance operator of X is a linear operator C : H → H that is uniquely defined by the relation ⟨Ch1, h2⟩ = Cov(⟨X, h1⟩, ⟨X, h2⟩) for all h1, h2 in H or, in tensor form, C = E[(X − μ) ⊗ (X − μ)]. The spectral theorem allows one to decompose X as the Karhunen-Loève decomposition X = μ + sum_{k=1}^∞ ⟨X − μ, φk⟩ φk, where φk are eigenvectors of C, corresponding to the nonnegative eigenvalues of C, in a non-increasing order. Truncating this infinite series to a finite order underpins functional principal component analysis. Stochastic processes The Hilbertian point of view is mathematically convenient, but abstract; the above considerations do not necessarily even view X as a function at all, since common choices of H like L^2[0, 1] and Sobolev spaces consist of equivalence classes, no
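The Karhunen-Loève truncation that underpins functional PCA can be illustrated on a dense grid with NumPy; everything below (grid, eigenfunctions, sample size, score variances) is a synthetic sketch, and the top eigenvalues of the discretized covariance operator recover the score variances:

```python
import numpy as np

rng = np.random.default_rng(1)
tgrid = np.linspace(0, 1, 101)

# Simulate n curves from a two-component Karhunen-Loeve expansion:
# X(t) = xi_1 * sqrt(2) sin(pi t) + xi_2 * sqrt(2) sin(2 pi t),
# with independent scores of standard deviations 2 and 1
# (so the operator's eigenvalues are 4 and 1).
phi1 = np.sqrt(2) * np.sin(np.pi * tgrid)
phi2 = np.sqrt(2) * np.sin(2 * np.pi * tgrid)
n = 500
scores = rng.normal(0, [2.0, 1.0], size=(n, 2))
X = scores[:, [0]] * phi1 + scores[:, [1]] * phi2

# Estimate mean and covariance on the grid, then diagonalize the
# discretized covariance operator (quadrature weight dt).
mu = X.mean(axis=0)
C = np.cov(X - mu, rowvar=False)
dt = tgrid[1] - tgrid[0]
evals, evecs = np.linalg.eigh(C * dt)
evals = evals[::-1]   # non-increasing order, as in the decomposition
```

The leading two eigenvalues should sit near 4 and 1 (up to sampling error), and everything beyond them near zero, which is exactly the justification for truncating the series.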
https://en.wikipedia.org/wiki/Richard%20Fateman
Richard J. Fateman (born November 4, 1946) is a professor emeritus of computer science at the University of California, Berkeley. He received a BS in Physics and Mathematics from Union College in June 1966, and a Ph.D. in Applied Mathematics from Harvard University in June 1971. He was a major contributor to the Macsyma computer algebra system at MIT and later to the Franz Lisp system. His current interests include scientific programming environments; computer algebra systems; distributed computing; analysis of algorithms; programming and measurement of large systems; design and implementation of programming languages; and optical character recognition. In 1999, he was inducted as a Fellow of the Association for Computing Machinery. Richard Fateman is the father of musician Johanna Fateman.
https://en.wikipedia.org/wiki/Notation%20in%20probability%20and%20statistics
Probability theory and statistics have some commonly used conventions, in addition to standard mathematical notation and mathematical symbols. Probability theory Random variables are usually written in upper case roman letters: $X$, $Y$, etc. Particular realizations of a random variable are written in corresponding lower case letters. For example, $x_1, x_2, \ldots, x_n$ could be a sample corresponding to the random variable $X$. A cumulative probability is formally written $P(X \le x)$ to differentiate the random variable from its realization. The probability is sometimes written $\mathbb{P}$ to distinguish it from other functions and the measure $P$, so as to avoid having to define "$P$ is a probability", and $\mathbb{P}(X \in A)$ is short for $P(\{\omega \in \Omega : X(\omega) \in A\})$, where $\Omega$ is the event space and $X$ is a random variable. The notation $\Pr(A)$ is used alternatively. $P(A \cap B)$ or $P(A, B)$ indicates the probability that events $A$ and $B$ both occur. The joint probability distribution of random variables $X$ and $Y$ is denoted as $P(X, Y)$, while the joint probability mass function or probability density function is written as $f(x, y)$ and the joint cumulative distribution function as $F(x, y)$. $P(A \cup B)$ indicates the probability of either event $A$ or event $B$ occurring ("or" in this case means one or the other or both). σ-algebras are usually written with uppercase calligraphic letters (e.g. $\mathcal{F}$ for the set of sets on which we define the probability $P$). Probability density functions (pdfs) and probability mass functions are denoted by lowercase letters, e.g. $f(x)$ or $f_X(x)$. Cumulative distribution functions (cdfs) are denoted by uppercase letters, e.g. $F(x)$ or $F_X(x)$. Survival functions or complementary cumulative distribution functions are often denoted by placing an overbar over the symbol for the cumulative, $\bar{F}(x) = 1 - F(x)$, or denoted as $S(x)$. In particular, the pdf of the standard normal distribution is denoted by $\varphi(z)$, and its cdf by $\Phi(z)$. Some common operators: $\mathrm{E}[X]$: expected value of $X$; $\operatorname{var}(X)$: variance of $X$; $\operatorname{cov}(X, Y)$: covariance of $X$ and $Y$. "X is independent of Y" is often written $X \perp Y$ or $X \perp\!\!\!\perp Y$, and "X is independent of Y given W" is often written $X \perp\!\!\!\perp Y \mid W$ or $X \perp Y \mid W$. $P(A \mid B)$, the conditional probability, is the probability of $A$ given $B$, i.e., after $B$ is observed.
Statistics Greek letters (e.g. $\theta$, $\beta$) are commonly used to denote unknown parameters (population parameters). A tilde (~) denotes "has the probability distribution of". Placing a hat, or caret (also known as a circumflex), over a true parameter denotes an estimator of it, e.g., $\hat{\theta}$ is an estimator for $\theta$. The arithmetic mean of a series of values $x_1, x_2, \ldots, x_n$ is often denoted by placing an "overbar" over the symbol, e.g. $\bar{x}$, pronounced "$x$ bar". Some commonly used symbols for sample statistics are given below: the sample mean $\bar{x}$, the sample variance $s^2$, the sample standard deviation $s$, the sample correlation coefficient $r$, the sample cumulants $k_r$. Some commonly used symbols for population parameters are given below: the population mean $\mu$, the population variance $\sigma^2$, the population standard deviation $\sigma$, the population
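The sample statistics named above map directly onto short formulas. A minimal sketch with made-up data (the data values are illustrative only):

```python
import math

# Two small illustrative samples.
x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
y = [1.0, 3.0, 2.0, 5.0, 4.0, 6.0, 6.0, 8.0]
n = len(x)

x_bar = sum(x) / n                                  # sample mean, "x bar"
s2 = sum((xi - x_bar) ** 2 for xi in x) / (n - 1)   # sample variance s^2 (unbiased)
s = math.sqrt(s2)                                   # sample standard deviation s

y_bar = sum(y) / n
sy = math.sqrt(sum((yi - y_bar) ** 2 for yi in y) / (n - 1))

# Sample correlation coefficient r.
r = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / ((n - 1) * s * sy)
```

Here $\bar{x}$, $s^2$, $s$ and $r$ estimate the corresponding population parameters $\mu$, $\sigma^2$, $\sigma$ and $\rho$.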
https://en.wikipedia.org/wiki/Procept
In mathematics education, a procept is an amalgam of three components: a "process" which produces a mathematical "object", and a "symbol" which is used to represent either the process or the object. It derives from the work of Eddie Gray and David O. Tall. The notion was first published in a paper in the Journal for Research in Mathematics Education in 1994, and is part of the process-object literature. This body of literature suggests that mathematical objects are formed by encapsulating processes; that is to say, the mathematical object 3 is formed by an encapsulation of the process of counting: 1, 2, 3, ... Gray and Tall's notion of procept improved upon the existing literature by noting that mathematical notation is often ambiguous as to whether it refers to process or object. Examples of such notations are: $3 + 2$: refers to the process of adding as well as the outcome of the process. $\sum_{n=1}^{\infty} a_n$: refers to the process of summing an infinite sequence, and to the outcome of the process. $f(x) = 3x + 2$: refers to the process of mapping $x$ to $3x + 2$ as well as the outcome of that process, the function $f$. References Gray, E. & Tall, D. (1994) "Duality, Ambiguity, and Flexibility: A "Proceptual" View of Simple Arithmetic", Journal for Research in Mathematics Education 25(2), pp. 116–40.
https://en.wikipedia.org/wiki/Glossary%20of%20probability%20and%20statistics
This glossary of statistics and probability is a list of definitions of terms and concepts used in the mathematical sciences of statistics and probability, their sub-disciplines, and related fields. For additional related terms, see Glossary of mathematics and Glossary of experimental design. See also Notation in probability and statistics Probability axioms Glossary of experimental design List of statistical topics List of probability topics Glossary of areas of mathematics Glossary of calculus References External links Probability and Statistics on the Earliest Uses Pages (Univ. of Southampton)
https://en.wikipedia.org/wiki/Quasi-finite%20morphism
In algebraic geometry, a branch of mathematics, a morphism f : X → Y of schemes is quasi-finite if it is of finite type and satisfies any of the following equivalent conditions: Every point x of X is isolated in its fiber f−1(f(x)). In other words, every fiber is a discrete (hence finite) set. For every point x of X, the scheme f−1(f(x)) is a finite κ(f(x))-scheme. (Here κ(p) is the residue field at a point p.) For every point x of X, the stalk $\mathcal{O}_{f^{-1}(f(x)),\,x}$ is finitely generated over κ(f(x)). Quasi-finite morphisms were originally defined by Alexander Grothendieck in SGA 1 and did not include the finite type hypothesis. This hypothesis was added to the definition in EGA II 6.2 because it makes it possible to give an algebraic characterization of quasi-finiteness in terms of stalks. For a general morphism f : X → Y and a point x in X, f is said to be quasi-finite at x if there exist open affine neighborhoods U of x and V of f(x) such that f(U) is contained in V and such that the restriction f|U : U → V is quasi-finite. f is locally quasi-finite if it is quasi-finite at every point in X. A quasi-compact locally quasi-finite morphism is quasi-finite. Properties For a morphism f, the following properties are true. If f is quasi-finite, then the induced map fred between reduced schemes is quasi-finite. If f is a closed immersion, then f is quasi-finite. If X is noetherian and f is an immersion, then f is quasi-finite. If g : Y → Z, and if g ∘ f is quasi-finite, then f is quasi-finite if any of the following are true: g is separated, X is noetherian, Z is locally noetherian. Quasi-finiteness is preserved by base change. The composite and fiber product of quasi-finite morphisms is quasi-finite. If f is unramified at a point x, then f is quasi-finite at x. Conversely, if f is quasi-finite at x, and if also $\mathcal{O}_{f^{-1}(f(x)),\,x}$, the local ring of x in the fiber f−1(f(x)), is a field and a finite separable extension of κ(f(x)), then f is unramified at x. Finite morphisms are quasi-finite. A quasi-finite proper morphism locally of finite presentation is finite.
Indeed, a morphism is finite if and only if it is proper and locally quasi-finite. Since proper morphisms are of finite type, and finite type morphisms are quasi-compact, one may omit the qualification locally, i.e., a morphism is finite if and only if it is proper and quasi-finite. A generalized form of the Zariski main theorem is the following: Suppose Y is quasi-compact and quasi-separated. Let f be quasi-finite, separated and of finite presentation. Then f factors as $X \to X' \to Y$, where the first morphism is an open immersion and the second is finite. (X is open in a finite scheme over Y.) See also The quasi-finite fundamental group scheme
https://en.wikipedia.org/wiki/Ky%20Fan%20inequality
In mathematics, there are two different results that share the common name of the Ky Fan inequality. One is an inequality involving the geometric mean and arithmetic mean of two sets of real numbers of the unit interval. The result was published on page 5 of the book Inequalities by Edwin F. Beckenbach and Richard E. Bellman (1961), who refer to an unpublished result of Ky Fan. They mention the result in connection with the inequality of arithmetic and geometric means and Augustin Louis Cauchy's proof of this inequality by forward-backward induction, a method which can also be used to prove the Ky Fan inequality. This Ky Fan inequality is a special case of Levinson's inequality and also the starting point for several generalizations and refinements; some of them are given in the references below. The second Ky Fan inequality is used in game theory to investigate the existence of an equilibrium. Statement of the classical version If $0 \le x_i \le \tfrac12$ for i = 1, ..., n, then $$\frac{\left(\prod_{i=1}^n x_i\right)^{1/n}}{\left(\prod_{i=1}^n (1-x_i)\right)^{1/n}} \le \frac{\frac1n \sum_{i=1}^n x_i}{\frac1n \sum_{i=1}^n (1-x_i)},$$ with equality if and only if x1 = x2 = · · · = xn. Remark Let $A_n$ and $G_n$ denote the arithmetic and geometric mean, respectively, of x1, . . ., xn, and let $A_n'$ and $G_n'$ denote the arithmetic and geometric mean, respectively, of 1 − x1, . . ., 1 − xn. Then the Ky Fan inequality can be written as $$\frac{G_n}{G_n'} \le \frac{A_n}{A_n'},$$ which shows the similarity to the inequality of arithmetic and geometric means given by Gn ≤ An. Generalization with weights If xi ∈ [0,½] and γi ∈ [0,1] for i = 1, . . ., n are real numbers satisfying γ1 + . . . + γn = 1, then $$\frac{\prod_{i=1}^n x_i^{\gamma_i}}{\prod_{i=1}^n (1-x_i)^{\gamma_i}} \le \frac{\sum_{i=1}^n \gamma_i x_i}{\sum_{i=1}^n \gamma_i (1-x_i)},$$ with the convention 00 := 0. Equality holds if and only if either γixi = 0 for all i = 1, . . ., n or all xi > 0 and there exists x ∈ (0,½] such that x = xi for all i = 1, . . ., n with γi > 0. The classical version corresponds to γi = 1/n for all i = 1, . . ., n. Proof of the generalization Idea: Apply Jensen's inequality to the strictly concave function $$f(x) = \ln \frac{x}{1-x} = \ln x - \ln(1-x), \qquad x \in (0, \tfrac12].$$ Detailed proof: (a) If at least one xi is zero, then the left-hand side of the Ky Fan inequality is zero and the inequality is proved.
Equality holds if and only if the right-hand side is also zero, which is the case when γixi = 0 for all i = 1, . . ., n. (b) Assume now that all xi > 0. If there is an i with γi = 0, then the corresponding xi > 0 has no effect on either side of the inequality, hence the ith term can be omitted. Therefore, we may assume that γi > 0 for all i in the following. If x1 = x2 = . . . = xn, then equality holds. It remains to show strict inequality if not all xi are equal. The function f is strictly concave on (0,½], because we have for its second derivative $$f''(x) = -\frac{1}{x^2} + \frac{1}{(1-x)^2} < 0, \qquad x \in (0, \tfrac12).$$ Using the functional equation for the natural logarithm and Jensen's inequality for the strictly concave f, we obtain that $$\ln \frac{\prod_{i=1}^n x_i^{\gamma_i}}{\prod_{i=1}^n (1-x_i)^{\gamma_i}} = \sum_{i=1}^n \gamma_i f(x_i) < f\left(\sum_{i=1}^n \gamma_i x_i\right) = \ln \frac{\sum_{i=1}^n \gamma_i x_i}{\sum_{i=1}^n \gamma_i (1-x_i)},$$ where we used in the last step that the γi sum to one. Taking the exponential of both sides gives the Ky Fan inequality. The Ky Fan inequality in game theory A second inequality is also called the Ky Fan inequality, because of a 1972 paper, "A minimax inequality and its applications". This second inequality is equivalent to the Bro
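The classical inequality $G_n/G_n' \le A_n/A_n'$ is easy to check numerically. A minimal sketch over random samples in $(0, \tfrac12]$ (sample sizes and the lower cutoff $10^{-6}$ are arbitrary choices of this example):

```python
import math
import random

# Check G_n / G_n' <= A_n / A_n' for many random samples x_i in (0, 1/2].
random.seed(1)
for _ in range(1000):
    n = random.randint(2, 8)
    xs = [random.uniform(1e-6, 0.5) for _ in range(n)]
    A = sum(xs) / n                                       # arithmetic mean of x_i
    G = math.exp(sum(math.log(x) for x in xs) / n)        # geometric mean of x_i
    A2 = sum(1 - x for x in xs) / n                       # arithmetic mean of 1 - x_i
    G2 = math.exp(sum(math.log(1 - x) for x in xs) / n)   # geometric mean of 1 - x_i
    assert G / G2 <= A / A2 + 1e-12                       # Ky Fan inequality
```

The small tolerance guards against floating-point rounding in the near-equality case where all $x_i$ are close together.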
https://en.wikipedia.org/wiki/Quadratic%20variation
In mathematics, quadratic variation is used in the analysis of stochastic processes such as Brownian motion and other martingales. Quadratic variation is just one kind of variation of a process. Definition Suppose that $X_t$ is a real-valued stochastic process defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and with time index $t$ ranging over the non-negative real numbers. Its quadratic variation is the process, written as $[X]_t$, defined as $$[X]_t = \lim_{\|P\| \to 0} \sum_{k=1}^{n} (X_{t_k} - X_{t_{k-1}})^2,$$ where $P$ ranges over partitions $0 = t_0 < t_1 < \cdots < t_n = t$ of the interval $[0, t]$ and the norm of the partition $\|P\| = \max_k (t_k - t_{k-1})$ is the mesh. This limit, if it exists, is defined using convergence in probability. Note that a process may be of finite quadratic variation in the sense of the definition given here and its paths be nonetheless almost surely of infinite 1-variation for every $t > 0$ in the classical sense of taking the supremum of the sum over all partitions; this is in particular the case for Brownian motion. More generally, the covariation (or cross-variance) of two processes $X$ and $Y$ is $$[X, Y]_t = \lim_{\|P\| \to 0} \sum_{k=1}^{n} (X_{t_k} - X_{t_{k-1}})(Y_{t_k} - Y_{t_{k-1}}).$$ The covariation may be written in terms of the quadratic variation by the polarization identity: $$[X, Y]_t = \tfrac{1}{2}\left([X+Y]_t - [X]_t - [Y]_t\right).$$ Notation: the quadratic variation is also notated as $\langle X \rangle_t$ or $\langle X, X \rangle_t$. Finite variation processes A process $X$ is said to have finite variation if it has bounded variation over every finite time interval (with probability 1). Such processes are very common, including, in particular, all continuously differentiable functions. The quadratic variation exists for all continuous finite variation processes, and is zero. This statement can be generalized to non-continuous processes. Any càdlàg finite variation process $X$ has quadratic variation equal to the sum of the squares of the jumps of $X$. To state this more precisely, the left limit of $X_t$ with respect to $t$ is denoted by $X_{t-}$, and the jump of $X$ at time $t$ can be written as $\Delta X_t = X_t - X_{t-}$. Then, the quadratic variation is given by $$[X]_t = \sum_{0 < s \le t} (\Delta X_s)^2.$$ The proof that continuous finite variation processes have zero quadratic variation follows from the following inequality: $$\sum_{k=1}^{n} (X_{t_k} - X_{t_{k-1}})^2 \le \max_k |X_{t_k} - X_{t_{k-1}}| \cdot V_t(X).$$ Here, $P$ is a partition of the interval $[0, t]$, and $V_t(X)$ is the variation of $X$ over $[0, t]$.
By the continuity of $X$, this vanishes in the limit as $\|P\|$ goes to zero. Itô processes The quadratic variation of a standard Brownian motion $B$ exists, and is given by $[B]_t = t$; however, the limit in the definition is meant in the $L^2$ sense and not pathwise. This generalizes to Itô processes that, by definition, can be expressed in terms of Itô integrals $$X_t = X_0 + \int_0^t \sigma_s \, dB_s + \int_0^t \mu_s \, ds,$$ where $B$ is a Brownian motion. Any such process has quadratic variation given by $$[X]_t = \int_0^t \sigma_s^2 \, ds.$$ Semimartingales Quadratic variations and covariations of all semimartingales can be shown to exist. They form an important part of the theory of stochastic calculus, appearing in Itô's lemma, which is the generalization of the chain rule to the Itô integral. The quadratic covariation also appears in the integration by parts formula $$X_t Y_t = X_0 Y_0 + \int_0^t X_{s-} \, dY_s + \int_0^t Y_{s-} \, dX_s + [X, Y]_t,$$ which can be used to compute $[X, Y]_t$. Alternatively this can be written as a stochastic differential equation: $$d(X_t Y_t) = X_{t-} \, dY_t + Y_{t-} \, dX_t + dX_t \, dY_t,$$ where $dX_t \, dY_t = d[X, Y]_t$. Martingales All càdlàg martingales, and local martingales have well defined quadratic variation, which follows
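The fact $[B]_t = t$ can be illustrated by simulation. A minimal sketch (step count, path count, and the error bound in the final comment are arbitrary choices of this example; since the convergence is in $L^2$, we average over independent paths):

```python
import math
import random

# Sum of squared increments of simulated Brownian paths over [0, t];
# the result should concentrate near t as the mesh shrinks.
random.seed(2)
t, n_steps, n_paths = 1.0, 4000, 50
dt = t / n_steps

qv_estimates = []
for _ in range(n_paths):
    qv = 0.0
    for _ in range(n_steps):
        db = random.gauss(0.0, math.sqrt(dt))  # increment B_{s+dt} - B_s
        qv += db * db                          # squared increment
    qv_estimates.append(qv)

mean_qv = sum(qv_estimates) / n_paths          # should be close to t = 1
```

Each path's estimate has variance $2\,t\,\mathrm{d}t$, so averaging 50 paths with 4000 steps puts the mean well within a few hundredths of $t$.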
https://en.wikipedia.org/wiki/Generalizations%20of%20the%20derivative
In mathematics, the derivative is a fundamental construction of differential calculus and admits many possible generalizations within the fields of mathematical analysis, combinatorics, algebra, geometry, etc. Fréchet derivative The Fréchet derivative defines the derivative for general normed vector spaces $V, W$. Briefly, a function $f : U \to W$, where $U$ is an open subset of $V$, is called Fréchet differentiable at $x \in U$ if there exists a bounded linear operator $A : V \to W$ such that $$\lim_{h \to 0} \frac{\|f(x+h) - f(x) - Ah\|_W}{\|h\|_V} = 0.$$ Functions are defined as being differentiable in some open neighbourhood of $x$, rather than at individual points, as not doing so tends to lead to many pathological counterexamples. The Fréchet derivative is quite similar to the formula for the derivative found in elementary one-variable calculus, and simply moves $A$ to the left hand side. However, the Fréchet derivative $A$ denotes the function $t \mapsto tA$. In multivariable calculus, in the context of differential equations defined by a vector valued function from Rn to Rm, the Fréchet derivative $A$ is a linear operator from Rn to Rm, and corresponds to the best linear approximation of a function. If such an operator exists, then it is unique, and can be represented by an m by n matrix known as the Jacobian matrix Jx(ƒ) of the mapping ƒ at point x. Each entry of this matrix represents a partial derivative, specifying the rate of change of one range coordinate with respect to a change in a domain coordinate. Of course, the Jacobian matrix of the composition g°f is a product of corresponding Jacobian matrices: Jx(g°f) = Jƒ(x)(g) Jx(ƒ). This is a higher-dimensional statement of the chain rule. For real valued functions from Rn to R (scalar fields), the Fréchet derivative corresponds to a vector field called the total derivative. This can be interpreted as the gradient, but it is more natural to use the exterior derivative.
The convective derivative takes into account changes due to time dependence and motion through space along a vector field, and is a special case of the total derivative. For vector-valued functions from R to Rn (i.e., parametric curves), the Fréchet derivative corresponds to taking the derivative of each component separately. The resulting derivative can be mapped to a vector. This is useful, for example, if the vector-valued function is the position vector of a particle through time, then the derivative is the velocity vector of the particle through time. In complex analysis, the central objects of study are holomorphic functions, which are complex-valued functions on the complex numbers where the Fréchet derivative exists. In geometric calculus, the geometric derivative satisfies a weaker form of the Leibniz (product) rule. It specializes the Fréchet derivative to the objects of geometric algebra. Geometric calculus is a powerful formalism that has been shown to encompass the similar frameworks of differential forms and differential geometry. Exterior derivative and Lie derivative On the exterior algebra of differen
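The chain rule for Jacobian matrices can be checked numerically. A minimal sketch (the maps f and g are hypothetical examples, and forward finite differences are used only for illustration):

```python
# Verify J_x(g∘f) = J_{f(x)}(g) · J_x(f) for example maps
# f : R^2 -> R^3 and g : R^3 -> R^2.
def f(v):
    x, y = v
    return [x * y, x + y, x * x]

def g(v):
    a, b, c = v
    return [a + 2 * b * c, a * b - c]

def jacobian(func, v, h=1e-6):
    """Forward finite-difference Jacobian of func at v."""
    base = func(v)
    J = []
    for i in range(len(base)):
        row = []
        for j in range(len(v)):
            vp = list(v)
            vp[j] += h
            row.append((func(vp)[i] - base[i]) / h)
        J.append(row)
    return J

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

x0 = [1.0, 2.0]
lhs = jacobian(lambda v: g(f(v)), x0)               # J_x(g∘f), a 2x2 matrix
rhs = matmul(jacobian(g, f(x0)), jacobian(f, x0))   # J_{f(x)}(g) · J_x(f)
```

The two matrices agree up to the discretization error of the finite differences.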
https://en.wikipedia.org/wiki/Multicategory
In mathematics (especially category theory), a multicategory is a generalization of the concept of category that allows morphisms of multiple arity. If morphisms in a category are viewed as analogous to functions, then morphisms in a multicategory are analogous to functions of several variables. Multicategories are also sometimes called operads, or colored operads. Definition A (non-symmetric) multicategory consists of a collection (often a proper class) of objects; for every finite sequence of objects (for von Neumann ordinal ) and object Y, a set of morphisms from to Y; and for every object X, a special identity morphism (with n = 1) from X to X. Additionally, there are composition operations: Given a sequence of sequences of objects, a sequence of objects, and an object Z: if for each , fj is a morphism from to Yj; and g is a morphism from to Z: then there is a composite morphism from to Z. This must satisfy certain axioms: If m = 1, Z = Y0, and g is the identity morphism for Y0, then g(f0) = f0; if for each , nj = 1, , and fj is the identity morphism for Yj, then ; and an associativity condition: if for each and , is a morphism from to , then are identical morphisms from to Z. Comcategories A comcategory (co-multi-category) is a totally ordered set O of objects, a set A of multiarrows with two functions where O% is the set of all finite ordered sequences of elements of O. The dual image of a multiarrow f may be summarized A comcategory C also has a multiproduct with the usual character of a composition operation. C is said to be associative if there holds a multiproduct axiom in relation to this operator. Any multicategory, symmetric or non-symmetric, together with a total-ordering of the object set, can be made into an equivalent comcategory. A multiorder is a comcategory satisfying the following conditions. There is at most one multiarrow with given head and ground. Each object x has a unit multiarrow. A multiarrow is a unit if its ground has one entry. 
Multiorders are a generalization of partial orders (posets), and were first introduced (in passing) by Tom Leinster. Examples There is a multicategory whose objects are (small) sets, where a morphism from the sets X1, X2, ..., and Xn to the set Y is an n-ary function, that is a function from the Cartesian product X1 × X2 × ... × Xn to Y. There is a multicategory whose objects are vector spaces (over the rational numbers, say), where a morphism from the vector spaces X1, X2, ..., and Xn to the vector space Y is a multilinear operator, that is a linear transformation from the tensor product X1 ⊗ X2 ⊗ ... ⊗ Xn to Y. More generally, given any monoidal category C, there is a multicategory whose objects are objects of C, where a morphism from the C-objects X1, X2, ..., and Xn to the C-object Y is a C-morphism from the monoidal product of X1, X2, ..., and Xn to Y. An operad is a multicategory with one unique object; except in degenerate cases, such a multic
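The first example above, the multicategory of sets with n-ary functions as multimorphisms, can be sketched directly in code. The `compose` helper and the arity bookkeeping below are illustrative choices, not standard API:

```python
# Multimorphisms are plain n-ary functions; compose(g, fs, arities) realizes
# the composite g(f_0, ..., f_{m-1}): each f_j consumes its own block of
# arguments, and g is applied to the results.
def compose(g, fs, arities):
    def composite(*args):
        outs, pos = [], 0
        for f, k in zip(fs, arities):
            outs.append(f(*args[pos:pos + k]))  # f_j takes k_j arguments
            pos += k
        return g(*outs)
    return composite

identity = lambda x: x          # the unary identity multimorphism

# g : X0 × X1 -> Z with two inputs; f0 is binary, f1 is unary.
g = lambda a, b: a + b
f0 = lambda x, y: x * y
f1 = lambda z: z + 1

h = compose(g, [f0, f1], [2, 1])   # h(x, y, z) = x*y + (z + 1)
```

The identity axioms from the definition correspond to `compose(identity, [f], [k])` and `compose(g, [identity, ..., identity], [1, ..., 1])` leaving the morphism unchanged.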
https://en.wikipedia.org/wiki/Ganglia%20%28software%29
Ganglia is a scalable, distributed monitoring tool for high-performance computing systems, clusters and networks. The software is used to view either live or recorded statistics covering metrics such as CPU load averages or network utilization for many nodes. Ganglia software is bundled with enterprise-level Linux distributions such as Red Hat Enterprise Linux (RHEL) or the CentOS repackaging of the same. Ganglia grew out of requirements for monitoring systems at the University of California, Berkeley, but now sees use by commercial and educational organisations such as Cray, MIT, NASA and Twitter. Ganglia is based on a hierarchical design targeted at federations of clusters. It relies on a multicast-based listen/announce protocol to monitor state within clusters and uses a tree of point-to-point connections amongst representative cluster nodes to federate clusters and aggregate their state. It leverages widely used technologies such as XML for data representation, XDR for compact, portable data transport, and RRDtool for data storage and visualization. It uses carefully engineered data structures and algorithms to achieve very low per-node overheads and high concurrency. The implementation is robust, has been ported to an extensive set of operating systems and processor architectures, and is currently in use on over 500 clusters around the world. It has been used to link clusters across university campuses and around the world and can scale to handle clusters with 2000 nodes. The Ganglia system comprises two unique daemons, a PHP-based web front-end, and a few other small utility programs. Ganglia Monitoring Daemon (gmond) Gmond is a multi-threaded daemon which runs on each cluster node you want to monitor. Installation does not require having a common NFS filesystem or a database back-end, installing special accounts or maintaining configuration files. Gmond has four main responsibilities: Monitor changes in host state. Announce relevant changes.
Listen to the state of all other ganglia nodes via a unicast or multicast channel. Answer requests for an XML description of the cluster state. Each gmond transmits information in two different ways: Unicasting or Multicasting host state in external data representation (XDR) format using UDP messages. Sending XML over a TCP connection. Ganglia Meta Daemon (gmetad) Federation in Ganglia is achieved using a tree of point-to-point connections amongst representative cluster nodes to aggregate the state of multiple clusters. At each node in the tree, a Ganglia Meta Daemon (gmetad) periodically polls a collection of child data sources, parses the collected XML, saves all numeric, volatile metrics to round-robin databases and exports the aggregated XML over a TCP socket to clients. Data sources may be either gmond daemons, representing specific clusters, or other gmetad daemons, representing sets of clusters. Data sources use source IP addresses for access control and can be specified using mu
https://en.wikipedia.org/wiki/Steenrod%20algebra
In algebraic topology, a Steenrod algebra was defined by to be the algebra of stable cohomology operations for mod cohomology. For a given prime number , the Steenrod algebra is the graded Hopf algebra over the field of order , consisting of all stable cohomology operations for mod cohomology. It is generated by the Steenrod squares introduced by for , and by the Steenrod reduced th powers introduced in and the Bockstein homomorphism for . The term "Steenrod algebra" is also sometimes used for the algebra of cohomology operations of a generalized cohomology theory. Cohomology operations A cohomology operation is a natural transformation between cohomology functors. For example, if we take cohomology with coefficients in a ring , the cup product squaring operation yields a family of cohomology operations: Cohomology operations need not be homomorphisms of graded rings; see the Cartan formula below. These operations do not commute with suspension—that is, they are unstable. (This is because if is a suspension of a space , the cup product on the cohomology of is trivial.) Steenrod constructed stable operations for all greater than zero. The notation and their name, the Steenrod squares, comes from the fact that restricted to classes of degree is the cup square. There are analogous operations for odd primary coefficients, usually denoted and called the reduced -th power operations: The generate a connected graded algebra over , where the multiplication is given by composition of operations. This is the mod 2 Steenrod algebra. In the case , the mod Steenrod algebra is generated by the and the Bockstein operation associated to the short exact sequence . In the case , the Bockstein element is and the reduced -th power is . 
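The passage above refers to the Cartan formula. For reference, it and the other basic properties of the Steenrod squares (standard facts, stated here because the surrounding formulas were lost in extraction) read:

```latex
% Sq^n : H^m(X; \mathbb{F}_2) \to H^{m+n}(X; \mathbb{F}_2), with
% Sq^0 = \mathrm{id}, \quad Sq^n(x) = x \smile x \text{ if } n = \deg x,
% \quad Sq^n(x) = 0 \text{ if } n > \deg x.
% The Cartan formula governs the interaction with the cup product:
\operatorname{Sq}^n(x \smile y) \;=\; \sum_{i+j=n} \operatorname{Sq}^i(x) \smile \operatorname{Sq}^j(y)
```

In particular the Cartan formula shows the operations are not ring homomorphisms, as noted above.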
As a cohomology ring We can summarize the properties of the Steenrod operations as generators in the cohomology ring of Eilenberg–Maclane spectra , since there is an isomorphism giving a direct sum decomposition of all possible cohomology operations with coefficients in . Note the inverse limit of cohomology groups appears because it is a computation in the stable range of cohomology groups of Eilenberg–Maclane spaces. This result was originally computed by and . Note there is a dual characterization using homology for the dual Steenrod algebra. Remark about generalizing to generalized cohomology theories It should be observed if the Eilenberg–Maclane spectrum is replaced by an arbitrary spectrum , then there are many challenges for studying the cohomology ring . In this case, the generalized dual Steenrod algebra should be considered instead because it has much better properties and can be tractably studied in many cases (such as ). In fact, these ring spectra are commutative and the bimodules are flat. In this case, these is a canonical coaction of on for any space , such that this action behaves well with respect to the stable homotopy category, i.e., there is an isomorphism hence we can
https://en.wikipedia.org/wiki/Change%20of%20variables
In mathematics, a change of variables is a basic technique used to simplify problems in which the original variables are replaced with functions of other variables. The intent is that when expressed in new variables, the problem may become simpler, or equivalent to a better understood problem. Change of variables is an operation that is related to substitution. However these are different operations, as can be seen when considering differentiation (chain rule) or integration (integration by substitution). A very simple example of a useful variable change can be seen in the problem of finding the roots of the sixth-degree polynomial: $$x^6 - 9x^3 + 8 = 0.$$ Sixth-degree polynomial equations are generally impossible to solve in terms of radicals (see Abel–Ruffini theorem). This particular equation, however, may be written $$(x^3)^2 - 9(x^3) + 8 = 0$$ (this is a simple case of a polynomial decomposition). Thus the equation may be simplified by defining a new variable $u = x^3$. Substituting $x^3$ by $u$ into the polynomial gives $$u^2 - 9u + 8 = 0,$$ which is just a quadratic equation with the two solutions $u = 1$ and $u = 8$. The solutions in terms of the original variable are obtained by substituting x3 back in for u, which gives $x^3 = 1$ and $x^3 = 8$. Then, assuming that one is interested only in real solutions, the solutions of the original equation are $x = 1$ and $x = 2$. Simple example Consider the system of equations $$xy + x + y = 71, \qquad x^2 y + x y^2 = 880,$$ where $x$ and $y$ are positive integers with $x > y$. (Source: 1991 AIME) Solving this normally is not very difficult, but it may get a little tedious. However, we can rewrite the second equation as $xy(x + y) = 880$. Making the substitutions $s = x + y$ and $t = xy$ reduces the system to $s + t = 71$, $st = 880$. Solving this gives $(s, t) = (16, 55)$ and $(s, t) = (55, 16)$. Back-substituting the first ordered pair gives us $x + y = 16$, $xy = 55$, which gives the solution $(x, y) = (11, 5)$. Back-substituting the second ordered pair gives us $x + y = 55$, $xy = 16$, which gives no solutions in positive integers. Hence the solution that solves the system is $(x, y) = (11, 5)$. Formal introduction Let $A$, $B$ be smooth manifolds and let $\Phi : A \to B$ be a $C^k$-diffeomorphism between them, that is: $\Phi$ is a $k$ times continuously differentiable, bijective map from $A$ to $B$ with $k$ times continuously differentiable inverse from $B$ to $A$.
Here $k$ may be any natural number (or zero), $\infty$ (smooth) or $\omega$ (analytic). The map $\Phi$ is called a regular coordinate transformation or regular variable substitution, where regular refers to the $C^k$-ness of $\Phi$. Usually one will write $x = \Phi(y)$ to indicate the replacement of the variable $x$ by the variable $y$ by substituting the value of $\Phi$ in $y$ for every occurrence of $x$. Other examples Coordinate transformation Some systems can be more easily solved when switching to polar coordinates. Consider for example the equation This may be a potential energy function for some physical problem. If one does not immediately see a solution, one might try the substitution given by $$x = r \cos\theta, \qquad y = r \sin\theta.$$ Note that if $\theta$ runs outside a $2\pi$-length interval, for example $[0, 4\pi]$, the map $\Phi$ is no longer bijective. Therefore, $\theta$ should be limited to, for example, $(0, 2\pi]$. Notice how $r = 0$ is excluded, for $\Phi$ is not bijective in the origin ($\theta$ can take any value, the point will be mapped to (0, 0)). Then, replacing all occurrences of the original variables by the new expressions pr
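The substitution pattern of the "Simple example" above can be checked mechanically. A minimal sketch, assuming the cited 1991 AIME system is $xy + x + y = 71$, $x^2 y + x y^2 = 880$ in positive integers:

```python
# Reduce via s = x + y, p = xy:  p + s = 71, p*s = 880,
# then back-substitute: x, y are the roots of z^2 - s*z + p = 0.
solutions = []
for s in range(1, 71):
    p = 71 - s                       # from p + s = 71
    if p * s == 880:                 # second reduced equation
        disc = s * s - 4 * p         # discriminant of z^2 - s*z + p
        if disc >= 0:
            root = int(disc ** 0.5)
            if root * root == disc and (s + root) % 2 == 0:
                x, y = (s + root) // 2, (s - root) // 2
                if x > 0 and y > 0:
                    solutions.append((x, y))
```

The pair (s, p) = (16, 55) yields (x, y) = (11, 5), while (55, 16) has a non-square discriminant and so gives no integer solutions, matching the discussion above.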
https://en.wikipedia.org/wiki/Point%20process
In statistics and probability theory, a point process or point field is a collection of mathematical points randomly located on a mathematical space such as the real line or Euclidean space. Point processes can be used for spatial data analysis, which is of interest in such diverse disciplines as forestry, plant ecology, epidemiology, geography, seismology, materials science, astronomy, telecommunications, computational neuroscience, economics and others. There are different mathematical interpretations of a point process, such as a random counting measure or a random set. Some authors regard a point process and stochastic process as two different objects such that a point process is a random object that arises from or is associated with a stochastic process, though it has been remarked that the difference between point processes and stochastic processes is not clear. Others consider a point process as a stochastic process, where the process is indexed by sets of the underlying space on which it is defined, such as the real line or -dimensional Euclidean space. Other stochastic processes such as renewal and counting processes are studied in the theory of point processes. Sometimes the term "point process" is not preferred, as historically the word "process" denoted an evolution of some system in time, so point process is also called a random point field. Point processes on the real line form an important special case that is particularly amenable to study, because the points are ordered in a natural way, and the whole point process can be described completely by the (random) intervals between the points. These point processes are frequently used as models for random events in time, such as the arrival of customers in a queue (queueing theory), of impulses in a neuron (computational neuroscience), particles in a Geiger counter, location of radio stations in a telecommunication network or of searches on the world-wide web. 
General point process theory

In mathematics, a point process is a random element whose values are "point patterns" on a set S. While in the exact mathematical definition a point pattern is specified as a locally finite counting measure, it is sufficient for more applied purposes to think of a point pattern as a countable subset of S that has no limit points.

Definition

To define general point processes, we start with a probability space (Ω, F, P) and a measurable space (S, Σ), where S is a locally compact second countable Hausdorff space and Σ is its Borel σ-algebra. Consider now an integer-valued locally finite kernel ξ from (Ω, F, P) into (S, Σ), that is, a mapping ξ: Ω × Σ → Z⁺ such that:

For every ω ∈ Ω, ξ(ω, ·) is a locally finite measure on S.
For every B ∈ Σ, ξ(·, B): Ω → Z⁺ is a random variable.

This kernel defines a random measure in the following way. We would like to think of ξ as defining a mapping which maps ω ∈ Ω to a measure ξ_ω (namely, Ω → N, ω ↦ ξ_ω), where N is the set of all locally finite measures on S. Now, to make this mapping measurable, we need to define a σ-field over N. This σ-field is constr
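As a concrete illustration of a point process viewed as a random counting measure, the sketch below simulates a homogeneous Poisson point process on the unit square and evaluates its counting measure on boxes. The helper names (`poisson_sample`, `poisson_process`, `counting_measure`) are illustrative, not from any particular library.

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw from Poisson(lam) by Knuth's method: multiply uniforms until < e^{-lam}."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def poisson_process(lam, rng):
    """Points of a homogeneous Poisson point process on [0,1)^2 with intensity lam:
    draw the total count N([0,1)^2) ~ Poisson(lam), then scatter points uniformly."""
    return [(rng.random(), rng.random()) for _ in range(poisson_sample(lam, rng))]

def counting_measure(points, box):
    """N(B): the number of points falling in the rectangle box = (x0, x1, y0, y1)."""
    x0, x1, y0, y1 = box
    return sum(1 for (x, y) in points if x0 <= x < x1 and y0 <= y < y1)

rng = random.Random(42)
pts = poisson_process(50.0, rng)
# the counting measure is additive over disjoint boxes covering the square
left = counting_measure(pts, (0.0, 0.5, 0.0, 1.0))
right = counting_measure(pts, (0.5, 1.0, 0.0, 1.0))
assert left + right == len(pts)
```

The restriction of N to disjoint boxes also yields independent Poisson counts, which is the defining property of the Poisson process; the sketch only checks additivity.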
https://en.wikipedia.org/wiki/Invariants%20of%20tensors
In mathematics, in the fields of multilinear algebra and representation theory, the principal invariants of the second rank tensor A are the coefficients of the characteristic polynomial p(λ) = det(A − λI), where I is the identity operator and λᵢ represent the polynomial's eigenvalues. More broadly, any scalar-valued function f(A) is an invariant of A if and only if f(QAQᵀ) = f(A) for all orthogonal Q. This means that a formula expressing an invariant in terms of components will give the same result for all Cartesian bases. For example, even though individual diagonal components of A will change with a change in basis, the sum of diagonal components will not change.

Properties

The principal invariants do not change with rotations of the coordinate system (they are objective, or in more modern terminology, satisfy the principle of material frame-indifference), and any function of the principal invariants is also objective.

Calculation of the invariants of rank two tensors

In a majority of engineering applications, the principal invariants of (rank two) tensors of dimension three are sought, such as those for the right Cauchy–Green deformation tensor.

Principal invariants

For such tensors, the principal invariants are given by:

I1 = tr(A)
I2 = ½[(tr A)² − tr(A²)]
I3 = det(A)

For symmetric tensors, these definitions are reduced. The correspondence between the principal invariants and the characteristic polynomial of a tensor, in tandem with the Cayley–Hamilton theorem, reveals that

A³ − I1·A² + I2·A − I3·I = 0,

where I is the second-order identity tensor.

Main invariants

In addition to the principal invariants listed above, it is also possible to introduce the notion of main invariants, which are functions of the principal invariants above. These are the coefficients of the characteristic polynomial of the deviator A − (tr A/3)I, such that it is traceless.
The separation of a tensor into a component that is a multiple of the identity and a traceless component is standard in hydrodynamics, where the former is called isotropic, providing the modified pressure, and the latter is called deviatoric, providing shear effects. Mixed invariants Furthermore, mixed invariants between pairs of rank two tensors may also be defined. Calculation of the invariants of order two tensors of higher dimension These may be extracted by evaluating the characteristic polynomial directly, using the Faddeev–LeVerrier algorithm for example. Calculation of the invariants of higher order tensors The invariants of rank three, four, and higher order tensors may also be determined. Engineering applications A scalar function that depends entirely on the principal invariants of a tensor is objective, i.e., independent of rotations of the coordinate system. This property is commonly used in formulating closed-form expressions for the strain energy density, or Helmholtz free energy, of a nonlinear material possessing isotropic symmetry. This technique was first introduced into isotropic turbulence by Howard P. Robertson in 1940, where he was able to derive the Kármán–Howarth equation from the inva
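The statements above can be checked numerically. The NumPy sketch below (not part of the article) computes the three principal invariants of a random 3×3 tensor, verifies the Cayley–Hamilton identity, and confirms invariance under an orthogonal change of basis:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

# principal invariants of a rank-two tensor in three dimensions
I1 = np.trace(A)
I2 = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))
I3 = np.linalg.det(A)

# Cayley–Hamilton: A^3 - I1*A^2 + I2*A - I3*I = 0
I = np.eye(3)
residual = A @ A @ A - I1 * (A @ A) + I2 * A - I3 * I
assert np.allclose(residual, 0)

# invariance under an orthogonal change of basis B = Q A Q^T
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
B = Q @ A @ Q.T
assert np.isclose(np.trace(B), I1)
assert np.isclose(0.5 * (np.trace(B) ** 2 - np.trace(B @ B)), I2)
assert np.isclose(np.linalg.det(B), I3)
```

The same three quantities can also be read off as the elementary symmetric functions of the eigenvalues of A.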
https://en.wikipedia.org/wiki/Coxeter%20element
In mathematics, the Coxeter number h is the order of a Coxeter element of an irreducible Coxeter group. It is named after H. S. M. Coxeter.

Definitions

Note that this article assumes a finite Coxeter group. For infinite Coxeter groups, there are multiple conjugacy classes of Coxeter elements, and they have infinite order. There are many different ways to define the Coxeter number h of an irreducible root system.

A Coxeter element is a product of all simple reflections. The product depends on the order in which they are taken, but different orderings produce conjugate elements, which have the same order. The Coxeter number is the order of any Coxeter element.
The Coxeter number is 2m/n, where n is the rank and m is the number of reflections. In the crystallographic case, m is half the number of roots; and 2m + n is the dimension of the corresponding semisimple Lie algebra.
If the highest root is Σmᵢαᵢ for simple roots αᵢ, then the Coxeter number is 1 + Σmᵢ.
The Coxeter number is the highest degree of a fundamental invariant of the Coxeter group acting on polynomials.

The Coxeter number for each Dynkin type is given in the following table: The invariants of the Coxeter group acting on polynomials form a polynomial algebra whose generators are the fundamental invariants; their degrees are given in the table above. Notice that if m is a degree of a fundamental invariant then so is h + 2 − m. The eigenvalues of a Coxeter element are the numbers e^(2πi(m − 1)/h) as m runs through the degrees of the fundamental invariants. Since this starts with m = 2, these include the primitive hth root of unity, ζ_h = e^(2πi/h), which is important in the Coxeter plane, below. The dual Coxeter number is 1 plus the sum of the coefficients of simple roots in the highest short root of the dual root system.
Group order

There are relations between the order g of the Coxeter group and the Coxeter number h:

[p]: 2h/g_p = 1
[p,q]: 8/g_{p,q} = 2/p + 2/q − 1
[p,q,r]: 64h/g_{p,q,r} = 12 − p − 2q − r + 4/p + 4/r
[p,q,r,s]: 16/g_{p,q,r,s} = 8/g_{p,q,r} + 8/g_{q,r,s} + 2/(ps) − 1/p − 1/q − 1/r − 1/s + 1
...

For example, [3,3,5] has h = 30, so 64·30/g = 12 − 3 − 6 − 5 + 4/3 + 4/5 = 2/15, so g = 1920·15/2 = 960·15 = 14400.

Coxeter elements

Distinct Coxeter elements correspond to orientations of the Coxeter diagram (i.e. to Dynkin quivers): the simple reflections corresponding to source vertices are written first, downstream vertices later, and sinks last. (The choice of order among non-adjacent vertices is irrelevant, since they correspond to commuting reflections.) A special choice is the alternating orientation, in which the simple reflections are partitioned into two sets of non-adjacent vertices, and all edges are oriented from the first to the second set. The alternating orientation produces a special Coxeter element w satisfying w^(h/2) = w₀, where w₀ is the longest element, provided the Coxeter number h is even. For S_n, the symmetric group on n elements, Coxeter elements are certain n-cycles: the
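The [3,3,5] arithmetic above can be verified with exact rational arithmetic; this small sketch assumes the [p,q,r] relation exactly as stated in the text:

```python
from fractions import Fraction

# [p,q,r] relation: 64h/g = 12 - p - 2q - r + 4/p + 4/r, applied to [3,3,5] with h = 30
p, q, r, h = 3, 3, 5, 30
rhs = Fraction(12) - p - 2 * q - r + Fraction(4, p) + Fraction(4, r)
g = Fraction(64 * h) / rhs

assert rhs == Fraction(2, 15)
assert g == 14400  # order of the Coxeter group [3,3,5] (type H4)
```

Using `Fraction` avoids any floating-point rounding in the 4/p and 4/r terms, so the check is exact.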
https://en.wikipedia.org/wiki/Plastic%20number
In mathematics, the plastic number (also known as the plastic constant, the plastic ratio, the minimal Pisot number, the platin number, Siegel's number or, in French, ) is a mathematical constant which is the unique real solution of the cubic equation x³ = x + 1. It has the exact value

ρ = ∛((9 + √69)/18) + ∛((9 − √69)/18)

Its decimal expansion begins with 1.324717957244746...

Properties

Recurrences

The powers of the plastic number satisfy the third-order linear recurrence relation ρⁿ = ρⁿ⁻² + ρⁿ⁻³ for n > 2. Hence it is the limiting ratio of successive terms of any (non-zero) integer sequence satisfying this recurrence, such as the Padovan sequence (also known as the Cordonnier numbers), the Perrin numbers and the Van der Laan numbers, and bears relationships to these sequences akin to the relationships of the golden ratio to the second-order Fibonacci and Lucas numbers, and of the silver ratio to the Pell numbers. The plastic number satisfies the nested radical recurrence ρ = ∛(1 + ∛(1 + ∛(1 + ···))).

Number theory

Because the plastic number has the minimal polynomial x³ − x − 1, it is also a solution of the polynomial equation p(x) = 0 for every polynomial p that is a multiple of x³ − x − 1, but not for any other polynomials with integer coefficients. Since the discriminant of its minimal polynomial is −23, its splitting field over the rationals is Q(√−23, ρ). This field is also a Hilbert class field of Q(√−23). As such, it can be expressed in terms of the Dedekind eta function with argument , and root of unity . Similarly, for the supergolden ratio with argument , Also, the plastic number is the smallest Pisot–Vijayaraghavan number. Its algebraic conjugates are complex, of absolute value ≈ 0.868837. This value is also ρ^(−1/2), because the product of the three roots of the minimal polynomial is 1.

Trigonometry

The plastic number can be written using the hyperbolic cosine (cosh) and its inverse:

ρ = (2/√3)·cosh((1/3)·arcosh(3√3/2))

(See Cubic equation#Trigonometric and hyperbolic solutions.)

Geometry

There are precisely three ways of partitioning a square into three similar rectangles: The trivial solution given by three congruent rectangles with aspect ratio 3:1.
The solution in which two of the three rectangles are congruent and the third one has twice the side length of the other two, where the rectangles have aspect ratio 3:2. The solution in which the three rectangles are all of different sizes and where they have aspect ratio ρ2. The ratios of the linear sizes of the three rectangles are: ρ (large:medium); ρ2 (medium:small); and ρ3 (large:small). The internal, long edge of the largest rectangle (the square's fault line) divides two of the square's four edges into two segments each that stand to one another in the ratio ρ. The internal, coincident short edge of the medium rectangle and long edge of the small rectangle divides one of the square's other, two edges into two segments that stand to one another in the ratio ρ4. The fact that a rectangle of aspect ratio ρ2 can be used for dissections of a square into similar rectangles is equivalent to an algebraic property of the number ρ2 related to the Routh–Hurwitz theore
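A quick numerical sketch (illustrative only) finds the plastic number by Newton's method on x³ − x − 1 and checks that ratios of successive Padovan numbers converge to it, as claimed above:

```python
def plastic():
    """Real root of x^3 - x - 1 via Newton iteration from x = 1.5."""
    x = 1.5
    for _ in range(50):
        x -= (x ** 3 - x - 1) / (3 * x ** 2 - 1)
    return x

rho = plastic()

# Padovan sequence: P(n) = P(n-2) + P(n-3), the same recurrence the powers of rho obey
pad = [1, 1, 1]
for _ in range(40):
    pad.append(pad[-2] + pad[-3])
ratio = pad[-1] / pad[-2]

assert abs(rho ** 3 - rho - 1) < 1e-12   # rho really solves the cubic
assert abs(ratio - rho) < 1e-6           # successive-term ratios approach rho
```

The convergence of the ratio is geometric, at a rate governed by the modulus ≈ 0.868837 of the complex conjugates divided by ρ itself.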
https://en.wikipedia.org/wiki/Uniformization%20%28set%20theory%29
In set theory, a branch of mathematics, the axiom of uniformization is a weak form of the axiom of choice. It states that if R is a subset of X × Y, where X and Y are Polish spaces, then there is a subset f of R that is a partial function from X to Y, and whose domain (the set of all x such that f(x) exists) equals {x ∈ X : ∃y (x, y) ∈ R}. Such a function is called a uniformizing function for R, or a uniformization of R. To see the relationship with the axiom of choice, observe that R can be thought of as associating, to each element x of X, a subset of Y. A uniformization of R then picks exactly one element from each such subset, whenever the subset is non-empty. Thus, allowing arbitrary sets X and Y (rather than just Polish spaces) would make the axiom of uniformization equivalent to the axiom of choice. A pointclass Γ is said to have the uniformization property if every relation in Γ can be uniformized by a partial function in Γ. The uniformization property is implied by the scale property, at least for adequate pointclasses of a certain form. It follows from ZFC alone that Π^1_1 and Σ^1_2 have the uniformization property. It follows from the existence of sufficient large cardinals that Π^1_{2n+1} and Σ^1_{2n+2} have the uniformization property for every natural number n. Therefore, the collection of projective sets has the uniformization property. Every relation in L(R) can be uniformized, but not necessarily by a function in L(R). In fact, L(R) does not have the uniformization property (equivalently, L(R) does not satisfy the axiom of uniformization). (Note: it's trivial that every relation in L(R) can be uniformized in V, assuming V satisfies the axiom of choice. The point is that every such relation can be uniformized in some transitive inner model of V in which the axiom of determinacy holds.)
https://en.wikipedia.org/wiki/Sieve%20of%20Atkin
In mathematics, the sieve of Atkin is a modern algorithm for finding all prime numbers up to a specified integer. Compared with the ancient sieve of Eratosthenes, which marks off multiples of primes, the sieve of Atkin does some preliminary work and then marks off multiples of squares of primes, thus achieving a better theoretical asymptotic complexity. It was created in 2003 by A. O. L. Atkin and Daniel J. Bernstein.

Algorithm

In the algorithm:

All remainders are modulo-sixty remainders (divide the number by 60 and return the remainder).
All numbers, including x and y, are positive integers.
Flipping an entry in the sieve list means to change the marking (prime or nonprime) to the opposite marking.

This results in numbers with an odd number of solutions to the corresponding equation being potentially prime (prime if they are also square free), and numbers with an even number of solutions being composite.

The algorithm:

1. Create a results list, filled with 2, 3, and 5.
2. Create a sieve list with an entry for each positive integer; all entries of this list should initially be marked non prime (composite).
3. For each entry number n in the sieve list, with modulo-sixty remainder r:
   If r is 1, 13, 17, 29, 37, 41, 49, or 53, flip the entry for each possible solution to 4x² + y² = n. The number of flipping operations as a ratio to the sieving range for this step approaches 8/60 × 4π/15 (the "8" in the fraction comes from the eight modulos handled by this quadratic and the 60 because Atkin calculated this based on an even number of modulo 60 wheels), which results in a fraction of about 0.1117010721276....
   If r is 7, 19, 31, or 43, flip the entry for each possible solution to 3x² + y² = n. The number of flipping operations as a ratio to the sieving range for this step approaches 4/60 × π√0.12 (the "4" in the fraction comes from the four modulos handled by this quadratic and the 60 because Atkin calculated this based on an even number of modulo 60 wheels), which results in a fraction of about 0.072551974569....
   If r is 11, 23, 47, or 59, flip the entry for each possible solution to 3x² − y² = n when x > y. The number of flipping operations as a ratio to the sieving range for this step approaches 4/60 times a constant (the "4" in the fraction comes from the four modulos handled by this quadratic and the 60 because Atkin calculated this based on an even number of modulo 60 wheels), which results in a fraction of about 0.060827679704....
   If r is something else, ignore it completely.
4. Start with the lowest number in the sieve list.
5. Take the next number in the sieve list still marked prime.
6. Include the number in the results list.
7. Square the number and mark all multiples of that square as non prime. Note that the multiples that can be factored by 2, 3, or 5 need not be marked, as these will be ignored in the final enumeration of primes.
8. Repeat steps four through seven.

The total number of operations for these repetitions of marking the squares of primes as a ratio of the sieving range is the sum of the inverse of the primes s
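The steps above can be rendered directly in Python. This is a straightforward, unoptimized sketch using the mod-60 remainder sets listed in the algorithm; a production sieve would add wheel optimizations:

```python
import math

def sieve_of_atkin(limit):
    """Return all primes <= limit using the sieve of Atkin."""
    sieve = [False] * (limit + 1)
    r = math.isqrt(limit)
    for x in range(1, r + 1):
        for y in range(1, r + 1):
            # quadratic 4x^2 + y^2 = n, remainders 1, 13, 17, 29, 37, 41, 49, 53
            n = 4 * x * x + y * y
            if n <= limit and n % 60 in {1, 13, 17, 29, 37, 41, 49, 53}:
                sieve[n] = not sieve[n]
            # quadratic 3x^2 + y^2 = n, remainders 7, 19, 31, 43
            n = 3 * x * x + y * y
            if n <= limit and n % 60 in {7, 19, 31, 43}:
                sieve[n] = not sieve[n]
            # quadratic 3x^2 - y^2 = n with x > y, remainders 11, 23, 47, 59
            n = 3 * x * x - y * y
            if x > y and n <= limit and n % 60 in {11, 23, 47, 59}:
                sieve[n] = not sieve[n]
    # mark multiples of squares of primes as composite (squarefree filter)
    for n in range(7, r + 1):
        if sieve[n]:
            for k in range(n * n, limit + 1, n * n):
                sieve[k] = False
    return [p for p in (2, 3, 5) if p <= limit] + \
           [n for n in range(7, limit + 1) if sieve[n]]
```

For example, `sieve_of_atkin(100)` yields the 25 primes below 100, matching the sieve of Eratosthenes on the same range.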
https://en.wikipedia.org/wiki/Irving%20S.%20Reed
Irving Stoy Reed (November 12, 1923 – September 11, 2012) was an American mathematician and engineer. He is best known for co-inventing a class of algebraic error-correcting and error-detecting codes known as Reed–Solomon codes in collaboration with Gustave Solomon. He also co-invented the Reed–Muller code. Reed made many contributions to areas of electrical engineering including radar, signal processing, and image processing. He was part of the team that built the MADDIDA, a guidance system for Northrop's Snark cruise missile – one of the first digital computers. He developed and introduced the now-standard Register Transfer Language to the computer community while at the Massachusetts Institute of Technology's Lincoln Laboratory. He was a faculty member of the Electrical Engineering-Systems Department of the University of Southern California from 1962 to 1993. Reed was a member of the National Academy of Engineering (1979) and a Fellow of the IEEE (1973), a winner of the Claude E. Shannon Award, the IEEE Computer Society Charles Babbage Award, the IEEE Richard W. Hamming Medal (1989) and, with Gustave Solomon, the 1995 IEEE Masaru Ibuka Award. In 1998 Reed received a Golden Jubilee Award for Technological Innovation from the IEEE Information Theory Society. Anecdotes The University of Southern California graduate school of electrical engineering required doctoral students to pass an oral screening exam, in which there were eight categories of test questions. Reed always asked the questions about electromagnetism and specifically Maxwell's equations, which he obviously viewed as fundamental to communication theory. While a student in mathematics at the California Institute of Technology, Reed did not complete his required physical education courses due to time pressure and was set to enter the Navy. The only way he could graduate was to obtain a special release from Robert A.
Millikan, the university's president and a former physical education instructor as well as a Nobel Prize winner and a noted hard-liner on the physical education requirement. As Reed was in Millikan's office pleading his case, he saw reprints of two papers he had published as an undergraduate on the president's table and drew them to Millikan's attention. Millikan smiled and said "You seem to me a healthy young man. I believe you will do well in the service of your country as a graduate of the California Institute of Technology." Reed and colleagues demonstrated the MADDIDA computer to John von Neumann at the Institute for Advanced Study in Princeton, New Jersey. The problem set for MADDIDA was computation of a mathematical function. Von Neumann, a noted lightning calculator, kept up with the computer and checked its results with a paper and pencil. See also Computer Research Corporation (CRC), Reed–Muller expansion
https://en.wikipedia.org/wiki/Gustave%20Solomon
Gustave Solomon (October 27, 1930 – January 31, 1996) was an American mathematician and electrical engineer who was one of the founders of the algebraic theory of error detection and correction. Career Solomon completed his Ph.D. in mathematics at the Massachusetts Institute of Technology in 1956 under the direction of Kenkichi Iwasawa. Solomon was best known for developing, along with Irving S. Reed, the algebraic error correction and detection codes named the Reed–Solomon codes. These codes protect the integrity of digital information, and they have had widespread use in modern digital storage and communications, ranging from deep space communications down to the digital audio compact disc. Solomon was also one of the co-creators of the Mattson–Solomon polynomial and the Solomon–McEliece weight formulas. He received the IEEE Masaru Ibuka Award along with Irving Reed in 1995. In his later years, Solomon consulted at the Jet Propulsion Laboratory near Pasadena, California. Personal life Solomon was very interested in opera and in theater, and he even wanted to get minor acting parts himself, perhaps in television commercials. Between his assignments in the Communications Research Section at JPL, he taught foreign-born engineers and scientists English by exposing them to music from American musical productions. He believed in enhancing health and the feelings of well-being through breathing exercises, and he was a practitioner of the Feldenkrais method. Solomon used the mind-body connection philosophy of the Feldenkrais method to teach voice lessons. He had a strong love for music, and was a composer as well. Solomon was survived by one daughter.
https://en.wikipedia.org/wiki/Pascal%27s%20simplex
In mathematics, Pascal's simplex is a generalisation of Pascal's triangle into an arbitrary number of dimensions, based on the multinomial theorem.

Generic Pascal's m-simplex

Let m (m > 0) be the number of terms of a polynomial and n (n ≥ 0) be the power the polynomial is raised to. Each Pascal's m-simplex is a semi-infinite object, which consists of an infinite series of its components. Its nth component is itself a finite (m − 1)-simplex with the edge length n, and consists of the coefficients of the multinomial expansion of a polynomial with m terms raised to the power of n.

Example for Pascal's 4-simplex, sliced along the k4. All points of the same color belong to the same nth component, from red (for n = 0) to blue (for n = 3).

Specific Pascal's simplices

Pascal's 1-simplex is not known by any special name. Its nth component (a point) is the coefficient of the multinomial expansion of a polynomial with 1 term raised to the power of n, which equals 1 for all n.
Pascal's 2-simplex is known as Pascal's triangle. Its nth component (a line) consists of the coefficients of the binomial expansion of a polynomial with 2 terms raised to the power of n.
Pascal's 3-simplex is known as Pascal's tetrahedron. Its nth component (a triangle) consists of the coefficients of the trinomial expansion of a polynomial with 3 terms raised to the power of n.

Properties

Inheritance of components: the nth component of Pascal's (m − 1)-simplex is numerically equal to each (m − 1)-face (there is m + 1 of them) of the nth component of Pascal's m-simplex. From this follows that the whole Pascal's (m − 1)-simplex is (m + 1)-times included in Pascal's m-simplex.

Equality of sub-faces: conversely, the nth component of Pascal's m-simplex is (m + 1)-times bounded by the nth component of Pascal's (m − 1)-simplex. From this follows that, for given n, all i-faces are numerically equal in the nth components of all Pascal's (m > i)-simplices.

Example

The 3rd component (2-simplex) of Pascal's 3-simplex is bounded by 3 equal 1-faces (lines).
Each 1-face (line) is bounded by 2 equal 0-faces (vertices). The 3rd component (2-simplex) of Pascal's 3-simplex is

1 3 3 1
 3 6 3
  3 3
   1

and each of its three 1-faces reads 1 3 3 1, with each 1-face in turn bounded by the two 0-faces (vertices) 1 and 1. Also, for all m and all n:

Number of coefficients

For the nth component ((m − 1)-simplex) of Pascal's m-simplex, the number of the coefficients of multinomial expansion it consists of is given by C(n + m − 1, m − 1) (where the latter can also be written in multichoose notation). We can see this either as a sum of the number of coefficients of an (n − 1)th component ((m − 1)-simplex) of Pascal's m-simplex with the number of coefficients of an nth component ((m − 2)-simplex) of Pascal's (m − 1)-simplex, or by a number of all possible partitions of an nth power among m exponents. Example The terms of this table comprise a Pascal triangle in the format of a symmetric Pascal matrix
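The coefficient count C(n + m − 1, m − 1) and the components themselves are easy to generate. In the sketch below, the helper name `multinomial_layer` is made up for illustration:

```python
from math import comb, factorial
from itertools import product

def multinomial_layer(m, n):
    """Coefficients of (x1 + ... + xm)**n, keyed by exponent tuples summing to n.
    This is the nth component of Pascal's m-simplex."""
    coeffs = {}
    for k in product(range(n + 1), repeat=m):
        if sum(k) == n:
            c = factorial(n)
            for ki in k:
                c //= factorial(ki)  # multinomial coefficient n! / (k1! ... km!)
            coeffs[k] = c
    return coeffs

layer = multinomial_layer(3, 3)            # 3rd component of Pascal's 3-simplex
assert len(layer) == comb(3 + 3 - 1, 3 - 1)  # C(n+m-1, m-1) = 10 coefficients
assert sum(layer.values()) == 3 ** 3         # coefficients sum to m**n
assert layer[(3, 0, 0)] == 1 and layer[(1, 1, 1)] == 6
```

Setting one exponent to zero restricts the layer to a face, which reproduces the corresponding component of Pascal's (m − 1)-simplex, matching the inheritance property above.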
https://en.wikipedia.org/wiki/Subfunctor
In category theory, a branch of mathematics, a subfunctor is a special type of functor that is an analogue of a subset.

Definition

Let C be a category, and let F be a contravariant functor from C to the category of sets Set. A contravariant functor G from C to Set is a subfunctor of F if

For all objects c of C, G(c) ⊆ F(c), and
For all arrows f: c′ → c of C, G(f) is the restriction of F(f) to G(c).

This relation is often written as G ⊆ F. For example, let 1 be the category with a single object and a single arrow. A functor F: 1 → Set maps the unique object of 1 to some set S and the unique identity arrow of 1 to the identity function 1S on S. A subfunctor G of F maps the unique object of 1 to a subset T of S and maps the unique identity arrow to the identity function 1T on T. Notice that 1T is the restriction of 1S to T. Consequently, subfunctors of F correspond to subsets of S.

Remarks

Subfunctors in general are like global versions of subsets. For example, if one imagines the objects of some category C to be analogous to the open sets of a topological space, then a contravariant functor from C to the category of sets gives a set-valued presheaf on C, that is, it associates sets to the objects of C in a way that is compatible with the arrows of C. A subfunctor then associates a subset to each set, again in a compatible way. The most important examples of subfunctors are subfunctors of the Hom functor. Let c be an object of the category C, and consider the functor Hom(−, c). This functor takes an object c′ of C and gives back all of the morphisms c′ → c. A subfunctor of Hom(−, c) gives back only some of the morphisms. Such a subfunctor is called a sieve, and it is usually used when defining Grothendieck topologies.

Open subfunctors

Subfunctors are also used in the construction of representable functors on the category of ringed spaces. Let F be a contravariant functor from the category of ringed spaces to the category of sets, and let G ⊆ F.
Suppose that this inclusion morphism G → F is representable by open immersions, i.e., for any representable functor Hom(−, X) and any morphism Hom(−, X) → F, the fibered product G ×_F Hom(−, X) is a representable functor Hom(−, Y) and the morphism Y → X defined by the Yoneda lemma is an open immersion. Then G is called an open subfunctor of F. If F is covered by representable open subfunctors, then, under certain conditions, it can be shown that F is representable. This is a useful technique for the construction of ringed spaces. It was discovered and exploited heavily by Alexander Grothendieck, who applied it especially to the case of schemes. For a formal statement and proof, see Grothendieck, Éléments de géométrie algébrique, vol. 1, 2nd ed., chapter 0, section 4.5.
https://en.wikipedia.org/wiki/Antiisomorphism
In category theory, a branch of mathematics, an antiisomorphism (or anti-isomorphism) between structured sets A and B is an isomorphism from A to the opposite of B (or equivalently from the opposite of A to B). If there exists an antiisomorphism between two structures, they are said to be antiisomorphic. Intuitively, to say that two mathematical structures are antiisomorphic is to say that they are basically opposites of one another. The concept is particularly useful in an algebraic setting, as, for instance, when applied to rings.

Simple example

Let A be the relational structure (or directed graph) consisting of the elements {1,2,3} and a binary relation defined as follows: Let B be the relational structure consisting of the elements {a,b,c} and a binary relation defined as follows: Note that the opposite of B (denoted Bop) is the same set of elements with the opposite binary relation (that is, reverse all the arcs of the directed graph): If we replace a, b, and c with 1, 2, and 3 respectively, we see that each rule in Bop is the same as some rule in A. That is, we can define an isomorphism φ from A to Bop by φ(1) = a, φ(2) = b, φ(3) = c. Then φ is an antiisomorphism between A and B.

Ring anti-isomorphisms

Specializing the general language of category theory to the algebraic topic of rings, we have: Let R and S be rings and f: R → S be a bijection. Then f is a ring anti-isomorphism if

f(x + y) = f(x) + f(y) and f(xy) = f(y)f(x) for all x, y in R.

If R = S then f is a ring anti-automorphism. An example of a ring anti-automorphism is given by the conjugate mapping of quaternions:

a + bi + cj + dk ↦ a − bi − cj − dk.
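Quaternion conjugation can be checked numerically as a ring anti-automorphism, i.e. that (pq)* = q*p*. Quaternions are represented below as (a, b, c, d) tuples for a + bi + cj + dk, an illustrative encoding rather than a standard library type:

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj(q):
    """Quaternion conjugate: a + bi + cj + dk -> a - bi - cj - dk."""
    a, b, c, d = q
    return (a, -b, -c, -d)

p, q = (1, 2, 3, 4), (5, 6, 7, 8)
# additivity is immediate; the anti-automorphism property reverses multiplication order
assert conj(qmul(p, q)) == qmul(conj(q), conj(p))
```

Because quaternion multiplication is non-commutative, the order reversal is essential: `qmul(conj(p), conj(q))` would not equal `conj(qmul(p, q))` in general.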
https://en.wikipedia.org/wiki/Interior%20product
In mathematics, the interior product (also known as interior derivative, interior multiplication, inner multiplication, inner derivative, insertion operator, or inner derivation) is a degree −1 (anti)derivation on the exterior algebra of differential forms on a smooth manifold. The interior product, named in opposition to the exterior product, should not be confused with an inner product. The interior product ι_X ω is sometimes written as X ⨼ ω.

Definition

The interior product is defined to be the contraction of a differential form with a vector field. Thus if X is a vector field on the manifold M, then ι_X : Ω^p(M) → Ω^(p−1)(M) is the map which sends a p-form ω to the (p − 1)-form ι_X ω defined by the property that

(ι_X ω)(X₁, …, X_{p−1}) = ω(X, X₁, …, X_{p−1})

for any vector fields X₁, …, X_{p−1}. The interior product is the unique antiderivation of degree −1 on the exterior algebra such that on one-forms α,

ι_X α = α(X) = ⟨α, X⟩,

where ⟨α, X⟩ is the duality pairing between α and the vector X. Explicitly, if β is a p-form and γ is a q-form, then

ι_X(β ∧ γ) = (ι_X β) ∧ γ + (−1)^p β ∧ (ι_X γ).

The above relation says that the interior product obeys a graded Leibniz rule. An operation satisfying linearity and a Leibniz rule is called a derivation.

Properties

If in local coordinates the vector field X is described by functions f₁, …, f_n, then the interior product is given by

ι_X (dx₁ ∧ … ∧ dx_n) = Σ_{r=1}^{n} (−1)^(r−1) f_r dx₁ ∧ … ∧ d̂x_r ∧ … ∧ dx_n,

where d̂x_r denotes that the factor dx_r is omitted from the product. By antisymmetry of forms,

ι_X ι_Y ω = −ι_Y ι_X ω,

and so ι_X ∘ ι_X = 0. This may be compared to the exterior derivative d, which has the property d ∘ d = 0. The interior product relates the exterior derivative and Lie derivative of differential forms by the Cartan formula (also known as the Cartan identity, Cartan homotopy formula or Cartan magic formula):

L_X ω = d(ι_X ω) + ι_X (dω).

This identity defines a duality between the exterior and interior derivatives. Cartan's identity is important in symplectic geometry and general relativity: see moment map. The Cartan homotopy formula is named after Élie Cartan. The interior product with respect to the commutator of two vector fields X, Y satisfies the identity

ι_{[X,Y]} = [L_X, ι_Y].
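At a single point, a p-form is just a fully antisymmetric p-index array, and the interior product is contraction of the first slot. The NumPy sketch below uses illustrative helpers (not a differential-geometry library) to check the degree −1 property and ι_X ∘ ι_X = 0:

```python
import itertools
import math
import numpy as np

def antisymmetrize(T):
    """Project a p-index array onto its fully antisymmetric part."""
    p = T.ndim
    out = np.zeros_like(T)
    for perm in itertools.permutations(range(p)):
        # parity of the permutation via its inversion count
        inv = sum(1 for i in range(p) for j in range(i + 1, p) if perm[i] > perm[j])
        out += (-1) ** inv * np.transpose(T, perm)
    return out / math.factorial(p)

def interior(omega, X):
    """iota_X omega: contract the first slot of the p-form with the vector X."""
    return np.tensordot(X, omega, axes=(0, 0))

rng = np.random.default_rng(0)
omega = antisymmetrize(rng.standard_normal((4, 4, 4)))  # a 3-form on R^4, at a point
X = rng.standard_normal(4)

# degree -1: a 3-form becomes a 2-form
assert interior(omega, X).shape == (4, 4)
# iota_X composed with itself vanishes, by antisymmetry of omega
assert np.allclose(interior(interior(omega, X), X), 0)
```

The second assertion is just the pointwise statement that X^a X^b ω_{abc} = 0 whenever ω is antisymmetric in its first two indices.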
https://en.wikipedia.org/wiki/Northern%20Italy
Northern Italy (, , ) is a geographical and cultural region in the northern part of Italy. The Italian National Institute of Statistics defines the region as encompassing the four Northwestern regions of Piedmont, Aosta Valley, Liguria and Lombardy in addition to the four Northeastern regions of Trentino-Alto Adige, Veneto, Friuli-Venezia Giulia and Emilia-Romagna. With a total area of , and a population of 27.4 million as of 2022, the region covers roughly 40% of the Italian Republic and contains 46% of its population. Two of Italy's largest metropolitan areas, Milan and Turin, are located in the region. Northern Italy's GDP was estimated at about €1 trillion in 2021, accounting for 56.5% of the Italian economy. Northern Italy has a rich and distinct culture. Thirty-seven of the fifty-nine World Heritage Sites in Italy are found in the region. Rhaeto-Romance and Gallo-Italic languages are spoken in the region, as opposed to the Italo-Dalmatian languages spoken in the rest of Italy. The Venetian language is sometimes considered to be part of the Italo-Dalmatian languages, but some major publications such as Ethnologue (to which UNESCO refers on its page about endangered languages) and Glottolog define it as Gallo-Italic. Definition and etymology Northern Italy was called by different terms in different periods of history. During ancient times the terms Gallia Cisalpina, Gallia Citerior or Gallia Togata were used to define that part of Italy inhabited by Celts (Gauls) between the 4th and 3rd century BC. Conquered by the Roman Republic in the 220s BC, it was a Roman province from c. 81 BC until 42 BC, when it was merged into Roman Italy. Until that time, it was considered part of Gaul, precisely that part of Gaul on the "hither side of the Alps" (from the perspective of the Romans), as opposed to Transalpine Gaul ("on the far side of the Alps").
After the fall of the Roman Empire and the settlement of the Lombards, the name Langobardia Maior was used, in the Early Middle Ages, to define the domains of the Lombard Kingdom in Northern Italy, with its capital at Pavia. The Lombard territories beyond were called Langobardia Minor, consisting of the duchies of Spoleto and Benevento. During the Late Middle Ages, after the fall of the northern part of the Lombard Kingdom to Charlemagne, the term Longobardia was used to mean Northern Italy within the medieval Kingdom of Italy. As the area became partitioned into regional states, the term Lombardy subsequently shifted to indicate only the area of the Duchies of Milan, Mantua, Parma and Modena, and later only the area around Milan. More recently, the term Alta Italia (Italian for 'High Italy') became widely used, notably by the Comitato di Liberazione Nazionale Alta Italia during the Second World War. In the 1960s, the term Padania began to be used occasionally as a geographical synonym of the Po Valley. The term appeared sparingly until the early 1990s, when Lega Nord, then a secessionist political party, pro
https://en.wikipedia.org/wiki/Central%20Italy
Central Italy is one of the five official statistical regions of Italy used by the National Institute of Statistics (ISTAT), a first-level NUTS region, and a European Parliament constituency. Regions Central Italy encompasses four of the country's 20 regions: Lazio, Marches (Marche), Tuscany (Toscana) and Umbria. The southernmost and easternmost parts of Lazio (the Sora, Cassino, Gaeta, Cittaducale, Formia and Amatrice districts) are often included in Southern Italy (the so-called Mezzogiorno) for cultural and historical reasons, since they were once part of the Kingdom of the Two Sicilies and southern Italian dialects are spoken there. As a geographical region, however, central Italy may also include Abruzzo and Molise, which are otherwise considered part of Southern Italy for socio-cultural, linguistic and historical reasons. Geography Central Italy is crossed by the northern and central Apennines and is washed by the Adriatic Sea to the east and by the Tyrrhenian and Ligurian Seas to the west. The main rivers of this part of the territory are the Arno and the Tiber with their tributaries (e.g. the Aniene), and the Liri-Garigliano. The most important lakes are Lake Trasimeno, Lake Montedoglio, Lake Bolsena, Lake Bracciano, Lake Vico, Lake Albano and Lake Nemi. In terms of elevation, central Italy has a predominantly hilly territory (68.9%); mountainous and flat areas account for 26.9% and 4.2% of the territory respectively. History For centuries before the unification of Italy in 1861, central Italy was divided into two states, the Papal States and the Grand Duchy of Tuscany. Papal States The Papal States, officially the State of the Church, were a series of territories in the Italian Peninsula under the direct sovereign rule of the pope from 756 until 1870. They were among the major states of Italy from the 8th century until the unification of Italy, between 1859 and 1870.
The state had its origins in the rise of Christianity throughout Italy, and with it the rising influence of the Christian Church. By the mid-8th century, with the decline of the Byzantine Empire in Italy, the Papacy became effectively sovereign. Several Christian rulers, including the Frankish kings Pepin the Short and Charlemagne, further donated lands to be governed by the Church. During the Renaissance, the papal territory expanded greatly and the pope became one of Italy's most important secular rulers as well as the head of the Church. At their zenith, the Papal States covered most of the modern Italian regions of Lazio (which includes Rome), Marche, Umbria and Romagna, and portions of Emilia. These holdings were considered a manifestation of the temporal power of the pope, as opposed to his ecclesiastical primacy. By 1861, much of the Papal States' territory had been conquered by the Kingdom of Italy; only Lazio, including Rome, remained under the pope's temporal control. In 1870, the pope lost this remaining territory as well, when Italian forces captured Rome and completed the unification of Italy.
https://en.wikipedia.org/wiki/Insular%20Italy
Insular Italy is one of the five official statistical regions of Italy used by the National Institute of Statistics (ISTAT), a first-level NUTS region and a European Parliament constituency. Insular Italy encompasses two of the country's 20 regions: Sardinia and Sicily. Geography Insular Italy occupies one sixth of the national territory in surface area. Territorially, both Sicily and Sardinia include several minor islands and archipelagoes that are administratively dependent on the mother islands. Sicily is the largest island in the Mediterranean (25,708 km2) and one of the largest in Europe, while Sardinia is only slightly smaller (24,090 km2). Lowlands are limited in the geographic region and generally take the form of narrow coastal belts; the only exceptions are the Campidano and Nurra plains in Sardinia and the Plain of Catania in Sicily, which extend over 1,200 km2 and 430 km2 respectively. The rest of the area is predominantly hilly, with hills occupying 70% of the territory. Sicily is home to Mount Etna, Italy's highest non-Alpine peak and Europe's largest active volcano, while Sardinia is home to the Gennargentu mountain range. Demographics The population of Insular Italy totals over 6.7 million residents. Insular Italy has a population density of less than half the national average, mainly because of the sparse population of Sardinia, one of the least densely populated regions of Italy and Europe; Sicily, by contrast, has a population density five times that of Sardinia. Overall, their combined populations amount to just one-tenth of the national population, making Insular Italy the least populated macro-region of the country.
Regions Most populous municipalities Below is the list of the population residing in 2022 in municipalities with more than inhabitants: Economy The gross domestic product (GDP) of the region was 123.9 billion euros in 2018, accounting for 7% of Italy's economic output. GDP per capita adjusted for purchasing power was 18,500 euros, or 62% of the EU27 average, in the same year. Socio-economic situation The unemployment rate of Sicily is the highest in the country at 11.9%, while in Sardinia it dropped below 10% for the first time in 2006–07, reaching 8.6%, the lowest of all the Mezzogiorno regions excluding Molise and Abruzzo. The low level of entrepreneurship in Sicily is tied to local organized criminal activity, while in Sardinia it results from high operating costs (electricity, transportation, etc.), which are 20–50% higher than in other regions because of the island's peripheral location relative to the Italian mainland and the lack of proper territorial continuity (continuità territoriale). This condition has been mitigated in Sardinia by the development of information-technology companies such as Tiscali, low-cost carriers like Ryanair, and laws regarding fares and routes.
https://en.wikipedia.org/wiki/Common%20Algebraic%20Specification%20Language
The Common Algebraic Specification Language (CASL) is a general-purpose specification language based on first-order logic with induction. Partial functions and subsorting are also supported. Overview CASL was designed by the Common Framework Initiative (CoFI) with the aim of subsuming many existing specification languages. CASL comprises four levels: basic specifications, for the specification of single software modules; structured specifications, for the modular specification of modules; architectural specifications, for prescribing the structure of implementations; and specification libraries, for storing specifications distributed over the Internet. The four levels are orthogonal to each other. In particular, it is possible to use CASL structured and architectural specifications and libraries with logics other than CASL; for this purpose, the logic has to be formalized as an institution. This feature is also used by the CASL extensions. Extensions Several extensions of CASL have been designed: HasCASL, a higher-order extension; CoCASL, a coalgebraic extension; CspCASL, a concurrent extension based on CSP; ModalCASL, a modal logic extension; CASL-LTL, a temporal logic extension; and HetCASL, an extension for heterogeneous specification. The Heterogeneous Tool Set (Hets) is the main analysis tool for CASL.
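As a sketch of what the basic-specification level looks like, the following is a small CASL-style specification of natural numbers with addition. It is a minimal illustration written for this article, not an excerpt from the CoFI documentation, and the name NAT and the exact comment syntax shown are assumptions:

```casl
spec NAT =
  free type Nat ::= zero | suc(Nat)   %% free datatype: no junk, no confusion
  op __+__ : Nat * Nat -> Nat         %% total infix operation on Nat
  forall x, y : Nat
  . zero + y = y                      %% axioms define addition by
  . suc(x) + y = suc(x + y)           %% structural induction on the first argument
end
```

A structured specification could then extend or instantiate such a module, while an architectural specification would prescribe how an implementation of it is decomposed into units.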