https://en.wikipedia.org/wiki/Ring%20of%20symmetric%20functions
|
In algebra and in particular in algebraic combinatorics, the ring of symmetric functions is a specific limit of the rings of symmetric polynomials in n indeterminates, as n goes to infinity. This ring serves as a universal structure in which relations between symmetric polynomials can be expressed in a way independent of the number n of indeterminates (but its elements are neither polynomials nor functions). Among other things, this ring plays an important role in the representation theory of the symmetric group.
The ring of symmetric functions can be given a coproduct and a bilinear form making it into a positive selfadjoint graded Hopf algebra that is both commutative and cocommutative.
Symmetric polynomials
The study of symmetric functions is based on that of symmetric polynomials. In a polynomial ring in some finite set of indeterminates, a polynomial is called symmetric if it stays the same whenever the indeterminates are permuted in any way. More formally, there is an action by ring automorphisms of the symmetric group Sn on the polynomial ring in n indeterminates, where a permutation acts on a polynomial by simultaneously substituting each of the indeterminates for another according to the permutation used. The invariants for this action form the subring of symmetric polynomials. If the indeterminates are X1, ..., Xn, then examples of such symmetric polynomials are
X1 + X2 + ⋯ + Xn and X1X2⋯Xn.
A somewhat more complicated example is
X1³X2X3 + X1X2³X3 + X1X2X3³ + X1³X2X4 + X1X2³X4 + X1X2X4³ + ⋯
where the summation goes on to include all products of the third power of some variable and two other variables. There are many specific kinds of symmetric polynomials, such as elementary symmetric polynomials, power sum symmetric polynomials, monomial symmetric polynomials, complete homogeneous symmetric polynomials, and Schur polynomials.
The ring of symmetric functions
Most relations between symmetric polynomials do not depend on the number n of indeterminates, other than that some polynomials in the relation might require n to be large enough in order to be defined. For instance Newton's identity for the third power sum polynomial p3 leads to
p3(X1, ..., Xn) = e1(X1, ..., Xn)³ − 3e1(X1, ..., Xn)e2(X1, ..., Xn) + 3e3(X1, ..., Xn),
where the ek denote elementary symmetric polynomials; this formula is valid for all natural numbers n, and the only notable dependency on n is that ek(X1, ..., Xn) = 0 whenever n < k. One would like to write this as an identity
p3 = e1³ − 3e1e2 + 3e3
that does not depend on n at all, and this can be done in the ring of symmetric functions. In that ring there are nonzero elements ek for all integers k ≥ 1, and any element of the ring can be given by a polynomial expression in the elements ek.
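As a sanity check, the n-variable identity can be verified numerically; the helper functions below (our own names, not from the article) compute elementary symmetric and power-sum polynomials directly from their definitions.

```python
from itertools import combinations
from math import prod

def e(k, xs):
    """Elementary symmetric polynomial e_k(x1, ..., xn); zero when k > n."""
    return sum(prod(c) for c in combinations(xs, k))

def p(k, xs):
    """Power-sum polynomial p_k = x1^k + ... + xn^k."""
    return sum(x**k for x in xs)

xs = [2, 3, 5, 7]
lhs = p(3, xs)
rhs = e(1, xs)**3 - 3*e(1, xs)*e(2, xs) + 3*e(3, xs)
assert lhs == rhs   # Newton's identity for p3 holds for any values
```

The same check works for any number of values, illustrating that the identity is independent of n.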
Definitions
A ring of symmetric functions can be defined over any commutative ring R, and will be denoted ΛR; the basic case is for R = Z. The ring ΛR is in fact a graded R-algebra. There are two main constructions for it; the first one given below can be found in (Stanley, 1999), and the second is essentially the one given in (Macdonald, 1979).
As
|
https://en.wikipedia.org/wiki/Projective%20module
|
In mathematics, particularly in algebra, the class of projective modules enlarges the class of free modules (that is, modules with basis vectors) over a ring, by keeping some of the main properties of free modules. Various equivalent characterizations of these modules appear below.
Every free module is a projective module, but the converse fails to hold over some rings, such as Dedekind rings that are not principal ideal domains. However, every projective module is a free module if the ring is a principal ideal domain such as the integers, or a polynomial ring over a field (this is the Quillen–Suslin theorem).
Projective modules were first introduced in 1956 in the influential book Homological Algebra by Henri Cartan and Samuel Eilenberg.
Definitions
Lifting property
The usual category-theoretical definition is in terms of the property of lifting that carries over from free to projective modules: a module P is projective if and only if for every surjective module homomorphism f : N → M and every module homomorphism g : P → M, there exists a module homomorphism h : P → N such that f ∘ h = g. (We don't require the lifting homomorphism h to be unique; this is not a universal property.)
The advantage of this definition of "projective" is that it can be carried out in categories more general than module categories: we don't need a notion of "free object". It can also be dualized, leading to injective modules. The lifting property may also be rephrased as every morphism from P to M factors through every epimorphism onto M. Thus, by definition, projective modules are precisely the projective objects in the category of R-modules.
Split-exact sequences
A module P is projective if and only if every short exact sequence of modules of the form
0 → A → B → P → 0
is a split exact sequence. That is, for every surjective module homomorphism f : B → P there exists a section map, that is, a module homomorphism h : P → B such that f h = idP. In that case, h(P) is a direct summand of B, h is an isomorphism from P to h(P), and h f is a projection on the summand h(P). Equivalently, B = h(P) ⊕ ker(f).
Direct summands of free modules
A module P is projective if and only if there is another module Q such that the direct sum of P and Q is a free module.
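A concrete non-free instance can be checked by hand (a minimal sketch; the choice R = Z/6Z is our illustrative assumption, not from the article): the ideal 2R is a direct summand of the free module R, hence projective, yet it is too small to be free.

```python
# Work in R = Z/6Z; ideals are viewed as R-submodules of R itself.
R = set(range(6))
P = {(2 * r) % 6 for r in R}   # the ideal 2R = {0, 2, 4}
Q = {(3 * r) % 6 for r in R}   # the ideal 3R = {0, 3}

# R = P ⊕ Q: the two ideals meet trivially and together give all of R.
assert P & Q == {0}
assert {(p + q) % 6 for p in P for q in Q} == R

# P is a direct summand of the free R-module R, hence projective;
# it is not free, since any nonzero free R-module has at least 6 elements.
assert len(P) == 3
```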
Exactness
An R-module P is projective if and only if the covariant functor Hom(P, −) : R-Mod → Ab is an exact functor, where R-Mod is the category of left R-modules and Ab is the category of abelian groups. When the ring R is commutative, Ab is advantageously replaced by R-Mod in the preceding characterization. This functor is always left exact, but, when P is projective, it is also right exact. This means that P is projective if and only if this functor preserves epimorphisms (surjective homomorphisms), or if it preserves finite colimits.
Dual basis
A module P is projective if and only if there exists a set {ai : i ∈ I} ⊆ P and a set {fi : i ∈ I} of R-linear maps fi : P → R such that for every x in P, fi(x) is only nonzero for finitely many i, and x = Σi fi(x) ai.
Elementary examples and properties
The following properties of projective modules are quickly deduced from any of the above (equival
|
https://en.wikipedia.org/wiki/Scheme%20%28mathematics%29
|
In mathematics, a scheme is a mathematical structure that enlarges the notion of algebraic variety in several ways, such as taking account of multiplicities (the equations x = 0 and x² = 0 define the same algebraic variety but different schemes) and allowing "varieties" defined over any commutative ring (for example, Fermat curves are defined over the integers).
Scheme theory was introduced by Alexander Grothendieck in 1960 in his treatise "Éléments de géométrie algébrique"; one of its aims was developing the formalism needed to solve deep problems of algebraic geometry, such as the Weil conjectures (the last of which was proved by Pierre Deligne). Strongly based on commutative algebra, scheme theory allows a systematic use of methods of topology and homological algebra. Scheme theory also unifies algebraic geometry with much of number theory, which eventually led to Wiles's proof of Fermat's Last Theorem.
Formally, a scheme is a topological space together with commutative rings for all of its open sets, which arises from gluing together spectra (spaces of prime ideals) of commutative rings along their open subsets. In other words, it is a ringed space which is locally a spectrum of a commutative ring.
The relative point of view is that much of algebraic geometry should be developed for a morphism X → Y of schemes (called a scheme X over Y), rather than for an individual scheme. For example, in studying algebraic surfaces, it can be useful to consider families of algebraic surfaces over any scheme Y. In many cases, the family of all varieties of a given type can itself be viewed as a variety or scheme, known as a moduli space.
For some of the detailed definitions in the theory of schemes, see the glossary of scheme theory.
Development
The origins of algebraic geometry mostly lie in the study of polynomial equations over the real numbers. By the 19th century, it became clear (notably in the work of Jean-Victor Poncelet and Bernhard Riemann) that algebraic geometry was simplified by working over the field of complex numbers, which has the advantage of being algebraically closed. Two issues gradually drew attention in the early 20th century, motivated by problems in number theory: how can algebraic geometry be developed over any algebraically closed field, especially in positive characteristic? (The tools of topology and complex analysis used to study complex varieties do not seem to apply here.) And what about algebraic geometry over an arbitrary field?
Hilbert's Nullstellensatz suggests an approach to algebraic geometry over any algebraically closed field k: the maximal ideals in the polynomial ring k[x1,...,xn] are in one-to-one correspondence with the set kn of n-tuples of elements of k, and the prime ideals correspond to the irreducible algebraic sets in kn, known as affine varieties. Motivated by these ideas, Emmy Noether and Wolfgang Krull developed the subject of commutative algebra in the 1920s and 1930s. Their work generalizes algebr
|
https://en.wikipedia.org/wiki/Conformal%20field%20theory
|
A conformal field theory (CFT) is a quantum field theory that is invariant under conformal transformations. In two dimensions, there is an infinite-dimensional algebra of local conformal transformations, and conformal field theories can sometimes be exactly solved or classified.
Conformal field theory has important applications to condensed matter physics, statistical mechanics, quantum statistical mechanics, and string theory. Statistical and condensed matter systems are indeed often conformally invariant at their thermodynamic or quantum critical points.
Scale invariance vs conformal invariance
In quantum field theory, scale invariance is a common and natural symmetry, because any fixed point of the renormalization group is by definition scale invariant. Conformal symmetry is stronger than scale invariance, and one needs additional assumptions to argue that it should appear in nature. The basic idea behind its plausibility is that local scale invariant theories have their currents given by Jμ = Tμν εν, where εν is a Killing vector and Tμν is a conserved operator (the stress tensor) of dimension exactly d. For the associated symmetries to include scale but not conformal transformations, the trace Tμμ has to be a non-zero total derivative, implying that there is a non-conserved operator of dimension exactly d − 1.
Under some assumptions it is possible to completely rule out this type of non-renormalization and hence prove that scale invariance implies conformal invariance in a quantum field theory, for example in unitary compact conformal field theories in two dimensions.
While it is possible for a quantum field theory to be scale invariant but not conformally invariant, examples are rare. For this reason, the terms are often used interchangeably in the context of quantum field theory.
Two dimensions vs higher dimensions
The number of independent conformal transformations is infinite in two dimensions, and finite in higher dimensions. This makes conformal symmetry much more constraining in two dimensions. All conformal field theories share the ideas and techniques of the conformal bootstrap. But the resulting equations are more powerful in two dimensions, where they are sometimes exactly solvable (for example in the case of minimal models), in contrast to higher dimensions, where numerical approaches dominate.
The development of conformal field theory has been earlier and deeper in the two-dimensional case, in particular after the 1983 article by Belavin, Polyakov and Zamolodchikov.
The term conformal field theory has sometimes been used with the meaning of two-dimensional conformal field theory, as in the title of a 1997 textbook.
Higher-dimensional conformal field theories have become more popular with the AdS/CFT correspondence in the late 1990s, and the development of numerical conformal bootstrap techniques in the 2000s.
Global vs local conformal symmetry in two dimensions
The global conformal group of the Riemann sphere is the group of Möbius transformat
|
https://en.wikipedia.org/wiki/Conformal%20group
|
In mathematics, the conformal group of an inner product space is the group of transformations from the space to itself that preserve angles. More formally, it is the group of transformations that preserve the conformal geometry of the space.
Several specific conformal groups are particularly important:
The conformal orthogonal group. If V is a vector space with a quadratic form Q, then the conformal orthogonal group CO(V, Q) is the group of linear transformations T of V for which there exists a scalar λ such that for all x in V
Q(Tx) = λ²Q(x).
For a definite quadratic form, the conformal orthogonal group is equal to the orthogonal group times the group of dilations.
The conformal group of the sphere is generated by the inversions in circles. This group is also known as the Möbius group.
In Euclidean space En, n > 2, the conformal group is generated by inversions in hyperspheres.
In a pseudo-Euclidean space Ep,q, the conformal group is Conf(p, q) ≃ O(p + 1, q + 1)/Z2.
All conformal groups are Lie groups.
Angle analysis
In Euclidean geometry one can expect the standard circular angle to be characteristic, but in pseudo-Euclidean space there is also the hyperbolic angle. In the study of special relativity the various frames of reference, for varying velocity with respect to a rest frame, are related by rapidity, a hyperbolic angle. One way to describe a Lorentz boost is as a hyperbolic rotation which preserves the differential angle between rapidities. Thus, they are conformal transformations with respect to the hyperbolic angle.
A method to generate an appropriate conformal group is to mimic the steps of the Möbius group as the conformal group of the ordinary complex plane. Pseudo-Euclidean geometry is supported by alternative complex planes where points are split-complex numbers or dual numbers. Just as the Möbius group requires the Riemann sphere, a compact space, for a complete description, so the alternative complex planes require compactification for complete description of conformal mapping. Nevertheless, the conformal group in each case is given by linear fractional transformations on the appropriate plane.
Mathematical definition
Given a (pseudo-)Riemannian manifold M with conformal class [g], the conformal group Conf(M) is the group of conformal maps from M to itself.
More concretely, this is the group of angle-preserving smooth maps from M to itself. However, when the signature of [g] is not definite, the 'angle' is a hyper-angle which is potentially infinite.
For pseudo-Euclidean space, the definition is slightly different. Conf(p, q) is the conformal group of the manifold arising from conformal compactification of the pseudo-Euclidean space Ep,q (sometimes identified with Rp,q after a choice of orthonormal basis). This conformal compactification can be defined using Sp × Sq, considered as a submanifold of null points in Rp+1,q+1 by the inclusion (where (x, y) is considered as a single spacetime vector). The conformal compactification is then Sp × Sq with 'antipodal points' identified. This happens by projectivising the space Rp+1,q+1. If is
|
https://en.wikipedia.org/wiki/Quotient%20module
|
In algebra, given a module and a submodule, one can construct their quotient module. This construction, described below, is very similar to that of a quotient vector space. It differs from analogous quotient constructions of rings and groups by the fact that in these cases, the subspace that is used for defining the quotient is not of the same nature as the ambient space (that is, a quotient ring is the quotient of a ring by an ideal, not a subring, and a quotient group is the quotient of a group by a normal subgroup, not by a general subgroup).
Given a module A over a ring R, and a submodule B of A, the quotient space A/B is defined by the equivalence relation
a ~ b if and only if b − a ∈ B,
for any a, b in A. The elements of A/B are the equivalence classes [a] = a + B = {a + b : b ∈ B}. The function π : A → A/B sending a in A to its equivalence class a + B is called the quotient map or the projection map, and is a module homomorphism.
The addition operation on A/B is defined for two equivalence classes as the equivalence class of the sum of two representatives from these classes; and scalar multiplication of elements of A/B by elements of R is defined similarly. Note that it has to be shown that these operations are well-defined. Then A/B becomes itself an R-module, called the quotient module. In symbols, for all a, b in A and r in R:
(a + B) + (b + B) = (a + b) + B,
r · (a + B) = (r · a) + B.
Examples
Consider the polynomial ring R[X], with real coefficients, and the R[X]-module A = R[X]. Consider the submodule
B = (X² + 1) R[X]
of A, that is, the submodule of all polynomials divisible by X² + 1. It follows that the equivalence relation determined by this module will be
P(X) ~ Q(X) if and only if P(X) and Q(X) give the same remainder when divided by X² + 1.
Therefore, in the quotient module A/B, X² + 1 is the same as 0; so one can view A/B as obtained from R[X] by setting X² + 1 = 0. This quotient module is isomorphic to the complex numbers, viewed as a module over the real numbers R.
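The isomorphism with the complex numbers can be sketched computationally (the coefficient-list representation below is our own choice): reducing a polynomial modulo X² + 1 amounts to substituting X² = −1, which is exactly complex arithmetic with the class of X playing the role of i.

```python
def reduce_mod(p):
    """Reduce a polynomial (p[k] = coefficient of X^k) modulo X² + 1,
    returning (a, b) for the class a + bX."""
    a, b = 0.0, 0.0
    for k, c in enumerate(p):
        # X^k corresponds to i^k in C: the cycle 1, X, -1, -X
        r = k % 4
        if r == 0: a += c
        elif r == 1: b += c
        elif r == 2: a -= c
        else: b -= c
    return a, b

# (1 + 2X + X³) mod (X² + 1): since X³ ≡ −X, the class is 1 + X
a, b = reduce_mod([1, 2, 0, 1])
assert (a, b) == (1.0, 1.0)
# matches complex arithmetic with X ↦ i: 1 + 2i + i³ = 1 + i
assert complex(a, b) == 1 + 2j + 1j**3
```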
See also
Quotient group
Quotient ring
Quotient (universal algebra)
|
https://en.wikipedia.org/wiki/Wishart%20distribution
|
In statistics, the Wishart distribution is a generalization to multiple dimensions of the gamma distribution. It is named in honor of John Wishart, who first formulated the distribution in 1928. Other names include Wishart ensemble (in random matrix theory, probability distributions over matrices are usually called "ensembles"), or Wishart–Laguerre ensemble (since its eigenvalue distribution involves Laguerre polynomials), or LOE, LUE, LSE (in analogy with GOE, GUE, GSE).
It is a family of probability distributions defined over symmetric, positive-definite random matrices (i.e. matrix-valued random variables). These distributions are of great importance in the estimation of covariance matrices in multivariate statistics. In Bayesian statistics, the Wishart distribution is the conjugate prior of the inverse covariance-matrix of a multivariate-normal random-vector.
Definition
Suppose G is a p × n matrix, each column of which is independently drawn from a p-variate normal distribution with zero mean:
Gi ~ Np(0, V).
Then the Wishart distribution is the probability distribution of the p × p random matrix
S = G Gᵀ,
known as the scatter matrix. One indicates that S has that probability distribution by writing
S ~ Wp(V, n).
The positive integer n is the number of degrees of freedom. Sometimes this is written W(V, p, n). For n ≥ p the matrix S is invertible with probability 1 if V is invertible.
If p = V = 1 then this distribution is a chi-squared distribution with n degrees of freedom.
Occurrence
The Wishart distribution arises as the distribution of the sample covariance matrix for a sample from a multivariate normal distribution. It occurs frequently in likelihood-ratio tests in multivariate statistical analysis. It also arises in the spectral theory of random matrices and in multidimensional Bayesian analysis. It is also encountered in wireless communications, while analyzing the performance of Rayleigh fading MIMO wireless channels .
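The defining construction can be sketched with a short Monte-Carlo check (the sample sizes and the matrix V below are illustrative assumptions): averaging many scatter matrices S = GGᵀ should recover E[S] = nV.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, draws = 2, 10, 20_000
V = np.array([[2.0, 0.5],
              [0.5, 1.0]])              # true covariance (assumed)
L = np.linalg.cholesky(V)               # L @ z ~ N_p(0, V) for z ~ N(0, I)

S_mean = np.zeros((p, p))
for _ in range(draws):
    G = L @ rng.standard_normal((p, n)) # p × n, columns i.i.d. N_p(0, V)
    S_mean += G @ G.T / draws           # average of Wishart samples

# E[S] = n · V for S ~ W_p(V, n)
assert np.allclose(S_mean, n * V, rtol=0.05)
```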
Probability density function
The Wishart distribution can be characterized by its probability density function as follows:
Let X be a p × p symmetric matrix of random variables that is positive semi-definite. Let V be a (fixed) symmetric positive definite matrix of size p × p.
Then, if n ≥ p, X has a Wishart distribution with n degrees of freedom if it has the probability density function
f(X) = |X|^((n−p−1)/2) e^(−tr(V⁻¹X)/2) / ( 2^(np/2) |V|^(n/2) Γp(n/2) ),
where |X| is the determinant of X and Γp is the multivariate gamma function defined as
Γp(n/2) = π^(p(p−1)/4) ∏(j=1..p) Γ(n/2 − (j−1)/2).
The density above is not the joint density of all the elements of the random matrix X (such density does not exist because of the symmetry constraints Xij = Xji); it is rather the joint density of the elements Xij for i ≤ j (, page 38). Also, the density formula above applies only to positive definite matrices; for other matrices the density is equal to zero.
Spectral density
The joint-eigenvalue density for the eigenvalues λ1, ..., λp ≥ 0 of a random matrix X ~ Wp(I, n) is
c e^(−Σi λi / 2) ∏i λi^((n−p−1)/2) ∏(i<j) |λi − λj|,
where c is a constant.
In fact the above definition can be extended to any real n > p − 1. If n ≤ p − 1, then the Wishart no longer has a density—instead it represents a singular distribution that takes values in a lower-dime
|
https://en.wikipedia.org/wiki/Finite%20intersection%20property
|
In general topology, a branch of mathematics, a non-empty family A of subsets of a set X is said to have the finite intersection property (FIP) if the intersection over any finite subcollection of A is non-empty. It has the strong finite intersection property (SFIP) if the intersection over any finite subcollection of A is infinite. Sets with the finite intersection property are also called centered systems and filter subbases.
The finite intersection property can be used to reformulate topological compactness in terms of closed sets; this is its most prominent application. Other applications include proving that certain perfect sets are uncountable, and the construction of ultrafilters.
Definition
Let X be a set and A a nonempty family of subsets of X; that is, A is a subset of the power set of X. Then A is said to have the finite intersection property if every nonempty finite subfamily has nonempty intersection; it is said to have the strong finite intersection property if that intersection is always infinite.
In symbols, A has the FIP if, for any choice of a finite nonempty subset B of A, there must exist a point x ∈ ⋂B. Likewise, A has the SFIP if, for every choice of such B, there are infinitely many such x.
In the study of filters, the common intersection of a family of sets is called a kernel, from much the same etymology as the sunflower. Families with empty kernel are called free; those with nonempty kernel, fixed.
Families of examples and non-examples
The empty set cannot belong to any collection with the finite intersection property.
A sufficient condition for the FIP is a nonempty kernel. The converse is generally false, but holds for finite families; that is, if A is finite, then A has the finite intersection property if and only if it is fixed.
Pairwise intersection
The finite intersection property is strictly stronger than pairwise intersection; the family {{1, 2}, {2, 3}, {1, 3}} has pairwise intersections, but not the FIP.
More generally, let n be a positive integer greater than unity, [n] = {1, ..., n}, and A the family of all subsets of [n] of size n − 1. Then any subset of A with fewer than n elements has nonempty intersection, but A lacks the FIP.
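The three-set case is small enough to check exhaustively (a minimal Python sketch):

```python
from itertools import combinations

# Pairwise intersections are nonempty, but the full intersection is empty,
# so this family has pairwise intersection without the FIP.
family = [{1, 2}, {2, 3}, {1, 3}]

for A, B in combinations(family, 2):
    assert A & B                 # every pair of members meets

full = set.intersection(*family)
assert full == set()             # ...but the whole family does not
```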
End-type constructions
If A1 ⊇ A2 ⊇ A3 ⊇ ⋯ is a decreasing sequence of non-empty sets, then the family A = {A1, A2, A3, ...} has the finite intersection property (and is even a π-system). If the inclusions are strict, then A admits the strong finite intersection property as well.
More generally, any A that is totally ordered by inclusion has the FIP.
At the same time, the kernel of A may be empty: if Ak = {k, k + 1, k + 2, ...}, then the kernel of {A1, A2, ...} is the empty set. Similarly, the family of intervals {(0, 1/n) : n = 1, 2, ...} also has the (S)FIP, but empty kernel.
"Generic" sets and properties
The family of all Borel subsets of [0, 1] with Lebesgue measure 1 has the FIP, as does the family of comeagre sets. If X is an infinite set, then the Fréchet filter (the family {X ∖ C : C finite}) has the FIP. All of these are free filters; they are upwards-closed and have empty infinitary intersection.
If and, for each positive integer the subset is precisely all
|
https://en.wikipedia.org/wiki/Heptagon
|
In geometry, a heptagon or septagon is a seven-sided polygon or 7-gon.
The heptagon is sometimes referred to as the septagon, using "sept-" (an elision of septua-, a Latin-derived numerical prefix, rather than hepta-, a Greek-derived numerical prefix; both are cognate) together with the Greek suffix "-agon" meaning angle.
Regular heptagon
A regular heptagon, in which all sides and all angles are equal, has internal angles of 5π/7 radians (900/7 ≈ 128.57 degrees). Its Schläfli symbol is {7}.
Area
The area (A) of a regular heptagon of side length a is given by:
A = (7/4) a² cot(π/7) ≈ 3.634 a².
This can be seen by subdividing the unit-sided heptagon into seven triangular "pie slices" with vertices at the center and at the heptagon's vertices, and then halving each triangle using the apothem as the common side. The apothem is half the cotangent of π/7, and the area of each of the 14 small triangles is one-fourth of the apothem.
The area of a regular heptagon inscribed in a circle of radius R is (7/2) R² sin(2π/7), while the area of the circle itself is πR²; thus the regular heptagon fills approximately 0.8710 of its circumscribed circle.
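Both area formulas are easy to confirm numerically (a minimal sketch; cot x is written as 1/tan x):

```python
from math import pi, tan, sin

a = 1.0                                  # side length
A_side = (7 / 4) * a**2 / tan(pi / 7)    # A = (7/4)·a²·cot(π/7)
assert abs(A_side - 3.633912) < 1e-5     # ≈ 3.634·a²

R = 1.0                                  # circumradius
A_insc = (7 / 2) * R**2 * sin(2 * pi / 7)
ratio = A_insc / (pi * R**2)             # fraction of the circle covered
assert abs(ratio - 0.8710) < 1e-4
```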
Construction
As 7 is a Pierpont prime but not a Fermat prime, the regular heptagon is not constructible with compass and straightedge but is constructible with a marked ruler and compass. It is the smallest regular polygon with this property. This type of construction is called a neusis construction. It is also constructible with compass, straightedge and angle trisector. The impossibility of straightedge-and-compass construction follows from the observation that 2cos(2π/7) ≈ 1.247 is a zero of the irreducible cubic x³ + x² − 2x − 1. Consequently, this polynomial is the minimal polynomial of 2cos(2π/7), whereas the degree of the minimal polynomial for a constructible number must be a power of 2.
Approximation
An approximation for practical use with an error of about 0.2% is to use half the side of an equilateral triangle inscribed in the same circle as the length of the side of a regular heptagon. It is unknown who first found this approximation, but it was mentioned by Heron of Alexandria's Metrica in the 1st century AD, was well known to medieval Islamic mathematicians, and can be found in the work of Albrecht Dürer. Let A lie on the circumference of the circumcircle. Draw arc BOC. Then BD = (1/2)BC gives an approximation for the edge of the heptagon.
This approximation uses √3/2 ≈ 0.8660 for the side of the heptagon inscribed in the unit circle while the exact value is 2 sin(π/7) ≈ 0.8678.
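The size of the error is simple to reproduce (a minimal sketch):

```python
from math import pi, sin, sqrt

approx = sqrt(3) / 2      # half the side of an inscribed equilateral triangle
exact = 2 * sin(pi / 7)   # true heptagon side, unit circumradius

rel_err = (approx - exact) / exact
assert abs(rel_err + 0.002) < 3e-4               # about −0.2 %
assert abs((approx - exact) * 1000 + 1.7) < 0.1  # ≈ −1.7 mm when r = 1 m
```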
Example to illustrate the error:
At a circumscribed circle radius r = 1 m, the absolute error of the 1st side would be approximately −1.7 mm.
Symmetry
The regular heptagon belongs to the D7h point group (Schoenflies notation), order 28. The symmetry elements are: a 7-fold proper rotation axis C7, a 7-fold improper rotation axis, S7, 7 vertical mirror planes, σv, 7 2-fold rotation axes, C2, in the plane of the heptagon and a horizontal mirror plane, σh, also in the heptagon's plane.
Diagonals and heptagonal triangle
The regular heptagon's side a, shorter diagonal b, and l
|
https://en.wikipedia.org/wiki/Ford%20circle
|
In mathematics, a Ford circle is a circle in the Euclidean plane, in a family of circles that are all tangent to the x-axis at rational points. For each rational number p/q, expressed in lowest terms, there is a Ford circle whose center is at the point (p/q, 1/(2q²)) and whose radius is 1/(2q²). It is tangent to the x-axis at its bottom point, (p/q, 0). The two Ford circles for rational numbers p/q and r/s (both in lowest terms) are tangent circles when |ps − qr| = 1 and otherwise these two circles are disjoint.
History
Ford circles are a special case of mutually tangent circles; the base line can be thought of as a circle with infinite radius. Systems of mutually tangent circles were studied by Apollonius of Perga, after whom the problem of Apollonius and the Apollonian gasket are named. In the 17th century René Descartes discovered Descartes' theorem, a relationship between the reciprocals of the radii of mutually tangent circles.
Ford circles also appear in the sangaku (geometrical puzzles) of Japanese mathematics. A typical problem, which is presented on an 1824 tablet in the Gunma Prefecture, covers the relationship of three touching circles with a common tangent. Given the size of the two outer large circles, what is the size of the small circle between them? The answer is equivalent to a Ford circle:
1/√(r_middle) = 1/√(r_left) + 1/√(r_right).
Ford circles are named after the American mathematician Lester R. Ford, Sr., who wrote about them in 1938.
Properties
The Ford circle associated with the fraction p/q is denoted by C[p/q] or C[p, q]. There is a Ford circle associated with every rational number. In addition, the line y = 1 is counted as a Ford circle – it can be thought of as the Ford circle associated with infinity, which is the case p = 1, q = 0.
Two different Ford circles are either disjoint or tangent to one another. No two interiors of Ford circles intersect, even though there is a Ford circle tangent to the x-axis at each point on it with rational coordinates. If p/q is between 0 and 1, the Ford circles that are tangent to C[p/q] can be described variously as
the circles C[r/s] where |ps − qr| = 1,
the circles associated with the fractions r/s that are the neighbors of p/q in some Farey sequence, or
the circles C[r/s] where r/s is the next larger or the next smaller ancestor to p/q in the Stern–Brocot tree or where p/q is the next larger or next smaller ancestor to r/s.
If C[p/q] and C[r/s] are two tangent Ford circles, then the circle through (p/q, 0) and (r/s, 0) (the x-coordinates of the centers of the Ford circles) and that is perpendicular to the x-axis (whose center is on the x-axis) also passes through the point where the two circles are tangent to one another.
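The tangency criterion is easy to verify computationally (a minimal sketch; the helper name ford is our own):

```python
from fractions import Fraction
from itertools import combinations

def ford(frac):
    """Center (x, y) and radius of the Ford circle of a reduced fraction."""
    p, q = frac.numerator, frac.denominator
    r = 1 / (2 * q * q)
    return (p / q, r), r          # center height equals the radius

# Two Ford circles touch exactly when |ps − qr| = 1, else they are disjoint.
fracs = sorted({Fraction(a, b) for b in range(1, 6) for a in range(0, b + 1)})
pairs_checked = 0
for f1, f2 in combinations(fracs, 2):
    (x1, y1), r1 = ford(f1)
    (x2, y2), r2 = ford(f2)
    d2 = (x1 - x2) ** 2 + (y1 - y2) ** 2        # squared center distance
    touching = abs(f1.numerator * f2.denominator
                   - f1.denominator * f2.numerator) == 1
    if touching:
        assert abs(d2 - (r1 + r2) ** 2) < 1e-12  # externally tangent
    else:
        assert d2 > (r1 + r2) ** 2               # strictly disjoint
    pairs_checked += 1
```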
The centers of the Ford circles constitute a discrete (and hence countable) subset of the plane, whose closure is the real axis - an uncountable set.
Ford circles can also be thought of as curves in the complex plane. The modular group of transformations of the complex plane maps Ford circles to other Ford circles.
Ford circles are a subset of the circles in the Apollonian gasket generated by the lines y = 0 and y = 1 and the circle C[0/1].
By interpreting the upper half of t
|
https://en.wikipedia.org/wiki/Missing%20square%20puzzle
|
The missing square puzzle is an optical illusion used in mathematics classes to help students reason about geometrical figures; or rather to teach them not to reason using figures, but to use only textual descriptions and the axioms of geometry. It depicts two arrangements made of similar shapes in slightly different configurations. Each apparently forms a 13×5 right-angled triangle, but one has a 1×1 hole in it.
Solution
The key to the puzzle is the fact that neither of the 13×5 "triangles" is truly a triangle, nor would either truly be 13×5 if it were, because what appears to be the hypotenuse is bent. In other words, the "hypotenuse" does not maintain a consistent slope, even though it may appear that way to the human eye.
A true 13×5 triangle cannot be created from the given component parts. The four figures (the yellow, red, blue and green shapes) total 32 units of area. The apparent triangles formed from the figures are 13 units wide and 5 units tall, so it appears that the area should be S = (13 × 5)/2 = 32.5 units. However, the blue triangle has a ratio of 5:2 (= 2.5), while the red triangle has the ratio 8:3 (≈ 2.667), so the apparent combined hypotenuse in each figure is actually bent. With the bent hypotenuse, the first figure actually occupies a combined 32 units, while the second figure occupies 33, including the "missing" square.
The amount of bending is approximately 1/28 unit (1.245364267°), which is difficult to see on the diagram of the puzzle, and was illustrated as a graphic. Note the grid point where the red and blue triangles in the lower image meet (5 squares to the right and two units up from the lower left corner of the combined figure), and compare it to the same point on the other figure; the edge is slightly under the mark in the upper image, but goes through it in the lower. Overlaying the hypotenuses from both figures results in a very thin parallelogram (represented with the four red dots) with an area of exactly one grid square (Pick's theorem gives 0 + 4/2 − 1 = 1), exactly the area of the "missing" square.
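The arithmetic of the paradox can be checked directly (a minimal sketch; the individual piece areas of 12, 5, 8 and 7 square units are read off the standard diagram):

```python
from math import atan, degrees

red, blue, piece_a, piece_b = 12, 5, 8, 7    # areas of the four pieces
assert red + blue + piece_a + piece_b == 32  # pieces total 32 square units
assert 13 * 5 / 2 == 32.5                    # a true 13×5 triangle is 32.5

# The two triangles have slightly different slopes (2/5 vs 3/8),
# so the apparent "hypotenuse" is bent by about 1.245°.
bend = degrees(atan(2 / 5) - atan(3 / 8))
assert abs(bend - 1.245364267) < 1e-6
```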
Principle
According to Martin Gardner, this particular puzzle was invented by a New York City amateur magician, Paul Curry, in 1953. However, the principle of a dissection paradox has been known since the start of the 16th century.
The integer dimensions of the parts of the puzzle (2, 3, 5, 8, 13) are successive Fibonacci numbers, which leads to the exact unit area in the thin parallelogram.
Many other geometric dissection puzzles are based on a few simple properties of the Fibonacci sequence.
Similar puzzles
Sam Loyd's chessboard paradox demonstrates two rearrangements of an 8×8 square. In the "larger" rearrangement (the 5×13 rectangle in the image to the right), the gaps between the figures have a combined unit square more area than their square gaps counterparts, creating an illusion that the figures there take up more space than those in the original square figure. In the "smaller" rearrangement (the shape below the 5×13 rect
|
https://en.wikipedia.org/wiki/Banach%E2%80%93Mazur%20game
|
In general topology, set theory and game theory, a Banach–Mazur game is a topological game played by two players, trying to pin down elements in a set (space). The concept of a Banach–Mazur game is closely related to the concept of Baire spaces. This game was the first infinite positional game of perfect information to be studied. It was introduced by Stanisław Mazur as problem 43 in the Scottish book, and Mazur's questions about it were answered by Banach.
Definition
Let Y be a non-empty topological space, X a fixed subset of Y and W a family of subsets of Y that have the following properties:
Each member of W has non-empty interior.
Each non-empty open subset of Y contains a member of W.
Players P1 and P2 alternately choose elements from W to form a nested sequence W1 ⊇ W2 ⊇ ⋯.
P1 wins if and only if X ∩ (⋂n Wn) ≠ ∅.
Otherwise, P2 wins.
This is called a general Banach–Mazur game and denoted by MB(X, Y, W).
Properties
P2 has a winning strategy if and only if X is of the first category in Y (a set is of the first category or meagre if it is the countable union of nowhere-dense sets).
If Y is a complete metric space, P1 has a winning strategy if and only if X is comeager in some non-empty open subset of Y.
If X has the Baire property in Y, then MB(X, Y, W) is determined.
The siftable and strongly-siftable spaces introduced by Choquet can be defined in terms of stationary strategies in suitable modifications of the game. Let BM(X) denote a modification of MB(X, Y, W) where X = Y, W is the family of all non-empty open sets in X, and P2 wins a play (W1, W2, ⋯) if and only if ⋂n Wn ≠ ∅.
Then X is siftable if and only if P2 has a stationary winning strategy in BM(X).
A Markov winning strategy for P2 in BM(X) can be reduced to a stationary winning strategy. Furthermore, if P2 has a winning strategy in BM(X), then P2 has a winning strategy depending only on two preceding moves. It is still an unsettled question whether such a strategy can be reduced to a winning strategy that depends only on the last two moves of P1.
X is called weakly α-favorable if P2 has a winning strategy in BM(X). Then, X is a Baire space if and only if P1 has no winning strategy in BM(X). It follows that each weakly α-favorable space is a Baire space.
Many other modifications and specializations of the basic game have been proposed: for a thorough account of these, refer to the survey by Telgársky (1987).
The most common special case arises when Y = J = [0, 1] and W consists of all closed intervals in the unit interval. Then P1 wins if and only if X ∩ (⋂n Jn) ≠ ∅ and P2 wins if and only if X ∩ (⋂n Jn) = ∅. This game is denoted by MB(X, J).
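For instance, a countable X is meager, and the second player's winning strategy in the interval game is easy to sketch: at round n, pick a closed subinterval of the current interval that dodges the n-th point of X. An illustrative simulation (not from the article, and with P1's intermediate moves omitted):

```python
from fractions import Fraction

def avoid(interval, point):
    """Return a closed subinterval of `interval` not containing `point`."""
    lo, hi = interval
    if not (lo <= point <= hi):
        return interval
    third = (hi - lo) / 3
    # Keep whichever outer third misses the point.
    return (lo, lo + third) if point > lo + third else (hi - third, hi)

X = [Fraction(k, 7) for k in range(8)]   # a countable (here finite) target set
interval = (Fraction(0), Fraction(1))
for x in X:                              # P2 dodges one point per round
    interval = avoid(interval, x)
    assert not (interval[0] <= x <= interval[1])
print(interval)                          # a closed interval missing all of X
```

Since the intervals are nested, every point dodged at some round stays outside all later intervals, so the final intersection misses X entirely.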
A simple proof: winning strategies
It is natural to ask for what sets X does P2 have a winning strategy in MB(X, Y, W). Clearly, if X is empty, P2 has a winning strategy, therefore the question can be informally rephrased as how "small" (respectively, "big") does X (respectively, the complement of X in Y) have to be to ensure that P2 has a winning strategy. The following result gives a flavor of how the proofs used to derive the properties in the previous section work:
Proposition. P2 has a winning strategy if X is countable, Y is T1, and Y has no isolated points.
Proof. Index the elements of X a
|
https://en.wikipedia.org/wiki/Claude%20Chevalley
|
Claude Chevalley (; 11 February 1909 – 28 June 1984) was a French mathematician who made important contributions to number theory, algebraic geometry, class field theory, finite group theory and the theory of algebraic groups. He was a founding member of the Bourbaki group.
Life
His father, Abel Chevalley, was a French diplomat who, jointly with his wife Marguerite Chevalley née Sabatier, wrote The Concise Oxford French Dictionary. Chevalley graduated from the École Normale Supérieure in 1929, where he studied under Émile Picard. He then spent time at the University of Hamburg, studying under Emil Artin and at the University of Marburg, studying under Helmut Hasse. In Germany, Chevalley discovered Japanese mathematics in the person of Shokichi Iyanaga. Chevalley was awarded a doctorate in 1933 from the University of Paris for a thesis on class field theory.
When World War II broke out, Chevalley was at Princeton University. After reporting to the French Embassy, he stayed in the U.S., first at Princeton and then (after 1947) at Columbia University. His American students included Leon Ehrenpreis and Gerhard Hochschild. During his time in the U.S., Chevalley became an American citizen and wrote a substantial part of his lifetime's output in English.
When Chevalley applied for a chair at the Sorbonne, the difficulties he encountered were the subject of a polemical piece by his friend and fellow Bourbakiste André Weil, titled "Science Française?" and published in the Nouvelle Revue Française. Chevalley was the "professeur B" of the piece, as confirmed in the endnote to the reprint in Weil's collected works, Oeuvres Scientifiques, tome II. Chevalley eventually did obtain a position in 1957 at the faculty of sciences of the University of Paris and after 1970 at the Université de Paris VII.
Chevalley had artistic and political interests, and was a minor member of the French non-conformists of the 1930s. The following quote by the co-editor of Chevalley's collected works attests to these interests:
"Chevalley was a member of various avant-garde groups, both in politics and in the arts... Mathematics was the most important part of his life, but he did not draw any boundary between his mathematics and the rest of his life."
Work
In his PhD thesis, Chevalley made an important contribution to the technical development of class field theory, removing a use of L-functions and replacing it by an algebraic method. At that time use of group cohomology was implicit, cloaked by the language of central simple algebras. In the introduction to André Weil's Basic Number Theory, Weil attributed the book's adoption of that path to an unpublished manuscript by Chevalley.
Around 1950, Chevalley wrote a three-volume treatment of Lie groups. A few years later, he published the work for which he is best remembered, his investigation into what are now called Chevalley groups. Chevalley groups make up 9 of the 18 families of finite simple groups.
Chevalley's accurate dis
|
https://en.wikipedia.org/wiki/Pontryagin%20duality
|
In mathematics, Pontryagin duality is a duality between locally compact abelian groups that allows generalizing the Fourier transform to all such groups, which include the circle group (the multiplicative group of complex numbers of modulus one), the finite abelian groups (with the discrete topology), the additive group of the integers (also with the discrete topology), the real numbers, and every finite-dimensional vector space over the reals or a p-adic field.
The Pontryagin dual of a locally compact abelian group is the locally compact abelian topological group formed by the continuous group homomorphisms from the group to the circle group with the operation of pointwise multiplication and the topology of uniform convergence on compact sets. The Pontryagin duality theorem establishes Pontryagin duality by stating that any locally compact abelian group is naturally isomorphic with its bidual (the dual of its dual). The Fourier inversion theorem is a special case of this theorem.
The subject is named after Lev Pontryagin who laid down the foundations for the theory of locally compact abelian groups and their duality during his early mathematical works in 1934. Pontryagin's treatment relied on the groups being second-countable and either compact or discrete. This was improved to cover the general locally compact abelian groups by Egbert van Kampen in 1935 and André Weil in 1940.
Introduction
Pontryagin duality places in a unified context a number of observations about functions on the real line or on finite abelian groups:
Suitably regular complex-valued periodic functions on the real line have Fourier series and these functions can be recovered from their Fourier series;
Suitably regular complex-valued functions on the real line have Fourier transforms that are also functions on the real line and, just as for periodic functions, these functions can be recovered from their Fourier transforms; and
Complex-valued functions on a finite abelian group have discrete Fourier transforms, which are functions on the dual group, which is a (non-canonically) isomorphic group. Moreover, any function on a finite abelian group can be recovered from its discrete Fourier transform.
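These observations can be illustrated concretely on the cyclic group Z/n, whose dual group is again Z/n: each k defines a character a ↦ exp(2πika/n). A minimal sketch of the discrete Fourier transform and its inversion (sign conventions vary between authors; this is one common choice, not a production FFT):

```python
import cmath

# DFT of a function f : Z/n -> C, as inner products with the characters.
def dft(f):
    n = len(f)
    chi = lambda k, a: cmath.exp(-2j * cmath.pi * k * a / n)
    return [sum(f[a] * chi(k, a) for a in range(n)) for k in range(n)]

# Fourier inversion: recover f from its transform.
def inverse_dft(F):
    n = len(F)
    chi = lambda k, a: cmath.exp(2j * cmath.pi * k * a / n)
    return [sum(F[k] * chi(k, a) for k in range(n)) / n for a in range(n)]

f = [1.0, 2.0, 0.0, -1.0]
recovered = inverse_dft(dft(f))
print(all(abs(x - y) < 1e-12 for x, y in zip(f, recovered)))  # True
```

The round trip demonstrates, for this finite group, the inversion theorem that Pontryagin duality generalizes.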
The theory, introduced by Lev Pontryagin and combined with the Haar measure introduced by John von Neumann, André Weil and others depends on the theory of the dual group of a locally compact abelian group.
It is analogous to the dual vector space of a vector space: a finite-dimensional vector space V and its dual vector space V* are not naturally isomorphic, but the endomorphism algebra (matrix algebra) of one is isomorphic to the opposite of the endomorphism algebra of the other, End(V) ≅ End(V*)op, via the transpose. Similarly, a group G and its dual group Ĝ are not in general isomorphic, but their endomorphism rings are opposite to each other: End(G) ≅ End(Ĝ)op. More categorically, this is not just an isomorphism of endomorphism algebras, but a contravariant equivalence of categories.
Definition
A topolog
|
https://en.wikipedia.org/wiki/Typed%20lambda%20calculus
|
A typed lambda calculus is a typed formalism that uses the lambda-symbol (λ) to denote anonymous function abstraction. In this context, types are usually objects of a syntactic nature that are assigned to lambda terms; the exact nature of a type depends on the calculus considered (see kinds below). From a certain point of view, typed lambda calculi can be seen as refinements of the untyped lambda calculus, but from another point of view, they can also be considered the more fundamental theory and untyped lambda calculus a special case with only one type.
Typed lambda calculi are foundational programming languages and are the base of typed functional programming languages such as ML and Haskell and, more indirectly, typed imperative programming languages. Typed lambda calculi play an important role in the design of type systems for programming languages; here, typability usually captures desirable properties of the program (e.g., the program will not cause a memory access violation).
Typed lambda calculi are closely related to mathematical logic and proof theory via the Curry–Howard isomorphism and they can be considered as the internal language of certain classes of categories. For example, the simply typed lambda calculus is the language of Cartesian closed categories (CCCs).
Kinds of typed lambda calculi
Various typed lambda calculi have been studied. The simply typed lambda calculus has only one type constructor, the arrow (→), and its only types are basic types and function types A → B. System T extends the simply typed lambda calculus with a type of natural numbers and higher order primitive recursion; in this system all functions provably recursive in Peano arithmetic are definable. System F allows polymorphism by using universal quantification over all types; from a logical perspective it can describe all functions that are provably total in second-order logic. Lambda calculi with dependent types are the base of intuitionistic type theory, the calculus of constructions and the logical framework (LF), a pure lambda calculus with dependent types. Based on work by Berardi on pure type systems, Henk Barendregt proposed the Lambda cube to systematize the relations of pure typed lambda calculi (including simply typed lambda calculus, System F, LF and the calculus of constructions).
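A minimal sketch of type checking for the simply typed lambda calculus (the term and type encodings are illustrative choices, not standard notation): types are the base type "o" or arrows ("->", A, B); terms are variable names, ("lam", x, A, body) abstractions, and ("app", f, a) applications.

```python
# Type checker for the simply typed lambda calculus over one base type "o".
def typecheck(term, env=None):
    env = env or {}
    if isinstance(term, str):                 # variable: look up its type
        return env[term]
    tag = term[0]
    if tag == "lam":                          # \x:A. body  has type  A -> B
        _, x, a, body = term
        b = typecheck(body, {**env, x: a})
        return ("->", a, b)
    if tag == "app":                          # f a : B  when f : A -> B, a : A
        _, f, arg = term
        tf, ta = typecheck(f, env), typecheck(arg, env)
        if tf[0] != "->" or tf[1] != ta:
            raise TypeError("ill-typed application")
        return tf[2]
    raise ValueError(tag)

identity = ("lam", "x", "o", "x")
const = ("lam", "x", "o", ("lam", "y", "o", "x"))
print(typecheck(identity))   # ('->', 'o', 'o')
print(typecheck(const))      # ('->', 'o', ('->', 'o', 'o'))
```

Strong normalization of the calculus guarantees that every term this checker accepts evaluates to a normal form.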
Some typed lambda calculi introduce a notion of subtyping, i.e. if A is a subtype of B, then all terms of type A also have type B. Typed lambda calculi with subtyping include the simply typed lambda calculus with conjunctive types and System F<:.
All the systems mentioned so far, with the exception of the untyped lambda calculus, are strongly normalizing: all computations terminate. Therefore, they cannot describe all Turing-computable functions. As another consequence they are consistent as a logic, i.e. there are uninhabited types. There exist, however, typed lambda calculi that are not strongly normalizing. For example the dependently typed lambda calculus with a type of all
|
https://en.wikipedia.org/wiki/Mathematical%20puzzle
|
Mathematical puzzles make up an integral part of recreational mathematics. They have specific rules, but they do not usually involve competition between two or more players. Instead, to solve such a puzzle, the solver must find a solution that satisfies the given conditions. Mathematical puzzles require mathematics to solve them. Logic puzzles are a common type of mathematical puzzle.
Conway's Game of Life and fractals, as two examples, may also be considered mathematical puzzles even though the solver interacts with them only at the beginning by providing a set of initial conditions. After these conditions are set, the rules of the puzzle determine all subsequent changes and moves. Many of the puzzles are well known because they were discussed by Martin Gardner in his "Mathematical Games" column in Scientific American. Mathematical puzzles are sometimes used to motivate students in teaching elementary school math problem solving techniques. Creative thinking, or "thinking outside the box", often helps to find the solution.
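The determinism is easy to see in code: one Game of Life generation is a pure function of the previous one. A minimal sketch of a single update step (the set-of-live-cells representation is an implementation choice):

```python
from collections import Counter

# One Game of Life step: a cell is alive next generation iff it has
# exactly 3 live neighbours, or 2 live neighbours and is alive now.
def step(live):
    counts = Counter((x + dx, y + dy) for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

blinker = {(0, 0), (1, 0), (2, 0)}     # horizontal row of three
print(step(blinker))                    # {(1, -1), (1, 0), (1, 1)}: vertical
print(step(step(blinker)) == blinker)   # True: the blinker has period 2
```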
List of mathematical puzzles
This list is not complete.
Numbers, arithmetic, and algebra
Cross-figures or cross number puzzles
Dyson numbers
Four fours
KenKen
Water pouring puzzle
The monkey and the coconuts
Pirate loot problem
Verbal arithmetics
24 Game
Combinatorial
Cryptograms
Fifteen Puzzle
Kakuro
Rubik's Cube and other sequential movement puzzles
Str8ts a number puzzle based on sequences
Sudoku
Sujiko
Think-a-Dot
Tower of Hanoi
Bridges Game
Analytical or differential
Ant on a rubber rope
See also: Zeno's paradoxes
Probability
Monty Hall problem
Tiling, packing, and dissection
Bedlam cube
Conway puzzle
Mutilated chessboard problem
Packing problem
Pentominoes tiling
Slothouber–Graatsma puzzle
Soma cube
T puzzle
Tangram
Involves a board
Conway's Game of Life
Mutilated chessboard problem
Peg solitaire
Sudoku
Nine dots problem
Chessboard tasks
Eight queens puzzle
Knight's Tour
No-three-in-line problem
Topology, knots, graph theory
The fields of knot theory and topology, especially their non-intuitive conclusions, are often seen as a part of recreational mathematics.
Disentanglement puzzles
Seven Bridges of Königsberg
Water, gas, and electricity
Slitherlink
Mechanical
Rubik's Cube
Think-a-Dot
Matchstick puzzle
0-player puzzles
Conway's Game of Life
Flexagon
Polyominoes
References
External links
Historical Math Problems/Puzzles at Mathematical Association of America Convergence
|
https://en.wikipedia.org/wiki/Frame%20bundle
|
In mathematics, a frame bundle is a principal fiber bundle F(E) associated to any vector bundle E. The fiber of F(E) over a point x is the set of all ordered bases, or frames, for Ex. The general linear group acts naturally on F(E) via a change of basis, giving the frame bundle the structure of a principal GL(k, R)-bundle (where k is the rank of E).
The frame bundle of a smooth manifold is the one associated to its tangent bundle. For this reason it is sometimes called the tangent frame bundle.
Definition and construction
Let E → X be a real vector bundle of rank k over a topological space X. A frame at a point x ∈ X is an ordered basis for the vector space Ex. Equivalently, a frame can be viewed as a linear isomorphism
p : Rk → Ex.
The set of all frames at x, denoted Fx, has a natural right action by the general linear group GL(k, R) of invertible k × k matrices: a group element g ∈ GL(k, R) acts on the frame p via composition to give a new frame
p ∘ g : Rk → Ex.
This action of GL(k, R) on Fx is both free and transitive (this follows from the standard linear algebra result that there is a unique invertible linear transformation sending one basis onto another). As a topological space, Fx is homeomorphic to GL(k, R) although it lacks a group structure, since there is no "preferred frame". The space Fx is said to be a GL(k, R)-torsor.
The frame bundle of E, denoted by F(E) or FGL(E), is the disjoint union of all the Fx:
F(E) = ⊔x∈X Fx.
Each point in F(E) is a pair (x, p) where x is a point in X and p is a frame at x. There is a natural projection π : F(E) → X which sends (x, p) to x. The group GL(k, R) acts on F(E) on the right as above. This action is clearly free and the orbits are just the fibers of π.
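The free and transitive action can be checked concretely for rank k = 2, where a frame over a point amounts to an invertible 2 × 2 matrix whose columns form the basis. This is a toy sketch, not tied to any particular bundle:

```python
# Frames over a point as invertible 2x2 matrices; GL(2, R) acts on the
# right by matrix multiplication.  For any frames p, q there is exactly
# one g with p.g = q, namely g = p^{-1} q (free and transitive action).
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(a):
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    assert det != 0, "a frame must be an invertible matrix"
    return [[a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det, a[0][0] / det]]

p = [[1, 2], [0, 1]]        # one frame
q = [[3, 1], [1, 1]]        # another frame
g = matmul(inv(p), q)       # the unique group element carrying p to q
print(matmul(p, g) == q)    # True
```

Uniqueness of g is exactly why Fx is a GL(k, R)-torsor: it looks like the group itself once any single frame is chosen as a reference.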
The frame bundle F(E) can be given a natural topology and bundle structure determined by that of E. Let (Ui, φi) be a local trivialization of E. Then for each x ∈ Ui one has a linear isomorphism φi,x : Ex → Rk. This data determines a bijection
ψi : π−1(Ui) → Ui × GL(k, R)
given by
ψi(x, p) = (x, φi,x ∘ p).
With these bijections, each π−1(Ui) can be given the topology of Ui × GL(k, R). The topology on F(E) is the final topology coinduced by the inclusion maps π−1(Ui) → F(E).
With all of the above data the frame bundle F(E) becomes a principal fiber bundle over X with structure group GL(k, R) and local trivializations ({Ui}, {ψi}). One can check that the transition functions of F(E) are the same as those of E.
The above all works in the smooth category as well: if E is a smooth vector bundle over a smooth manifold M then the frame bundle of E can be given the structure of a smooth principal bundle over M.
Associated vector bundles
A vector bundle E and its frame bundle F(E) are associated bundles. Each one determines the other. The frame bundle F(E) can be constructed from E as above, or more abstractly using the fiber bundle construction theorem. With the latter method, F(E) is the fiber bundle with same base, structure group, trivializing neighborhoods, and transition functions as E but with abstract fiber GL(k, R), where the action o
|
https://en.wikipedia.org/wiki/Applied%20probability
|
Applied probability is the application of probability theory to statistical problems and other scientific and engineering domains.
Scope
Much research involving probability is done under the auspices of applied probability. However, while such research is motivated (to some degree) by applied problems, it is usually the mathematical aspects of the problems that are of most interest to researchers (as is typical of applied mathematics in general).
Applied probabilists are particularly concerned with the application of stochastic processes, and probability more generally, to the natural, applied and social sciences, including biology, physics (including astronomy), chemistry, medicine, computer science and information technology, and economics.
Another area of interest is in engineering: particularly in areas of uncertainty, risk management, probabilistic design, and Quality assurance.
History
Having initially been defined at a symposium of the American Mathematical Society in the later 1950s, the term "applied probability" was popularized by Maurice Bartlett through the name of a Methuen monograph series he edited, Applied Probability and Statistics. The area did not have an established outlet until 1964, when the Journal of Applied Probability came into existence through the efforts of Joe Gani.
See also
Areas of application:
Ruin theory
Statistical physics
Stoichiometry and modelling chemical reactions
Ecology, particularly population modelling
Evolutionary biology
Optimization in computer science
Telecommunications
Options pricing in economics
Ewens's sampling formula in population genetics
Operations research
Gaming mathematics
Stochastic processes:
Markov chain
Poisson process
Brownian motion and other diffusion processes
Queueing theory
Renewal theory
Additional information and resources
Applied Probability Trust
INFORMS Institute for Operations Research and the Management Sciences
References
Further reading
Baeza-Yates, R. (2005) Recent advances in applied probability, Springer.
Blake, I.F. (1981) Introduction to Applied Probability, Wiley.
External links
The Applied Probability Trust.
|
https://en.wikipedia.org/wiki/Indeterminate%20form
|
In calculus and other branches of mathematical analysis, when the limit of the sum, difference, product, quotient or power of two functions is taken, it may often be possible to simply add, subtract, multiply, divide or exponentiate the corresponding limits of these two functions respectively. However, there are occasions where it is unclear what the sum, difference, product, quotient, or power of these two limits ought to be. For example, it is unclear what the following expressions ought to evaluate to:
0/0,   ∞/∞,   0 × ∞,   ∞ − ∞,   0^0,   1^∞,   ∞^0.
These seven expressions are known as indeterminate forms. More specifically, such expressions are obtained by naively applying the algebraic limit theorem to evaluate the limit of the corresponding arithmetic operation of two functions, yet there are examples of pairs of functions that after being operated on converge to 0, converge to another finite value, diverge to infinity or just diverge. This inability to decide what the limit ought to be explains why these forms are regarded as indeterminate. A limit confirmed to be infinity is not indeterminate since it has been determined to have a specific value (infinity). The term was originally introduced by Cauchy's student Moigno in the middle of the 19th century.
The most common example of an indeterminate form is the quotient of two functions each of which converges to zero. This indeterminate form is denoted by 0/0. For example, as x approaches 0, the ratios x/x³, x/x, and x²/x go to ∞, 1, and 0 respectively. In each case, if the limits of the numerator and denominator are substituted, the resulting expression is 0/0, which is indeterminate. In this sense, 0/0 can take on the values 0, 1, or ∞, by appropriate choices of functions to put in the numerator and denominator. A pair of functions for which the limit is any particular given value may in fact be found. Even more surprising, perhaps, the quotient of the two functions may in fact diverge, and not merely diverge to infinity. For example, the ratio of x sin(1/x) to x is sin(1/x), which oscillates as x approaches 0.
So the fact that two functions f(x) and g(x) converge to 0 as x approaches some limit point c is insufficient to determine the limit of f(x)/g(x).
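This can be seen numerically: each pair of functions below presents the form 0/0 at 0, yet the sampled ratios behave completely differently near 0 (a rough numeric sketch, not a limit computation):

```python
import math

# Sample f(x)/g(x) for a small x; every pair here is "0/0" at x = 0.
def ratio_near_zero(f, g, x=1e-8):
    return f(x) / g(x)

print(ratio_near_zero(math.sin, lambda x: x))          # ~1.0
print(ratio_near_zero(lambda x: x ** 2, lambda x: x))  # ~1e-8: tends to 0
print(ratio_near_zero(lambda x: x, lambda x: x ** 2))  # ~1e8: unbounded
# x*sin(1/x) and x both tend to 0, but their ratio sin(1/x) oscillates:
print(ratio_near_zero(lambda x: x * math.sin(1 / x), lambda x: x))
```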
An expression that arises by ways other than applying the algebraic limit theorem may have the same form as an indeterminate form. However, it is not appropriate to call an expression an "indeterminate form" if it arises outside the context of determining limits.
For example, the expression 0/0 that arises from substituting 0 for x in x/x is not an indeterminate form, since this expression is not made in the determination of a limit (it is in fact undefined, as division by zero).
Another example is the expression 0^0. Whether this expression is left undefined, or is defined to equal 1, depends on the field of application and may vary between authors. For more, see the article Zero to the power of zero. Note that 0^∞ and other expressions involving infinity are not indeterminate forms.
Some examples and non-examples
Indeterminate form 0/0
The indeterminate form is particularly common in calculus, because it oft
|
https://en.wikipedia.org/wiki/Knot%20%28mathematics%29
|
In mathematics, a knot is an embedding of the circle (S1) into three-dimensional Euclidean space, R3 (also known as E3). Often two knots are considered equivalent if they are ambient isotopic, that is, if there exists a continuous deformation of R3 which takes one knot to the other.
A crucial difference between the standard mathematical and conventional notions of a knot is that mathematical knots are closed — there are no ends to tie or untie on a mathematical knot. Physical properties such as friction and thickness also do not apply, although there are mathematical definitions of a knot that take such properties into account. The term knot is also applied to embeddings of Sj in Sn, especially in the case j = n − 2. The branch of mathematics that studies knots is known as knot theory and has many relations to graph theory.
Formal definition
A knot is an embedding of the circle (S1) into three-dimensional Euclidean space (R3), or the 3-sphere (S3), since the 3-sphere is compact. Two knots are defined to be equivalent if there is an ambient isotopy between them.
Projection
A knot in R3 (or alternatively in the 3-sphere S3) can be projected onto a plane R2 (respectively a sphere S2). This projection is almost always regular, meaning that it is injective everywhere, except at a finite number of crossing points, which are the projections of only two points of the knot, and these points are not collinear. In this case, by choosing a projection side, one can completely encode the isotopy class of the knot by its regular projection by recording a simple over/under information at these crossings. In graph theory terms, a regular projection of a knot, or knot diagram is thus a quadrivalent planar graph with over/under-decorated vertices. The local modifications of this graph which allow one to go from one diagram to any other diagram of the same knot (up to ambient isotopy of the plane) are called Reidemeister moves.
Types of knots
The simplest knot, called the unknot or trivial knot, is a round circle embedded in R3. In the ordinary sense of the word, the unknot is not "knotted" at all. The simplest nontrivial knots are the trefoil knot (3₁ in the table), the figure-eight knot (4₁) and the cinquefoil knot (5₁).
Several knots, linked or tangled together, are called links. Knots are links with a single component.
Tame vs. wild knots
A polygonal knot is a knot whose image in R3 is the union of a finite set of line segments. A tame knot is any knot equivalent to a polygonal knot. Knots which are not tame are called wild, and can have pathological behavior. In knot theory and 3-manifold theory, often the adjective "tame" is omitted. Smooth knots, for example, are always tame.
Framed knot
A framed knot is the extension of a tame knot to an embedding of the solid torus D2 × S1 in S3.
The framing of the knot is the linking number of the image of the ribbon with the knot. A framed knot can be seen as the embedded ribbon and the framing is the (signed) number of twists. This definition generalizes to a
|
https://en.wikipedia.org/wiki/Plus%E2%80%93minus%20sign
|
The plus–minus sign, ±, is a mathematical symbol with multiple meanings:
In mathematics, it generally indicates a choice of exactly two possible values, one of which is obtained through addition and the other through subtraction.
In experimental sciences, the sign commonly indicates the confidence interval or uncertainty bounding a range of possible errors in a measurement, often the standard deviation or standard error. The sign may also represent an inclusive range of values that a reading might have.
In medicine, it means "with or without".
In engineering, the sign indicates the tolerance, which is the range of values that are considered to be acceptable or safe, or which comply with some standard or with a contract.
In botany, it is used in morphological descriptions to notate "more or less".
In chemistry, the sign is used to indicate a racemic mixture.
In chess, the sign indicates a clear advantage for the white player; the complementary minus–plus sign, ∓, indicates the same advantage for the black player.
In electronics, this sign may indicate a dual voltage power supply, such as ±5 volts, meaning +5 volts and −5 volts, when used with audio circuits and operational amplifiers.
In linguistics, it may indicate a distinctive feature, such as [±voiced].
In philosophy, the symbol ± or ∓ can be used to indicate a yin-yang concept. Although yin (−) and yang (+) are in opposition, they coordinate and help each other in a unity. Yin and yang are interdependent and coexist as two sides of the same concept.
History
A version of the sign, including also the French word ou ("or"), was used in its mathematical meaning by Albert Girard in 1626, and the sign in its modern form was used as early as 1631, in William Oughtred's Clavis Mathematicae.
Usage
In mathematics
In mathematical formulas, the symbol ± may be used to indicate a symbol that may be replaced by either the plus or minus sign, + or −, allowing the formula to represent two values or two equations.
If x² = 9, one may give the solution as x = ±3. This indicates that the equation has two solutions: x = 3 and x = −3. A common use of this notation is found in the quadratic formula
x = (−b ± √(b² − 4ac)) / (2a),
which describes the two solutions to the quadratic equation ax² + bx + c = 0.
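Expanding the ± of the quadratic formula into its two branches is mechanical:

```python
import math

# The +- in the quadratic formula expands to two separate computations.
def quadratic_roots(a, b, c):
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("no real roots")
    sq = math.sqrt(disc)
    return ((-b + sq) / (2 * a), (-b - sq) / (2 * a))

print(quadratic_roots(1, 0, -9))   # (3.0, -3.0): the solutions of x**2 = 9
print(quadratic_roots(1, -5, 6))   # (3.0, 2.0):  x**2 - 5x + 6 = 0
```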
Similarly, the trigonometric identity
sin(A ± B) = sin(A) cos(B) ± cos(A) sin(B)
can be interpreted as a shorthand for two equations: one with + on both sides of the equation, and one with − on both sides.
The minus–plus sign, ∓, is generally used in conjunction with the ± sign, in such expressions as x ± y ∓ z, which can be interpreted as meaning x + y − z or x − y + z (but not x + y + z or x − y − z). The ∓ always has the opposite sign to ±.
The above expression can be rewritten as x ± (y − z) to avoid use of ∓, but cases such as the trigonometric identity are most neatly written using the "∓" sign:
cos(A ± B) = cos(A) cos(B) ∓ sin(A) sin(B),
which represents the two equations:
cos(A + B) = cos(A) cos(B) − sin(A) sin(B)
cos(A − B) = cos(A) cos(B) + sin(A) sin(B)
Another example is the factorization of the sum and difference of cubes into conjugate factors,
x³ ± y³ = (x ± y)(x² ∓ xy + y²),
which represents the two equations:
x³ + y³ = (x + y)(x² − xy + y²)
x³ − y³ = (x − y)(x² + xy + y²)
A related usage is found in this presentation of the formula for the Taylor series of the sine function:
sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + ⋯ ± (1/(2n+1)!) x^(2n+1) + ⋯
Here, the plus-or-minus sign indicates that the term
|
https://en.wikipedia.org/wiki/Antipodal%20point
|
In mathematics, two points of a sphere (or n-sphere, including a circle) are called antipodal or diametrically opposite if they are the intersections of the sphere with a diameter, a straight line passing through its center.
Given any point on a sphere, its antipodal point is the unique point at greatest distance, whether measured intrinsically (great-circle distance on the surface of the sphere) or extrinsically (chordal distance through the sphere's interior). Every great circle on a sphere passing through a point also passes through its antipodal point, and there are infinitely many great circles passing through a pair of antipodal points (unlike the situation for any non-antipodal pair of points, which have a unique great circle passing through both). Many results in spherical geometry depend on choosing non-antipodal points, and degenerate if antipodal points are allowed; for example, a spherical triangle degenerates to an underspecified lune if two of the vertices are antipodal.
The point antipodal to a given point is called its antipodes, from the Greek ἀντίποδες (antípodes) meaning "opposite feet". Sometimes the s is dropped, and this is rendered antipode, a back-formation.
Higher mathematics
The concept of antipodal points is generalized to spheres of any dimension: two points on the sphere are antipodal if they are opposite through the centre. Each line through the centre intersects the sphere in two points, one for each ray emanating from the centre, and these two points are antipodal.
The Borsuk–Ulam theorem is a result from algebraic topology dealing with such pairs of points. It says that any continuous function from Sn to Rn maps some pair of antipodal points in Sn to the same point in Rn. Here, Sn denotes the n-dimensional sphere and Rn is n-dimensional real coordinate space.
The antipodal map A : Sn → Sn sends every point on the sphere to its antipodal point. If points on the sphere are represented as displacement vectors from the sphere's center in Euclidean (n + 1)-space, then two antipodal points are represented by additive inverses x and −x, and the antipodal map can be defined as A(x) = −x. The antipodal map preserves orientation (is homotopic to the identity map) when n is odd, and reverses it when n is even. Its degree is (−1)^(n+1).
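A minimal sketch of the antipodal map as coordinate negation:

```python
# The antipodal map A(x) = -x, with points of S^n stored as coordinate
# lists in R^(n+1).
def antipodal(x):
    return [-c for c in x]

p = [0.6, 0.8, 0.0]          # a point of S^2 in R^3
q = antipodal(p)             # its antipode
print(antipodal(q) == p)     # True: A is an involution
# On S^2 (n = 2, even) the map reverses orientation: as a linear map of
# R^3 it is -I, whose determinant is (-1)**3 = -1.
print((-1) ** 3)             # -1
```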
If antipodal points are identified (considered equivalent), the sphere becomes a model of real projective space.
See also
Cut locus
References
External links
Spherical geometry
Point (geometry)
|
https://en.wikipedia.org/wiki/Section%20%28fiber%20bundle%29
|
In the mathematical field of topology, a section (or cross section) of a fiber bundle E is a continuous right inverse of the projection function π. In other words, if E is a fiber bundle over a base space B,
π : E → B,
then a section of that fiber bundle is a continuous map,
σ : B → E,
such that
π(σ(x)) = x
for all x ∈ B.
A section is an abstract characterization of what it means to be a graph. The graph of a function g : B → Y can be identified with a function taking its values in the Cartesian product E = B × Y of B and Y:
σ(x) = (x, g(x)).
Let π : E → B be the projection onto the first factor: π(x, y) = x. Then a graph is any function σ for which π(σ(x)) = x.
The language of fibre bundles allows this notion of a section to be generalized to the case when E is not necessarily a Cartesian product. If π : E → B is a fibre bundle, then a section is a choice of point σ(x) in each of the fibres. The condition π(σ(x)) = x simply means that the section at a point x must lie over x. (See image.)
For example, when E is a vector bundle a section of E is an element of the vector space E_x lying over each point x ∈ B. In particular, a vector field on a smooth manifold M is a choice of tangent vector at each point of M: this is a section of the tangent bundle of M. Likewise, a 1-form on M is a section of the cotangent bundle.
Sections, particularly of principal bundles and vector bundles, are also very important tools in differential geometry. In this setting, the base space is a smooth manifold M, and E is assumed to be a smooth fiber bundle over M (i.e., E is a smooth manifold and π : E → M is a smooth map). In this case, one considers the space of smooth sections of E over an open set U, denoted C^∞(U, E). It is also useful in geometric analysis to consider spaces of sections with intermediate regularity (e.g., C^k sections, or sections with regularity in the sense of Hölder conditions or Sobolev spaces).
Local and global sections
Fiber bundles do not in general have such global sections (consider, for example, the fiber bundle over S^1 with fiber F = R ∖ {0} obtained by taking the Möbius bundle and removing the zero section), so it is also useful to define sections only locally. A local section of a fiber bundle is a continuous map s : U → E where U is an open set in B and π(s(x)) = x for all x in U. If (U, φ) is a local trivialization of E, where φ is a homeomorphism from π^(−1)(U) to U × F (where F is the fiber), then local sections always exist over U in bijective correspondence with continuous maps from U to F. The (local) sections form a sheaf over B called the sheaf of sections of E.
The space of continuous sections of a fiber bundle E over U is sometimes denoted C(U, E), while the space of global sections of E is often denoted Γ(E) or Γ(B, E).
Extending to global sections
Sections are studied in homotopy theory and algebraic topology, where one of the main goals is to account for the existence or non-existence of global sections. An obstruction denies the existence of global sections since the space is too "twisted". More precisely, obstructions "obstruct" the possibility of extending a local section to a global section due to the space's "twistedness". Obstructions are indicated by pa
|
https://en.wikipedia.org/wiki/Probabilist
|
Probabilist may refer to:
A follower of probabilism (in theology or philosophy)
A mathematician who studies and applies probability theory
List of mathematical probabilists
|
https://en.wikipedia.org/wiki/Lebesgue%E2%80%93Stieltjes%20integration
|
In measure-theoretic analysis and related branches of mathematics, Lebesgue–Stieltjes integration generalizes both Riemann–Stieltjes and Lebesgue integration, preserving the many advantages of the former in a more general measure-theoretic framework. The Lebesgue–Stieltjes integral is the ordinary Lebesgue integral with respect to a measure known as the Lebesgue–Stieltjes measure, which may be associated to any function of bounded variation on the real line. The Lebesgue–Stieltjes measure is a regular Borel measure, and conversely every regular Borel measure on the real line is of this kind.
Lebesgue–Stieltjes integrals, named for Henri Leon Lebesgue and Thomas Joannes Stieltjes, are also known as Lebesgue–Radon integrals or just Radon integrals, after Johann Radon, to whom much of the theory is due. They find common application in probability and stochastic processes, and in certain branches of analysis including potential theory.
Definition
The Lebesgue–Stieltjes integral
∫_a^b f(x) dg(x)
is defined when f : [a, b] → R is Borel-measurable and bounded and g : [a, b] → R is of bounded variation in [a, b] and right-continuous, or when f is non-negative and g is monotone and right-continuous. To start, assume that f is non-negative and g is monotone non-decreasing and right-continuous. Define w((s, t]) := g(t) − g(s) and w({a}) := 0 (alternatively, the construction works for g left-continuous, w([s, t)) := g(t) − g(s) and w({b}) := 0).
By Carathéodory's extension theorem, there is a unique Borel measure μ_g on [a, b] which agrees with w on every interval I. The measure μ_g arises from an outer measure (in fact, a metric outer measure) given by
μ_g(E) = inf { Σ_i μ_g(I_i) : E ⊆ ∪_i I_i },
the infimum taken over all coverings of E by countably many semiopen intervals. This measure is sometimes called the Lebesgue–Stieltjes measure associated with g.
The Lebesgue–Stieltjes integral
∫_a^b f(x) dg(x)
is defined as the Lebesgue integral of f with respect to the measure μ_g in the usual way. If g is non-increasing, then define
∫_a^b f(x) dg(x) := −∫_a^b f(x) d(−g)(x),
the latter integral being defined by the preceding construction.
If g is of bounded variation and f is bounded, then it is possible to write
dg(x) = dg_1(x) − dg_2(x)
where g_1(x) := V_a^x g is the total variation of g in the interval [a, x], and g_2(x) := g_1(x) − g(x). Both g_1 and g_2 are monotone non-decreasing. Now the Lebesgue–Stieltjes integral with respect to g is defined by
∫_a^b f(x) dg(x) = ∫_a^b f(x) dg_1(x) − ∫_a^b f(x) dg_2(x),
where the latter two integrals are well-defined by the preceding construction.
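As a numerical sanity check of the construction, a Riemann–Stieltjes sum approximates ∫ f dg when the integrator g is smooth and non-decreasing; for a general bounded-variation g one would first split g = g_1 − g_2 as above. A sketch (the function name stieltjes_sum is illustrative):

```python
# Approximate the Lebesgue-Stieltjes integral of f with respect to a smooth
# non-decreasing integrator g by the Riemann-Stieltjes sum
#   sum_i f(x_i) * (g(x_{i+1}) - g(x_i)).

def stieltjes_sum(f, g, a, b, n=100000):
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + i * h
        total += f(x) * (g(x + h) - g(x))
    return total

f = lambda x: x
g = lambda x: x * x            # non-decreasing on [0, 1], dg = 2x dx

approx = stieltjes_sum(f, g, 0.0, 1.0)
exact = 2.0 / 3.0              # integral of x * 2x dx over [0, 1]
assert abs(approx - exact) < 1e-4
```

When g is an absolutely continuous function, as here, the Lebesgue–Stieltjes integral reduces to the ordinary Lebesgue integral of f(x) g′(x).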
Daniell integral
An alternative approach is to define the Lebesgue–Stieltjes integral as the Daniell integral that extends the usual Riemann–Stieltjes integral. Let g be a non-decreasing right-continuous function on [a, b], and define I(f) to be the Riemann–Stieltjes integral
I(f) = ∫_a^b f(x) dg(x)
for all continuous functions f. The functional I defines a Radon measure on [a, b]. This functional can then be extended to the class of all non-negative functions h by the standard Daniell construction: first to lower semicontinuous functions, by taking suprema of I(f) over continuous f below them, and then to arbitrary non-negative functions via the induced upper and lower integrals. For Borel measurable functions the upper and lower extensions agree, and either one then defines the Lebesgue–Stieltjes integral of h. The outer measure μ_g is defined via
μ_g(A) = I(χ_A),
where χ_A is the indicator function of A.
Integrators of bounded variation are handled as above by decomposing into positive and negative variations.
Example
|
https://en.wikipedia.org/wiki/Sphere%20packing
|
In geometry, a sphere packing is an arrangement of non-overlapping spheres within a containing space. The spheres considered are usually all of identical size, and the space is usually three-dimensional Euclidean space. However, sphere packing problems can be generalised to consider unequal spheres, spaces of other dimensions (where the problem becomes circle packing in two dimensions, or hypersphere packing in higher dimensions) or to non-Euclidean spaces such as hyperbolic space.
A typical sphere packing problem is to find an arrangement in which the spheres fill as much of the space as possible. The proportion of space filled by the spheres is called the packing density of the arrangement. As the local density of a packing in an infinite space can vary depending on the volume over which it is measured, the problem is usually to maximise the average or asymptotic density, measured over a large enough volume.
For equal spheres in three dimensions, the densest packing uses approximately 74% of the volume. A random packing of equal spheres generally has a density around 63.5%.
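The roughly 74% figure is exactly π/(3√2), which follows from the geometry of the face-centred cubic unit cell; a short check:

```python
# Packing density of the face-centred cubic (FCC) lattice. In a cubic unit
# cell of side a there are 4 spheres (8 corners counted 1/8 each, 6 faces
# counted 1/2 each), and spheres touch along a face diagonal, so 4r = a*sqrt(2).
import math

a = 1.0                                        # unit cell edge
r = a * math.sqrt(2) / 4                       # sphere radius
spheres_per_cell = 8 * (1 / 8) + 6 * (1 / 2)   # = 4
density = spheres_per_cell * (4 / 3) * math.pi * r ** 3 / a ** 3

assert abs(density - math.pi / (3 * math.sqrt(2))) < 1e-12
assert 0.7404 < density < 0.7405               # about 74%
```

The same value π/(3√2) ≈ 0.7405 is attained by every close-packed stacking (FCC, HCP, and their mixtures), which is the content of the Kepler conjecture.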
Classification and terminology
A lattice arrangement (commonly called a regular arrangement) is one in which the centers of the spheres form a very symmetric pattern which needs only n vectors to be uniquely defined (in n-dimensional Euclidean space). Lattice arrangements are periodic. Arrangements in which the spheres do not form a lattice (often referred to as irregular) can still be periodic, but also aperiodic (properly speaking non-periodic) or random. Because of their high degree of symmetry, lattice packings are easier to classify than non-lattice ones. Periodic lattices always have well-defined densities.
Regular packing
Dense packing
In three-dimensional Euclidean space, the densest packing of equal spheres is achieved by a family of structures called close-packed structures. One method for generating such a structure is as follows. Consider a plane with a compact arrangement of spheres on it. Call it A. For any three neighbouring spheres, a fourth sphere can be placed on top in the hollow between the three bottom spheres. If we do this for half of the holes in a second plane above the first, we create a new compact layer. There are two possible choices for doing this, call them B and C. Suppose that we chose B. Then one half of the hollows of B lies above the centers of the balls in A and one half lies above the hollows of A which were not used for B. Thus the balls of a third layer can be placed either directly above the balls of the first one, yielding a layer of type A, or above the holes of the first layer which were not occupied by the second layer, yielding a layer of type C. Combining layers of types A, B, and C produces various close-packed structures.
Two simple arrangements within the close-packed family correspond to regular lattices. One is called cubic close packing (or face-centred cubic, "FCC")—where the layers are alternated in the ABCABC...
|
https://en.wikipedia.org/wiki/Moment%20%28mathematics%29
|
In mathematics, the moments of a function are certain quantitative measures related to the shape of the function's graph. If the function represents mass density, then the zeroth moment is the total mass, the first moment (normalized by total mass) is the center of mass, and the second moment is the moment of inertia. If the function is a probability distribution, then the first moment is the expected value, the second central moment is the variance, the third standardized moment is the skewness, and the fourth standardized moment is the kurtosis. The mathematical concept is closely related to the concept of moment in physics.
For a distribution of mass or probability on a bounded interval, the collection of all the moments (of all orders, from 0 to ∞) uniquely determines the distribution (Hausdorff moment problem). The same is not true on unbounded intervals (Hamburger moment problem).
In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the moments of random variables.
Significance of the moments
The n-th raw moment (i.e., moment about zero) of a distribution is defined by
μ′_n = ⟨x^n⟩,
where
⟨f(x)⟩ = Σ_i f(x_i) P(x_i) for a discrete distribution, and ⟨f(x)⟩ = ∫ f(x) P(x) dx for a continuous one.
The n-th moment of a real-valued continuous function f(x) of a real variable about a value c is the integral
μ_n = ∫_{−∞}^{∞} (x − c)^n f(x) dx.
It is possible to define moments for random variables in a more general fashion than moments for real-valued functions — see moments in metric spaces. The moment of a function, without further explanation, usually refers to the above expression with c = 0.
For the second and higher moments, the central moment (moments about the mean, with c being the mean) are usually used rather than the moments about zero, because they provide clearer information about the distribution's shape.
Other moments may also be defined. For example, the n-th inverse moment about zero is E[X^{−n}] and the n-th logarithmic moment about zero is E[(ln X)^n].
The n-th moment about zero of a probability density function f(x) is the expected value of X^n and is called a raw moment or crude moment. The moments about its mean are called central moments; these describe the shape of the function, independently of translation.
If f is a probability density function, then the value of the integral above is called the n-th moment of the probability distribution. More generally, if F is a cumulative probability distribution function of any probability distribution, which may not have a density function, then the n-th moment of the probability distribution is given by the Riemann–Stieltjes integral
μ′_n = E[X^n] = ∫_{−∞}^{∞} x^n dF(x),
where X is a random variable that has this cumulative distribution F, and E is the expectation operator or mean.
When
E[|X^n|] = ∫_{−∞}^{∞} |x^n| dF(x) = ∞,
the moment is said not to exist. If the n-th moment about any point exists, so does the (n − 1)-th moment (and thus, all lower-order moments) about every point.
The zeroth moment of any probability density function is 1, since the area under any probability density function must be equal to one.
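The definitions above are easy to check on a small discrete distribution; a sketch for a fair six-sided die:

```python
# Raw and central moments of a discrete distribution (a fair die), computed
# directly from the definitions, using exact rational arithmetic.
from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]
p = Fraction(1, 6)                        # P(X = x) for each outcome

def raw_moment(n):
    """n-th moment about zero: E[X^n]."""
    return sum(p * x ** n for x in outcomes)

mean = raw_moment(1)                      # first raw moment

def central_moment(n):
    """n-th moment about the mean: E[(X - mean)^n]."""
    return sum(p * (x - mean) ** n for x in outcomes)

assert raw_moment(0) == 1                 # zeroth moment is always 1
assert mean == Fraction(7, 2)
assert central_moment(2) == Fraction(35, 12)   # variance of a fair die
assert central_moment(3) == 0             # symmetric: zero skewness
```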
Standardized moments
The normalised n-th central moment or standardised moment
|
https://en.wikipedia.org/wiki/27%20%28number%29
|
27 (twenty-seven; Roman numeral XXVII) is the natural number following 26 and preceding 28.
In mathematics
Twenty-seven is equal to the cube of three: 27 = 3^3; it is also 3↑↑2 (see tetration). It is divisible by the number of prime numbers below it (9).
In decimal, 27 is the first composite number not divisible by any of its digits.
It is also the first non-trivial decagonal number.
27 has a prime aliquot sum of 13 (the sixth prime number) in the aliquot sequence (27, 13, 1, 0) of only one composite number, rooted in the 13-aliquot tree.
Whereas the composite index of 27 is 17 (the cousin prime to 13), 7 is the prime index of 17; a prime reciprocal magic square based on multiples of has a magic constant of 27.
In the Collatz conjecture (i.e., the 3n + 1 problem), a starting value of 27 requires 3 × 37 = 111 steps to reach 1, more than any smaller number.
The next two larger numbers to require more steps are 54 and 55, where the fourteenth prime number (43) requires twenty-seven steps to reach 1.
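The 111-step claim for 27 is easy to verify directly, since each Collatz step applies n → n/2 for even n and n → 3n + 1 for odd n:

```python
# Verify that the Collatz trajectory of 27 takes 111 steps to reach 1,
# more than any smaller starting value.

def collatz_steps(n):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

assert collatz_steps(27) == 111
assert all(collatz_steps(k) < 111 for k in range(1, 27))
```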
Including the null-motif, there are 27 distinct hypergraph motifs.
There are exactly twenty-seven straight lines on a smooth cubic surface, which give a basis of the fundamental representation of the Lie algebra E6.
The unique exceptional simple formally real Jordan algebra, the exceptional Jordan algebra of self-adjoint 3 by 3 matrices of octonions, is 27-dimensional; its automorphism group is the 52-dimensional exceptional Lie group F4.
There are twenty-seven sporadic groups, if the Tits group (a non-strict group of Lie type, with an irreducible representation in 104 dimensions) is included.
In Robin's theorem for the Riemann hypothesis, the inequality σ(n) < e^γ n log log n fails to hold for twenty-seven integers n, where γ is the Euler–Mascheroni constant; this hypothesis is true if and only if this inequality holds for every larger n.
Base-specific
In base ten, if one cyclically rotates the digits of a three-digit number that is a multiple of 27, the new number is also a multiple of 27. For example, 378, 783, and 837 are all divisible by 27.
In similar fashion, any multiple of 27 can be mirrored and spaced with a zero each for another multiple of 27 (i.e. 27 and 702, 54 and 405, and 378 and 80703 are all multiples of 27).
Any multiple of 27 with "000" or "999" inserted yields another multiple of 27 (20007, 29997, 50004, and 59994 are all multiples of 27).
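Both base-ten claims can be checked exhaustively for small cases:

```python
# Check: cyclic digit rotations of three-digit multiples of 27 stay multiples
# of 27, and inserting "000" or "999" into a multiple of 27 yields another
# multiple of 27.

def rotations(s):
    return [s[i:] + s[:i] for i in range(len(s))]

for m in range(27, 1000, 27):
    if m >= 100:                       # three-digit multiples only
        assert all(int(r) % 27 == 0 for r in rotations(str(m)))

for example in (20007, 29997, 50004, 59994):
    assert example % 27 == 0
```

The rotation property holds because 10^3 ≡ 1 (mod 27), so cycling the digits of a three-digit number changes it by a multiple of 999 = 27 × 37.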
In base 6 (senary), one can readily test for divisibility by 43 (decimal 27) by seeing if the last three digits of the number match 000, 043, 130, 213, 300, 343, 430, or 513.
27 is located at the twenty-eighth (and twenty-ninth) digit after the decimal point in the decimal expansion of π:
3.1415926535 8979323846 2643383279...
If one starts counting with zero, 27 is the second self-locating string after 6, of only a few known.
In science
The atomic number of cobalt.
Dark matter is thought to make up 27% of the universe.
27 is the number of bones in the human hand.
Astronomy
The Messier object M27, a magnitude 7.5 planetary nebula in the constellation Vulpecula, also known as the
|
https://en.wikipedia.org/wiki/Sequence%20%28disambiguation%29
|
A sequence, in mathematics, is an ordered list of elements.
Sequence may also refer to:
Arts and media
Film
Sequence (filmmaking), a series of shots or scenes, edited together in succession
Sequence (journal), a film journal
Séquences, a Quebec film magazine
Sequence (2013 film), a 2013 short fantasy horror film
Sequence, a 16-minute film directed by David Winning
Games
Sequence (game), a board-and-card game distributed by Jax Ltd., Inc.
Before the Echo, a video game also known as Sequence
Music
Sequence (music), a passage which is successively repeated at different pitches
Sequence (musical form), a medieval Latin poem or its musical setting which became part of the Mass
The Sequence, a 1980s all-female hip-hop/funk trio
Science, technology, and mathematics
Biology and medicine
Sequence (biology), the primary structure of a biopolymer
Sequencing, determining the primary structure of an unbranched biopolymer
DNA sequencing, determining the order of the nucleotide bases in a DNA molecule
Protein sequencing
Primary sequence, the sequence of a biological macromolecule
Sequence analysis
Sequence (medicine), a series of ordered consequences due to a single cause
Other uses in science, technology, and mathematics
Sequence (geology), a succession of geological events
Archaeological sequence
Sequence diagram, used to visualise the design of a computing system
Sequential manual transmission, a type of manual automotive transmission
Sequence of events, a time-related notion in physics and metaphysics
Sequences (book), mathematics book by Heini Halberstam and Klaus Roth
List (abstract data type)
A rarely used programming language
A term for a pair of sprites
Other uses
Sequence of tenses, in grammar
See also
Sequencer (disambiguation)
Sequent (disambiguation)
Sequin (disambiguation)
Sequention
Sequentor
|
https://en.wikipedia.org/wiki/Frequency%20domain
|
In mathematics, physics, electronics, control systems engineering, and statistics, the frequency domain refers to the analysis of mathematical functions or signals with respect to frequency, rather than time. Put simply, a time-domain graph shows how a signal changes over time, whereas a frequency-domain graph shows how the signal is distributed within different frequency bands over a range of frequencies. A frequency-domain representation consists of both the magnitude and the phase of a set of sinusoids (or other basis waveforms) at the frequency components of the signal. Although it is common to refer to the magnitude portion as the frequency response of a signal, the phase portion is required to uniquely define the signal.
A given function or signal can be converted between the time and frequency domains with a pair of mathematical operators called transforms. An example is the Fourier transform, which converts a time function into a complex valued sum or integral of sine waves of different frequencies, with amplitudes and phases, each of which represents a frequency component. The "spectrum" of frequency components is the frequency-domain representation of the signal. The inverse Fourier transform converts the frequency-domain function back to the time-domain function. A spectrum analyzer is a tool commonly used to visualize electronic signals in the frequency domain.
A frequency-domain representation may describe either a static function or a particular time period of a dynamic function (signal or system). The frequency transform of a dynamic function is performed over a finite time period of that function and assumes the function repeats infinitely outside of that time period. Some specialized signal processing techniques for dynamic functions use transforms that result in a joint time–frequency domain, with the instantaneous frequency response being a key link between the time domain and the frequency domain.
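The time-to-frequency conversion can be illustrated with a discrete Fourier transform written out directly from its definition; a minimal pure-Python sketch (no external libraries; in practice one would use an FFT routine):

```python
# A frequency-domain representation via a direct discrete Fourier transform.
# A sine at 3 cycles per window shows up as magnitude concentrated in bin 3
# (and its conjugate-symmetric partner, bin N - 3).
import cmath
import math

N = 64
freq = 3                                          # cycles per window
signal = [math.sin(2 * math.pi * freq * n / N) for n in range(N)]

def dft(x):
    n_samples = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_samples)
                for n in range(n_samples)) for k in range(n_samples)]

spectrum = dft(signal)
magnitudes = [abs(c) for c in spectrum]

peak = max(range(N // 2), key=lambda k: magnitudes[k])
assert peak == freq                               # energy sits in bin 3
assert abs(magnitudes[freq] - N / 2) < 1e-6       # amplitude 1 -> |X[k]| = N/2
```

The phase of each spectrum entry (cmath.phase(spectrum[k])) carries the sinusoid's phase offset, which is why magnitude alone does not uniquely define the signal.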
Advantages
One of the main reasons for using a frequency-domain representation of a problem is to simplify the mathematical analysis. For mathematical systems governed by linear differential equations, a very important class of systems with many real-world applications, converting the description of the system from the time domain to a frequency domain converts the differential equations to algebraic equations, which are much easier to solve.
In addition, looking at a system from the point of view of frequency can often give an intuitive understanding of the qualitative behavior of the system, and a revealing scientific nomenclature has grown up to describe it, characterizing the behavior of physical systems to time varying inputs using terms such as bandwidth, frequency response, gain, phase shift, resonant frequencies, time constant, resonance width, damping factor, Q factor, harmonics, spectrum, power spectral density, eigenvalues, poles, and zeros.
An example of a field in which frequency-domain analysis gives a bet
|
https://en.wikipedia.org/wiki/SVP
|
SVP may refer to:
Science and mathematics
Shortest vector problem, the problem of finding the smallest non-zero vector in a lattice space
Society of Vertebrate Paleontology, a society of paleontologists
Saturated vapour pressure, the pressure exerted by a vapour in thermodynamic equilibrium with its condensed phases at a given temperature in a closed system.
Small volume parenterals, a type of injectable pharmaceutical product
Politics and law
Party of the Swedes (Svenskarnas parti), a former neo-Nazi political party in Sweden
Swiss People's Party (Schweizerische Volkspartei), a national-conservative political party in Switzerland
South Tyrolean People's Party (Südtiroler Volkspartei), a regionalist and autonomist political party in German-speaking South Tyrol, Italy
Sexually violent predator, a US legal classification allowing commitment to a mental institution
Entertainment
Save percentage, a goalkeeping statistic used by some sports leagues
Sega Virtua Processor, a processor added to the Sega Genesis game Virtua Racing
Struga Poetry Evenings (Struški Večeri na Poezijata), an international poetry festival held in Struga, North Macedonia
Scott Van Pelt (born 1966), American sportscaster
Other
SVP Worldwide, producer of sewing machines
Society of Saint Vincent de Paul, a Roman Catholic charity
Senior vice president, in the hierarchy of vice presidents
Soil vent pipe, in a drain-waste-vent system
SmoothVideo Project, motion interpolation software
|
https://en.wikipedia.org/wiki/Savant%20syndrome
|
Savant syndrome () is a phenomenon, sometimes following a brain injury, where someone demonstrates exceptional aptitude in one domain, such as art or mathematics, despite significant social or intellectual impairment.
Those with the condition generally have a neurodevelopmental disorder such as autism spectrum disorder or have a brain injury. About half of cases are associated with autism, and these individuals may be known as "autistic savants". While the condition usually becomes apparent in childhood, some cases develop later in life. It is not recognized as a mental disorder within the DSM-5, as it relates to parts of the brain healing or restructuring.
Savant syndrome is estimated to affect around one in a million people. The condition affects more males than females, at a ratio of 6:1. The first medical account of the condition was in 1783. Among those with autism, 1 in 10 to 1 in 200 have savant syndrome to some degree. It is estimated that there are fewer than a hundred prodigious savants, with skills so extraordinary that they would be considered spectacular even for a non-impaired person, currently living.
Signs and symptoms
Savant skills are usually found in one or more of five major areas: art, memory, arithmetic, musical abilities, and spatial skills. The most common kinds of savants are calendrical savants, "human calendars" who can calculate the day of the week for any given date with speed and accuracy, or recall personal memories from any given date. Advanced memory is the key "superpower" in savant abilities.
Approximately half of savants are autistic; the other half often have some form of central nervous system injury or disease. It is estimated that up to 10% of those with autism have some form of savant abilities.
Calendrical savants
A calendrical savant (or calendar savant) is someone who – despite having an intellectual disability – can name the day of the week of a date, or vice versa, on a limited range of decades or certain millennia. The rarity of human calendar calculators is possibly due to the lack of motivation to develop such skills among the general population, although mathematicians have developed formulas that allow them to obtain similar skills. Calendrical savants, on the other hand, may not be prone to invest in socially engaging skills.
Mechanism
Psychological
No widely accepted cognitive theory explains savants' combination of talent and deficit. It has been suggested that individuals with autism are biased towards detail-focused processing and that this cognitive style predisposes individuals either with or without autism to savant talents. Another hypothesis is that savants hyper-systemize, thereby giving an impression of talent. Hyper-systemizing is an extreme state in the empathizing–systemizing theory that classifies people based on their skills in empathizing with others versus systemizing facts about the external world. Also, the attention to detail of savants is a consequence of enhanced perception or sensory hypersensi
|
https://en.wikipedia.org/wiki/L%C3%A9vy%20flight
|
A Lévy flight is a random walk in which the step-lengths have a stable distribution, a probability distribution that is heavy-tailed. When defined as a walk in a space of dimension greater than one, the steps made are in isotropic random directions. Later researchers have extended the use of the term "Lévy flight" to also include cases where the random walk takes place on a discrete grid rather than on a continuous space.
The term "Lévy flight" was coined by Benoît Mandelbrot, who used this for one specific definition of the distribution of step sizes. He used the term Cauchy flight for the case where the distribution of step sizes is a Cauchy distribution, and Rayleigh flight for when the distribution is a normal distribution (which is not an example of a heavy-tailed probability distribution).
The particular case for which Mandelbrot used the term "Lévy flight" is defined by the survivor function of the distribution of step-sizes, U, being
Pr(U > u) = 1 for u < 1, and Pr(U > u) = u^{−D} for u ≥ 1.
Here D is a parameter related to the fractal dimension and the distribution is a particular case of the Pareto distribution.
Properties
Lévy flights are, by construction, Markov processes. For general distributions of the step-size, satisfying the power-like condition, the distance from the origin of the random walk tends, after a large number of steps, to a stable distribution due to the generalized central limit theorem, enabling many processes to be modeled using Lévy flights.
The probability densities for particles undergoing a Levy flight can be modeled using a generalized version of the Fokker–Planck equation, which is usually used to model Brownian motion. The equation requires the use of fractional derivatives. For jump lengths which have a symmetric probability distribution, the equation takes a simple form in terms of the Riesz fractional derivative. In one dimension, the equation reads as
where γ is a constant akin to the diffusion constant, α is the stability parameter and f(x,t) is the potential. The Riesz derivative can be understood in terms of its Fourier Transform.
This can be easily extended to multiple dimensions.
Another important property of the Lévy flight is that of diverging variances in all cases except that of α = 2, i.e. Brownian motion. In general, the θ-th fractional moment of the distribution diverges if α ≤ θ.
The power-law scaling of the step lengths gives Lévy flights a scale invariant property, and they are used to model data that exhibits clustering.
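A Lévy flight is straightforward to simulate by inverse-transform sampling of a Pareto step-length distribution combined with isotropic random directions; a sketch with illustrative parameter choices:

```python
# 2-D Levy flight: Pareto-distributed step lengths via inverse-transform
# sampling, x = x_min * u**(-1/alpha), with isotropic random directions.
import math
import random

def pareto_step(alpha, x_min, u):
    """Inverse CDF of the Pareto distribution: Pr(X > x) = (x_min / x)**alpha."""
    return x_min * u ** (-1.0 / alpha)

def levy_flight(n_steps, alpha=1.5, x_min=1.0, seed=0):
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        r = pareto_step(alpha, x_min, rng.random())
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)
        path.append((x, y))
    return path

# Sanity checks: u = 0.5 with alpha = 1 gives a step of length exactly 2,
# and every sampled step length is at least x_min.
assert pareto_step(1.0, 1.0, 0.5) == 2.0
path = levy_flight(200)
for (x0, y0), (x1, y1) in zip(path, path[1:]):
    assert math.hypot(x1 - x0, y1 - y0) >= 1.0 - 1e-9
```

For α < 2 the step-length variance is infinite, so occasional very long jumps dominate the trajectory, producing the characteristic clusters connected by long flights.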
Applications
The definition of a Lévy flight stems from the mathematics related to chaos theory and is useful in stochastic measurement and simulations for random or pseudo-random natural phenomena. Examples include earthquake data analysis, financial mathematics, cryptography, signals analysis as well as many applications in astronomy, biology, and physics.
It has been found that jumping between climate states observed in the paleoclimatic record can be described as a Lévy flight or an alpha-stable proce
|
https://en.wikipedia.org/wiki/Foliation
|
In mathematics (differential geometry), a foliation is an equivalence relation on an n-manifold, the equivalence classes being connected, injectively immersed submanifolds, all of the same dimension p, modeled on the decomposition of the real coordinate space Rn into the cosets x + Rp of the standardly embedded subspace Rp. The equivalence classes are called the leaves of the foliation. If the manifold and/or the submanifolds are required to have a piecewise-linear, differentiable (of class Cr), or analytic structure then one defines piecewise-linear, differentiable, or analytic foliations, respectively. In the most important case of differentiable foliation of class Cr it is usually understood that r ≥ 1 (otherwise, C0 is a topological foliation). The number p (the dimension of the leaves) is called the dimension of the foliation and q = n − p is called its codimension.
In some papers on general relativity by mathematical physicists, the term foliation (or slicing) is used to describe a situation where the relevant Lorentz manifold (a (p+1)-dimensional spacetime) has been decomposed into hypersurfaces of dimension p, specified as the level sets of a real-valued smooth function (scalar field) whose gradient is everywhere non-zero; this smooth function is moreover usually assumed to be a time function, meaning that its gradient is everywhere time-like, so that its level-sets are all space-like hypersurfaces. In deference to standard mathematical terminology, these hypersurfaces are often called the leaves (or sometimes slices) of the foliation. Note that while this situation does constitute a codimension-1 foliation in the standard mathematical sense, examples of this type are actually globally trivial; while the leaves of a (mathematical) codimension-1 foliation are always locally the level sets of a function, they generally cannot be expressed this way globally, as a leaf may pass through a local-trivializing chart infinitely many times, and the holonomy around a leaf may also obstruct the existence of a globally-consistent defining function for the leaves. For example, while the 3-sphere has a famous codimension-1 foliation discovered by Reeb, a codimension-1 foliation of a closed manifold cannot be given by the level sets of a smooth function, since a smooth function on a closed manifold necessarily has critical points at its maxima and minima.
Foliated charts and atlases
In order to give a more precise definition of foliation, it is necessary to define some auxiliary elements.
A rectangular neighborhood in Rn is an open subset of the form B = J1 × ⋅⋅⋅ × Jn, where Ji is a (possibly unbounded) relatively open interval in the ith coordinate axis. If J1 is of the form (a,0], it is said that B has boundary
∂B = {0} × J2 × ⋅⋅⋅ × Jn.
In the following definition, coordinate charts are considered that have values in Rp × Rq, allowing the possibility of manifolds with boundary and (convex) corners.
A foliated chart on the n-manifold M of codimension q is a pair (U,φ), w
|
https://en.wikipedia.org/wiki/Quantitative%20psychological%20research
|
Quantitative psychological research is psychological research that employs quantitative research methods.
Quantitative research falls under the category of empirical research.
See also
Statistics
Quantitative psychology
Quantitative research
|
https://en.wikipedia.org/wiki/Frobenius
|
Frobenius is a surname. Notable people with the surname include:
Ferdinand Georg Frobenius (1849–1917), mathematician
Frobenius algebra
Frobenius endomorphism
Frobenius inner product
Frobenius norm
Frobenius method
Frobenius group
Frobenius theorem (differential topology)
Georg Ludwig Frobenius (1566–1645), German publisher
Johannes Frobenius (1460–1527), publisher and printer in Basel
Hieronymus Frobenius (1501–1563), publisher and printer in Basel, son of Johannes
Ambrosius Frobenius (1537–1602), publisher and printer in Basel, son of Hieronymus
Leo Frobenius (1873–1938), ethnographer
Nikolaj Frobenius (born 1965), Norwegian writer and screenwriter
August Sigmund Frobenius (died 1741), German chemist
See also
Frobenius Orgelbyggeri, Danish organ building firm
|
https://en.wikipedia.org/wiki/Semiring
|
In abstract algebra, a semiring is an algebraic structure. It is a generalization of a ring, dropping the requirement that each element must have an additive inverse. At the same time, it is a generalization of bounded distributive lattices.
The smallest semiring that is not a ring is the two-element Boolean algebra, e.g. with logical disjunction as addition. A motivating example that is neither a ring nor a lattice is the set of natural numbers under ordinary addition and multiplication, when including the number zero. Semirings are abundant, because a suitable multiplication operation arises as the function composition of endomorphisms of any commutative monoid.
The theory of (associative) algebras over commutative rings can be generalized to one over commutative semirings.
Terminology
Some authors call semiring the structure without the requirement for there to be a 0 or a 1. This makes the analogy between ring and semiring on the one hand and group and monoid on the other hand work more smoothly. These authors often use rig for the concept defined here. This originated as a joke, suggesting that rigs are rings without negative elements. (And this is similar to using rng to mean a ring without a multiplicative identity.)
The term dioid (for "double monoid") has been used to mean semirings or other structures. It was used by Kuntzman in 1972 to denote a semiring. (It is alternatively sometimes used for naturally ordered semirings, but the term was also used for idempotent subgroups by Baccelli et al. in 1992.)
Definition
A semiring is a set R equipped with two binary operations + and ⋅, called addition and multiplication, such that:
(R, +) is a monoid with identity element called 0: (a + b) + c = a + (b + c) and 0 + a = a + 0 = a
(R, ⋅) is a monoid with identity element called 1: (a ⋅ b) ⋅ c = a ⋅ (b ⋅ c) and 1 ⋅ a = a ⋅ 1 = a
Addition is commutative: a + b = b + a
Multiplication by the additive identity 0 annihilates R: 0 ⋅ a = a ⋅ 0 = 0
Multiplication left- and right-distributes over addition: a ⋅ (b + c) = a ⋅ b + a ⋅ c and (a + b) ⋅ c = a ⋅ c + b ⋅ c
Explicitly stated, (R, +) is a commutative monoid.
Notation
The symbol ⋅ is usually omitted from the notation; that is, a ⋅ b is just written ab.
Similarly, an order of operations is conventional, in which ⋅ is applied before +. That is, a + b ⋅ c denotes a + (b ⋅ c).
For the purpose of disambiguation, one may write 0_R or 1_R to emphasize which structure the units at hand belong to.
If a is an element of a semiring and n is a positive integer, then the n-times repeated multiplication of a with itself is denoted a^n, and one similarly writes na for the n-times repeated addition a + a + ... + a.
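These axioms can be checked directly on a concrete example. The sketch below (illustrative, not from the original article) verifies them for the min-plus (tropical) semiring on the reals together with infinity, where "addition" is min with identity ∞ and "multiplication" is ordinary + with identity 0:

```python
import itertools
import math

# Min-plus (tropical) semiring on R ∪ {∞}:
#   a ⊕ b = min(a, b)   with identity ∞
#   a ⊗ b = a + b       with identity 0
ZERO, ONE = math.inf, 0.0

def add(a, b):  # semiring addition
    return min(a, b)

def mul(a, b):  # semiring multiplication
    return a + b

samples = [ZERO, ONE, 1.0, 2.5, 7.0]

for a, b, c in itertools.product(samples, repeat=3):
    assert add(add(a, b), c) == add(a, add(b, c))          # + is associative
    assert add(a, b) == add(b, a)                          # + is commutative
    assert add(a, ZERO) == a                               # 0 is the additive identity
    assert mul(mul(a, b), c) == mul(a, mul(b, c))          # · is associative
    assert mul(a, ONE) == mul(ONE, a) == a                 # 1 is the multiplicative identity
    assert mul(a, ZERO) == mul(ZERO, a) == ZERO            # 0 annihilates
    assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))  # left distributivity
    assert mul(add(a, b), c) == add(mul(a, c), mul(b, c))  # right distributivity
```

Note that this structure has no additive inverses (there is no x with min(x, 3) = ∞), so it is a semiring that is not a ring.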
Construction of new semirings
The zero ring with underlying set {0} is also a semiring, called the trivial semiring. This triviality can be characterized via 0 = 1, and so 0 ≠ 1 is often silently assumed as if it were an additional axiom.
Now given any semiring, there are several ways to define new ones.
As noted, the natural numbers with their arithmetic structure form a semiring. Any subset of a semiring R that contains 0 and 1 and is closed under the operations inherited from R is a sub-semiring of R.
If M is a commutative monoid, function composition provides the multiplication to form a semiring: the set End(M) of endomorphisms
|
https://en.wikipedia.org/wiki/Anyonic%20Lie%20algebra
|
In mathematics, an anyonic Lie algebra is a U(1) graded vector space L over C equipped with a bilinear operator and linear maps, satisfying the following axioms:
for pure graded elements X, Y, and Z.
References
|
https://en.wikipedia.org/wiki/Real%20tree
|
In mathematics, real trees (also called -trees) are a class of metric spaces generalising simplicial trees. They arise naturally in many mathematical contexts, in particular geometric group theory and probability theory. They are also the simplest examples of Gromov hyperbolic spaces.
Definition and examples
Formal definition
A metric space X is a real tree if it is a geodesic space where every triangle is a tripod. That is, for every three points x, y, ρ of X there exists a point c such that the geodesic segments [ρ, x] and [ρ, y] intersect in the segment [ρ, c] and also c ∈ [x, y]. This definition is equivalent to X being a "zero-hyperbolic space" in the sense of Gromov (all triangles are "zero-thin").
Real trees can also be characterised by a topological property. A metric space X is a real tree if for any pair of points x, y in X all topological embeddings σ of the segment [0, 1] into X such that σ(0) = x and σ(1) = y have the same image (which is then a geodesic segment from x to y).
Simple examples
If X is a connected graph with the combinatorial metric then it is a real tree if and only if it is a tree (i.e. it has no cycles). Such a tree is often called a simplicial tree. They are characterised by the following topological property: a real tree T is simplicial if and only if the set of singular points of T (points whose complement in T has three or more connected components) is closed and discrete in T.
The -tree obtained in the following way is nonsimplicial. Start with the interval [0, 2] and glue, for each positive integer n, an interval of length 1/n to the point 1 − 1/n in the original interval. The set of singular points is discrete, but fails to be closed since 1 is an ordinary point in this -tree. Gluing an interval to 1 would result in a closed set of singular points at the expense of discreteness.
The Paris metric makes the plane into a real tree. It is defined as follows: one fixes an origin , and if two points are on the same ray from , their distance is defined as the Euclidean distance. Otherwise, their distance is defined to be the sum of the Euclidean distances of these two points to the origin .
The plane under the Paris metric is an example of a hedgehog space, a collection of line segments joined at a common endpoint. Any such space is a real tree.
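The Paris metric is straightforward to implement. The sketch below (illustrative; the function name is ours) computes it for points of the plane, treating two points as lying on a common ray from the origin exactly when their cross product vanishes and their dot product is non-negative:

```python
import math

def paris_distance(p, q):
    """Paris (hedgehog) metric on the plane with origin O = (0, 0)."""
    norm = lambda v: math.hypot(v[0], v[1])
    cross = p[0] * q[1] - p[1] * q[0]
    dot = p[0] * q[0] + p[1] * q[1]
    if cross == 0 and dot >= 0:
        # p and q lie on a common ray from the origin (this also covers the
        # case where one of them *is* the origin): ordinary Euclidean distance.
        return abs(norm(p) - norm(q))
    # Otherwise one must travel through the origin.
    return norm(p) + norm(q)

# Same ray: Euclidean distance.
assert paris_distance((1, 0), (3, 0)) == 2
# Different rays: sum of the distances to the origin.
assert paris_distance((1, 0), (0, 1)) == 2
assert paris_distance((0, 0), (0, 5)) == 5
```

Note that opposite rays count as different rays: the distance from (1, 0) to (−1, 0) is 2, going through the origin.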
Characterizations
Here are equivalent characterizations of real trees which can be used as definitions:
1) (similar to trees as graphs) A real tree is a geodesic metric space which contains no subset homeomorphic to a circle.
2) A real tree is a connected metric space (X, d) which has the four points condition (see figure):
For all x, y, z, t in X: d(x, y) + d(z, t) ≤ max( d(x, z) + d(y, t), d(x, t) + d(y, z) ).
3) A real tree is a connected 0-hyperbolic metric space (see figure). Formally:
For all x, y, z, w in X: (x, y)_w ≥ min( (x, z)_w, (y, z)_w ), where (x, y)_w = ½( d(x, w) + d(y, w) − d(x, y) ) denotes the Gromov product.
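The four-point condition, d(x, y) + d(z, t) ≤ max( d(x, z) + d(y, t), d(x, t) + d(y, z) ), can be verified exhaustively on a small simplicial tree. This sketch (illustrative, not from the original) uses the star with three edges of length 1 glued at a centre c, so distinct leaves are at distance 2:

```python
import itertools

# Star tree: centre 'c' and three leaves, each at distance 1 from 'c'.
points = ['c', 'x', 'y', 'z']

def d(a, b):
    if a == b:
        return 0
    if 'c' in (a, b):
        return 1   # leaf to centre
    return 2       # leaf to leaf, through the centre

# Four-point condition, checked over all quadruples:
for x, y, z, t in itertools.product(points, repeat=4):
    assert d(x, y) + d(z, t) <= max(d(x, z) + d(y, t), d(x, t) + d(y, z))
```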
4) (similar to the characterization of Galton–Watson trees by the contour process). Consider a positive excursion of a function. In other words, let e be a continuous real-valued function and [a, b] an interval such that e(a) = e(b) = 0 and e(t) > 0 for t in (a, b).
For x, y in [a, b] with x ≤ y, define a pseudometric and an equivalence relation with:
d(x, y) = e(x) + e(y) − 2 min{ e(t) : t in [x, y] }, and x ∼ y if and only if d(x, y) = 0.
Then, the quotient space ([a, b]/∼, d) is a real tree.
|
https://en.wikipedia.org/wiki/Open%20and%20closed%20maps
|
In mathematics, more specifically in topology, an open map is a function between two topological spaces that maps open sets to open sets.
That is, a function f : X → Y is open if for any open set U in X, the image f(U) is open in Y.
Likewise, a closed map is a function that maps closed sets to closed sets.
A map may be open, closed, both, or neither; in particular, an open map need not be closed and vice versa.
Open and closed maps are not necessarily continuous. Further, continuity is independent of openness and closedness in the general case and a continuous function may have one, both, or neither property; this fact remains true even if one restricts oneself to metric spaces.
Although their definitions seem more natural, open and closed maps are much less important than continuous maps.
Recall that, by definition, a function f : X → Y is continuous if the preimage of every open set of Y is open in X (equivalently, if the preimage of every closed set of Y is closed in X).
Early study of open maps was pioneered by Simion Stoilow and Gordon Thomas Whyburn.
Definitions and characterizations
If S is a subset of a topological space X then let cl_X S (resp. int_X S) denote the closure (resp. interior) of S in that space.
Let f : X → Y be a function between topological spaces. If S is any set then f(S) := { f(s) : s ∈ S } is called the image of S under f.
Competing definitions
There are two different competing, but closely related, definitions of "open map" that are widely used, where both of these definitions can be summarized as: "it is a map that sends open sets to open sets."
The following terminology is sometimes used to distinguish between the two definitions.
A map f : X → Y is called a
"strongly open map" if whenever U is an open subset of the domain X then f(U) is an open subset of f's codomain Y;
"relatively open map" if whenever U is an open subset of the domain X then f(U) is an open subset of f's image f(X), where as usual, this set is endowed with the subspace topology induced on it by f's codomain Y.
Every strongly open map is a relatively open map. However, these definitions are not equivalent in general.
Warning: Many authors define "open map" to mean "relatively open map" (for example, The Encyclopedia of Mathematics) while others define "open map" to mean "strongly open map". In general, these definitions are not equivalent, so it is advisable to always check what definition of "open map" an author is using.
A surjective map is relatively open if and only if it is strongly open; so for this important special case the definitions are equivalent.
More generally, a map f : X → Y is relatively open if and only if the surjection f : X → f(X) is a strongly open map.
Because X is always an open subset of X, the image f(X) of a strongly open map f : X → Y must be an open subset of its codomain Y. In fact, a relatively open map is a strongly open map if and only if its image is an open subset of its codomain.
In summary,
A map is strongly open if and only if it is relatively open and its image is an open subset of its codomain.
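For finite topological spaces both notions can be checked by brute force. A small sketch (illustrative; helper names are ours), representing a topology as a set of frozensets and a map as a dict:

```python
def image(f, S):
    return frozenset(f[x] for x in S)

def is_strongly_open(f, tau_X, tau_Y):
    """Image of every open set of X is open in the codomain Y."""
    return all(image(f, U) in tau_Y for U in tau_X)

def is_relatively_open(f, tau_X, tau_Y):
    """Image of every open set of X is open in f(X) with its subspace topology."""
    fX = frozenset(f.values())
    tau_fX = {V & fX for V in tau_Y}   # subspace topology on the image
    return all(image(f, U) in tau_fX for U in tau_X)

# X = {0, 1} with the Sierpinski topology, Y = {0, 1} indiscrete.
tau_X = {frozenset(), frozenset({0}), frozenset({0, 1})}
tau_Y = {frozenset(), frozenset({0, 1})}

inclusion = {0: 0, 1: 1}
assert not is_strongly_open(inclusion, tau_X, tau_Y)    # {0} is not open in Y
assert not is_relatively_open(inclusion, tau_X, tau_Y)  # image is all of Y

constant = {0: 0, 1: 0}
assert is_relatively_open(constant, tau_X, tau_Y)  # image {0} carries the
                                                   # indiscrete subspace topology
```

The constant map illustrates the summary above: it is relatively open, but not strongly open, because its image {0} is not an open subset of the codomain.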
By using this characterization, it is often straightforward to apply results involving one of
|
https://en.wikipedia.org/wiki/Weierstrass%20preparation%20theorem
|
In mathematics, the Weierstrass preparation theorem is a tool for dealing with analytic functions of several complex variables, at a given point P. It states that such a function is, up to multiplication by a function not zero at P, a polynomial in one fixed variable z, which is monic, and whose coefficients of lower degree terms are analytic functions in the remaining variables and zero at P.
There are also a number of variants of the theorem, that extend the idea of factorization in some ring R as u·w, where u is a unit and w is some sort of distinguished Weierstrass polynomial. Carl Siegel has disputed the attribution of the theorem to Weierstrass, saying that it occurred under the current name in some late nineteenth-century Traités d'analyse without justification.
Complex analytic functions
For one variable, the local form of an analytic function f(z) near 0 is zkh(z) where h(0) is not 0, and k is the order of the zero of f at 0. This is the result that the preparation theorem generalises.
We pick out one variable z, which we may assume is first, and write our complex variables as (z, z2, ..., zn). A Weierstrass polynomial W(z) is
zk + gk−1zk−1 + ... + g0
where gi(z2, ..., zn) is analytic and gi(0, ..., 0) = 0.
Then the theorem states that for analytic functions f, if
f(0, ...,0) = 0,
and
f(z, z2, ..., zn)
as a power series has some term only involving z, we can write (locally near (0, ..., 0))
f(z, z2, ..., zn) = W(z)h(z, z2, ..., zn)
with h analytic and h(0, ..., 0) not 0, and W a Weierstrass polynomial.
This has the immediate consequence that the set of zeros of f, near (0, ..., 0), can be found by fixing any small values of z2, ..., zn and then solving the equation W(z)=0. The corresponding values of z form a number of continuously-varying branches, in number equal to the degree of W in z. In particular f cannot have an isolated zero.
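As a concrete, hand-picked illustration (not from the original article), consider f(z, w) = z² + wz + wz² + w²z near the origin: it vanishes at (0, 0), and its power series contains the term z² involving z alone, so the theorem applies with k = 2:

```latex
f(z, w) \;=\; \underbrace{\left(z^{2} + w z\right)}_{W(z)}\,\underbrace{\left(1 + w\right)}_{h(z,\,w)},
\qquad g_{1}(w) = w,\quad g_{0}(w) = 0,\quad h(0, 0) = 1 \neq 0 .
```

Fixing a small value of w, the zeros of W are z = 0 and z = −w: two continuously varying branches, in number equal to the degree of W in z.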
Division theorem
A related result is the Weierstrass division theorem, which states that if f and g are analytic functions, and g is a Weierstrass polynomial of degree N, then there exists a unique pair h and j such that f = gh + j, where j is a polynomial of degree less than N. In fact, many authors prove the Weierstrass preparation as a corollary of the division theorem. It is also possible to prove the division theorem from the preparation theorem so that the two theorems are actually equivalent.
Applications
The Weierstrass preparation theorem can be used to show that the ring of germs of analytic functions in n variables is a Noetherian ring, which is also referred to as the Rückert basis theorem.
Smooth functions
There is a deeper preparation theorem for smooth functions, due to Bernard Malgrange, called the Malgrange preparation theorem. It also has an associated division theorem, named after John Mather.
Formal power series in complete local rings
There is an analogous result, also referred to as the Weierstrass preparation theorem, for the ring of formal power series over compl
|
https://en.wikipedia.org/wiki/Tensor%20product%20of%20fields
|
In mathematics, the tensor product of two fields is their tensor product as algebras over a common subfield. If no subfield is explicitly specified, the two fields must have the same characteristic and the common subfield is their prime subfield.
The tensor product of two fields is sometimes a field, and often a direct product of fields; in some cases, it can contain non-zero nilpotent elements.
The tensor product of two fields expresses in a single structure the different ways to embed the two fields in a common extension field.
Compositum of fields
First, one defines the notion of the compositum of fields. This construction occurs frequently in field theory. The idea behind the compositum is to make the smallest field containing two other fields. In order to formally define the compositum, one must first specify a tower of fields. Let k be a field and L and K be two extensions of k. The compositum, denoted K.L, is defined to be K.L = k(K ∪ L), where the right-hand side denotes the extension generated by K and L over k. This assumes some field containing both K and L. Either one starts in a situation where an ambient field is easy to identify (for example if K and L are both subfields of the complex numbers), or one proves a result that allows one to place both K and L (as isomorphic copies) in some large enough field.
In many cases one can identify K.L as a vector space tensor product, taken over the field N that is the intersection of K and L. For example, if one adjoins √2 to the rational field Q to get K, and √3 to get L, it is true that the field M obtained as K.L inside the complex numbers is (up to isomorphism)
K ⊗_Q L
as a vector space over Q. (This type of result can be verified, in general, by using the ramification theory of algebraic number theory.)
Subfields K and L of M are linearly disjoint (over a subfield N) when in this way the natural N-linear map of
K ⊗_N L
to K.L is injective. Naturally enough this isn't always the case, for example when K = L. When the degrees are finite, injectivity is equivalent here to bijectivity. Hence, when K and L are linearly disjoint finite-degree extension fields over N, K.L ≅ K ⊗_N L, as with the aforementioned extensions of the rationals.
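In practice, whether two quadratic extensions of Q are linearly disjoint can be tested by factoring one defining polynomial over the other field. A sketch using SymPy's `factor` with the `extension` keyword (the example choices are ours):

```python
from sympy import Symbol, factor, sqrt, expand

x = Symbol('x')

# Q(sqrt(2)) and Q(sqrt(3)): x^2 - 3 stays irreducible over Q(sqrt(2)),
# so Q(sqrt(2)) ⊗_Q Q(sqrt(3)) is a field of degree 4 over Q.
assert factor(x**2 - 3, extension=sqrt(2)) == x**2 - 3

# Q(sqrt(2)) ⊗_Q Q(sqrt(2)) is NOT a field: x^2 - 2 splits over Q(sqrt(2)),
# and the tensor product is a direct product of two copies of Q(sqrt(2)).
split = factor(x**2 - 2, extension=sqrt(2))   # (x - sqrt(2))*(x + sqrt(2))
assert expand(split) == x**2 - 2 and split != x**2 - 2
```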
A significant case in the theory of cyclotomic fields is that for the nth roots of unity, for n a composite number, the subfields generated by the p^k th roots of unity for prime powers p^k dividing n are linearly disjoint for distinct p.
The tensor product as ring
To get a general theory, one needs to consider a ring structure on K ⊗_N L. One can define the product (a ⊗ b)(c ⊗ d) to be ac ⊗ bd (see Tensor product of algebras). This formula is multilinear over N in each variable; and so defines a ring structure on the tensor product, making K ⊗_N L into a commutative N-algebra, called the tensor product of fields.
Analysis of the ring structure
The structure of the ring can be analysed by considering all ways of embedding both K and L in some field extension of N. The construction here assumes the common subfield N; but does not as
|
https://en.wikipedia.org/wiki/Radical%20of%20a%20ring
|
In ring theory, a branch of mathematics, a radical of a ring is an ideal of "not-good" elements of the ring.
The first example of a radical was the nilradical introduced by Köthe (1930), based on a suggestion of Wedderburn. In the next few years several other radicals were discovered, of which the most important example is the Jacobson radical. The general theory of radicals was defined independently by Amitsur and Kurosh.
Definitions
In the theory of radicals, rings are usually assumed to be associative, but need not be commutative and need not have a multiplicative identity. In particular, every ideal in a ring is also a ring.
A radical class (also called radical property or just radical) is a class σ of rings possibly without identities, such that:
the homomorphic image of a ring in σ is also in σ
every ring R contains an ideal S(R) in σ that contains every other ideal of R that is in σ
S(R/S(R)) = 0. The ideal S(R) is called the radical, or σ-radical, of R.
The study of such radicals is called torsion theory.
For any class δ of rings, there is a smallest radical class Lδ containing it, called the lower radical of δ. The operator L is called the lower radical operator.
A class of rings is called regular if every non-zero ideal of a ring in the class has a non-zero image in the class. For every regular class δ of rings, there is a largest radical class Uδ, called the upper radical of δ, having zero intersection with δ. The operator U is called the upper radical operator.
A class of rings is called hereditary if every ideal of a ring in the class also belongs to the class.
Examples
The Jacobson radical
Let R be any ring, not necessarily commutative. The Jacobson radical of R is the intersection of the annihilators of all simple right R-modules.
There are several equivalent characterizations of the Jacobson radical, such as:
J(R) is the intersection of the regular maximal right (or left) ideals of R.
J(R) is the intersection of all the right (or left) primitive ideals of R.
J(R) is the maximal quasi-regular right (resp. left) ideal of R.
As with the nilradical, we can extend this definition to arbitrary two-sided ideals I by defining J(I) to be the preimage of J(R/I) under the projection map R → R/I.
If R is commutative, the Jacobson radical always contains the nilradical. If the ring R is a finitely generated Z-algebra, then the nilradical is equal to the Jacobson radical, and more generally: the radical of any ideal I will always be equal to the intersection of all the maximal ideals of R that contain I. This says that R is a Jacobson ring.
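For the finite rings Z/nZ the quasi-regularity characterization gives a direct brute-force computation: x lies in J(Z/nZ) exactly when 1 − xy is a unit for every y (and in Z/nZ a class is a unit iff it is coprime to n). A sketch with an illustrative helper name:

```python
from math import gcd

def jacobson_radical_zn(n):
    """J(Z/nZ) via quasi-regularity: x is in J iff 1 - x*y is a unit for all y."""
    return {x for x in range(n)
            if all(gcd((1 - x * y) % n, n) == 1 for y in range(n))}

# The maximal ideals of Z/12Z are (2) and (3); their intersection is (6).
assert jacobson_radical_zn(12) == {0, 6}
# (2) is the only maximal ideal of Z/8Z.
assert jacobson_radical_zn(8) == {0, 2, 4, 6}
# A field has trivial Jacobson radical.
assert jacobson_radical_zn(7) == {0}
```

For Z/nZ this agrees with the commutative description above: the Jacobson radical equals the nilradical, generated by the product of the primes dividing n.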
The Baer radical
The Baer radical of a ring is the intersection of the prime ideals of the ring R. Equivalently it is the smallest semiprime ideal in R. The Baer radical is the lower radical of the class of nilpotent rings. Also called the "lower nilradical" (and denoted Nil∗R), the "prime radical", and the "Baer-McCoy radical". Every element of the Baer radical is nilpotent, so it is a nil ideal.
For
|
https://en.wikipedia.org/wiki/Magic%20cube
|
In mathematics, a magic cube is the 3-dimensional equivalent of a magic square, that is, a collection of integers arranged in an n × n × n pattern such that the sums of the numbers on each row, on each column, on each pillar and on each of the four main space diagonals are equal, the so-called magic constant of the cube, denoted M3(n). It can be shown that if a magic cube consists of the numbers 1, 2, ..., n^3, then it has magic constant
M3(n) = n(n^3 + 1)/2.
If, in addition, the numbers on every cross section diagonal also sum up to the cube's magic number, the cube is called a perfect magic cube; otherwise, it is called a semiperfect magic cube. The number n is called the order of the magic cube. If the sums of numbers on a magic cube's broken space diagonals also equal the cube's magic number, the cube is called a pandiagonal magic cube.
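The constant M3(n) = n(n^3 + 1)/2 arises because the n^2 rows along one axis partition the entries 1, ..., n^3, whose total is n^3(n^3 + 1)/2. A sketch (illustrative helper names) computing the constant and checking the line sums of a cube given as nested lists:

```python
def magic_constant(n):
    """M3(n) = n(n^3 + 1)/2."""
    return n * (n**3 + 1) // 2

assert [magic_constant(n) for n in (1, 2, 3, 4, 5)] == [1, 9, 42, 130, 315]

def is_magic_cube(c):
    """Check rows, columns, pillars and the four main space diagonals
    of an n x n x n array against a common line sum."""
    n = len(c)
    m = sum(c[0][0])  # sum of one row, used as the reference value
    lines = []
    for i in range(n):
        for j in range(n):
            lines.append([c[i][j][k] for k in range(n)])  # rows
            lines.append([c[i][k][j] for k in range(n)])  # columns
            lines.append([c[k][i][j] for k in range(n)])  # pillars
    lines.append([c[k][k][k] for k in range(n)])             # the four main
    lines.append([c[k][k][n - 1 - k] for k in range(n)])     # space diagonals
    lines.append([c[k][n - 1 - k][k] for k in range(n)])
    lines.append([c[n - 1 - k][k][k] for k in range(n)])
    return all(sum(line) == m for line in lines)

assert is_magic_cube([[[1]]])  # the trivial order-1 cube
```

The values 130 and 315 match the magic constants of the order-4 and order-5 cubes cited below for perfect magic cubes.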
Alternative definition
In recent years, an alternative definition for the perfect magic cube has gradually come into use. It is based on the fact that a pandiagonal magic square has traditionally been called "perfect", because all possible lines sum correctly. That is not the case with the above definition for the cube.
Multimagic cubes
As in the case of magic squares, a bimagic cube has the additional property of remaining a magic cube when all of the entries are squared; a trimagic cube remains a magic cube under both the operations of squaring the entries and of cubing the entries (only two of these are known, as of 2005); and a tetramagic cube remains a magic cube when the entries are squared, cubed, or raised to the fourth power.
Magic cubes based on Dürer's and Gaudi Magic squares
A magic cube can be built with the constraint of a given magic square appearing on one of its faces: see Magic cube with the magic square of Dürer, and Magic cube with the magic square of Gaudi.
See also
Perfect magic cube
Semiperfect magic cube
Multimagic cube
Magic hypercube
Magic cube classes
Magic series
Nasik magic hypercube
John R. Hendricks
References
External links
Harvey Heinz, All about Magic Cubes
Marian Trenkler, Magic p-dimensional cubes
Marian Trenkler, An algorithm for making magic cubes
Marian Trenkler, On additive and multiplicative magic cubes
Ali Skalli's magic squares and magic cubes
|
https://en.wikipedia.org/wiki/Perfect%20magic%20cube
|
In mathematics, a perfect magic cube is a magic cube in which not only the columns, rows, pillars, and main space diagonals, but also the cross section diagonals sum up to the cube's magic constant.
Perfect magic cubes of order one are trivial; cubes of orders two to four can be proven not to exist, and cubes of orders five and six were first discovered by Walter Trump and Christian Boyer on November 13 and September 1, 2003, respectively. A perfect magic cube of order seven was given by A. H. Frost in 1866, and on March 11, 1875, an article was published in the Cincinnati Commercial newspaper on the discovery of a perfect magic cube of order 8 by Gustavus Frankenstein. Perfect magic cubes of orders nine and eleven have also been constructed.
The first perfect cube of order 10 was constructed in 1988 (Li Wen, China).
An alternative definition
In recent years, an alternative definition for the perfect magic cube was proposed by John R. Hendricks. By this definition, a perfect magic cube is one in which all possible lines through each cell sum to the magic constant. The name Nasik magic hypercube is another, unambiguous, name for such a cube. This definition is based on the fact that a pandiagonal magic square has traditionally been called 'perfect', because all possible lines sum correctly.
This same reasoning may be applied to hypercubes of any dimension. Simply stated; in an order m magic hypercube, if all possible lines of m cells sum to the magic constant, the hypercube is perfect. All lower dimension hypercubes contained in this hypercube will then also be perfect. This is not the case with the original definition, which does not require that the planar and diagonal squares be a pandiagonal magic cube. For example, a magic cube of order 8 has 244 correct lines by the old definition of "perfect", but 832 correct lines by this new definition.
The smallest perfect magic cube has order 8, and none can exist for double odd orders.
Gabriel Arnoux constructed an order 17 perfect magic cube in 1887. F.A.P.Barnard published order 8 and order 11 perfect cubes in 1888.
By the modern (given by J.R. Hendricks) definition, there are actually six classes of magic cube; simple magic cubes, pantriagonal magic cubes, diagonal magic cubes, pantriagonal diagonal magic cubes, pandiagonal magic cubes, and perfect magic cubes.
Examples
1. Order 4 cube by Thomas Krijgsman, 1982; magic constant 130.
2. Order 5 cube by Walter Trump and Christian Boyer, 2003-11-13; magic constant 315.
See also
Magic cube classes
Nasik magic hypercube
John R. Hendricks
References
Planck, C., The Theory of Paths Nasik, printed for private circulation, A.J. Lawrence, Printer, Rugby (England), 1905
H.D. Heinz & J.R. Hendricks, Magic Square Lexicon: Illustrated, HDH, 2000, ISBN 0-9687985-0-0
External links
Christian Boyer: Perfect magic cubes
MathWorld news: Perfect magic cube of order 5 discovered
Harvey Heinz: Perfect Magic Hypercubes
Aale de Winkel: The Magic Encyclopedia
|
https://en.wikipedia.org/wiki/Semiperfect%20magic%20cube
|
In mathematics, a semiperfect magic cube is a magic cube that is not a perfect magic cube, i.e., a magic cube for which the cross section diagonals do not necessarily sum up to the cube's magic constant.
References
|
https://en.wikipedia.org/wiki/Tensor%20product%20of%20algebras
|
In mathematics, the tensor product of two algebras over a commutative ring R is also an R-algebra. This gives the tensor product of algebras. When the ring is a field, the most common application of such products is to describe the product of algebra representations.
Definition
Let R be a commutative ring and let A and B be R-algebras. Since A and B may both be regarded as R-modules, their tensor product
A ⊗_R B
is also an R-module. The tensor product can be given the structure of a ring by defining the product on elements of the form a ⊗ b by
(a1 ⊗ b1)(a2 ⊗ b2) = a1a2 ⊗ b1b2
and then extending by linearity to all of A ⊗_R B. This ring is an R-algebra, associative and unital, with identity element given by 1A ⊗ 1B, where 1A and 1B are the identity elements of A and B. If A and B are commutative, then the tensor product is commutative as well.
The tensor product turns the category of R-algebras into a symmetric monoidal category.
Further properties
There are natural homomorphisms from A and B to A ⊗_R B given by
a ↦ a ⊗ 1B and b ↦ 1A ⊗ b.
These maps make the tensor product the coproduct in the category of commutative R-algebras. The tensor product is not the coproduct in the category of all R-algebras. There the coproduct is given by a more general free product of algebras. Nevertheless, the tensor product of non-commutative algebras can be described by a universal property similar to that of the coproduct:
Hom(A ⊗ B, X) ≅ { (f, g) ∈ Hom(A, X) × Hom(B, X) : [f(A), g(B)] = 0 },
where [-, -] denotes the commutator.
The natural isomorphism is given by identifying a morphism φ: A ⊗ B → X on the left hand side with the pair of morphisms (f, g) on the right hand side, where f(a) := φ(a ⊗ 1) and similarly g(b) := φ(1 ⊗ b).
Applications
The tensor product of commutative algebras is of frequent use in algebraic geometry. For affine schemes X, Y, Z with morphisms from X and Z to Y, so X = Spec(A), Y = Spec(R), and Z = Spec(B) for some commutative rings A, R, B, the fiber product scheme is the affine scheme corresponding to the tensor product of algebras:
X ×_Y Z = Spec(A ⊗_R B).
More generally, the fiber product of schemes is defined by gluing together affine fiber products of this form.
Examples
The tensor product can be used as a means of taking intersections of two subschemes in a scheme: consider the C[x, y]-algebras C[x, y]/(f) and C[x, y]/(g); then their tensor product is C[x, y]/(f) ⊗_{C[x, y]} C[x, y]/(g) ≅ C[x, y]/(f, g), which describes the intersection of the algebraic curves f = 0 and g = 0 in the affine plane over C.
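Such an intersection can be computed explicitly for, say, the parabola f = y − x² and the line g = y. A SymPy sketch (the example is ours, not from the original):

```python
from sympy import symbols, solve, expand

x, y = symbols('x y')
f = y - x**2   # the parabola y = x^2
g = y          # the line y = 0

# C[x,y]/(f) ⊗ C[x,y]/(g) = C[x,y]/(f, g) describes the scheme-theoretic
# intersection; its points are the common zeros of f and g:
assert solve([f, g], [x, y]) == [(0, 0)]

# Since g - f = x^2, the ideal (f, g) equals (x^2, y), so the tensor product
# is C[x,y]/(x^2, y): a single point with multiplicity 2, reflecting the
# fact that the line is tangent to the parabola.
assert expand(g - f) == x**2
```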
More generally, if R is a commutative ring and I, J are ideals, then R/I ⊗_R R/J ≅ R/(I + J), with a unique isomorphism sending (r + I) ⊗ (s + J) to rs + (I + J).
Tensor products can be used as a means of changing coefficients. For example, and .
Tensor products also can be used for taking products of affine schemes over a field. For example, is isomorphic to the algebra which corresponds to an affine surface in if f and g are not zero.
Given R-algebras A and B whose underlying rings are graded-commutative rings, the tensor product A ⊗_R B becomes a graded-commutative ring by defining (a ⊗ b)(c ⊗ d) = (−1)^{|b||c|} ac ⊗ bd for homogeneous a, b, c, and d.
See also
Extension of scalars
Tensor product of modules
Tensor product of fields
Linearly disjoint
Multilinear subspace learning
Notes
References
|
https://en.wikipedia.org/wiki/Multimagic%20cube
|
In mathematics, a P-multimagic cube is a magic cube that remains magic even if all its numbers are replaced by their kth powers for 1 ≤ k ≤ P. P = 2 cubes are called bimagic, P = 3 cubes are called trimagic, and P = 4 cubes tetramagic. A cube is said to be semi-perfect if the kth power cubes are perfect for 1 ≤ k < P, and the Pth power cube is semiperfect. If all P of the power cubes are perfect, the cube is said to be perfect.
The first known example of a bimagic cube was given by John Hendricks in 2000; it is a semiperfect cube of order 25 and magic constant 195325. In 2003, C. Bower discovered two semi-perfect bimagic cubes of order 16, and a perfect bimagic cube of order 32.
MathWorld reports that only two trimagic cubes are known, discovered by C. Bower in 2003; a semiperfect cube of order 64 and a perfect cube of order 256. It also reports that he discovered the only two known tetramagic cubes, a semiperfect cube of order 1024, and perfect cube of order 8192.
References
See also
Magic square
Multimagic square
|
https://en.wikipedia.org/wiki/Opposite%20category
|
In category theory, a branch of mathematics, the opposite category or dual category Cop of a given category C is formed by reversing the morphisms, i.e. interchanging the source and target of each morphism. Doing the reversal twice yields the original category, so the opposite of an opposite category is the original category itself. In symbols, .
Examples
An example comes from reversing the direction of inequalities in a partial order. So if X is a set and ≤ a partial order relation, we can define a new partial order relation ≤op by
x ≤op y if and only if y ≤ x.
The new order is commonly called dual order of ≤, and is mostly denoted by ≥. Therefore, duality plays an important role in order theory and every purely order theoretic concept has a dual. For example, there are opposite pairs child/parent, descendant/ancestor, infimum/supremum, down-set/up-set, ideal/filter etc. This order theoretic duality is in turn a special case of the construction of opposite categories as every ordered set can be understood as a category.
Given a semigroup (S, ·), one usually defines the opposite semigroup as (S, ·)op = (S, *) where x*y ≔ y·x for all x,y in S. So also for semigroups there is a strong duality principle. Clearly, the same construction works for groups, as well, and is known in ring theory, too, where it is applied to the multiplicative semigroup of the ring to give the opposite ring. Again this process can be described by completing a semigroup to a monoid, taking the corresponding opposite category, and then possibly removing the unit from that monoid.
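The opposite construction is one line of code. A sketch (illustrative) for the semigroup of strings under concatenation, which is noncommutative, checking that the opposite operation is again associative and that taking the opposite twice returns the original operation:

```python
def opposite(mul):
    """Opposite of a binary operation: x *op y := y * x."""
    return lambda a, b: mul(b, a)

concat = lambda a, b: a + b        # a noncommutative semigroup on strings
op = opposite(concat)

samples = ["a", "bc", "def"]
for a in samples:
    for b in samples:
        for c in samples:
            # associativity survives the reversal
            assert op(op(a, b), c) == op(a, op(b, c))
        # ((S, *)^op)^op = (S, *)
        assert opposite(op)(a, b) == concat(a, b)

assert op("ab", "cd") == "cdab"    # the order of the factors is reversed
```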
The category of Boolean algebras and Boolean homomorphisms is equivalent to the opposite of the category of Stone spaces and continuous functions.
The category of affine schemes is equivalent to the opposite of the category of commutative rings.
The Pontryagin duality restricts to an equivalence between the category of compact Hausdorff abelian topological groups and the opposite of the category of (discrete) abelian groups.
By the Gelfand–Neumark theorem, the category of localizable measurable spaces (with measurable maps) is equivalent to the category of commutative Von Neumann algebras (with normal unital homomorphisms of *-algebras).
Properties
Opposite preserves products:
(see product category)
Opposite preserves functors:
(see functor category, opposite functor)
Opposite preserves slices:
(see comma category)
See also
Dual object
Dual (category theory)
Duality (mathematics)
Adjoint functor
Contravariant functor
Opposite functor
References
|
https://en.wikipedia.org/wiki/Multimagic%20square
|
In mathematics, a P-multimagic square (also known as a satanic square) is a magic square that remains magic even if all its numbers are replaced by their kth powers for 1 ≤ k ≤ P. P = 2 squares are called bimagic, P = 3 squares are called trimagic, P = 4 squares tetramagic, and P = 5 squares pentamagic.
Constants for normal squares
If the squares are normal, the constant for the power-squares can be determined as follows:
Bimagic series totals for bimagic squares are also linked to the square-pyramidal number sequence, as follows:
Squares: 0, 1, 4, 9, 16, 25, 36, 49, ...
Sum of squares: 0, 1, 5, 14, 30, 55, 91, 140, 204, 285, ... (the number of units in a square-based pyramid)
The bimagic series consists of the 1st, 4th, 9th, ..., n²th entries of this sequence divided by 1, 2, 3, ..., n respectively, so the values for the rows and columns in order-1, order-2, order-3, ... bimagic squares would be 1, 15, 95, 374, 1105, 2701, 5775, 11180, ...
The trimagic series would be related in the same way to the hyper-pyramidal sequence of nested cubes.
Cubes: 0, 1, 8, 27, 64, 125, 216, ...
Sum of cubes: 0, 1, 9, 36, 100, ...
Values for trimagic squares: 1, 50, 675, 4624, ...
Similarly the tetramagic sequence:
4th powers: 0, 1, 16, 81, 256, 625, 1296, ...
Sum of 4th powers: 0, 1, 17, 98, 354, 979, 2275, ...
Sums for tetramagic squares: 0, 1, 177, ...
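All of these row constants are simply the sum of the kth powers of 1, ..., n² divided by n, since the n rows share the n² entries equally. A brute-force sketch reproducing the values above:

```python
def multimagic_constant(n, k):
    """Row/column sum of the k-th power square of a normal order-n square."""
    return sum(i**k for i in range(1, n**2 + 1)) // n

# bimagic series (k = 2): 1, 15, 95, 374, 1105, 2701, 5775, 11180, ...
assert [multimagic_constant(n, 2) for n in range(1, 9)] == \
       [1, 15, 95, 374, 1105, 2701, 5775, 11180]
# trimagic series (k = 3): 1, 50, 675, 4624, ...
assert [multimagic_constant(n, 3) for n in range(1, 5)] == [1, 50, 675, 4624]
# tetramagic series (k = 4): 1, 177, ...
assert multimagic_constant(2, 4) == 177
```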
Bimagic square
A bimagic square is a magic square that remains magic when all of its numbers are replaced by their squares.
The first known bimagic square has order 8 and magic constant 260 and a bimagic constant of 11180.
It has been conjectured by Bensen and Jacoby that no nontrivial bimagic squares of order less than 8 exist. This was shown for magic squares containing the elements 1 to n2 by Boyer and Trump.
However, J. R. Hendricks was able to show in 1998 that no bimagic square of order 3 exists, save for the trivial bimagic square containing the same number nine times. The proof is fairly simple: let the following be our bimagic square.
{|class="wikitable" style="text-align:center;height:10em;width:10em;;table-layout:fixed"
|-
| a || b || c
|-
| d || e || f
|-
| g || h || i
|-
|}
It is well known that a property of magic squares is that a + i = 2e. Similarly, a^2 + i^2 = 2e^2. Therefore
(a − i)^2 = 2(a^2 + i^2) − (a + i)^2 = 4e^2 − 4e^2 = 0. It follows that a = e = i. The same holds for all lines going through the center.
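Hendricks's impossibility result can also be confirmed by brute force over the normal case: among all arrangements of the numbers 1 to 9, no magic square stays magic after squaring. A small exhaustive sketch (helper names are ours):

```python
# Exhaustive check: among all 3x3 arrangements of 1..9, every magic
# square fails to remain magic after its entries are squared.
from itertools import permutations

def lines(sq):
    a, b, c, d, e, f, g, h, i = sq
    return [(a, b, c), (d, e, f), (g, h, i),
            (a, d, g), (b, e, h), (c, f, i),
            (a, e, i), (c, e, g)]

def is_magic(sq):
    return len({sum(l) for l in lines(sq)}) == 1

bimagic = [p for p in permutations(range(1, 10))
           if is_magic(p) and is_magic(tuple(x * x for x in p))]
print(bimagic)  # [] -- no normal order-3 bimagic square exists
```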
For 4 × 4 squares, Luke Pebody was able to show by similar methods that the only 4 × 4 bimagic squares (up to symmetry) are of the form
or
An 8 × 8 bimagic square.
Nontrivial bimagic squares are now (2010) known for any order from 8 to 64. Li Wen of China created the first known bimagic squares of orders 34, 37, 38, 41, 43, 46, 47, 53, 58, 59, 61, 62, filling in the last unknown orders.
In 2006 Jaroslaw Wroblewski built a non-normal bimagic square of order 6. Non-normal means that it uses non-consecutive integers.
Also in 2006 Lee Morgenstern built several non-normal bimagic squares of order 7.
Trimagic square
A trimagic square is a magic square that remains magic when all of its numbers are replaced by their cubes.
|
https://en.wikipedia.org/wiki/Magic%20hypercube
|
In mathematics, a magic hypercube is the k-dimensional generalization of magic squares and magic cubes, that is, an n × n × n × ... × n array of integers such that the sums of the numbers on each pillar (along any axis) as well as on the main space diagonals are all the same. The common sum is called the magic constant of the hypercube, and is sometimes denoted M_k(n). If a magic hypercube consists of the numbers 1, 2, ..., n^k, then it has magic number
M_k(n) = n(n^k + 1)/2.
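The magic number is simply the total of all n^k entries divided by the n^(k−1) pillars along an axis. A one-line sketch (function name is ours):

```python
# Magic number of a normal k-dimensional magic hypercube of order n:
# M_k(n) = n (n^k + 1) / 2.
def magic_number(k, n):
    return n * (n ** k + 1) // 2

print(magic_number(2, 3))  # 15  (the classic 3x3 magic square)
print(magic_number(3, 3))  # 42  (3x3x3 magic cube)
print(magic_number(4, 3))  # 123 (order-3 magic tesseract)
```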
For k = 4, a magic hypercube may be called a magic tesseract, with sequence of magic numbers given by .
The side-length n of the magic hypercube is called its order. Four-, five-, six-, seven- and eight-dimensional magic hypercubes of order three have been constructed by J. R. Hendricks.
Marian Trenkler proved the following theorem:
A p-dimensional magic hypercube of order n exists if and only if p > 1 and n is different from 2, or p = 1.
A construction of a magic hypercube follows from the proof.
The R programming language includes a module, library(magic), that will create magic hypercubes of any dimension with n a multiple of 4.
Perfect magic hypercubes
If, in addition, the numbers on every cross section diagonal also sum up to the hypercube's magic number, the hypercube is called a perfect magic hypercube; otherwise, it is called a semiperfect magic hypercube. The number n is called the order of the magic hypercube.
This definition of "perfect" assumes that one of the older definitions for perfect magic cubes is used. The Universal Classification System for Hypercubes (John R. Hendricks) requires that for any dimension hypercube, all possible lines sum correctly for the hypercube to be considered perfect magic. Because of the confusion with the term perfect, nasik is now the preferred term for any magic hypercube where all possible lines sum to S. Nasik was defined in this manner by C. Planck in 1905. A nasik magic hypercube has (3^n − 1)/2 lines of m numbers passing through each of its m^n cells.
Nasik magic hypercubes
A Nasik magic hypercube is a magic hypercube with the added restriction that all possible lines through each cell sum correctly to S = m(m^n + 1)/2, where S is the magic constant, m the order and n the dimension of the hypercube.
Or, to put it more concisely, all pan-r-agonals sum correctly for r = 1...n. This definition is the same as the Hendricks definition of perfect, but different from the Boyer/Trump definition.
The term nasik would apply to all dimensions of magic hypercubes in which the number of correctly summing paths (lines) through any cell of the hypercube is P = (3^n − 1)/2.
A pandiagonal magic square then would be a nasik square because 4 magic lines pass through each of the m^2 cells. This was A. H. Frost's original definition of nasik. A nasik magic cube would have 13 magic lines passing through each of its m^3 cells. (This cube also contains 9m pandiagonal magic squares of order m.) A nasik magic tesseract would have 40 lines passing through each of its m^4 cells, and so on.
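The counts 4, 13, 40 above all come from the single formula P = (3^n − 1)/2, which can be checked directly:

```python
# Number of magic lines through each cell of a nasik hypercube of
# dimension n: P = (3^n - 1) / 2.
def nasik_lines(n):
    return (3 ** n - 1) // 2

print([nasik_lines(n) for n in [2, 3, 4]])  # [4, 13, 40]
```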
|
https://en.wikipedia.org/wiki/Grothendieck%27s%20Galois%20theory
|
In mathematics, Grothendieck's Galois theory is an abstract approach to the Galois theory of fields, developed around 1960 to provide a way to study the fundamental group of algebraic topology in the setting of algebraic geometry. It provides, in the classical setting of field theory, an alternative perspective to that of Emil Artin based on linear algebra, which became standard from about the 1930s.
The approach of Alexander Grothendieck is concerned with the category-theoretic properties that characterise the categories of finite G-sets for a fixed profinite group G. For example, G might be the group denoted Ẑ, which is the inverse limit of the cyclic additive groups Z/nZ — or equivalently the completion of the infinite cyclic group Z for the topology of subgroups of finite index. A finite G-set is then a finite set X on which G acts through a quotient finite cyclic group, so that it is specified by giving some permutation of X.
In the above example, a connection with classical Galois theory can be seen by regarding Ẑ as the profinite Galois group Gal(F̄/F) of the algebraic closure F̄ of any finite field F, over F. That is, the automorphisms of F̄ fixing F are described by the inverse limit, as we take larger and larger finite splitting fields over F. The connection with geometry can be seen when we look at covering spaces of the unit disk in the complex plane with the origin removed: the finite covering realised by the z^n map of the disk, thought of by means of a complex number variable z, corresponds to the subgroup nZ of the fundamental group of the punctured disk.
The theory of Grothendieck, published in SGA1, shows how to reconstruct the category of G-sets from a fibre functor Φ, which in the geometric setting takes the fibre of a covering above a fixed base point (as a set). In fact there is an isomorphism proved of the type
G ≅ Aut(Φ),
the latter being the group of automorphisms (self-natural equivalences) of Φ. An abstract classification of categories with a functor to the category of sets is given, by means of which one can recognise categories of G-sets for G profinite.
To see how this applies to the case of fields, one has to study the tensor product of fields. In topos theory this is a part of the study of atomic toposes.
See also
Tannakian formalism
Fiber functor
Anabelian geometry
References
(This book introduces the reader to the Galois theory of Grothendieck, and some generalisations, leading to Galois groupoids.)
Galois theory
Algebraic geometry
Category theory
|
https://en.wikipedia.org/wiki/Schur%20decomposition
|
In the mathematical discipline of linear algebra, the Schur decomposition or Schur triangulation, named after Issai Schur, is a matrix decomposition. It allows one to write an arbitrary complex square matrix as unitarily equivalent to an upper triangular matrix whose diagonal elements are the eigenvalues of the original matrix.
Statement
The Schur decomposition reads as follows: if A is an n × n square matrix with complex entries, then A can be expressed as
A = Q U Q^{-1}
where Q is a unitary matrix (so that its inverse Q−1 is also the conjugate transpose Q* of Q), and U is an upper triangular matrix, which is called a Schur form of A. Since U is similar to A, it has the same spectrum, and since it is triangular, its eigenvalues are the diagonal entries of U.
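As a numerical illustration (assuming NumPy and SciPy are available), `scipy.linalg.schur` with `output='complex'` returns a pair (U, Q) realising exactly this factorisation:

```python
# Complex Schur decomposition A = Q U Q*, with U upper triangular
# and the eigenvalues of A on the diagonal of U.
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, Q = schur(A, output='complex')  # SciPy returns (T, Z) with A = Z T Z*

assert np.allclose(Q @ U @ Q.conj().T, A)      # A = Q U Q*
assert np.allclose(np.tril(U, -1), 0)          # U is upper triangular
assert np.allclose(np.sort_complex(np.diag(U)),
                   np.sort_complex(np.linalg.eigvals(A)))
```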
The Schur decomposition implies that there exists a nested sequence of A-invariant subspaces {0} = V_0 ⊂ V_1 ⊂ ... ⊂ V_n = C^n, and that there exists an ordered orthonormal basis (for the standard Hermitian form of C^n) such that the first i basis vectors span V_i for each i occurring in the nested sequence. Phrased somewhat differently, the first part says that a linear operator J on a complex finite-dimensional vector space stabilizes a complete flag (V_1, ..., V_n).
Proof
A constructive proof for the Schur decomposition is as follows: every operator A on a complex finite-dimensional vector space has an eigenvalue λ, corresponding to some eigenspace Vλ. Let Vλ⊥ be its orthogonal complement. It is clear that, with respect to this orthogonal decomposition, A has matrix representation (one can pick here any orthonormal bases Z1 and Z2 spanning Vλ and Vλ⊥ respectively)
[ λ I_λ   A_12 ]
[ 0       A_22 ]
where Iλ is the identity operator on Vλ. The above matrix would be upper-triangular except for the A22 block. But exactly the same procedure can be applied to the sub-matrix A22, viewed as an operator on Vλ⊥, and its submatrices. Continue this way until the resulting matrix is upper triangular. Since each conjugation increases the dimension of the upper-triangular block by at least one, this process takes at most n steps. Thus the space Cn will be exhausted and the procedure has yielded the desired result.
The above argument can be slightly restated as follows: let λ be an eigenvalue of A, corresponding to some eigenspace Vλ. A induces an operator T on the quotient space Cn/Vλ. This operator is precisely the A22 submatrix from above. As before, T would have an eigenspace, say Wμ ⊂ Cn modulo Vλ. Notice the preimage of Wμ under the quotient map is an invariant subspace of A that contains Vλ. Continue this way until the resulting quotient space has dimension 0. Then the successive preimages of the eigenspaces found at each step form a flag that A stabilizes.
Notes
Although every square matrix has a Schur decomposition, in general this decomposition is not unique. For example, the eigenspace Vλ can have dimension > 1, in which case any orthonormal basis for Vλ would lead to the desired result.
Write the triangular matrix U as U = D + N, where D is diagonal and N is strictly upper triangular (and hence nilpotent).
|
https://en.wikipedia.org/wiki/Schur%20complement
|
In linear algebra and the theory of matrices, the Schur complement of a block matrix is defined as follows.
Suppose p, q are nonnegative integers, and suppose A, B, C, D are respectively p × p, p × q, q × p, and q × q matrices of complex numbers. Let
so that M is a (p + q) × (p + q) matrix.
If D is invertible, then the Schur complement of the block D of the matrix M is the p × p matrix defined by
M/D = A − B D^{-1} C.
If A is invertible, the Schur complement of the block A of the matrix M is the q × q matrix defined by
M/A = D − C A^{-1} B.
In the case that A or D is singular, substituting a generalized inverse for the inverses on M/A and M/D yields the generalized Schur complement.
The Schur complement is named after Issai Schur who used it to prove Schur's lemma, although it had been used previously. Emilie Virginia Haynsworth was the first to call it the Schur complement. The Schur complement is a key tool in the fields of numerical analysis, statistics, and matrix analysis.
Background
The Schur complement arises when performing a block Gaussian elimination on the matrix M. In order to eliminate the elements below the block diagonal, one multiplies the matrix M by a block lower triangular matrix on the right as follows:
[ A  B ] [ I_p         0   ]   [ A − B D^{-1} C   B ]
[ C  D ] [ −D^{-1} C   I_q ] = [ 0                D ]
where Ip denotes a p×p identity matrix. As a result, the Schur complement appears in the upper-left p×p block.
Continuing the elimination process beyond this point (i.e., performing a block Gauss–Jordan elimination),
leads to an LDU decomposition of M, which reads
M = [ I_p   B D^{-1} ] [ M/D   0 ] [ I_p         0   ]
    [ 0     I_q      ] [ 0     D ] [ D^{-1} C   I_q ]
Thus, the inverse of M may be expressed involving D^{-1} and the inverse of Schur's complement, assuming it exists, as
M^{-1} = [ (M/D)^{-1}              −(M/D)^{-1} B D^{-1}                   ]
         [ −D^{-1} C (M/D)^{-1}    D^{-1} + D^{-1} C (M/D)^{-1} B D^{-1} ]
The above relationship comes from the elimination operations that involve D−1 and M/D. An equivalent derivation can be done with the roles of A and D interchanged. By equating the expressions for M−1 obtained in these two different ways, one can establish the matrix inversion lemma, which relates the two Schur complements of M: M/D and M/A (see "Derivation from LDU decomposition" in ).
Properties
If p and q are both 1 (i.e., A, B, C and D are all scalars), we get the familiar formula for the inverse of a 2-by-2 matrix:
M^{-1} = (1 / (AD − BC)) [ D   −B ]
                         [ −C   A ]
provided that AD − BC is non-zero.
In general, if A is invertible, then
M^{-1} = [ A^{-1} + A^{-1} B (M/A)^{-1} C A^{-1}   −A^{-1} B (M/A)^{-1} ]
         [ −(M/A)^{-1} C A^{-1}                     (M/A)^{-1}          ]
whenever this inverse exists.
(Schur's formula) When A, respectively D, is invertible, the determinant of M is also clearly seen to be given by
, respectively
,
which generalizes the determinant formula for 2 × 2 matrices.
(Guttman rank additivity formula) If D is invertible, then the rank of M is given by
rank(M) = rank(D) + rank(M/D).
(Haynsworth inertia additivity formula) If A is invertible, then the inertia of the block matrix M is equal to the inertia of A plus the inertia of M/A.
(Quotient identity) .
The Schur complement of a Laplacian matrix is also a Laplacian matrix.
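The definition M/A = D − CA^{-1}B and Schur's determinant formula det(M) = det(A)·det(M/A) are easy to verify numerically (a sketch, assuming NumPy; the block sizes are arbitrary):

```python
# Numerical check of M/A = D - C A^{-1} B and det(M) = det(A) det(M/A).
import numpy as np

rng = np.random.default_rng(1)
p, q = 3, 2
A = rng.standard_normal((p, p)) + p * np.eye(p)  # shifted to be invertible
B = rng.standard_normal((p, q))
C = rng.standard_normal((q, p))
D = rng.standard_normal((q, q)) + q * np.eye(q)
M = np.block([[A, B], [C, D]])

M_over_A = D - C @ np.linalg.inv(A) @ B          # Schur complement M/A
assert np.isclose(np.linalg.det(M),
                  np.linalg.det(A) * np.linalg.det(M_over_A))
```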
Application to solving linear equations
The Schur complement arises naturally in solving a system of linear equations such as
A x + B y = u
C x + D y = v.
Assuming that the submatrix A is invertible, we can eliminate x from the equations, as follows.
x = A^{-1}(u − B y).
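Eliminating x this way leaves the smaller system (D − CA^{-1}B) y = v − CA^{-1}u in y alone. A sketch of the resulting two-stage solve (assuming NumPy; block sizes arbitrary):

```python
# Solve [A B; C D][x; y] = [u; v] via the Schur complement M/A:
# first (D - C A^{-1} B) y = v - C A^{-1} u, then x = A^{-1}(u - B y).
import numpy as np

rng = np.random.default_rng(2)
p, q = 3, 2
A = rng.standard_normal((p, p)) + p * np.eye(p)
B = rng.standard_normal((p, q))
C = rng.standard_normal((q, p))
D = rng.standard_normal((q, q)) + q * np.eye(q)
u, v = rng.standard_normal(p), rng.standard_normal(q)

Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B                       # Schur complement M/A
y = np.linalg.solve(S, v - C @ Ainv @ u)   # reduced system in y
x = Ainv @ (u - B @ y)                     # back-substitution for x

M = np.block([[A, B], [C, D]])
assert np.allclose(M @ np.concatenate([x, y]), np.concatenate([u, v]))
```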
Sub
|
https://en.wikipedia.org/wiki/Pathological%20%28mathematics%29
|
In mathematics, when a mathematical phenomenon runs counter to some intuition, then the phenomenon is sometimes called pathological. On the other hand, if a phenomenon does not run counter to intuition,
it is sometimes called well-behaved. These terms are sometimes useful in mathematical research and teaching, but there is no strict mathematical definition of pathological or well-behaved.
In analysis
A classic example of a pathology is the Weierstrass function, a function that is continuous everywhere but differentiable nowhere. The sum of a differentiable function and the Weierstrass function is again continuous but nowhere differentiable; so there are at least as many such functions as differentiable functions. In fact, using the Baire category theorem, one can show that continuous functions are generically nowhere differentiable.
Such examples were deemed pathological when they were first discovered:
To quote Henri Poincaré:
Since Poincaré, nowhere differentiable functions have been shown to appear in basic physical and biological processes such as Brownian motion and in applications such as the Black-Scholes model in finance.
Counterexamples in Analysis is a whole book of such counterexamples.
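As a small numerical sketch, the Weierstrass series W(x) = Σ a^n cos(b^n π x) converges uniformly (each term is bounded by a^n), even though the limit is nowhere differentiable; the parameters below satisfy Hardy's condition 0 < a < 1, b an odd integer, ab ≥ 1:

```python
# Partial sums of the Weierstrass function W(x) = sum_n a^n cos(b^n pi x),
# with a = 0.5, b = 3 (so ab = 1.5 >= 1, Hardy's nowhere-differentiability
# condition). The series converges geometrically, so 50 terms suffice.
import math

def weierstrass(x, a=0.5, b=3, terms=50):
    return sum(a ** n * math.cos(b ** n * math.pi * x) for n in range(terms))

print(weierstrass(0.0))  # the full series at 0 sums to 1/(1-a) = 2
```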
In topology
One famous counterexample in topology is the Alexander horned sphere, showing that topologically embedding the sphere S2 in R3 may fail to separate the space cleanly. As a counterexample, it motivated mathematicians to define the tameness property, which suppresses the kind of wild behavior exhibited by the horned sphere, wild knot, and other similar examples.
Like many other pathologies, the horned sphere in a sense plays on infinitely fine, recursively generated structure, which in the limit violates ordinary intuition. In this case, the topology of an ever-descending chain of interlocking loops of continuous pieces of the sphere in the limit fully reflects that of the common sphere, and one would expect the outside of it, after an embedding, to work the same. Yet it does not: it fails to be simply connected.
For the underlying theory, see Jordan–Schönflies theorem.
Counterexamples in Topology is a whole book of such counterexamples.
Well-behaved
Mathematicians (and those in related sciences) very frequently speak of whether a mathematical object—a function, a set, a space of one sort or another—is "well-behaved". While the term has no fixed formal definition, it generally refers to the quality of satisfying a list of prevailing conditions, which might be dependent on context, mathematical interests, fashion, and taste. To ensure that an object is "well-behaved", mathematicians introduce further axioms to narrow down the domain of study. This has the benefit of making analysis easier, but produces a loss of generality of any conclusions reached.
In both pure and applied mathematics (e.g., optimization, numerical integration, mathematical physics), well-behaved also means not violating any assumptions needed to successfully ap
|
https://en.wikipedia.org/wiki/Jordan%20decomposition
|
In mathematics, Jordan decomposition may refer to
Hahn decomposition theorem, and the Jordan decomposition of a measure
Jordan normal form of a matrix
Jordan–Chevalley decomposition of a matrix
Deligne–Lusztig theory, and its Jordan decomposition of a character of a finite group of Lie type
The Jordan–Hölder theorem, about decompositions of finite groups.
|
https://en.wikipedia.org/wiki/Polynomial%20ring
|
In mathematics, especially in the field of algebra, a polynomial ring or polynomial algebra is a ring (which is also a commutative algebra) formed from the set of polynomials in one or more indeterminates (traditionally also called variables) with coefficients in another ring, often a field.
Often, the term "polynomial ring" refers implicitly to the special case of a polynomial ring in one indeterminate over a field. The importance of such polynomial rings relies on the high number of properties that they have in common with the ring of the integers.
Polynomial rings occur and are often fundamental in many parts of mathematics such as number theory, commutative algebra, and algebraic geometry. In ring theory, many classes of rings, such as unique factorization domains, regular rings, group rings, rings of formal power series, Ore polynomials, graded rings, have been introduced for generalizing some properties of polynomial rings.
A closely related notion is that of the ring of polynomial functions on a vector space, and, more generally, ring of regular functions on an algebraic variety.
Definition (univariate case)
The polynomial ring, K[X], in X over a field K (or, more generally, a commutative ring) can be defined in several equivalent ways. One of them is to define K[X] as the set of expressions, called polynomials in X, of the form
p = p_0 + p_1 X + p_2 X^2 + ... + p_{m−1} X^{m−1} + p_m X^m,
where p_0, p_1, ..., p_m, the coefficients of p, are elements of K, p_m ≠ 0 if m > 0, and X, X^2, ..., are symbols, which are considered as "powers" of X, and follow the usual rules of exponentiation: X^0 = 1, X^1 = X, and X^k X^l = X^{k+l} for any nonnegative integers k and l. The symbol X is called an indeterminate or variable. (The term "variable" comes from the terminology of polynomial functions. However, here, X has no value (other than itself), and cannot vary, being a constant in the polynomial ring.)
Two polynomials are equal when the corresponding coefficients of each are equal.
One can think of the ring as arising from by adding one new element that is external to , commutes with all elements of , and has no other specific properties. This can be used for an equivalent definition of polynomial rings.
The polynomial ring in X over K is equipped with an addition, a multiplication and a scalar multiplication that make it a commutative algebra. These operations are defined according to the ordinary rules for manipulating algebraic expressions. Specifically, if
p = p_0 + p_1 X + p_2 X^2 + ... + p_m X^m
and
q = q_0 + q_1 X + q_2 X^2 + ... + q_n X^n,
then
p + q = r_0 + r_1 X + r_2 X^2 + ... + r_k X^k
and
p q = s_0 + s_1 X + s_2 X^2 + ... + s_l X^l,
where k = max(m, n), l = m + n,
r_i = p_i + q_i
and
s_i = p_0 q_i + p_1 q_{i−1} + ... + p_i q_0.
In these formulas, the polynomials p and q are extended by adding "dummy terms" with zero coefficients, so that all p_i and q_i that appear in the formulas are defined. Specifically, if m < n, then p_i = 0 for m < i ≤ n.
The scalar multiplication is the special case of the multiplication where p = p_0 is reduced to its constant term (the term that is independent of X); that is, for c in K,
c (q_0 + q_1 X + ... + q_n X^n) = c q_0 + c q_1 X + ... + c q_n X^n.
It is straightforward to verify that these three operations satisfy the axioms of a commutative algebra over . Therefore, polynomial rings are also called polynomial algebras.
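The coefficient formulas above translate directly into code. A minimal sketch, representing a polynomial by its list of coefficients [p_0, p_1, ..., p_m] (function names are ours):

```python
# Univariate polynomial arithmetic via coefficient lists.
def poly_add(p, q):
    k = max(len(p), len(q))
    p = p + [0] * (k - len(p))           # extend with "dummy terms"
    q = q + [0] * (k - len(q))
    return [pi + qi for pi, qi in zip(p, q)]   # r_i = p_i + q_i

def poly_mul(p, q):
    s = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            s[i + j] += pi * qj          # s_{i+j} collects the products p_i q_j
    return s

# (1 + X) + (2 + X^2) and (1 + X)(1 + X) = 1 + 2X + X^2
print(poly_add([1, 1], [2, 0, 1]))   # [3, 1, 1]
print(poly_mul([1, 1], [1, 1]))      # [1, 2, 1]
```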
Another equivalent definition is often preferred, although less intuitive, be
|
https://en.wikipedia.org/wiki/Equinumerosity
|
In mathematics, two sets or classes A and B are equinumerous if there exists a one-to-one correspondence (or bijection) between them, that is, if there exists a function from A to B such that for every element y of B, there is exactly one element x of A with f(x) = y. Equinumerous sets are said to have the same cardinality (number of elements). The study of cardinality is often called equinumerosity (equalness-of-number). The terms equipollence (equalness-of-strength) and equipotence (equalness-of-power) are sometimes used instead.
Equinumerosity has the characteristic properties of an equivalence relation. The statement that two sets A and B are equinumerous is usually denoted
A ≈ B or A ~ B, or |A| = |B|.
The definition of equinumerosity using bijections can be applied to both finite and infinite sets, and allows one to state whether two sets have the same size even if they are infinite. Georg Cantor, the inventor of set theory, showed in 1874 that there is more than one kind of infinity, specifically that the collection of all natural numbers and the collection of all real numbers, while both infinite, are not equinumerous (see Cantor's first uncountability proof). In his controversial 1878 paper, Cantor explicitly defined the notion of "power" of sets and used it to prove that the set of all natural numbers and the set of all rational numbers are equinumerous (an example where a proper subset of an infinite set is equinumerous to the original set), and that the Cartesian product of even a countably infinite number of copies of the real numbers is equinumerous to a single copy of the real numbers.
Cantor's theorem from 1891 implies that no set is equinumerous to its own power set (the set of all its subsets). This allows the definition of greater and greater infinite sets starting from a single infinite set.
If the axiom of choice holds, then the cardinal number of a set may be regarded as the least ordinal number of that cardinality (see initial ordinal). Otherwise, it may be regarded (by Scott's trick) as the set of sets of minimal rank having that cardinality.
The statement that any two sets are either equinumerous or one has a smaller cardinality than the other is equivalent to the axiom of choice.
Cardinality
Equinumerous sets have a one-to-one correspondence between them, and are said to have the same cardinality. The cardinality of a set X is a measure of the "number of elements of the set". Equinumerosity has the characteristic properties of an equivalence relation (reflexivity, symmetry, and transitivity):
Reflexivity Given a set A, the identity function on A is a bijection from A to itself, showing that every set A is equinumerous to itself: .
Symmetry For every bijection between two sets A and B there exists an inverse function which is a bijection between B and A, implying that if a set A is equinumerous to a set B then B is also equinumerous to A: implies .
Transitivity Given three sets A, B and C with two bijections and , the composition o
|
https://en.wikipedia.org/wiki/Limit%20ordinal
|
In set theory, a limit ordinal is an ordinal number that is neither zero nor a successor ordinal. Alternatively, an ordinal λ is a limit ordinal if there is an ordinal less than λ, and whenever β is an ordinal less than λ, then there exists an ordinal γ such that β < γ < λ. Every ordinal number is either zero, or a successor ordinal, or a limit ordinal.
For example, the smallest limit ordinal is ω, the smallest ordinal greater than every natural number. This is a limit ordinal because for any smaller ordinal (i.e., for any natural number) n we can find another natural number larger than it (e.g. n+1), but still less than ω. The next-smallest limit ordinal is ω+ω. This will be discussed further in the article.
Using the von Neumann definition of ordinals, every ordinal is the well-ordered set of all smaller ordinals. The union of a nonempty set of ordinals that has no greatest element is then always a limit ordinal. Using von Neumann cardinal assignment, every infinite cardinal number is also a limit ordinal.
Alternative definitions
Various other ways to define limit ordinals are:
It is equal to the supremum of all the ordinals below it, but is not zero. (Compare with a successor ordinal: the set of ordinals below it has a maximum, so the supremum is this maximum, the previous ordinal.)
It is not zero and has no maximum element.
It can be written in the form ω·α for α > 0. That is, in the Cantor normal form there is no finite number as last term, and the ordinal is nonzero.
It is a limit point of the class of ordinal numbers, with respect to the order topology. (The other ordinals are isolated points.)
Some contention exists on whether or not 0 should be classified as a limit ordinal, as it does not have an immediate predecessor;
some textbooks include 0 in the class of limit ordinals while others exclude it.
Examples
Because the class of ordinal numbers is well-ordered, there is a smallest infinite limit ordinal; denoted by ω (omega). The ordinal ω is also the smallest infinite ordinal (disregarding limit), as it is the least upper bound of the natural numbers. Hence ω represents the order type of the natural numbers. The next limit ordinal above the first is ω + ω = ω·2, which generalizes to ω·n for any natural number n. Taking the union (the supremum operation on any set of ordinals) of all the ω·n, we get ω·ω = ω^2, which generalizes to ω^n for any natural number n. This process can be further iterated as follows to produce:
In general, all of these recursive definitions via multiplication, exponentiation, repeated exponentiation, etc. yield limit ordinals. All of the ordinals discussed so far are still countable ordinals. However, there is no recursively enumerable scheme for systematically naming all ordinals less than the Church–Kleene ordinal, which is a countable ordinal.
Beyond the countable, the first uncountable ordinal is usually denoted ω1. It is also a limit ordinal.
Continuing, one can obtain the following (all of which ar
|
https://en.wikipedia.org/wiki/Cardinal%20assignment
|
In set theory, the concept of cardinality is significantly developable without recourse to actually defining cardinal numbers as objects in the theory itself (this is in fact a viewpoint taken by Frege; Frege cardinals are basically equivalence classes on the entire universe of sets, by equinumerosity). The concepts are developed by defining equinumerosity in terms of functions and the concepts of one-to-one and onto (injectivity and surjectivity); this gives us a quasi-ordering relation ≤c
on the whole universe by size. It is not a true partial ordering because antisymmetry need not hold: if both A ≤c B and B ≤c A, it is true by the Cantor–Bernstein–Schroeder theorem that A =c B, i.e. A and B are equinumerous, but they do not have to be literally equal (see isomorphism). That at least one of A ≤c B and B ≤c A holds turns out to be equivalent to the axiom of choice.
Nevertheless, most of the interesting results on cardinality and its arithmetic can be expressed merely with =c.
The goal of a cardinal assignment is to assign to every set A a specific, unique set that is only dependent on the cardinality of A. This is in accordance with Cantor's original vision of cardinals: to take a set and abstract its elements into canonical "units" and collect these units into another set, such that the only thing special about this set is its size. These would be totally ordered by the relation ≤c, and =c would be true equality. As Y. N. Moschovakis says, however, this is mostly an exercise in mathematical elegance, and you don't gain much unless you are "allergic to subscripts." However, there are various valuable applications of "real" cardinal numbers in various models of set theory.
In modern set theory, we usually use the Von Neumann cardinal assignment, which uses the theory of ordinal numbers and the full power of the axioms of choice and replacement. Cardinal assignments do need the full axiom of choice, if we want a decent cardinal arithmetic and an assignment for all sets.
Cardinal assignment without the axiom of choice
Formally, assuming the axiom of choice, the cardinality of a set X is the least ordinal α such that there is a bijection between X and α. This definition is known as the von Neumann cardinal assignment. If the axiom of choice is not assumed we need to do something different. The oldest definition of the cardinality of a set X (implicit in Cantor and explicit in Frege and Principia Mathematica) is as the set of all sets that are equinumerous with X: this does not work in ZFC or other related systems of axiomatic set theory because this collection is too large to be a set, but it does work in type theory and in New Foundations and related systems. However, if we restrict from this class to those equinumerous with X that have the least rank, then it will work (this is a trick due to Dana Scott: it works because the collection of objects with any given rank is a set).
References
Moschovakis, Yiannis N. Notes on Set Theory. New York: Springer-Verlag, 1994.
Ca
|
https://en.wikipedia.org/wiki/Lie%20group%20decomposition
|
In mathematics, Lie group decompositions are used to analyse the structure of Lie groups and associated objects, by showing how they are built up out of subgroups. They are essential technical tools in the representation theory of Lie groups and Lie algebras; they can also be used to study the algebraic topology of such groups and associated homogeneous spaces. Since the use of Lie group methods became one of the standard techniques in twentieth century mathematics, many phenomena can now be referred back to decompositions.
The same ideas are often applied to Lie groups, Lie algebras, algebraic groups and p-adic number analogues, making it harder to summarise the facts into a unified theory.
List of decompositions
The Jordan–Chevalley decomposition of an element in algebraic group as a product of semisimple and unipotent elements
The Bruhat decomposition G = BWB of a semisimple algebraic group into double cosets of a Borel subgroup can be regarded as a generalization of the principle of Gauss–Jordan elimination, which generically writes a matrix as the product of an upper triangular matrix with a lower triangular matrix—but with exceptional cases. It is related to the Schubert cell decomposition of Grassmannians: see Weyl group for more details.
The Cartan decomposition writes a semisimple real Lie algebra as the sum of eigenspaces of a Cartan involution.
The Iwasawa decomposition G = KAN of a semisimple group G as the product of compact, abelian, and nilpotent subgroups generalises the way a square real matrix can be written as a product of an orthogonal matrix and an upper triangular matrix (a consequence of Gram–Schmidt orthogonalization).
The Langlands decomposition P = MAN writes a parabolic subgroup P of a Lie group as the product of semisimple, abelian, and nilpotent subgroups.
The Levi decomposition writes a finite dimensional Lie algebra as a semidirect product of a normal solvable ideal and a semisimple subalgebra.
The LU decomposition of a dense subset in the general linear group. It can be considered as a special case of the Bruhat decomposition.
The Birkhoff decomposition, a special case of the Bruhat decomposition for affine groups.
Lie groups
factorization
|
https://en.wikipedia.org/wiki/Language%20of%20mathematics
|
The language of mathematics or mathematical language is an extension of the natural language (for example English) that is used in mathematics and in science for expressing results (scientific laws, theorems, proofs, logical deductions, etc.) with concision, precision and unambiguity.
Features
The main features of the mathematical language are the following.
Use of common words with a derived meaning, generally more specific and more precise. For example, "or" means "one, the other or both", while, in common language, "both" is sometimes included and sometimes not. Also, a "line" is straight and has zero width.
Use of common words with a meaning that is completely different from their common meaning. For example, a mathematical ring is not related to any other meaning of "ring". Real numbers and imaginary numbers are two sorts of numbers, none being more real or more imaginary than the others.
Use of neologisms. For example polynomial, homomorphism.
Use of symbols as words or phrases. For example, A = B and ∀x are respectively read as "A equals B" and "for all x".
Use of formulas as part of sentences. For example: "E = mc^2 represents quantitatively the mass–energy equivalence." A formula that is not included in a sentence is generally meaningless, since the meaning of the symbols may depend on the context: in E = mc^2, it is the context that specifies that E is the energy of a physical body, m is its mass, and c is the speed of light.
Use of mathematical jargon that consists of phrases that are used for informal explanations or shorthands. For example, "killing" is often used in place of "replacing with zero", and this led to the use of assassinator and annihilator as technical words.
Understanding mathematical text
The consequence of these features is that a mathematical text is generally not understandable without some prerequisite knowledge. For example, the sentence "a free module is a module that has a basis" is perfectly correct, although it appears only as grammatically correct nonsense when one does not know the definitions of basis, module, and free module.
H. B. Williams, an electrophysiologist, wrote in 1927:
See also
Formulario mathematico
Formal language
History of mathematical notation
Mathematical notation
List of mathematical jargon
References
Further reading
Linguistic point of view
Keith Devlin (2000) The Language of Mathematics: Making the Invisible Visible, Holt Publishing.
Kay O'Halloran (2004) Mathematical Discourse: Language, Symbolism and Visual Images, Continuum.
R. L. E. Schwarzenberger (2000), "The Language of Geometry", in A Mathematical Spectrum Miscellany, Applied Probability Trust.
In education
F. Bruun, J. M. Diaz, & V. J. Dykes (2015) The Language of Mathematics. Teaching Children Mathematics, 21(9), 530–536.
J. O. Bullock (1994) Literacy in the Language of Mathematics. The American Mathematical Monthly, 101(8), 735–743.
L. Buschman (1995) Communicating in the Language of Mathematics. Teaching Children Mathematics, 1(6), 324–
|
https://en.wikipedia.org/wiki/Mikio%20Sato
|
Mikio Sato (18 April 1928 – 9 January 2023) was a Japanese mathematician known for founding the fields of algebraic analysis, hyperfunctions, and holonomic quantum fields. He was a professor at the Research Institute for Mathematical Sciences in Kyoto.
Biography
Born in Tokyo on 18 April 1928, Sato studied at the University of Tokyo, receiving his BSc in 1952 and PhD under Shokichi Iyanaga in 1963. He was a professor at Osaka University and the University of Tokyo before moving to the Research Institute for Mathematical Sciences (RIMS) attached to Kyoto University in 1970. He was director of RIMS from 1987 to 1991.
His disciples include Masaki Kashiwara, Takahiro Kawai, Tetsuji Miwa, as well as Michio Jimbo, who have been called the "Sato School".
Sato died at home in Kyoto on 9 January 2023, aged 94.
Research
Sato was known for his innovative work in a number of fields, such as prehomogeneous vector spaces and Bernstein–Sato polynomials; and particularly for his hyperfunction theory. This theory initially appeared as an extension of the ideas of distribution theory; it was soon connected to the local cohomology theory of Grothendieck, for which it was an independent realisation in terms of sheaf theory. Further, it led to the theory of microfunctions and microlocal analysis in linear partial differential equations and Fourier theory, such as for wave fronts, and ultimately to the current developments in D-module theory. Part of Sato's hyperfunction theory is the modern theory of holonomic systems: PDEs overdetermined to the point of having finite-dimensional spaces of solutions (algebraic analysis).
In theoretical physics, Sato wrote a series of papers in the 1970s with Michio Jimbo and Tetsuji Miwa that developed the theory of holonomic quantum fields. When Sato was awarded the 2002–2003 Wolf Prize in Mathematics, this work was described as "a far-reaching extension of the mathematical formalism underlying the two-dimensional Ising model, and introduced along the way the famous tau functions." Sato also contributed basic work to non-linear soliton theory, with the use of Grassmannians of infinite dimension.
In number theory, he and John Tate independently posed the Sato–Tate conjecture on L-functions around 1960.
Pierre Schapira remarked, "Looking back, 40 years later, we realize that Sato's approach to mathematics is not so different from that of Grothendieck, that Sato did have the incredible temerity to treat analysis as algebraic geometry and was also able to build the algebraic and geometric tools adapted to his problems."
Awards and honours
Sato received the 1969 Asahi Prize of Science, the 1976 Japan Academy Prize, the 1984 Person of Cultural Merits award of the Japanese Education Ministry, the 1997 Schock Prize, and the 2002–2003 Wolf Prize in Mathematics.
Sato was a plenary speaker at the 1983 International Congress of Mathematicians in Warsaw. He was elected a foreign member of the National Academy of Sciences in 1993.
Notes
External links
Schock Prize citation
|
https://en.wikipedia.org/wiki/Grassmannian
|
In mathematics, the Grassmannian Gr(k, V) is a differentiable manifold that parameterizes the set of all k-dimensional linear subspaces of an n-dimensional vector space V over a field K.
For example, the Grassmannian Gr(1, V) is the space of lines through the origin in V, so it is the same as the projective space P(V) of one dimension lower than V.
When V is a real or complex vector space, Grassmannians are compact smooth manifolds of dimension k(n − k). In general they have the structure of a nonsingular projective algebraic variety.
The earliest work on a non-trivial Grassmannian is due to Julius Plücker, who studied the set of projective lines in real projective 3-space, which is equivalent to Gr(2, R⁴), parameterizing them by what are now called Plücker coordinates. (See below.) Hermann Grassmann later introduced the concept in general.
Notations for Grassmannians vary between authors, and include Gr(k, V), Gr_k(V), Gr(k, n), and Gr_k(n) to denote the Grassmannian of k-dimensional subspaces of an n-dimensional vector space V.
Motivation
By giving a collection of subspaces of a vector space a topological structure, it is possible to talk about a continuous choice of subspaces or open and closed collections of subspaces. Giving them the further structure of a differential manifold, one can talk about smooth choices of subspace.
A natural example comes from tangent bundles of smooth manifolds embedded in a Euclidean space. Suppose we have a manifold M of dimension r embedded in Rⁿ. At each point x in M, the tangent space to M can be considered as a subspace of the tangent space of Rⁿ, which is also just Rⁿ. The map assigning to x its tangent space defines a map from M to Gr(r, n). (In order to do this, we have to translate the tangent space at each x so that it passes through the origin rather than x, and hence defines an r-dimensional vector subspace. This idea is very similar to the Gauss map for surfaces in a 3-dimensional space.)
This can with some effort be extended to all vector bundles over a manifold , so that every vector bundle generates a continuous map from to a suitably generalised Grassmannian—although various embedding theorems must be proved to show this. We then find that the properties of our vector bundles are related to the properties of the corresponding maps. In particular we find that vector bundles inducing homotopic maps to the Grassmannian are isomorphic. Here the definition of homotopy relies on a notion of continuity, and hence a topology.
Low dimensions
For k = 1, the Grassmannian Gr(1, n) is the space of lines through the origin in n-space, so it is the same as the projective space of n − 1 dimensions.
For k = 2, the Grassmannian Gr(2, 3) is the space of all 2-dimensional planes containing the origin. In Euclidean 3-space, a plane containing the origin is completely characterized by the one and only line through the origin that is perpendicular to that plane (and vice versa); hence the spaces Gr(2, 3), Gr(1, 3), and RP² (the projective plane) may all be identified with each other.
The simplest Grassmannian that is not a projective space is Gr(2, 4).
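Over a finite field the Grassmannian is a finite set, and its size is given by the Gaussian binomial coefficient. A minimal sketch of this counting (the formula is standard but is not derived in this article, and the function name is ours):

```python
def gaussian_binomial(n, k, q):
    """Count the k-dimensional subspaces of an n-dimensional vector space
    over the finite field with q elements."""
    num, den = 1, 1
    for i in range(k):
        num *= q ** (n - i) - 1   # choices of the next independent vector
        den *= q ** (k - i) - 1   # ways the same subspace is generated
    return num // den

# Gr(1, 3) over F_2 is the projective plane of order 2 (the Fano plane): 7 points.
print(gaussian_binomial(3, 1, 2))   # 7
# The simplest non-projective Grassmannian, Gr(2, 4), over F_2:
print(gaussian_binomial(4, 2, 2))   # 35
```

The symmetry gaussian_binomial(n, k, q) = gaussian_binomial(n, n − k, q) mirrors the duality between k-dimensional subspaces and their (n − k)-dimensional annihilators.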
The Grassmannian
|
https://en.wikipedia.org/wiki/Law%20of%20trichotomy
|
In mathematics, the law of trichotomy states that every real number is either positive, negative, or zero.
More generally, a binary relation R on a set X is trichotomous if for all x and y in X, exactly one of xRy, yRx and x = y holds. Writing R as <, this is stated in formal logic as:

∀x ∀y [ (x < y ∧ ¬(y < x) ∧ x ≠ y) ∨ (¬(x < y) ∧ y < x ∧ x ≠ y) ∨ (¬(x < y) ∧ ¬(y < x) ∧ x = y) ]
Properties
A relation is trichotomous if, and only if, it is asymmetric and connected.
If a trichotomous relation is also transitive, then it is a strict total order; this is a special case of a strict weak order.
Examples
On the set X = {a,b,c}, the relation R = { (a,b), (a,c), (b,c) } is transitive and trichotomous, and hence a strict total order.
On the same set, the cyclic relation R = { (a,b), (b,c), (c,a) } is trichotomous, but not transitive; it is even antitransitive.
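The two examples above can be checked mechanically. A minimal sketch (the helper names are ours, not from the article):

```python
from itertools import product

def is_trichotomous(R, X):
    """Exactly one of xRy, yRx, x == y holds for every pair (x, y) in X."""
    return all(((x, y) in R) + ((y, x) in R) + (x == y) == 1
               for x, y in product(X, X))

def is_transitive(R):
    """aRb and bRc imply aRc for every chained pair in R."""
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

X = {'a', 'b', 'c'}
order = {('a', 'b'), ('a', 'c'), ('b', 'c')}   # strict total order
cycle = {('a', 'b'), ('b', 'c'), ('c', 'a')}   # trichotomous but not transitive

print(is_trichotomous(order, X), is_transitive(order))   # True True
print(is_trichotomous(cycle, X), is_transitive(cycle))   # True False
```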
Trichotomy on numbers
A law of trichotomy on some set X of numbers usually expresses that some tacitly given ordering relation on X is a trichotomous one. An example is the law "For arbitrary real numbers x and y, exactly one of x < y, y < x, or x = y applies"; some authors even fix y to be zero, relying on the real number's additive linearly ordered group structure. The latter is a group equipped with a trichotomous order.
In classical logic, this axiom of trichotomy holds for ordinary comparison between real numbers and therefore also for comparisons between integers and between rational numbers. The law does not hold in general in intuitionistic logic.
In Zermelo–Fraenkel set theory and Bernays set theory, the law of trichotomy holds between the cardinal numbers of well-orderable sets even without the axiom of choice. If the axiom of choice holds, then trichotomy holds between arbitrary cardinal numbers (because they are all well-orderable in that case).
See also
Begriffsschrift contains an early formulation of the law of trichotomy
Dichotomy
Law of noncontradiction
Law of excluded middle
Three-way comparison
References
Order theory
Binary relations
3 (number)
|
https://en.wikipedia.org/wiki/Gibbs%20phenomenon
|
In mathematics, the Gibbs phenomenon is the oscillatory behavior of the Fourier series of a piecewise continuously differentiable periodic function around a jump discontinuity. The Nth partial Fourier series of the function (formed by summing its N lowest constituent sinusoids) produces large peaks around the jump which overshoot and undershoot the function values. As more sinusoids are used, this approximation error approaches a limit of about 9% of the jump, even though the infinite Fourier series sum does converge pointwise at every point of continuity.
The Gibbs phenomenon was observed by experimental physicists and was believed to be due to imperfections in the measuring apparatus, but it is in fact a mathematical result. It is one cause of ringing artifacts in signal processing.
Description
The Gibbs phenomenon is a behavior of the Fourier series of a function with a jump discontinuity and is described as the following:

As more Fourier series constituents or components are taken, the Fourier series shows the first overshoot in the oscillatory behavior around the jump point approaching ~9% of the (full) jump, and this oscillation does not disappear but gets closer to the point, so that the integral of the oscillation approaches zero (i.e., zero energy in the oscillation).

At the jump point, the Fourier series gives the average of the function's two one-sided limits toward the point.
Square wave example
The three pictures on the right demonstrate the Gibbs phenomenon for a square wave (with peak-to-peak amplitude of 2, from −1 to 1, and period 2π) whose Nth partial Fourier series is

S_N(x) = (4/π) Σ_{k=1}^{N} sin((2k − 1)x)/(2k − 1),

a sum over the odd harmonics. More precisely, this square wave is the function which equals 1 between 2nπ and (2n + 1)π and −1 between (2n + 1)π and (2n + 2)π for every integer n; thus, this square wave has a jump discontinuity of peak-to-peak height 2 at every integer multiple of π.
As more sinusoidal terms are added (i.e., increasing ), the error of the partial Fourier series converges to a fixed height. But because the width of the error continues to narrow, the area of the error – and hence the energy of the error – converges to 0. The square wave analysis reveals that the error exceeds the height (from zero) of the square wave by
(2/π) ∫₀^π (sin t)/t dt − 1 ≈ 0.178980,
or about 9% of the full jump of 2. More generally, at any discontinuity of a piecewise continuously differentiable function with a jump of a, the Nth partial Fourier series of the function will (for N very large) overshoot this jump by an error approaching 0.0895·a at one end and undershoot it by the same amount at the other end; thus the "full jump" in the partial Fourier series will be about 18% larger than the full jump in the original function. At the discontinuity, the partial Fourier series will converge to the midpoint of the jump (regardless of the actual value of the original function at the discontinuity) as a consequence of Dirichlet's theorem. The quantity
∫₀^π (sin t)/t dt = Si(π) ≈ 1.851937

is sometimes known as the Wilbraham–Gibbs constant.
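The overshoot can be observed numerically. A small sketch using the square wave sign(sin x), whose Nth partial sum is (4/π) Σ sin((2k − 1)x)/(2k − 1) (a standard parametrization, assumed here):

```python
import math

def partial_sum(x, N):
    """N-term partial Fourier sum of the square wave sign(sin x)."""
    return sum(4 / math.pi * math.sin((2 * k - 1) * x) / (2 * k - 1)
               for k in range(1, N + 1))

# The first overshoot peak sits near x = pi/(2N); scan a fine grid just past 0.
N = 200
peak = max(partial_sum(i * 1e-4, N) for i in range(1, 2000))
print(round(peak, 3))   # ≈ 1.179: exceeds the level 1 by ~9% of the full jump of 2
```

Increasing N moves the peak closer to the jump but does not lower it, which is exactly the Gibbs phenomenon.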
|
https://en.wikipedia.org/wiki/Limit%20cardinal
|
In mathematics, limit cardinals are certain cardinal numbers. A cardinal number λ is a weak limit cardinal if λ is neither a successor cardinal nor zero. This means that one cannot "reach" λ from another cardinal by repeated successor operations. These cardinals are sometimes called simply "limit cardinals" when the context is clear.
A cardinal λ is a strong limit cardinal if λ cannot be reached by repeated powerset operations. This means that λ is nonzero and, for all κ < λ, 2^κ < λ. Every strong limit cardinal is also a weak limit cardinal, because κ⁺ ≤ 2^κ for every cardinal κ, where κ⁺ denotes the successor cardinal of κ.
The first infinite cardinal, ℵ₀ (aleph-naught), is a strong limit cardinal, and hence also a weak limit cardinal.
Constructions
One way to construct limit cardinals is via the union operation: ℵ_ω is a weak limit cardinal, defined as the union of all the alephs before it; and in general ℵ_λ for any limit ordinal λ is a weak limit cardinal.
The ℶ (beth) operation can be used to obtain strong limit cardinals. This operation is a map from ordinals to cardinals defined as

ℶ_0 = ℵ₀,
ℶ_{α+1} = 2^{ℶ_α} (the smallest cardinal equinumerous with the powerset),
ℶ_λ = sup{ℶ_α : α < λ} if λ is a limit ordinal.

The cardinal

ℶ_ω = sup{ℶ_n : n < ω}

is a strong limit cardinal of cofinality ω. More generally, given any ordinal α, the cardinal

ℶ_{α+ω} = sup{ℶ_{α+n} : n < ω}

is a strong limit cardinal. Thus there are arbitrarily large strong limit cardinals.
Relationship with ordinal subscripts
If the axiom of choice holds, every cardinal number has an initial ordinal. If that initial ordinal is ω_λ, then the cardinal number is of the form ℵ_λ for the same ordinal subscript λ. The ordinal λ determines whether ℵ_λ is a weak limit cardinal, because if λ is a successor ordinal then ℵ_λ is not a weak limit. Conversely, if a cardinal κ is a successor cardinal, say κ = ℵ_{α+1}, then its initial ordinal is ω_{α+1}. Thus, in general, ℵ_λ is a weak limit cardinal if and only if λ is zero or a limit ordinal.
Although the ordinal subscript tells us whether a cardinal is a weak limit, it does not tell us whether a cardinal is a strong limit. For example, ZFC proves that ℵ_ω is a weak limit cardinal, but neither proves nor disproves that ℵ_ω is a strong limit cardinal (Hrbacek and Jech 1999:168). The generalized continuum hypothesis states that 2^κ = κ⁺ for every infinite cardinal κ. Under this hypothesis, the notions of weak and strong limit cardinals coincide.
The notion of inaccessibility and large cardinals
The preceding defines a notion of "inaccessibility": we are dealing with cases where it is no longer enough to do finitely many iterations of the successor and powerset operations; hence the phrase "cannot be reached" in both of the intuitive definitions above. But the "union operation" always provides another way of "accessing" these cardinals (and indeed, such is the case of limit ordinals as well). Stronger notions of inaccessibility can be defined using cofinality. For a weak (respectively strong) limit cardinal κ the requirement is that cf(κ) = κ (i.e. κ be regular) so that κ cannot be expressed as a sum (union) of fewer than κ smaller cardinals.
|
https://en.wikipedia.org/wiki/Regular%20cardinal
|
In set theory, a regular cardinal is a cardinal number that is equal to its own cofinality. More explicitly, this means that κ is a regular cardinal if and only if every unbounded subset C ⊆ κ has cardinality κ. Infinite well-ordered cardinals that are not regular are called singular cardinals. Finite cardinal numbers are typically not called regular or singular.
In the presence of the axiom of choice, any cardinal number can be well-ordered, and then the following are equivalent for a cardinal κ:
κ is a regular cardinal.
If κ = Σ_{i ∈ I} λ_i and λ_i < κ for all i ∈ I, then |I| ≥ κ.
If S = ⋃_{i ∈ I} S_i, and if |I| < κ and |S_i| < κ for all i ∈ I, then |S| < κ.
The category of sets of cardinality less than κ and all functions between them is closed under colimits of cardinality less than κ.
κ is a regular ordinal (see below).
Crudely speaking, this means that a regular cardinal is one that cannot be broken down into a small number of smaller parts.
The situation is slightly more complicated in contexts where the axiom of choice might fail, as in that case not all cardinals are necessarily the cardinalities of well-ordered sets. In that case, the above equivalence holds for well-orderable cardinals only.
An infinite ordinal α is a regular ordinal if it is a limit ordinal that is not the limit of a set of smaller ordinals that as a set has order type less than α. A regular ordinal is always an initial ordinal, though some initial ordinals are not regular, e.g., ω_ω (see the example below).
Examples
The ordinals less than ω are finite. A finite sequence of finite ordinals always has a finite maximum, so ω cannot be the limit of any sequence of type less than ω whose elements are ordinals less than ω, and is therefore a regular ordinal. ℵ₀ (aleph-null) is a regular cardinal because its initial ordinal, ω, is regular. It can also be seen directly to be regular, as the cardinal sum of a finite number of finite cardinal numbers is itself finite.
ω + 1 is the next ordinal number greater than ω. It is singular, since it is not a limit ordinal. ω + ω is the next limit ordinal after ω. It can be written as the limit of the sequence ω, ω + 1, ω + 2, ω + 3, and so on. This sequence has order type ω, so ω + ω is the limit of a sequence of type less than ω + ω whose elements are ordinals less than ω + ω; therefore it is singular.
ℵ₁ is the next cardinal number greater than ℵ₀, so the cardinals less than ℵ₁ are countable (finite or denumerable). Assuming the axiom of choice, the union of a countable set of countable sets is itself countable. So ℵ₁ cannot be written as the sum of a countable set of countable cardinal numbers, and is regular.
ℵ_ω is the next cardinal number after the sequence ℵ₀, ℵ₁, ℵ₂, ℵ₃, and so on. Its initial ordinal ω_ω is the limit of the sequence ω, ω₁, ω₂, ω₃, and so on, which has order type ω, so ω_ω is singular, and so is ℵ_ω. Assuming the axiom of choice, ℵ_ω is the first infinite cardinal that is singular (the first infinite ordinal that is singular is ω + 1, and the first infinite limit ordinal that is singular is ω + ω). Proving the existence of singular cardinals requires the axiom of replacement, and in fact
|
https://en.wikipedia.org/wiki/Inverse%20trigonometric%20functions
|
In mathematics, the inverse trigonometric functions (occasionally also called arcus functions, antitrigonometric functions or cyclometric functions) are the inverse functions of the trigonometric functions (with suitably restricted domains). Specifically, they are the inverses of the sine, cosine, tangent, cotangent, secant, and cosecant functions, and are used to obtain an angle from any of the angle's trigonometric ratios. Inverse trigonometric functions are widely used in engineering, navigation, physics, and geometry.
Notation
Several notations for the inverse trigonometric functions exist. The most common convention is to name inverse trigonometric functions using an arc- prefix: arcsin(x), arccos(x), arctan(x), etc. (This convention is used throughout this article.) This notation arises from the following geometric relationships:
when measuring in radians, an angle of θ radians will correspond to an arc whose length is rθ, where r is the radius of the circle. Thus in the unit circle, "the arc whose cosine is x" is the same as "the angle whose cosine is x", because the length of the arc of the circle in radii is the same as the measurement of the angle in radians. In computer programming languages, the inverse trigonometric functions are often called by the abbreviated forms asin, acos, atan.
The notations sin⁻¹(x), cos⁻¹(x), tan⁻¹(x), etc., as introduced by John Herschel in 1813, are often used as well in English-language sources – conventions consistent with the notation of an inverse function, which is useful (for example) to define the multivalued version of each inverse trigonometric function: tan⁻¹(x) = {arctan(x) + πk : k an integer}. However, this might appear to conflict logically with the common semantics for expressions such as sin²(x) (although only sin²x, without parentheses, is the really common use), which refer to numeric power rather than function composition, and therefore may result in confusion between notation for the reciprocal (multiplicative inverse) and inverse function.
The confusion is somewhat mitigated by the fact that each of the reciprocal trigonometric functions has its own name — for example, (cos x)⁻¹ = sec x. Nevertheless, certain authors advise against using it, since it is ambiguous. Another precarious convention used by a small number of authors is to use an uppercase first letter, along with a "−1" superscript: Sin⁻¹(x), Cos⁻¹(x), Tan⁻¹(x), etc. Although it is intended to avoid confusion with the reciprocal, which should be represented by sin⁻¹(x), cos⁻¹(x), etc., or, better, by (sin x)⁻¹, (cos x)⁻¹, etc., it in turn creates yet another major source of ambiguity, especially since many popular high-level programming languages (e.g. Mathematica, and MAGMA) use those very same capitalised representations for the standard trig functions, whereas others (Python, SymPy, NumPy, Matlab, MAPLE, etc.) use lower-case.
Hence, since 2009, the ISO 80000-2 standard has specified solely the "arc" prefix for the inverse functions.
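As noted above, programming languages typically expose these functions under the abbreviated asin/acos/atan names. A small sketch with Python's math module, which returns principal values in radians:

```python
import math

# Principal values, in radians
print(math.asin(0.5))            # pi/6 ≈ 0.5236
print(math.acos(0.5))            # pi/3 ≈ 1.0472
print(math.atan(1.0))            # pi/4 ≈ 0.7854

# atan2(y, x) recovers the quadrant that atan(y/x) alone cannot:
print(math.atan2(-1.0, -1.0))    # -3*pi/4, whereas atan(-1/-1) = atan(1) = pi/4
```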
Basic concepts
Principal values
Since none of the six trigonometric functions are one-to-one, they must be restricted in order to have inverse functions.
|
https://en.wikipedia.org/wiki/Triangular%20matrix
|
In mathematics, a triangular matrix is a special kind of square matrix. A square matrix is called lower triangular if all the entries above the main diagonal are zero. Similarly, a square matrix is called upper triangular if all the entries below the main diagonal are zero.
Because matrix equations with triangular matrices are easier to solve, they are very important in numerical analysis. By the LU decomposition algorithm, an invertible matrix may be written as the product of a lower triangular matrix L and an upper triangular matrix U if and only if all its leading principal minors are non-zero.
Description
A matrix of the form
is called a lower triangular matrix or left triangular matrix, and analogously a matrix of the form
is called an upper triangular matrix or right triangular matrix. A lower or left triangular matrix is commonly denoted with the variable L, and an upper or right triangular matrix is commonly denoted with the variable U or R.
A matrix that is both upper and lower triangular is diagonal. Matrices that are similar to triangular matrices are called triangularisable.
A non-square (or sometimes any) matrix with zeros above (below) the diagonal is called a lower (upper) trapezoidal matrix. The non-zero entries form the shape of a trapezoid.
Examples
This matrix

1 0 0
2 8 0
4 9 7

is lower triangular, and this matrix

1 4 2
0 3 4
0 0 1

is upper triangular.
Forward and back substitution
A matrix equation in the form Lx = b or Ux = b is very easy to solve by an iterative process called forward substitution for lower triangular matrices and analogously back substitution for upper triangular matrices. The process is so called because for lower triangular matrices, one first computes x₁, then substitutes that forward into the next equation to solve for x₂, and repeats through to x_n. In an upper triangular matrix, one works backwards, first computing x_n, then substituting that back into the previous equation to solve for x_{n−1}, and repeating through x₁.
Notice that this does not require inverting the matrix.
Forward substitution
The matrix equation Lx = b can be written as a system of linear equations

ℓ_{1,1} x₁ = b₁
ℓ_{2,1} x₁ + ℓ_{2,2} x₂ = b₂
⋮
ℓ_{m,1} x₁ + ℓ_{m,2} x₂ + ⋯ + ℓ_{m,m} x_m = b_m

Observe that the first equation (ℓ_{1,1} x₁ = b₁) only involves x₁, and thus one can solve for x₁ directly. The second equation only involves x₁ and x₂, and thus can be solved once one substitutes in the already solved value for x₁. Continuing in this way, the k-th equation only involves x₁, …, x_k, and one can solve for x_k using the previously solved values for x₁, …, x_{k−1}. The resulting formulas are:

x₁ = b₁ / ℓ_{1,1},
x₂ = (b₂ − ℓ_{2,1} x₁) / ℓ_{2,2},
⋮
x_m = (b_m − Σ_{i=1}^{m−1} ℓ_{m,i} x_i) / ℓ_{m,m}.
A matrix equation with an upper triangular matrix U can be solved in an analogous way, only working backwards.
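A minimal sketch of both procedures in plain Python, using lists of lists for the matrices (the function names are ours):

```python
def forward_substitution(L, b):
    """Solve L x = b for lower triangular L, top row first."""
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * x[j] for j in range(i))
        x[i] = (b[i] - s) / L[i][i]
    return x

def back_substitution(U, b):
    """Solve U x = b for upper triangular U, bottom row first."""
    n = len(b)
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

print(forward_substitution([[2.0, 0.0], [3.0, 4.0]], [2.0, 11.0]))  # [1.0, 2.0]
print(back_substitution([[4.0, 3.0], [0.0, 2.0]], [11.0, 2.0]))     # [2.0, 1.0]
```

Neither routine inverts the matrix; each solves the system in O(n²) operations, which is the point made above.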
Applications
Forward substitution is used in financial bootstrapping to construct a yield curve.
Properties
The transpose of an upper triangular matrix is a lower triangular matrix and vice versa.
A matrix which is both symmetric and triangular is diagonal.
In a similar vein, a matrix which is both normal (meaning A*A = AA*, where A* is the conjugate transpose) and triangular is also diagonal. This can be seen by looking at the diagonal entries of A*A and AA*.
The
|
https://en.wikipedia.org/wiki/MathWorld
|
MathWorld is an online mathematics reference work, created and largely written by Eric W. Weisstein. It is sponsored by and licensed to Wolfram Research, Inc. and was partially funded by the National Science Foundation's National Science Digital Library grant to the University of Illinois at Urbana–Champaign.
History
Eric W. Weisstein, the creator of the site, was a physics and astronomy student who got into the habit of writing notes on his mathematical readings. In 1995 he put his notes online and called it "Eric's Treasure Trove of Mathematics." It contained hundreds of pages/articles, covering a wide range of mathematical topics. The site became popular as an extensive single resource on mathematics on the web. In 1998, he made a contract with CRC Press and the contents of the site were published in print and CD-ROM form, titled "CRC Concise Encyclopedia of Mathematics." The free online version became only partially accessible to the public. In 1999 Weisstein went to work for Wolfram Research, Inc. (WRI), and WRI renamed the Math Treasure Trove to MathWorld and hosted it on the company's website without access restrictions.
CRC lawsuit
In 2000, CRC Press sued Wolfram Research Inc. (WRI), WRI president Stephen Wolfram, and author Eric W. Weisstein, due to what they considered a breach of contract: that the MathWorld content was to remain in print only. The site was taken down by a court injunction.
The case was later settled out of court, with WRI paying an unspecified amount and complying with other stipulations. Among these stipulations is the inclusion of a copyright notice at the bottom of the website and broad rights for the CRC Press to produce MathWorld in printed book form. The site then became once again available free to the public.
This case made a wave of headlines in online publishing circles. The PlanetMath project was a result of MathWorld's being unavailable.
See also
List of online encyclopedias
Mathematica
References
External links
Mathematics websites
American educational websites
American online encyclopedias
Mathworld
Encyclopedias of mathematics
|
https://en.wikipedia.org/wiki/Trigonometric%20substitution
|
In mathematics, trigonometric substitution is the replacement of trigonometric functions for other expressions. In calculus, trigonometric substitution is a technique for evaluating integrals. Moreover, one may use the trigonometric identities to simplify certain integrals containing radical expressions. Like other methods of integration by substitution, when evaluating a definite integral, it may be simpler to completely deduce the antiderivative before applying the boundaries of integration.
Case I: Integrands containing a² − x²
Let x = a sin θ, and use the identity 1 − sin²θ = cos²θ.
Examples of Case I
Example 1
In the integral

∫ dx/√(a² − x²),

we may use

x = a sin θ, dx = a cos θ dθ.

Then,

∫ dx/√(a² − x²) = ∫ (a cos θ dθ)/√(a² − a² sin²θ) = ∫ (a cos θ dθ)/(a cos θ) = ∫ dθ = θ + C = arcsin(x/a) + C.
The above step requires that a > 0 and cos θ > 0. We can choose a to be the principal root of a², and impose the restriction −π/2 < θ < π/2 by using the inverse sine function.
For a definite integral, one must figure out how the bounds of integration change. For example, as x goes from 0 to a/2, then sin θ goes from 0 to 1/2, so θ goes from 0 to π/6. Then,

∫₀^{a/2} dx/√(a² − x²) = ∫₀^{π/6} dθ = π/6.
Some care is needed when picking the bounds. Because the integration above requires that −π/2 < θ < π/2, θ can only go from 0 to π/6. Neglecting this restriction, one might have picked θ to go from π to 5π/6, which would have resulted in the negative of the actual value.
Alternatively, fully evaluate the indefinite integrals before applying the boundary conditions. In that case, the antiderivative gives

∫₀^{a/2} dx/√(a² − x²) = arcsin(x/a) |₀^{a/2} = arcsin(1/2) − arcsin(0) = π/6

as before.
Example 2
The integral

∫ √(a² − x²) dx

may be evaluated by letting

x = a sin θ, dx = a cos θ dθ, θ = arcsin(x/a),

where a > 0 so that a = √(a²), and −π/2 ≤ θ ≤ π/2 by the range of arcsine, so that cos θ ≥ 0 and √(a² − x²) = a cos θ.

Then,

∫ √(a² − x²) dx = ∫ (a cos θ)(a cos θ) dθ = a² ∫ cos²θ dθ = (a²/2)(θ + sin θ cos θ) + C = (a²/2) arcsin(x/a) + (x/2)√(a² − x²) + C.
For a definite integral, the bounds change once the substitution is performed and are determined using the equation θ = arcsin(x/a), with values in the range −π/2 ≤ θ ≤ π/2. Alternatively, apply the boundary terms directly to the formula for the antiderivative.
For example, the definite integral

∫_{−1}^{1} √(4 − x²) dx

may be evaluated by substituting x = 2 sin θ, dx = 2 cos θ dθ, with the bounds determined using θ = arcsin(x/2).

Because arcsin(1/2) = π/6 and arcsin(−1/2) = −π/6,

∫_{−1}^{1} √(4 − x²) dx = ∫_{−π/6}^{π/6} 4 cos²θ dθ = [2θ + sin 2θ]_{−π/6}^{π/6} = 2π/3 + √3.

On the other hand, direct application of the boundary terms to the previously obtained formula for the antiderivative yields

[2 arcsin(x/2) + (x/2)√(4 − x²)]_{−1}^{1} = 2π/3 + √3

as before.
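The Case I substitution can be sanity-checked numerically. A sketch with a = 1, comparing a midpoint-rule value of ∫₀¹ √(1 − x²) dx with the substituted integral ∫₀^{π/2} cos²θ dθ (both equal π/4, the area of a quarter unit disc; the helper name is ours):

```python
import math

def midpoint(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

direct = midpoint(lambda x: math.sqrt(1 - x * x), 0.0, 1.0)
# After x = sin(theta), dx = cos(theta) dtheta, the bounds become [0, pi/2]:
substituted = midpoint(lambda t: math.cos(t) ** 2, 0.0, math.pi / 2)
print(direct, substituted)   # both ≈ pi/4 ≈ 0.785398...
```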
Case II: Integrands containing a² + x²
Let x = a tan θ, and use the identity 1 + tan²θ = sec²θ.
Examples of Case II
Example 1
In the integral

∫ dx/(a² + x²),

we may write

x = a tan θ, dx = a sec²θ dθ,

so that the integral becomes

∫ dx/(a² + x²) = ∫ (a sec²θ dθ)/(a² + a² tan²θ) = ∫ (a sec²θ dθ)/(a² sec²θ) = (1/a) ∫ dθ = θ/a + C = (1/a) arctan(x/a) + C,

provided a ≠ 0.
For a definite integral, the bounds change once the substitution is performed and are determined using the equation θ = arctan(x/a), with values in the range −π/2 < θ < π/2. Alternatively, apply the boundary terms directly to the formula for the antiderivative.
For example, the definite integral

∫₀^1 4 dx/(1 + x²)

may be evaluated by substituting x = tan θ, dx = sec²θ dθ, with the bounds determined using θ = arctan x.

Since arctan 0 = 0 and arctan 1 = π/4,

∫₀^1 4 dx/(1 + x²) = 4 ∫₀^{π/4} dθ = π.

Meanwhile, direct application of the boundary terms to the formula for the antiderivative yields

[4 arctan x]₀^1 = 4(π/4) − 0 = π,

the same as before.
Example 2
The integral

∫ √(a² + x²) dx

may be evaluated by letting

x = a tan θ, dx = a sec²θ dθ, θ = arctan(x/a),

where a > 0 so that a = √(a²), and −π/2 < θ < π/2 by the range of arctangent, so that sec θ > 0 and √(a² + x²) = a sec θ.

Then,

∫ √(a² + x²) dx = ∫ (a sec θ)(a sec²θ) dθ = a² ∫ sec³θ dθ.

The integral of secant cubed may be evaluated using integration by parts. As a result,

∫ √(a² + x²) dx = (a²/2)(sec θ tan θ + ln|sec θ + tan θ|) + C = (x/2)√(a² + x²) + (a²/2) ln((x + √(a² + x²))/a) + C.
Case III: Integrands containing x² − a²
Let x = a sec θ, and use the identity sec²θ − 1 = tan²θ.
Examples of Case III
Integrals like

∫ dx/(x² − a²)

can also be evaluated by partial fractions rather than trigonometric substitutions. However, the integral
|
https://en.wikipedia.org/wiki/Successor%20cardinal
|
In set theory, one can define a successor operation on cardinal numbers in a similar way to the successor operation on the ordinal numbers. The cardinal successor coincides with the ordinal successor for finite cardinals, but in the infinite case they diverge because every infinite ordinal and its successor have the same cardinality (a bijection can be set up between the two by simply sending the last element of the successor to 0, 0 to 1, etc., and fixing ω and all the elements above; in the style of Hilbert's Hotel Infinity). Using the von Neumann cardinal assignment and the axiom of choice (AC), this successor operation is easy to define: for a cardinal number κ we have
κ⁺ = inf{λ ∈ ON : κ < |λ|},
where ON is the class of ordinals. That is, the successor cardinal is the cardinality of the least ordinal into which a set of the given cardinality can be mapped one-to-one, but which cannot be mapped one-to-one back into that set.
That the set above is nonempty follows from Hartogs' theorem, which says that for any well-orderable cardinal, a larger such cardinal is constructible. The minimum actually exists because the ordinals are well-ordered. It is therefore immediate that there is no cardinal number in between κ and κ+. A successor cardinal is a cardinal that is κ+ for some cardinal κ. In the infinite case, the successor operation skips over many ordinal numbers; in fact, every infinite cardinal is a limit ordinal. Therefore, the successor operation on cardinals gains a lot of power in the infinite case (relative the ordinal successorship operation), and consequently the cardinal numbers are a very "sparse" subclass of the ordinals. We define the sequence of alephs (via the axiom of replacement) via this operation, through all the ordinal numbers as follows:
ℵ_{α+1} = (ℵ_α)⁺ for each ordinal α, and for λ an infinite limit ordinal, ℵ_λ = ⋃_{β<λ} ℵ_β.
If β is a successor ordinal, then ℵ_β is a successor cardinal. Cardinals that are not successor cardinals are called limit cardinals; and by the above definition, if λ is a limit ordinal, then ℵ_λ is a limit cardinal.
The standard definition above is restricted to the case when the cardinal can be well-ordered, i.e. is finite or an aleph. Without the axiom of choice, there are cardinals that cannot be well-ordered. Some mathematicians have defined the successor of such a cardinal as the cardinality of the least ordinal that cannot be mapped one-to-one into a set of the given cardinality. That is:
$\kappa^+ = \min\{\lambda \in \mathrm{ON} \mid \text{there is no one-to-one map from } \lambda \text{ into } \kappa\},$
which is the Hartogs number of κ.
See also
Cardinal assignment
References
Halmos, Paul, 1960. Naive Set Theory. Princeton, NJ: D. Van Nostrand Company. Reprinted by Springer-Verlag, New York, 1974.
Jech, Thomas, 2003. Set Theory: The Third Millennium Edition, Revised and Expanded. Springer.
Kunen, Kenneth, 1980. Set Theory: An Introduction to Independence Proofs. Elsevier.
Cardinal numbers
Set theory
|
https://en.wikipedia.org/wiki/Successor%20ordinal
|
In set theory, the successor of an ordinal number α is the smallest ordinal number greater than α. An ordinal number that is a successor is called a successor ordinal. The ordinals 1, 2, and 3 are the first three successor ordinals and the ordinals ω+1, ω+2 and ω+3 are the first three infinite successor ordinals.
Properties
Every ordinal other than 0 is either a successor ordinal or a limit ordinal.
In Von Neumann's model
Using von Neumann's ordinal numbers (the standard model of the ordinals used in set theory), the successor S(α) of an ordinal number α is given by the formula
$S(\alpha) = \alpha \cup \{\alpha\}.$
Since the ordering on the ordinal numbers is given by α < β if and only if α ∈ β, it is immediate that there is no ordinal number between α and S(α), and it is also clear that α < S(α).
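For finite ordinals the construction can be modeled directly; the sketch below (illustrative, not part of the article) represents an ordinal as the frozenset of all smaller ordinals, so that S(α) = α ∪ {α} and α < β coincides with α ∈ β:

```python
# Model finite von Neumann ordinals as frozensets of all smaller ordinals.
def zero():
    return frozenset()

def successor(a):
    # S(α) = α ∪ {α}
    return a | {a}

two = successor(successor(zero()))
three = successor(two)

print(len(three))    # 3: the ordinal 3 contains exactly 0, 1 and 2
print(two in three)  # True: α < S(α), and "<" is "∈" on von Neumann ordinals
```

Using frozensets makes each ordinal hashable, so it can itself be an element of larger ordinals, exactly as in the set-theoretic definition.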
Ordinal addition
The successor operation can be used to define ordinal addition rigorously via transfinite recursion as follows:
$\alpha + 0 = \alpha, \qquad \alpha + S(\beta) = S(\alpha + \beta),$
and for a limit ordinal λ,
$\alpha + \lambda = \bigcup_{\beta < \lambda} (\alpha + \beta).$
In particular, $\alpha + 1 = S(\alpha)$. Multiplication and exponentiation are defined similarly.
Topology
The successor points and zero are the isolated points of the class of ordinal numbers, with respect to the order topology.
See also
Ordinal arithmetic
Limit ordinal
Successor cardinal
References
Ordinal numbers
|
https://en.wikipedia.org/wiki/Mereology
|
In logic, philosophy and related fields, mereology (from the Greek μέρος, mere-, 'part', and the suffix -logy, 'study, discussion, science') is the study of parts and the wholes they form. Whereas set theory is founded on the membership relation between a set and its elements, mereology emphasizes the meronomic relation between entities, which, from a set-theoretic perspective, is closer to the concept of inclusion between sets.
Mereology has been explored in various ways as applications of predicate logic to formal ontology, in each of which mereology is an important part. Each of these fields provides its own axiomatic definition of mereology. A common element of such axiomatizations is the assumption, shared with inclusion, that the part-whole relation orders its universe, meaning that everything is a part of itself (reflexivity), that a part of a part of a whole is itself a part of that whole (transitivity), and that two distinct entities cannot each be a part of the other (antisymmetry), thus forming a poset. A variant of this axiomatization denies that anything is ever part of itself (irreflexivity) while accepting transitivity, from which antisymmetry follows automatically.
Although mereology is an application of mathematical logic, and could be argued to be a sort of "proto-geometry", it has been wholly developed by logicians, ontologists, linguists, engineers, and computer scientists, especially those working in artificial intelligence. In particular, mereology is also the basis for a point-free foundation of geometry (see for example the pioneering paper of Alfred Tarski and the review paper by Gerla 1995).
In general systems theory, mereology refers to formal work on system decomposition and parts, wholes and boundaries (by, e.g., Mihajlo D. Mesarovic (1970), Gabriel Kron (1963), or Maurice Jessel (see Bowden (1989, 1998)). A hierarchical version of Gabriel Kron's Network Tearing was published by Keith Bowden (1991), reflecting David Lewis's ideas on gunk. Such ideas appear in theoretical computer science and physics, often in combination with sheaf theory, topos, or category theory. See also the work of Steve Vickers on (parts of) specifications in computer science, Joseph Goguen on physical systems, and Tom Etter (1996, 1998) on link theory and quantum mechanics.
History
Informal part-whole reasoning was consciously invoked in metaphysics and ontology from Plato (in particular, in the second half of the Parmenides) and Aristotle onwards, and more or less unwittingly in 19th-century mathematics until the triumph of set theory around 1910. Metaphysical ideas of this era that discuss the concepts of parts and wholes include divine simplicity and the classical conception of beauty.
Ivor Grattan-Guinness (2001) sheds much light on part-whole reasoning during the 19th and early 20th centuries, and reviews how Cantor and Peano devised set theory. It appears that the first to reason consciously and at length about parts and wholes was Ed
|
https://en.wikipedia.org/wiki/Frieze%20group
|
In mathematics, a frieze or frieze pattern is a two-dimensional design that repeats in one direction. Such patterns occur frequently in architecture and decorative art. Frieze patterns can be classified into seven types according to their symmetries. The set of symmetries of a frieze pattern is called a frieze group.
Frieze groups are two-dimensional line groups, having repetition in only one direction. They are related to the more complex wallpaper groups, which classify patterns that are repetitive in two directions, and crystallographic groups, which classify patterns that are repetitive in three directions.
General
Formally, a frieze group is a class of infinite discrete symmetry groups of patterns on a strip (infinitely wide rectangle), hence a class of groups of isometries of the plane, or of a strip. A symmetry group of a frieze group necessarily contains translations and may contain glide reflections, reflections along the long axis of the strip, reflections along the narrow axis of the strip, and 180° rotations. There are seven frieze groups, listed in the summary table. Many authors present the frieze groups in a different order.
The actual symmetry groups within a frieze group are characterized by the smallest translation distance, and, for the frieze groups with vertical line reflection or 180° rotation (groups 2, 5, 6, and 7), by a shift parameter locating the reflection axis or point of rotation. In the case of symmetry groups in the plane, additional parameters are the direction of the translation vector, and, for the frieze groups with horizontal line reflection, glide reflection, or 180° rotation (groups 3–7), the position of the reflection axis or rotation point in the direction perpendicular to the translation vector. Thus there are two degrees of freedom for group 1, three for groups 2, 3, and 4, and four for groups 5, 6, and 7.
For two of the seven frieze groups (groups 1 and 4) the symmetry groups are singly generated, for four (groups 2, 3, 5, and 6) they have a pair of generators, and for group 7 the symmetry groups require three generators. A symmetry group in frieze group 1, 2, 3, or 5 is a subgroup of a symmetry group in the last frieze group with the same translational distance. A symmetry group in frieze group 4 or 6 is a subgroup of a symmetry group in the last frieze group with half the translational distance. This last frieze group contains the symmetry groups of the simplest periodic patterns in the strip (or the plane), a row of dots. Any transformation of the plane leaving this pattern invariant can be decomposed into a translation, optionally followed by a reflection in either the horizontal axis or the vertical axis (provided that this axis is chosen through or midway between two dots), or by a rotation by 180° (about a point likewise chosen through or midway between two dots). Therefore, in a way, this frieze group contains the "largest" symmetry groups, which consist of all such transformations.
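The decomposition described above can be sketched concretely (an illustrative encoding, not the article's notation): an isometry of the strip fixing the row of dots acts as (x, y) ↦ (ax + t, by) with a, b ∈ {+1, −1}, and can be stored as a triple (a, b, t):

```python
# a = -1 encodes a vertical-axis reflection, b = -1 a horizontal-axis
# reflection, both together a 180° rotation; t is the translation part.

def compose(f, g):
    # (f ∘ g): apply g first, then f
    (af, bf, tf), (ag, bg, tg) = f, g
    return (af * ag, bf * bg, af * tg + tf)

H = (1, -1, 0.0)   # reflection in the horizontal axis
V = (-1, 1, 0.0)   # reflection in a vertical axis through a dot
R = compose(H, V)
print(R)           # (-1, -1, 0.0): composing the two reflections gives a 180° rotation
```

The composition rule also shows why reflections are involutions: composing H (or V) with itself returns the identity (1, 1, 0.0).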
The inclusion of the discrete condition is to exc
|
https://en.wikipedia.org/wiki/Identity
|
Identity may refer to:
Identity document
Identity (philosophy)
Identity (social science)
Identity (mathematics)
Arts and entertainment
Film and television
Identity (1987 film), an Iranian film
Identity (2003 film), an American slasher film
Identity (game show), an American game show
Identity (TV series), a British police procedural drama television series
"Identity" (Arrow), a 2013 episode
"Identity" (Burn Notice), a 2007 episode
"Identity" (Charlie Jade), a 2005 episode
"Identity" (Legend of the Seeker), a 2008 episode
"Identity" (Law & Order: Special Victims Unit episode), 2005
"Identity" (NCIS: Los Angeles), a 2009 pilot episode
Music
Albums
Identity (3T album), 2004
Identity (BoA album), 2010
Identity (Far East Movement album), 2016
Identity (Robert Pierre album), 2008
Identity (Raghav album), 2008
Identity (Victon EP), 2017
Identity (Zee album), 1984
Songs
"Identity" (Sakanaction song), 2010
"Identity" (X-Ray Spex song), 1978
"Identity", a 1983 song by Bucks Fizz, B-side to "London Town"
Other uses in music
Identity (music), in post-tonal music theory
Identity (tuning), an odd member below and including a limit
Publications
Identity, a defunct quarterly Australian magazine published by the Aboriginal Publications Foundation (1971–1982)
Identity (novel), by Milan Kundera, 1998
Business
Accounting identity, calculation that must be true regardless of its variables
Brand identity, the expression of a brand
Corporate identity, the manner a corporation presents itself to the public
Philosophy and social science
Identity (philosophy), the relation each thing bears only to itself
Law of identity, that each thing is identical with itself
Personal identity, the numerical identity of a person over time
Identity (social science), qualities etc that characterize a person or group
Political identity
Science
Digital identity, information used by computer systems to represent an external agent
Identity (object-oriented programming), the property of objects that distinguishes them from other objects
Identity (mathematics), an equality that holds regardless of the values of its variables
Identity element, an element of the set which leaves unchanged every element when the operation is applied
Identity function, a function that leaves its argument unchanged
Identity matrix, with ones on the main diagonal, zeros elsewhere
Other uses
Identity document, or ID
See also
Biometrics
Collective identity
Cultural diversity
Cultural identity
Entity (disambiguation)
ID (disambiguation)
Identification (disambiguation)
Identifier, a name that identifies a unique object or class of objects
Identity politics
National identity
Outline of self
Personal data
Personal identity (disambiguation)
Secret identity (disambiguation)
The Bourne Identity (disambiguation)
|
https://en.wikipedia.org/wiki/Truncated%20dodecahedron
|
In geometry, the truncated dodecahedron is an Archimedean solid. It has 12 regular decagonal faces, 20 regular triangular faces, 60 vertices and 90 edges.
Geometric relations
This polyhedron can be formed from a regular dodecahedron by truncating (cutting off) the corners so the pentagon faces become decagons and the corners become triangles.
It is used in the cell-transitive hyperbolic space-filling tessellation, the bitruncated icosahedral honeycomb.
Area and volume
The area A and the volume V of a truncated dodecahedron of edge length a are:
$A = 5\left(\sqrt{3} + 6\sqrt{5 + 2\sqrt{5}}\right)a^2 \approx 100.99076a^2$
$V = \tfrac{5}{12}\left(99 + 47\sqrt{5}\right)a^3 \approx 85.03966a^3$
Cartesian coordinates
Cartesian coordinates for the vertices of a truncated dodecahedron with edge length 2φ − 2, centered at the origin, are all even permutations of:
(0, ±1/φ, ±(2 + φ))
(±1/φ, ±φ, ±2φ)
(±φ, ±2, ±(φ + 1))
where φ = (1 + √5)/2 is the golden ratio.
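These coordinates can be checked numerically. The sketch below (illustrative) generates all sign choices and even (cyclic) permutations of the triples (0, 1/φ, 2 + φ), (1/φ, φ, 2φ) and (φ, 2, φ + 1), then confirms 60 vertices, 90 edges, and edge length 2φ − 2:

```python
from itertools import product
from math import sqrt, isclose

phi = (1 + sqrt(5)) / 2
base = [(0.0, 1/phi, 2 + phi), (1/phi, phi, 2*phi), (phi, 2.0, phi + 1)]

def even_perms(p):
    x, y, z = p
    return [(x, y, z), (y, z, x), (z, x, y)]  # the three cyclic (even) permutations

verts = set()
for b in base:
    for signs in product((1.0, -1.0), repeat=3):
        signed = tuple(s * c for s, c in zip(signs, b))
        for v in even_perms(signed):
            verts.add(tuple(round(c, 9) for c in v))  # rounding dedups ±0.0

verts = sorted(verts)
print(len(verts))                       # 60 vertices

def dist(u, v):
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

dmin = min(dist(u, v) for i, u in enumerate(verts) for v in verts[i+1:])
edges = sum(1 for i, u in enumerate(verts) for v in verts[i+1:]
            if isclose(dist(u, v), dmin, rel_tol=1e-6))
print(edges)                            # 90 edges
print(isclose(dmin, 2*phi - 2, rel_tol=1e-6))  # True: edge length is 2φ - 2
```

Counting pairs at the minimal distance recovers the edges because, in this uniform solid, every edge has the same length and all non-adjacent vertex pairs are strictly farther apart.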
Orthogonal projections
The truncated dodecahedron has five special orthogonal projections, centered on a vertex, on two types of edges, and on two types of faces. The last two correspond to the A2 and H2 Coxeter planes.
Spherical tilings and Schlegel diagrams
The truncated dodecahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane.
Schlegel diagrams are similar, with a perspective projection and straight edges.
Vertex arrangement
It shares its vertex arrangement with three nonconvex uniform polyhedra:
Related polyhedra and tilings
It is part of a truncation process between a dodecahedron and icosahedron:
This polyhedron is topologically related as a part of sequence of uniform truncated polyhedra with vertex configurations (3.2n.2n), and [n,3] Coxeter group symmetry.
Truncated dodecahedral graph
In the mathematical field of graph theory, a truncated dodecahedral graph is the graph of vertices and edges of the truncated dodecahedron, one of the Archimedean solids. It has 60 vertices and 90 edges, and is a cubic Archimedean graph.
Notes
References
(Section 3-9)
External links
Editable printable net of a truncated dodecahedron with interactive 3D view
The Uniform Polyhedra
Virtual Reality Polyhedra The Encyclopedia of Polyhedra
Uniform polyhedra
Archimedean solids
Truncated tilings
|
https://en.wikipedia.org/wiki/Pixel%20geometry
|
The components of the pixels (primary colors red, green and blue) in an image sensor or display can be ordered in different patterns, called pixel geometry.
The geometric arrangement of the primary colors within a pixel varies depending on usage (see figure 1). In monitors, such as LCDs or CRTs, that typically display edges or rectangles, the components are arranged in vertical stripes. Displays with motion pictures should instead have triangular or diagonal patterns so that the image variation is perceived better by the viewer.
Knowledge of the pixel geometry used by a display may be used to create raster images of higher apparent resolution using subpixel rendering.
See also
PenTile matrix family
Quattron
Bayer filter
Subpixel rendering
Pixel
References
Digital imaging
|
https://en.wikipedia.org/wiki/Ten15
|
Ten15 is an algebraically specified abstract machine. It was developed by Foster, Currie et al. at the Royal Signals and Radar Establishment at Malvern, Worcestershire, during the 1980s. It arose from earlier work on the Flex machine, which was a capability computer implemented via microcode. Ten15 was intended to offer an intermediate language common to all implementations of the Flex architecture for portability purposes. It had the side effect of making the benefits of that work available on modern processors lacking a microcode facility.
Ten15 served as an intermediate language for compilers, but with several unique features, some of which have still to see the light of day in everyday systems. Firstly, it was strongly typed, yet wide enough in application to support most languages — C being an exception, chiefly because C deliberately treats an array similar to a pointer to the first element of that array. This ultimately led to Ten15's development into TDF, which in turn formed the basis for ANDF. Secondly, it offered a persistent, write-only filestore mechanism, allowing arbitrary data structures to be written and retrieved without conversion into an external representation.
Historical note
Why 'Ten15'? Nic Peeling reports that during early discussions of the concepts of Ten15, it was agreed that this was important and should have a name - but what? Ian Currie looked up at the clock and said 'Why not call it 10:15?'
See also
Virtual machine
TenDRA Compiler
References
Computer languages
History of computing in the United Kingdom
Malvern, Worcestershire
Science and technology in Worcestershire
Theory of computation
|
https://en.wikipedia.org/wiki/Nontraditional%20student
|
Nontraditional student is a term originating in North America that refers to a category of students at colleges and universities.
The National Center for Education Statistics (NCES) notes that there are varying definitions of nontraditional student. Nontraditional students are contrasted with traditional students who "earn a high school diploma, enroll full time immediately after finishing high school, depend on parents for financial support, and either do not work during the school year or work part time". The NCES categorized anyone who satisfies at least one of the following as a nontraditional student:
Delays enrollment (does not enter postsecondary education in the same calendar year that high school ended)
Attends part-time for at least part of the academic year
Works full-time (35 hours or more per week) while enrolled
Is considered financially independent for purposes of determining eligibility for financial aid
Has dependents other than a spouse (usually children, but may also be caregivers of sick or elderly family members)
Does not have a high school diploma (completed high school with a GED or other high school completion certificate or did not finish high school)
By this definition, the NCES determined that 73% of all undergraduates in 1999–2000 could be considered nontraditional, representing the newly "typical" undergraduate. This remained consistent the following years: 72% in 2003–2004, 72% for 2007–2008, and 74% for 2011–2012.
History
It is uncertain exactly how or when the term “nontraditional student” was first incorporated into educational language. However, it is thought that K. Patricia Cross is responsible for the phrase becoming the accepted and appropriate term to describe adult students.
In 2018, PBS' Next Avenue wrote that nontraditional students were the new normal, stating that the majority of degree seekers were adult learners, a demographic for whom educational institutions are increasingly easing access. The article reported that sixty percent of Americans aged 23 to 55 without bachelor's degrees have considered returning to school, but costs and student debts were deterrents. The author identified four reasons people fifty years or older are returning to school:
searching for a second-chapter career
staying competitive in the workforce
creating new challenges/learning new things
meeting a long-held goal
Demographics
The typical college student is no longer a full-time student who enrolls immediately after high school, lives on-campus and who has limited family, employment, and financial obligations.
Regarding the 2011-2012 demographics distribution of nontraditional undergraduate students in the United States, the following were identified by the National Center for Education Statistics:
49% dependent and 51% independent
28% has dependent(s) and 72% has no dependent
15% single with dependent and 85% single with no dependent
91% high school graduate and 9% high school equivalency
66% delayed postse
|
https://en.wikipedia.org/wiki/Branching%20process
|
In probability theory, a branching process is a type of mathematical object known as a stochastic process, which consists of collections of random variables. The random variables of a stochastic process are indexed by the natural numbers. The original purpose of branching processes was to serve as a mathematical model of a population in which each individual in generation $n$ produces some random number of individuals in generation $n+1$, according, in the simplest case, to a fixed probability distribution that does not vary from individual to individual. Branching processes are used to model reproduction; for example, the individuals might correspond to bacteria, each of which generates 0, 1, or 2 offspring with some probability in a single time unit. Branching processes can also be used to model other systems with similar dynamics, e.g., the spread of surnames in genealogy or the propagation of neutrons in a nuclear reactor.
A central question in the theory of branching processes is the probability of ultimate extinction, where no individuals exist after some finite number of generations. Using Wald's equation, it can be shown that starting with one individual in generation zero, the expected size of generation n equals $\mu^n$, where μ is the expected number of children of each individual. If μ < 1, then the expected number of individuals goes rapidly to zero, which implies ultimate extinction with probability 1 by Markov's inequality. Alternatively, if μ > 1, then the probability of ultimate extinction is less than 1 (but not necessarily zero; consider a process where each individual has either 0 or 100 children with equal probability. In that case, μ = 50, but the probability of ultimate extinction is greater than 0.5, since that is the probability that the first individual has 0 children). If μ = 1, then ultimate extinction occurs with probability 1 unless each individual always has exactly one child.
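The extinction probability is the smallest fixed point in [0, 1] of the offspring probability generating function f(s) = Σ pₖsᵏ, and iterating f from 0 converges to it monotonically; this is a standard fact of branching-process theory assumed here, not derived in the excerpt. A sketch:

```python
def extinction_probability(pgf, iters=10_000):
    # fixed-point iteration from 0 converges to the smallest root of q = f(q)
    q = 0.0
    for _ in range(iters):
        q = pgf(q)
    return q

# A tractable variant of the text's example: 0 or 4 children, each with
# probability 1/2, so mu = 2 > 1, yet extinction is still more likely than not.
q = extinction_probability(lambda s: 0.5 + 0.5 * s**4)
print(0.5 < q < 1.0)   # True: q ~ 0.5437, the root of q^3 + q^2 + q = 1

# For the 0-or-100 example (mu = 50), q exceeds 1/2 by only about 2**-101,
# which is far below double precision, so the float iteration returns exactly 0.5:
print(extinction_probability(lambda s: 0.5 + 0.5 * s**100))   # 0.5
```

The 0-or-4 variant was chosen because its excess over 1/2 is large enough to see in floating point; for the text's 0-or-100 law, the true extinction probability is strictly above 1/2 by an amount too small for doubles to resolve.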
In theoretical ecology, the parameter μ of a branching process is called the basic reproductive rate.
Mathematical formulation
The most common formulation of a branching process is that of the Galton–Watson process. Let Zn denote the state in period n (often interpreted as the size of generation n), and let Xn,i be a random variable denoting the number of direct successors of member i in period n, where Xn,i are independent and identically distributed random variables over all n ∈ {0, 1, 2, ...} and i ∈ {1, ..., Zn}. Then the recurrence equation is
$Z_{n+1} = \sum_{i=1}^{Z_n} X_{n,i}$
with Z0 = 1.
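The recurrence is straightforward to simulate. The sketch below (illustrative; the offspring law of 0, 1 or 2 children with equal probability echoes the bacteria example, and is critical with μ = 1) estimates how often the line is extinct within 100 generations:

```python
import random

def next_generation(z, offspring):
    # Z_{n+1} is a sum of Z_n iid offspring counts X_{n,i}
    return sum(offspring() for _ in range(z))

def extinct_by(n_gens, offspring):
    z = 1                      # Z_0 = 1
    for _ in range(n_gens):
        z = next_generation(z, offspring)
        if z == 0:
            return True
    return False

random.seed(42)
law = lambda: random.choice((0, 1, 2))   # critical case: mu = 1
runs = 2000
frac = sum(extinct_by(100, law) for _ in range(runs)) / runs
print(frac > 0.8)   # True: a critical process dies out with probability 1
```

For a critical process the survival probability at generation n decays roughly like 2/(nσ²), so with σ² = 2/3 here about 97% of runs are extinct by generation 100.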
Alternatively, the branching process can be formulated as a random walk. Let Si denote the state in period i, and let Xi be a random variable that is iid over all i. Then the recurrence equation is
$S_{i+1} = S_i + X_i - 1$
with S0 = 1. To gain some intuition for this formulation, imagine a walk where the goal is to visit every node, but every time a previously unvisited node is visited, additional nodes are revealed that must also be visited. Let Si represent the number of revealed but unvisited nodes in period
|
https://en.wikipedia.org/wiki/Hankel%20matrix
|
In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is a square matrix in which each ascending skew-diagonal from left to right is constant, e.g.:
$\begin{bmatrix} a & b & c & d \\ b & c & d & e \\ c & d & e & f \\ d & e & f & g \end{bmatrix}.$
More generally, a Hankel matrix is any $n \times n$ matrix $A$ of the form
$A = \begin{bmatrix} a_0 & a_1 & \cdots & a_{n-1} \\ a_1 & a_2 & \cdots & a_n \\ \vdots & \vdots & & \vdots \\ a_{n-1} & a_n & \cdots & a_{2n-2} \end{bmatrix}.$
In terms of the components, if the $i,j$ element of $A$ is denoted with $A_{ij}$, and assuming $i \le j$, then we have $A_{i,j} = A_{i+k,j-k}$ for all $k = 0, 1, \dots, j-i$.
Properties
Any Hankel matrix is symmetric.
Let $J$ be the exchange matrix. If $H$ is a Hankel matrix, then $H = TJ$, where $T$ is a Toeplitz matrix.
If $T$ is real symmetric, then $H = TJ$ will have the same eigenvalues as $T$ up to sign.
The Hilbert matrix is an example of a Hankel matrix.
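The first two properties are easy to confirm numerically (an illustrative sketch using NumPy):

```python
import numpy as np

# Build an n-by-n Hankel matrix A with A[i, j] = h[i + j] from a
# sequence h of length 2n - 1 (names here are illustrative).
def hankel(h, n):
    return np.array([[h[i + j] for j in range(n)] for i in range(n)])

A = hankel([1, 2, 3, 4, 5, 6, 7], 4)
print(np.array_equal(A, A.T))            # True: any Hankel matrix is symmetric

J = np.eye(4)[::-1]                      # the exchange (reversal) matrix
T = A @ J                                # J is its own inverse, so A = T J
# T is Toeplitz: constant along each descending diagonal i - j = k
print(all(len({T[i, j] for i in range(4) for j in range(4) if i - j == k}) == 1
          for k in range(-3, 4)))        # True
```

Right-multiplying by the exchange matrix reverses the columns, turning constant antidiagonals into constant diagonals, which is exactly the Toeplitz condition.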
Relation to formal Laurent series
Hankel matrices are closely related to formal Laurent series. In fact, a formal Laurent series $f = \sum_n a_n z^n$ gives rise to a linear map, referred to as a Hankel operator,
$H_f : \mathbb{C}[z] \to z^{-1}\mathbb{C}[[z^{-1}]],$
which takes a polynomial $g$ and sends it to the product $fg$, but discards all powers of $z$ with a non-negative exponent, so as to give an element in $z^{-1}\mathbb{C}[[z^{-1}]]$, the formal power series with strictly negative exponents. The map $H_f$ is in a natural way linear, and its matrix with respect to the bases $1, z, z^2, \dots$ and $z^{-1}, z^{-2}, \dots$ is the Hankel matrix with entries $a_{-i-j}$.
Any Hankel matrix arises in such a way. A theorem due to Kronecker says that the rank of this matrix is finite precisely if $f$ is a rational function, i.e., a fraction of two polynomials.
Hankel operator
A Hankel operator on a Hilbert space is one whose matrix is a (possibly infinite) Hankel matrix with respect to an orthonormal basis. As indicated above, a Hankel matrix is a matrix with constant values along its antidiagonals, which means that a Hankel matrix $A$ must satisfy, for all rows $i$ and columns $j$, $A_{i,j} = A_{i-1,j+1}$. Note that every entry $A_{i,j}$ depends only on $i+j$.
Let the corresponding Hankel operator be $H_A$. Given a Hankel matrix $A$, the corresponding Hankel operator is then defined as the map $u \mapsto Au$.
We are often interested in Hankel operators over the Hilbert space , the space of square integrable bilateral complex sequences. For any , we have
We are often interested in approximations of the Hankel operators, possibly by low-order operators. In order to approximate the output of the operator, we can use the spectral norm (operator 2-norm) to measure the error of our approximation. This suggests singular value decomposition as a possible technique to approximate the action of the operator.
Note that the matrix does not have to be finite. If it is infinite, traditional methods of computing individual singular vectors will not work directly. We also require that the approximation is a Hankel matrix, which can be shown with AAK theory.
The determinant of a Hankel matrix is called a catalecticant.
Hankel matrix transform
The Hankel matrix transform, or simply Hankel transform, produces the sequence of the determinants of the Hankel matrices formed from the given sequence. Namely, the sequence $\{h_n\}$ is the Hankel transform of the sequence $\{a_n\}$ when
$h_n = \det\left(a_{i+j}\right)_{0 \le i,j \le n}.$
The Hankel transform is invariant under the binomial transform of a sequence. That is, if one writes
$b_n = \sum_{k=0}^{n} \binom{n}{k} a_k$
as the binomial transform of the sequence $\{a_n\}$, then $\{a_n\}$ and $\{b_n\}$ have the same Hankel transform.
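Both statements can be checked numerically. The sketch below (illustrative) uses the Catalan numbers, whose Hankel transform is the all-ones sequence (a classical fact assumed here, not stated in the text):

```python
import numpy as np
from math import comb

# h_n = det of the (n+1)x(n+1) Hankel matrix built from a_0, ..., a_{2n}
def hankel_transform(a, terms):
    return [round(np.linalg.det([[a[i + j] for j in range(n + 1)]
                                 for i in range(n + 1)]))
            for n in range(terms)]

catalan = [comb(2 * n, n) // (n + 1) for n in range(12)]
print(hankel_transform(catalan, 5))      # [1, 1, 1, 1, 1]

# the binomial transform b_n = sum_k C(n, k) a_k leaves the Hankel transform unchanged
binom = [sum(comb(n, k) * catalan[k] for k in range(n + 1)) for n in range(12)]
print(hankel_transform(binom, 5))        # [1, 1, 1, 1, 1]
```

Twelve input terms suffice here because the fifth Hankel determinant needs a₀ through a₈.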
|
https://en.wikipedia.org/wiki/Covering
|
Covering may refer to:
Window covering, material used to cover a window
Cover (topology), a collection of subsets of a space whose union is the whole space
Covering space, a certain kind of continuous map
Covering (martial arts), an act of protecting against an opponent's strikes
The Covering, a studio album by American Christian heavy metal/hard rock band Stryper
Covering: The Hidden Assault on Our Civil Rights, a 2006 book by Kenji Yoshino
See also
Covering a base, in baseball
Covering sickness, a disease of horses and other members of the family Equidae
Coverage (disambiguation)
Cover (disambiguation)
Covering theorem (disambiguation)
|
https://en.wikipedia.org/wiki/88%20%28number%29
|
88 (eighty-eight) is the natural number following 87 and preceding 89.
In mathematics
88 is:
a refactorable number.
a primitive semiperfect number.
an untouchable number.
a hexadecagonal number.
an Erdős–Woods number, since it is possible to find sequences of 88 consecutive integers such that each inner member shares a factor with either the first or the last member.
a palindromic number in bases 5 (323₅), 10 (88₁₀), 21 (44₂₁), and 43 (22₄₃).
a repdigit in bases 10, 21 and 43.
a 2-automorphic number.
the smallest positive integer with a Zeckendorf representation requiring 5 Fibonacci numbers.
a strobogrammatic number.
the largest number in English not containing the letter 'n' in its name, when using short scale.
88 and 945 are the smallest coprime abundant numbers.
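The base-dependent claims above are one-liners to check (an illustrative sketch):

```python
def digits(n, base):
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(r)
    return out[::-1]

# palindromic (and a repdigit) in bases 5, 10, 21 and 43:
print([digits(88, b) for b in (5, 10, 21, 43)])
# [[3, 2, 3], [8, 8], [4, 4], [2, 2]]
print(all(digits(88, b) == digits(88, b)[::-1] for b in (5, 10, 21, 43)))  # True

# 2-automorphic: 2 * 88**2 = 15488 ends in 88
print((2 * 88 * 88) % 100 == 88)   # True
```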
In science and technology
The atomic number of the element radium.
The number of constellations in the sky as defined by the International Astronomical Union.
Messier object M88, a magnitude 11.0 spiral galaxy in the constellation Coma Berenices.
The New General Catalogue object NGC 88, a spiral galaxy in the constellation Phoenix, and a member of Robert's Quartet.
Space Shuttle Mission 88 (STS-88), launched and completed in December 1998, began the construction of the International Space Station.
Approximately the number of days it takes Mercury to complete its orbit.
Cultural significance
In Chinese culture
Number 88 symbolizes fortune and good luck in Chinese culture, since the word for 8 sounds similar to the word fā (發, which implies 發財, or wealth) in Mandarin and Cantonese. The number 8 is considered to be the luckiest number in Chinese culture, and prices in Chinese supermarkets often contain many 8s. The shape of the Chinese character for 8 (八) implies that a person will have a great, wide future, as the character starts narrow and gets wider toward the bottom. The Chinese government has been auctioning auto license plates containing many 8s for tens of thousands of dollars. The 2008 Beijing Olympics opened at 8 p.m. on 8 August 2008.
In addition, 88 is also used to mean "bye bye" in Chinese-language chats, text messages, SMSs and IMs, because its pronunciation in Mandarin (bābā) is similar to "bye bye".
In amateur radio
In amateur radio, 88 is used as shorthand for "love and kisses" when signing a message or ending an exchange. It is used in spoken word (radiotelephony), Morse code (radiotelegraphy), and in various digital modes. It is considered rather more intimate than "73", which means "best regards"; therefore 73 is more often used. The two may be used together. Sometimes either expression is pluralized by appending an -s. These number codes originate with the 92 Code adopted by Western Union in 1859.
In neo-Nazism
Neo-Nazis use the number 88 as an abbreviation for the Nazi salute Heil Hitler. The letter H is eighth in the alphabet, whereby 88 becomes HH.
Often, this number is associated with the number 14, e.g. 14/88, 14-88, or 1488; this number symbolizes
|
https://en.wikipedia.org/wiki/76%20%28number%29
|
76 (seventy-six) is the natural number following 75 and preceding 77.
In mathematics
76 is:
a composite number; a square-prime of the form p²·q, where q is a higher prime. It is the ninth number of this general form and the seventh of the form 2²·q.
a Lucas number.
a telephone or involution number, the number of different ways of connecting 6 points with pairwise connections.
a nontotient.
a 14-gonal number.
a centered pentagonal number.
an Erdős–Woods number since it is possible to find sequences of 76 consecutive integers such that each inner member shares a factor with either the first or the last member.
with an aliquot sum of 64; it lies within the aliquot sequence (76, 64, 63, 41, 1, 0), which descends through two further composite numbers to the prime 41 in the 41-aliquot tree.
an automorphic number in base 10. It is one of two 2-digit numbers whose squares and higher powers end in the same two digits; its square is 5,776, and the other such number is 25.
There are 76 unique compact uniform hyperbolic honeycombs in the third dimension that are generated from Wythoff constructions.
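The automorphic claim is easy to verify (an illustrative sketch):

```python
# 76 is automorphic: every positive power of 76 ends in 76.
print(76 ** 2)                                          # 5776
print(all(pow(76, k, 100) == 76 for k in range(1, 12)))  # True

# a search confirms that 25 and 76 are the only two-digit numbers
# whose square ends in the number itself:
others = [n for n in range(10, 100) if pow(n, 2, 100) == n]
print(others)                                           # [25, 76]
```

Once n² ≡ n (mod 100) holds, multiplying by n repeatedly shows every higher power also ends in the same two digits.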
In science
The atomic number of osmium.
The Little Dumbbell Nebula in the constellation Pegasus is designated as Messier object 76 (M76).
In other fields
Seventy-six is also:
In colloquial American parlance, reference to 1776, the year of the signing of the United States Declaration of Independence.
Seventy-Six, an 1823 novel by American writer John Neal.
The Spirit of '76, patriotic painting by Archibald MacNeal Willard.
A brand of ConocoPhillips gas stations, 76.
The number of trombonists leading the parade in "Seventy-Six Trombones", from Meredith Willson's musical The Music Man.
The 76ers, a professional basketball team based in Philadelphia.
76, the debut album of Dutch trance producer and DJ Armin van Buuren.
Years like 1876 and 1976
See also
List of highways numbered 76
References
Integers
|
https://en.wikipedia.org/wiki/69%20%28number%29
|
69 (sixty-nine) is the natural number following 68 and preceding 70.
In mathematics
69 is:
a lucky number.
the twentieth semiprime (3 × 23) and the seventh of the form 3 × q, where q is a higher prime.
the aliquot sum of sixty-nine is 27, within the aliquot sequence (69, 27, 13, 1, 0); 69 is the third composite number in the 13-aliquot tree, following 27 and 35.
a Blum integer, since the two factors of 69 are both Gaussian primes.
the sum of the sums of the divisors of the first 9 positive integers.
a strobogrammatic number.
a centered tetrahedral number.
Because 69 has an odd number of 1s in its binary representation, it is sometimes called an "odious number."
In decimal, 69 is the only natural number whose square () and cube () use every digit from 0–9 exactly once.
69 is equal to 105 octal, while 105 is equal to 69 hexadecimal. This same property can be applied to all numbers from 64 to 69.
On many handheld scientific and graphing calculators, the highest factorial that can be calculated, due to memory limitations, is 69!, or about 1.711 × 10⁹⁸.
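Two of the claims above are quick to verify (an illustrative sketch):

```python
# the square and cube of 69 together use each digit 0-9 exactly once:
print(69 ** 2, 69 ** 3)                                            # 4761 328509
print(sorted(str(69 ** 2) + str(69 ** 3)) == list("0123456789"))   # True

# 69 in octal notation is 105, and 105 in hexadecimal notation is 69 reversed:
print(int("105", 8) == 69 and int("69", 16) == 105)                # True
```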
In science
The atomic number of thulium, a lanthanide.
Astronomy
The Messier object M69 is a magnitude 9.0 globular cluster in the constellation Sagittarius.
In other fields
Sixty-nine may also refer to:
69ing, a sex position involving each partner aligning themselves to achieve oral sex simultaneously with each other.
In reference to the sex position, "69" has become an Internet meme, where users will respond to any occurrence of the number with the word "nice" and draw specific attention to it. This means to sarcastically imply that the reference to the sex position was intentional. Because of its association with the sex position and resulting meme, "69" has become known as "the sex number".
The registry of the U.S. Navy's aircraft carrier , named after Dwight D. Eisenhower, the 34th President of the United States and five-star general in the United States Army.
The number of the French department Rhône. The Lyon Metropolis, which was separated from the Rhône department in 2015, is designated as "69M". The postal codes for both entities start with "69".
The Taijitu
The last possible television channel number in the UHF bandplan for American terrestrial television from 1982 until its withdrawal on December 31, 2011.
References
External links
Integers
Internet memes
|
https://en.wikipedia.org/wiki/72%20%28number%29
|
72 (seventy-two) is the natural number following 71 and preceding 73. It is half a gross or 6 dozen (i.e., 60 in duodecimal).
In mathematics
Seventy-two is a pronic number, as it is the product of 8 and 9. It is the smallest Achilles number, a powerful number that is not itself a perfect power.
72 is an abundant number. With exactly twelve positive divisors, including 12 (one of only two sublime numbers), 72 is also the twelfth member in the sequence of refactorable numbers. 72 has a Euler totient of 24, which makes it a highly totient number, as there are 17 solutions to the equation φ(x) = 72, more than any integer below 72. It is equal to the sum of its preceding smaller highly totient numbers 24 and 48, and contains the first six highly totient numbers 1, 2, 4, 8, 12 and 24 as a subset of its proper divisors. 144, or twice 72, is also highly totient, as is 576, the square of 24. While 17 different integers have a totient value of 72, the sum of Euler's totient function φ(x) over the first 15 integers is 72. It also is a perfect indexed Harshad number in decimal (twenty-eighth), as it is divisible by the sum of its digits (9).
72 is the second multiple of 12, after 48, that is not a sum of twin primes. It is, however, the sum of four consecutive primes (13 + 17 + 19 + 23), as well as the sum of six consecutive primes (5 + 7 + 11 + 13 + 17 + 19).
72 is the smallest number whose fifth power is the sum of five smaller fifth powers: 19⁵ + 43⁵ + 46⁵ + 47⁵ + 67⁵ = 72⁵.
72 is the number of distinct {7/2} magic heptagrams, all with a magic constant of 30.
72 is the sum of the eighth row of Lozanić's triangle.
72 is the number of degrees in the central angle of a regular pentagon, which is constructible with a compass and straight-edge.
72 plays a role in the Rule of 72 in economics when approximating annual compounding of interest rates of a round 6% to 10%, due in part to its high number of divisors.
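The rule-of-72 approximation can be checked directly against the exact compound-interest doubling time, log 2 / log(1 + r); this sketch (illustrative, not from the article) compares the two across the 6%–10% range:

```python
import math

def doubling_time_exact(rate_percent):
    """Exact number of years for money to double at an annual compound rate."""
    return math.log(2) / math.log(1 + rate_percent / 100)

def doubling_time_rule_of_72(rate_percent):
    """Rule-of-72 estimate: 72 divided by the interest rate (in percent)."""
    return 72 / rate_percent

for r in (6, 8, 10):
    exact = doubling_time_exact(r)
    approx = doubling_time_rule_of_72(r)
    print(f"{r}%: exact {exact:.2f} years, rule of 72 gives {approx:.2f}")
```

At 8% the two agree to within a few days, which is why 72 (with its many divisors) is the customary numerator.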
Inside Lie algebras:
72 is the number of vertices of the six-dimensional 122 polytope, which also contains as facets 720 edges, 702 polychoral 4-faces, of which 270 are four-dimensional 16-cells, and two sets of 27 demipenteract 5-faces. These 72 vertices are the root vectors of the simple Lie group E6, which as a honeycomb under 222 forms the E6 lattice. 122 is part of a family of k22 polytopes whose first member is the four-dimensional 3-3 duoprism, of symmetry order 72 and made of six triangular prisms. On the other hand, 321 ∈ k21 is the only semiregular polytope in the seventh dimension, also featuring a total of 702 6-faces, of which 576 are 6-simplexes and 126 are 6-orthoplexes that contain 60 edges and 12 vertices, or collectively 72 one-dimensional and two-dimensional elements; 126 is the number of root vectors in E7, which are contained in the vertices of 231 ∈ k31, also with 576, or 24², 6-simplexes like 321. The triangular prism is the root polytope in the k21 family of polytopes, which is the simplest semiregular polytope, with k31 rooted in
|
https://en.wikipedia.org/wiki/Lowest%20common%20denominator
|
In mathematics, the lowest common denominator or least common denominator (abbreviated LCD) is the lowest common multiple of the denominators of a set of fractions. It simplifies adding, subtracting, and comparing fractions.
Description
The lowest common denominator of a set of fractions is the lowest number that is a multiple of all the denominators: their lowest common multiple.
The product of the denominators is always a common denominator, as in: 1/2 + 1/3 = 3/6 + 2/6 = 5/6,
but it is not always the lowest common denominator, as in: 1/12 + 1/18 = 3/36 + 2/36 = 5/36.
Here, 36 is the least common multiple of 12 and 18. Their product, 216, is also a common denominator, but calculating with that denominator involves larger numbers: 1/12 + 1/18 = 18/216 + 12/216 = 30/216 = 5/36.
With variables rather than numbers, the same principles apply: for example, a/x + b/y = ay/(xy) + bx/(xy) = (ay + bx)/(xy), where the product xy serves as a common denominator.
Some methods of calculating the LCD are given at least common multiple.
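One standard method computes the LCD as the least common multiple of the denominators via the gcd identity lcm(a, b) = a·b / gcd(a, b); a minimal sketch (the function names are my own):

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    """Least common multiple via the identity lcm(a, b) = a*b // gcd(a, b)."""
    return a * b // gcd(a, b)

def lowest_common_denominator(fractions):
    """LCD of (numerator, denominator) pairs: the LCM of all denominators."""
    return reduce(lcm, (d for _, d in fractions))

def add_fractions(fractions):
    """Add fractions by rewriting each over the LCD."""
    lcd = lowest_common_denominator(fractions)
    total = sum(n * (lcd // d) for n, d in fractions)
    return total, lcd

# 1/12 + 1/18 over the LCD 36: 3/36 + 2/36 = 5/36.
print(add_fractions([(1, 12), (1, 18)]))  # (5, 36)
```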
Role in arithmetic and algebra
The same fraction can be expressed in many different forms. As long as the ratio between numerator and denominator is the same, the fractions represent the same number. For example:
because they are all multiplied by 1 written as a fraction:
It is usually easiest to add, subtract, or compare fractions when each is expressed with the same denominator, called a "common denominator". For example, the numerators of fractions sharing a common denominator can simply be added, and two such fractions can be compared by comparing their numerators alone. Without computing a common denominator, it is not obvious what the sum of two fractions with different denominators equals, or which of the two is greater. Any common denominator will do, but usually the lowest common denominator is desirable because it makes the rest of the calculation as simple as possible.
Practical uses
The LCD has many practical uses, such as determining the number of objects of two different lengths necessary to align them in a row which starts and ends at the same place, such as in brickwork, tiling, and tessellation. It is also useful in planning work schedules with employees with y days off every x days.
In musical rhythm, the LCD is used in cross-rhythms and polymeters to determine the fewest notes necessary to count time given two or more metric divisions. For example, much African music is recorded in Western notation using 12/8 time, because each measure is divided by 4 and by 3, the LCD of which is 12.
Colloquial usage
The expression "lowest common denominator" is used to describe (usually in a disapproving manner) a rule, proposal, opinion, or media that is deliberately simplified so as to appeal to the largest possible number of people.
See also
Anomalous cancellation
Greatest common divisor
Partial fraction decomposition, reverses the process of adding fractions into uncommon denominators
References
Elementary arithmetic
Fractions (mathematics)
|
https://en.wikipedia.org/wiki/William%20Lowell%20Putnam%20Mathematical%20Competition
|
The William Lowell Putnam Mathematical Competition, often abbreviated to Putnam Competition, is an annual mathematics competition for undergraduate college students enrolled at institutions of higher learning in the United States and Canada (regardless of the students' nationalities). It awards a scholarship and cash prizes ranging from $250 to $2,500 for the top students and $5,000 to $25,000 for the top schools. In addition, one of the top five individual scorers (designated as Putnam Fellows) is awarded a scholarship of up to $12,000 plus tuition at Harvard University (Putnam Fellow Prize Fellowship), the top 100 individual scorers have their names mentioned in the American Mathematical Monthly (alphabetically ordered within rank), and the names and addresses of the top 500 contestants are mailed to all participating institutions. The competition is widely considered to be the most prestigious university-level mathematics competition in the world, and its difficulty is such that the median score is often zero (out of 120) despite being attempted by students specializing in mathematics.
The competition was founded in 1927 by Elizabeth Lowell Putnam in memory of her husband William Lowell Putnam, who was an advocate of intercollegiate intellectual competition. The competition has been offered annually since 1938 and is administered by the Mathematical Association of America.
Competition layout
The Putnam competition takes place on the first Saturday in December and consists of two three-hour sittings separated by a lunch break. The competition is supervised by faculty members at the participating schools. Each sitting consists of six challenging problems, for twelve in total. The problems cover a range of advanced material in undergraduate mathematics, including concepts from group theory, set theory, graph theory, lattice theory, and number theory.
Each of the twelve questions is worth 10 points, and the most frequent scores above zero are 10 points for a complete solution, 9 points for a nearly complete solution, and 1 point for the beginnings of a solution. In earlier years, the twelve questions were worth one point each, with no partial credit given. The competition is considered to be very difficult: it is typically attempted by students specializing in mathematics, but the median score is usually zero or one point out of 120 possible, and there have been only five perfect scores. In 2003, of the 3,615 students competing, 1,024 (28%) scored 10 or more points, and 42 points was sufficient to make the top percentile.
At a participating college, any student who wishes to take part in the competition may (limited by the number of spots a school receives); but until 2019 the school's official team consisted of three individuals whom it designated in advance. Until 2019, a team's score was the sum of the ranks of its three team members, with the lowest cumulative rank winning. It was entirely possible, even commonplace at some institutions, for the eventual results to show tha
|
https://en.wikipedia.org/wiki/Mayer%E2%80%93Vietoris%20sequence
|
In mathematics, particularly algebraic topology and homology theory, the Mayer–Vietoris sequence is an algebraic tool to help compute algebraic invariants of topological spaces, known as their homology and cohomology groups. The result is due to two Austrian mathematicians, Walther Mayer and Leopold Vietoris. The method consists of splitting a space into subspaces, for which the homology or cohomology groups may be easier to compute. The sequence relates the (co)homology groups of the space to the (co)homology groups of the subspaces. It is a natural long exact sequence, whose entries are the (co)homology groups of the whole space, the direct sum of the (co)homology groups of the subspaces, and the (co)homology groups of the intersection of the subspaces.
The Mayer–Vietoris sequence holds for a variety of cohomology and homology theories, including simplicial homology and singular cohomology. In general, the sequence holds for those theories satisfying the Eilenberg–Steenrod axioms, and it has variations for both reduced and relative (co)homology. Because the (co)homology of most spaces cannot be computed directly from their definitions, one uses tools such as the Mayer–Vietoris sequence in the hope of obtaining partial information. Many spaces encountered in topology are constructed by piecing together very simple patches. Carefully choosing the two covering subspaces so that, together with their intersection, they have simpler (co)homology than that of the whole space may allow a complete deduction of the (co)homology of the space. In that respect, the Mayer–Vietoris sequence is analogous to the Seifert–van Kampen theorem for the fundamental group, and a precise relation exists for homology of dimension one.
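For singular homology, if X is the union of the interiors of two subspaces A and B, the long exact sequence described above takes the following standard form (included here for reference):

```latex
\cdots \longrightarrow H_{n}(A \cap B)
\xrightarrow{\;(i_{*},\, j_{*})\;} H_{n}(A) \oplus H_{n}(B)
\xrightarrow{\;k_{*} - l_{*}\;} H_{n}(X)
\xrightarrow{\;\partial_{*}\;} H_{n-1}(A \cap B)
\longrightarrow \cdots
```

Here i, j, k, l denote the inclusion maps of the subspaces and their intersection, and ∂∗ is the connecting (boundary) homomorphism that lowers the degree by one.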
Background, motivation, and history
Similarly to the fundamental group or the higher homotopy groups of a space, homology groups are important topological invariants. Although some (co)homology theories are computable using tools of linear algebra, many other important (co)homology theories, especially singular (co)homology, are not computable directly from their definition for nontrivial spaces. For singular (co)homology, the singular (co)chains and (co)cycles groups are often too big to handle directly. More subtle and indirect approaches become necessary. The Mayer–Vietoris sequence is such an approach, giving partial information about the (co)homology groups of any space by relating it to the (co)homology groups of two of its subspaces and their intersection.
The most natural and convenient way to express the relation involves the algebraic concept of exact sequences: sequences of objects (in this case groups) and morphisms (in this case group homomorphisms) between them such that the image of one morphism equals the kernel of the next. In general, this does not allow (co)homology groups of a space to be completely computed. However, because many important spaces encountered in topology are topological manifolds, simplicial complex
|
https://en.wikipedia.org/wiki/Gauss%20map
|
In differential geometry, the Gauss map of a surface is a function that maps each point in the surface to a unit vector that is orthogonal to the surface at that point. Namely, given a surface X in Euclidean space R3, the Gauss map is a map N: X → S2 (where S2 is the unit sphere) such that for each p in X, the function value N(p) is a unit vector orthogonal to X at p. The Gauss map is named after Carl F. Gauss.
The Gauss map can be defined (globally) if and only if the surface is orientable, in which case its degree is half the Euler characteristic. The Gauss map can always be defined locally (i.e. on a small piece of the surface). The Jacobian determinant of the Gauss map is equal to Gaussian curvature, and the differential of the Gauss map is called the shape operator.
Gauss first wrote a draft on the topic in 1825 and published it in 1827.
There is also a Gauss map for a link, which computes linking number.
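For a concrete parametrized surface, the Gauss map can be approximated numerically as the normalized cross product of the two partial-derivative vectors of the parametrization; the sketch below (my own illustration, not from the article) checks that on the unit sphere the Gauss map sends each point to ± the point itself:

```python
import math

def normal(r, u, v, h=1e-6):
    """Approximate the Gauss map N(u, v) of a parametrized surface r(u, v)
    as the normalized cross product r_u x r_v of central-difference partials."""
    def diff(f, u, v, du, dv):
        a = f(u + du * h, v + dv * h)
        b = f(u - du * h, v - dv * h)
        return [(x - y) / (2 * h) for x, y in zip(a, b)]
    ru = diff(r, u, v, 1, 0)
    rv = diff(r, u, v, 0, 1)
    n = [ru[1] * rv[2] - ru[2] * rv[1],
         ru[2] * rv[0] - ru[0] * rv[2],
         ru[0] * rv[1] - ru[1] * rv[0]]
    norm = math.sqrt(sum(c * c for c in n))
    return [c / norm for c in n]

# Unit sphere in latitude/longitude coordinates: N(p) = +/- p.
sphere = lambda u, v: [math.cos(u) * math.cos(v),
                       math.cos(u) * math.sin(v),
                       math.sin(u)]
p = sphere(0.5, 1.2)
n = normal(sphere, 0.5, 1.2)
print(n)
```

With this parametrization the cross product points inward, so n ≈ −p; reversing the order of the two parameters flips the orientation.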
Generalizations
The Gauss map can be defined for hypersurfaces in Rn as a map from a hypersurface to the unit sphere Sn − 1 ⊆ Rn.
For a general oriented k-submanifold of Rn the Gauss map can also be defined, and its target space is the oriented Grassmannian
, i.e. the set of all oriented k-planes in Rn. In this case a point on the submanifold is mapped to its oriented tangent subspace. One can also map to its oriented normal subspace; these are equivalent via the orthogonal complement.
In Euclidean 3-space, this says that an oriented 2-plane is characterized by an oriented 1-line, equivalently a unit normal vector, hence this is consistent with the definition above.
Finally, the notion of Gauss map can be generalized to an oriented submanifold X of dimension k in an oriented ambient Riemannian manifold M of dimension n. In that case, the Gauss map then goes from X to the set of tangent k-planes in the tangent bundle TM. The target space for the Gauss map N is a Grassmann bundle built on the tangent bundle TM. In the case where M = Rn, the tangent bundle is trivialized (so the Grassmann bundle becomes a map to the Grassmannian), and we recover the previous definition.
Total curvature
The area of the image of the Gauss map is called the total curvature and is equivalent to the surface integral of the Gaussian curvature. This is the original interpretation given by Gauss.
The Gauss–Bonnet theorem links total curvature of a surface to its topological properties.
Cusps of the Gauss map
The Gauss map reflects many properties of the surface: when the surface has zero Gaussian curvature, (that is along a parabolic line) the Gauss map will have a fold catastrophe. This fold may contain cusps and these cusps were studied in depth by Thomas Banchoff, Terence Gaffney and Clint McCrory. Both parabolic lines and cusp are stable phenomena and will remain under slight deformations of the surface. Cusps occur when:
The surface has a bi-tangent plane
A ridge crosses a parabolic line
at the closure of the set of inflection points of the asympto
|
https://en.wikipedia.org/wiki/Equivalence%20of%20categories
|
In category theory, a branch of abstract mathematics, an equivalence of categories is a relation between two categories that establishes that these categories are "essentially the same". There are numerous examples of categorical equivalences from many areas of mathematics. Establishing an equivalence involves demonstrating strong similarities between the mathematical structures concerned. In some cases, these structures may appear to be unrelated at a superficial or intuitive level, making the notion fairly powerful: it creates the opportunity to "translate" theorems between different kinds of mathematical structures, knowing that the essential meaning of those theorems is preserved under the translation.
If a category is equivalent to the opposite (or dual) of another category then one speaks of
a duality of categories, and says that the two categories are dually equivalent.
An equivalence of categories consists of a functor between the involved categories, which is required to have an "inverse" functor. However, in contrast to the situation common for isomorphisms in an algebraic setting, the composite of the functor and its "inverse" is not necessarily the identity mapping. Instead it is sufficient that each object be naturally isomorphic to its image under this composition. Thus one may describe the functors as being "inverse up to isomorphism". There is indeed a concept of isomorphism of categories where a strict form of inverse functor is required, but this is of much less practical use than the equivalence concept.
Definition
Formally, given two categories C and D, an equivalence of categories consists of a functor F : C → D, a functor G : D → C, and two natural isomorphisms ε: FG→ID and η : IC→GF. Here FG: D→D and GF: C→C denote the respective compositions of F and G, and IC: C→C and ID: D→D denote the identity functors on C and D, assigning each object and morphism to itself. If F and G are contravariant functors one speaks of a duality of categories instead.
One often does not specify all the above data. For instance, we say that the categories C and D are equivalent (respectively dually equivalent) if there exists an equivalence (respectively duality) between them. Furthermore, we say that F "is" an equivalence of categories if an inverse functor G and natural isomorphisms as above exist. Note however that knowledge of F is usually not enough to reconstruct G and the natural isomorphisms: there may be many choices (see example below).
Alternative characterizations
A functor F : C → D yields an equivalence of categories if and only if it is simultaneously:
full, i.e. for any two objects c1 and c2 of C, the map HomC(c1,c2) → HomD(Fc1,Fc2) induced by F is surjective;
faithful, i.e. for any two objects c1 and c2 of C, the map HomC(c1,c2) → HomD(Fc1,Fc2) induced by F is injective; and
essentially surjective (dense), i.e. each object d in D is isomorphic to an object of the form Fc, for c in C.
This is a quite useful and commonly ap
|
https://en.wikipedia.org/wiki/List%20of%20cities%20and%20towns%20in%20Bangladesh
|
This article presents a list of cities and towns in Bangladesh. According to the Bangladesh Bureau of Statistics and the Ministry of Local Government, Rural Development and Co-operatives of Bangladesh, there are 532 urban centres in Bangladesh.
The bureau defines an urban centre with a population of 100,000 or more as a "city". Altogether, there are 43 such cities in Bangladesh. 11 of these cities can be considered major cities as these are governed by City Corporations. All of the City Corporation-governed cities currently have a population of more than 200,000, which is not a criterion for the status, because currently 17 cities in Bangladesh have a population of more than 200,000. Besides the 11 major cities, there are 32 other cities in Bangladesh that are not governed by "City Corporations", rather by "Municipal Corporations". A city with a population of more than 10,000,000 is defined by the bureau as a megacity. Dhaka is the only megacity in Bangladesh according to this definition. Together, Dhaka and the port city of Chittagong account for 48% of the country's urban population.
An urban centre with a population of less than 100,000 is defined as a "town". In total, there are 490 such towns in Bangladesh. Among these, 287 towns are governed by "Municipal Corporations". These are called "Paurashava"s in the local Bengali language. Altogether, including the ones governing the 32 other non-major cities, there are 318 Municipal Corporations.
In addition, there are another 203 towns which are Upazila centres (and other urban centres) and not governed by any Municipal Corporation or "Paurashava". These are the non-Municipal Corporation or "non-Paurashava" towns.
In 1951, Bangladesh was mostly a rural country and only 4% of the population lived in urban centres. The urban population rose to 20% in 1991 and to 24% by 2001. In 2011, Bangladesh had an urban population of 28% and the rate of urban population growth was estimated at 2.8%. At this growth rate, Bangladesh's urban population would reach 79 million or 42% of the population by 2035. The urban centers of Bangladesh have a combined area of about 10600 square kilometers, which is 7% of the total area of Bangladesh. As such, Bangladesh has a very high urban population density: 4028 persons per square kilometer (2011), whereas the rural density is significantly lower: 790 persons per square kilometer (2011). The number of municipalities tripled from 104 municipalities in 1991 to 318 municipalities in 2011.
Major cities
There are eleven major cities in Bangladesh that are governed by twelve city corporations, which include Dhaka North, Dhaka South, Chattogram, Khulna, Sylhet, Rajshahi, Mymensingh, Rangpur, Barisal, Cumilla, Gazipur, and Narayanganj. Among these, Dhaka is a megacity, governed by two city corporations, and has a population of more than 10 million. It was formerly governed by the Dhaka City Corporation, until it was split into North and South in 2011. The populations of the c
|
https://en.wikipedia.org/wiki/35%20%28number%29
|
35 (thirty-five) is the natural number following 34 and preceding 36.
In mathematics
35 is the sum of the first five triangular numbers, making it a tetrahedral number.
35 is the 10th discrete semiprime (5 × 7) and the first with 5 as the lowest non-unitary factor, thus being the first of the form (5 × q), where q is a higher prime.
35 has two prime factors, 5 and 7, which also form its main factor pair (5 × 7) and comprise the second twin-prime distinct semiprime pair.
The aliquot sum of 35 is 13, within an aliquot sequence of only one composite number (35, 13, 1, 0) to the prime 13 in the 13-aliquot tree. 35 is the second composite number with aliquot sum 13; the first is the cube 27.
35 is the last member of the first triple cluster of semiprimes 33, 34, 35. The second such triple of distinct semiprimes is 85, 86, and 87.
35 is the number of ways that three things can be selected from a set of seven unique things, also known as the "combination of seven things taken three at a time".
35 is a centered cube number, a centered tetrahedral number, a pentagonal number, and a pentatope number.
35 is a highly cototient number, since there are more solutions to the equation x − φ(x) = 35 than there are for any other integer below it except 1.
There are 35 free hexominoes, the polyominoes made from six squares.
Since the greatest prime factor of 35² + 1 = 1226 is 613, which is more than twice 35, 35 is a Størmer number.
35 is the highest number one can count to on one's fingers using senary.
35 is the number of quasigroups of order 4.
35 is the smallest composite number of the form , where is a non-negative integer.
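Several of the properties listed above can be confirmed with a few lines of arithmetic (an illustrative check, not part of the article):

```python
from math import comb

assert 35 == 5 * 7                 # semiprime with twin-prime factors 5 and 7
assert comb(7, 3) == 35            # three things chosen from a set of seven
assert comb(5 + 2, 3) == 35        # same binomial: the 5th tetrahedral number
assert 5 * (3 * 5 - 1) // 2 == 35  # 5th pentagonal number n(3n-1)/2
assert 35 ** 2 + 1 == 2 * 613      # greatest prime factor 613 > 2*35 (Stormer)
print("all checks passed")
```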
In science
The atomic number of bromine
In other fields
35 mm film is the basic film gauge most commonly used for both analog photography and motion pictures.
The minimum age of presidential candidates for election to the United States, Ireland, Poland, Russia, Trinidad and Tobago, and Uruguay.
For Social Security in the United States, the 35 highest years of earnings are used to calculate retirement benefits.
See also
List of highways numbered 35
References
Integers
|
https://en.wikipedia.org/wiki/Stone%20duality
|
In mathematics, there is an ample supply of categorical dualities between certain categories of topological spaces and categories of partially ordered sets. Today, these dualities are usually collected under the label Stone duality, since they form a natural generalization of Stone's representation theorem for Boolean algebras. These concepts are named in honor of Marshall Stone. Stone-type dualities also provide the foundation for pointless topology and are exploited in theoretical computer science for the study of formal semantics.
This article gives pointers to special cases of Stone duality and explains a very general instance thereof in detail.
Overview of Stone-type dualities
Probably the most general duality that is classically referred to as "Stone duality" is the duality between the category Sob of sober spaces with continuous functions and the category SFrm of spatial frames with appropriate frame homomorphisms. The dual category of SFrm is the category of spatial locales denoted by SLoc. The categorical equivalence of Sob and SLoc is the basis for the mathematical area of pointless topology, which is devoted to the study of Loc—the category of all locales, of which SLoc is a full subcategory. The involved constructions are characteristic for this kind of duality, and are detailed below.
Now one can easily obtain a number of other dualities by restricting to certain special classes of sober spaces:
The category CohSp of coherent sober spaces (and coherent maps) is equivalent to the category CohLoc of coherent (or spectral) locales (and coherent maps), on the assumption of the Boolean prime ideal theorem (in fact, this statement is equivalent to that assumption). The significance of this result stems from the fact that CohLoc in turn is dual to the category DLat01 of bounded distributive lattices. Hence, DLat01 is dual to CohSp—one obtains Stone's representation theorem for distributive lattices.
When restricting further to coherent sober spaces that are Hausdorff, one obtains the category Stone of so-called Stone spaces. On the side of DLat01, the restriction yields the subcategory Bool of Boolean algebras. Thus one obtains Stone's representation theorem for Boolean algebras.
Stone's representation for distributive lattices can be extended via an equivalence of coherent spaces and Priestley spaces (ordered topological spaces that are compact and totally order-disconnected). One obtains a representation of distributive lattices via ordered topologies: Priestley's representation theorem for distributive lattices.
Many other Stone-type dualities could be added to these basic dualities.
Duality of sober spaces and spatial locales
The lattice of open sets
The starting point for the theory is the fact that every topological space is characterized by a set of points X and a system Ω(X) of open sets of elements from X, i.e. a subset of the powerset of X. It is known that Ω(X) has certain special properties: it is a complete lattice
|
https://en.wikipedia.org/wiki/Decimal%20system
|
Decimal system may refer to:
Decimal (base ten) number system, used in mathematics for writing numbers and performing arithmetic
Dewey Decimal System, a subject classification system used in libraries
Decimal currency system, where each unit of currency can be divided into 100 (or 10 or 1000) sub-units
See also
Metric system
|
https://en.wikipedia.org/wiki/D.%20H.%20Lehmer
|
Derrick Henry "Dick" Lehmer (February 23, 1905 – May 22, 1991), almost always cited as D.H. Lehmer, was an American mathematician significant to the development of computational number theory. Lehmer refined Édouard Lucas' work in the 1930s and devised the Lucas–Lehmer test for Mersenne primes. His peripatetic career as a number theorist, with him and his wife taking numerous types of work in the United States and abroad to support themselves during the Great Depression, fortuitously brought him into the center of research into early electronic computing.
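The Lucas–Lehmer test that Lehmer devised is simple to state: for an odd prime p, the Mersenne number M_p = 2^p − 1 is prime exactly when the recurrence s₀ = 4, s_{k+1} = s_k² − 2 (mod M_p) reaches 0 after p − 2 steps. A minimal sketch:

```python
def lucas_lehmer(p):
    """Return True iff the Mersenne number 2**p - 1 is prime (p an odd prime)."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m  # Lucas-Lehmer recurrence, reduced mod M_p
    return s == 0

# 2**7 - 1 = 127 is a Mersenne prime, while 2**11 - 1 = 2047 = 23 * 89 is not.
print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])  # [3, 5, 7, 13]
```

Because each step is a single modular squaring, the test scales to very large p, which is why it underpinned the record Mersenne-prime searches on early electronic computers.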
Early life
Lehmer was born in Berkeley, California, to Derrick Norman Lehmer, a professor of mathematics at the University of California, Berkeley, and Clara Eunice Mitchell.
He studied physics and earned a bachelor's degree from UC Berkeley, and continued with graduate studies at the University of Chicago.
He and his father worked together on Lehmer sieves.
Marriage
During his studies at Berkeley, Lehmer met Emma Markovna Trotskaia, a Russian student of his father's, who had begun with work toward an engineering degree but had subsequently switched focus to mathematics, earning her B.A. in 1928. Later that same year, Lehmer married Emma and, following a tour of Northern California and a trip to Japan to meet Emma's family, they moved by car to Providence, Rhode Island, after Brown University offered him an instructorship.
Career
Lehmer received a master's degree and a Ph.D., both from Brown University, in 1929 and 1930, respectively; his wife obtained a master's degree in 1930 as well, coaching mathematics to supplement the family income, while also helping her husband type his Ph.D. thesis, An Extended Theory of Lucas' Functions, which he wrote under Jacob Tamarkin.
Movements during the Depression
Lehmer became a National Research Fellow, allowing him to take positions at the California Institute of Technology from 1930 to 1931 and at Stanford University from 1931 to 1932. In the latter year, the couple's first child Laura was born.
After being awarded a second National Research Fellowship, the Lehmers moved on to Princeton, New Jersey between 1932 and 1934, where Dick spent a short time at the Institute for Advanced Study.
He worked at Lehigh University in Pennsylvania from 1934 until 1938. Their son Donald was born in 1934 while Dick and Emma were at Lehigh.
The year 1938–1939 was spent in England on a Guggenheim Fellowship visiting both the University of Cambridge and the University of Manchester, meeting G. H. Hardy, John Edensor Littlewood, Harold Davenport, Kurt Mahler, Louis Mordell, and Paul Erdős. The Lehmers returned to America by ship with second child Donald just before the beginning of the Battle of the Atlantic.
Lehmer continued at Lehigh University for the 1939–1940 academic year.
Berkeley
In 1940, Lehmer accepted a position back at the mathematics department of UC Berkeley. Lehmer was chairman of the Department of Mathematics at University of California, Berkeley fro
|
https://en.wikipedia.org/wiki/One-sided%20limit
|
In calculus, a one-sided limit refers to either of the two limits of a function $f(x)$ of a real variable $x$ as $x$ approaches a specified point either from the left or from the right.
The limit as $x$ decreases in value approaching $a$ ($x$ approaches $a$ "from the right" or "from above") can be denoted $\lim_{x \to a^+} f(x)$.
The limit as $x$ increases in value approaching $a$ ($x$ approaches $a$ "from the left" or "from below") can be denoted $\lim_{x \to a^-} f(x)$.
If the limit of $f(x)$ as $x$ approaches $a$ exists, then the limits from the left and from the right both exist and are equal. In some cases in which the limit $\lim_{x \to a} f(x)$
does not exist, the two one-sided limits nonetheless exist. Consequently, the limit as $x$ approaches $a$ is sometimes called a "two-sided limit".
It is possible for exactly one of the two one-sided limits to exist (while the other does not exist). It is also possible for neither of the two one-sided limits to exist.
Formal definition
Definition
If $I$ represents some interval that is contained in the domain of $f$ and if $a$ is a point in $I$, then the right-sided limit as $x$ approaches $a$ can be rigorously defined as the value $R$ that satisfies:
$$\forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x \in I: \quad a < x < a + \delta \implies |f(x) - R| < \varepsilon,$$
and the left-sided limit as $x$ approaches $a$ can be rigorously defined as the value $L$ that satisfies:
$$\forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x \in I: \quad a - \delta < x < a \implies |f(x) - L| < \varepsilon.$$
We can represent the same thing more symbolically, as follows.
Let $I$ represent an interval with $I \subseteq \operatorname{domain}(f)$, and let $a \in I$.
Intuition
In comparison to the formal definition for the limit of a function at a point, the one-sided limit (as the name would suggest) only deals with input values to one side of the approached input value.
For reference, the formal definition for the limit of a function at a point is as follows:
To define a one-sided limit, we must modify this inequality. Note that the absolute distance between and is .
For the limit from the right, we want to be to the right of , which means that , so is positive. From above, is the distance between and . We want to bound this distance by our value of , giving the inequality . Putting together the inequalities and and using the transitivity property of inequalities, we have the compound inequality .
Similarly, for the limit from the left, we want $x$ to be to the left of $a$, which means that $x < a$. In this case, it is $a - x$ that is positive and represents the distance between $x$ and $a$. Again, we want to bound this distance by our value of $\delta$, leading to the compound inequality $0 < a - x < \delta$.
Now, when our value of $x$ is in its desired interval, we expect that the value of $f(x)$ is also within its desired interval. The distance between $f(x)$ and $L$, the limiting value of the left-sided limit, is $|f(x) - L|$. Similarly, the distance between $f(x)$ and $R$, the limiting value of the right-sided limit, is $|f(x) - R|$. In both cases, we want to bound this distance by $\varepsilon$, so we get the following: $|f(x) - L| < \varepsilon$ for the left-sided limit, and $|f(x) - R| < \varepsilon$ for the right-sided limit.
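For a concrete function one can exhibit a working $\delta$ directly. Assuming the illustrative choice $f(x) = 2x + 1$ with right-sided limit $R = 1$ at $a = 0$, taking $\delta = \varepsilon/2$ makes the chain $0 < x - a < \delta \implies |f(x) - R| < \varepsilon$ go through:

```python
def f(x):
    return 2 * x + 1

a, R = 0.0, 1.0
eps = 0.01
delta = eps / 2   # since |f(x) - R| = 2|x - a|, delta = eps/2 suffices

# Check the implication 0 < x - a < delta  =>  |f(x) - R| < eps on samples.
xs = [a + delta * k / 100 for k in range(1, 100)]   # all satisfy 0 < x - a < delta
implication_holds = all(abs(f(x) - R) < eps for x in xs)
print(implication_holds)  # True
```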
Examples
Example 1:
The limits from the left and from the right of $g(x) = -\tfrac{1}{x}$ as $x$ approaches $a = 0$ are
$$\lim_{x \to 0^-} \frac{-1}{x} = +\infty \qquad \text{and} \qquad \lim_{x \to 0^+} \frac{-1}{x} = -\infty.$$
The reason why $\lim_{x \to 0^-} \tfrac{-1}{x} = +\infty$ is because $x$ is always negative (since $x \to 0^-$ means that $x < 0$, with all values of $x$ satisfying $x < 0$), which implies that $-1/x$ is always positive, so that $-1/x$ diverges to $+\infty$ (and not to $-\infty$) as $x$ approaches $0$ from the left.
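A few sample values (illustrative only) make the unbounded growth of $-1/x$ from the left visible:

```python
# -1/x for x approaching 0 from the left: each sample x is negative,
# so -1/x is positive and grows without bound.
samples = [-(10.0 ** -k) for k in range(1, 6)]   # -0.1, -0.01, ..., -0.00001
values = [-1.0 / x for x in samples]
assert all(v > 0 for v in values)   # -1/x stays positive for x < 0
assert values == sorted(values)     # and increases as x nears 0
print(values[-1])                   # on the order of 100000
```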
https://en.wikipedia.org/wiki/Inflection%20point
In differential calculus and differential geometry, an inflection point, point of inflection, flex, or inflection (rarely inflexion) is a point on a smooth plane curve at which the curvature changes sign. In particular, in the case of the graph of a function, it is a point where the function changes from being concave (concave downward) to convex (concave upward), or vice versa.
For the graph of a function $f$ of differentiability class $C^2$ ($f$, its first derivative $f'$, and its second derivative $f''$ exist and are continuous), the condition $f'' = 0$ can also be used to find an inflection point, since a point with $f'' = 0$ must be passed to change $f''$ from a positive value (concave upward) to a negative value (concave downward) or vice versa, as $f''$ is continuous; an inflection point of the curve is a point where $f'' = 0$ and $f''$ changes its sign at the point (from positive to negative or from negative to positive). A point where the second derivative vanishes but does not change its sign is sometimes called a point of undulation or undulation point.
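This sign-change test can be sketched numerically with central differences. Assuming the illustrative choice $f(x) = x^3 - 3x$, whose second derivative $f''(x) = 6x$ changes sign at $x = 0$:

```python
def g(x):
    return x**3 - 3*x

def second_derivative(f, x, h=1e-5):
    # Central-difference approximation of f''(x).
    return (f(x + h) - 2*f(x) + f(x - h)) / (h * h)

# f''(x) = 6x: negative to the left of 0, positive to the right,
# so x = 0 is an inflection point by the sign-change criterion.
d2_left = second_derivative(g, -0.5)
d2_right = second_derivative(g, 0.5)
print(d2_left < 0 < d2_right)  # True
```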
In algebraic geometry an inflection point is defined slightly more generally, as a regular point where the tangent meets the curve to order at least 3, and an undulation point or hyperflex is defined as a point where the tangent meets the curve to order at least 4.
Definition
Inflection points in differential geometry are the points of the curve where the curvature changes its sign.
For example, the graph of the differentiable function $y = f(x)$ has an inflection point at $(x_0, f(x_0))$ if and only if its first derivative $f'$ has an isolated extremum at $x_0$. (This is not the same as saying that $f$ has an extremum.) That is, in some neighborhood, $x_0$ is the one and only point at which $f'$ has a (local) minimum or maximum. If all extrema of $f'$ are isolated, then an inflection point is a point on the graph of $f$ at which the tangent crosses the curve.
A falling point of inflection is an inflection point where the derivative is negative on both sides of the point; in other words, it is an inflection point near which the function is decreasing. A rising point of inflection is a point where the derivative is positive on both sides of the point; in other words, it is an inflection point near which the function is increasing.
For a smooth curve given by parametric equations, a point is an inflection point if its signed curvature changes from plus to minus or from minus to plus, i.e., if the signed curvature changes sign.
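For a parametric curve $(x(t), y(t))$, the signed curvature is $\kappa = \frac{x'y'' - y'x''}{(x'^2 + y'^2)^{3/2}}$. A sketch (using the assumed example of the cubic $y = x^3$ parametrized as $(t, t^3)$) detects the sign change of $\kappa$ at $t = 0$:

```python
def signed_curvature(t):
    # Curve (x(t), y(t)) = (t, t**3); derivatives worked out by hand.
    xp, xpp = 1.0, 0.0          # x' = 1,     x'' = 0
    yp, ypp = 3*t**2, 6*t       # y' = 3t^2,  y'' = 6t
    return (xp * ypp - yp * xpp) / (xp**2 + yp**2) ** 1.5

k_left, k_right = signed_curvature(-1.0), signed_curvature(1.0)
print(k_left < 0 < k_right)  # True: kappa changes sign, so t = 0 is an inflection point
```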
For a smooth curve which is a graph of a twice differentiable function, an inflection point is a point on the graph at which the second derivative has an isolated zero and changes sign.
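The sign-change requirement matters: for the illustrative pair $f(x) = x^3$ and $f(x) = x^4$, both second derivatives vanish at 0, but only the first changes sign there; $x^4$ has an undulation point, not an inflection point, at 0.

```python
def f2_cubic(x):    # second derivative of x**3
    return 6 * x

def f2_quartic(x):  # second derivative of x**4
    return 12 * x**2

# x**3: f'' changes sign at 0 -> inflection point.
cubic_changes_sign = f2_cubic(-0.1) < 0 < f2_cubic(0.1)
# x**4: f'' vanishes at 0 but stays nonnegative -> undulation point.
quartic_changes_sign = f2_quartic(-0.1) < 0 < f2_quartic(0.1)
print(cubic_changes_sign, quartic_changes_sign)  # True False
```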
In algebraic geometry, a non-singular point of an algebraic curve is an inflection point if and only if the intersection number of the tangent line and the curve (at the point of tangency) is greater than 2. The main motivation of this different definition is that otherwise the set of the inflection points of a curve would not be an algebraic set. In fact, the set of the inflection points o