https://en.wikipedia.org/wiki/Summation
In mathematics, summation is the addition of a sequence of any kind of numbers, called addends or summands; the result is their sum or total. Beside numbers, other types of values can be summed as well: functions, vectors, matrices, polynomials and, in general, elements of any type of mathematical objects on which an operation denoted "+" is defined.
Summations of infinite sequences are called series. They involve the concept of limit, and are not considered in this article.
The summation of an explicit sequence is denoted as a succession of additions. For example, summation of [1, 2, 4, 2] is denoted 1 + 2 + 4 + 2, and results in 9, that is, 1 + 2 + 4 + 2 = 9. Because addition is associative and commutative, there is no need for parentheses, and the result is the same irrespective of the order of the summands. Summation of a sequence of only one element results in that element itself. Summation of an empty sequence (a sequence with no elements), by convention, results in 0.
Very often, the elements of a sequence are defined, through a regular pattern, as a function of their place in the sequence. For simple patterns, summation of long sequences may be represented with most summands replaced by ellipses. For example, summation of the first 100 natural numbers may be written as 1 + 2 + 3 + 4 + ⋯ + 99 + 100. Otherwise, summation is denoted by using Σ notation, where $\sum$ is an enlarged capital Greek letter sigma. For example, the sum of the first $n$ natural numbers can be denoted as $\sum_{i=1}^{n} i.$
For long summations, and summations of variable length (defined with ellipses or Σ notation), it is a common problem to find closed-form expressions for the result. For example,
$\sum_{i=1}^{n} i = \frac{n(n+1)}{2}.$
Although such formulas do not always exist, many summation formulas have been discovered—with some of the most common and elementary ones being listed in the remainder of this article.
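Such a closed form is easy to check numerically. The sketch below (plain Python; the helper names `sum_first` and `gauss` are illustrative, not from the article) compares direct summation of the first $n$ natural numbers against the closed-form expression $n(n+1)/2$:

```python
def sum_first(n: int) -> int:
    """Direct summation 1 + 2 + ... + n."""
    return sum(range(1, n + 1))

def gauss(n: int) -> int:
    """Closed-form expression n*(n+1)/2 for the same sum."""
    return n * (n + 1) // 2

# The two agree for every tested n, including the empty sum n = 0.
for n in (0, 1, 10, 100):
    assert sum_first(n) == gauss(n)
```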
Notation
Capital-sigma notation
Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol, $\sum$, an enlarged form of the upright capital Greek letter sigma. This is defined as
$\sum_{i=m}^{n} a_i = a_m + a_{m+1} + a_{m+2} + \cdots + a_{n-1} + a_n$
where $i$ is the index of summation; $a_i$ is an indexed variable representing each term of the sum; $m$ is the lower bound of summation, and $n$ is the upper bound of summation. The "$i = m$" under the summation symbol means that the index starts out equal to $m$. The index, $i$, is incremented by one for each successive term, stopping when $i = n$.
This is read as "sum of $a_i$, from $i = m$ to $n$".
Here is an example showing the summation of squares:
$\sum_{i=3}^{6} i^2 = 3^2 + 4^2 + 5^2 + 6^2 = 86.$
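Sigma notation translates directly into a Python generator expression; a minimal sketch, assuming the standard example with the index running from 3 to 6:

```python
# Sum of i**2 for i = 3, 4, 5, 6: 9 + 16 + 25 + 36 = 86.
total = sum(i**2 for i in range(3, 7))  # range's upper bound is exclusive
assert total == 86
```

Note that `range(3, 7)` excludes its upper bound, so it covers exactly the indices 3 through 6 of the summation.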
In general, while any variable can be used as the index of summation (provided that no ambiguity is incurred), some of the most common ones include letters such as $i$, $j$, $k$, and $n$; the latter is also often used for the upper bound of a summation.
Alternatively, index and bounds of summation are sometimes omitted from the definition of summation if the context is sufficiently clear. This applies particularly when the index runs from 1 to $n$. For example, one might write that:
$\sum a_i^2 = \sum_{i=1}^{n} a_i^2.$
Generalizations of this notation are often used, in which an arbitrary logica
https://en.wikipedia.org/wiki/Summation%20by%20parts
In mathematics, summation by parts transforms the summation of products of sequences into other summations, often simplifying the computation or (especially) estimation of certain types of sums. It is also called Abel's lemma or Abel transformation, named after Niels Henrik Abel who introduced it in 1826.
Statement
Suppose $(f_k)$ and $(g_k)$ are two sequences. Then,
$\sum_{k=m}^{n} f_k (g_{k+1} - g_k) = \left( f_{n+1} g_{n+1} - f_m g_m \right) - \sum_{k=m}^{n} g_{k+1} (f_{k+1} - f_k).$
Using the forward difference operator $\Delta$, defined by $\Delta f_k = f_{k+1} - f_k$, it can be stated more succinctly as
$\sum_{k=m}^{n} f_k \Delta g_k = \left( f_{n+1} g_{n+1} - f_m g_m \right) - \sum_{k=m}^{n} g_{k+1} \Delta f_k.$
Summation by parts is an analogue to integration by parts:
$\int f \, dg = f g - \int g \, df,$
or to Abel's summation formula:
$\sum_{k=m+1}^{n} f_k (g_k - g_{k-1}) = f_n g_n - f_m g_m - \sum_{k=m}^{n-1} g_k (f_{k+1} - f_k).$
An alternative statement is
$f_n g_n - f_m g_m = \sum_{k=m}^{n-1} f_k \Delta g_k + \sum_{k=m}^{n-1} g_k \Delta f_k + \sum_{k=m}^{n-1} \Delta f_k \Delta g_k,$
which is analogous to the integration by parts formula for semimartingales.
Although applications almost always deal with convergence of sequences, the statement is purely algebraic and will work in any field. It will also work when one sequence is in a vector space, and the other is in the relevant field of scalars.
Newton series
The formula is sometimes given in one of these (slightly different) forms
which represent a special case () of the more general rule
both result from iterated application of the initial formula. The auxiliary quantities are Newton series:
and
A particular () result is the identity
Here, $\tbinom{n}{k}$ denotes the binomial coefficient.
Method
For two given sequences $(a_n)$ and $(b_n)$, with $n \in \mathbb{N}$, one wants to study the sum of the following series:
$S_N = \sum_{n=0}^{N} a_n b_n.$
If we define $B_n = \sum_{k=0}^{n} b_k,$ then for every $n > 0$, $b_n = B_n - B_{n-1}$ and
$S_N = a_0 b_0 + \sum_{n=1}^{N} a_n (B_n - B_{n-1}).$
Finally
$S_N = a_N B_N - \sum_{n=0}^{N-1} B_n (a_{n+1} - a_n).$
This process, called an Abel transformation, can be used to prove several criteria of convergence for $S_N$.
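Since summation by parts is a purely algebraic identity, it can be verified numerically for arbitrary sequences. A sketch in plain Python (the indexing convention, with $k$ running from $m$ to $n$ and both sequences defined up to index $n+1$, matches the statement above; function names are illustrative):

```python
def sbp_lhs(f, g, m, n):
    """Left-hand side: sum of f_k * (g_{k+1} - g_k) for k = m..n."""
    return sum(f[k] * (g[k + 1] - g[k]) for k in range(m, n + 1))

def sbp_rhs(f, g, m, n):
    """Right-hand side: boundary term minus sum of g_{k+1} * (f_{k+1} - f_k)."""
    boundary = f[n + 1] * g[n + 1] - f[m] * g[m]
    return boundary - sum(g[k + 1] * (f[k + 1] - f[k]) for k in range(m, n + 1))

f = [k * k for k in range(10)]       # arbitrary test sequence
g = [2 * k + 1 for k in range(10)]   # arbitrary test sequence
assert sbp_lhs(f, g, 0, 8) == sbp_rhs(f, g, 0, 8)
```

Because the identity rests only on telescoping, the assertion holds for any integer sequences and any valid bounds.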
Similarity with an integration by parts
The formula for an integration by parts is
$\int_a^b f(x) g'(x) \, dx = \left[ f(x) g(x) \right]_a^b - \int_a^b f'(x) g(x) \, dx.$
Beside the boundary conditions, we notice that the first integral contains two multiplied functions, one which is integrated in the final integral ($g'$ becomes $g$) and one which is differentiated ($f$ becomes $f'$).
The process of the Abel transformation is similar, since one of the two initial sequences is summed ($b_n$ becomes $B_n$) and the other one is differenced ($a_n$ becomes $a_{n+1} - a_n$).
Applications
It is used to prove Kronecker's lemma, which in turn, is used to prove a version of the strong law of large numbers under variance constraints.
It may be used to prove Nicomachus's theorem that the sum of the first $n$ cubes equals the square of the sum of the first $n$ positive integers.
Summation by parts is frequently used to prove Abel's theorem and Dirichlet's test.
One can also use this technique to prove Abel's test: If $\sum b_n$ is a convergent series and $(a_n)$ a bounded monotone sequence, then $S_N = \sum_{n=0}^{N} a_n b_n$ converges.
Proof of Abel's test. Summation by parts gives
$S_M - S_N = a_M B_M - a_N B_N - \sum_{n=N}^{M-1} B_n (a_{n+1} - a_n) = (a_M - a) B_M - (a_N - a) B_N + a (B_M - B_N) - \sum_{n=N}^{M-1} B_n (a_{n+1} - a_n),$
where $a$ is the limit of $a_n$. As $\sum b_n$ is convergent, $B_N$ is bounded independently of $N$, say by $B$. As $a_n - a$ goes to zero, so do the first two terms. The third term goes to zero by the Cauchy criterion for $\sum b_n$. The remaining sum is bounded by
$\sum_{n=N}^{M-1} |B_n| \, |a_{n+1} - a_n| \le B \sum_{n=N}^{M-1} |a_{n+1} - a_n| = B \, |a_M - a_N|$
by the monotonicity of $(a_n)$, and also goes to zero as $N \to \infty$.
Using the same proof as above, one can show that if
the partial sums $B_N$ form a bounded sequence independently of $N$;
$\sum_{n=0}^{\infty} |a_{n+1} - a_n| < \infty$ (so that the sum $\sum_{n=N}^{M-1} |a_{n+1} - a_n|$ goes to zero as $N$ goes to infinity); and
$\lim_{n \to \infty} a_n = 0,$
then $S_N = \sum_{n=0}^{N} a_n b_n$ converges.
In both cases, the sum of the series satisfies:
https://en.wikipedia.org/wiki/Generalized%20permutation%20matrix
In mathematics, a generalized permutation matrix (or monomial matrix) is a matrix with the same nonzero pattern as a permutation matrix, i.e. there is exactly one nonzero entry in each row and each column. Unlike a permutation matrix, where the nonzero entry must be 1, in a generalized permutation matrix the nonzero entry can be any nonzero value. An example of a generalized permutation matrix is
$\begin{bmatrix} 0 & 0 & 3 & 0 \\ 0 & -7 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & \sqrt{2} \end{bmatrix}.$
Structure
An invertible matrix A is a generalized permutation matrix if and only if it can be written as a product of an invertible diagonal matrix D and an (implicitly invertible) permutation matrix P: i.e., A = DP.
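This decomposition is easy to exercise in code. A minimal sketch (plain Python lists, no external libraries; the particular diagonal and permutation are arbitrary choices for illustration) builds A = DP and checks the one-nonzero-per-row-and-column pattern:

```python
def matmul(a, b):
    """Naive matrix product for square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def perm_matrix(perm):
    """Permutation matrix P with P[i][perm[i]] = 1."""
    n = len(perm)
    return [[1 if j == perm[i] else 0 for j in range(n)] for i in range(n)]

D = [[3, 0, 0], [0, -7, 0], [0, 0, 5]]   # invertible diagonal part
P = perm_matrix([2, 0, 1])               # permutation part
A = matmul(D, P)                         # generalized permutation matrix

# Exactly one nonzero entry in each row and in each column.
assert all(sum(1 for x in row if x != 0) == 1 for row in A)
assert all(sum(1 for i in range(3) if A[i][j] != 0) == 1 for j in range(3))
```

Multiplying by D on the left scales row i of P by D[i][i], so the nonzero pattern of A is exactly that of P.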
Group structure
The set of n × n generalized permutation matrices with entries in a field F forms a subgroup of the general linear group GL(n, F), in which the group of nonsingular diagonal matrices Δ(n, F) forms a normal subgroup. Indeed, the generalized permutation matrices are the normalizer of the diagonal matrices, meaning that the generalized permutation matrices are the largest subgroup of GL(n, F) in which diagonal matrices are normal.
The abstract group of generalized permutation matrices is the wreath product of F× and Sn. Concretely, this means that it is the semidirect product of Δ(n, F) by the symmetric group Sn:
Sn ⋉ Δ(n, F),
where Sn acts by permuting coordinates and the diagonal matrices Δ(n, F) are isomorphic to the n-fold product (F×)n.
To be precise, the generalized permutation matrices are a (faithful) linear representation of this abstract wreath product: a realization of the abstract group as a subgroup of matrices.
Subgroups
The subgroup where all entries are 1 is exactly the permutation matrices, which is isomorphic to the symmetric group.
The subgroup where all entries are ±1 is the signed permutation matrices, which is the hyperoctahedral group.
The subgroup where the entries are mth roots of unity is isomorphic to a generalized symmetric group.
The subgroup of diagonal matrices is abelian, normal, and a maximal abelian subgroup. The quotient group is the symmetric group, and this construction is in fact the Weyl group of the general linear group: the diagonal matrices are a maximal torus in the general linear group (and are their own centralizer), the generalized permutation matrices are the normalizer of this torus, and the quotient is the Weyl group.
Properties
If a nonsingular matrix and its inverse are both nonnegative matrices (i.e. matrices with nonnegative entries), then the matrix is a generalized permutation matrix.
The determinant of a generalized permutation matrix $A = DP$ is given by
$\det A = \det D \cdot \det P = \operatorname{sgn}(\pi) \cdot d_1 d_2 \cdots d_n,$
where $\operatorname{sgn}(\pi)$ is the sign of the permutation $\pi$ associated with $P$ and $d_1, \dots, d_n$ are the diagonal elements of $D$.
Generalizations
One can generalize further by allowing the entries to lie in a ring, rather than in a field. In that case if the non-zero entries are required to be units in the ring, one again obtains a group. On the other hand, if the non-zero entries are only required to be non-zero, but not necessarily invertible, this set o
https://en.wikipedia.org/wiki/Diagonalizable%20matrix
In linear algebra, a square matrix $A$ is called diagonalizable or non-defective if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix $P$ and a diagonal matrix $D$ such that $P^{-1} A P = D$, or equivalently $A = P D P^{-1}$. (Such $P$ and $D$ are not unique.) For a finite-dimensional vector space $V$, a linear map $T : V \to V$ is called diagonalizable if there exists an ordered basis of $V$ consisting of eigenvectors of $T$. These definitions are equivalent: if $T$ has a matrix representation $A = P D P^{-1}$ as above, then the column vectors of $P$ form a basis consisting of eigenvectors of $T$, and the diagonal entries of $D$ are the corresponding eigenvalues of $T$; with respect to this eigenvector basis, $T$ is represented by $D$. Diagonalization is the process of finding the above $P$ and $D$.
Diagonalizing a matrix makes many subsequent computations easier. One can raise a diagonal matrix to a power by simply raising the diagonal entries to that power. The determinant of a diagonal matrix is simply the product of all diagonal entries. Such computations generalize easily to $A = P D P^{-1}$; for instance, $A^k = P D^k P^{-1}$.
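As a concrete sketch, take the (illustrative) matrix A = [[2, 1], [1, 2]], whose eigenvalues are 3 and 1 with eigenvectors (1, 1) and (1, -1). Its fifth power can be computed by raising only the diagonal entries to the fifth power, with exact arithmetic from the standard library:

```python
from fractions import Fraction

def matmul(a, b):
    """Naive matrix product for matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# A = P D P^{-1} for A = [[2, 1], [1, 2]]: eigenvectors in the columns of P,
# eigenvalues 3 and 1 on the diagonal of D.
P     = [[1, 1], [1, -1]]
P_inv = [[Fraction(1, 2), Fraction(1, 2)], [Fraction(1, 2), Fraction(-1, 2)]]
D5    = [[3**5, 0], [0, 1**5]]           # raising D to a power is entrywise

A5 = matmul(matmul(P, D5), P_inv)        # A**5 = P D**5 P^{-1}
assert A5 == [[122, 121], [121, 122]]
```

Computing A**5 by repeated matrix multiplication gives the same result, but the diagonalized form needs only two matrix products regardless of the exponent.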
The geometric transformation represented by a diagonalizable matrix is an inhomogeneous dilation (or anisotropic scaling), meaning that it scales the space by a different amount in different directions. In particular, the direction of each eigenvector is scaled by a factor given by the corresponding eigenvalue.
An inhomogeneous dilation is in contrast to a homogeneous dilation, which scales by the same amount in every direction.
A square matrix that is not diagonalizable is called defective. It can happen that a matrix $A$ with real entries is defective over the real numbers, meaning that $A = P D P^{-1}$ is impossible for any invertible $P$ and diagonal $D$ with real entries, but it is possible with complex entries, so that $A$ is diagonalizable over the complex numbers. For example, this is the case for a generic rotation matrix.
Many results for diagonalizable matrices hold only over an algebraically closed field (such as the complex numbers). In this case, diagonalizable matrices are dense in the space of all matrices, which means any defective matrix can be deformed into a diagonalizable matrix by a small perturbation; and the Jordan normal form theorem states that any matrix is uniquely the sum of a diagonalizable matrix and a nilpotent matrix that commute with each other. Over an algebraically closed field, diagonalizable matrices are equivalent to semi-simple matrices.
Definition
A square matrix, $A$, with entries in a field $F$ is called diagonalizable or nondefective if there exists an invertible matrix $P$ (i.e. an element of the general linear group GLn(F)) such that $P^{-1} A P$ is a diagonal matrix. Formally,
$A \in F^{n \times n} \text{ is diagonalizable} \iff \exists \, P \in \mathrm{GL}_n(F) \text{ such that } P^{-1} A P \text{ is diagonal.}$
Characterization
The fundamental fact about diagonalizable maps and matrices is expressed by the following:
An $n \times n$ matrix $A$ over a field $F$ is diagonalizable if and only if the sum of the dimensions of its eigenspaces is equal to $n$, which is the case if and only if there exists a basis of $F^n$ consisting of eigenvectors of $A$. If such a basis has been found, one can form the matrix $P$ having these basis vectors as co
https://en.wikipedia.org/wiki/Axiom%20of%20infinity
In axiomatic set theory and the branches of mathematics and philosophy that use it, the axiom of infinity is one of the axioms of Zermelo–Fraenkel set theory. It guarantees the existence of at least one infinite set, namely a set containing the natural numbers. It was first published by Ernst Zermelo as part of his set theory in 1908.
Formal statement
In the formal language of the Zermelo–Fraenkel axioms, the axiom reads:
$\exists I \, \bigl( \varnothing \in I \land \forall x \, (x \in I \Rightarrow (x \cup \{x\}) \in I) \bigr).$
In words, there is a set I (the set that is postulated to be infinite), such that the empty set is in I, and such that whenever any x is a member of I, the set formed by taking the union of x with its singleton {x} is also a member of I. Such a set is sometimes called an inductive set.
Interpretation and consequences
This axiom is closely related to the von Neumann construction of the natural numbers in set theory, in which the successor of x is defined as x ∪ {x}. If x is a set, then it follows from the other axioms of set theory that this successor is also a uniquely defined set. Successors are used to define the usual set-theoretic encoding of the natural numbers. In this encoding, zero is the empty set:
0 = {}.
The number 1 is the successor of 0:
1 = 0 ∪ {0} = {} ∪ {0} = {0} = {{}}.
Likewise, 2 is the successor of 1:
2 = 1 ∪ {1} = {0} ∪ {1} = {0, 1} = { {}, {{}} },
and so on:
3 = {0, 1, 2} = { {}, {{}}, {{}, {{}}} };
4 = {0, 1, 2, 3} = { {}, {{}}, { {}, {{}} }, { {}, {{}}, {{}, {{}}} } }.
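The von Neumann encoding can be reproduced literally with Python frozensets, where the successor operation is x ∪ {x} (a small sketch; `succ` and `numerals` are illustrative names):

```python
def succ(x: frozenset) -> frozenset:
    """Von Neumann successor: x union {x}."""
    return x | {x}

zero = frozenset()            # 0 is the empty set
numerals = [zero]
for _ in range(4):
    numerals.append(succ(numerals[-1]))

# Each numeral n is the set {0, 1, ..., n-1}, so it has n elements...
assert [len(n) for n in numerals] == [0, 1, 2, 3, 4]
# ...and contains every preceding numeral as an element.
assert all(numerals[i] in numerals[j] for j in range(5) for i in range(j))
```

`frozenset` is used rather than `set` because set elements must be hashable, and a numeral must itself be an element of its successors.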
A consequence of this definition is that every natural number is equal to the set of all preceding natural numbers. The count of elements in each set, at the top level, is the same as the represented natural number, and the nesting depth of the most deeply nested empty set {}, including its nesting in the set that represents the number of which it is a part, is also equal to the natural number that the set represents.
This construction forms the natural numbers. However, the other axioms are insufficient to prove the existence of the set of all natural numbers, $\mathbb{N}$. Therefore, its existence is taken as an axiom – the axiom of infinity. This axiom asserts that there is a set I that contains 0 and is closed under the operation of taking the successor; that is, for each element of I, the successor of that element is also in I.
Thus the essence of the axiom is:
There is a set, I, that includes all the natural numbers.
The axiom of infinity is also one of the von Neumann–Bernays–Gödel axioms.
Extracting the natural numbers from the infinite set
The infinite set I is a superset of the natural numbers. To show that the natural numbers themselves constitute a set, the axiom schema of specification can be applied to remove unwanted elements, leaving the set N of all natural numbers. This set is unique by the axiom of extensionality.
To extract the natural numbers, we need a definition of which sets are natural numbers. The natural numbers can be defined in a way that does not assume any axioms except the axiom of extensionality and
https://en.wikipedia.org/wiki/Operator%20norm
In mathematics, the operator norm measures the "size" of certain linear operators by assigning each a real number called its operator norm. Formally, it is a norm defined on the space of bounded linear operators between two given normed vector spaces. Informally, the operator norm of a linear map is the maximum factor by which it "lengthens" vectors.
Introduction and definition
Given two normed vector spaces $X$ and $Y$ (over the same base field, either the real numbers $\mathbb{R}$ or the complex numbers $\mathbb{C}$), a linear map $L : X \to Y$ is continuous if and only if there exists a real number $c$ such that
$\|L x\| \le c \, \|x\| \quad \text{for all } x \in X.$
The norm on the left is the one in $Y$ and the norm on the right is the one in $X$.
Intuitively, the continuous operator $L$ never increases the length of any vector by more than a factor of $c$. Thus the image of a bounded set under a continuous operator is also bounded. Because of this property, the continuous linear operators are also known as bounded operators.
In order to "measure the size" of $L$, one can take the infimum of the numbers $c$ such that the above inequality holds for all $x \in X$.
This number represents the maximum scalar factor by which $L$ "lengthens" vectors.
In other words, the "size" of $L$ is measured by how much it "lengthens" vectors in the "biggest" case. So we define the operator norm of $L$ as
$\|L\|_{\mathrm{op}} = \inf \{ c \ge 0 : \|L x\| \le c \, \|x\| \text{ for all } x \in X \}.$
The infimum is attained, as the set of all such $c$ is closed, nonempty, and bounded from below.
It is important to bear in mind that this operator norm depends on the choice of norms for the normed vector spaces and .
Examples
Every real $m$-by-$n$ matrix corresponds to a linear map from $\mathbb{R}^n$ to $\mathbb{R}^m$. Each pair of the plethora of (vector) norms applicable to real vector spaces induces an operator norm for all $m$-by-$n$ matrices of real numbers; these induced norms form a subset of matrix norms.
If we specifically choose the Euclidean norm on both $\mathbb{R}^n$ and $\mathbb{R}^m$, then the matrix norm given to a matrix $A$ is the square root of the largest eigenvalue of the matrix $A^{*} A$ (where $A^{*}$ denotes the conjugate transpose of $A$).
This is equivalent to assigning the largest singular value of $A$.
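A sketch of this computation without any linear algebra library: the largest eigenvalue of $A^T A$ is estimated by power iteration, and its square root is the induced 2-norm. The function name and iteration count are illustrative, and the method assumes $A^T A$ has a dominant eigenvalue, which holds for the test matrix:

```python
def operator_norm_2(A, iters=200):
    """Estimate the induced 2-norm of a real matrix A (list of rows)."""
    n = len(A[0])
    # Form M = A^T A, an n-by-n symmetric positive semidefinite matrix.
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
         for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        # One power-iteration step: v <- M v / ||M v||.
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient gives the dominant eigenvalue of M.
    lam = sum(v[i] * sum(M[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam ** 0.5

A = [[0.0, 2.0], [1.0, 0.0]]   # singular values are 2 and 1
assert abs(operator_norm_2(A) - 2.0) < 1e-9
```

Here A swaps and scales the coordinate axes, so its largest singular value, and hence its operator 2-norm, is 2.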
Passing to a typical infinite-dimensional example, consider the sequence space $\ell^2$, which is an Lp space, defined by
$\ell^2 = \Bigl\{ (a_n)_{n \ge 1} : a_n \in \mathbb{C}, \ \sum_n |a_n|^2 < \infty \Bigr\}.$
This can be viewed as an infinite-dimensional analogue of the Euclidean space $\mathbb{C}^n$.
Now consider a bounded sequence $s = (s_n)$. The sequence $s$ is an element of the space $\ell^\infty$, with a norm given by
$\|s\|_\infty = \sup_n |s_n|.$
Define an operator $T_s$ by pointwise multiplication:
$(a_n) \longmapsto (s_n \cdot a_n).$
The operator $T_s$ is bounded with operator norm
$\|T_s\|_{\mathrm{op}} = \|s\|_\infty.$
This discussion extends directly to the case where is replaced by a general space with and replaced by
Equivalent definitions
Let $L$ be a linear operator between normed spaces. The first four definitions are always equivalent, and if in addition the domain $X \neq \{0\}$ then they are all equivalent:
If $X = \{0\}$ then the sets in the last two rows will be empty, and consequently their suprema over the set $[-\infty, \infty]$ will equal $-\infty$ instead of the correct value of $0$. If the supremum is taken over the set $[0, \infty)$ instead, then the supremum of the empty set is $0$ and the formulas hold for any $L$.
Importantly, a linear operator is not, in
https://en.wikipedia.org/wiki/K-theory
In mathematics, K-theory is, roughly speaking, the study of a ring generated by vector bundles over a topological space or scheme. In algebraic topology, it is a cohomology theory known as topological K-theory. In algebra and algebraic geometry, it is referred to as algebraic K-theory. It is also a fundamental tool in the field of operator algebras. It can be seen as the study of certain kinds of invariants of large matrices.
K-theory involves the construction of families of K-functors that map from topological spaces or schemes to associated rings; these rings reflect some aspects of the structure of the original spaces or schemes. As with functors to groups in algebraic topology, the reason for this functorial mapping is that it is easier to compute some topological properties from the mapped rings than from the original spaces or schemes. Examples of results gleaned from the K-theory approach include the Grothendieck–Riemann–Roch theorem, Bott periodicity, the Atiyah–Singer index theorem, and the Adams operations.
In high energy physics, K-theory and in particular twisted K-theory have appeared in Type II string theory where it has been conjectured that they classify D-branes, Ramond–Ramond field strengths and also certain spinors on generalized complex manifolds. In condensed matter physics K-theory has been used to classify topological insulators, superconductors and stable Fermi surfaces. For more details, see K-theory (physics).
Grothendieck completion
The Grothendieck completion of an abelian monoid into an abelian group is a necessary ingredient for defining K-theory, since all definitions start by constructing an abelian monoid from a suitable category and turning it into an abelian group through this universal construction. Given an abelian monoid $(A, +)$ let $\sim$ be the relation on $A^2 = A \times A$ defined by
$(a_1, a_2) \sim (b_1, b_2)$
if there exists a $c \in A$ such that $a_1 + b_2 + c = a_2 + b_1 + c.$ Then, the set $K(A) = A^2 / \sim$ has the structure of a group where:
$[(a_1, a_2)] + [(b_1, b_2)] = [(a_1 + b_1, a_2 + b_2)].$
Equivalence classes in this group should be thought of as formal differences $a_1 - a_2$ of elements in the abelian monoid. This group is also associated with a monoid homomorphism $A \to K(A)$ given by $a \mapsto [(a, 0)],$ which has a certain universal property.
To get a better understanding of this group, consider some equivalence classes of the abelian monoid $(A, +)$. Here we will denote the identity element of $A$ by $0$, so that $[(0, 0)]$ will be the identity element of $K(A)$. First, $[(a, a)] = [(0, 0)]$ for any $a \in A$, since we can set $c = 0$ and apply the equation from the equivalence relation. This implies
$[(a, b)] + [(b, a)] = [(a + b, a + b)] = [(0, 0)],$
hence we have an additive inverse for each element in $K(A)$. This should give us the hint that we should be thinking of the equivalence classes $[(a, b)]$ as formal differences $a - b$. Another useful observation is the invariance of equivalence classes under scaling:
$[(a, b)] = [(a + k, b + k)]$ for any $k \in A.$
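As a concrete sketch, the completion of the monoid (ℕ, +) can be modeled with pairs standing for formal differences; the construction recovers the integers. Plain Python, with illustrative function names; since ℕ is cancellative, the auxiliary element c can be dropped from the comparison:

```python
def equivalent(p, q):
    """(a1, a2) ~ (b1, b2) iff a1 + b2 + c == a2 + b1 + c for some c;
    in a cancellative monoid such as N this reduces to a1 + b2 == a2 + b1."""
    return p[0] + q[1] == p[1] + q[0]

def add(p, q):
    """Componentwise addition of representatives."""
    return (p[0] + q[0], p[1] + q[1])

def inverse(p):
    """Swapping components gives the additive inverse class."""
    return (p[1], p[0])

two       = (5, 3)            # represents the formal difference 5 - 3 = 2
minus_two = inverse(two)      # represents -2
assert equivalent(add(two, minus_two), (0, 0))   # 2 + (-2) ~ 0
assert equivalent((7, 0), add((3, 1), (5, 0)))   # 7 ~ 2 + 5
```

The two assertions exercise exactly the properties derived above: each class has an additive inverse, and classes behave as formal differences.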
The Grothendieck completion can be viewed as a functor $K : \mathbf{AbMon} \to \mathbf{AbGrp},$ and it has the property that it is left adjoint to the corresponding forgetful functor $U : \mathbf{AbGrp} \to \mathbf{AbMon}.$ That means that, given a morphism $\phi : A \to U(B)$ of an abelian monoid $A$ to the underlying abelian monoid of an abelian group $B,$ there exists a unique abelian group morphism $K(A) \to B.$
https://en.wikipedia.org/wiki/Abelian%20variety
In mathematics, particularly in algebraic geometry, complex analysis and algebraic number theory, an abelian variety is a projective algebraic variety that is also an algebraic group, i.e., has a group law that can be defined by regular functions. Abelian varieties are at the same time among the most studied objects in algebraic geometry and indispensable tools for much research on other topics in algebraic geometry and number theory.
An abelian variety can be defined by equations having coefficients in any field; the variety is then said to be defined over that field. Historically the first abelian varieties to be studied were those defined over the field of complex numbers. Such abelian varieties turn out to be exactly those complex tori that can be holomorphically embedded into a complex projective space.
Abelian varieties defined over algebraic number fields are a special case, which is important also from the viewpoint of number theory. Localization techniques lead naturally from abelian varieties defined over number fields to ones defined over finite fields and various local fields. Since a number field is the fraction field of a Dedekind domain, for any nonzero prime of this Dedekind domain there is a map from the Dedekind domain to its quotient by the prime, which is a finite field for all finite primes. This induces a map from the fraction field to any such finite field. Given a curve with equation defined over the number field, we can apply this map to the coefficients to get a curve defined over some finite field, where the choices of finite field correspond to the finite primes of the number field.
Abelian varieties appear naturally as Jacobian varieties (the connected components of zero in Picard varieties) and Albanese varieties of other algebraic varieties. The group law of an abelian variety is necessarily commutative and the variety is non-singular. An elliptic curve is an abelian variety of dimension 1. Abelian varieties have Kodaira dimension 0.
History and motivation
In the early nineteenth century, the theory of elliptic functions succeeded in giving a basis for the theory of elliptic integrals, and this left open an obvious avenue of research. The standard forms for elliptic integrals involved the square roots of cubic and quartic polynomials. When those were replaced by polynomials of higher degree, say quintics, what would happen?
In the work of Niels Abel and Carl Jacobi, the answer was formulated: this would involve functions of two complex variables, having four independent periods (i.e. period vectors). This gave the first glimpse of an abelian variety of dimension 2 (an abelian surface): what would now be called the Jacobian of a hyperelliptic curve of genus 2.
After Abel and Jacobi, some of the most important contributors to the theory of abelian functions were Riemann, Weierstrass, Frobenius, Poincaré and Picard. The subject was very popular at the time, already having a large li
https://en.wikipedia.org/wiki/Riemann%E2%80%93Roch%20theorem
The Riemann–Roch theorem is an important theorem in mathematics, specifically in complex analysis and algebraic geometry, for the computation of the dimension of the space of meromorphic functions with prescribed zeros and allowed poles. It relates the complex analysis of a connected compact Riemann surface with the surface's purely topological genus g, in a way that can be carried over into purely algebraic settings.
Initially proved as Riemann's inequality by Riemann (1857), the theorem reached its definitive form for Riemann surfaces after work of Riemann's short-lived student Gustav Roch (1865). It was later generalized to algebraic curves, to higher-dimensional varieties and beyond.
Preliminary notions
A Riemann surface $X$ is a topological space that is locally homeomorphic to an open subset of $\mathbb{C}$, the set of complex numbers. In addition, the transition maps between these open subsets are required to be holomorphic. The latter condition allows one to transfer the notions and methods of complex analysis dealing with holomorphic and meromorphic functions on $\mathbb{C}$ to the surface $X$. For the purposes of the Riemann–Roch theorem, the surface $X$ is always assumed to be compact. Colloquially speaking, the genus $g$ of a Riemann surface is its number of handles; for example the genus of the Riemann surface shown at the right is three. More precisely, the genus is defined as half of the first Betti number, i.e., half of the $\mathbb{C}$-dimension of the first singular homology group $H_1(X, \mathbb{C})$ with complex coefficients. The genus classifies compact Riemann surfaces up to homeomorphism, i.e., two such surfaces are homeomorphic if and only if their genus is the same. Therefore, the genus is an important topological invariant of a Riemann surface. On the other hand, Hodge theory shows that the genus coincides with the $\mathbb{C}$-dimension of the space of holomorphic one-forms on $X$, so the genus also encodes complex-analytic information about the Riemann surface.
A divisor is an element of the free abelian group on the points of the surface. Equivalently, a divisor is a finite linear combination of points of the surface with integer coefficients.
Any meromorphic function $f$ gives rise to a divisor denoted $(f)$ defined as
$(f) := \sum_{z_\nu \in R(f)} s_\nu z_\nu,$
where $R(f)$ is the set of all zeroes and poles of $f$, and $s_\nu$ is given by
$s_\nu := a$ if $z_\nu$ is a zero of order $a$, and $s_\nu := -a$ if $z_\nu$ is a pole of order $a$.
The set $R(f)$ is known to be finite; this is a consequence of $X$ being compact and the fact that the zeros of a (non-zero) holomorphic function do not have an accumulation point. Therefore, $(f)$ is well-defined. Any divisor of this form is called a principal divisor. Two divisors that differ by a principal divisor are called linearly equivalent. The divisor of a meromorphic 1-form is defined similarly. A divisor of a global meromorphic 1-form is called the canonical divisor (usually denoted $K$). Any two meromorphic 1-forms will yield linearly equivalent divisors, so the canonical divisor is uniquely determined up to linear equivalence (hence "the" canonical divisor).
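A divisor lends itself to a very small computational sketch: a dict mapping points to integer coefficients. The example function f(x) = x³/(x − 1) on the Riemann sphere is an assumption made here for illustration: it has a zero of order 3 at 0, a simple pole at 1, and a pole of order 2 at infinity (since the numerator's degree exceeds the denominator's by 2). As expected for a principal divisor on a compact surface, the degree is 0:

```python
# Divisor of the (assumed) example f(x) = x**3 / (x - 1) on the Riemann
# sphere: {point: coefficient}, positive for zeros, negative for poles.
div_f = {0: 3, 1: -1, "inf": -2}

def degree(divisor):
    """Degree of a divisor: the sum of its coefficients."""
    return sum(divisor.values())

def add_divisors(d1, d2):
    """Pointwise sum in the free abelian group on points of the surface."""
    out = {p: d1.get(p, 0) + d2.get(p, 0) for p in set(d1) | set(d2)}
    return {p: c for p, c in out.items() if c != 0}

# A principal divisor on a compact surface has degree 0.
assert degree(div_f) == 0
# Divisors of products add: (f*g) = (f) + (g), e.g. with g(x) = x - 1.
div_g = {1: 1, "inf": -1}
assert degree(add_divisors(div_f, div_g)) == 0
```

The string `"inf"` stands in for the point at infinity; any hashable label for a point of the surface would do.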
The symbol $\deg(D)$ denotes the degree (occasionally also called index) of the divisor $D$, i.e. the sum of t
https://en.wikipedia.org/wiki/Partial%20fraction%20decomposition
In algebra, the partial fraction decomposition or partial fraction expansion of a rational fraction (that is, a fraction such that the numerator and the denominator are both polynomials) is an operation that consists of expressing the fraction as a sum of a polynomial (possibly zero) and one or several fractions with a simpler denominator.
The importance of the partial fraction decomposition lies in the fact that it provides algorithms for various computations with rational functions, including the explicit computation of antiderivatives, Taylor series expansions, inverse Z-transforms, and inverse Laplace transforms. The concept was discovered independently in 1702 by both Johann Bernoulli and Gottfried Leibniz.
In symbols, the partial fraction decomposition of a rational fraction of the form $\frac{f(x)}{g(x)},$ where $f$ and $g$ are polynomials, is its expression as
$\frac{f(x)}{g(x)} = p(x) + \sum_j \frac{f_j(x)}{g_j(x)},$
where
$p(x)$ is a polynomial, and, for each $j$,
the denominator $g_j(x)$ is a power of an irreducible polynomial (that is not factorable into polynomials of positive degrees), and
the numerator $f_j(x)$ is a polynomial of a smaller degree than the degree of this irreducible polynomial.
When explicit computation is involved, a coarser decomposition is often preferred, which consists of replacing "irreducible polynomial" by "square-free polynomial" in the description of the outcome. This allows replacing polynomial factorization by the much easier-to-compute square-free factorization. This is sufficient for most applications, and avoids introducing irrational coefficients when the coefficients of the input polynomials are integers or rational numbers.
Basic principles
Let
$R = \frac{F}{G}$
be a rational fraction, where $F$ and $G$ are univariate polynomials in the indeterminate $x$ over a field. The existence of the partial fraction decomposition can be proved by applying inductively the following reduction steps.
Polynomial part
There exist two polynomials $E$ and $F_1$ such that
$\frac{F}{G} = E + \frac{F_1}{G}$
and
$\deg F_1 < \deg G,$
where $\deg P$ denotes the degree of the polynomial $P$.
This results immediately from the Euclidean division of $F$ by $G$, which asserts the existence of $E$ and $F_1$ such that $F = E G + F_1$ and $\deg F_1 < \deg G.$
This allows supposing in the next steps that $\deg F < \deg G.$
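The Euclidean division step can be sketched directly on coefficient lists (lowest degree first), using exact rational arithmetic from the standard library; the function name is illustrative:

```python
from fractions import Fraction

def poly_divmod(F, G):
    """Euclidean division F = E*G + F1 with deg F1 < deg G.
    Polynomials are coefficient lists, lowest degree first."""
    F = [Fraction(c) for c in F]
    G = [Fraction(c) for c in G]
    E = [Fraction(0)] * max(len(F) - len(G) + 1, 1)
    while len(F) >= len(G) and any(F):
        shift = len(F) - len(G)
        coef = F[-1] / G[-1]          # eliminate the leading term of F
        E[shift] = coef
        for i, g in enumerate(G):
            F[shift + i] -= coef * g
        while F and F[-1] == 0:       # drop trailing zero coefficients
            F.pop()
    return E, F                        # quotient E, remainder F1

# F = x^3 + 2x + 1, G = x^2 + 1  ->  E = x, F1 = x + 1
E, F1 = poly_divmod([1, 2, 0, 1], [1, 0, 1])
assert E == [0, 1] and F1 == [1, 1]
```

Each loop iteration cancels the current leading term of F, so the degree strictly decreases and the loop terminates with deg F1 < deg G.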
Factors of the denominator
If $\deg F < \deg G$ and
$G = G_1 G_2,$
where $G_1$ and $G_2$ are coprime polynomials, then there exist polynomials $F_1$ and $F_2$ such that
$\frac{F}{G} = \frac{F_1}{G_1} + \frac{F_2}{G_2}$
and
$\deg F_1 < \deg G_1 \quad \text{and} \quad \deg F_2 < \deg G_2.$
This can be proved as follows. Bézout's identity asserts the existence of polynomials $C$ and $D$ such that
$C G_1 + D G_2 = 1$
(by hypothesis, $1$ is a greatest common divisor of $G_1$ and $G_2$).
Let $D F = G_1 Q + F_1$ with $\deg F_1 < \deg G_1$ be the Euclidean division of $D F$ by $G_1.$ Setting $F_2 = C F + Q G_2,$ one gets
$\frac{F}{G} = \frac{F (C G_1 + D G_2)}{G_1 G_2} = \frac{D F}{G_1} + \frac{C F}{G_2} = \frac{F_1}{G_1} + Q + \frac{C F}{G_2} = \frac{F_1}{G_1} + \frac{F_2}{G_2}.$
It remains to show that $\deg F_2 < \deg G_2.$ By reducing the last sum of fractions to a common denominator, one gets
$F = F_2 G_1 + F_1 G_2,$
and thus
$\deg F_2 = \deg (F - F_1 G_2) - \deg G_1 < \deg G - \deg G_1 = \deg G_2.$
Powers in the denominator
Using the preceding decomposition inductively one gets fractions of the form $\frac{F}{p^k},$ with $\deg F < \deg p^k,$ where $p$ is an irreducible polynomial. If $k > 1,$ one can decompose further, by using that an irreducible polynomial is a square-free polynomial, that is, $1$ is a greatest common divisor of the polynomial and its derivative. If $p'$ is the derivative of $p,$ Bézout's identity provides polynomials $C$ and $D$ such that $C p + D p' = 1$ and thus $F = F C p + F D p'.$ Euclidean divis
https://en.wikipedia.org/wiki/Free%20abelian%20group
In mathematics, a free abelian group is an abelian group with a basis. Being an abelian group means that it is a set with an addition operation that is associative, commutative, and invertible. A basis, also called an integral basis, is a subset such that every element of the group can be uniquely expressed as an integer combination of finitely many basis elements. For instance the two-dimensional integer lattice forms a free abelian group, with coordinatewise addition as its operation, and with the two points (1,0) and (0,1) as its basis. Free abelian groups have properties which make them similar to vector spaces, and may equivalently be described as the free modules over the integers. Lattice theory studies free abelian subgroups of real vector spaces. In algebraic topology, free abelian groups are used to define chain groups, and in algebraic geometry they are used to define divisors.
The elements of a free abelian group with basis $B$ may be described in several equivalent ways. These include formal sums, which are expressions of the form $a_1 e_1 + a_2 e_2 + \cdots + a_k e_k$ where each $a_i$ is a nonzero integer, each $e_i$ is a distinct basis element, and the sum has finitely many terms. Alternatively, the elements of a free abelian group may be thought of as signed multisets containing finitely many elements of $B$, with the multiplicity of an element in the multiset equal to its coefficient in the formal sum.
Another way to represent an element of a free abelian group is as a function from $B$ to the integers with finitely many nonzero values; for this functional representation, the group operation is the pointwise addition of functions.
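This functional representation is essentially a dict with integer values; the group operation is pointwise addition, with zero coefficients dropped so that elements stay finitely supported (a minimal Python sketch; names are illustrative):

```python
def add(u, v):
    """Pointwise sum of two finitely supported functions basis -> Z."""
    out = {b: u.get(b, 0) + v.get(b, 0) for b in set(u) | set(v)}
    return {b: c for b, c in out.items() if c != 0}  # drop zero coefficients

def neg(u):
    """Additive inverse: negate every coefficient."""
    return {b: -c for b, c in u.items()}

x = {"e1": 3, "e2": -1}        # the formal sum 3*e1 - e2
y = {"e2": 1, "e3": 4}         # the formal sum e2 + 4*e3
assert add(x, y) == {"e1": 3, "e3": 4}
assert add(x, neg(x)) == {}    # the identity element is the empty sum
```

Dropping zero coefficients keeps the representation canonical, so dict equality coincides with equality in the group.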
Every set $B$ has a free abelian group with $B$ as its basis. This group is unique in the sense that every two free abelian groups with the same basis are isomorphic. Instead of constructing it by describing its individual elements, a free abelian group with basis $B$ may be constructed as a direct sum of copies of the additive group of the integers, with one copy per member of $B$. Alternatively, the free abelian group with basis $B$ may be described by a presentation with the elements of $B$ as its generators and with the commutators of pairs of members as its relators. The rank of a free abelian group is the cardinality of a basis; every two bases for the same group give the same rank, and every two free abelian groups with the same rank are isomorphic. Every subgroup of a free abelian group is itself free abelian; this fact allows a general abelian group to be understood as a quotient of a free abelian group by "relations", or as a cokernel of an injective homomorphism between free abelian groups. The only free abelian groups that are free groups are the trivial group and the infinite cyclic group.
Definition and examples
A free abelian group is an abelian group that has a basis. Here, being an abelian group means that it is described by a set of its elements and a binary operation conventionally denoted as an additive group by the symbol + (although it need not be the usual addition of numbers).
|
https://en.wikipedia.org/wiki/Uniform%20boundedness%20principle
|
In mathematics, the uniform boundedness principle or Banach–Steinhaus theorem is one of the fundamental results in functional analysis.
Together with the Hahn–Banach theorem and the open mapping theorem, it is considered one of the cornerstones of the field.
In its basic form, it asserts that for a family of continuous linear operators (and thus bounded operators) whose domain is a Banach space, pointwise boundedness is equivalent to uniform boundedness in operator norm.
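The symbols in the statement above were lost in extraction; a standard formulation (a reconstruction, with X a Banach space, Y a normed space, and F a family of bounded operators from X to Y) is:

```latex
\text{If } \sup_{T \in F} \|T x\|_{Y} < \infty \ \text{ for every } x \in X,
\qquad \text{then} \qquad \sup_{T \in F} \|T\|_{B(X,Y)} < \infty .
```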
The theorem was first published in 1927 by Stefan Banach and Hugo Steinhaus, but it was also proven independently by Hans Hahn.
Theorem
The completeness of enables the following short proof, using the Baire category theorem.
There are also simple proofs not using the Baire theorem.
Corollaries
The above corollary does not claim that converges to in operator norm, that is, uniformly on bounded sets. However, since is bounded in operator norm, and the limit operator is continuous, a standard "3-ε" estimate shows that converges to uniformly on compact sets.
Indeed, the elements of define a pointwise bounded family of continuous linear forms on the Banach space which is the continuous dual space of
By the uniform boundedness principle, the norms of elements of as functionals on that is, norms in the second dual are bounded.
But for every the norm in the second dual coincides with the norm in by a consequence of the Hahn–Banach theorem.
Let denote the continuous operators from to endowed with the operator norm.
If the collection is unbounded in then the uniform boundedness principle implies:
In fact, is dense in . The complement of in is the countable union of closed sets
By the argument used in proving the theorem, each is nowhere dense, i.e. the subset is .
Therefore is the complement of a subset of first category in a Baire space. By definition of a Baire space, such sets (called comeagre or residual sets) are dense.
Such reasoning leads to the principle of condensation of singularities, which can be formulated as follows:
Example: pointwise convergence of Fourier series
Let be the circle, and let be the Banach space of continuous functions on with the uniform norm. Using the uniform boundedness principle, one can show that there exists an element in for which the Fourier series does not converge pointwise.
For its Fourier series is defined by
and the N-th symmetric partial sum is
where is the -th Dirichlet kernel. Fix and consider the convergence of
The functional defined by
is bounded.
The norm of in the dual of is the norm of the signed measure namely
It can be verified that
So the collection is unbounded in the dual of
Therefore, by the uniform boundedness principle, for any the set of continuous functions whose Fourier series diverges at is dense in
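The unboundedness of the norms used in this argument can be checked numerically: the dual norm of the N-th partial-sum functional is the Lebesgue constant, the averaged L1 norm of the Dirichlet kernel, which grows like (4/π²) log N. A quick numerical sketch (function name illustrative):

```python
import math

def dirichlet_l1_norm(N, samples=200_000):
    """Estimate (1/2π) ∫_{-π}^{π} |D_N(t)| dt, where
    D_N(t) = sin((N + 1/2) t) / sin(t/2) is the Dirichlet kernel."""
    total = 0.0
    step = 2 * math.pi / samples
    for k in range(samples):
        t = -math.pi + (k + 0.5) * step      # midpoint rule; never hits t = 0
        total += abs(math.sin((N + 0.5) * t) / math.sin(t / 2))
    return total * step / (2 * math.pi)

# The estimates grow without bound as N increases (≈ (4/π²) log N + O(1)):
for N in (1, 10, 100, 1000):
    print(N, round(dirichlet_l1_norm(N), 3))
```

Since these dual norms are unbounded, the uniform boundedness principle forces some continuous function whose partial sums at the fixed point are unbounded.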
More can be concluded by applying the principle of condensation of singularities.
Let be a dense sequence in
Define in a similar way as above. The principle of condensation of singularities then says that the set of continuous functions whose Fourier series diverges at each is dense in .
|
https://en.wikipedia.org/wiki/Reciprocity%20law
|
In mathematics, a reciprocity law is a generalization of the law of quadratic reciprocity to arbitrary monic irreducible polynomials with integer coefficients. Recall that the first reciprocity law, quadratic reciprocity, determines when an irreducible polynomial splits into linear terms when reduced mod . That is, it determines for which prime numbers the relation holds. A general reciprocity law is the rule determining for which primes the polynomial splits into linear factors, denoted .
There are several different ways to express reciprocity laws. The early reciprocity laws found in the 19th century were usually expressed in terms of a power residue symbol (p/q) generalizing the quadratic reciprocity symbol, that describes when a prime number is an nth power residue modulo another prime, and gave a relation between (p/q) and (q/p). Hilbert reformulated the reciprocity laws as saying that a product over p of Hilbert norm residue symbols (a,b/p), taking values in roots of unity, is equal to 1. Artin reformulated the reciprocity laws as a statement that the Artin symbol from ideals (or ideles) to elements of a Galois group is trivial on a certain subgroup. Several more recent generalizations express reciprocity laws using cohomology of groups or representations of adelic groups or algebraic K-groups, and their relationship with the original quadratic reciprocity law can be hard to see.
The name reciprocity law was coined by Legendre in his 1785 publication Recherches d'analyse indéterminée, because odd primes reciprocate or not in the sense of quadratic reciprocity stated below according to their residue classes . This reciprocating behavior does not generalize well, but the equivalent splitting behavior does. The name reciprocity law is still used in the more general context of splittings.
Quadratic reciprocity
In terms of the Legendre symbol, the law of quadratic reciprocity states
for positive odd primes we have
Using the definition of the Legendre symbol this is equivalent to a more elementary statement about equations.
For positive odd primes the solubility of for determines the solubility of for and vice versa by the comparatively simple criterion whether is or .
By the factor theorem and the behavior of degrees in factorizations the solubility of such quadratic congruence equations is equivalent to the splitting of associated quadratic polynomials over a residue ring into linear factors. In this terminology the law of quadratic reciprocity is stated as follows.
For positive odd primes the splitting of the polynomial in -residues determines the splitting of the polynomial in -residues and vice versa through the quantity .
This establishes the bridge from the name giving reciprocating behavior of primes introduced by Legendre to the splitting behavior of polynomials used in the generalizations.
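The elementary statement can be verified computationally. The sketch below (function names illustrative) evaluates the Legendre symbol via Euler's criterion and checks the reciprocity relation on a few prime pairs:

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion:
    a^((p-1)/2) ≡ ±1 (mod p)."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def reciprocity_holds(p, q):
    """Quadratic reciprocity for distinct odd primes:
    (p/q)(q/p) = (-1)^(((p-1)/2) * ((q-1)/2))."""
    return legendre(p, q) * legendre(q, p) == (-1) ** (((p - 1) // 2) * ((q - 1) // 2))

for p, q in [(3, 5), (3, 7), (5, 7), (11, 13), (7, 19)]:
    assert reciprocity_holds(p, q)
print("quadratic reciprocity verified on sample prime pairs")
```

In particular, when p or q is congruent to 1 mod 4 the exponent is even and the two symbols agree; when both are congruent to 3 mod 4 they differ in sign.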
Cubic reciprocity
The law of cubic reciprocity for Eisenstein integers states that if α and β are primary, then α is a cubic residue modulo β if and only if β is a cubic residue modulo α.
|
https://en.wikipedia.org/wiki/Algebraic%20group
|
In mathematics, an algebraic group is an algebraic variety endowed with a group structure that is compatible with its structure as an algebraic variety. Thus the study of algebraic groups belongs both to algebraic geometry and group theory.
Many groups of geometric transformations are algebraic groups; for example, orthogonal groups, general linear groups, projective groups, Euclidean groups, etc. Many matrix groups are also algebraic. Other algebraic groups occur naturally in algebraic geometry, such as elliptic curves and Jacobian varieties.
An important class of algebraic groups is given by the affine algebraic groups, those whose underlying algebraic variety is an affine variety; they are exactly the algebraic subgroups of the general linear group, and are therefore also called linear algebraic groups. Another class is formed by the abelian varieties, which are the algebraic groups whose underlying variety is a projective variety. Chevalley's structure theorem states that every algebraic group can be constructed from groups in those two families.
Definitions
Formally, an algebraic group over a field is an algebraic variety over , together with a distinguished element (the neutral element), and regular maps (the multiplication operation) and (the inversion operation) that satisfy the group axioms.
Examples
The additive group: the affine line endowed with addition and opposite as group operations is an algebraic group. It is called the additive group (because its -points are isomorphic as a group to the additive group of ), and usually denoted by .
The multiplicative group: Let be the affine variety defined by the equation in the affine plane . The functions and are regular on , and they satisfy the group axioms (with neutral element ). The algebraic group is called multiplicative group, because its -points are isomorphic to the multiplicative group of the field (an isomorphism is given by ; note that the subset of invertible elements does not define an algebraic subvariety in ).
The special linear group is an algebraic group: it is given by the algebraic equation in the affine space (identified with the space of -by- matrices), multiplication of matrices is regular and the formula for the inverse in terms of the adjugate matrix shows that inversion is regular as well on matrices with determinant 1.
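The adjugate argument can be checked concretely in the 2-by-2 case: the inverse of a determinant-1 matrix is given by polynomial (hence regular) expressions in its entries. A minimal sketch (helper names illustrative):

```python
# For a 2x2 matrix with det = 1, inverse = adjugate / det = adjugate,
# so inversion on SL_2 is given by polynomials in the matrix entries.

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def adjugate2(m):
    (a, b), (c, d) = m
    return [[d, -b], [-c, a]]

def matmul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

g = [[2, 3], [1, 2]]          # det = 2*2 - 3*1 = 1, so g lies in SL_2(Z)
assert det2(g) == 1
g_inv = adjugate2(g)          # the adjugate is the inverse here
assert matmul2(g, g_inv) == [[1, 0], [0, 1]]
print(g_inv)
```

For general n the same holds with the adjugate built from (n-1)-by-(n-1) minors, which are again polynomials in the entries.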
The general linear group of invertible matrices over a field is an algebraic group. It can be realised as a subvariety in in much the same way as the multiplicative group in the previous example.
A non-singular cubic curve in the projective plane can be endowed with a geometrically defined group law that makes it into an algebraic group (see elliptic curve).
Related definitions
An algebraic subgroup of an algebraic group is a subvariety of that is also a subgroup of (that is, the maps and defining the group structure map and , respectively, into ).
A morphism between two algebraic groups is a regular map that is also a group homomorphism.
|
https://en.wikipedia.org/wiki/Adele%20ring
|
In mathematics, the adele ring of a global field (also adelic ring, ring of adeles or ring of adèles) is a central object of class field theory, a branch of algebraic number theory. It is the restricted product of all the completions of the global field and is an example of a self-dual topological ring.
An adele derives from a particular kind of idele. "Idele" derives from the French "idèle" and was coined by the French mathematician Claude Chevalley. The word stands for 'ideal element' (abbreviated: id.el.). Adele (French: "adèle") stands for 'additive idele' (that is, additive ideal element).
The ring of adeles allows one to describe the Artin reciprocity law, which is a generalisation of quadratic reciprocity, and other reciprocity laws over finite fields. In addition, it is a classical theorem from Weil that -bundles on an algebraic curve over a finite field can be described in terms of adeles for a reductive group . Adeles are also connected with the adelic algebraic groups and adelic curves.
The study of geometry of numbers over the ring of adeles of a number field is called adelic geometry.
Definition
Let be a global field (a finite extension of or the function field of a curve over a finite field). The adele ring of is the subring
consisting of the tuples where lies in the subring for all but finitely many places . Here the index ranges over all valuations of the global field , is the completion at that valuation and the corresponding valuation ring.
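With the stripped symbols restored in standard notation (K the global field, v its places, K_v the completion at v, and O_v the valuation ring), the definition reads as a reconstruction:

```latex
\mathbb{A}_K \;=\; {\prod_v}' \,(K_v, \mathcal{O}_v)
\;=\; \Bigl\{\, (x_v)_v \in \prod_v K_v \;:\;
x_v \in \mathcal{O}_v \ \text{for all but finitely many } v \,\Bigr\}.
```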
Motivation
The ring of adeles solves the technical problem of "doing analysis on the rational numbers ." The classical solution was to pass to the standard metric completion and use analytic techniques there. But, as was learned later on, there are many more absolute values other than the Euclidean distance, one for each prime number , as was classified by Ostrowski. The Euclidean absolute value, denoted , is only one among many others, , but the ring of adeles makes it possible to comprehend and . This has the advantage of enabling analytic techniques while also retaining information about the primes, since their structure is embedded by the restricted infinite product.
The purpose of the adele ring is to look at all completions of at once. The adele ring is defined with the restricted product, rather than the Cartesian product. There are two reasons for this:
For each element of the valuations are zero for almost all places, i.e., for all places except a finite number. So, the global field can be embedded in the restricted product.
The restricted product is a locally compact space, while the Cartesian product is not. Therefore, there cannot be any application of harmonic analysis to the Cartesian product. This is because local compactness ensures the existence (and uniqueness) of Haar measure, a crucial tool in analysis on groups in general.
Why the restricted product?
The restricted infinite product is a required technical condition for giving the number field a
|
https://en.wikipedia.org/wiki/Restricted%20product
|
In mathematics, the restricted product is a construction in the theory of topological groups.
Let be an index set; a finite subset of . If is a locally compact group for each , and is an open compact subgroup for each , then the restricted product
is the subset of the product of the 's consisting of all elements such that for all but finitely many .
This group is given the topology whose basis of open sets are those of the form
where is open in and for all but finitely many .
One can easily prove that the restricted product is itself a locally compact group. The best known example of this construction is that of the adele ring and idele group of a global field.
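The defining "all but finitely many" condition can be modeled with finitely supported data. In the sketch below (names illustrative), indices are primes p, coordinates are rationals standing in for elements of Q_p, and the compact open subgroup at p is the p-adic integers Z_p, which a rational belongs to exactly when p does not divide its denominator:

```python
from fractions import Fraction

def restricted_exceptions(coords, is_in_subgroup):
    """Indices whose coordinate leaves its subgroup. A tuple belongs to the
    restricted product iff this exception set is finite — automatic here,
    since only finitely many non-trivial coordinates are stored."""
    return sorted(i for i, x in coords.items() if not is_in_subgroup(i, x))

def in_Zp(p, x):
    # x lies in Z_p iff p does not divide the reduced denominator of x
    return Fraction(x).denominator % p != 0

coords = {2: Fraction(1, 2), 3: Fraction(5), 5: Fraction(7, 10)}
print(restricted_exceptions(coords, in_Zp))   # exceptions at p = 2 and p = 5
```

An adele-like tuple built this way is always admissible; the restricted product exists precisely to exclude tuples with infinitely many such exceptions.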
See also
Direct sum
References
Topological groups
|
https://en.wikipedia.org/wiki/Ringed%20space
|
In mathematics, a ringed space is a family of (commutative) rings parametrized by open subsets of a topological space together with ring homomorphisms that play roles of restrictions. Precisely, it is a topological space equipped with a sheaf of rings called a structure sheaf. It is an abstraction of the concept of the rings of continuous (scalar-valued) functions on open subsets.
Among ringed spaces, especially important and prominent is a locally ringed space: a ringed space in which the analogy between the stalk at a point and the ring of germs of functions at a point is valid.
Ringed spaces appear in analysis as well as complex algebraic geometry and the scheme theory of algebraic geometry.
Note: In the definition of a ringed space, most expositions tend to restrict the rings to be commutative rings, including Hartshorne and Wikipedia. "Éléments de géométrie algébrique", on the other hand, does not impose the commutativity assumption, although the book mostly considers the commutative case.
Definitions
A ringed space is a topological space together with a sheaf of rings on . The sheaf is called the structure sheaf of .
A locally ringed space is a ringed space such that all stalks of are local rings (i.e. they have unique maximal ideals). Note that it is not required that be a local ring for every open set ; in fact, this is almost never the case.
Examples
An arbitrary topological space can be considered a locally ringed space by taking to be the sheaf of real-valued (or complex-valued) continuous functions on open subsets of . The stalk at a point can be thought of as the set of all germs of continuous functions at ; this is a local ring with the unique maximal ideal consisting of those germs whose value at is .
If is a manifold with some extra structure, we can also take the sheaf of differentiable, or complex-analytic functions. Both of these give rise to locally ringed spaces.
If is an algebraic variety carrying the Zariski topology, we can define a locally ringed space by taking to be the ring of rational mappings defined on the Zariski-open set that do not blow up (become infinite) within . The important generalization of this example is that of the spectrum of any commutative ring; these spectra are also locally ringed spaces. Schemes are locally ringed spaces obtained by "gluing together" spectra of commutative rings.
Morphisms
A morphism from to is a pair , where is a continuous map between the underlying topological spaces, and is a morphism from the structure sheaf of to the direct image of the structure sheaf of . In other words, a morphism from to is given by the following data:
a continuous map
a family of ring homomorphisms for every open set of which commute with the restriction maps. That is, if are two open subsets of , then the following diagram must commute (the vertical maps are the restriction homomorphisms):
There is an additional requirement for morphisms between locally ringed sp
|
https://en.wikipedia.org/wiki/Inaccessible%20cardinal
|
In set theory, an uncountable cardinal is inaccessible if it cannot be obtained from smaller cardinals by the usual operations of cardinal arithmetic. More precisely, a cardinal is strongly inaccessible if it satisfies the following three conditions: it is uncountable, it is not a sum of fewer than cardinals smaller than , and implies .
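The three conditions, whose symbols were stripped in extraction, are standardly written as follows (a reconstruction):

```latex
\kappa > \aleph_0, \qquad
\operatorname{cf}(\kappa) = \kappa \quad (\text{regularity: not a sum of fewer than } \kappa \text{ smaller cardinals}), \qquad
\forall \lambda < \kappa : \ 2^{\lambda} < \kappa \quad (\text{strong limit}).
```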
The term "inaccessible cardinal" is ambiguous. Until about 1950, it meant "weakly inaccessible cardinal", but since then it usually means "strongly inaccessible cardinal". An uncountable cardinal is weakly inaccessible if it is a regular weak limit cardinal. It is strongly inaccessible, or just inaccessible, if it is a regular strong limit cardinal (this is equivalent to the definition given above). Some authors do not require weakly and strongly inaccessible cardinals to be uncountable (in which case is strongly inaccessible). Weakly inaccessible cardinals were introduced by , and strongly inaccessible ones by and ; in the latter they were referred to, along with , as Grenzzahlen.
Every strongly inaccessible cardinal is also weakly inaccessible, as every strong limit cardinal is also a weak limit cardinal. If the generalized continuum hypothesis holds, then a cardinal is strongly inaccessible if and only if it is weakly inaccessible.
(aleph-null) is a regular strong limit cardinal. Assuming the axiom of choice, every other infinite cardinal number is regular or a (weak) limit. However, only a rather large cardinal number can be both regular and a (weak) limit, and thus weakly inaccessible.
An ordinal is a weakly inaccessible cardinal if and only if it is a regular ordinal and it is a limit of regular ordinals. (Zero, one, and are regular ordinals, but not limits of regular ordinals.) A cardinal which is weakly inaccessible and also a strong limit cardinal is strongly inaccessible.
The assumption of the existence of a strongly inaccessible cardinal is sometimes applied in the form of the assumption that one can work inside a Grothendieck universe, the two ideas being intimately connected.
Models and consistency
Zermelo–Fraenkel set theory with Choice (ZFC) implies that the th level of the Von Neumann universe is a model of ZFC whenever is strongly inaccessible. And ZF implies that the Gödel universe is a model of ZFC whenever is weakly inaccessible. Thus, ZF together with "there exists a weakly inaccessible cardinal" implies that ZFC is consistent. Therefore, inaccessible cardinals are a type of large cardinal.
If is a standard model of ZFC and is an inaccessible in , then: is one of the intended models of Zermelo–Fraenkel set theory; and is one of the intended models of Mendelson's version of Von Neumann–Bernays–Gödel set theory which excludes global choice, replacing limitation of size by replacement and ordinary choice; and is one of the intended models of Morse–Kelley set theory. Here is the set of Δ0 definable subsets of X (see constructible universe). However, does not need to be inaccessible, or even a cardinal number, in order for to be a standard model of ZF.
|
https://en.wikipedia.org/wiki/Mahlo%20cardinal
|
In mathematics, a Mahlo cardinal is a certain kind of large cardinal number. Mahlo cardinals were first described by . As with all large cardinals, none of these varieties of Mahlo cardinals can be proven to exist by ZFC (assuming ZFC is consistent).
A cardinal number κ is called strongly Mahlo if κ is strongly inaccessible and the set of strongly inaccessible cardinals less than κ is stationary in κ.
A cardinal is called weakly Mahlo if is weakly inaccessible and the set of weakly inaccessible cardinals less than is stationary in .
The term "Mahlo cardinal" now usually means "strongly Mahlo cardinal", though the cardinals originally considered by Mahlo were weakly Mahlo cardinals.
Minimal condition sufficient for a Mahlo cardinal
If κ is a limit ordinal and the set of regular ordinals less than κ is stationary in κ, then κ is weakly Mahlo.
The main difficulty in proving this is to show that κ is regular. We will suppose that it is not regular and construct a club set which gives us a μ such that:
μ = cf(μ) < cf(κ) < μ < κ which is a contradiction.
If κ were not regular, then cf(κ) < κ. We could choose a strictly increasing and continuous cf(κ)-sequence which begins with cf(κ)+1 and has κ as its limit. The limits of that sequence would be club in κ. So there must be a regular μ among those limits. So μ is a limit of an initial subsequence of the cf(κ)-sequence. Thus its cofinality is less than the cofinality of κ and greater than it at the same time; which is a contradiction. Thus the assumption that κ is not regular must be false, i.e. κ is regular.
No stationary set can exist below with the required property because {2,3,4,...} is club in ω but contains no regular ordinals; so κ is uncountable. And it is a regular limit of regular cardinals; so it is weakly inaccessible. Then one uses the set of uncountable limit cardinals below κ as a club set to show that the stationary set may be assumed to consist of weak inaccessibles.
If κ is weakly Mahlo and also a strong limit, then κ is Mahlo.
κ is weakly inaccessible and a strong limit, so it is strongly inaccessible.
We show that the set of uncountable strong limit cardinals below κ is club in κ. Let μ0 be the larger of the threshold and ω1. For each finite n, let μn+1 = 2μn which is less than κ because it is a strong limit cardinal. Then their limit is a strong limit cardinal and is less than κ by its regularity. The limits of uncountable strong limit cardinals are also uncountable strong limit cardinals. So the set of them is club in κ. Intersect that club set with the stationary set of weakly inaccessible cardinals less than κ to get a stationary set of strongly inaccessible cardinals less than κ.
Example: showing that Mahlo cardinals κ are κ-inaccessible (hyper-inaccessible)
The term "hyper-inaccessible" is ambiguous. In this section, a cardinal κ is called hyper-inaccessible if it is κ-inaccessible (as opposed to the more common meaning of 1-inaccessible).
Suppose κ is Mahlo. We proceed by transfinite induction on α to show that κ is α-inaccessible for any α ≤ κ.
|
https://en.wikipedia.org/wiki/Zero%20sharp
|
In the mathematical discipline of set theory, 0# (zero sharp, also written 0♯) is the set of true formulae about indiscernibles and order-indiscernibles in the Gödel constructible universe. It is often encoded as a subset of the integers (using Gödel numbering), or as a subset of the hereditarily finite sets, or as a real number. Its existence is unprovable in ZFC, the standard form of axiomatic set theory, but follows from a suitable large cardinal axiom. It was first introduced as a set of formulae in Silver's 1966 thesis, later published as , where it was denoted by Σ, and rediscovered by , who considered it as a subset of the natural numbers and introduced the notation O# (with a capital letter O; this later changed to the numeral '0').
Roughly speaking, if 0# exists then the universe V of sets is much larger than the universe L of constructible sets, while if it does not exist then the universe of all sets is closely approximated by the constructible sets.
Definition
Zero sharp was defined by Silver and Solovay as follows. Consider the language of set theory with extra constant symbols c1, c2, ... for each positive integer. Then 0# is defined to be the set of Gödel numbers of the true sentences about the constructible universe, with ci interpreted as the uncountable cardinal .
(Here means in the full universe, not the constructible universe.)
If there is in V an uncountable set of Silver order-indiscernibles in the constructible universe L, then 0# is the set of Gödel numbers of formulas θ of set theory such that
where ω1, ... ωω are the "small" uncountable initial ordinals in V, but have all large cardinal properties consistent with V=L relative to L.
There is a subtlety about this definition: by Tarski's undefinability theorem it is not, in general, possible to define the truth of a formula of set theory in the language of set theory. To solve this, Silver and Solovay assumed the existence of a suitable large cardinal, such as a Ramsey cardinal, and showed that with this extra assumption it is possible to define the truth of statements about the constructible universe. More generally, the definition of 0# works provided that there is an uncountable set of indiscernibles for some Lα, and the phrase "0# exists" is used as a shorthand way of saying this.
There are several minor variations of the definition of 0#, which make no significant difference to its properties. There are many different choices of Gödel numbering, and 0# depends on this choice. Instead of being considered as a subset of the natural numbers, it is also possible to encode 0# as a subset of formulae of a language, or as a subset of the hereditarily finite sets, or as a real number.
Statements implying existence
The condition about the existence of a Ramsey cardinal implying that 0# exists can be weakened. The existence of ω1-Erdős cardinals implies the existence of 0#. This is close to being best possible, because the existence of 0# implies that in the constructible universe there is an α-Erdős cardinal for every countable α.
|
https://en.wikipedia.org/wiki/Indescribable%20cardinal
|
In set theory, a branch of mathematics, a Q-indescribable cardinal is a certain kind of large cardinal number that is hard to axiomatize in some language Q. There are many different types of indescribable cardinals corresponding to different choices of languages Q. They were introduced by .
A cardinal number is called -indescribable if for every proposition , and set with there exists an with . Following Lévy's hierarchy, here one looks at formulas with m-1 alternations of quantifiers with the outermost quantifier being universal. -indescribable cardinals are defined in a similar way, but with an outermost existential quantifier. Prior to defining the structure , one new predicate symbol is added to the language of set theory, which is interpreted as . The idea is that cannot be distinguished (looking from below) from smaller cardinals by any formula of n+1-th order logic with m-1 alternations of quantifiers even with the advantage of an extra unary predicate symbol (for A). This implies that it is large because it means that there must be many smaller cardinals with similar properties.
The cardinal number is called totally indescribable if it is -indescribable for all positive integers m and n.
If is an ordinal, the cardinal number is called -indescribable if for every formula and every subset of such that holds in there is a some such that holds in . If is infinite then -indescribable ordinals are totally indescribable, and if is finite they are the same as -indescribable ordinals. There is no that is -indescribable, nor does -indescribability necessarily imply -indescribability for any , but there is an alternative notion of shrewd cardinals that makes sense when : there is and such that holds in .
Historical note
Originally, a cardinal κ was called Q-indescribable if for every Q-formula and relation , if then there exists an such that . Using this definition, is -indescribable iff is regular and greater than (p. 207). The cardinals satisfying the above version based on the cumulative hierarchy were called strongly Q-indescribable.
Equivalent conditions
A cardinal is -indescribable iff it is -indescribable. A cardinal is inaccessible if and only if it is -indescribable for all positive integers , equivalently iff it is -indescribable, equivalently if it is -indescribable.
-indescribable cardinals are the same as weakly compact cardinals.
The indescribability condition is equivalent to satisfying the reflection principle (which is provable in ZFC), but extended by allowing higher-order formulae with a second-order free variable.
For cardinals , say that an elementary embedding is a small embedding if is transitive and . For any natural number , is -indescribable iff there is an such that for all there is a small embedding such that (Corollary 4.3).
If V=L, then for a natural number n>0, an uncountable cardinal is Π-indescribable iff it's (n+1)-stationary.
Enforceable classes
For a class of ordinals an
|
https://en.wikipedia.org/wiki/Measurable%20cardinal
|
In mathematics, a measurable cardinal is a certain kind of large cardinal number. In order to define the concept, one introduces a two-valued measure on a cardinal , or more generally on any set. For a cardinal , it can be described as a subdivision of all of its subsets into large and small sets such that itself is large, and all singletons are small, complements of small sets are large and vice versa. The intersection of fewer than large sets is again large.
It turns out that uncountable cardinals endowed with a two-valued measure are large cardinals whose existence cannot be proved from ZFC.
The concept of a measurable cardinal was introduced by Stanislaw Ulam in 1930.
Definition
Formally, a measurable cardinal is an uncountable cardinal number κ such that there exists a κ-additive, non-trivial, 0-1-valued measure on the power set of κ. (Here the term κ-additive means that, for any sequence Aα, α<λ of cardinality λ < κ, Aα being pairwise disjoint sets of ordinals less than κ, the measure of the union of the Aα equals the sum of the measures of the individual Aα.)
Equivalently, κ is measurable means that it is the critical point of a non-trivial elementary embedding of the universe V into a transitive class M. This equivalence is due to Jerome Keisler and Dana Scott, and uses the ultrapower construction from model theory. Since V is a proper class, a technical problem that is not usually present when considering ultrapowers needs to be addressed, by what is now called Scott's trick.
Equivalently, κ is a measurable cardinal if and only if it is an uncountable cardinal with a -complete, non-principal ultrafilter. Again, this means that the intersection of any strictly less than κ-many sets in the ultrafilter, is also in the ultrafilter.
Properties
It is trivial to note that if κ admits a non-trivial κ-additive measure, then κ must be regular. (By non-triviality and κ-additivity, any subset of cardinality less than κ must have measure 0, and then by κ-additivity again, this means that the entire set must not be a union of fewer than κ sets of cardinality less than κ.) Finally, if λ < κ, then it can't be the case that κ ≤ 2λ. If this were the case, then we could identify κ with some collection of 0-1 sequences of length λ. For each position in the sequence, either the subset of sequences with 1 in that position or the subset with 0 in that position would have to have measure 1. The intersection of these λ-many measure 1 subsets would thus also have to have measure 1, but it would contain exactly one sequence, which would contradict the non-triviality of the measure. Thus, assuming the Axiom of Choice, we can infer that κ is a strong limit cardinal, which completes the proof of its inaccessibility.
Although it follows from ZFC that every measurable cardinal is inaccessible (and is ineffable, Ramsey, etc.), it is consistent with ZF that a measurable cardinal can be a successor cardinal. It follows from ZF + axiom of determinacy that ω1 is measurable.
|
https://en.wikipedia.org/wiki/Strong%20cardinal
|
In set theory, a strong cardinal is a type of large cardinal. It is a weakening of the notion of a supercompact cardinal.
Formal definition
If λ is any ordinal, κ is λ-strong means that κ is a cardinal number and there exists an elementary embedding j from the universe V into a transitive inner model M with critical point κ and
That is, M agrees with V on an initial segment. Then κ is strong means that it is λ-strong for all ordinals λ.
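In standard notation (a reconstruction of the stripped formulas), κ is λ-strong when:

```latex
\exists\, j : V \to M \ \text{elementary},\ M \ \text{a transitive inner model},
\qquad \operatorname{crit}(j) = \kappa, \qquad V_{\lambda} \subseteq M .
```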
Relationship with other large cardinals
By definitions, strong cardinals lie below supercompact cardinals and above measurable cardinals in the consistency strength hierarchy.
κ is κ-strong if and only if it is measurable. If κ is strong or λ-strong for λ ≥ κ+2, then the ultrafilter U witnessing that κ is measurable will be in Vκ+2 and thus in M. So for any α < κ, there exists an ultrafilter U in j(Vκ) − j(Vα), remembering that j(α) = α. Using the elementary embedding backwards, we get that there is an ultrafilter in Vκ − Vα. So there are arbitrarily large measurable cardinals below κ, which is regular, and thus κ is a limit of κ-many measurable cardinals.
Strong cardinals also lie below superstrong cardinals and Woodin cardinals in consistency strength. However, the least strong cardinal is larger than the least superstrong cardinal.
Every strong cardinal is strongly unfoldable and therefore totally indescribable.
References
Large cardinals
|
https://en.wikipedia.org/wiki/Woodin%20cardinal
|
In set theory, a Woodin cardinal (named for W. Hugh Woodin) is a cardinal number λ such that for all functions
f : λ → λ
there exists a cardinal κ < λ with
{f(β) | β < κ} ⊆ κ
and an elementary embedding
j : V → M
from the Von Neumann universe V into a transitive inner model M with critical point κ and Vj(f)(κ) ⊆ M.
An equivalent definition is this: λ is Woodin if and only if λ is strongly inaccessible and for all A ⊆ Vλ there exists a λA < λ which is <λ-A-strong.
λA being <λ-A-strong means that for all ordinals α < λ, there exists a j : V → M which is an elementary embedding with critical point λA, j(λA) > α, Vα ⊆ M, and j(A) ∩ Vα = A ∩ Vα. (See also strong cardinal.)
A Woodin cardinal is preceded by a stationary set of measurable cardinals, and thus it is a Mahlo cardinal. However, the first Woodin cardinal is not even weakly compact.
Consequences
Woodin cardinals are important in descriptive set theory. By a result of Martin and Steel, existence of infinitely many Woodin cardinals implies projective determinacy, which in turn implies that every projective set is Lebesgue measurable, has the Baire property (differs from an open set by a meager set, that is, a set which is a countable union of nowhere dense sets), and the perfect set property (is either countable or contains a perfect subset).
The consistency of the existence of Woodin cardinals can be proved using determinacy hypotheses. Working in ZF+AD+DC one can prove that Θ0 is Woodin in the class of hereditarily ordinal-definable sets. Θ0 is the first ordinal onto which the continuum cannot be mapped by an ordinal-definable surjection (see Θ (set theory)).
Mitchell and Steel showed that assuming a Woodin cardinal exists, there is an inner model containing a Woodin cardinal in which there is a Δ¹₃-well-ordering of the reals, ◊ holds, and the generalized continuum hypothesis holds.
Shelah proved that if the existence of a Woodin cardinal is consistent then it is consistent that the nonstationary ideal on ω1 is ℵ2-saturated.
Woodin also proved the equiconsistency of the existence of infinitely many Woodin cardinals and the existence of an ω1-dense ideal over ω1.
Hyper-Woodin cardinals
A cardinal κ is called hyper-Woodin if there exists a normal measure U on κ such that for every set S, the set
{λ < κ : λ is <κ-S-strong}
is in U.
λ is <κ-S-strong if and only if for each δ < κ there is a transitive class N and an elementary embedding
j : V → N
with
λ = crit(j), δ < j(λ), and
j(S) ∩ Hδ = S ∩ Hδ.
The name alludes to the classical result that a cardinal κ is Woodin if and only if for every set S, the set
{λ < κ : λ is <κ-S-strong}
is a stationary set.
The measure U will contain the set of all Shelah cardinals below κ.
Weakly hyper-Woodin cardinals
A cardinal κ is called weakly hyper-Woodin if for every set S there exists a normal measure U on κ such that the set {λ < κ : λ is <κ-S-strong} is in U. λ is <κ-S-strong if and only if for each δ < κ there is a transitive class N and an elementary
embedding j : V → N with λ = crit(j), δ < j(λ), and j(S) ∩ Hδ = S ∩ Hδ.
The name alludes to the classic result that a cardinal κ is Woodin if for every set S, the set {λ < κ : λ is <κ-S-strong} is stationary.
The difference between hyper-Woodin cardinals and weakly hyper-Woodin cardinals is that the choice of do
|
https://en.wikipedia.org/wiki/Superstrong%20cardinal
|
In mathematics, a cardinal number κ is called superstrong if and only if there exists an elementary embedding j : V → M from V into a transitive inner model M with critical point κ and Vj(κ) ⊆ M.
Similarly, a cardinal κ is n-superstrong if and only if there exists an elementary embedding j : V → M from V into a transitive inner model M with critical point κ and Vjⁿ(κ) ⊆ M, where jⁿ is the n-th iterate of j. Akihiro Kanamori has shown that the consistency strength of an n+1-superstrong cardinal exceeds that of an n-huge cardinal for each n > 0.
References
Set theory
Large cardinals
|
https://en.wikipedia.org/wiki/Supercompact%20cardinal
|
In set theory, a supercompact cardinal is a type of large cardinal independently introduced by Solovay and Reinhardt. They display a variety of reflection properties.
Formal definition
If λ is any ordinal, κ is λ-supercompact means that there exists an elementary embedding j from the universe V into a transitive inner model M with critical point κ, j(κ) > λ, and ^λM ⊆ M.
That is, M contains all of its λ-sequences. Then κ is supercompact means that it is λ-supercompact for all ordinals λ.
Alternatively, an uncountable cardinal κ is supercompact if for every A such that |A| ≥ κ there exists a normal measure over [A]^<κ, in the following sense.
[A]^<κ is defined as follows:
[A]^<κ = {x ⊆ A : |x| < κ}.
An ultrafilter U over [A]^<κ is fine if it is κ-complete and {x ∈ [A]^<κ : a ∈ x} ∈ U, for every a ∈ A. A normal measure over [A]^<κ is a fine ultrafilter U over [A]^<κ with the additional property that every function f : [A]^<κ → A such that {x ∈ [A]^<κ : f(x) ∈ x} ∈ U is constant on a set in U. Here "constant on a set in U" means that there is a ∈ A such that {x ∈ [A]^<κ : f(x) = a} ∈ U.
Properties
Supercompact cardinals have reflection properties. If a cardinal with some property (say a 3-huge cardinal) that is witnessed by a structure of limited rank exists above a supercompact cardinal κ, then a cardinal with that property exists below κ. For example, if κ is supercompact and the generalized continuum hypothesis (GCH) holds below κ, then it holds everywhere, because a bijection between the powerset of λ and a cardinal at least λ⁺⁺ would be a witness of limited rank for the failure of GCH at λ, so it would also have to exist below κ.
Finding a canonical inner model for supercompact cardinals is one of the major problems of inner model theory.
The least supercompact cardinal is the least such that for every structure with cardinality of the domain , and for every sentence such that , there exists a substructure with smaller domain (i.e. ) that satisfies .
Supercompactness has a combinatorial characterization similar to the property of being ineffable. Let be the set of all nonempty subsets of which have cardinality . A cardinal is supercompact iff for every set (equivalently every cardinal ), for every function , if for all , then there is some such that is stationary.
See also
Indestructibility
Strongly compact cardinal
List of large cardinal properties
References
Citations
Large cardinals
|
https://en.wikipedia.org/wiki/Huge%20cardinal
|
In mathematics, a cardinal number κ is called huge if there exists an elementary embedding j : V → M from V into a transitive inner model M with critical point κ and ^j(κ)M ⊆ M.
Here, ^j(κ)M is the class of all sequences of length j(κ) whose elements are in M.
Huge cardinals were introduced by Kenneth Kunen.
Variants
In what follows, jⁿ refers to the n-th iterate of the elementary embedding j, that is, j composed with itself n times, for a finite ordinal n. Also, ^<αM is the class of all sequences of length less than α whose elements are in M. Notice that for the "super" versions, γ should be less than j(κ), not κ.
κ is almost n-huge if and only if there is j : V → M with critical point κ and ^<jⁿ(κ)M ⊆ M.
κ is super almost n-huge if and only if for every ordinal γ there is j : V → M with critical point κ, γ < j(κ), and ^<jⁿ(κ)M ⊆ M.
κ is n-huge if and only if there is j : V → M with critical point κ and ^jⁿ(κ)M ⊆ M.
κ is super n-huge if and only if for every ordinal γ there is j : V → M with critical point κ, γ < j(κ), and ^jⁿ(κ)M ⊆ M.
Notice that 0-huge is the same as measurable cardinal; and 1-huge is the same as huge. A cardinal satisfying one of the rank-into-rank axioms is n-huge for all finite n.
The existence of an almost huge cardinal implies that Vopěnka's principle is consistent; more precisely any almost huge cardinal is also a Vopěnka cardinal.
Kanamori, Reinhardt, and Solovay defined seven large cardinal properties between extendibility and hugeness in strength, named through , and a property . The additional property is equivalent to " is huge", and is equivalent to " is -supercompact for all ".
Consistency strength
The cardinals are arranged in order of increasing consistency strength as follows:
almost n-huge
super almost n-huge
n-huge
super n-huge
almost n+1-huge
The consistency of a huge cardinal implies the consistency of a supercompact cardinal, nevertheless, the least huge cardinal is smaller than the least supercompact cardinal (assuming both exist).
ω-huge cardinals
One can try defining an ω-huge cardinal κ as one for which there is an elementary embedding j from V into a transitive inner model M with critical point κ and ^λM ⊆ M, where λ is the supremum of jⁿ(κ) for positive integers n. However Kunen's inconsistency theorem shows that such cardinals are inconsistent in ZFC, though it is still open whether they are consistent in ZF. Instead an ω-huge cardinal κ is defined as the critical point of an elementary embedding from some rank Vλ+1 to itself. This is closely related to the rank-into-rank axiom I1.
See also
List of large cardinal properties
The Dehornoy order on a braid group was motivated by properties of huge cardinals.
References
A copy of parts I and II of this article with corrections is available at the author's web page.
Large cardinals
|
https://en.wikipedia.org/wiki/Local%20homeomorphism
|
In mathematics, more specifically topology, a local homeomorphism is a function between topological spaces that, intuitively, preserves local (though not necessarily global) structure.
If f : X → Y is a local homeomorphism, X is said to be an étale space over Y. Local homeomorphisms are used in the study of sheaves. Typical examples of local homeomorphisms are covering maps.
A topological space X is locally homeomorphic to Y if every point of X has a neighborhood that is homeomorphic to an open subset of Y.
For example, a manifold of dimension n is locally homeomorphic to ℝⁿ.
If there is a local homeomorphism from X to Y, then X is locally homeomorphic to Y, but the converse is not always true.
For example, the two-dimensional sphere, being a manifold, is locally homeomorphic to the plane ℝ², but there is no local homeomorphism S² → ℝ².
Formal definition
A function f : X → Y between two topological spaces is called a local homeomorphism if every point x ∈ X has an open neighborhood U whose image f(U) is open in Y and the restriction f|U : U → f(U) is a homeomorphism (where the respective subspace topologies are used on U and on f(U)).
Examples and sufficient conditions
Local homeomorphisms versus homeomorphisms
Every homeomorphism is a local homeomorphism. But a local homeomorphism is a homeomorphism if and only if it is bijective.
A local homeomorphism need not be a homeomorphism. For example, the function f : ℝ → S¹ defined by f(t) = (cos t, sin t) (so that geometrically, this map wraps the real line around the circle) is a local homeomorphism but not a homeomorphism.
The map g : S¹ → S¹ defined by g(z) = zⁿ, which wraps the circle around itself n times (that is, has winding number n), is a local homeomorphism for all non-zero n, but it is a homeomorphism only when it is bijective (that is, only when n = 1 or n = −1).
Generalizing the previous two examples, every covering map is a local homeomorphism; in particular, the universal cover of a space is a local homeomorphism.
In certain situations the converse is true. For example: if is a proper local homeomorphism between two Hausdorff spaces and if is also locally compact, then is a covering map.
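The wrap-map example above can be illustrated numerically. This is a small sketch (the parametrization f(t) = (cos 2πt, sin 2πt), wrapping once per unit interval, is a choice made here for illustration): the map is not globally injective, yet it is injective when restricted to an interval shorter than one full turn.

```python
import math

def f(t):
    """Wrap the real line around the unit circle: a local homeomorphism."""
    return (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

# Globally not injective: t and t + 1 map to the same point on the circle.
p, q = f(0.25), f(1.25)
assert abs(p[0] - q[0]) < 1e-12 and abs(p[1] - q[1]) < 1e-12

# But on any interval shorter than one full turn, f is injective:
samples = [i / 100 for i in range(100)]          # 100 points in [0, 1)
images = {(round(x, 9), round(y, 9)) for x, y in map(f, samples)}
assert len(images) == len(samples)               # no two samples collide
```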
Local homeomorphisms and composition of functions
The composition of two local homeomorphisms is a local homeomorphism; explicitly, if f : X → Y and g : Y → Z are local homeomorphisms then the composition g ∘ f : X → Z is also a local homeomorphism.
The restriction of a local homeomorphism to any open subset of the domain will again be a local homeomorphism; explicitly, if f : X → Y is a local homeomorphism then its restriction f|U to any open subset U of X is also a local homeomorphism.
If f : X → Y is continuous while both g : Y → Z and g ∘ f : X → Z are local homeomorphisms, then f is also a local homeomorphism.
Inclusion maps
If S ⊆ X is any subspace (where as usual, S is equipped with the subspace topology induced by X) then the inclusion map i : S → X is always a topological embedding. But it is a local homeomorphism if and only if S is open in X. The subset S being open in X is essential for the inclusion map to be a local homeomorphism, because the inclusion map of a non-open subset of X never yields a local homeomorphism (since it
|
https://en.wikipedia.org/wiki/Group%20cohomology
|
In mathematics (more specifically, in homological algebra), group cohomology is a set of mathematical tools used to study groups using cohomology theory, a technique from algebraic topology. Analogous to group representations, group cohomology looks at the group actions of a group G in an associated G-module M to elucidate the properties of the group. By treating the G-module as a kind of topological space with elements of Gⁿ representing n-simplices, topological properties of the space may be computed, such as the set of cohomology groups Hⁿ(G, M). The cohomology groups in turn provide insight into the structure of the group G and G-module M themselves. Group cohomology plays a role in the investigation of fixed points of a group action in a module or space and the quotient module or space with respect to a group action. Group cohomology is used in the fields of abstract algebra, homological algebra, algebraic topology and algebraic number theory, as well as in applications to group theory proper. As in algebraic topology, there is a dual theory called group homology. The techniques of group cohomology can also be extended to the case that instead of a G-module, G acts on a nonabelian G-group; in effect, a generalization of a module to non-abelian coefficients.
These algebraic ideas are closely related to topological ideas. The group cohomology of a discrete group G is the singular cohomology of a suitable space having G as its fundamental group, namely the corresponding Eilenberg–MacLane space. Thus, the group cohomology of ℤ can be thought of as the singular cohomology of the circle S1, and similarly for ℤ/2ℤ and the infinite-dimensional real projective space P^∞(ℝ).
A great deal is known about the cohomology of groups, including interpretations of low-dimensional cohomology, functoriality, and how to change groups. The subject of group cohomology began in the 1920s, matured in the late 1940s, and continues as an area of active research today.
Motivation
A general paradigm in group theory is that a group G should be studied via its group representations. A slight generalization of those representations are the G-modules: a G-module is an abelian group M together with a group action of G on M, with every element of G acting as an automorphism of M. We will write G multiplicatively and M additively.
Given such a G-module M, it is natural to consider the submodule of G-invariant elements:
M^G = {m ∈ M : for all g ∈ G, g·m = m}.
Now, if N is a G-submodule of M (i.e., a subgroup of M mapped to itself by the action of G), it isn't in general true that the invariants in M/N are found as the quotient of the invariants in M by those in N: being invariant 'modulo N' is broader. The purpose of the first group cohomology H1(G, N) is to precisely measure this difference.
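A minimal concrete instance of this failure (an example assumed here for illustration, not taken from the article): let G = ℤ/2 act on M = ℤ/4 by negation, with G-submodule N = {0, 2}. The quotient M/N has strictly more invariants than the image of the invariants of M:

```python
# G = Z/2 acting on M = Z/4 by negation; N = {0, 2} is a G-submodule.
M = [0, 1, 2, 3]
act = lambda m: (-m) % 4            # the nontrivial element of G
MG = [m for m in M if act(m) == m]  # invariants of M: {0, 2}

# Quotient M/N is Z/2; the induced action on it is trivial.
classes = {0: 0, 1: 1, 2: 0, 3: 1}  # coset of each element in M/N
quot_inv = {classes[m] for m in M if classes[act(m)] == classes[m]}
image_of_MG = {classes[m] for m in MG}

assert MG == [0, 2]
assert quot_inv == {0, 1}           # every class of M/N is invariant
assert image_of_MG == {0}           # but only 0 lifts to an invariant of M
```

The invariant class 1 in M/N does not come from any invariant of M; the first cohomology group is what accounts for such classes.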
The group cohomology functors in general measure the extent to which taking invariants doesn't respect exact sequences. This is expressed by a long exact sequence.
Definitions
The collection of all G-modules is a category (the morphisms are group homomorphisms f with the property f(g·m) = g·f(m) for all g in G
|
https://en.wikipedia.org/wiki/Typical%20set
|
In information theory, the typical set is a set of sequences whose probability is close to two raised to the negative power of the entropy of their source distribution. That this set has total probability close to one is a consequence of the asymptotic equipartition property (AEP) which is a kind of law of large numbers. The notion of typicality is only concerned with the probability of a sequence and not the actual sequence itself.
This has great use in compression theory as it provides a theoretical means for compressing data, allowing us to represent any sequence Xn using nH(X) bits on average, and, hence, justifying the use of entropy as a measure of information from a source.
The AEP can also be proven for a large class of stationary ergodic processes, allowing typical set to be defined in more general cases.
(Weakly) typical sequences (weak typicality, entropy typicality)
If a sequence x1, ..., xn is drawn from an i.i.d. distribution X defined over a finite alphabet 𝒳, then the typical set Aε(n) is defined as those sequences which satisfy:
2^(−n(H(X) + ε)) ≤ p(x1, ..., xn) ≤ 2^(−n(H(X) − ε)),
where
H(X)
is the information entropy of X. The probability above need only be within a factor of 2^(nε). Taking the logarithm on all sides and dividing by −n, this definition can be equivalently stated as
H(X) − ε ≤ −(1/n) log2 p(x1, ..., xn) ≤ H(X) + ε.
For an i.i.d. sequence, since
p(x1, ..., xn) = p(x1)p(x2)⋯p(xn),
we further have
−(1/n) log2 p(x1, ..., xn) = −(1/n) Σi log2 p(xi).
By the law of large numbers, for sufficiently large n
−(1/n) Σi log2 p(xi) → H(X).
Properties
An essential characteristic of the typical set is that, if one draws a large number n of independent random samples from the distribution X, the resulting sequence (x1, x2, ..., xn) is very likely to be a member of the typical set, even though the typical set comprises only a small fraction of all the possible sequences. Formally, given any ε > 0, one can choose n such that:
The probability of a sequence from X being drawn from Aε(n) is greater than 1 − ε, i.e.
Pr[(x1, x2, ..., xn) ∈ Aε(n)] ≥ 1 − ε.
If the distribution over 𝒳 is not uniform, then the fraction of sequences that are typical is
|Aε(n)| / |𝒳|^n ≤ 2^(n(H(X) + ε)) / 2^(n log2 |𝒳|) = 2^(−n(log2 |𝒳| − H(X) − ε)) → 0
as n becomes very large, since H(X) < log2 |𝒳|, where |𝒳| is the cardinality of 𝒳.
For a general stochastic process {X(t)} with AEP, the (weakly) typical set can be defined similarly with p(x1, x2, ..., xn) replaced by p(x0τ) (i.e. the probability of the sample limited to the time interval [0, τ]), n being the degree of freedom of the process in the time interval and H(X) being the entropy rate. If the process is continuous-valued, differential entropy is used instead.
Example
Counter-intuitively, the most likely sequence is often not a member of the typical set. For example, suppose that X is an i.i.d. Bernoulli random variable with p(0) = 0.1 and p(1) = 0.9. In n independent trials, since p(1) > p(0), the most likely sequence of outcomes is the sequence of all 1's, (1, 1, ..., 1). Here the entropy of X is H(X) = 0.469, while
−(1/n) log2 p(1, 1, ..., 1) = −log2 0.9 = 0.152.
So this sequence is not in the typical set because its average logarithmic probability cannot come arbitrarily close to the entropy of the random variable X no matter how large we take the value of n.
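The arithmetic in this example is easy to verify directly; the sketch below recomputes the entropy and the per-symbol log-probability of the all-ones sequence:

```python
import math

# Bernoulli source with p(0) = 0.1, p(1) = 0.9.
H = -(0.9 * math.log2(0.9) + 0.1 * math.log2(0.1))  # entropy of X, in bits
assert abs(H - 0.469) < 0.001

# Per-symbol log-probability of the all-ones sequence (the most likely one).
# For any n, -(1/n) log2 p(1,...,1) = -log2 0.9, independent of n.
rate_all_ones = -math.log2(0.9)
assert abs(rate_all_ones - 0.152) < 0.001

# 0.152 is bounded away from H ~ 0.469, so for small enough epsilon the
# all-ones sequence never lands in the typical set, however large n is.
assert abs(rate_all_ones - H) > 0.3
```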
For Bernoulli random variables, the ty
|
https://en.wikipedia.org/wiki/Algebraic%20variety
|
Algebraic varieties are the central objects of study in algebraic geometry, a sub-field of mathematics. Classically, an algebraic variety is defined as the set of solutions of a system of polynomial equations over the real or complex numbers. Modern definitions generalize this concept in several different ways, while attempting to preserve the geometric intuition behind the original definition.
Conventions regarding the definition of an algebraic variety differ slightly. For example, some definitions require an algebraic variety to be irreducible, which means that it is not the union of two smaller sets that are closed in the Zariski topology. Under this definition, non-irreducible algebraic varieties are called algebraic sets. Other conventions do not require irreducibility.
The fundamental theorem of algebra establishes a link between algebra and geometry by showing that a monic polynomial (an algebraic object) in one variable with complex number coefficients is determined by the set of its roots (a geometric object) in the complex plane. Generalizing this result, Hilbert's Nullstellensatz provides a fundamental correspondence between ideals of polynomial rings and algebraic sets. Using the Nullstellensatz and related results, mathematicians have established a strong correspondence between questions on algebraic sets and questions of ring theory. This correspondence is a defining feature of algebraic geometry.
Many algebraic varieties are manifolds, but an algebraic variety may have singular points while a manifold cannot. Algebraic varieties can be characterized by their dimension. Algebraic varieties of dimension one are called algebraic curves and algebraic varieties of dimension two are called algebraic surfaces.
In the context of modern scheme theory, an algebraic variety over a field is an integral (irreducible and reduced) scheme over that field whose structure morphism is separated and of finite type.
Overview and definitions
An affine variety over an algebraically closed field is conceptually the easiest type of variety to define, which will be done in this section. Next, one can define projective and quasi-projective varieties in a similar way. The most general definition of a variety is obtained by patching together smaller quasi-projective varieties. It is not obvious that one can construct genuinely new examples of varieties in this way, but Nagata gave an example of such a new variety in the 1950s.
Affine varieties
For an algebraically closed field K and a natural number n, let Aⁿ be an affine n-space over K, identified to Kⁿ through the choice of an affine coordinate system. The polynomials f in the ring K[x1, ..., xn] can be viewed as K-valued functions on Aⁿ by evaluating f at the points in Aⁿ, i.e. by choosing values in K for each xi. For each set S of polynomials in K[x1, ..., xn], define the zero-locus Z(S) to be the set of points in Aⁿ on which the functions in S simultaneously vanish, that is to say
Z(S) = {x ∈ Aⁿ : f(x) = 0 for all f ∈ S}.
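The zero-locus construction can be sketched computationally. An algebraically closed field is infinite, so as a toy stand-in this example brute-forces Z(S) over the finite field F_5 (an assumption made purely for illustration; the geometric theory in the text requires K algebraically closed):

```python
# Brute-force zero locus over F_5: Z(S) for S = {x^2 + y^2 - 1}.
p = 5
S = [lambda x, y: (x * x + y * y - 1) % p]

# Z(S): all points of the affine plane A^2(F_5) where every f in S vanishes.
Z = sorted((x, y) for x in range(p) for y in range(p)
           if all(f(x, y) == 0 for f in S))

assert Z == [(0, 1), (0, 4), (1, 0), (4, 0)]  # the "circle" has 4 points mod 5
```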
A subset V of is called an affine algebraic set if V = Z(
|
https://en.wikipedia.org/wiki/Product%20rule
|
In calculus, the product rule (or Leibniz rule or Leibniz product rule) is a formula used to find the derivatives of products of two or more functions. For two functions, it may be stated in Lagrange's notation as
(u · v)′ = u′ · v + u · v′
or in Leibniz's notation as
d(u·v)/dx = (du/dx) · v + u · (dv/dx).
The rule may be extended or generalized to products of three or more functions, to a rule for higher-order derivatives of a product, and to other contexts.
Discovery
Discovery of this rule is credited to Gottfried Leibniz, who demonstrated it using differentials. (However, J. M. Child, a translator of Leibniz's papers, argues that it is due to Isaac Barrow.) Here is Leibniz's argument: Let u(x) and v(x) be two differentiable functions of x. Then the differential of uv is
d(u·v) = (u + du)·(v + dv) − u·v = u·dv + v·du + du·dv.
Since the term du·dv is "negligible" (compared to du and dv), Leibniz concluded that
d(u·v) = u·dv + v·du,
and this is indeed the differential form of the product rule. If we divide through by the differential dx, we obtain
d(u·v)/dx = u · (dv/dx) + v · (du/dx),
which can also be written in Lagrange's notation as
(u · v)′ = u′ · v + u · v′.
Examples
Suppose we want to differentiate f(x) = x² sin(x). By using the product rule, one gets the derivative f′(x) = 2x sin(x) + x² cos(x) (since the derivative of x² is 2x and the derivative of the sine function is the cosine function).
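The rule can be checked numerically with a difference quotient; a small sketch assuming the functions f(x) = x² and g(x) = sin x (a standard pair chosen here for illustration):

```python
import math

def f(x):  return x * x            # f(x) = x^2
def g(x):  return math.sin(x)
def fp(x): return 2 * x            # f'(x) = 2x
def gp(x): return math.cos(x)      # g'(x) = cos x

x, h = 1.3, 1e-6
numeric = (f(x + h) * g(x + h) - f(x) * g(x)) / h   # difference quotient
product_rule = fp(x) * g(x) + f(x) * gp(x)          # f'g + fg'
assert abs(numeric - product_rule) < 1e-4           # they agree to ~h
```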
One special case of the product rule is the constant multiple rule, which states: if c is a number, and f(x) is a differentiable function, then c·f(x) is also differentiable, and its derivative is (c·f)′(x) = c·f′(x). This follows from the product rule since the derivative of any constant is zero. This, combined with the sum rule for derivatives, shows that differentiation is linear.
The rule for integration by parts is derived from the product rule, as is (a weak version of) the quotient rule. (It is a "weak" version in that it does not prove that the quotient is differentiable but only says what its derivative is if it is differentiable.)
Proofs
Limit definition of derivative
Let h(x) = f(x)g(x), and suppose that f and g are each differentiable at x. We want to prove that h is differentiable at x and that its derivative, h′(x), is given by f′(x)g(x) + f(x)g′(x). To do this, the quantity f(x)g(x + Δx) − f(x)g(x + Δx) (which is zero, and thus does not change the value) is added to the numerator to permit its factoring, and then properties of limits are used.
The fact that lim(Δx→0) g(x + Δx) = g(x) follows from the fact that differentiable functions are continuous.
Linear approximations
By definition, if f, g : ℝ → ℝ are differentiable at x, then we can write linear approximations:
f(x + h) = f(x) + f′(x)h + ε1(h)
and
g(x + h) = g(x) + g′(x)h + ε2(h),
where the error terms are small with respect to h: that is, lim(h→0) ε1(h)/h = lim(h→0) ε2(h)/h = 0, also written ε1, ε2 = o(h). Then:
f(x + h)g(x + h) − f(x)g(x) = (f′(x)g(x) + f(x)g′(x))h + o(h).
The "error terms" consist of items such as f(x)ε2(h), f′(x)g′(x)h² and ε1(h)g′(x)h, which are easily seen to have magnitude o(h). Dividing by h and taking the limit h → 0 gives the result.
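The bookkeeping in this argument can be made explicit by multiplying the two linear approximations and collecting terms (a sketch in standard little-o notation):

```latex
\begin{aligned}
f(x+h)\,g(x+h)
  &= \bigl(f(x) + f'(x)h + \varepsilon_1(h)\bigr)\,\bigl(g(x) + g'(x)h + \varepsilon_2(h)\bigr) \\
  &= f(x)g(x) + \bigl(f'(x)g(x) + f(x)g'(x)\bigr)h + o(h).
\end{aligned}
```

Dividing by h and letting h → 0 leaves exactly f′(x)g(x) + f(x)g′(x).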
Quarter squares
This proof uses the chain rule and the quarter square function q(x) = x²/4 with derivative q′(x) = x/2. We have:
u·v = q(u + v) − q(u − v),
and differentiating both sides gives:
(uv)′ = q′(u + v)(u′ + v′) − q′(u − v)(u′ − v′) = ½(u + v)(u′ + v′) − ½(u − v)(u′ − v′) = u′v + uv′.
Multivariable chain rule
The product rule can be considered a special case of the chain rule for several variables, applied to the multiplication function m(u, v) = uv:
d(uv)/dx = ∂(uv)/∂u · du/dx + ∂(uv)/∂v · dv/dx = v · du/dx + u · dv/dx.
Non-standard analysis
Let u and v be continuous functions in x, and let dx, du and dv be infinitesimals within the framework of non-sta
|
https://en.wikipedia.org/wiki/Multiplication%20sign
|
The multiplication sign, also known as the times sign or the dimension sign, is the symbol ×, used in mathematics to denote the multiplication operation and its resulting product. While similar to a lowercase X (x), the form is properly a four-fold rotationally symmetric saltire.
History
The earliest known use of the symbol × to represent multiplication appears in an anonymous appendix to the 1618 edition of John Napier's Mirifici Logarithmorum Canonis Descriptio. This appendix has been attributed to William Oughtred, who used the same symbol in his 1631 algebra text, Clavis Mathematicae, stating: "Multiplication of species [i.e. unknowns] connects both proposed magnitudes with the symbol 'in' or ×: or ordinarily without the symbol if the magnitudes be denoted with one letter." Two earlier uses of a × notation have been identified, but do not stand critical examination.
Uses
In mathematics, the symbol × has a number of uses, including
Multiplication of two numbers, where it is read as "times" or "multiplied by"
Cross product of two vectors, where it is usually read as "cross"
Cartesian product of two sets, where it is usually read as "cross"
Geometric dimension of an object, such as noting that a room is 10 feet × 12 feet in area, where it is usually read as "by" (e.g., "10 feet by 12 feet")
Screen resolution in pixels, such as 1920 pixels across × 1080 pixels down. Read as "by".
Dimensions of a matrix, where it is usually read as "by"
A statistical interaction between two explanatory variables, where it is usually read as "by"
In biology, the multiplication sign is used in a botanical hybrid name, for instance Ceanothus papillosus × impressus (a hybrid between C. papillosus and C. impressus) or Crocosmia × crocosmiiflora (a hybrid between two other species of Crocosmia). However, the communication of these hybrid names with a Latin letter "x" is common, when the actual "×" symbol is not readily available.
The multiplication sign is also used by historians for an event between two dates. When employed between two dates for example 1225 and 1232 the expression "1225×1232" means "no earlier than 1225 and no later than 1232".
A monadic × symbol is used by the APL programming language to denote the sign function.
Similar notations
The lower-case Latin letter x is sometimes used in place of the multiplication sign. This is considered incorrect in mathematical writing.
In algebraic notation, widely used in mathematics, a multiplication symbol is usually omitted wherever it would not cause confusion: "a multiplied by b" can be written as ab or a b.
Other symbols can also be used to denote multiplication, often to reduce confusion between the multiplication sign × and the common variable x. In some countries, such as Germany, the primary symbol for multiplication is the "dot operator" ⋅. This symbol is also used in compound units of measurement, e.g., N⋅m (see International System of Units#Lexicographic conventions). In algebra, it is a notation to resolve ambiguity (for instance, "b times 2" may be wri
|
https://en.wikipedia.org/wiki/Levi-Civita%20connection
|
In Riemannian or pseudo-Riemannian geometry (in particular the Lorentzian geometry of general relativity), the Levi-Civita connection is the unique affine connection on the tangent bundle of a manifold that preserves the (pseudo-)Riemannian metric and is torsion-free.
The fundamental theorem of Riemannian geometry states that there is a unique connection which satisfies these properties.
In the theory of Riemannian and pseudo-Riemannian manifolds the term covariant derivative is often used for the Levi-Civita connection. The components (structure coefficients) of this connection with respect to a system of local coordinates are called Christoffel symbols.
History
The Levi-Civita connection is named after Tullio Levi-Civita, although originally "discovered" by Elwin Bruno Christoffel. Levi-Civita, along with Gregorio Ricci-Curbastro, used Christoffel's symbols to define the notion of parallel transport and explore the relationship of parallel transport with the curvature, thus developing the modern notion of holonomy.
In 1869, Christoffel discovered that the components of the intrinsic derivative of a vector field, upon changing the coordinate system, transform as the components of a contravariant vector. This discovery was the real beginning of tensor analysis.
In 1906, L. E. J. Brouwer was the first mathematician to consider the parallel transport of a vector for the case of
a space of constant curvature.
In 1917, Levi-Civita pointed out its importance for the case of a hypersurface immersed in a Euclidean space, i.e., for the case of a Riemannian manifold embedded in a "larger" ambient space. He interpreted the intrinsic derivative in the case of an embedded surface as the tangential component of the usual derivative in the ambient affine space. The Levi-Civita notions of intrinsic derivative and parallel displacement of a vector along a curve make sense on an abstract Riemannian manifold, even though the original motivation relied on a specific embedding
In 1918, independently of Levi-Civita, Jan Arnoldus Schouten obtained analogous results. In the same year, Hermann Weyl generalized
Levi-Civita's results.
Notation
(M, g) denotes a Riemannian or pseudo-Riemannian manifold.
TM is the tangent bundle of M.
g is the Riemannian or pseudo-Riemannian metric of M.
X, Y, Z are smooth vector fields on M, i.e. smooth sections of TM.
[X, Y] is the Lie bracket of X and Y. It is again a smooth vector field.
The metric g can take up to two vectors or vector fields X, Y as arguments. In the former case the output is a number, the (pseudo-)inner product of X and Y. In the latter case, the inner product of Xp, Yp is taken at all points p on the manifold so that g(X, Y) defines a smooth function on M. Vector fields act (by definition) as differential operators on smooth functions. In local coordinates (x1, ..., xn), the action reads
X(f) = X^i ∂f/∂x^i = X^i ∂i f
where Einstein's summation convention is used.
Formal definition
An affine connection ∇ is called a Levi-Civita connection if
it preserves the metric, i.
|
https://en.wikipedia.org/wiki/Glossary%20of%20group%20theory
|
A group is a set together with an associative operation which admits an identity element and such that every element has an inverse.
Throughout the article, we use e to denote the identity element of a group.
Basic definitions
Subgroup. A subset H of a group G which remains a group when the operation is restricted to H is called a subgroup of G.
Given a subset S of G, we denote by ⟨S⟩ the smallest subgroup of G containing S; ⟨S⟩ is called the subgroup of G generated by S.
Normal subgroup. H is a normal subgroup of G if for all g in G and h in H, g·h·g⁻¹ also belongs to H.
Both subgroups and normal subgroups of a given group form a complete lattice under inclusion of subsets; this property and some related results are described by the lattice theorem.
Group homomorphism. These are functions f : G → H that have the special property that
f(a·b) = f(a)·f(b)
for any elements a and b of G.
Kernel of a group homomorphism. It is the preimage of the identity in the codomain of a group homomorphism. Every normal subgroup is the kernel of a group homomorphism and vice versa.
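A small sketch of this correspondence (using S3 and the sign homomorphism, a standard example chosen here for illustration): the kernel of the sign map is the alternating group A3, and conjugation keeps it inside itself, i.e. it is normal.

```python
from itertools import permutations

# S3 as permutations of (0, 1, 2); composition (p.q)(i) = p[q[i]].
S3 = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
inverse = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

def sign(p):
    """Homomorphism S3 -> {+1, -1}: the parity of the permutation."""
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

kernel = [p for p in S3 if sign(p) == 1]   # preimage of the identity +1
assert len(kernel) == 3                    # kernel = A3

# The kernel is a normal subgroup: g k g^-1 stays in the kernel for all g.
assert all(compose(compose(g, k), inverse(g)) in kernel
           for g in S3 for k in kernel)
```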
Group isomorphism. Group homomorphisms that have inverse functions. The inverse of an isomorphism, it turns out, must also be a homomorphism.
Isomorphic groups. Two groups are isomorphic if there exists a group isomorphism mapping from one to the other. Isomorphic groups can be thought of as essentially the same, only with different labels on the individual elements.
One of the fundamental problems of group theory is the classification of groups up to isomorphism.
Direct product, direct sum, and semidirect product of groups. These are ways of combining groups to construct new groups; please refer to the corresponding links for explanation.
Types of groups
Finitely generated group. If there exists a finite set S such that ⟨S⟩ = G, then G is said to be finitely generated. If S can be taken to have just one element, G is a cyclic group of finite order, an infinite cyclic group, or possibly a group with just one element.
Simple group. Simple groups are those groups having only the trivial subgroup {e} and themselves as normal subgroups. The name is misleading because a simple group can in fact be very complex. An example is the monster group, whose order is about 10^54. Every finite group is built up from simple groups via group extensions, so the study of finite simple groups is central to the study of all finite groups. The finite simple groups are known and classified.
The structure of any finite abelian group is relatively simple; every finite abelian group is the direct sum of cyclic p-groups.
This can be extended to a complete classification of all finitely generated abelian groups, that is all abelian groups that are generated by a finite set.
The situation is much more complicated for the non-abelian groups.
Free group. Given any set A, one can define a group as the smallest group containing the free semigroup of A. The group consists of the finite strings (words) that can be composed by elements from A, together wit
|
https://en.wikipedia.org/wiki/System%20of%20equations
|
In mathematics, a set of simultaneous equations, also known as a system of equations or an equation system, is a finite set of equations for which common solutions are sought. An equation system is usually classified in the same manner as single equations, namely as a:
System of linear equations,
System of nonlinear equations,
System of bilinear equations,
System of polynomial equations,
System of differential equations, or a
System of difference equations
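As a minimal sketch of the first item in the list, a 2×2 system of linear equations can be solved exactly by Cramer's rule (the function name and the sample system are illustrative choices, not from the original article):

```python
from fractions import Fraction

def solve_2x2(a, b, c, d, e, f):
    """Solve the simultaneous equations  a*x + b*y = e  and  c*x + d*y = f
    by Cramer's rule.  Returns (x, y) as exact fractions; raises
    ZeroDivisionError when the determinant is zero (no unique solution).
    """
    det = Fraction(a * d - b * c)
    x = Fraction(e * d - b * f) / det
    y = Fraction(a * f - e * c) / det
    return x, y

# x + y = 3 and x - y = 1 have the common solution x = 2, y = 1.
x, y = solve_2x2(1, 1, 1, -1, 3, 1)
```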
See also
Simultaneous equations model, a statistical model in the form of simultaneous linear equations
Elementary algebra, for elementary methods
Equations
Broad-concept articles
|
https://en.wikipedia.org/wiki/Binary%20logarithm
|
In mathematics, the binary logarithm (log₂ n) is the power to which the number 2 must be raised to obtain the value n. That is, for any real number x,
x = log₂ n if and only if 2^x = n.
For example, the binary logarithm of 1 is 0, the binary logarithm of 2 is 1, the binary logarithm of 4 is 2, and the binary logarithm of 32 is 5.
The binary logarithm is the logarithm to the base 2 and is the inverse function of the power of two function. As well as log₂ n, an alternative notation for the binary logarithm is lb n (the notation preferred by ISO 31-11 and ISO 80000-2).
Historically, the first application of binary logarithms was in music theory, by Leonhard Euler: the binary logarithm of a frequency ratio of two musical tones gives the number of octaves by which the tones differ. Binary logarithms can be used to calculate the length of the representation of a number in the binary numeral system, or the number of bits needed to encode a message in information theory. In computer science, they count the number of steps needed for binary search and related algorithms. Other areas
in which the binary logarithm is frequently used include combinatorics, bioinformatics, the design of sports tournaments, and photography.
Binary logarithms are included in the standard C mathematical functions and other mathematical software packages.
The integer part of a binary logarithm can be found using the find first set operation on an integer value, or by looking up the exponent of a floating point value.
The fractional part of the logarithm can be calculated efficiently.
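The two computations just described can be sketched in Python (the function names are illustrative; Python's `int.bit_length` plays the role of the find-first-set / exponent-extraction operations mentioned above):

```python
import math

def int_log2(n):
    """Integer part of log2(n) for a positive integer n, via bit operations."""
    return n.bit_length() - 1

def frac_log2(x, bits=40):
    """Fractional part of log2(x) for x in [1, 2), by repeated squaring:
    squaring doubles the logarithm, so each squaring step yields one
    binary digit of the result."""
    result, weight = 0.0, 0.5
    for _ in range(bits):
        x *= x
        if x >= 2.0:
            x /= 2.0        # record a 1 bit and renormalize into [1, 2)
            result += weight
        weight /= 2.0
    return result

# log2(3) = 1 + log2(1.5): integer part from bits, fractional part by squaring.
approx = int_log2(3) + frac_log2(1.5)
```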
History
The powers of two have been known since antiquity; for instance, they appear in Euclid's Elements, Props. IX.32 (on the factorization of powers of two) and IX.36 (half of the Euclid–Euler theorem, on the structure of even perfect numbers).
And the binary logarithm of a power of two is just its position in the ordered sequence of powers of two.
On this basis, Michael Stifel has been credited with publishing the first known table of binary logarithms in 1544. His book Arithmetica Integra contains several tables that show the integers with their corresponding powers of two. Reversing the rows of these tables allows them to be interpreted as tables of binary logarithms.
Earlier than Stifel, the 8th century Jain mathematician Virasena is credited with a precursor to the binary logarithm. Virasena's concept of ardhacheda has been defined as the number of times a given number can be divided evenly by two. This definition gives rise to a function that coincides with the binary logarithm on the powers of two, but it is different for other integers, giving the 2-adic order rather than the logarithm.
The modern form of a binary logarithm, applying to any number (not just powers of two) was considered explicitly by Leonhard Euler in 1739. Euler established the application of binary logarithms to music theory, long before their applications in information theory and computer science became known. As part of his work in this area, Euler published a table of bina
|
https://en.wikipedia.org/wiki/Cos
|
Cos, COS, CoS, coS or Cos. may refer to:
Mathematics, science and technology
Carbonyl sulfide
Class of service (CoS or COS), a network header field defined by the IEEE 802.1p task group
Class of service (COS), a parameter in telephone systems
Cobalt sulfide
COS cells, cell lines COS-1 and COS-7
Cosine, a trigonometric function
Cosmic Origins Spectrograph, a Hubble Space Telescope instrument
Operating systems
COS (operating system), a Chinese mobile OS
Cray Operating System
Chippewa Operating System, from Control Data Corporation
Commercial Operating System, from Digital Equipment Corporation
GEC COS
Places
Cos, Ariège, France
Cos or Kos, a Greek island
COS, IATA code for Colorado Springs Airport, Colorado, US
Colorado Springs, Colorado, a US city, derived from its airport's code
Gulf of Cos, Aegean Sea
Villa de Cos, Zacatecas, Mexico
Cosio Valtellino (Cös), Lombardy, Italy
COS, UNDP country code of Costa Rica
Organizations, societies and churches
Charity Organization Society
Children's Orchestra Society, New York City, US
Church of Satan, a religious organization
Church of Scientology
Church of Scotland
Commandement des Opérations Spéciales, coordinating French special forces
Community of Science, an online database
Company of Servers, Anglican altar servers
Cooper Ornithological Society, California, US
Universities and schools
College of the Sequoias, California, US
College of the Siskiyous, California, US
Other uses
Childhood onset schizophrenia
Roman consul, a political office in Ancient Rome
COS, a British fashion brand
Cos lettuce
Martín Perfecto de Cos (1800–1854), Mexican general
Consequence of Sound (now Consequence), a New York, US online magazine
Cos (television series), 1976, hosted by Bill Cosby
Space Operations Command (Italy) (Comando delle Operazioni Spaziali)
See also
Kos (disambiguation)
|
https://en.wikipedia.org/wiki/Geometry%20of%20numbers
|
Geometry of numbers is the part of number theory which uses geometry for the study of algebraic numbers. Typically, a ring of algebraic integers is viewed as a lattice in Rⁿ, and the study of these lattices provides fundamental information on algebraic numbers. The geometry of numbers was initiated by Hermann Minkowski.
The geometry of numbers has a close relationship with other fields of mathematics, especially functional analysis and Diophantine approximation, the problem of finding rational numbers that approximate an irrational quantity.
Minkowski's results
Suppose that Γ is a lattice in n-dimensional Euclidean space Rⁿ and K is a convex centrally symmetric body.
Minkowski's theorem, sometimes called Minkowski's first theorem, states that if vol(K) > 2ⁿ vol(Rⁿ/Γ), then K contains a nonzero vector in Γ.
The successive minimum λₖ is defined to be the inf of the numbers λ such that λK contains k linearly independent vectors of Γ.
Minkowski's theorem on successive minima, sometimes called Minkowski's second theorem, is a strengthening of his first theorem and states that
λ₁ λ₂ ⋯ λₙ vol(K) ≤ 2ⁿ vol(Rⁿ/Γ).
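Minkowski's first theorem can be checked concretely for the lattice Z² (covolume 1): any centrally symmetric convex body of area greater than 2² = 4 must contain a nonzero lattice point. The brute-force search below (an illustrative sketch, not from the article) verifies this for a disk whose area is just above the bound:

```python
import math

def nonzero_lattice_point_in_disk(radius):
    """Search Z^2 for a nonzero point inside the closed disk of the given
    radius (a convex, centrally symmetric body centered at the origin)."""
    m = int(math.ceil(radius))
    for x in range(-m, m + 1):
        for y in range(-m, m + 1):
            if (x, y) != (0, 0) and x * x + y * y <= radius * radius:
                return (x, y)
    return None

# Area pi*r^2 just above 4 forces r > 1, so a point like (1, 0) must appear.
r = 2.0 / math.sqrt(math.pi) + 1e-9
pt = nonzero_lattice_point_in_disk(r)
```

Note that the theorem gives only a sufficient condition: smaller bodies may still happen to contain lattice points.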
Later research in the geometry of numbers
From 1930 to 1960, research on the geometry of numbers was conducted by many number theorists (including Louis Mordell, Harold Davenport and Carl Ludwig Siegel). In recent years, Lenstra, Brion, and Barvinok have developed combinatorial theories that enumerate the lattice points in some convex bodies.
Subspace theorem of W. M. Schmidt
In the geometry of numbers, the subspace theorem was obtained by Wolfgang M. Schmidt in 1972. It states that if n is a positive integer, and L₁, ..., Lₙ are linearly independent linear forms in n variables with algebraic coefficients and if ε > 0 is any given real number, then
the non-zero integer points x in n coordinates with
|L₁(x) ⋯ Lₙ(x)| < |x|^(−ε)
lie in a finite number of proper subspaces of Qⁿ.
Influence on functional analysis
Minkowski's geometry of numbers had a profound influence on functional analysis. Minkowski proved that symmetric convex bodies induce norms in finite-dimensional vector spaces. Minkowski's theorem was generalized to topological vector spaces by Kolmogorov, whose theorem states that the symmetric convex sets that are closed and bounded generate the topology of a Banach space.
Researchers continue to study generalizations to star-shaped sets and other non-convex sets.
References
Bibliography
Matthias Beck, Sinai Robins. Computing the continuous discretely: Integer-point enumeration in polyhedra, Undergraduate Texts in Mathematics, Springer, 2007.
J. W. S. Cassels. An Introduction to the Geometry of Numbers. Springer Classics in Mathematics, Springer-Verlag 1997 (reprint of 1959 and 1971 Springer-Verlag editions).
John Horton Conway and N. J. A. Sloane, Sphere Packings, Lattices and Groups, Springer-Verlag, NY, 3rd ed., 1998.
R. J. Gardner, Geometric tomography, Cambridge University Press, New York, 1995. Second edition: 2006.
P. M. Gruber, Convex and discrete geometry, Springer-Verlag, New York, 2007.
P. M. Gruber, J. M. Wills (editors), Handbook of
|
https://en.wikipedia.org/wiki/Skewes%27s%20number
|
In number theory, Skewes's number is any of several large numbers used by the South African mathematician Stanley Skewes as upper bounds for the smallest natural number x for which
π(x) > li(x),
where π is the prime-counting function and li is the logarithmic integral function. Skewes's number is much larger, but it is now known that there is a crossing between π(x) and li(x) near e^727.95133 ≈ 1.397×10^316. It is not known whether it is the smallest crossing.
Skewes's numbers
J. E. Littlewood, who was Skewes's research supervisor, had proved in 1914 that there is such a number (and so, a first such number); and indeed found that the sign of the difference π(x) − li(x) changes infinitely many times. All numerical evidence then available seemed to suggest that π(x) was always less than li(x). Littlewood's proof did not, however, exhibit a concrete such number x.
In 1933, Skewes proved that, assuming that the Riemann hypothesis is true, there exists a number x violating π(x) < li(x) below e^(e^(e^79)) (approximately 10^(10^(10^34))).
In 1955, without assuming the Riemann hypothesis, Skewes proved that there must exist a value of x below 10^(10^(10^964)).
Skewes's task was to make Littlewood's existence proof effective: exhibiting some concrete upper bound for the first sign change. According to Georg Kreisel, this was at the time not considered obvious even in principle.
More recent estimates
These upper bounds have since been reduced considerably by using large-scale computer calculations of zeros of the Riemann zeta function. The first estimate for the actual value of a crossover point was given by Lehman (1966), who showed that somewhere between 1.53×10^1165 and 1.65×10^1165 there are more than 10^500 consecutive integers x with π(x) > li(x).
Without assuming the Riemann hypothesis, proved an upper bound of . A better estimate was discovered by , who showed there are at least consecutive integers somewhere near this value where . Bays and Hudson found a few much smaller values of where gets close to ; the possibility that there are crossover points near these values does not seem to have been definitely ruled out yet, though computer calculations suggest they are unlikely to exist. gave a small improvement and correction to the result of Bays and Hudson. found a smaller interval for a crossing, which was slightly improved by . The same source shows that there exists a number violating below . This can be reduced to assuming the Riemann hypothesis. gave .
Rigorously, proved that there are no crossover points below , improved by to , by to , by to , and by to .
There is no explicit value known for certain to have the property though computer calculations suggest some explicit numbers that are quite likely to satisfy this.
Even though the natural density of the positive integers for which π(x) > li(x) does not exist, Wintner showed that the logarithmic density of these positive integers does exist and is positive. Rubinstein and Sarnak showed that this proportion is about 0.00000026, which is surprisingly large given how far one has to go to find the first example.
Riemann's formula
Riemann gave an explicit formula for π(x), whose leading terms are (ignoring some subtle convergence question
|
https://en.wikipedia.org/wiki/Effective%20results%20in%20number%20theory
|
For historical reasons and in order to have application to the solution of Diophantine equations, results in number theory have been scrutinised more than in other branches of mathematics to see if their content is effectively computable. Where it is asserted that some list of integers is finite, the question is whether in principle the list could be printed out after a machine computation.
Littlewood's result
An early example of an ineffective result was J. E. Littlewood's theorem of 1914, that in the prime number theorem the differences of both ψ(x) and π(x) with their asymptotic estimates change sign infinitely often. In 1933 Stanley Skewes obtained an effective upper bound for the first sign change, now known as Skewes' number.
In more detail, writing for a numerical sequence f (n), an effective result about its changing sign infinitely often would be a theorem including, for every value of N, a value M > N such that f (N) and f (M) have different signs, and such that M could be computed with specified resources. In practical terms, M would be computed by taking values of n from N onwards, and the question is 'how far must you go?' A special case is to find the first sign change. The interest of the question was that the numerical evidence known showed no change of sign: Littlewood's result guaranteed that this evidence was just a small number effect, but 'small' here included values of n up to a billion.
The requirement of computability reflects on and contrasts with the approach used in analytic number theory to prove the results. It for example brings into question any use of Landau notation and its implied constants: are assertions pure existence theorems for such constants, or can one recover a version in which 1000 (say) takes the place of the implied constant? In other words, if it were known that there was M > N with a change of sign and such that
M = O(G(N))
for some explicit function G, say built up from powers, logarithms and exponentials, that means only
M < A.G(N)
for some absolute constant A. The value of A, the so-called implied constant, may also need to be made explicit, for computational purposes. One reason Landau notation was a popular introduction is that it hides exactly what A is. In some indirect forms of proof it may not be at all obvious that the implied constant can be made explicit.
The 'Siegel period'
Many of the principal results of analytic number theory that were proved in the period 1900–1950 were in fact ineffective. The main examples were:
The Thue–Siegel–Roth theorem
Siegel's theorem on integral points, from 1929
The 1934 theorem of Hans Heilbronn and Edward Linfoot on the class number 1 problem
The 1935 result on the Siegel zero
The Siegel–Walfisz theorem based on the Siegel zero.
The concrete information that was left theoretically incomplete included lower bounds for class numbers (ideal class groups for some families of number fields grow); and bounds for the best rati
|
https://en.wikipedia.org/wiki/Greeks%20%28finance%29
|
In mathematical finance, the Greeks are the quantities (known in calculus as partial derivatives; first-order or higher) representing the sensitivity of the price of a derivative instrument such as an option to changes in one or more underlying parameters on which the value of an instrument or portfolio of financial instruments is dependent. The name is used because the most common of these sensitivities are denoted by Greek letters (as are some other finance measures). Collectively these have also been called the risk sensitivities, risk measures or hedge parameters.
Use of the Greeks
The Greeks are vital tools in risk management. Each Greek measures the sensitivity of the value of a portfolio to a small change in a given underlying parameter, so that component risks may be treated in isolation, and the portfolio rebalanced accordingly to achieve a desired exposure; see for example delta hedging.
The Greeks in the Black–Scholes model (a relatively simple idealised model of certain financial markets) are relatively easy to calculate — a desirable property of financial models — and are very useful for derivatives traders, especially those who seek to hedge their portfolios from adverse changes in market conditions. For this reason, those Greeks which are particularly useful for hedging—such as delta, theta, and vega—are well-defined for measuring changes in the parameters spot price, time and volatility. Although rho (the partial derivative with respect to the risk-free interest rate) is a primary input into the Black–Scholes model, the overall impact on the value of a short-term option corresponding to changes in the risk-free interest rate is generally insignificant and therefore higher-order derivatives involving the risk-free interest rate are not common.
The most common of the Greeks are the first order derivatives: delta, vega, theta and rho; as well as gamma, a second-order derivative of the value function. The remaining sensitivities in this list are common enough that they have common names, but this list is by no means exhaustive.
The players in the market make competitive trades involving many billions (of $, £ or €) of underlying every day, so it is important to get the sums right. In practice they will use more sophisticated models which go beyond the simplifying assumptions used in the Black-Scholes model and hence in the Greeks.
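To make the discussion above concrete, here is a minimal sketch of the best-known first-order Greek, delta, under the plain Black–Scholes model (no dividends; the function names and sample parameters are illustrative choices):

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function, via erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call_delta(spot, strike, rate, vol, tau):
    """Delta of a European call under Black-Scholes: N(d1).

    spot/strike are the underlying and exercise prices, rate the risk-free
    rate, vol the volatility, tau the time to expiry in years.
    """
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol * vol) * tau) \
         / (vol * math.sqrt(tau))
    return norm_cdf(d1)

# An at-the-money one-year call: delta is a little above 0.5, so the
# delta-hedging position would be roughly 0.64 units of underlying per call.
delta = bs_call_delta(spot=100.0, strike=100.0, rate=0.05, vol=0.2, tau=1.0)
```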
Names
The use of Greek letter names is presumably by extension from the common finance terms alpha and beta, and the use of sigma (the standard deviation of logarithmic returns) and tau (time to expiry) in the Black–Scholes option pricing model. Several names such as 'vega' (whose symbol is similar to the lower-case Greek letter nu; the use of that name might have led to confusion) and 'zomma' are invented, but sound similar to Greek letters. The names 'color' and 'charm' presumably derive from the use of these terms for exotic properties of quarks in particle physics.
First-order Greeks
Delta
Delta,
|
https://en.wikipedia.org/wiki/Covering%20space
|
In topology, a covering or covering projection is a surjective map p : E → X between topological spaces that, intuitively, locally acts like a projection of multiple copies of a space onto itself. In particular, coverings are special types of local homeomorphisms. If p : E → X is a covering, E is said to be a covering space or cover of X, and X is said to be the base of the covering, or simply the base. By abuse of terminology, E and p may sometimes be called covering spaces as well. Since coverings are local homeomorphisms, a covering space is a special kind of étale space.
Covering spaces first arose in the context of complex analysis (specifically, the technique of analytic continuation), where they were introduced by Riemann as domains on which naturally multivalued complex functions become single-valued. These spaces are now called Riemann surfaces.
Covering spaces are an important tool in several areas of mathematics. In modern geometry, covering spaces (or branched coverings, which have slightly weaker conditions) are used in the construction of manifolds, orbifolds, and the morphisms between them. In algebraic topology, covering spaces are closely related to the fundamental group: for one, since all coverings have the homotopy lifting property, covering spaces are an important tool in the calculation of homotopy groups. A standard example in this vein is the calculation of the fundamental group of the circle by means of the covering of S¹ by R (see below). Under certain conditions, covering spaces also exhibit a Galois correspondence with the subgroups of the fundamental group.
Definition
Let X be a topological space. A covering of X is a continuous map
p : E → X
such that for every x ∈ X there exists an open neighborhood U of x and a discrete space D such that p⁻¹(U) is the disjoint union of open sets V_d (d ∈ D) and the restriction p|V_d : V_d → U is a homeomorphism for every d ∈ D.
The open sets V_d are called sheets, which are uniquely determined up to homeomorphism if U is connected. For each x ∈ X the discrete set p⁻¹(x) is called the fiber of x. If X is connected, it can be shown that the cardinality of p⁻¹(x) is the same for all x; this value is called the degree of the covering. If E is path-connected, then the covering p is called a path-connected covering. This definition is equivalent to the statement that p is a locally trivial fiber bundle.
Examples
For every topological space X, the identity map id : X → X is a covering. Likewise, for any discrete space D, the projection π : X × D → X taking (x, d) ↦ x is a covering. Coverings of this type are called trivial coverings; if D has finitely many (say k) elements, the covering is called the trivial k-sheeted covering of X.
The map p : R → S¹ with p(t) = (cos 2πt, sin 2πt) is a covering of the unit circle S¹. The base of the covering is S¹ and the covering space is R. For any point x = (x₁, x₂) ∈ S¹ with x₁ > 0, the set U := {(x₁, x₂) ∈ S¹ : x₁ > 0} is an open neighborhood of x. The preimage of U under p is the disjoint union of the intervals
(n − 1/4, n + 1/4) for n ∈ Z,
and these intervals are the sheets of the covering. The fiber of x is
p⁻¹(x) = {t + n : n ∈ Z}, where t is any fixed real number with p(t) = x.
Another covering of the unit circle is the map q : S¹ → S¹ with q(z) = zⁿ for some n ∈ N. For an open neighborhood U of a point x ∈ S¹, one has:
q⁻¹(U) is a disjoint union of n open sets, each mapped homeomorphically onto U by q.
A map which is a local homeomorphism but not a covering of the unit circl
|
https://en.wikipedia.org/wiki/Ring%20theory
|
In algebra, ring theory is the study of rings—algebraic structures in which addition and multiplication are defined and have similar properties to those operations defined for the integers. Ring theory studies the structure of rings, their representations, or, in different language, modules, special classes of rings (group rings, division rings, universal enveloping algebras), as well as an array of properties that proved to be of interest both within the theory itself and for its applications, such as homological properties and polynomial identities.
Commutative rings are much better understood than noncommutative ones. Algebraic geometry and algebraic number theory, which provide many natural examples of commutative rings, have driven much of the development of commutative ring theory, which is now, under the name of commutative algebra, a major area of modern mathematics. Because these three fields (algebraic geometry, algebraic number theory and commutative algebra) are so intimately connected it is usually difficult and meaningless to decide which field a particular result belongs to. For example, Hilbert's Nullstellensatz is a theorem which is fundamental for algebraic geometry, and is stated and proved in terms of commutative algebra. Similarly, Fermat's Last Theorem is stated in terms of elementary arithmetic, which is a part of commutative algebra, but its proof involves deep results of both algebraic number theory and algebraic geometry.
Noncommutative rings are quite different in flavour, since more unusual behavior can arise. While the theory has developed in its own right, a fairly recent trend has sought to parallel the commutative development by building the theory of certain classes of noncommutative rings in a geometric fashion as if they were rings of functions on (non-existent) 'noncommutative spaces'. This trend started in the 1980s with the development of noncommutative geometry and with the discovery of quantum groups. It has led to a better understanding of noncommutative rings, especially noncommutative Noetherian rings.
For the definitions of a ring and basic concepts and their properties, see Ring (mathematics). The definitions of terms used throughout ring theory may be found in Glossary of ring theory.
Commutative rings
A ring is called commutative if its multiplication is commutative. Commutative rings resemble familiar number systems, and various definitions for commutative rings are designed to formalize properties of the integers. Commutative rings are also important in algebraic geometry. In commutative ring theory, numbers are often replaced by ideals, and the definition of the prime ideal tries to capture the essence of prime numbers. Integral domains, non-trivial commutative rings where no two non-zero elements multiply to give zero, generalize another property of the integers and serve as the proper realm to study divisibility. Principal ideal domains are integral domains in which every ideal can be gener
|
https://en.wikipedia.org/wiki/William%20Kingdon%20Clifford
|
William Kingdon Clifford (4 May 1845 – 3 March 1879) was an English mathematician and philosopher. Building on the work of Hermann Grassmann, he introduced what is now termed geometric algebra, a special case of the Clifford algebra named in his honour. The operations of geometric algebra have the effect of mirroring, rotating, translating, and mapping the geometric objects that are being modelled to new positions. Clifford algebras in general and geometric algebra in particular have been of ever increasing importance to mathematical physics, geometry, and computing. Clifford was the first to suggest that gravitation might be a manifestation of an underlying geometry. In his philosophical writings he coined the expression mind-stuff.
Biography
Born at Exeter, William Clifford showed great promise at school. He went on to King's College London (at age 15) and Trinity College, Cambridge, where he was elected fellow in 1868, after being second wrangler in 1867 and second Smith's prizeman. Being second was a fate he shared with others who became famous scientists, including William Thomson (Lord Kelvin) and James Clerk Maxwell. In 1870, he was part of an expedition to Italy to observe the solar eclipse of 22 December 1870. During that voyage he survived a shipwreck along the Sicilian coast.
In 1871, he was appointed professor of mathematics and mechanics at University College London, and in 1874 became a fellow of the Royal Society. He was also a member of the London Mathematical Society and the Metaphysical Society.
Clifford married Lucy Lane on 7 April 1875, with whom he had two children. Clifford enjoyed entertaining children and wrote a collection of fairy stories, The Little People.
Death and legacy
In 1876, Clifford suffered a breakdown, probably brought on by overwork. He taught and administered by day, and wrote by night. A half-year holiday in Algeria and Spain allowed him to resume his duties for 18 months, after which he collapsed again. He went to the island of Madeira to recover, but died there of tuberculosis after a few months, leaving a widow with two children.
Clifford and his wife are buried in London's Highgate Cemetery, near the graves of George Eliot and Herbert Spencer, just north of the grave of Karl Marx.
The academic journal Advances in Applied Clifford Algebras publishes on Clifford's legacy in kinematics and abstract algebra.
Mathematics
The discovery of non-Euclidean geometry opened new possibilities in geometry in Clifford's era. The field of intrinsic differential geometry was born, with the concept of curvature broadly applied to space itself as well as to curved lines and surfaces. Clifford was very much impressed by Bernhard Riemann’s 1854 essay "On the hypotheses which lie at the bases of geometry". In 1870, he reported to the Cambridge Philosophical Society on the curved space concepts of Riemann, and included speculation on the bending of space by gravity. Clifford's translation of Riemann's paper was publi
|
https://en.wikipedia.org/wiki/Implicit%20function
|
In mathematics, an implicit equation is a relation of the form R(x₁, ..., xₙ) = 0, where R is a function of several variables (often a polynomial). For example, the implicit equation of the unit circle is x² + y² − 1 = 0.
An implicit function is a function that is defined by an implicit equation, that relates one of the variables, considered as the value of the function, with the others considered as the arguments. For example, the equation x² + y² − 1 = 0 of the unit circle defines y as an implicit function of x if −1 ≤ x ≤ 1, and y is restricted to nonnegative values.
The implicit function theorem provides conditions under which some kinds of implicit equations define implicit functions, namely those that are obtained by equating to zero multivariable functions that are continuously differentiable.
Examples
Inverse functions
A common type of implicit function is an inverse function. Not all functions have a unique inverse function. If g is a function of x that has a unique inverse, then the inverse function of g, called g⁻¹, is the unique function giving a solution of the equation
y = g(x)
for x in terms of y. This solution can then be written as
x = g⁻¹(y).
Defining g⁻¹ as the inverse of g is an implicit definition. For some functions g, g⁻¹(y) can be written out explicitly as a closed-form expression; for instance, if g(x) = 2x − 1, then g⁻¹(y) = (y + 1)/2. However, this is often not possible, or only by introducing a new notation (as in the product log example below).
Intuitively, an inverse function is obtained from by interchanging the roles of the dependent and independent variables.
Example: The product log is an implicit function giving the solution for y of the equation x − y e^y = 0.
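Because the product log has no closed form in elementary functions, it is typically evaluated numerically. The sketch below (an illustrative implementation, not from the article) makes the implicitly defined function numerically explicit by applying Newton's method to f(y) = y·e^y − x:

```python
import math

def product_log(x, tol=1e-13):
    """Principal branch W(x) of the product log for x >= 0, via Newton's
    method on f(y) = y*exp(y) - x.  W satisfies W(x)*exp(W(x)) = x."""
    y = math.log(x + 1.0)          # crude but safe initial guess for x >= 0
    for _ in range(100):
        ey = math.exp(y)
        step = (y * ey - x) / (ey * (y + 1.0))   # f(y) / f'(y)
        y -= step
        if abs(step) < tol:
            break
    return y

# W(1) is the omega constant, approximately 0.5671.
w = product_log(1.0)
```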
Algebraic functions
An algebraic function is a function that satisfies a polynomial equation whose coefficients are themselves polynomials. For example, an algebraic function in one variable x gives a solution for y of an equation
aₙ(x) yⁿ + aₙ₋₁(x) y^(n−1) + ⋯ + a₀(x) = 0,
where the coefficients aᵢ(x) are polynomial functions of x. This algebraic function can be written as the right side of the solution equation y = f(x). Written like this, f is a multi-valued implicit function.
Algebraic functions play an important role in mathematical analysis and algebraic geometry. A simple example of an algebraic function is given by the left side of the unit circle equation:
x² + y² − 1 = 0.
Solving for y gives an explicit solution:
y = ±√(1 − x²).
But even without specifying this explicit solution, it is possible to refer to the implicit solution of the unit circle equation as y = f(x), where f is the multi-valued implicit function.
While explicit solutions can be found for equations that are quadratic, cubic, and quartic in y, the same is not in general true for quintic and higher degree equations.
Nevertheless, one can still refer to the implicit solution y = f(x) involving the multi-valued implicit function f.
Caveats
Not every equation R(x, y) = 0 implies a graph of a single-valued function, the circle equation being one prominent example. Another example is an implicit function given by x − C(y) = 0 where C is a cubic polynomial having a "hump" in its graph. Thus, for an implicit function to be a true (single-val
|
https://en.wikipedia.org/wiki/Lemniscate%20of%20Bernoulli
|
In geometry, the lemniscate of Bernoulli is a plane curve defined from two given points F₁ and F₂, known as foci, at distance 2c from each other as the locus of points P so that PF₁ · PF₂ = c². The curve has a shape similar to the numeral 8 and to the ∞ symbol. Its name is from lemniscatus, which is Latin for "decorated with hanging ribbons". It is a special case of the Cassini oval and is a rational algebraic curve of degree 4.
This lemniscate was first described in 1694 by Jakob Bernoulli as a modification of an ellipse, which is the locus of points for which the sum of the distances to each of two fixed focal points is a constant. A Cassini oval, by contrast, is the locus of points for which the product of these distances is constant. In the case where the curve passes through the point midway between the foci, the oval is a lemniscate of Bernoulli.
This curve can be obtained as the inverse transform of a hyperbola, with the inversion circle centered at the center of the hyperbola (bisector of its two foci). It may also be drawn by a mechanical linkage in the form of Watt's linkage, with the lengths of the three bars of the linkage and the distance between its endpoints chosen to form a crossed parallelogram.
Equations
The equations can be stated in terms of the focal distance c or the half-width a of a lemniscate. These parameters are related as a = c√2.
Its Cartesian equation is (up to translation and rotation):
(x² + y²)² = a² (x² − y²).
As a parametric equation:
x = (a cos t)/(1 + sin² t),  y = (a sin t cos t)/(1 + sin² t).
A rational parametrization:
In polar coordinates:
r² = a² cos 2θ.
Its equation in the complex plane is:
In two-center bipolar coordinates:
r₁ r₂ = c².
In rational polar coordinates:
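The displayed equations above were not reproduced in this extract. As a numerical sketch, the following assumes the standard polar form r² = 2c²·cos 2θ for a lemniscate with foci at (±c, 0), and checks the defining Cassini property that the product of distances to the foci is constant:

```python
import math

# Assumption: polar form r^2 = 2*c^2*cos(2*theta) with foci at (+-c, 0).
# We verify numerically that d1 * d2 = c^2 for sampled points on the curve.
def lemniscate_points(c, n=200):
    pts = []
    for k in range(n):
        theta = -math.pi / 4 + (k / (n - 1)) * (math.pi / 2)  # right lobe
        r2 = 2 * c * c * math.cos(2 * theta)
        if r2 < 0:
            continue
        r = math.sqrt(r2)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

c = 1.0
for x, y in lemniscate_points(c):
    d1 = math.hypot(x - c, y)   # distance to focus (c, 0)
    d2 = math.hypot(x + c, y)   # distance to focus (-c, 0)
    assert abs(d1 * d2 - c * c) < 1e-9
```

The assertion passing for every sample is exactly the "product of distances is constant" characterisation of a Cassini oval mentioned above.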
Arc length and elliptic functions
The determination of the arc length of arcs of the lemniscate leads to elliptic integrals, as was discovered in the eighteenth century. Around 1800, the elliptic functions inverting those integrals were studied by C. F. Gauss (largely unpublished at the time, but allusions in the notes to his Disquisitiones Arithmeticae). The period lattices are of a very special form, being proportional to the Gaussian integers. For this reason the case of elliptic functions with complex multiplication by is called the lemniscatic case in some sources.
Using the elliptic integral
the formula of the arc length can be given as
where is the gamma function and is the arithmetic–geometric mean.
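The arc-length formula above involves the gamma function and the arithmetic–geometric mean. As a hedged numerical check (the exact displayed formula is not reproduced here), one can verify that the lemniscate constant ϖ = Γ(1/4)²/(2√(2π)), half the arc length of the unit-half-width lemniscate, also equals π/M(1, √2), where M is the arithmetic–geometric mean:

```python
import math

# Iterate the arithmetic-geometric mean M(a, b).
def agm(a, b, tol=1e-15):
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

varpi_gamma = math.gamma(0.25) ** 2 / (2 * math.sqrt(2 * math.pi))
varpi_agm = math.pi / agm(1.0, math.sqrt(2.0))
assert abs(varpi_gamma - varpi_agm) < 1e-12   # both are about 2.6220575542
```

The agreement of the two expressions is Gauss's observation, mentioned above, relating the lemniscatic arc length to the arithmetic–geometric mean.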
Angles
Given two distinct points and , let be the midpoint of . Then the lemniscate of diameter can also be defined as the set of points , , , together with the locus of the points such that is a right angle (cf. Thales' theorem and its converse).
The following theorem about angles occurring in the lemniscate is due to German mathematician Gerhard Christoph Hermann Vechtmann, who described it in 1843 in his dissertation on lemniscates.
and are the foci of the lemniscate, is the midpoint of the line segment and is any point on the lemniscate outside the line connecting and . The normal of the lemniscate in intersects the line connecting and in . Now the i
|
https://en.wikipedia.org/wiki/Existence%20theorem
|
In mathematics, an existence theorem is a theorem which asserts the existence of a certain object. It might be a statement which begins with the phrase "there exist(s)", or it might be a universal statement whose last quantifier is existential (e.g., "for all , , ... there exist(s) ..."). In the formal terms of symbolic logic, an existence theorem is a theorem with a prenex normal form involving the existential quantifier, even though in practice, such theorems are usually stated in standard mathematical language. For example, the statement that the sine function is continuous everywhere, or any theorem written in big O notation, can be considered as theorems which are existential by nature—since the quantification can be found in the definitions of the concepts used.
A controversy that goes back to the early twentieth century concerns the issue of purely theoretic existence theorems, that is, theorems which depend on non-constructive foundational material such as the axiom of infinity, the axiom of choice or the law of excluded middle. Such theorems provide no indication as to how to construct (or exhibit) the object whose existence is being claimed. From a constructivist viewpoint, such approaches are not viable, as they lead to mathematics losing its concrete applicability, while the opposing viewpoint is that abstract methods are far-reaching, in a way that numerical analysis cannot be.
'Pure' existence results
In mathematics, an existence theorem is purely theoretical if the proof given for it does not indicate a construction of the object whose existence is asserted. Such a proof is non-constructive, since the whole approach may not lend itself to construction. In terms of algorithms, purely theoretical existence theorems bypass all algorithms for finding what is asserted to exist. These are to be contrasted with the so-called "constructive" existence theorems, which many constructivist mathematicians working in extended logics (such as intuitionistic logic) believe to be intrinsically stronger than their non-constructive counterparts.
Despite that, the purely theoretical existence results are nevertheless ubiquitous in contemporary mathematics. For example, John Nash's original proof of the existence of a Nash equilibrium in 1951 was such an existence theorem. An approach which is constructive was also later found in 1962.
Constructivist ideas
From the other direction, there has been considerable clarification of what constructive mathematics is—without the emergence of a 'master theory'. For example, according to Errett Bishop's definitions, the continuity of a function such as should be proved as a constructive bound on the modulus of continuity, meaning that the existential content of the assertion of continuity is a promise that can always be kept. Accordingly, Bishop rejects the standard idea of pointwise continuity, and proposed that continuity should be defined in terms of "local uniform continuity". One could get another explana
|
https://en.wikipedia.org/wiki/Monodromy
|
In mathematics, monodromy is the study of how objects from mathematical analysis, algebraic topology, algebraic geometry and differential geometry behave as they "run round" a singularity. As the name implies, the fundamental meaning of monodromy comes from "running round singly". It is closely associated with covering maps and their degeneration into ramification; the aspect giving rise to monodromy phenomena is that certain functions we may wish to define fail to be single-valued as we "run round" a path encircling a singularity. The failure of monodromy can be measured by defining a monodromy group: a group of transformations acting on the data that encodes what happens as we "run round" in one dimension. Lack of monodromy is sometimes called polydromy.
Definition
Let be a connected and locally connected based topological space with base point , and let be a covering with fiber . For a loop based at , denote a lift under the covering map, starting at a point , by . Finally, we denote by the endpoint , which is generally different from . There are theorems which state that this construction gives a well-defined group action of the fundamental group on , and that the stabilizer of is exactly , that is, an element fixes a point in if and only if it is represented by the image of a loop in based at . This action is called the monodromy action and the corresponding homomorphism into the automorphism group on is the algebraic monodromy. The image of this homomorphism is the monodromy group. There is another map whose image is called the topological monodromy group.
Example
These ideas were first made explicit in complex analysis. In the process of analytic continuation, a function that is an analytic function in some open subset of the punctured complex plane may be continued back into , but with different values. For example, take
then analytic continuation anti-clockwise round the circle
will result in the return, not to but
In this case the monodromy group is infinite cyclic and the covering space is the universal cover of the punctured complex plane. This cover can be visualized as the helicoid (as defined in the helicoid article) restricted to . The covering map is a vertical projection, in a sense collapsing the spiral in the obvious way to get a punctured plane.
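The infinite cyclic monodromy described above (presumably that of the complex logarithm, since the function in the example was lost in extraction) can be illustrated numerically: continuing a branch of log z once anticlockwise around the unit circle returns a value shifted by 2πi.

```python
import cmath
import math

# Continue a branch of log z along the unit circle by always choosing the
# branch value nearest the previous one; after one anticlockwise loop the
# value has gained 2*pi*i.
def continue_log_around_circle(steps=1000):
    value = cmath.log(1)                      # start at z = 1 with log 1 = 0
    for k in range(1, steps + 1):
        z = cmath.exp(2j * math.pi * k / steps)
        w = cmath.log(z)                      # principal branch
        # shift by a multiple of 2*pi*i to stay on the continuous branch
        n = round((value - w).imag / (2 * math.pi))
        value = w + 2j * math.pi * n
    return value

end = continue_log_around_circle()
assert abs(end - 2j * math.pi) < 1e-9         # value gained 2*pi*i
```

The "nearest branch" correction is what analytic continuation does locally; the net 2πi gain per loop generates the infinite cyclic monodromy group.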
Differential equations in the complex domain
One important application is to differential equations, where a single solution may give further linearly independent solutions by analytic continuation. Linear differential equations defined in an open, connected set S in the complex plane have a monodromy group, which (more precisely) is a linear representation of the fundamental group of S, summarising all the analytic continuations round loops within S. The inverse problem, of constructing the equation (with regular singularities), given a representation, is called the Riemann–Hilbert problem.
For a regular (and in particular Fuchsian) linear system one
|
https://en.wikipedia.org/wiki/Hypotenuse
|
In geometry, a hypotenuse is the longest side of a right-angled triangle, the side opposite the right angle. The length of the hypotenuse can be found using the Pythagorean theorem, which states that the square of the length of the hypotenuse equals the sum of the squares of the lengths of the other two sides. For example, if one of the other sides has a length of 3 (when squared, 9) and the other has a length of 4 (when squared, 16), then their squares add up to 25. The length of the hypotenuse is the square root of 25, that is, 5.
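The 3-4-5 example above, computed directly from the Pythagorean theorem:

```python
import math

# Legs 3 and 4: their squares sum to 9 + 16 = 25, so the hypotenuse is 5.
a, b = 3.0, 4.0
c = math.sqrt(a**2 + b**2)
assert c == 5.0
```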
Etymology
The word hypotenuse is derived from Greek (sc. or ), meaning "[side] subtending the right angle" (Apollodorus), hupoteinousa being the feminine present active participle of the verb hupo-teinō "to stretch below, to subtend", from teinō "to stretch, extend". The nominalised participle, , was used for the hypotenuse of a triangle in the 4th century BCE (attested in Plato, Timaeus 54d). The Greek term was loaned into Late Latin, as hypotēnūsa. The spelling in -e, as hypotenuse, is French in origin (Estienne de La Roche 1520).
Calculating the hypotenuse
The length of the hypotenuse can be calculated using the square root function implied by the Pythagorean theorem. Using the common notation that the length of the two legs (or catheti) of the triangle (the sides perpendicular to each other) are a and b and that of the hypotenuse is c, we have
The Pythagorean theorem, and hence this length, can also be derived from the law of cosines by observing that the angle opposite the hypotenuse is 90° and noting that its cosine is 0:
Many computer languages support the ISO C standard function hypot(x,y), which returns the value above. The function is designed not to fail where the straightforward calculation might overflow or underflow; it can be slightly more accurate, though sometimes significantly slower.
Some scientific calculators provide a function to convert from rectangular coordinates to polar coordinates. This gives both the length of the hypotenuse and the angle the hypotenuse makes with the base line (c1 above) at the same time when given x and y. The angle returned is normally given by atan2(y,x).
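Python's `math.hypot` and `math.atan2` behave as described above; in particular, `hypot` succeeds where the naive square-root formula overflows:

```python
import math

x, y = 3e200, 4e200
naive = math.sqrt(x * x + y * y)   # x*x overflows to infinity
assert math.isinf(naive)

r = math.hypot(x, y)               # computed without overflow
theta = math.atan2(y, x)           # polar angle, as on a calculator

assert math.isclose(r, 5e200, rel_tol=1e-12)
assert abs(theta - math.atan(4 / 3)) < 1e-12
```

Together `(r, theta)` is exactly the rectangular-to-polar conversion mentioned above.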
Trigonometric ratios
By means of trigonometric ratios, one can obtain the value of two acute angles, and , of the right triangle.
Given the length of the hypotenuse and of a cathetus , the ratio is:
The trigonometric inverse function is:
in which is the angle opposite the cathetus .
The adjacent angle of the catheti is = 90° –
One may also obtain the value of the angle by the equation:
in which is the other cathetus.
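A worked instance of these ratios on the 3-4-5 triangle, with hypotenuse c = 5 and cathetus b = 3:

```python
import math

b, c = 3.0, 5.0
beta = math.degrees(math.asin(b / c))    # angle opposite b, about 36.87 deg
alpha = 90.0 - beta                      # the other acute angle, about 53.13 deg

# Cross-check using the other cathetus a and the arctangent:
a = math.sqrt(c**2 - b**2)               # 4.0
assert abs(beta - math.degrees(math.atan(b / a))) < 1e-12
assert abs(alpha + beta - 90.0) < 1e-12
```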
See also
Cathetus
Triangle
Space diagonal
Nonhypotenuse number
Taxicab geometry
Trigonometry
Special right triangles
Pythagoras
|
https://en.wikipedia.org/wiki/Glossary%20of%20ring%20theory
|
Ring theory is the branch of mathematics in which rings are studied: that is, structures supporting both an addition and a multiplication operation. This is a glossary of some terms of the subject.
For the items in commutative algebra (the theory of commutative rings), see Glossary of commutative algebra. For ring-theoretic concepts in the language of modules, see also Glossary of module theory.
For specific types of algebras, see also: Glossary of field theory and Glossary of Lie groups and Lie algebras. Since, currently, there is no glossary on not-necessarily-associative algebra structures in general, this glossary includes some concepts that do not need associativity; e.g., a derivation.
See also
Glossary of module theory
|
https://en.wikipedia.org/wiki/Quadratic%20form
|
In mathematics, a quadratic form is a polynomial with terms all of degree two ("form" is another name for a homogeneous polynomial). For example,
is a quadratic form in the variables and . The coefficients usually belong to a fixed field , such as the real or complex numbers, and one speaks of a quadratic form over . If , and the quadratic form equals zero only when all variables are simultaneously zero, then it is a definite quadratic form; otherwise it is an isotropic quadratic form.
Quadratic forms occupy a central place in various branches of mathematics, including number theory, linear algebra, group theory (orthogonal groups), differential geometry (the Riemannian metric, the second fundamental form), differential topology (intersection forms of four-manifolds), Lie theory (the Killing form), and statistics (where the exponent of a zero-mean multivariate normal distribution has the quadratic form )
Quadratic forms are not to be confused with a quadratic equation, which has only one variable and includes terms of degree two or less. A quadratic form is one case of the more general concept of homogeneous polynomials.
Introduction
Quadratic forms are homogeneous quadratic polynomials in n variables. In the cases of one, two, and three variables they are called unary, binary, and ternary and have the following explicit form:
where a, ..., f are the coefficients.
The theory of quadratic forms and methods used in their study depend in a large measure on the nature of the coefficients, which may be real or complex numbers, rational numbers, or integers. In linear algebra, analytic geometry, and in the majority of applications of quadratic forms, the coefficients are real or complex numbers. In the algebraic theory of quadratic forms, the coefficients are elements of a certain field. In the arithmetic theory of quadratic forms, the coefficients belong to a fixed commutative ring, frequently the integers Z or the p-adic integers Zp. Binary quadratic forms have been extensively studied in number theory, in particular, in the theory of quadratic fields, continued fractions, and modular forms. The theory of integral quadratic forms in n variables has important applications to algebraic topology.
Using homogeneous coordinates, a non-zero quadratic form in n variables defines an (n−2)-dimensional quadric in the (n−1)-dimensional projective space. This is a basic construction in projective geometry. In this way one may visualize 3-dimensional real quadratic forms as conic sections.
An example is given by the three-dimensional Euclidean space and the square of the Euclidean norm expressing the distance between a point with coordinates and the origin:
A closely related notion with geometric overtones is a quadratic space, which is a pair , with V a vector space over a field K, and a quadratic form on V. See below for the definition of a quadratic form on a vector space.
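A quadratic form in n variables can be evaluated as xᵀAx for a symmetric coefficient matrix A; with A the identity this is the squared Euclidean norm from the example above. A plain-Python sketch (no external libraries):

```python
# Evaluate the quadratic form x^T A x for a symmetric matrix A.
def quadratic_form(A, x):
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Identity matrix: the squared Euclidean norm in three variables.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert quadratic_form(identity, [1, 2, 2]) == 9      # 1^2 + 2^2 + 2^2

# The binary form x^2 + 4xy + y^2: the cross coefficient 4 splits as 2 + 2
# across the two off-diagonal entries of the symmetric matrix.
B = [[1, 2], [2, 1]]
assert quadratic_form(B, [1, 1]) == 6
```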
History
The study of quadratic forms, in particular the question of whe
|
https://en.wikipedia.org/wiki/John%20Venn
|
John Venn, FRS, FSA (4 August 1834 – 4 April 1923) was an English mathematician, logician and philosopher noted for introducing Venn diagrams, which are used in logic, set theory, probability, statistics, and computer science. In 1866, Venn published The Logic of Chance, a groundbreaking book which espoused the frequency theory of probability, arguing that probability should be determined by how often something is forecast to occur as opposed to "educated" assumptions. Venn then further developed George Boole's theories in the 1881 work Symbolic Logic, where he highlighted what would become known as Venn diagrams.
Early life
John Venn was born on 4 August 1834 in Kingston upon Hull, Yorkshire, to Martha Sykes and Rev. Henry Venn, who was the rector of the parish of Drypool. His mother died when he was three years old. Venn was descended from a long line of church evangelicals, including his grandfather John Venn. Venn was brought up in a very strict atmosphere at home. His father Henry had played a significant part in the Evangelical movement and he was also the secretary of the Society for Missions to Africa and the East, establishing eight bishoprics overseas. His grandfather was pastor to William Wilberforce of the abolitionist movement, in Clapham.
He began his education in London joining Sir Roger Cholmeley's School, now known as Highgate School, with his brother Henry in September 1846. He moved on to Islington Proprietary School.
University life and career
In October 1853, he went to Gonville and Caius College, Cambridge. He found the Mathematical Tripos unsuited to his mathematical style, complaining that the handful of private tutors he worked with "always had the Tripos prominently in view". In contrast, Venn wished to investigate interesting ideas beyond the syllabus. Nonetheless, he was Sixth Wrangler upon sitting the exams in January 1857.
Venn experienced, in his words, a "reaction and disgust" to the Tripos which led him to sell his books on mathematics and state that he would never return to the subject. Following his family vocation, he was ordained as an Anglican priest in 1859, serving first at the church in Cheshunt, Hertfordshire, and later in Mortlake, Surrey.
In 1862, he returned to Cambridge as a lecturer in moral science, studying and teaching political economy, philosophy, probability theory and logic. He reacquainted himself with logic and became a leading scholar in the field through his textbooks The Logic of Chance (1866), Symbolic Logic (1881) and The Principles of Empirical or Inductive Logic (1889). His academic writing was influenced by his teaching: he saw Venn diagrams, which he called "Eulerian Circles" and introduced in 1880, as a pedagogical tool. Venn was known for teaching students across multiple Cambridge colleges, which was rare at the time.
In 1883, he resigned from the clergy, having concluded that Anglicanism was incompatible with his philosophical beliefs.
In 1903 he was elected President o
|
https://en.wikipedia.org/wiki/Analytic%20number%20theory
|
In mathematics, analytic number theory is a branch of number theory that uses methods from mathematical analysis to solve problems about the integers. It is often said to have begun with Peter Gustav Lejeune Dirichlet's 1837 introduction of Dirichlet L-functions to give the first proof of Dirichlet's theorem on arithmetic progressions. It is well known for its results on prime numbers (involving the Prime Number Theorem and Riemann zeta function) and additive number theory (such as the Goldbach conjecture and Waring's problem).
Branches of analytic number theory
Analytic number theory can be split up into two major parts, divided more by the type of problems they attempt to solve than fundamental differences in technique.
Multiplicative number theory deals with the distribution of the prime numbers, such as estimating the number of primes in an interval, and includes the prime number theorem and Dirichlet's theorem on primes in arithmetic progressions.
Additive number theory is concerned with the additive structure of the integers, such as Goldbach's conjecture that every even number greater than 2 is the sum of two primes. One of the main results in additive number theory is the solution to Waring's problem.
History
Precursors
Much of analytic number theory was inspired by the prime number theorem. Let π(x) be the prime-counting function that gives the number of primes less than or equal to x, for any real number x. For example, π(10) = 4 because there are four prime numbers (2, 3, 5 and 7) less than or equal to 10. The prime number theorem then states that x / ln(x) is a good approximation to π(x), in the sense that the limit of the quotient of the two functions π(x) and x / ln(x) as x approaches infinity is 1:
known as the asymptotic law of distribution of prime numbers.
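The slow convergence in the prime number theorem can be seen numerically; a small sieve sketch:

```python
import math

# Sieve of Eratosthenes; is_prime[n] is 1 iff n is prime.
def sieve_primes(limit):
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return is_prime

limit = 10 ** 5
is_prime = sieve_primes(limit)
pi, pi_at = 0, {}
for n in range(limit + 1):
    pi += is_prime[n]
    if n in (10, 1000, 10 ** 5):
        pi_at[n] = pi

assert pi_at[10] == 4                    # the primes 2, 3, 5, 7
for x, count in pi_at.items():
    print(x, count, count / (x / math.log(x)))
# Ratios roughly 0.92, 1.16, 1.10: approaching 1, but slowly.
```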
Adrien-Marie Legendre conjectured in 1797 or 1798 that π(a) is approximated by the function a/(A ln(a) + B), where A and B are unspecified constants. In the second edition of his book on number theory (1808) he then made a more precise conjecture, with A = 1 and B ≈ −1.08366. Carl Friedrich Gauss considered the same question: "Im Jahr 1792 oder 1793" ('in the year 1792 or 1793'), according to his own recollection nearly sixty years later in a letter to Encke (1849), he wrote in his logarithm table (he was then 15 or 16) the short note "Primzahlen unter " ('prime numbers under '). But Gauss never published this conjecture. In 1838 Peter Gustav Lejeune Dirichlet came up with his own approximating function, the logarithmic integral li(x) (under the slightly different form of a series, which he communicated to Gauss). Both Legendre's and Dirichlet's formulas imply the same conjectured asymptotic equivalence of π(x) and x / ln(x) stated above, although it turned out that Dirichlet's approximation is considerably better if one considers the differences instead of quotients.
Dirichlet
Johann Peter Gustav Lejeune Dirichlet is credited with the creation of analytic number t
|
https://en.wikipedia.org/wiki/Deming%20regression
|
In statistics, Deming regression, named after W. Edwards Deming, is an errors-in-variables model which tries to find the line of best fit for a two-dimensional dataset. It differs from the simple linear regression in that it accounts for errors in observations on both the x- and the y- axis. It is a special case of total least squares, which allows for any number of predictors and a more complicated error structure.
Deming regression is equivalent to the maximum likelihood estimation of an errors-in-variables model in which the errors for the two variables are assumed to be independent and normally distributed, and the ratio of their variances, denoted δ, is known. In practice, this ratio might be estimated from related data-sources; however, the regression procedure takes no account of possible errors in estimating this ratio.
The Deming regression is only slightly more difficult to compute than the simple linear regression. Most statistical software packages used in clinical chemistry offer Deming regression.
The model was originally introduced by who considered the case δ = 1, and then more generally by with arbitrary δ. However their ideas remained largely unnoticed for more than 50 years, until they were revived by and later propagated even more by . The latter book became so popular in clinical chemistry and related fields that the method was even dubbed Deming regression in those fields.
Specification
Assume that the available data (yi, xi) are measured observations of the "true" values (yi*, xi*), which lie on the regression line:
where errors ε and η are independent and the ratio of their variances is assumed to be known:
In practice, the variances of the and parameters are often unknown, which complicates the estimate of . Note that when the measurement method for and is the same, these variances are likely to be equal, so for this case.
We seek to find the line of "best fit"
such that the weighted sum of squared residuals of the model is minimized:
See for a full derivation.
Solution
The solution can be expressed in terms of the second-degree sample moments. That is, we first calculate the following quantities (all sums go from i = 1 to n):
Finally, the least-squares estimates of model's parameters will be
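The displayed estimates were not reproduced in this extract. A Python sketch using the standard closed form in terms of the centred second moments (an assumption here, stated in the usual presentation of Deming regression):

```python
import math

# Deming estimates from centred sample moments s_xx, s_yy, s_xy, with delta
# the known ratio of error variances (assumed closed form, see lead-in).
def deming(xs, ys, delta=1.0):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs) / n
    syy = sum((y - ybar) ** 2 for y in ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / n
    slope = (syy - delta * sxx
             + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    intercept = ybar - slope * xbar
    return slope, intercept

# On noise-free data y = 2x + 1 the fit recovers the true line.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
slope, intercept = deming(xs, ys)
assert abs(slope - 2.0) < 1e-12 and abs(intercept - 1.0) < 1e-12
```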
Orthogonal regression
For the case of equal error variances, i.e., when , Deming regression becomes orthogonal regression: it minimizes the sum of squared perpendicular distances from the data points to the regression line. In this case, denote each observation as a point zj in the complex plane (i.e., the point (xj, yj) is written as zj = xj + iyj where i is the imaginary unit). Denote as Z the sum of the squared differences of the data points from the centroid (also denoted in complex coordinates), which is the point whose horizontal and vertical locations are the averages of those of the data points. Then:
If Z = 0, then every line through the centroid is a line of best orthogonal fit.
If Z ≠
|
https://en.wikipedia.org/wiki/Diophantine%20approximation
|
In number theory, the study of Diophantine approximation deals with the approximation of real numbers by rational numbers. It is named after Diophantus of Alexandria.
The first problem was to know how well a real number can be approximated by rational numbers. For this problem, a rational number a/b is a "good" approximation of a real number α if the absolute value of the difference between a/b and α may not decrease if a/b is replaced by another rational number with a smaller denominator. This problem was solved during the 18th century by means of continued fractions.
Knowing the "best" approximations of a given number, the main problem of the field is to find sharp upper and lower bounds of the above difference, expressed as a function of the denominator. It appears that these bounds depend on the nature of the real numbers to be approximated: the lower bound for the approximation of a rational number by another rational number is larger than the lower bound for algebraic numbers, which is itself larger than the lower bound for all real numbers. Thus a real number that may be better approximated than the bound for algebraic numbers is certainly a transcendental number.
This knowledge enabled Liouville, in 1844, to produce the first explicit transcendental number. Later, the proofs that and e are transcendental were obtained by a similar method.
Diophantine approximations and transcendental number theory are very close areas that share many theorems and methods. Diophantine approximations also have important applications in the study of Diophantine equations.
The 2022 Fields Medal was awarded to James Maynard for his work on Diophantine approximation.
Best Diophantine approximations of a real number
Given a real number , there are two ways to define a best Diophantine approximation of . For the first definition, the rational number is a best Diophantine approximation of if
for every rational number p′/q′ different from such that .
For the second definition, the above inequality is replaced by
A best approximation for the second definition is also a best approximation for the first one, but the converse is not true in general.
The theory of continued fractions allows us to compute the best approximations of a real number: for the second definition, they are the convergents of its expression as a regular continued fraction. For the first definition, one has to consider also the semiconvergents.
For example, the constant e = 2.718281828459045235... has the (regular) continued fraction representation
Its best approximations for the second definition are
while, for the first definition, they are
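The convergents referred to above can be generated from the continued fraction of e, which begins [2; 1, 2, 1, 1, 4, 1, 1, 6, ...], via the standard recurrence pₖ = aₖpₖ₋₁ + pₖ₋₂ (and likewise for qₖ):

```python
import math
from fractions import Fraction

# Convergents p_k/q_k of a regular continued fraction [a0; a1, a2, ...].
def convergents(terms):
    p_prev, p = 1, terms[0]
    q_prev, q = 0, 1
    out = [Fraction(p, q)]
    for a in terms[1:]:
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        out.append(Fraction(p, q))
    return out

cf_e = [2, 1, 2, 1, 1, 4, 1, 1, 6]
convs = convergents(cf_e)
assert Fraction(19, 7) in convs and Fraction(87, 32) in convs

# Each convergent is strictly closer to e than the one before.
errs = [abs(math.e - c) for c in convs]
assert all(e2 < e1 for e1, e2 in zip(errs, errs[1:]))
```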
Measure of the accuracy of approximations
The obvious measure of the accuracy of a Diophantine approximation of a real number by a rational number is However, this quantity can always be made arbitrarily small by increasing the absolute values of and ; thus the accuracy of the approximation is usually estimated by compar
|
https://en.wikipedia.org/wiki/Quartic%20equation
|
In mathematics, a quartic equation is one which can be expressed as a quartic function equaling zero. The general form of a quartic equation is
where a ≠ 0.
The quartic is the highest order polynomial equation that can be solved by radicals in the general case (i.e., one in which the coefficients can take any value).
History
Lodovico Ferrari is attributed with the discovery of the solution to the quartic in 1540, but since this solution, like all algebraic solutions of the quartic, requires the solution of a cubic to be found, it could not be published immediately. The solution of the quartic was published together with that of the cubic by Ferrari's mentor Gerolamo Cardano in the book Ars Magna (1545).
The proof that this was the highest order general polynomial for which such solutions could be found was first given in the Abel–Ruffini theorem in 1824, proving that all attempts at solving the higher order polynomials would be futile. The notes left by Évariste Galois before his death in a duel in 1832 later led to an elegant complete theory of the roots of polynomials, of which this theorem was one result.
Solving a quartic equation, special cases
Consider a quartic equation expressed in the form :
There exists a general formula for finding the roots to quartic equations, provided the coefficient of the leading term is non-zero. However, since the general method is quite complex and susceptible to errors in execution, it is better to apply one of the special cases listed below if possible.
Degenerate case
If the constant term a4 = 0, then one of the roots is x = 0, and the other roots can be found by dividing by x, and solving the resulting cubic equation,
Evident roots: 1 and −1 and −
Call our quartic polynomial . Since 1 raised to any power is 1,
Thus if and so = 1 is a root of . It can similarly be shown that if = −1 is a root.
In either case the full quartic can then be divided by the factor or respectively yielding a new cubic polynomial, which can be solved to find the quartic's other roots.
If and then is a root of the equation. The full quartic can then be factorized this way:
Alternatively, if and then and become two known roots. divided by is a quadratic polynomial.
Biquadratic equations
A quartic equation where a3 and a1 are equal to 0 takes the form
and thus is a biquadratic equation, which is easy to solve: let z = x², so our equation turns into
which is a simple quadratic equation, whose solutions are easily found using the quadratic formula:
When we've solved it (i.e. found these two z values), we can extract x from them
If either of the z solutions is negative or complex, then some of the x solutions are complex numbers.
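The substitution described above, as a short sketch: set z = x², solve the quadratic in z, then take square roots. For x⁴ − 5x² + 4 = 0 this gives z = 1, 4 and hence x = ±1, ±2.

```python
import cmath

# Roots of the biquadratic a*x^4 + b*x^2 + c = 0 via z = x^2.
def solve_biquadratic(a, b, c):
    disc = cmath.sqrt(b * b - 4 * a * c)
    roots = []
    for z in ((-b + disc) / (2 * a), (-b - disc) / (2 * a)):
        w = cmath.sqrt(z)     # complex square root handles negative z too
        roots.extend([w, -w])
    return roots

roots = solve_biquadratic(1, -5, 4)
assert sorted(r.real for r in roots) == [-2.0, -1.0, 1.0, 2.0]
assert all(abs(r.imag) < 1e-12 for r in roots)
```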
Quasi-symmetric equations
Steps:
Divide by x².
Use variable change z = x + m/x.
So, z² = x² + (m/x)² + 2m.
This leads to:
,
,
(a quadratic in z = x + m/x)
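The steps above, worked on the palindromic (m = 1) example x⁴ + 5x³ + 8x² + 5x + 1 = 0, which is a hypothetical example chosen for illustration: dividing by x² and setting z = x + 1/x gives z² + 5z + 6 = 0, and each root z then yields the quadratic x² − zx + 1 = 0.

```python
import cmath

# Both complex roots of a*t^2 + b*t + c = 0.
def solve_quadratic(a, b, c):
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

roots = []
for z in solve_quadratic(1, 5, 6):           # z^2 + 5z + 6 = 0 -> z = -2, -3
    roots.extend(solve_quadratic(1, -z, 1))  # then x^2 - z*x + 1 = 0

p = lambda x: x**4 + 5 * x**3 + 8 * x**2 + 5 * x + 1
assert all(abs(p(x)) < 1e-9 for x in roots)  # all four quartic roots found
```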
Multiple roots
If the quartic has a double root, it can be found by taking the polynomi
|
https://en.wikipedia.org/wiki/Burnside%27s%20lemma
|
Burnside's lemma, sometimes also called Burnside's counting theorem, the Cauchy–Frobenius lemma, the orbit-counting theorem, or the lemma that is not Burnside's, is a result in group theory that is often useful in taking account of symmetry when counting mathematical objects. Its various eponyms are based on William Burnside, Augustin Louis Cauchy, and Ferdinand Georg Frobenius. The result is not due to Burnside himself, who merely quotes it in his book 'On the Theory of Groups of Finite Order', attributing it instead to . Burnside's Lemma counts "orbits", which is the same thing as counting distinct objects taking account of a symmetry. Other ways of saying it are counting distinct objects up to an equivalence relation R, or counting objects that are in canonical form.
In the following, let G be a finite group that acts on a set X. For each g in G, let Xg denote the set of elements in X that are fixed by g (also said to be left invariant by g), that is, Xg = { x ∈ X | g.x = x }. Burnside's lemma asserts the following formula for the number of orbits, denoted |X/G|:
Thus the number of orbits (a natural number or +∞) is equal to the average number of points fixed by an element of G (which is also a natural number or infinity). If G is infinite, the division by |G| may not be well-defined; in this case the following statement in cardinal arithmetic holds:
Examples of applications to enumeration
Necklaces
There are 8 possible bit vectors of length 3, but only four distinct 2-colored necklaces of length 3 (000, 001, 011, and 111), because 100 and 010 are equivalent to 001 by rotation, and similarly 110 and 101 are equivalent to 011. The formula is based on the number of rotations, which in this case is 3 (including the null rotation), and the number of bit vectors left unchanged by each rotation. All 8 bit vectors are unchanged by the null rotation, and two (000 and 111) are unchanged by each of the other two rotations. Applying Burnside's lemma recovers that the number of orbits is (8 + 2 + 2)/3 = 4.
For length 4, there are 16 possible bit vectors; 4 rotations; the null rotation leaves all 16 bit vectors unchanged; the 1-rotation and 3-rotation each leave two bit vectors unchanged (0000 and 1111); the 2-rotation leaves 4 bit vectors unchanged (0000, 0101, 1010, and 1111); giving (16 + 2 + 2 + 4)/4 = 6. The six distinct necklaces are represented by the strings 0000, 0001, 0011, 0101, 0111, and 1111.
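The necklace counts above follow from the fact that a rotation by k positions fixes exactly 2^gcd(n,k) bit vectors; a brute-force canonical-form count confirms the Burnside average:

```python
from math import gcd

# Burnside count of binary necklaces of length n (rotations only).
def necklaces_burnside(n):
    return sum(2 ** gcd(n, k) for k in range(n)) // n

# Brute force: canonicalise each bit vector to its lexicographically
# smallest rotation and count distinct canonical forms.
def necklaces_brute(n):
    seen = set()
    for v in range(2 ** n):
        bits = format(v, f"0{n}b")
        seen.add(min(bits[k:] + bits[:k] for k in range(n)))
    return len(seen)

for n in range(1, 9):
    assert necklaces_burnside(n) == necklaces_brute(n)

assert necklaces_burnside(3) == 4 and necklaces_burnside(4) == 6
```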
Colorings of a cube
The number of rotationally distinct colourings of the faces of a cube using three colours can be determined from this formula as follows.
Let X be the set of 3^6 = 729 possible face colour combinations that can be applied to a cube in one particular orientation, and let the rotation group G of the cube act on X in the natural manner. Then two elements of X belong to the same orbit precisely when one is simply a rotation of the other. The number of rotationally distinct colourings is thus the same as the number of orbits and can be found by counting the sizes of the fixed
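The passage above breaks off before the count is finished. Using the standard cycle structure of the 24 rotations of the cube acting on its six faces (a fact supplied here, not stated in the truncated text), Burnside's lemma gives the answer directly:

```python
def cube_face_colorings(c):
    """Rotationally distinct colorings of a cube's faces with c colors, via
    Burnside's lemma. Each term is (number of rotations) * c**(face cycles):
    identity (1 rotation, 6 cycles), 90-degree face rotations (6, 3 cycles),
    180-degree face rotations (3, 4 cycles), vertex rotations (8, 2 cycles),
    edge rotations (6, 3 cycles)."""
    return (c**6 + 6*c**3 + 3*c**4 + 8*c**2 + 6*c**3) // 24

print(cube_face_colorings(3))  # 57 distinct 3-colorings
print(cube_face_colorings(2))  # 10 distinct 2-colorings
```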
|
https://en.wikipedia.org/wiki/Acute
|
Acute may refer to:
Language
Acute accent, a diacritic used in many modern written languages
Acute (phonetic), a perceptual classification
Science and mathematics
Acute angle
Acute triangle
Acute, a leaf shape in the glossary of leaf morphology
Acute (medicine), a disease that is of short duration and of recent onset
Acute toxicity, the adverse effects of a substance from a single exposure or in a short period of time
See also
Acutance, in photography, subjective perception of sharpness related to the edge contrast of an image
Acuity (disambiguation)
|
https://en.wikipedia.org/wiki/Radical%20of%20an%20ideal
|
In ring theory, a branch of mathematics, the radical of an ideal I of a commutative ring is another ideal defined by the property that an element x is in the radical if and only if some power of x is in I. Taking the radical of an ideal is called radicalization. A radical ideal (or semiprime ideal) is an ideal that is equal to its radical. The radical of a primary ideal is a prime ideal.
This concept is generalized to non-commutative rings in the Semiprime ring article.
Definition
The radical of an ideal I in a commutative ring R, denoted by rad I or √I, is defined as
√I = { r ∈ R : r^n ∈ I for some positive integer n }
(note that I ⊆ √I).
Intuitively, √I is obtained by taking all roots of elements of I within the ring R. Equivalently, √I is the preimage of the ideal of nilpotent elements (the nilradical) of the quotient ring R/I (via the natural map R → R/I). The latter proves that √I is an ideal.
If the radical of I is finitely generated, then some power of √I is contained in I. In particular, if I and J are ideals of a Noetherian ring, then I and J have the same radical if and only if I contains some power of J and J contains some power of I.
If an ideal I coincides with its own radical, then I is called a radical ideal or semiprime ideal.
Examples
Consider the ring of integers.
The radical of the ideal 4Z of integer multiples of 4 is 2Z.
The radical of 5Z is 5Z.
The radical of 12Z is 6Z.
In general, the radical of mZ is rZ, where r is the product of all distinct prime factors of m, the largest square-free factor of m (see Radical of an integer). In fact, this generalizes to an arbitrary ideal (see the Properties section).
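The integer case is easy to compute. The sketch below (a helper of our own, not from the article) finds the product of the distinct prime factors of m, which generates the radical of mZ:

```python
def radical(m):
    """Largest square-free divisor of m: the product of its distinct prime
    factors. The radical of the ideal mZ in Z is then radical(m)Z."""
    r, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            r *= p                 # record the prime once
            while m % p == 0:
                m //= p            # strip all copies of it
        p += 1
    if m > 1:                      # leftover prime factor
        r *= m
    return r

print(radical(12))   # 6, so the radical of 12Z is 6Z
print(radical(100))  # 10, since 100 = 2**2 * 5**2
```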
Consider the ideal I = (y^4) ⊆ C[x, y]. It is trivial to show √I = (y) (using the basic property √(I^n) = √I), but we give some alternative methods: The radical √I corresponds to the nilradical √0 of the quotient ring R = C[x, y]/(y^4), which is the intersection of all prime ideals of the quotient ring. This is contained in the Jacobson radical, which is the intersection of all maximal ideals, which are the kernels of homomorphisms to fields. Any ring homomorphism R → C must have y in the kernel in order to have a well-defined homomorphism (if we said, for example, that the kernel should be (x, y − 1), the composition of C[x, y] → R → C would be forcing y = 1, which is the same as trying to force 1 = 0, since y^4 = 0 in R). Since C is algebraically closed, every homomorphism R → F must factor through C, so we only have to compute the intersection of {ker(Φ) : Φ ∈ Hom(R, C)} to compute the radical of (0). We then find that √0 = (y) ⊆ R.
Properties
This section will continue the convention that I is an ideal of a commutative ring :
It is always true that √(√I) = √I, i.e. radicalization is an idempotent operation. Moreover, √I is the smallest radical ideal containing I.
√I is the intersection of all the prime ideals of R that contain I, and thus the radical of a prime ideal is equal to itself. Proof: On one hand, every prime ideal is radical, and so this intersection contains √I. Suppose r is an element of R which is not in √I, and let S be the set {r^n | n = 0, 1, 2, ...}. By the definition of √I, S must be disjoint from I. S is also multiplicatively closed. Thus, by a variant of Krull's theorem, there exists a prime ideal that contains I and is still disjoint f
|
https://en.wikipedia.org/wiki/G%CE%B4%20set
|
{{DISPLAYTITLE:Gδ set}}
In the mathematical field of topology, a Gδ set is a subset of a topological space that is a countable intersection of open sets. The notation originated from the German nouns Gebiet (area, domain; here: open set) and Durchschnitt (intersection).
Historically Gδ sets were also called inner limiting sets, but that terminology is not in use anymore.
Gδ sets, and their dual, Fσ sets, are the second level of the Borel hierarchy.
Definition
In a topological space a Gδ set is a countable intersection of open sets. The Gδ sets are exactly the level Π⁰₂ sets of the Borel hierarchy.
Examples
Any open set is trivially a Gδ set.
The irrational numbers are a Gδ set in the real numbers ℝ. They can be written as the countable intersection of the open sets {q}ᶜ (the superscript denoting the complement), where q ranges over the rational numbers.
The set of rational numbers ℚ is not a Gδ set in ℝ. If ℚ were the intersection of open sets An, each An would be dense in ℝ because ℚ is dense in ℝ. However, the construction above gave the irrational numbers as a countable intersection of open dense subsets. Taking the intersection of both of these sets gives the empty set as a countable intersection of open dense sets in ℝ, a violation of the Baire category theorem.
The continuity set of any real valued function is a Gδ subset of its domain (see the "Properties" section for a more general statement).
The zero-set of a derivative of an everywhere differentiable real-valued function on ℝ is a Gδ set; it can be a dense set with empty interior, as shown by Pompeiu's construction.
The set of functions in C([0, 1], ℝ) not differentiable at any point within [0, 1] contains a dense Gδ subset of the metric space C([0, 1], ℝ).
Properties
The notion of Gδ sets in metric (and topological) spaces is related to the notion of completeness of the metric space as well as to the Baire category theorem. See the result about completely metrizable spaces in the list of properties below. Gδ sets and their complements are also of importance in real analysis, especially measure theory.
Basic properties
The complement of a Gδ set is an Fσ set, and vice versa.
The intersection of countably many Gδ sets is a Gδ set.
The union of finitely many Gδ sets is a Gδ set.
A countable union of Gδ sets (which would be called a Gδσ set) is not a Gδ set in general. For example, the rational numbers do not form a Gδ set in .
In a topological space, the zero set of every real-valued continuous function f is a (closed) Gδ set, since f⁻¹(0) is the intersection of the open sets f⁻¹((−1/n, 1/n)), n = 1, 2, …
In a metrizable space, every closed set is a Gδ set and, dually, every open set is an Fσ set. Indeed, a closed set F is the zero set of the continuous function x ↦ d(x, F), where d indicates the distance from a point to a set. The same holds in pseudometrizable spaces.
In a first countable T1 space, every singleton is a Gδ set.
A subspace A of a completely metrizable space X is itself completely metrizable if and only if A is a Gδ set in X.
A subspace A of a Polish space X is itself Polish if and only if A is a Gδ set in X. This follows from
|
https://en.wikipedia.org/wiki/Intersection%20%28disambiguation%29
|
Intersection or intersect may refer to:
Intersection in mathematics, including:
Intersection (set theory), the set of elements common to some collection of sets
Intersection (geometry)
Intersection theory
Intersection (road), a place where two roads meet (line-line intersection)
Intersection (aviation), a virtual navigational fix
Intersection (land navigation), a method of obtaining a fix on an unknown position from two mapped points
Intersection matrix in DE-9IM, the dimensionally extended nine-intersection model
Intersectionality, a sociological theory about categorizations (e.g. ethnicity, gender, and religion) and the way those categorizations interact
Intersect (SQL), a set operator in SQL
Intersect (video game)
Logical conjunction
Intersection (group), a Japanese boy band
Media
Intersection (novel), a 1967 novel by Paul Guimard
Intersection (1994 film), a 1994 remake of the French film Les Choses de la vie, based on Guimard's novel
Collision (2013 film) a.k.a. Intersection, a French thriller film
Intersection (album), 2012 album by Nanci Griffith
An element in the reality TV series The Amazing Race
Intersections (1985–2005), a 2006 music CD box set released by Bruce Hornsby
Intersections (Dave House album), 2009
Intersections (Mekong Delta album), 2012
Intersect (2020 film), an American thriller film
Places
Intersections, Virginia
Events
Intersection, 53rd World Science Fiction Convention, held in Glasgow, Scotland, in 1995
Intersections (arts festival)
|
https://en.wikipedia.org/wiki/Localization%20%28commutative%20algebra%29
|
In commutative algebra and algebraic geometry, localization is a formal way to introduce the "denominators" to a given ring or module. That is, it introduces a new ring/module out of an existing ring/module R, so that it consists of fractions r/s such that the denominator s belongs to a given subset S of R. If S is the set of the non-zero elements of an integral domain, then the localization is the field of fractions: this case generalizes the construction of the field of rational numbers from the ring of integers.
The technique has become fundamental, particularly in algebraic geometry, as it provides a natural link to sheaf theory. In fact, the term localization originated in algebraic geometry: if R is a ring of functions defined on some geometric object (algebraic variety) V, and one wants to study this variety "locally" near a point p, then one considers the set S of all functions that are not zero at p and localizes R with respect to S. The resulting ring contains information about the behavior of V near p, and excludes information that is not "local", such as the zeros of functions that are outside V (c.f. the example given at local ring).
Localization of a ring
The localization of a commutative ring by a multiplicatively closed set is a new ring whose elements are fractions with numerators in and denominators in .
If the ring is an integral domain the construction generalizes and follows closely that of the field of fractions, and, in particular, that of the rational numbers as the field of fractions of the integers. For rings that have zero divisors, the construction is similar but requires more care.
Multiplicative set
Localization is commonly done with respect to a multiplicatively closed set S (also called a multiplicative set or a multiplicative system) of elements of a ring R, that is, a subset of R that is closed under multiplication and contains 1.
The requirement that S must be a multiplicative set is natural, since it implies that all denominators introduced by the localization belong to S. The localization by a set U that is not multiplicatively closed can also be defined, by taking as possible denominators all products of elements of U. However, the same localization is obtained by using the multiplicatively closed set S of all products of elements of U. As this often makes reasoning and notation simpler, it is standard practice to consider only localizations by multiplicative sets.
For example, the localization by a single element s introduces fractions of the form a/s, but also products of such fractions, such as ab/s². So, the denominators will belong to the multiplicative set {1, s, s², s³, ...} of the powers of s. Therefore, one generally talks of "the localization by the powers of an element" rather than of "the localization by an element".
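As a toy illustration (our own, not from the article): the localization of Z at the powers of 2 consists of the rationals whose reduced denominator is a power of 2, which is easy to test:

```python
from fractions import Fraction

def in_localization_at_2(q):
    """True if the rational q lies in Z localized at {1, 2, 4, 8, ...},
    i.e. if its reduced denominator is a power of 2."""
    d = Fraction(q).denominator  # Fraction reduces to lowest terms
    while d % 2 == 0:
        d //= 2
    return d == 1

print(in_localization_at_2(Fraction(3, 8)))  # True: 3/8 has denominator 2**3
print(in_localization_at_2(Fraction(1, 6)))  # False: the factor 3 is not inverted
```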
The localization of a ring R by a multiplicative set S is generally denoted S⁻¹R, but other notations are commonly used in some special cases: if S = {1, t, t², ...} consists of the powers of a single element t, S⁻¹R is often denoted R_t; if S is
|
https://en.wikipedia.org/wiki/Nilpotent
|
In mathematics, an element x of a ring R is called nilpotent if there exists some positive integer n, called the index (or sometimes the degree), such that x^n = 0.
The term, along with its sister idempotent, was introduced by Benjamin Peirce in the context of his work on the classification of algebras.
Examples
This definition can be applied in particular to square matrices. The matrix
A = (0 1; 0 0)
is nilpotent because A² = 0. See nilpotent matrix for more.
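This can be verified directly; the short sketch below (a plain-Python 2×2 multiply of our own) squares a strictly upper-triangular matrix and gets the zero matrix:

```python
# A strictly upper-triangular matrix is nilpotent; here N squares to zero.
def matmul2(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

N = [[0, 1],
     [0, 0]]
print(matmul2(N, N))  # [[0, 0], [0, 0]]: N has nilpotency index 2
```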
In the factor ring Z/9Z, the equivalence class of 3 is nilpotent because 3² = 9 is congruent to 0 modulo 9.
Assume that two elements a and b in a ring R satisfy ab = 0. Then the element c = ba is nilpotent, as c² = (ba)² = b(ab)a = 0. An example with matrices (for a, b) can be given with AB = 0 but BA ≠ 0.
By definition, any element of a nilsemigroup is nilpotent.
Properties
No nilpotent element can be a unit (except in the trivial ring, which has only a single element 0 = 1). All nonzero nilpotent elements are zero divisors.
An n × n matrix A with entries from a field is nilpotent if and only if its characteristic polynomial is t^n.
If x is nilpotent, then 1 − x is a unit, because x^n = 0 entails (1 − x)(1 + x + x² + ⋯ + x^(n−1)) = 1 − x^n = 1.
More generally, the sum of a unit element and a nilpotent element is a unit when they commute.
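The geometric-series inverse can be checked in a concrete ring; the numbers below (Z/9Z with x = 3) are our own illustration, not from the article:

```python
# In Z/9Z, x = 3 is nilpotent (3**2 = 9 is 0 mod 9), so 1 - x must be a unit.
# The finite geometric series 1 + x + ... + x**(n-1) supplies the inverse.
mod, x, n = 9, 3, 2                        # x**n is 0 (mod mod)
inv = sum(x**k for k in range(n)) % mod    # 1 + 3 = 4
print(((1 - x) * inv) % mod)  # 1: so 4 inverts 1 - 3, i.e. 7 (mod 9)
```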
Commutative rings
The nilpotent elements from a commutative ring R form an ideal N; this is a consequence of the binomial theorem. This ideal is the nilradical of the ring. Every nilpotent element x in a commutative ring is contained in every prime ideal p of that ring, since x^n = 0 ∈ p implies x ∈ p, as p is prime. So N is contained in the intersection of all prime ideals.
If x is not nilpotent, we are able to localize with respect to the powers of x: S = {1, x, x², ...}, to get a non-zero ring S⁻¹R. The prime ideals of the localized ring correspond exactly to those prime ideals p of R with p ∩ S = ∅. As every non-zero commutative ring has a maximal ideal, which is prime, every non-nilpotent x is not contained in some prime ideal. Thus N is exactly the intersection of all prime ideals.
A characteristic similar to that of the Jacobson radical and the annihilation of simple modules is available for the nilradical: nilpotent elements of a ring R are precisely those that annihilate all integral domains internal to the ring R (that is, of the form R/p for prime ideals p). This follows from the fact that the nilradical is the intersection of all prime ideals.
Nilpotent elements in Lie algebra
Let g be a Lie algebra. Then an element x of g is called nilpotent if it is in [g, g] and ad x is a nilpotent transformation. See also: Jordan decomposition in a Lie algebra.
Nilpotency in physics
Any ladder operator in a finite dimensional space is nilpotent. They represent creation and annihilation operators, which transform from one state to another, for example the raising and lowering Pauli matrices σ± = (σx ± iσy)/2.
An operand Q that satisfies Q² = 0 is nilpotent. Grassmann numbers, which allow a path integral representation for fermionic fields, are nilpotent since their squares vanish. The BRST charge is an important example in physics.
As linear operators form an associative algebra and thus a ring, this is a special case of the initial definition. More generally,
|
https://en.wikipedia.org/wiki/Orbit%20%28dynamics%29
|
In mathematics, specifically in the study of dynamical systems, an orbit is a collection of points related by the evolution function of the dynamical system. It can be understood as the subset of phase space covered by the trajectory of the dynamical system under a particular set of initial conditions, as the system evolves. As a phase space trajectory is uniquely determined for any given set of phase space coordinates, it is not possible for different orbits to intersect in phase space, therefore the set of all orbits of a dynamical system is a partition of the phase space. Understanding the properties of orbits by using topological methods is one of the objectives of the modern theory of dynamical systems.
For discrete-time dynamical systems, the orbits are sequences; for real dynamical systems, the orbits are curves; and for holomorphic dynamical systems, the orbits are Riemann surfaces.
Definition
Given a dynamical system (T, M, Φ) with T a group, M a set and Φ the evolution function
Φ : U → M, where U ⊆ T × M, with Φ(0, x) = x,
we define
I(x) := {t ∈ T : (t, x) ∈ U},
then the set
γx := {Φ(t, x) : t ∈ I(x)}
is called the orbit through x. An orbit which consists of a single point is called a constant orbit. A non-constant orbit is called closed or periodic if there exists a t ≠ 0 in I(x) such that
Φ(t, x) = x.
Real dynamical system
Given a real dynamical system (R, M, Φ), I(x) is an open interval in the real numbers, that is I(x) = (t⁻x, t⁺x). For any x in M,
γ⁺x := {Φ(t, x) : t ∈ (0, t⁺x)}
is called the positive semi-orbit through x, and
γ⁻x := {Φ(t, x) : t ∈ (t⁻x, 0)}
is called the negative semi-orbit through x.
Discrete time dynamical system
For a discrete-time dynamical system:
the forward orbit of x is the set
γ⁺x := {Φ(t, x) : t ≥ 0},
the backward orbit of x is the set (defined when the evolution is invertible)
γ⁻x := {Φ(−t, x) : t ≥ 0},
and the orbit of x is the set
γx := γ⁻x ∪ γ⁺x,
where:
Φ is the evolution function, which is here an iterated function,
the set M is the dynamical space,
t is the number of the iteration, which is a natural number, and
x is the initial state of the system.
Usually a different notation is used:
Φ(t, x) is written as f^t(x),
where f is the evolution function in the above notation.
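A forward orbit is easy to compute by direct iteration; the sketch below (the helper name and the example map are ours) also shows an orbit becoming eventually periodic:

```python
def forward_orbit(f, x0, steps):
    """Forward orbit of x0 under iteration of f: x0, f(x0), f(f(x0)), ..."""
    orbit = [x0]
    for _ in range(steps):
        orbit.append(f(orbit[-1]))
    return orbit

# Squaring mod 10 on {0,...,9}: the orbit of 2 reaches the fixed point 6.
print(forward_orbit(lambda x: (x * x) % 10, 2, 5))  # [2, 4, 6, 6, 6, 6]
```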
General dynamical system
For a general dynamical system, especially in homogeneous dynamics, when one has a "nice" group G acting on a probability space X in a measure-preserving way, an orbit G·x will be called periodic (or equivalently, closed) if the stabilizer StabG(x) is a lattice inside G.
In addition, a related term is a bounded orbit, when the set G·x is pre-compact inside X.
The classification of orbits can lead to interesting questions with relations to other mathematical areas; for example, the Oppenheim conjecture (proved by Margulis) and the Littlewood conjecture (partially proved by Lindenstrauss) deal with the question whether every bounded orbit of some natural action on the homogeneous space is indeed a periodic one; this observation is due to Raghunathan and, in different language, to Cassels and Swinnerton-Dyer. Such questions are intimately related to deep measure-classification theorems.
Notes
It is often the case that the evolution function can be understood to compose the elements of a group, in which case the group-theoretic orbits of the group action are the same thi
|
https://en.wikipedia.org/wiki/Sequent%20calculus
|
In mathematical logic, sequent calculus is a style of formal logical argumentation in which every line of a proof is a conditional tautology (called a sequent by Gerhard Gentzen) instead of an unconditional tautology. Each conditional tautology is inferred from other conditional tautologies on earlier lines in a formal argument according to rules and procedures of inference, giving a better approximation to the natural style of deduction used by mathematicians than to David Hilbert's earlier style of formal logic, in which every line was an unconditional tautology. More subtle distinctions may exist; for example, propositions may implicitly depend upon non-logical axioms. In that case, sequents signify conditional theorems in a first-order language rather than conditional tautologies.
Sequent calculus is one of several extant styles of proof calculus for expressing line-by-line logical arguments.
Hilbert style. Every line is an unconditional tautology (or theorem).
Gentzen style. Every line is a conditional tautology (or theorem) with zero or more conditions on the left.
Natural deduction. Every (conditional) line has exactly one asserted proposition on the right.
Sequent calculus. Every (conditional) line has zero or more asserted propositions on the right.
In other words, natural deduction and sequent calculus systems are particular distinct kinds of Gentzen-style systems. Hilbert-style systems typically have a very small number of inference rules, relying more on sets of axioms. Gentzen-style systems typically have very few axioms, if any, relying more on sets of rules.
Gentzen-style systems have significant practical and theoretical advantages compared to Hilbert-style systems. For example, both natural deduction and sequent calculus systems facilitate the elimination and introduction of universal and existential quantifiers so that unquantified logical expressions can be manipulated according to the much simpler rules of propositional calculus. In a typical argument, quantifiers are eliminated, then propositional calculus is applied to unquantified expressions (which typically contain free variables), and then the quantifiers are reintroduced. This very much parallels the way in which mathematical proofs are carried out in practice by mathematicians. Predicate calculus proofs are generally much easier to discover with this approach, and are often shorter. Natural deduction systems are more suited to practical theorem-proving. Sequent calculus systems are more suited to theoretical analysis.
Overview
In proof theory and mathematical logic, sequent calculus is a family of formal systems sharing a certain style of inference and certain formal properties. The first sequent calculi systems, LK and LJ, were introduced in 1934/1935 by Gerhard Gentzen as a tool for studying natural deduction in first-order logic (in classical and intuitionistic versions, respectively). Gentzen's so-called "Main Theorem" (Hauptsatz) about LK and LJ was the
|
https://en.wikipedia.org/wiki/Absolute%20continuity
|
In calculus and real analysis, absolute continuity is a smoothness property of functions that is stronger than continuity and uniform continuity. The notion of absolute continuity allows one to obtain generalizations of the relationship between the two central operations of calculus—differentiation and integration. This relationship is commonly characterized (by the fundamental theorem of calculus) in the framework of Riemann integration, but with absolute continuity it may be formulated in terms of Lebesgue integration. For real-valued functions on the real line, two interrelated notions appear: absolute continuity of functions and absolute continuity of measures. These two notions are generalized in different directions. The usual derivative of a function is related to the Radon–Nikodym derivative, or density, of a measure.
We have the following chains of inclusions for functions over a compact subset of the real line:
absolutely continuous ⊆ uniformly continuous ⊆ continuous
and, for a compact interval,
continuously differentiable ⊆ Lipschitz continuous ⊆ absolutely continuous ⊆ bounded variation ⊆ differentiable almost everywhere.
Absolute continuity of functions
A continuous function fails to be absolutely continuous if it fails to be uniformly continuous, which can happen if the domain of the function is not compact – examples are tan(x) over [0, π/2), x² over the entire real line, and sin(1/x) over (0, 1]. But a continuous function f can fail to be absolutely continuous even on a compact interval. It may not be "differentiable almost everywhere" (like the Weierstrass function, which is not differentiable anywhere). Or it may be differentiable almost everywhere and its derivative f ′ may be Lebesgue integrable, but the integral of f ′ differs from the increment of f (how much f changes over an interval). This happens for example with the Cantor function.
Definition
Let I be an interval in the real line ℝ. A function f : I → ℝ is absolutely continuous on I if for every positive number ε, there is a positive number δ such that whenever a finite sequence of pairwise disjoint sub-intervals (x_k, y_k) of I with x_k < y_k in I satisfies
Σ_k (y_k − x_k) < δ
then
Σ_k |f(y_k) − f(x_k)| < ε.
The collection of all absolutely continuous functions on I is denoted AC(I).
Equivalent definitions
The following conditions on a real-valued function f on a compact interval [a,b] are equivalent:
f is absolutely continuous;
f has a derivative f ′ almost everywhere, the derivative is Lebesgue integrable, and f(x) = f(a) + ∫_a^x f ′(t) dt for all x on [a, b];
there exists a Lebesgue integrable function g on [a, b] such that f(x) = f(a) + ∫_a^x g(t) dt for all x in [a, b].
If these equivalent conditions are satisfied then necessarily g = f ′ almost everywhere.
Equivalence between (1) and (3) is known as the fundamental theorem of Lebesgue integral calculus, due to Lebesgue.
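Condition (2) can be illustrated numerically; the sketch below (our own example with f(x) = x² on [0, 1], not from the article) approximates the integral of f ′ by a midpoint Riemann sum and compares it with the increment of f:

```python
# Check f(x) = f(0) + integral of f' from 0 to x, for f(x) = x**2.
def f(x):
    return x * x

def fprime(t):
    return 2 * t

n = 100_000          # number of subintervals for the midpoint rule
x = 0.7
h = x / n
integral = sum(fprime((k + 0.5) * h) * h for k in range(n))
print(abs(f(0) + integral - f(x)) < 1e-8)  # True: the two sides agree
```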
For an equivalent definition in terms of measures see the section Relation between the two notions of absolute continuity.
Properties
The sum and difference of two absolutely continuous functions are also absolutely continuous. If
|
https://en.wikipedia.org/wiki/Simplicial%20complex
|
In mathematics, a simplicial complex is a set composed of points, line segments, triangles, and their n-dimensional counterparts (see illustration). Simplicial complexes should not be confused with the more abstract notion of a simplicial set appearing in modern simplicial homotopy theory. The purely combinatorial counterpart to a simplicial complex is an abstract simplicial complex. To distinguish a simplicial complex from an abstract simplicial complex, the former is often called a geometric simplicial complex.
Definitions
A simplicial complex K is a set of simplices that satisfies the following conditions:
1. Every face of a simplex from K is also in K.
2. The non-empty intersection of any two simplices σ₁, σ₂ ∈ K is a face of both σ₁ and σ₂.
See also the definition of an abstract simplicial complex, which loosely speaking is a simplicial complex without an associated geometry.
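For the abstract (purely combinatorial) version, condition 1 is the whole story, and it can be checked mechanically; the helper below is our own sketch, not part of either article:

```python
from itertools import combinations

def is_downward_closed(simplices):
    """Check the closure condition for an abstract simplicial complex:
    every nonempty proper face of every simplex must be in the collection."""
    family = {frozenset(s) for s in simplices}
    return all(
        frozenset(face) in family
        for s in family
        for r in range(1, len(s))
        for face in combinations(s, r)
    )

# A triangle with all its edges and vertices is a complex; a bare triangle is not.
triangle = [(1, 2, 3), (1, 2), (1, 3), (2, 3), (1,), (2,), (3,)]
print(is_downward_closed(triangle))   # True
print(is_downward_closed([(1, 2, 3)]))  # False: edges and vertices are missing
```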
A simplicial k-complex K is a simplicial complex where the largest dimension of any simplex in K equals k. For instance, a simplicial 2-complex must contain at least one triangle, and must not contain any tetrahedra or higher-dimensional simplices.
A pure or homogeneous simplicial k-complex is a simplicial complex where every simplex of dimension less than k is a face of some simplex of dimension exactly k. Informally, a pure 1-complex "looks" like it's made of a bunch of lines, a 2-complex "looks" like it's made of a bunch of triangles, etc. An example of a non-homogeneous complex is a triangle with a line segment attached to one of its vertices. Pure simplicial complexes can be thought of as triangulations and provide a definition of polytopes.
A facet is a maximal simplex, i.e., any simplex in a complex that is not a face of any larger simplex. (Note the difference from a "face" of a simplex). A pure simplicial complex can be thought of as a complex where all facets have the same dimension. For (boundary complexes of) simplicial polytopes this coincides with the meaning from polyhedral combinatorics.
Sometimes the term face is used to refer to a simplex of a complex, not to be confused with a face of a simplex.
For a simplicial complex embedded in a k-dimensional space, the k-faces are sometimes referred to as its cells. The term cell is sometimes used in a broader sense to denote a set homeomorphic to a simplex, leading to the definition of cell complex.
The underlying space, sometimes called the carrier, of a simplicial complex K is the union of its simplices. It is usually denoted by |K| or ‖K‖.
Support
The relative interiors of all simplices in K form a partition of its underlying space |K|: for each point x ∈ |K|, there is exactly one simplex in K containing x in its relative interior. This simplex is called the support of x and denoted supp(x).
Closure, star, and link
Let K be a simplicial complex and let S be a collection of simplices in K.
The closure of S (denoted Cl S) is the smallest simplicial subcomplex of K that contains each simplex in S. Cl S is obtained by repeatedly adding to S each face o
|
https://en.wikipedia.org/wiki/Unit%20disk
|
In mathematics, the open unit disk (or disc) around P (where P is a given point in the plane), is the set of points whose distance from P is less than 1:
D₁(P) = {Q : d(Q, P) < 1}.
The closed unit disk around P is the set of points whose distance from P is less than or equal to one:
D̄₁(P) = {Q : d(Q, P) ≤ 1}.
Unit disks are special cases of disks and unit balls; as such, they contain the interior of the unit circle and, in the case of the closed unit disk, the unit circle itself.
Without further specifications, the term unit disk is used for the open unit disk about the origin, D₁(0), with respect to the standard Euclidean metric. It is the interior of a circle of radius 1, centered at the origin. This set can be identified with the set of all complex numbers of absolute value less than one. When viewed as a subset of the complex plane (C), the unit disk is often denoted 𝔻.
The open unit disk, the plane, and the upper half-plane
The function
f(z) = z / (1 − |z|²)
is an example of a real analytic and bijective function from the open unit disk to the plane; its inverse function is also analytic. Considered as a real 2-dimensional analytic manifold, the open unit disk is therefore isomorphic to the whole plane. In particular, the open unit disk is homeomorphic to the whole plane.
There is however no conformal bijective map between the open unit disk and the plane. Considered as a Riemann surface, the open unit disk is therefore different from the complex plane.
There are conformal bijective maps between the open unit disk and the open upper half-plane. So considered as a Riemann surface, the open unit disk is isomorphic ("biholomorphic", or "conformally equivalent") to the upper half-plane, and the two are often used interchangeably.
Much more generally, the Riemann mapping theorem states that every simply connected open subset of the complex plane that is different from the complex plane itself admits a conformal and bijective map to the open unit disk.
One bijective conformal map from the open unit disk to the open upper half-plane is the Möbius transformation
g(z) = i (1 + z) / (1 − z),
which is the inverse of the Cayley transform.
Geometrically, one can imagine the real axis being bent and shrunk so that the upper half-plane becomes the disk's interior and the real axis forms the disk's circumference, save for one point at the top, the "point at infinity". A bijective conformal map from the open unit disk to the open upper half-plane can also be constructed as the composition of two stereographic projections: first the unit disk is stereographically projected upward onto the unit upper half-sphere, taking the "south-pole" of the unit sphere as the projection center, and then this half-sphere is projected sideways onto a vertical half-plane touching the sphere, taking the point on the half-sphere opposite to the touching point as projection center.
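The mapping behaviour of the inverse Cayley transform w = i(1 + z)/(1 − z) can be spot-checked numerically (the test points below are our own choices): interior points of the disk land in the upper half-plane, and boundary points land on the real axis:

```python
import cmath

def to_half_plane(z):
    """Inverse Cayley transform: sends the open unit disk to the upper half-plane."""
    return 1j * (1 + z) / (1 - z)

print(to_half_plane(0))                                 # i: the center goes to i
print(to_half_plane(0.5j).imag > 0)                     # True: interior point, Im > 0
print(abs(to_half_plane(cmath.exp(1j)).imag) < 1e-9)    # True: circle point -> real axis
```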
The unit disk and the upper half-plane are not interchangeable as domains for Hardy spaces. Contributing to this difference is the fact that the unit circle has finite (one-dimensional) Lebesgu
|
https://en.wikipedia.org/wiki/Calculus%20%28dental%29
|
In dentistry, calculus or tartar is a form of hardened dental plaque. It is caused by precipitation of minerals from saliva and gingival crevicular fluid (GCF) in plaque on the teeth. This process of precipitation kills the bacterial cells within dental plaque, but the rough and hardened surface that is formed provides an ideal surface for further plaque formation. This leads to calculus buildup, which compromises the health of the gingiva (gums). Calculus can form both along the gumline, where it is referred to as supragingival ("above the gum"), and within the narrow sulcus that exists between the teeth and the gingiva, where it is referred to as subgingival ("below the gum").
Calculus formation is associated with a number of clinical manifestations, including bad breath, receding gums and chronically inflamed gingiva. Brushing and flossing can remove plaque from which calculus forms; however, once formed, calculus is too hard (firmly attached) to be removed with a toothbrush. Calculus buildup can be removed with ultrasonic tools or dental hand instruments (such as a periodontal scaler).
Etymology
The word comes from Latin calculus "small stone", from calx "limestone, lime", probably related to Greek chalix "small stone, pebble, rubble", which many trace to a Proto-Indo-European root for "split, break up". Calculus was a term used for various kinds of stones. This spun off many modern words, including "calculate" (use stones for mathematical purposes), and "calculus", which came to be used, in the 18th century, for accidental or incidental mineral buildups in human and animal bodies, like kidney stones and minerals on teeth.
Tartar, on the other hand, originates in Greek as well (tartaron), but as the term for the white encrustation inside casks (a.k.a. potassium bitartrate, commonly known as cream of tartar). This came to be a term used for calcium phosphate on teeth in the early 19th century.
Calculus chemical composition
Calculus is composed of both inorganic (mineral) and organic (cellular and extracellular matrix) components. The mineral proportion of calculus ranges from approximately 40–60%, depending on its location in the dentition, and consists primarily of calcium phosphate crystals organized into four principal mineral phases, listed here in order of decreasing ratio of phosphate to calcium:
whitlockite,
hydroxyapatite,
octacalcium phosphate,
and brushite.
The organic component of calculus is approximately 85% cellular and 15% extracellular matrix. Cell density within dental plaque and calculus is very high, consisting of an estimated 200,000,000 cells per milligram. The cells within calculus are primarily bacterial, but also include at least one species of archaea (Methanobrevibacter oralis) and several species of yeast (e.g., Candida albicans). The organic extracellular matrix in calculus consists primarily of proteins and lipids (fatty acids, triglycerides, glycolipids, and phospholipids), as well as extracellular DNA. T
|
https://en.wikipedia.org/wiki/Method%20of%20Fluxions
|
Method of Fluxions () is a mathematical treatise by Sir Isaac Newton which served as the earliest written formulation of modern calculus. The book was completed in 1671 and published in 1736. Fluxion is Newton's term for a derivative. He originally developed the method at Woolsthorpe Manor during the closing of Cambridge during the Great Plague of London from 1665 to 1667, but did not choose to make his findings known (similarly, his findings which eventually became the Philosophiae Naturalis Principia Mathematica were developed at this time and hidden from the world in Newton's notes for many years). Gottfried Leibniz developed his form of calculus independently around 1673, seven years after Newton had developed the basis for differential calculus, as seen in surviving documents like "the method of fluxions and fluents..." from 1666. Leibniz, however, published his discovery of differential calculus in 1684, nine years before Newton formally published part of his fluxion notation form of calculus in 1693. The calculus notation in use today is mostly that of Leibniz, although Newton's dot notation for differentiation, denoting derivatives with respect to time, is still in current use throughout mechanics and circuit analysis.
Newton's Method of Fluxions was formally published posthumously, but following Leibniz's publication of the calculus a bitter rivalry erupted between the two mathematicians over who had developed the calculus first, provoking Newton to reveal his work on fluxions.
Newton's development of analysis
For a period of time encompassing Newton's working life, the discipline of analysis was a subject of controversy in the mathematical community. Although analytic techniques provided solutions to long-standing problems, including problems of quadrature and the finding of tangents, the proofs of these solutions were not known to be reducible to the synthetic rules of Euclidean geometry. Instead, analysts were often forced to invoke infinitesimal, or "infinitely small", quantities to justify their algebraic manipulations. Some of Newton's mathematical contemporaries, such as Isaac Barrow, were highly skeptical of such techniques, which had no clear geometric interpretation. Although in his early work Newton also used infinitesimals in his derivations without justifying them, he later developed something akin to the modern definition of limits in order to justify his work.
See also
History of calculus
Calorimetry
George Berkeley
Leonhard Euler
Non-standard analysis
Newton's method
Charles Hayes (mathematician)
John Landen
John Colson
Leibniz–Newton calculus controversy
Joseph Raphson
Time in physics
William Lax
References and notes
External links
Method of Fluxions at the Internet Archive
History of mathematics
Mathematics books
Books by Isaac Newton
1671 books
1736 books
Differential calculus
Mathematics literature
1736 in science
Books published posthumously
|
https://en.wikipedia.org/wiki/William%20Farr
|
William Farr CB (30 November 1807 – 14 April 1883) was a British epidemiologist, regarded as one of the founders of medical statistics.
Early life
William Farr was born in Kenley, Shropshire, to poor parents. He was effectively adopted by a local squire, Joseph Pryce, when Farr and his family moved to Dorrington. In 1826 he took a job as a dresser (surgeon's assistant) in the Salop Infirmary in Shrewsbury and served a nominal apprenticeship to an apothecary. Pryce died in November 1828, and left Farr £500 (), which allowed him to study medicine in France and Switzerland. In Paris he heard Pierre Charles Alexandre Louis lecture.
Farr returned to England in 1831 and continued his studies at University College London, qualifying as a licentiate of the Society of Apothecaries in March 1832. He married in 1833 and started a medical practice in Fitzroy Square, London. He became involved in medical journalism and statistics.
General Register Office
In 1837 the General Register Office (GRO) took on the responsibility for the United Kingdom Census 1841. Farr was hired there, initially on a temporary basis, to handle data from vital registration. Then, with a recommendation from Edwin Chadwick and backing from Neil Arnott, Farr secured another post in the GRO as the first compiler of scientific abstracts (i.e. a statistician). Chadwick and Farr shared an agenda, demography aimed at public health, and had the support of the initial Registrar General, Thomas Henry Lister. Lister worked with Farr on the census design to forward the programme.
Farr was responsible for the collection of official medical statistics in England and Wales. His most important contribution was to set up a system for routinely recording the causes of death. For example, for the first time it allowed the mortality rates of different occupations to be compared.
Learned societies and associations
In 1839, Farr joined the Statistical Society, in which he played an active part as treasurer, vice-president and president over the years. In 1855 he was elected Fellow of the Royal Society. He was involved in the Social Science Association from its foundation in 1857, taking part in its Quarantine Committee and Committee on Trades' Societies and Strikes.
Law of epidemics
In 1840, Farr submitted a letter to the Annual Report of the Registrar General of Births, Deaths and Marriages in England. In that letter, he applied mathematics to the records of deaths during a recent smallpox epidemic, proposing that:
"If the latent cause of epidemics cannot be discovered, the mode in which it operates may be investigated. The laws of its action may be determined by observation, as well as the circumstances in which epidemics arise, or by which they may be controlled."
He showed that during the smallpox epidemic, a plot of the number of deaths per quarter followed a roughly bell-shaped or "normal curve", and that recent epidemics of other diseases had followed a similar pattern.
Research on cholera
|
https://en.wikipedia.org/wiki/Dimension%20of%20an%20algebraic%20variety
|
In mathematics and specifically in algebraic geometry, the dimension of an algebraic variety may be defined in various equivalent ways.
Some of these definitions are geometric in nature, while others are purely algebraic and rely on commutative algebra. Some are restricted to algebraic varieties while others apply also to any algebraic set. Some are intrinsic, as they are independent of any embedding of the variety into an affine or projective space, while others are related to such an embedding.
Dimension of an affine algebraic set
Let $k$ be a field, and $K \supseteq k$ be an algebraically closed extension.
An affine algebraic set $V$ is the set of the common zeros in $K^n$ of the elements of an ideal $I$ in a polynomial ring $R = k[x_1, \ldots, x_n]$. Let $A = R/I$ be the algebra of the polynomial functions over $V$. The dimension of $V$ is any of the following integers. It does not change if $k$ is enlarged, if $K$ is replaced by another algebraically closed extension of $k$, and if $I$ is replaced by another ideal having the same zeros (that is, having the same radical). The dimension is also independent of the choice of coordinates; in other words, it does not change if the $x_i$ are replaced by linearly independent linear combinations of them. The dimension of $V$ is
The maximal length of the chains of distinct nonempty (irreducible) subvarieties of $V$.
This definition generalizes a property of the dimension of a Euclidean space or a vector space. It is thus probably the definition that gives the easiest intuitive description of the notion.
The Krull dimension of the coordinate ring $A$.
This is the transcription of the preceding definition in the language of commutative algebra, the Krull dimension being the maximal length of the chains of prime ideals of $A$.
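For instance, the chain-of-primes characterization can be made concrete in the affine plane; the following is a standard illustration (not taken from this article), written in LaTeX:

```latex
% dim \mathbb{A}^2 = 2: a maximal chain of prime ideals in k[x, y] is
(0) \subsetneq (x) \subsetneq (x, y),
% which corresponds, by reversing inclusions, to a chain of irreducible subvarieties
\mathbb{A}^2 \supsetneq V(x) \supsetneq \{(0, 0)\}.
% There are two strict inclusions, hence
% \dim k[x, y] = \dim \mathbb{A}^2 = 2.
```

Here $(x)$ is the prime ideal cutting out the line $V(x)$, and $(x, y)$ is the maximal ideal of the origin.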
The maximal Krull dimension of the local rings at the points of $V$.
This definition shows that the dimension is a local property if $V$ is irreducible. If $V$ is irreducible, it turns out that all the local rings at closed points have the same Krull dimension.
If $V$ is a variety, the Krull dimension of the local ring at any point of $V$.
This rephrases the previous definition into a more geometric language.
The maximal dimension of the tangent vector spaces at the non-singular points of $V$.
This relates the dimension of a variety to that of a differentiable manifold. More precisely, if $V$ is defined over the reals, then the set of its real regular points, if it is not empty, is a differentiable manifold that has the same dimension as a variety and as a manifold.
If $V$ is a variety, the dimension of the tangent vector space at any non-singular point of $V$.
This is the algebraic analogue to the fact that a connected manifold has a constant dimension. This can also be deduced from the result stated below the third definition, and the fact that the dimension of the tangent space is equal to the Krull dimension at any non-singular point (see Zariski tangent space).
The number of hyperplanes or hypersurfaces in general position which are needed to have an intersection with which is reduced
|
https://en.wikipedia.org/wiki/Algebraic%20curve
|
In mathematics, an affine algebraic plane curve is the zero set of a polynomial in two variables. A projective algebraic plane curve is the zero set in a projective plane of a homogeneous polynomial in three variables. An affine algebraic plane curve can be completed in a projective algebraic plane curve by homogenizing its defining polynomial. Conversely, a projective algebraic plane curve of homogeneous equation can be restricted to the affine algebraic plane curve of equation . These two operations are each inverse to the other; therefore, the phrase algebraic plane curve is often used without specifying explicitly whether it is the affine or the projective case that is considered.
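The passage above describes homogenizing an affine plane curve to obtain a projective one, and recovering the affine curve by setting the extra variable to 1. A minimal sketch with SymPy (the `homogenize` helper and the sample cubic are illustrative choices, not from the article):

```python
from sympy import symbols, Poly, prod

x, y, z = symbols('x y z')

def homogenize(expr, vars, h):
    """Multiply each term by h**(d - deg(term)) so every term reaches total degree d."""
    p = Poly(expr, *vars)
    d = p.total_degree()
    return sum(coeff * prod(v**e for v, e in zip(vars, exps)) * h**(d - sum(exps))
               for exps, coeff in p.terms())

# An affine plane cubic, y**2 = x**3 + x, written as a zero set p = 0.
p = y**2 - x**3 - x
P = homogenize(p, (x, y), z)              # y**2*z - x**3 - x*z**2

# Restricting the projective curve to the chart z = 1 recovers the affine curve.
assert (P.subs(z, 1) - p).expand() == 0
```

The two operations are inverse to each other exactly as the text states: every term of `P` has total degree 3, and substituting `z = 1` undoes the homogenization.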
More generally, an algebraic curve is an algebraic variety of dimension one. Equivalently, an algebraic curve is an algebraic variety that is birationally equivalent to an algebraic plane curve. If the curve is contained in an affine space or a projective space, one can take a projection for such a birational equivalence.
These birational equivalences reduce most of the study of algebraic curves to the study of algebraic plane curves. However, some properties are not kept under birational equivalence and must be studied on non-plane curves. This is, in particular, the case for the degree and smoothness. For example, there exist smooth curves of genus 0 and degree greater than two, but any plane projection of such curves has singular points (see Genus–degree formula).
A non-plane curve is often called a space curve or a skew curve.
In Euclidean geometry
An algebraic curve in the Euclidean plane is the set of the points whose coordinates are the solutions of a bivariate polynomial equation p(x, y) = 0. This equation is often called the implicit equation of the curve, in contrast to the curves that are the graph of a function defining explicitly y as a function of x.
With a curve given by such an implicit equation, the first problems are to determine the shape of the curve and to draw it. These problems are not as easy to solve as in the case of the graph of a function, for which y may easily be computed for various values of x. The fact that the defining equation is a polynomial implies that the curve has some structural properties that may help in solving these problems.
Every algebraic curve may be uniquely decomposed into a finite number of smooth monotone arcs (also called branches) sometimes connected by some points sometimes called "remarkable points", and possibly a finite number of isolated points called acnodes. A smooth monotone arc is the graph of a smooth function which is defined and monotone on an open interval of the x-axis. In each direction, an arc is either unbounded (usually called an infinite arc) or has an endpoint which is either a singular point (this will be defined below) or a point with a tangent parallel to one of the coordinate axes.
For example, for the Tschirnhausen cubic, there are two infinite arcs having the origin (0,0) as of
|
https://en.wikipedia.org/wiki/Almost
|
In set theory, when dealing with sets of infinite size, the term almost or nearly is used to refer to all but a negligible amount of elements in the set. The notion of "negligible" depends on the context, and may mean "of measure zero" (in a measure space), "finite" (when infinite sets are involved), or "countable" (when uncountably infinite sets are involved).
For example:
The set $\{ n \in \mathbb{N} : n \ge k \}$ is almost $\mathbb{N}$ for any $k$ in $\mathbb{N}$, because only finitely many natural numbers are less than $k$.
The set of prime numbers is not almost $\mathbb{N}$, because there are infinitely many natural numbers that are not prime numbers.
The set of transcendental numbers is almost $\mathbb{R}$, because the algebraic real numbers form a countable subset of the set of real numbers (which is uncountable).
The Cantor set is uncountably infinite, but has Lebesgue measure zero. So almost all real numbers in (0, 1) are members of the complement of the Cantor set.
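The prime-number example above can be checked empirically: not only are there infinitely many non-primes, the proportion of primes below N keeps shrinking. A small sketch (the trial-division helper is ours, not from the article):

```python
def is_prime(n):
    """Trial division; fine for the small ranges sampled here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# The density of primes below N falls as N grows, so the primes are
# not "almost" the natural numbers: their complement is infinite.
for N in (10**2, 10**3, 10**4):
    primes = sum(is_prime(k) for k in range(N))
    print(N, primes / N)
```

Running this shows the density dropping (0.25, then 0.168, then 0.1229), consistent with the prime number theorem's 1/ln N decay.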
See also
Almost all
Almost surely
Approximation
List of mathematical jargon
References
Mathematical terminology
Set theory
|
https://en.wikipedia.org/wiki/Well-defined%20expression
|
In mathematics, a well-defined expression or unambiguous expression is an expression whose definition assigns it a unique interpretation or value. Otherwise, the expression is said to be not well defined, ill defined or ambiguous. A function is well defined if it gives the same result when the representation of the input is changed without changing the value of the input. For instance, if $f$ takes real numbers as input, and if $f(0.5)$ does not equal $f(1/2)$, then $f$ is not well defined (and thus not a function). The term well defined can also be used to indicate that a logical expression is unambiguous or uncontradictory.
A function that is not well defined is not the same as a function that is undefined. For example, if $f(x) = 1/x$, then the fact that $f(0)$ is undefined does not mean that the function is not well defined, but simply that 0 is not in the domain of $f$.
Example
Let $A_0, A_1$ be sets, let $A = A_0 \cup A_1$, and "define" $f \colon A \to \{0, 1\}$ as $f(a) = 0$ if $a \in A_0$ and $f(a) = 1$ if $a \in A_1$.
Then $f$ is well defined if $A_0 \cap A_1 = \varnothing$. For example, if $A_0 = \{2, 4\}$ and $A_1 = \{3, 5\}$, then $f(a)$ would be well defined and equal to $a \bmod 2$.
However, if $A_0 \cap A_1 \neq \varnothing$, then $f$ would not be well defined because $f(a)$ is "ambiguous" for $a \in A_0 \cap A_1$. For example, if $A_0 = \{2\}$ and $A_1 = \{2\}$, then $f(2)$ would have to be both 0 and 1, which makes it ambiguous. As a result, the latter $f$ is not well defined and thus not a function.
"Definition" as anticipation of definition
In order to avoid the quotation marks around "define" in the previous simple example, the "definition" of could be broken down into two simple logical steps:
While the definition in step 1 is formulated with the freedom of any definition and is certainly effective (without the need to classify it as "well defined"), the assertion in step 2 has to be proved. That is, $f$ is a function if and only if $A_0 \cap A_1 = \varnothing$, in which case $f$ – as a function – is well defined.
On the other hand, if $A_0 \cap A_1 \neq \varnothing$, then for an $a \in A_0 \cap A_1$, we would have that $f(a) = 0$ and $f(a) = 1$, which makes the binary relation $f$ not functional (as defined in Binary relation#Special types of binary relations) and thus not well defined as a function. Colloquially, the "function" $f$ is also called ambiguous at point $a$ (although there is per definitionem never an "ambiguous function"), and the original "definition" is pointless.
Despite these subtle logical problems, it is quite common to anticipatorily use the term definition (without apostrophes) for "definitions" of this kind – for three reasons:
It provides a handy shorthand of the two-step approach.
The relevant mathematical reasoning (i.e., step 2) is the same in both cases.
In mathematical texts, the assertion is "up to 100%" true.
Independence of representative
The question of well definedness of a function classically arises when the defining equation of a function does not (only) refer to the arguments themselves, but (also) to elements of the arguments, serving as representatives. This is sometimes unavoidable when the arguments are cosets and the equation refers to coset representatives. The result of a function application must then not depend on the choice of representative.
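Independence of the representative can be checked mechanically: evaluate the candidate function on several representatives of each coset and see whether the results agree. A small sketch for classes modulo 4 (the `well_defined` helper and the sample maps are ours, not from the article):

```python
# Cosets of 4Z in Z, each listed with several representatives.
classes = {0: [0, 4, 8], 1: [1, 5, 9], 2: [2, 6, 10], 3: [3, 7, 11]}

def well_defined(f):
    """f induces a map on Z/4Z iff it agrees on all representatives of each class."""
    return all(len({f(r) for r in reps}) == 1 for reps in classes.values())

parity = lambda n: n % 2   # depends only on the class mod 4: well defined
mod3   = lambda n: n % 3   # 1 % 3 == 1 but 5 % 3 == 2: not well defined

assert well_defined(parity)
assert not well_defined(mod3)
```

This mirrors the text: `parity` yields the same value for every representative of a coset, so it descends to a genuine function on the quotient, while `mod3` does not.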
Functions with one argument
For example, c
|
https://en.wikipedia.org/wiki/Implied%20volatility
|
In financial mathematics, the implied volatility (IV) of an option contract is that value of the volatility of the underlying instrument which, when input in an option pricing model (such as Black–Scholes), will return a theoretical value equal to the current market price of said option. A non-option financial instrument that has embedded optionality, such as an interest rate cap, can also have an implied volatility. Implied volatility, a forward-looking and subjective measure, differs from historical volatility because the latter is calculated from known past returns of a security. To understand where implied volatility stands relative to the underlying, implied volatility rank is used, which compares the current implied volatility to its one-year high and low values.
Motivation
An option pricing model, such as Black–Scholes, uses a variety of inputs to derive a theoretical value for an option. Inputs to pricing models vary depending on the type of option being priced and the pricing model used. However, in general, the value of an option depends on an estimate of the future realized price volatility, σ, of the underlying. Or, mathematically:
where C is the theoretical value of an option, and f is a pricing model that depends on σ, along with other inputs.
The function f is monotonically increasing in σ, meaning that a higher value for volatility results in a higher theoretical value of the option. Conversely, by the inverse function theorem, there can be at most one value for σ that, when applied as an input to $f$, will result in a particular value for C.
Put in other terms, assume that there is some inverse function g = f−1, such that
where is the market price for an option. The value is the volatility implied by the market price , or the implied volatility.
In general, it is not possible to give a closed-form formula for implied volatility in terms of call price. However, in some cases (large strike, low strike, short expiry, large expiry) it is possible to give an asymptotic expansion of implied volatility in terms of call price. A different approach based on closed-form approximations has also been investigated.
Example
A European call option, , on one share of non-dividend-paying XYZ Corp with a strike price of $50 expires in 32 days. The risk-free interest rate is 5%. XYZ stock is currently trading at $51.25 and the current market price of is $2.00. Using a standard Black–Scholes pricing model, the volatility implied by the market price is 18.7%, or:
To verify, we apply implied volatility to the pricing model, f , and generate a theoretical value of $2.0004:
which confirms our computation of the market implied volatility.
Solving the inverse pricing model function
In general, a pricing model function, f, does not have a closed-form solution for its inverse, g. Instead, a root finding technique is often used to solve the equation:
While there are many techniques for finding roots, two of the most commonly used are Newt
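Since f is monotone in σ, even plain bisection inverts it reliably. The following sketch reproduces the article's example; it assumes an actual/365 day count for the 32 days and uses a hand-rolled bisection rather than any particular library routine:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call on a non-dividend-paying stock."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-10):
    """Invert bs_call in sigma by bisection; valid because f is monotone in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# The article's example: S = 51.25, K = 50, r = 5%, 32 days, market price $2.00.
iv = implied_vol(2.00, S=51.25, K=50.0, T=32/365, r=0.05)
print(round(iv, 3))   # close to 0.187, matching the quoted 18.7%
```

In practice Newton's method converges faster because the option's vega (the derivative of price with respect to σ) is available in closed form, but bisection never overshoots.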
|
https://en.wikipedia.org/wiki/Matrix%20decomposition
|
In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems.
Example
In numerical analysis, different decompositions are used to implement efficient matrix algorithms.
For instance, when solving a system of linear equations , the matrix A can be decomposed via the LU decomposition. The LU decomposition factorizes a matrix into a lower triangular matrix L and an upper triangular matrix U. The systems and require fewer additions and multiplications to solve, compared with the original system , though one might require significantly more digits in inexact arithmetic such as floating point.
Similarly, the QR decomposition expresses A as QR with Q an orthogonal matrix and R an upper triangular matrix. The system Q(Rx) = b is solved by Rx = QTb = c, and the system Rx = c is solved by 'back substitution'. The number of additions and multiplications required is about twice that of using the LU solver, but no more digits are required in inexact arithmetic because the QR decomposition is numerically stable.
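The QR solve described above can be sketched in a few lines; this assumes NumPy and SciPy (the random test matrix is ours):

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)

Q, R = np.linalg.qr(A)        # A = QR: Q orthogonal, R upper triangular
c = Q.T @ b                   # Q(Rx) = b  =>  Rx = Q^T b = c
x = solve_triangular(R, c)    # back substitution on the triangular system

assert np.allclose(A @ x, b)
```

Forming `Q.T @ b` costs only a matrix-vector product, and the triangular back substitution is O(n²), so all the O(n³) work is in the decomposition itself.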
Decompositions related to solving systems of linear equations
LU decomposition
Traditionally applicable to: square matrix A, although rectangular matrices can also be decomposed.
Decomposition: , where L is lower triangular and U is upper triangular
Related: the LDU decomposition is , where L is lower triangular with ones on the diagonal, U is upper triangular with ones on the diagonal, and D is a diagonal matrix.
Related: the LUP decomposition is , where L is lower triangular, U is upper triangular, and P is a permutation matrix.
Existence: An LUP decomposition exists for any square matrix A. When P is an identity matrix, the LUP decomposition reduces to the LU decomposition.
Comments: The LUP and LU decompositions are useful in solving an n-by-n system of linear equations . These decompositions summarize the process of Gaussian elimination in matrix form. Matrix P represents any row interchanges carried out in the process of Gaussian elimination. If Gaussian elimination produces the row echelon form without requiring any row interchanges, then P = I, so an LU decomposition exists.
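A quick LUP sketch with SciPy (the test matrix is ours, chosen with a zero pivot so that a row interchange is actually required):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[0., 2., 1.],
              [1., 1., 1.],
              [2., 1., 0.]])
b = np.array([3., 6., 5.])

P, L, U = lu(A)               # A = P @ L @ U; pivoting is forced since A[0, 0] == 0
y = solve_triangular(L, P.T @ b, lower=True)   # forward substitution: L y = P^T b
x = solve_triangular(U, y)                     # back substitution:    U x = y

assert np.allclose(A @ x, b)
```

Note that `A` here has no plain LU decomposition (its leading entry is zero), which is exactly why the permutation matrix P is part of the factorization.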
LU reduction
Block LU decomposition
Rank factorization
Applicable to: m-by-n matrix A of rank r
Decomposition: where C is an m-by-r full column rank matrix and F is an r-by-n full row rank matrix
Comment: The rank factorization can be used to compute the Moore–Penrose pseudoinverse of A, which one can apply to obtain all solutions of the linear system .
Cholesky decomposition
Applicable to: square, hermitian, positive definite matrix
Decomposition: , where is upper triangular with real positive diagonal entries
Comment: if the matrix is Hermitian and positive semi-definite, then it has a decomposition of the form if the diagonal entries of are allowed to be zero
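As a small illustration of the Cholesky factorization: NumPy's routine returns the lower-triangular factor L with A = L Lᵀ, the transpose of the upper-triangular convention written above (the sample SPD matrix is ours):

```python
import numpy as np

# B^T B + I is symmetric positive definite for any real B.
B = np.arange(9.0).reshape(3, 3)
A = B.T @ B + np.eye(3)

L = np.linalg.cholesky(A)          # lower-triangular factor with A = L @ L.T
assert np.allclose(L @ L.T, A)
assert np.all(np.diag(L) > 0)      # real positive diagonal entries
assert np.allclose(L, np.tril(L))  # L really is lower triangular
```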
|
https://en.wikipedia.org/wiki/Proof%20of%20Bertrand%27s%20postulate
|
In mathematics, Bertrand's postulate (actually now a theorem) states that for each $n \ge 1$ there is a prime $p$ such that $n < p \le 2n$. First conjectured in 1845 by Joseph Bertrand, it was first proven by Chebyshev, and a shorter but also advanced proof was given by Ramanujan.
The following elementary proof was published by Paul Erdős in 1932, as one of his earliest mathematical publications. The basic idea is to show that the central binomial coefficients $\binom{2n}{n}$ need to have a prime factor within the interval $(n, 2n)$ in order to be large enough. This is achieved through analysis of the factorization of the central binomial coefficients.
The main steps of the proof are as follows. First, show that the contribution of every prime power factor $p^{R(p,n)}$ in the prime decomposition of the central binomial coefficient $\binom{2n}{n}$ is at most $2n$. Then show that every prime larger than $\sqrt{2n}$ appears at most once.
The next step is to prove that $\binom{2n}{n}$ has no prime factors in the interval $(\tfrac{2n}{3}, n)$. As a consequence of these bounds, the contribution to the size of $\binom{2n}{n}$ coming from the prime factors that are at most $n$ grows asymptotically as $\theta^n$ for some $\theta < 4$. Since the asymptotic growth of the central binomial coefficient is at least $4^n/2n$, the conclusion is that, by contradiction and for large enough $n$, the binomial coefficient must have another prime factor, which can only lie between $n$ and $2n$.
The argument given is valid for all sufficiently large $n$. The remaining values of $n$ are verified by direct inspection, which completes the proof.
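The direct inspection of small cases is easy to automate; a brute-force sketch (the helper functions are ours, not Erdős's argument):

```python
def is_prime(n):
    """Trial division; sufficient for this small search range."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def bertrand_holds(n):
    """Is there a prime p with n < p <= 2n?"""
    return any(is_prime(p) for p in range(n + 1, 2 * n + 1))

# Verify the postulate for every n up to 2000.
assert all(bertrand_holds(n) for n in range(1, 2000))
```

Of course this only checks finitely many cases; the point of the proof above is to cover all large n at once.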
Lemmas in the proof
The proof uses the following four lemmas to establish facts about the primes present in the central binomial coefficients.
Lemma 1
For any integer $n \ge 1$, we have
$$\frac{4^n}{2n+1} \le \binom{2n}{n}.$$
Proof: Applying the binomial theorem,
$$4^n = (1+1)^{2n} = \sum_{k=0}^{2n} \binom{2n}{k} \le (2n+1)\binom{2n}{n},$$
since $\binom{2n}{n}$ is the largest term in the sum on the right-hand side, and the sum has $2n+1$ terms.
Lemma 2
For a fixed prime $p$, define $R = R(p,n)$ to be the p-adic order of $\binom{2n}{n}$, that is, the largest natural number $r$ such that $p^r$ divides $\binom{2n}{n}$.
For any prime $p$, $p^{R(p,n)} \le 2n$.
Proof: The exponent of $p$ in $n!$ is given by Legendre's formula
$$\sum_{j=1}^{\infty} \left\lfloor \frac{n}{p^j} \right\rfloor,$$
so
$$R = \sum_{j=1}^{\infty} \left\lfloor \frac{2n}{p^j} \right\rfloor - 2\sum_{j=1}^{\infty} \left\lfloor \frac{n}{p^j} \right\rfloor = \sum_{j=1}^{\infty} \left( \left\lfloor \frac{2n}{p^j} \right\rfloor - 2\left\lfloor \frac{n}{p^j} \right\rfloor \right).$$
But each term of the last summation must be either zero (if $n/p^j \bmod 1 < 1/2$) or one (if $n/p^j \bmod 1 \ge 1/2$), and all terms with $p^j > 2n$ are zero. Therefore,
$$R \le \log_p(2n),$$
and
$$p^R \le p^{\log_p(2n)} = 2n.$$
Lemma 3
If $p$ is odd and $\tfrac{2n}{3} < p \le n$, then $R(p,n) = 0.$
Proof: There are exactly two factors of $p$ in the numerator of the expression $\binom{2n}{n} = \frac{(2n)!}{(n!)^2}$, coming from the two terms $p$ and $2p$ in $(2n)!$, and also two factors of $p$ in the denominator from one copy of the term $p$ in each of the two factors of $n!$. These factors all cancel, leaving no factors of $p$ in $\binom{2n}{n}$. (The bound on $p$ in the preconditions of the lemma ensures that $3p$ is too large to be a term of the numerator, and the assumption that $p$ is odd is needed to ensure that $2p$ contributes only one factor of $p$ to the numerator.)
Lemma 4
An upper bound is supplied for the primorial function,
$$x\# = \prod_{p \le x} p,$$
where the product is taken over all prime numbers $p$ less than or equal to the real number $x$.
For all real numbers $x \ge 1$, $x\# < 4^x$.
Proof:
Since and , it suffices to prove the result under the assumption that is an integer, Since is an integer and all the primes appear in its numerator but not in its denom
|
https://en.wikipedia.org/wiki/Mathematical%20fallacy
|
In mathematics, certain kinds of mistaken proof are often exhibited, and sometimes collected, as illustrations of a concept called mathematical fallacy. There is a distinction between a simple mistake and a mathematical fallacy in a proof, in that a mistake in a proof leads to an invalid proof while in the best-known examples of mathematical fallacies there is some element of concealment or deception in the presentation of the proof.
For example, the reason why validity fails may be attributed to a division by zero that is hidden by algebraic notation. There is a certain quality of the mathematical fallacy: as typically presented, it leads not only to an absurd result, but does so in a crafty or clever way. Therefore, these fallacies, for pedagogic reasons, usually take the form of spurious proofs of obvious contradictions. Although the proofs are flawed, the errors, usually by design, are comparatively subtle, or designed to show that certain steps are conditional, and are not applicable in the cases that are the exceptions to the rules.
The traditional way of presenting a mathematical fallacy is to give an invalid step of deduction mixed in with valid steps, so that the meaning of fallacy is here slightly different from the logical fallacy. The latter usually applies to a form of argument that does not comply with the valid inference rules of logic, whereas the problematic mathematical step is typically a correct rule applied with a tacit wrong assumption. Beyond pedagogy, the resolution of a fallacy can lead to deeper insights into a subject (e.g., the introduction of Pasch's axiom of Euclidean geometry, the five colour theorem of graph theory). Pseudaria, an ancient lost book of false proofs, is attributed to Euclid.
Mathematical fallacies exist in many branches of mathematics. In elementary algebra, typical examples may involve a step where division by zero is performed, where a root is incorrectly extracted or, more generally, where different values of a multiple valued function are equated. Well-known fallacies also exist in elementary Euclidean geometry and calculus.
Howlers
Examples exist of mathematically correct results derived by incorrect lines of reasoning. Such an argument, however true the conclusion appears to be, is mathematically invalid and is commonly known as a howler. The following is an example of a howler involving anomalous cancellation:
Here, although the conclusion $\frac{16}{64} = \frac{1}{4}$ is correct, there is a fallacious, invalid cancellation in the middle step. Another classical example of a howler is proving the Cayley–Hamilton theorem by simply substituting the scalar variables of the characteristic polynomial by the matrix.
Bogus proofs, calculations, or derivations constructed to produce a correct result in spite of incorrect logic or operations were termed "howlers" by Maxwell. Outside the field of mathematics the term howler has various meanings, generally less specific.
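The anomalous-cancellation howler above is one of exactly four two-digit cases, which a short brute-force search can confirm (the search code is ours, not from the article):

```python
from fractions import Fraction

# Find fractions (10a + b) / (10b + c) that remain correct after
# "cancelling" the shared digit b, i.e. that equal a / c.
howlers = []
for a in range(1, 10):
    for b in range(1, 10):
        for c in range(1, 10):
            num, den = 10 * a + b, 10 * b + c
            if a != c and Fraction(num, den) == Fraction(a, c):
                howlers.append((num, den))

print(howlers)   # [(16, 64), (19, 95), (26, 65), (49, 98)]
```

Each of the four survivors (16/64 = 1/4, 19/95 = 1/5, 26/65 = 2/5, 49/98 = 1/2) is a correct result reached by an invalid step, which is precisely what makes it a howler.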
Division by zero
The division-by-zero fallacy has
|
https://en.wikipedia.org/wiki/Minimal%20polynomial
|
Minimal polynomial can mean:
Minimal polynomial (field theory)
Minimal polynomial of 2cos(2pi/n)
Minimal polynomial (linear algebra)
|
https://en.wikipedia.org/wiki/Isometry
|
In mathematics, an isometry (or congruence, or congruent transformation) is a distance-preserving transformation between metric spaces, usually assumed to be bijective. The word isometry is derived from the Ancient Greek: ἴσος isos meaning "equal", and μέτρον metron meaning "measure".
Introduction
Given a metric space (loosely, a set and a scheme for assigning distances between elements of the set), an isometry is a transformation which maps elements to the same or another metric space such that the distance between the image elements in the new metric space is equal to the distance between the elements in the original metric space.
In a two-dimensional or three-dimensional Euclidean space, two geometric figures are congruent if they are related by an isometry;
the isometry that relates them is either a rigid motion (translation or rotation), or a composition of a rigid motion and a reflection.
Isometries are often used in constructions where one space is embedded in another space. For instance, the completion of a metric space $M$ involves an isometry from $M$ into a quotient set of the space of Cauchy sequences on $M$.
The original space is thus isometrically isomorphic to a subspace of a complete metric space, and it is usually identified with this subspace.
Other embedding constructions show that every metric space is isometrically isomorphic to a closed subset of some normed vector space and that every complete metric space is isometrically isomorphic to a closed subset of some Banach space.
An isometric surjective linear operator on a Hilbert space is called a unitary operator.
Definition
Let $X$ and $Y$ be metric spaces with metrics (e.g., distances) $d_X$ and $d_Y$.
A map $f \colon X \to Y$ is called an isometry or distance preserving if for any $a, b \in X$ one has
$$d_Y\!\left(f(a), f(b)\right) = d_X(a, b).$$
An isometry is automatically injective; otherwise two distinct points, a and b, could be mapped to the same point, thereby contradicting the coincidence axiom of the metric d.
This proof is similar to the proof that an order embedding between partially ordered sets is injective. Clearly, every isometry between metric spaces is a topological embedding.
A global isometry, isometric isomorphism or congruence mapping is a bijective isometry. Like any other bijection, a global isometry has a function inverse.
The inverse of a global isometry is also a global isometry.
Two metric spaces X and Y are called isometric if there is a bijective isometry from X to Y.
The set of bijective isometries from a metric space to itself forms a group with respect to function composition, called the isometry group.
There is also the weaker notion of path isometry or arcwise isometry:
A path isometry or arcwise isometry is a map which preserves the lengths of curves; such a map is not necessarily an isometry in the distance preserving sense, and it need not necessarily be bijective, or even injective.
This term is often abridged to simply isometry, so one should take care to determine from context which type is intended.
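As a concrete check of the distance-preserving property, the sketch below verifies numerically that a plane rotation is an isometry of the Euclidean plane (the helper names are illustrative, not from any particular library):

```python
import math

def distance(p, q):
    """Euclidean distance between two points in the plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def rotate(p, theta):
    """Rotate a point about the origin by angle theta (a rigid motion)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

# A rotation preserves all pairwise distances, so it is an isometry.
a, b = (1.0, 2.0), (-3.0, 0.5)
theta = 0.73
d_before = distance(a, b)
d_after = distance(rotate(a, theta), rotate(b, theta))
assert abs(d_before - d_after) < 1e-12
```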
Examples
An
|
https://en.wikipedia.org/wiki/Tensor%20%28intrinsic%20definition%29
|
In mathematics, the modern component-free approach to the theory of a tensor views a tensor as an abstract object, expressing some definite type of multilinear concept. Their properties can be derived from their definitions, as linear maps or more generally; and the rules for manipulations of tensors arise as an extension of linear algebra to multilinear algebra.
In differential geometry, an intrinsic geometric statement may be described by a tensor field on a manifold, and then doesn't need to make reference to coordinates at all. The same is true in general relativity, of tensor fields describing a physical property. The component-free approach is also used extensively in abstract algebra and homological algebra, where tensors arise naturally.
Note: This article assumes an understanding of the tensor product of vector spaces without chosen bases. An overview of the subject can be found in the main tensor article.
Definition via tensor products of vector spaces
Given a finite set {V1, ..., Vn} of vector spaces over a common field F, one may form their tensor product V1 ⊗ ... ⊗ Vn, an element of which is termed a tensor.
A tensor on the vector space V is then defined to be an element of (i.e., a vector in) a vector space of the form:
V ⊗ ... ⊗ V ⊗ V∗ ⊗ ... ⊗ V∗
where V∗ is the dual space of V.
If there are m copies of V and n copies of V∗ in our product, the tensor is said to be of type (m, n), contravariant of order m and covariant of order n, and of total order m + n. The tensors of order zero are just the scalars (elements of the field F), those of contravariant order 1 are the vectors in V, and those of covariant order 1 are the one-forms in V∗ (for this reason, the elements of the last two spaces are often called the contravariant and covariant vectors). The space of all tensors of type (m, n) is denoted T^m_n(V).
Example 1. The space of type (1, 1) tensors, V ⊗ V∗, is isomorphic in a natural way to the space of linear transformations from V to V.
Example 2. A bilinear form on a real vector space V corresponds in a natural way to a type (0, 2) tensor in V∗ ⊗ V∗. An example of such a bilinear form, termed the associated metric tensor, is usually denoted g.
Tensor rank
A simple tensor (also called a tensor of rank one, elementary tensor or decomposable tensor) is a tensor that can be written as a product of tensors of the form a ⊗ b ⊗ ... ⊗ d,
where a, b, ..., d are nonzero and in V or V∗ – that is, if the tensor is nonzero and completely factorizable. Every tensor can be expressed as a sum of simple tensors. The rank of a tensor T is the minimum number of simple tensors that sum to T.
The zero tensor has rank zero. A nonzero order 0 or 1 tensor always has rank 1. The rank of a non-zero order 2 or higher tensor is less than or equal to the product of the dimensions of all but the highest-dimensioned vectors in (a sum of products of) which the tensor can be expressed, which is d^(n−1) when each product is of n vectors from a finite-dimensional vector space of dimension d.
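For order-2 tensors the rank-one condition has a classical concrete form: a matrix is a simple tensor a ⊗ b exactly when all of its 2×2 minors vanish. A minimal pure-Python sketch (the helper names are illustrative):

```python
def outer(a, b):
    """Simple (rank-one) tensor a ⊗ b, represented as a matrix."""
    return [[x * y for y in b] for x in a]

def minors_vanish(m):
    """A matrix is a simple tensor iff all of its 2x2 minors are zero."""
    rows, cols = len(m), len(m[0])
    return all(
        abs(m[i][j] * m[k][l] - m[i][l] * m[k][j]) < 1e-12
        for i in range(rows) for k in range(i + 1, rows)
        for j in range(cols) for l in range(j + 1, cols)
    )

t1 = outer([1, 2], [3, 4, 5])        # simple tensor: rank 1
assert minors_vanish(t1)

# The sum of two "generic" simple tensors is no longer simple (rank 2).
t2 = outer([1, 0], [1, 0, 0])
s = [[t1[i][j] + t2[i][j] for j in range(3)] for i in range(2)]
assert not minors_vanish(s)
```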
The term rank of a tensor extends the notion of the rank of a matrix in linear algebra.
|
https://en.wikipedia.org/wiki/Modus
|
Modus may refer to:
Modus, the Latin name for grammatical mood, in linguistics
Modus, the Latin name for mode (statistics)
Modus (company), an Alberta-based company
Modus (medieval music), a term used in several different technical meanings in medieval music theory
The Renault Modus, a small car
Modus (band), a pop music band in former Czechoslovakia
Modus (album), 1979, or the title track
Short for modus decimandi, a type of payment made in lieu of a tithe
Modus FX, a visual effects company based in Sainte-Thérèse, Quebec, Canada
Modus (TV series), a Swedish television series, 2015
"Modus", a song by Joji from his 2020 album Nectar
See also
Modus operandi
Modus operandi (disambiguation)
Modus vivendi
|
https://en.wikipedia.org/wiki/Experimental%20mathematics
|
Experimental mathematics is an approach to mathematics in which computation is used to investigate mathematical objects and identify properties and patterns. It has been defined as "that branch of mathematics that concerns itself ultimately with the codification and transmission of insights within the mathematical community through the use of experimental (in either the Galilean, Baconian, Aristotelian or Kantian sense) exploration of conjectures and more informal beliefs and a careful analysis of the data acquired in this pursuit."
As expressed by Paul Halmos: "Mathematics is not a deductive science—that's a cliché. When you try to prove a theorem, you don't just list the hypotheses, and then start to reason. What you do is trial and error, experimentation, guesswork. You want to find out what the facts are, and what you do is in that respect similar to what a laboratory technician does."
History
Mathematicians have always practiced experimental mathematics. Existing records of early mathematics, such as Babylonian mathematics, typically consist of lists of numerical examples illustrating algebraic identities. However, modern mathematics, beginning in the 17th century, developed a tradition of publishing results in a final, formal and abstract presentation. The numerical examples that may have led a mathematician to originally formulate a general theorem were not published, and were generally forgotten.
Experimental mathematics as a separate area of study re-emerged in the twentieth century, when the invention of the electronic computer vastly increased the range of feasible calculations, with a speed and precision far greater than anything available to previous generations of mathematicians. A significant milestone and achievement of experimental mathematics was the discovery in 1995 of the Bailey–Borwein–Plouffe formula for the binary digits of π. This formula was discovered not by formal reasoning, but instead
by numerical searches on a computer; only afterwards was a rigorous proof found.
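The BBP series itself is easy to evaluate directly. The sketch below sums its first few terms in ordinary floating point; this illustrates only the series, not the hexadecimal digit-extraction property that made the formula famous:

```python
import math

def bbp_pi(terms):
    """Sum the first `terms` terms of the Bailey-Borwein-Plouffe series:
    pi = sum_k 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))."""
    s = 0.0
    for k in range(terms):
        s += (1 / 16 ** k) * (
            4 / (8 * k + 1) - 2 / (8 * k + 4)
            - 1 / (8 * k + 5) - 1 / (8 * k + 6)
        )
    return s

# The series converges extremely fast: ten terms already agree with
# math.pi to double precision.
assert abs(bbp_pi(10) - math.pi) < 1e-12
```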
Objectives and uses
The objectives of experimental mathematics are "to generate understanding and insight; to generate and confirm or confront conjectures; and generally to make mathematics more tangible, lively and fun for both the professional researcher and the novice".
The uses of experimental mathematics have been defined as follows:
Gaining insight and intuition.
Discovering new patterns and relationships.
Using graphical displays to suggest underlying mathematical principles.
Testing and especially falsifying conjectures.
Exploring a possible result to see if it is worth formal proof.
Suggesting approaches for formal proof.
Replacing lengthy hand derivations with computer-based derivations.
Confirming analytically derived results.
Tools and techniques
Experimental mathematics makes use of numerical methods to calculate approximate values for integrals and infinite series. Arbitrary precision arithmetic is often used to establish these values t
|
https://en.wikipedia.org/wiki/Mathematical%20problem
|
A mathematical problem is a problem that can be represented, analyzed, and possibly solved, with the methods of mathematics. This can be a real-world problem, such as computing the orbits of the planets in the solar system, or a problem of a more abstract nature, such as Hilbert's problems. It can also be a problem referring to the nature of mathematics itself, such as Russell's Paradox.
Real-world problems
Informal "real-world" mathematical problems are questions related to a concrete setting, such as "Adam has five apples and gives John three. How many has he left?". Such questions are usually more difficult to solve than regular mathematical exercises like "5 − 3", even if one knows the mathematics required to solve the problem. Known as word problems, they are used in mathematics education to teach students to connect real-world situations to the abstract language of mathematics.
In general, to use mathematics for solving a real-world problem, the first step is to construct a mathematical model of the problem. This involves abstraction from the details of the problem, and the modeller has to be careful not to lose essential aspects in translating the original problem into a mathematical one. After the problem has been solved in the world of mathematics, the solution must be translated back into the context of the original problem.
Abstract problems
Abstract mathematical problems arise in all fields of mathematics. While mathematicians usually study them for their own sake, by doing so, results may be obtained that find application outside the realm of mathematics. Theoretical physics has historically been a rich source of inspiration.
Some abstract problems have been rigorously proved to be unsolvable, such as squaring the circle and trisecting the angle using only the compass and straightedge constructions of classical geometry, and solving the general quintic equation algebraically. Also provably unsolvable are so-called undecidable problems, such as the halting problem for Turing machines.
Some well-known difficult abstract problems that have been solved relatively recently are the four-colour theorem, Fermat's Last Theorem, and the Poincaré conjecture.
Computers do not need to have a sense of the motivations of mathematicians in order to do what they do. Formal definitions and computer-checkable deductions are absolutely central to mathematical science.
Degradation of problems to exercises
Mathematics educators using problem solving for evaluation have an issue phrased by Alan H. Schoenfeld:
How can one compare test scores from year to year, when very different problems are used? (If similar problems are used year after year, teachers and students will learn what they are, students will practice them: problems become exercises, and the test no longer assesses problem solving).
The same issue was faced by Sylvestre Lacroix almost two centuries earlier:
... it is necessary to vary the questions that students might communicate with e
|
https://en.wikipedia.org/wiki/Ordered%20exponential
|
The ordered exponential, also called the path-ordered exponential, is a mathematical operation defined in non-commutative algebras, equivalent to the exponential of the integral in the commutative algebras. In practice the ordered exponential is used in matrix and operator algebras.
Definition
Let A be an algebra over a real or complex field K, and let a(t) be a parameterized element of A, a : ℝ → A.
The parameter t in a(t) is often referred to as the time parameter in this context.
The ordered exponential of a is denoted OE[a](t),
where the zeroth-order term is equal to 1 and where T is a higher-order operation that ensures the exponential is time-ordered: any product of a(t) that occurs in the expansion of the exponential must be ordered such that the value of t is increasing from right to left of the product; a schematic example: T{a(1.2) a(9.5) a(4.1)} = a(9.5) a(4.1) a(1.2).
This restriction is necessary as products in the algebra are not necessarily commutative.
The ordered exponential operation maps a parameterized element onto another parameterized element, or symbolically,
There are various ways to define this integral more rigorously.
Product of exponentials
The ordered exponential can be defined as the left product integral of the infinitesimal exponentials, or equivalently, as an ordered product of exponentials in the limit as the number of terms grows to infinity:
where the time moments t_0, ..., t_N are defined as t_i = i·Δt for Δt = t/N, i = 0, ..., N, and t_N = t.
The ordered exponential is in fact a geometric integral.
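The product-of-exponentials limit can be sketched numerically by approximating each short-time exponential to first order, exp(a(t_i)Δt) ≈ I + a(t_i)Δt, and multiplying later times on the left. The sanity check below uses 2×2 matrices that commute at different times, so the ordered exponential reduces to the ordinary exponential of the integral and can be checked against it (function names are illustrative):

```python
def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def ordered_exp(a, t, n=2000):
    """Approximate OE[a](t) as an ordered product of short-time
    exponentials, exp(a(t_i) dt) ~ I + a(t_i) dt, with later times
    multiplied on the left (the time-ordering)."""
    dt = t / n
    result = [[1.0, 0.0], [0.0, 1.0]]
    for i in range(n):
        A = a(i * dt)
        step = [[1.0 + A[0][0] * dt, A[0][1] * dt],
                [A[1][0] * dt, 1.0 + A[1][1] * dt]]
        result = mat_mul(step, result)   # later times act from the left
    return result

# Sanity check with a(t) = [[0, t], [0, 0]]: these matrices commute at
# different times, so OE[a](1) = exp(integral of a) = [[1, 1/2], [0, 1]].
M = ordered_exp(lambda t: [[0.0, t], [0.0, 0.0]], 1.0)
assert abs(M[0][1] - 0.5) < 1e-2 and abs(M[0][0] - 1.0) < 1e-9
```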
Solution to a differential equation
The ordered exponential is the unique solution of the initial value problem:
d/dt OE[a](t) = a(t) OE[a](t), with OE[a](0) = 1.
Solution to an integral equation
The ordered exponential is the solution to the integral equation:
OE[a](t) = 1 + ∫₀ᵗ a(t′) OE[a](t′) dt′.
This equation is equivalent to the previous initial value problem.
Infinite series expansion
The ordered exponential can be defined as an infinite sum,
This can be derived by recursively substituting the integral equation into itself.
Example
Given a manifold where for a with group transformation it holds at a point :
Here, denotes exterior differentiation and is the connection operator (1-form field) acting on . When integrating the above equation, it holds (now, is the connection operator expressed in a coordinate basis)
with the path-ordering operator that orders factors in order of the path . For the special case that is an antisymmetric operator and is an infinitesimal rectangle with edge lengths and corners at points , the above expression simplifies as follows:
Hence, the group transformation identity holds. If is a smooth connection, expanding the above quantity to second order in infinitesimal quantities, one obtains for the ordered exponential the identity with a correction term that is proportional to the curvature tensor.
See also
Path-ordering (essentially the same concept)
Magnus expansion
Product integral
List of derivatives and integrals in alternative calculi
Indefinite product
Fractal derivative
References
External links
Non-Newtonian calculus website
Abstract algebra
Ordinary differential equations
Non-Newtonian
|
https://en.wikipedia.org/wiki/Tensor%20field
|
In mathematics and physics, a tensor field assigns a tensor to each point of a mathematical space (typically a Euclidean space or manifold). Tensor fields are used in differential geometry, algebraic geometry, general relativity, in the analysis of stress and strain in materials, and in numerous applications in the physical sciences. As a tensor is a generalization of a scalar (a pure number representing a value, for example speed) and a vector (a pure number plus a direction, like velocity), a tensor field is a generalization of a scalar field or vector field that assigns, respectively, a scalar or vector to each point of space. If a tensor T is defined on a vector fields set X(M) over a module M, we call T a tensor field on M.
Many mathematical structures called "tensors" are also tensor fields. For example, the Riemann curvature tensor is a tensor field as it associates a tensor to each point of a Riemannian manifold, which is a topological space.
Definition
Let M be a manifold, for instance the Euclidean plane R².
Equivalently, it is a collection of elements T_x ∈ V_x^⊗p ⊗ (V_x^*)^⊗q for all points x ∈ M, arranged into a smooth map T : M → V^⊗p ⊗ (V^*)^⊗q. Elements T_x are called tensors.
Often we take V = TM to be the tangent bundle of M.
Geometric introduction
Intuitively, a vector field is best visualized as an "arrow" attached to each point of a region, with variable length and direction. One example of a vector field on a curved space is a weather map showing horizontal wind velocity at each point of the Earth's surface.
Now consider more complicated fields. For example, if the manifold is Riemannian, then it has a metric field g, such that given any two vectors u, v at a point x, their inner product is g_x(u, v). The field g could be given in matrix form, but it depends on a choice of coordinates. It could instead be given as an ellipsoid of radius 1 at each point, which is coordinate-free. Applied to the Earth's surface, this is Tissot's indicatrix.
In general, we want to specify tensor fields in a coordinate-independent way: It should exist independently of latitude and longitude, or whatever particular "cartographic projection" we are using to introduce numerical coordinates.
Via coordinate transitions
The concept of a tensor relies on a concept of a reference frame (or coordinate system), which may be fixed (relative to some background reference frame), but in general may be allowed to vary within some class of transformations of these coordinate systems.
For example, coordinates belonging to the n-dimensional real coordinate space may be subjected to arbitrary affine transformations:
(with n-dimensional indices, summation implied). A covariant vector, or covector, is a system of functions that transforms under this affine transformation by the rule
The list of Cartesian coordinate basis vectors transforms as a covector, since under the affine transformation . A contravariant vector is a system of functions of the coordinate
|
https://en.wikipedia.org/wiki/Thomas%20Heath%20%28classicist%29
|
Sir Thomas Little Heath (; 5 October 1861 – 16 March 1940) was a British civil servant, mathematician, classical scholar, historian of ancient Greek mathematics, translator, and mountaineer. He was educated at Clifton College. Heath translated works of Euclid of Alexandria, Apollonius of Perga, Aristarchus of Samos, and Archimedes of Syracuse into English.
Life
Heath was born in Barnetby-le-Wold, Lincolnshire, England, being the third son of a farmer, Samuel Heath, and his wife Mary Little. He had two brothers and three sisters. He was educated at Caistor Grammar School and Clifton College before entering Trinity College, Cambridge, where he was awarded an ScD in 1896 and became an Honorary Fellow in 1920. He got first class honours in both the classical tripos and mathematical tripos and was the twelfth wrangler in 1882. In 1884 he took the Civil Service examination and became an Assistant Secretary to the Treasury, finally becoming Joint Permanent Secretary to the Treasury and auditor of the Civil List in 1913. He held the position till 1919 when he was appointed as the comptroller of the National Debt Office, from which he retired at the end of 1926 because of age limitations. He was honoured for his work in the Civil Service by being appointed Companion of the Order of the Bath in 1903, Knight Commander of the Order of the Bath in 1909, and Knight Commander of the Royal Victorian Order in 1916. He was elected a Fellow of the Royal Society in 1912. He was a president of the Mathematical Association in 1922-23, and a fellow of the British Academy.
He had married professional musician Ada Mary Thomas in 1914; they had a son, Geoffrey Thomas Heath, and a daughter, Veronica Mary Heath. Heath's son Geoffrey went to Trinity College, Cambridge, before becoming a teacher at Ampleforth College, and had 6 children.
Heath died in Ashtead, Surrey, on 16 March 1940.
Work
Heath was distinguished for his work in Greek mathematics and was author of several books on Greek mathematicians. It is primarily through Heath's translations that modern English-speaking readers are aware of what Archimedes did. His translation of the celebrated Archimedes Palimpsest, however, was based on a transcription that had lacunae, which scholars such as Reviel Netz have been able to fill in to a certain extent, by exploiting scientific methods of imagery not available in Heath's time.
When Heath's Works of Archimedes was published in 1897, the Archimedes Palimpsest had not been extensively explored. Its significance was not recognised until 1906, when it was examined by Danish professor Johan Ludvig Heiberg. The palimpsest contained an extended version of Stomachion, and a treatise entitled The Method of Mechanical Theorems that had previously been thought lost. These works have been a focus of research by later scholars.
Translations and other works
Note: Only first editions are listed; many of these titles have been reprinted several times.
Diophantus of Alexandria:
|
https://en.wikipedia.org/wiki/Knuth%27s%20up-arrow%20notation
|
In mathematics, Knuth's up-arrow notation is a method of notation for very large integers, introduced by Donald Knuth in 1976.
In his 1947 paper, R. L. Goodstein introduced the specific sequence of operations that are now called hyperoperations. Goodstein also suggested the Greek names tetration, pentation, etc., for the extended operations beyond exponentiation. The sequence starts with a unary operation (the successor function with n = 0), and continues with the binary operations of addition (n = 1), multiplication (n = 2), exponentiation (n = 3), tetration (n = 4), pentation (n = 5), etc.
Various notations have been used to represent hyperoperations. One such notation is H_n(a, b).
Knuth's up-arrow notation is another.
For example:
the single arrow represents exponentiation (iterated multiplication)
the double arrow represents tetration (iterated exponentiation)
the triple arrow represents pentation (iterated tetration)
The general definition of the up-arrow notation is as follows (for a ≥ 0, n ≥ 1, b ≥ 0):
a ↑ⁿ b = a^b if n = 1; 1 if b = 0; and a ↑ⁿ⁻¹ (a ↑ⁿ (b − 1)) otherwise.
Here, ↑ⁿ stands for n arrows, so for example 2 ↑↑↑↑ 3 = 2 ↑⁴ 3.
The square brackets are another notation for hyperoperations.
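The recursive definition translates directly into code. The sketch below is usable only for very small arguments, since the values explode extremely fast:

```python
def up(a, b, n):
    """Knuth's up-arrow a ^(n arrows) b, defined recursively."""
    if n == 1:
        return a ** b          # a single arrow is exponentiation
    if b == 0:
        return 1               # base case of the recursion
    # n arrows expand into a right-associative tower of (n-1)-arrow ops
    return up(a, up(a, b - 1, n), n - 1)

assert up(2, 3, 1) == 8        # 2^3
assert up(2, 3, 2) == 16       # 2 tetrated: 2^(2^2)
assert up(3, 2, 2) == 27       # 3^3
assert up(2, 2, 3) == 4        # pentation: 2^^^2 = 2^^2 = 4
assert up(2, 3, 3) == 65536    # 2^^^3 = 2^^4 = 2^(2^(2^2))
```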
Introduction
The hyperoperations naturally extend the arithmetical operations of addition and multiplication as follows.
Addition by a natural number is defined as iterated incrementation:
Multiplication by a natural number is defined as iterated addition:
For example,
Exponentiation for a natural power is defined as iterated multiplication, which Knuth denoted by a single up-arrow:
For example,
Tetration is defined as iterated exponentiation, which Knuth denoted by a “double arrow”:
For example,
Expressions are evaluated from right to left, as the operators are defined to be right-associative.
According to this definition,
etc.
This already leads to some fairly large numbers, but the hyperoperator sequence does not stop here.
Pentation, defined as iterated tetration, is represented by the “triple arrow”:
Hexation, defined as iterated pentation, is represented by the “quadruple arrow”:
and so on. The general rule is that an -arrow operator expands into a right-associative series of ()-arrow operators. Symbolically,
Examples:
Notation
In expressions such as , the notation for exponentiation is usually to write the exponent as a superscript to the base number . But many environments — such as programming languages and plain-text e-mail — do not support superscript typesetting. People have adopted the linear notation for such environments; the up-arrow suggests 'raising to the power of'. If the character set does not contain an up arrow, the caret (^) is used instead.
The superscript notation doesn't lend itself well to generalization, which explains why Knuth chose to work from the inline notation instead.
a ↑ⁿ b is a shorter alternative notation for n uparrows. Thus a ↑⁴ b = a ↑↑↑↑ b.
Writing out up-arrow notation in terms of powers
Attempting to write a ↑↑ b using the familiar superscript notation gives a power tower.
For example:
If b is a vari
|
https://en.wikipedia.org/wiki/Ergodic%20theory
|
Ergodic theory is a branch of mathematics that studies statistical properties of deterministic dynamical systems; it is the study of ergodicity. In this context, "statistical properties" refers to properties which are expressed through the behavior of time averages of various functions along trajectories of dynamical systems. The notion of deterministic dynamical systems assumes that the equations determining the dynamics do not contain any random perturbations, noise, etc. Thus, the statistics with which we are concerned are properties of the dynamics.
Ergodic theory, like probability theory, is based on general notions of measure theory. Its initial development was motivated by problems of statistical physics.
A central concern of ergodic theory is the behavior of a dynamical system when it is allowed to run for a long time. The first result in this direction is the Poincaré recurrence theorem, which claims that almost all points in any subset of the phase space eventually revisit the set. Systems for which the Poincaré recurrence theorem holds are conservative systems; thus all ergodic systems are conservative.
More precise information is provided by various ergodic theorems which assert that, under certain conditions, the time average of a function along the trajectories exists almost everywhere and is related to the space average. Two of the most important theorems are those of Birkhoff (1931) and von Neumann which assert the existence of a time average along each trajectory. For the special class of ergodic systems, this time average is the same for almost all initial points: statistically speaking, the system that evolves for a long time "forgets" its initial state. Stronger properties, such as mixing and equidistribution, have also been extensively studied.
The problem of metric classification of systems is another important part of the abstract ergodic theory. An outstanding role in ergodic theory and its applications to stochastic processes is played by the various notions of entropy for dynamical systems.
The concepts of ergodicity and the ergodic hypothesis are central to applications of ergodic theory. The underlying idea is that for certain systems the time average of their properties is equal to the average over the entire space. Applications of ergodic theory to other parts of mathematics usually involve establishing ergodicity properties for systems of special kind. In geometry, methods of ergodic theory have been used to study the geodesic flow on Riemannian manifolds, starting with the results of Eberhard Hopf for Riemann surfaces of negative curvature. Markov chains form a common context for applications in probability theory. Ergodic theory has fruitful connections with harmonic analysis, Lie theory (representation theory, lattices in algebraic groups), and number theory (the theory of diophantine approximations, L-functions).
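The ergodic theorem can be illustrated with one of the simplest ergodic systems, an irrational rotation of the circle: the time average of a function along a single orbit approaches its space average, regardless of the starting point. A minimal sketch (the function name is illustrative):

```python
import math

def birkhoff_average(f, x0, alpha, n):
    """Time average of f along the orbit of the rotation x -> x + alpha (mod 1)."""
    total, x = 0.0, x0
    for _ in range(n):
        total += f(x)
        x = (x + alpha) % 1.0
    return total / n

# For an irrational rotation the system is ergodic, so the time average
# converges to the space average, here the integral of x over [0, 1) = 1/2.
alpha = math.sqrt(2) - 1            # irrational rotation number
time_avg = birkhoff_average(lambda x: x, x0=0.123, alpha=alpha, n=200_000)
assert abs(time_avg - 0.5) < 1e-2
```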
Ergodic transformations
Ergodic theory is often concerned with ergodic transformations. The
|
https://en.wikipedia.org/wiki/City%20Technology%20College
|
In England, a City Technology College (CTC) is an urban all-ability specialist school for students aged 11 to 18 specialising in science, technology and mathematics. They charge no fees and are independent of local authority control, being overseen directly by the Department for Education. One fifth of the capital costs are met by private business sponsors, who also own or lease the buildings. The rest of the capital costs, and all running costs, are met by the Department.
Description
CTCs operate as limited companies with articles of association and a board of governors. A CTC is governed through an operating agreement made between the Secretary of State for Education and whoever is responsible for establishing and running the school. This agreement includes the regulations for the school's educational provision (e.g. its curriculum and admissions policy). These are negotiated between the two parties and must be enforced by the school should it wish to receive government funding from the Secretary of State. This funding covers most capital costs and all running costs, although one fifth of capital costs are instead met by private business sponsors, who also own or lease the buildings. More government funding is granted to be spent towards the school's pupils. This funding fluctuates on a per capita basis and depends on the size of the total pupil population.
CTCs teach the National Curriculum, but specialise in mainly technology-based subjects such as technology, science and mathematics.
Like maintained schools, they are regularly inspected by the Office for Standards in Education. CTCs also forge close links with businesses and industry (mainly through their sponsors), and often their governors are directors of local or national businesses that are supporting or have supported the colleges. The programme has been successful in the long term with all the CTCs being considered strong establishments with consistently high academic results.
Development
Plans to establish schools or colleges for technology in major urban areas were first reported in an article from The Sunday Times in December 1985. There would be between sixteen and twenty of these institutions serving 1000 pupils each. They would charge no fees and would be publicly funded through an educational trust, but would select their pupils on a "special" basis. Unlike other state-funded schools at this time, these institutions would not be run by their local education authority (LEA or simply local authority). These plans were the brainchild of Schools Minister Bob Dunn, who had been pushing the Secretary of State for Education and Science Keith Joseph to introduce British magnet schools, with the ultimate aim of encouraging specialisation and increased parental choice in the education system. These schools, if introduced, would be known as technology-plus schools, specialist schools for technology with extra funding from private sector sponsors.
In January 1986, a Centre for Policy
|
https://en.wikipedia.org/wiki/Theorema%20Egregium
|
Gauss's Theorema Egregium (Latin for "Remarkable Theorem") is a major result of differential geometry, proved by Carl Friedrich Gauss in 1827, that concerns the curvature of surfaces. The theorem says that Gaussian curvature can be determined entirely by measuring angles, distances and their rates on a surface, without reference to the particular manner in which the surface is embedded in the ambient 3-dimensional Euclidean space. In other words, the Gaussian curvature of a surface does not change if one bends the surface without stretching it. Thus the Gaussian curvature is an intrinsic invariant of a surface.
Gauss presented the theorem in this manner (translated from Latin):
Thus the formula of the preceding article leads itself to the remarkable Theorem. If a curved surface is developed upon any other surface whatever, the measure of curvature in each point remains unchanged.
The theorem is "remarkable" because the starting definition of Gaussian curvature makes direct use of position of the surface in space. So it is quite surprising that the result does not depend on its embedding in spite of all bending and twisting deformations undergone.
In modern mathematical terminology, the theorem may be stated as follows:
Elementary applications
A sphere of radius R has constant Gaussian curvature equal to 1/R². At the same time, a plane has zero Gaussian curvature. As a corollary of the Theorema Egregium, a piece of paper cannot be bent onto a sphere without crumpling. Conversely, the surface of a sphere cannot be unfolded onto a flat plane without distorting the distances. If one were to step on an empty egg shell, its edges would have to split in expansion before the shell could be flattened. Mathematically, a sphere and a plane are not isometric, even locally. This fact is significant for cartography: it implies that no planar (flat) map of Earth can be perfect, even for a portion of the Earth's surface. Thus every cartographic projection necessarily distorts at least some distances.
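That curvature is measurable intrinsically can be illustrated with the Gauss–Bonnet relation for geodesic triangles: on a surface of constant curvature K, (sum of angles) − π = K · area. For the octant triangle of a sphere of radius R (three right angles, one eighth of the sphere's area) this recovers K = 1/R²:

```python
import math

# Intrinsic measurement of curvature via Gauss-Bonnet:
#   angle excess of a geodesic triangle = K * area.
# Octant triangle on a sphere of radius R: three right angles,
# area = (1/8) of the total surface area 4*pi*R^2.
R = 2.0
angle_excess = 3 * (math.pi / 2) - math.pi        # = pi/2
area = 4 * math.pi * R ** 2 / 8                   # = pi*R^2/2
K = angle_excess / area
assert abs(K - 1 / R ** 2) < 1e-12                # K = 1/R^2, as expected
```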
The catenoid and the helicoid are two very different-looking surfaces. Nevertheless, each of them can be continuously bent into the other: they are locally isometric. It follows from Theorema Egregium that under this bending the Gaussian curvature at any two corresponding points of the catenoid and helicoid is always the same. Thus isometry is simply bending and twisting of a surface without internal crumpling or tearing, in other words without extra tension, compression, or shear.
An application of the theorem is seen when a flat object is somewhat folded or bent along a line, creating rigidity in the perpendicular direction. This is of practical use in construction, as well as in a common pizza-eating strategy: A flat slice of pizza can be seen as a surface with constant Gaussian curvature 0. Gently bending a slice must then roughly maintain this curvature (assuming the bend is roughly a local isometry). If one bends a slice horizontally along a radius, non-zero pr
|
https://en.wikipedia.org/wiki/Nicolas%20Chuquet
|
Nicolas Chuquet (; born ; died ) was a French mathematician. He invented his own notation for algebraic concepts and exponentiation. He may have been the first mathematician to recognize zero and negative numbers as exponents.
In 1475, Jehan Adam recorded the words "bymillion" and "trimillion" (for 10¹² and 10¹⁸) and it is believed that these words or similar ones were in general use at that time.
In 1484, Chuquet wrote an article Triparty en la science des nombres, which was unpublished in his lifetime. Most of it, however, was copied without attribution by Estienne de La Roche in his 1520 textbook, l'Arismetique. In the 1870s, scholar Aristide Marre discovered Chuquet's manuscript and published it in 1880. The manuscript contained notes in de la Roche's handwriting. His article shows a huge number divided into groups of six digits, and in a short passage he states that the groups can be called:
"million, the second mark byllion, the third mark tryllion, the fourth quadrillion, the fifth quyillion, the sixth sixlion, the seventh septyllion, the eighth ottyllion, the ninth nonyllion and so on with others as far as you wish to go."
In a second passage, he wrote:
... Item lon doit savoir que ung million vault mille milliers de unitez, et ung byllion vault mille milliers de millions, et [ung] tryllion vault mille milliers de byllions, et ung quadrillion vault mille milliers de tryllions et ainsi des aultres : Et de ce en est pose ung exemple nombre divise et punctoye ainsi que devant est dit, tout lequel nombre monte 745324 tryllions 804300 byllions 700023 millions 654321. Exemple : 745324'8043000'700023'654321 ...
Item: one should know that a million is worth a thousand thousand units, and a byllion is worth a thousand thousand millions, and a tryllion is worth a thousand thousand byllions, and a quadrillion is worth a thousand thousand tryllions, and so on for the others. And an example of this follows, a number divided up and punctuated as previously described, the whole number being seven hundred forty-five thousand three hundred and twenty-four tryllions, 804300 byllions 700023 millions 654321 ...
In the extract from Chuquet's manuscript, the transcription and translation provided here both contain an original mistake: one too many zeros in the 804300 portion of the fully written out example: 745324'8043000'700023'654321 ...
Chuquet was, however, the original author of the earliest work using a systematic, extended series of names ending in -illion or -yllion. The system in which the names million, billion, trillion, etc. refer to powers of one million is sometimes referred to as the Chuquet system.
In 1514, Budaeus introduced the term Milliard or Milliart for 10¹², which was widely publicised around 1550 by the influential Jacques Peletier du Mans. Milliard was reduced to 10⁹ around the end of the 17th century, leaving the modern long scale system. This system is sometimes referred to as the Chuquet-Peletier system.
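As a quick arithmetic illustration of the Chuquet (long-scale) convention described above, here is a minimal Python sketch mapping each -illion name to its power of ten, assuming each successive name denotes the next power of one million; the function name and list are ours, not Chuquet's:

```python
# Chuquet long-scale names from the quoted passage, in order.
# Under the long scale, the n-th name denotes (10**6)**n.
CHUQUET_NAMES = ["million", "byllion", "tryllion", "quadrillion",
                 "quyllion", "sixlion", "septyllion", "ottyllion", "nonyllion"]

def chuquet_power(name):
    """Return the power of ten that `name` denotes in the long scale."""
    return 6 * (CHUQUET_NAMES.index(name) + 1)

print(chuquet_power("million"))   # 6: a million is 10^6
print(chuquet_power("byllion"))   # 12: a thousand thousand millions
print(chuquet_power("tryllion"))  # 18
```

So a byllion is 10¹², matching the "thousand thousand millions" of Chuquet's second passage.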
Much later, in Fr
|
https://en.wikipedia.org/wiki/East%20Africa
|
East Africa, Eastern Africa, or East of Africa, is the eastern subregion of the African continent. In the United Nations Statistics Division scheme of geographic regions, 10-11-(16*) territories make up Eastern Africa:
Scientific consensus holds that East Africa is the region where anatomically modern humans first evolved, circa 200,000 years ago, before migrating northwards out of Africa.
Due to the historical Omani Empire and colonial territories of the British East Africa Protectorate and German East Africa, the term East Africa is often (especially in the English language) used to specifically refer to the area now comprising the three countries of Kenya, Tanzania, and Uganda. However, this has never been the convention in many other languages, where the term generally had a wider, strictly geographic context and therefore typically included Djibouti, Eritrea, Ethiopia, and Somalia.
Tanzania, Kenya, Uganda, Rwanda, Burundi, Democratic Republic of Congo and South Sudan are members of the East African Community. The first five are also included in the African Great Lakes region. Burundi and Rwanda are at times also considered to be part of Central Africa.
Djibouti, Eritrea, Ethiopia and Somalia are collectively known as the Horn of Africa. The area is the easternmost projection of the African continent.
Socotra – a governorate of Yemen located in the Indian Ocean.
Comoros, Mauritius, and Seychelles – small island nations in the Indian Ocean.
Réunion, Mayotte (geographically a part of the Comoro Islands) and the Scattered Islands in the Indian Ocean – French overseas territories also in the Indian Ocean.
Mozambique and Madagascar – often considered part of Southern Africa, on the eastern side of the sub-continent. Madagascar has close cultural ties to both Southeast Asia and East Africa, and the islands of the Indian Ocean.
Malawi, Zambia, and Zimbabwe – often also included in Southern Africa, and formerly constituted the Central African Federation (also known historically as the Federation of Rhodesia and Nyasaland).
South Sudan and Sudan – collectively part of the Nile Valley. They are situated in the northeastern portion of the continent. Also members of the Common Market for Eastern and Southern Africa (COMESA) free trade area.
Geography and climate
Some parts of East Africa have been renowned for their concentrations of wild animals, such as the "big five": the elephant, buffalo, lion, black rhinoceros, and leopard, though populations have been declining under increased stress in recent times, particularly those of the rhino and elephant.
The geography of East Africa is often stunning and scenic. Shaped by global plate tectonic forces that have created the East African Rift, East Africa is the site of Mount Kilimanjaro and Mount Kenya, the two tallest peaks in Africa. It also includes the world's second largest freshwater lake, Lake Victoria, and the world's second deepest lake, Lake Tanganyika.
The climate of East Africa is ra
|
https://en.wikipedia.org/wiki/Initialized%20fractional%20calculus
|
In mathematical analysis, initialization of the differintegrals is a topic in fractional calculus.
Composition rule of differintegral
A certain counterintuitive property of the differintegral operator should be pointed out, namely the composition law. Although

 D^q D^(−q) f = f,

wherein D^q acts as a left inverse of D^(−q), the converse is not necessarily true:

 D^(−q) D^q f ≠ f.
Example
It is instructive to consider elementary integer-order calculus to see what's happening. First integrate, then differentiate, using the example function f(x) = 3x² + 1:

 d/dx [ ∫ (3x² + 1) dx ] = d/dx [ x³ + x + c ] = 3x² + 1,

and on exchanging the order of composition:

 ∫ [ d/dx (3x² + 1) ] dx = ∫ 6x dx = 3x² + c,

in which the constant of integration is c. Even if it were not obvious, the initialization terms f(0) = c, f′(0) = d, etc. could be used. If we neglected those initialization terms, the last equation would show that the composition of integration then differentiation (and vice versa) does not hold.
Description of initialization
This is the problem with the differintegral. If the differintegral is initialized properly, then the hoped-for composition law holds. The problem is that in differentiation we lose information, as we lost the constant c in the example above.
In fractional calculus, however, since the operator has been fractionalized and is thus continuous, an entire complementary function is needed, not just a constant or set of constants. We call this complementary function Ψ.
Working with a properly initialized differintegral is the subject of initialized fractional calculus.
See also
Initial conditions
Dynamical systems
|
https://en.wikipedia.org/wiki/Minor%20%28linear%20algebra%29
|
In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which in turn are useful for computing both the determinant and inverse of square matrices. The requirement that the square matrix be smaller than the original matrix is often omitted in the definition.
Definition and illustration
First minors
If A is a square matrix, then the minor of the entry in the ith row and jth column (also called the (i, j) minor, or a first minor) is the determinant of the submatrix formed by deleting the ith row and jth column. This number is often denoted Mi,j. The (i, j) cofactor is obtained by multiplying the minor by (−1)^(i+j).
To illustrate these definitions, consider the following 3 by 3 matrix,
To compute the minor M2,3 and the cofactor C2,3, we find the determinant of the above matrix with row 2 and column 3 removed.
So the cofactor of the (2,3) entry is
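To make the first-minor and cofactor definitions concrete, here is a minimal Python sketch. Since the article's example matrix did not survive extraction, the 3×3 matrix below is a stand-in chosen only for illustration; indices in the code are 0-based, so the (2, 3) entry is `A[1][2]`:

```python
def minor(A, i, j):
    """Determinant of A with row i and column j removed (0-indexed)."""
    sub = [row[:j] + row[j+1:] for r, row in enumerate(A) if r != i]
    # 2x2 determinant, which suffices for a 3x3 input.
    return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]

def cofactor(A, i, j):
    """Cofactor: the minor times the sign (-1)^(i+j)."""
    return (-1) ** (i + j) * minor(A, i, j)

A = [[1, 4, 7],
     [3, 0, 5],
     [-1, 9, 11]]

print(minor(A, 1, 2))     # M_{2,3} = 1*9 - 4*(-1) = 13
print(cofactor(A, 1, 2))  # C_{2,3} = (-1)^(2+3) * 13 = -13
```

Note that the 0-based sign (−1)^(i+j) agrees with the 1-based (−1)^((i+1)+(j+1)), since the exponents differ by 2.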
General definition
Let A be an m × n matrix and k an integer with 0 < k ≤ m, and k ≤ n. A k × k minor of A, also called a minor determinant of order k of A or, if m = n, the (n−k)th minor determinant of A (the word "determinant" is often omitted, and the word "degree" is sometimes used instead of "order"), is the determinant of a k × k matrix obtained from A by deleting m−k rows and n−k columns. Sometimes the term is used to refer to the k × k matrix obtained from A as above (by deleting m−k rows and n−k columns), but this matrix should be referred to as a (square) submatrix of A, leaving the term "minor" to refer to the determinant of this matrix. For a matrix A as above, there are a total of C(m, k) · C(n, k) minors of size k × k, where C(m, k) denotes the binomial coefficient m-choose-k. The minor of order zero is often defined to be 1. For a square matrix, the zeroth minor is just the determinant of the matrix.
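The count of k × k minors follows directly from choosing which k rows and which k columns to keep; a one-line Python check (the function name is ours):

```python
import math

def num_minors(m, n, k):
    """Number of k x k minors of an m x n matrix: choose k rows and k columns."""
    return math.comb(m, k) * math.comb(n, k)

print(num_minors(3, 3, 2))  # 9 distinct 2x2 minors of a 3x3 matrix
print(num_minors(3, 3, 3))  # 1: the determinant itself
```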
Let I and J be ordered sequences (in natural order, as is always assumed when talking about minors unless otherwise stated) of k row indexes and k column indexes, respectively. The minor corresponding to these choices of indexes is denoted in several ways, depending on the source. Also, two conventions are in use in the literature: by the minor associated to ordered sequences of indexes I and J, some authors mean the determinant of the matrix that is formed as above, by taking the elements of the original matrix from the rows whose indexes are in I and columns whose indexes are in J, whereas some other authors mean by a minor associated to I and J the determinant of the matrix formed from the original matrix by deleting the rows in I and columns in J. Which notation is used should always be checked from the source in question. In this article, we use the inclusive definition of choosing the elements from rows of I and columns of J. The exceptional case is the case of the first min
|
https://en.wikipedia.org/wiki/GAP%20%28computer%20algebra%20system%29
|
GAP (Groups, Algorithms and Programming) is a computer algebra system for computational discrete algebra with particular emphasis on computational group theory.
History
GAP was developed at Lehrstuhl D für Mathematik (LDFM), Rheinisch-Westfälische Technische Hochschule Aachen, Germany from 1986 to 1997. After the retirement of Joachim Neubüser from the chair of LDFM, the development and maintenance of GAP was coordinated by the School of Mathematical and Computational Sciences at the University of St Andrews, Scotland. In the summer of 2005 coordination was transferred to an equal partnership of four 'GAP Centres', located at the University of St Andrews, RWTH Aachen, Technische Universität Braunschweig, and Colorado State University at Fort Collins; in April 2020, a fifth GAP Centre located at the TU Kaiserslautern was added.
Distribution
GAP and its sources, including packages (sets of user contributed programs), data library (including a list of small groups) and the manual, are distributed freely, subject to "copyleft" conditions. GAP runs on any Unix system, under Windows, and on Macintosh systems. The standard distribution requires about 300 MB (about 400 MB if all the packages are loaded).
The user contributed packages are an important feature of the system, adding a great deal of functionality. GAP offers package authors the opportunity to submit these packages for a process of peer review, hopefully improving the quality of the final packages and providing recognition akin to an academic publication for their authors. There are 151 packages distributed with GAP, of which approximately 71 have been through this process.
An interface is available for using the SINGULAR computer algebra system from within GAP. GAP is also included in the mathematical software system SageMath.
Sample session
See also
Comparison of computer algebra systems
|
https://en.wikipedia.org/wiki/William%20Rankine
|
William John Macquorn Rankine (; 5 July 1820 – 24 December 1872) was a Scottish mechanical engineer who also contributed to civil engineering, physics and mathematics. He was a founding contributor, with Rudolf Clausius and William Thomson (Lord Kelvin), to the science of thermodynamics, particularly focusing on its First Law. He developed the Rankine scale, a Fahrenheit-based equivalent to the Celsius-based Kelvin scale of temperature.
Rankine developed a complete theory of the steam engine and indeed of all heat engines. His manuals of engineering science and practice were used for many decades after their publication in the 1850s and 1860s. He published several hundred papers and notes on science and engineering topics, from 1840 onwards, and his interests were extremely varied, including, in his youth, botany, music theory and number theory, and, in his mature years, most major branches of science, mathematics and engineering.
He was an enthusiastic amateur singer, pianist and cellist who composed his own humorous songs.
Life
Rankine was born in Edinburgh to Lt David Rankin (sic), a civil engineer from a military background, who later worked on the Edinburgh and Dalkeith Railway (locally known as the Innocent Railway). His mother was Barbara Grahame, of a prominent legal and banking family.
His father moved around Scotland on various projects and the family moved with him. William was initially educated at home but he later attended Ayr Academy (1828–29) and then the High School of Glasgow (1830). Around 1830 the family moved to Edinburgh when his father got a post as manager of the Edinburgh and Dalkeith Railway. The family then lived at 2 Arniston Place.
In 1834 he was sent to the Scottish Naval and Military Academy on Lothian Road in Edinburgh with the mathematician George Lee. By that year William was already highly proficient in mathematics and received, as a gift from his uncle, Isaac Newton's Principia (1687) in the original Latin.
In 1836, Rankine began to study a spectrum of scientific topics at the University of Edinburgh, including natural history under Robert Jameson and natural philosophy under James David Forbes. Under Forbes he was awarded prizes for essays on methods of physical inquiry and on the undulatory (or wave) theory of light. During vacations, he assisted his father who, from 1830, was manager and, later, effective treasurer and engineer of the Edinburgh and Dalkeith Railway which brought coal into the growing city. He left the University of Edinburgh in 1838 without a degree (which was not then unusual) and, perhaps because of straitened family finances, became an apprentice to Sir John Benjamin Macneill, who was at the time surveyor to the Irish Railway Commission. During his pupilage he developed a technique, later known as Rankine's method, for laying out railway curves, fully exploiting the theodolite and making a substantial improvement in accuracy and productivity over existing methods. In fact, the techn
|
https://en.wikipedia.org/wiki/Cycle%20space
|
In graph theory, a branch of mathematics, the (binary) cycle space of an undirected graph is the set of its even-degree subgraphs.
This set of subgraphs can be described algebraically as a vector space over the two-element finite field. The dimension of this space is the circuit rank of the graph. The same space can also be described in terms from algebraic topology as the first homology group of the graph. Using homology theory, the binary cycle space may be generalized to cycle spaces over arbitrary rings.
Definitions
The cycle space of a graph can be described with increasing levels of mathematical sophistication as a set of subgraphs, as a binary vector space, or as a homology group.
Graph theory
A spanning subgraph of a given graph G may be defined from any subset S of the edges of G. The subgraph has the same set of vertices as G itself (this is the meaning of the word "spanning") but has the elements of S as its edges. Thus, a graph G with m edges has 2m spanning subgraphs, including G itself as well as the empty graph on the same set of vertices as G. The collection of all spanning subgraphs of a graph G forms the edge space of G.
A graph G, or one of its subgraphs, is said to be Eulerian if each of its vertices has an even number of incident edges (this number is called the degree of the vertex). This property is named after Leonhard Euler who proved in 1736, in his work on the Seven Bridges of Königsberg, that a connected graph has a tour that visits each edge exactly once if and only if it is Eulerian. However, for the purposes of defining cycle spaces, an Eulerian subgraph does not need to be connected; for instance, the empty graph, in which all vertices are disconnected from each other, is Eulerian in this sense. The cycle space of a graph is the collection of its Eulerian spanning subgraphs.
Algebra
If one applies any set operation such as union or intersection of sets to two spanning subgraphs of a given graph, the result will again be a subgraph. In this way, the edge space of an arbitrary graph can be interpreted as a Boolean algebra.
The cycle space, also, has an algebraic structure, but a more restrictive one. The union or intersection of two Eulerian subgraphs may fail to be Eulerian. However, the symmetric difference of two Eulerian subgraphs
(the graph consisting of the edges that belong to exactly one of the two given graphs) is again Eulerian. This follows from the fact that the symmetric difference of two sets with an even number of elements is also even. Applying this fact separately to the neighborhoods of each vertex shows that the symmetric difference operator preserves the property of being Eulerian.
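The closure property just described can be checked with a small Python sketch, representing spanning subgraphs by their edge sets; the vertex labels and graphs below are illustrative:

```python
def is_eulerian(edges):
    """True if every vertex incident to an edge has even degree."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return all(d % 2 == 0 for d in deg.values())

# Two triangles sharing the edge (1, 2) -- both Eulerian subgraphs.
g1 = {(0, 1), (1, 2), (2, 0)}
g2 = {(1, 2), (2, 3), (3, 1)}

sym_diff = g1 ^ g2  # edges belonging to exactly one of the two subgraphs
print(is_eulerian(g1), is_eulerian(g2))  # True True
print(is_eulerian(sym_diff))             # True: the 4-cycle 0-1-3-2-0
```

The shared edge (1, 2) cancels in the symmetric difference, and the remaining four edges again give every vertex even degree, exactly as the parity argument predicts.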
A family of sets closed under the symmetric difference operation can be described algebraically as a vector space over the two-element finite field. This field has two elements, 0 and 1, and its addition and multiplication operations can be described as the familiar addition and multiplication of integers, taken modulo 2.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.