https://en.wikipedia.org/wiki/Alain%20Connes
Alain Connes (born 1 April 1947 in Draguignan) is a French mathematician, known for his contributions to the study of operator algebras and noncommutative geometry. He has held professorships at the Collège de France, the Institut des Hautes Études Scientifiques (IHÉS), Ohio State University and Vanderbilt University. He was awarded the Fields Medal in 1982.
Career
Alain Connes attended high school in Marseille, and was then a student of the classes préparatoires there. Between 1966 and 1970 he studied at the École normale supérieure in Paris, and in 1973 he obtained a PhD from Pierre and Marie Curie University, under the supervision of Jacques Dixmier.
From 1970 to 1974 he was a research fellow at the French National Centre for Scientific Research (CNRS), and during 1975 he held a visiting position at Queen's University at Kingston in Canada.
In 1976 he returned to France and worked as a professor at Pierre and Marie Curie University until 1980, and at the CNRS between 1981 and 1984. Moreover, he has held the Léon Motchane Chair at IHÉS since 1979. From 1984 until his retirement in 2017 he held the chair of Analysis and Geometry at the Collège de France.
In parallel, he held a distinguished professorship at Vanderbilt University between 2003 and 2012, and at Ohio State University between 2012 and 2021.
In 2000 he was an invited professor at the Conservatoire national des arts et métiers.
Research
Connes' main research interests revolve around operator algebras. Besides noncommutative geometry, he has applied his work to various areas of mathematics and theoretical physics, including number theory, differential geometry and particle physics.
In his early work on von Neumann algebras in the 1970s, he succeeded in obtaining an almost complete classification of injective factors. He also formulated the Connes embedding problem.
Following this, he made contributions in operator K-theory and index theory, which culminated in the Baum–Connes conjecture. He also introduced cyclic cohomology in the early 1980s as a first step in the study of noncommutative differential geometry.
He was a member of Nicolas Bourbaki.
Awards and honours
Connes was awarded the Peccot-Vimont Prize in 1976, the Ampère Prize in 1980, the Fields Medal in 1982, the Clay Research Award in 2000 and the Crafoord Prize in 2001. The French National Centre for Scientific Research granted him the silver medal in 1977 and the gold medal in 2004.
He was an invited speaker at the International Congress of Mathematicians in 1974 at Vancouver and in 1986 at Berkeley, and a plenary speaker at the ICM in 1978 at Helsinki.
He was awarded honorary degrees from Queen's University at Kingston in 1979, University of Rome Tor Vergata in 1997, University of Oslo in 1999, University of Southern Denmark in 2009, Université Libre de Bruxelles in 2010 and Shanghai Fudan University in 2017.
He has been a member of the French Academy of Sciences since 1982. He was elected a member of several foreign academies and societies, including the Royal Danish Academy of Sciences and Letters in 1980
https://en.wikipedia.org/wiki/Arithmetic%20mean
In mathematics and statistics, the arithmetic mean, arithmetic average, or just the mean or average (when the context is clear) is the sum of a collection of numbers divided by the count of numbers in the collection. The collection is often a set of results from an experiment, an observational study, or a survey. The term "arithmetic mean" is preferred in some mathematics and statistics contexts because it helps distinguish it from other types of means, such as the geometric mean and the harmonic mean.
In addition to mathematics and statistics, the arithmetic mean is frequently used in economics, anthropology, history, and almost every academic field to some extent. For example, per capita income is the arithmetic average income of a nation's population.
While the arithmetic mean is often used to report central tendencies, it is not a robust statistic: it is greatly influenced by outliers (values much larger or smaller than most others). For skewed distributions, such as the distribution of income for which a few people's incomes are substantially higher than most people's, the arithmetic mean may not coincide with one's notion of "middle". In that case, robust statistics, such as the median, may provide a better description of central tendency.
Definition
Given a data set $X = \{x_1, \ldots, x_n\}$, the arithmetic mean (also mean or average), denoted $\bar{x}$ (read "$x$ bar"), is the mean of the $n$ values $x_1, x_2, \ldots, x_n$.
The arithmetic mean is a data set's most commonly used and readily understood measure of central tendency. In statistics, the term average refers to any measurement of central tendency. The arithmetic mean of a set of observed data is equal to the sum of the numerical values of each observation, divided by the total number of observations. Symbolically, for a data set consisting of the values $x_1, \ldots, x_n$, the arithmetic mean is defined by the formula:
$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i = \frac{x_1 + x_2 + \cdots + x_n}{n}$$
(For an explanation of the summation operator, see summation.)
For example, if the monthly salaries of five employees are 2500, 2700, 2400, 2300 and 2600, then the arithmetic mean is:
$$\bar{x} = \frac{2500 + 2700 + 2400 + 2300 + 2600}{5} = \frac{12500}{5} = 2500$$
If the data set is a statistical population (i.e., consists of every possible observation and not just a subset of them), then the mean of that population is called the population mean and denoted by the Greek letter $\mu$. If the data set is a statistical sample (a subset of the population), it is called the sample mean (which for a data set $X$ is denoted as $\bar{X}$).
The arithmetic mean can be similarly defined for vectors in multiple dimensions, not only scalar values; this is often referred to as a centroid. More generally, because the arithmetic mean is a convex combination (meaning its coefficients sum to $1$), it can be defined on a convex space, not only a vector space.
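As a quick illustration (a minimal Python sketch with made-up numbers, not part of the article), the same definition applies to scalars and, coordinate-wise, to points, where the mean is the centroid:

def arithmetic_mean(values):
    # sum of the collection divided by the count of its members
    return sum(values) / len(values)

print(arithmetic_mean([2500, 2700, 2400, 2300, 2600]))  # 2500.0

points = [(0.0, 0.0), (4.0, 0.0), (2.0, 6.0)]
print(tuple(arithmetic_mean(c) for c in zip(*points)))  # (2.0, 2.0), the centroid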
Motivating properties
The arithmetic mean has several properties that make it interesting, especially as a measure of central tendency. These include:
If numbers $x_1, \ldots, x_n$ have mean $\bar{x}$, then $(x_1 - \bar{x}) + (x_2 - \bar{x}) + \cdots + (x_n - \bar{x}) = 0$. Since $x_i - \bar{x}$ is the distance from a given number to the mean, one way to interpret this property is by saying that the numbers to the left of the mean are balanced by the numbers to the right of the mean.
https://en.wikipedia.org/wiki/Argument%20%28disambiguation%29
In logic and philosophy, an argument is an attempt to persuade someone of something, or give evidence or reasons for accepting a particular conclusion.
Argument may also refer to:
Mathematics and computer science
Argument (complex analysis), a function which returns the polar angle of a complex number
Command-line argument, an item of information provided to a program when it is started
Parameter (computer programming), a piece of data provided as input to a subroutine
Argument principle, a theorem in complex analysis
An argument of a function, also known as an independent variable
Language and rhetoric
Argument (literature), a brief summary, often in prose, of a poem or section of a poem or other work
Argument (linguistics), a phrase that appears in a syntactic relationship with the verb in a clause
Oral argument in the United States, a spoken presentation to a judge or appellate court by a lawyer (or parties when representing themselves) of the legal reasons why they should prevail
Closing argument, in law, the concluding statement of each party's counsel reiterating the important arguments in a court case
Other uses
Musical argument, a concept in the theory of musical form
Argument (ship), an Australian sloop wrecked in 1809
Das Argument, a German academic journal
Argument Clinic, a Monty Python sketch
A disagreement between two or more parties or the discussion of the disagreement
Argument (horse)
See also
The Argument (disambiguation)
argumentation
https://en.wikipedia.org/wiki/Algorithms%20%28journal%29
Algorithms is a monthly peer-reviewed open-access scientific journal of mathematics, covering design, analysis, and experiments on algorithms. The journal is published by MDPI and was established in 2008. The founding editor-in-chief was Kazuo Iwama (Kyoto University). From May 2014 to September 2019, the editor-in-chief was Henning Fernau (Universität Trier). The current editor-in-chief is Frank Werner (Otto-von-Guericke-Universität Magdeburg).
Abstracting and indexing
According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.3.
The journal is abstracted and indexed in a number of bibliographic databases.
See also
Journals with similar scope include:
ACM Transactions on Algorithms
Algorithmica
Journal of Algorithms (Elsevier)
https://en.wikipedia.org/wiki/Algorithm
In mathematics and computer science, an algorithm is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning), eventually achieving automation. Using human characteristics as descriptors of machines in metaphorical ways was already practiced by Alan Turing with terms such as "memory", "search" and "stimulus".
In contrast, a heuristic is an approach to problem solving that may not be fully specified or may not guarantee correct or optimal results, especially in problem domains where there is no well-defined correct or optimal result.
As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
History
Ancient algorithms
Since antiquity, step-by-step procedures for solving mathematical problems have been attested. This includes Babylonian mathematics (around 2500 BC), Egyptian mathematics (around 1550 BC), Indian mathematics (around 800 BC and later; e.g. Shulba Sutras, Kerala School, and Brāhmasphuṭasiddhānta), The Ifa Oracle (around 500 BC), Greek mathematics (around 240 BC, e.g. sieve of Eratosthenes and Euclidean algorithm), and Arabic mathematics (9th century, e.g. cryptographic algorithms for code-breaking based on frequency analysis).
Al-Khwārizmī and the term algorithm
Around 825, Muḥammad ibn Mūsā al-Khwārizmī wrote kitāb al-ḥisāb al-hindī ("Book of Indian computation") and kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī ("Addition and subtraction in Indian arithmetic"). Both of these texts are now lost in the original Arabic. (However, his other book, on algebra, survives.)
In the early 12th century, Latin translations of said al-Khwarizmi texts involving the Hindu–Arabic numeral system and arithmetic appeared: Liber Alghoarismi de practica arismetrice (attributed to John of Seville) and Liber Algorismi de numero Indorum (attributed to Adelard of Bath). Hereby, alghoarismi or algorismi is the Latinization of Al-Khwarizmi's name; the text starts with the phrase Dixit Algorismi ("Thus spoke Al-Khwarizmi").
In 1240, Alexander of Villedieu wrote a Latin text titled Carmen de Algorismo. It begins with:
Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris.
which translates to:
Algorism is the art by which at present we use those Indian figures, which number two times five.
The poem is a few hundred lines long and s
https://en.wikipedia.org/wiki/Axiom%20of%20choice
In mathematics, the axiom of choice, abbreviated AC or AoC, is an axiom of set theory equivalent to the statement that a Cartesian product of a collection of non-empty sets is non-empty. Informally put, the axiom of choice says that given any collection of sets, each containing at least one element, it is possible to construct a new set by arbitrarily choosing one element from each set, even if the collection is infinite. Formally, it states that for every indexed family $(S_i)_{i \in I}$ of nonempty sets, there exists an indexed set $(x_i)_{i \in I}$ such that $x_i \in S_i$ for every $i \in I$. The axiom of choice was formulated in 1904 by Ernst Zermelo in order to formalize his proof of the well-ordering theorem.
In many cases, a set arising from choosing elements arbitrarily can be made without invoking the axiom of choice; this is, in particular, the case if the number of sets from which to choose the elements is finite, or if a canonical rule on how to choose the elements is available – some distinguishing property that happens to hold for exactly one element in each set. An illustrative example is sets picked from the natural numbers. From such sets, one may always select the smallest number, e.g. given the sets {{4, 5, 6}, {10, 12}, {1, 400, 617, 8000}}, the set containing each smallest element is {4, 10, 1}. In this case, "select the smallest number" is a choice function. Even if infinitely many sets were collected from the natural numbers, it would always be possible to choose the smallest element from each set to produce a set. That is, the choice function provides the set of chosen elements. However, no definite choice function is known for the collection of all non-empty subsets of the real numbers. In that case, the axiom of choice must be invoked.
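The following minimal Python sketch (an illustration of the paragraph above, not part of the article) shows that "select the smallest number" is an explicit choice function for a finite family of sets of naturals, so no appeal to the axiom of choice is needed:

def choice_function(family):
    # the canonical rule: pick the smallest element of each set
    return [min(s) for s in family]

family = [{4, 5, 6}, {10, 12}, {1, 400, 617, 8000}]
print(choice_function(family))  # [4, 10, 1]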
Bertrand Russell coined an analogy: for any (even infinite) collection of pairs of shoes, one can pick out the left shoe from each pair to obtain an appropriate collection (i.e. set) of shoes; this makes it possible to define a choice function directly. For an infinite collection of pairs of socks (assumed to have no distinguishing features), there is no obvious way to make a function that forms a set out of selecting one sock from each pair, without invoking the axiom of choice.
Although originally controversial, the axiom of choice is now used without reservation by most mathematicians, and it is included in the standard form of axiomatic set theory, Zermelo–Fraenkel set theory with the axiom of choice (ZFC). One motivation for this use is that a number of generally accepted mathematical results, such as Tychonoff's theorem, require the axiom of choice for their proofs. Contemporary set theorists also study axioms that are not compatible with the axiom of choice, such as the axiom of determinacy. The axiom of choice is avoided in some varieties of constructive mathematics, although there are varieties of constructive mathematics in which the axiom of choice is embraced.
Statement
A choice function (also called selector or selection) is a function $f$, defined on a collection $X$ of nonempty sets, such that for every set $A$ in $X$, $f(A)$ is an element of $A$.
https://en.wikipedia.org/wiki/Arable%20land
Arable land (from the Latin arabilis, "able to be ploughed") is any land capable of being ploughed and used to grow crops. Alternatively, for the purposes of agricultural statistics, the term often has a more precise definition that counts only land actually used for cropping.
A more concise definition appearing in the Eurostat glossary similarly refers to actual rather than potential uses: "land worked (ploughed or tilled) regularly, generally under a system of crop rotation". In Britain, arable land has traditionally been contrasted with pasturable land such as heaths, which could be used for sheep-rearing but not as farmland.
Arable land is vulnerable to land degradation, and some types of un-arable land can be enriched to create useful land. Climate change and biodiversity loss are driving pressure on arable land.
By country
According to the Food and Agriculture Organization of the United Nations, in 2013, the world's arable land amounted to 1.407 billion hectares, out of a total of 4.924 billion hectares of land used for agriculture.
Arable land (hectares per person)
Non-arable land
Agricultural land that is not arable according to the FAO definition above includes:
Meadows and pastures: land used as pasture and grazed range, and those natural grasslands and sedge meadows that are used for hay production in some regions;
Permanent cropland that produces crops from woody vegetation, e.g. orchard land, vineyards, coffee plantations, rubber plantations, and land producing nut trees;
Other non-arable land includes land that is not suitable for any agricultural use. Land that is not arable, in the sense of lacking capability or suitability for cultivation for crop production, has one or more limitations: a lack of sufficient freshwater for irrigation, stoniness, steepness, adverse climate, excessive wetness with the impracticality of drainage, excessive salts, or a combination of these, among others. Although such limitations may preclude cultivation, and some will in some cases preclude any agricultural use, large areas unsuitable for cultivation may still be agriculturally productive. For example, United States NRCS statistics indicate that about 59 percent of US non-federal pasture and unforested rangeland is unsuitable for cultivation, yet such land has value for grazing of livestock. In British Columbia, Canada, 41 percent of the provincial Agricultural Land Reserve area is unsuitable for the production of cultivated crops, but is suitable for uncultivated production of forage usable by grazing livestock. Similar examples can be found in many rangeland areas elsewhere.
Changes in arability
Land conversion
Land incapable of being cultivated for the production of crops can sometimes be converted to arable land. New arable land produces more food and can reduce starvation. This outcome also makes a country more self-sufficient and politically independent, because food importation is reduced. Making non-arable land arable often involves digging new irrigation canals and new wells, aqueducts, desalin
https://en.wikipedia.org/wiki/Absolute%20value
In mathematics, the absolute value or modulus of a real number $x$, denoted $|x|$, is the non-negative value of $x$ without regard to its sign. Namely, $|x| = x$ if $x$ is a positive number, and $|x| = -x$ if $x$ is negative (in which case negating $x$ makes $-x$ positive), and $|0| = 0$. For example, the absolute value of 3 is 3, and the absolute value of −3 is also 3. The absolute value of a number may be thought of as its distance from zero.
Generalisations of the absolute value for real numbers occur in a wide variety of mathematical settings. For example, an absolute value is also defined for the complex numbers, the quaternions, ordered rings, fields and vector spaces. The absolute value is closely related to the notions of magnitude, distance, and norm in various mathematical and physical contexts.
Terminology and notation
In 1806, Jean-Robert Argand introduced the term module, meaning unit of measure in French, specifically for the complex absolute value, and it was borrowed into English in 1866 as the Latin equivalent modulus. The term absolute value has been used in this sense from at least 1806 in French and 1857 in English. The notation $|x|$, with a vertical bar on each side, was introduced by Karl Weierstrass in 1841. Other names for absolute value include numerical value and magnitude. In programming languages and computational software packages, the absolute value of x is generally represented by abs(x), or a similar expression.
The vertical bar notation also appears in a number of other mathematical contexts: for example, when applied to a set, it denotes its cardinality; when applied to a matrix, it denotes its determinant. Vertical bars denote the absolute value only for algebraic objects for which the notion of an absolute value is defined, notably an element of a normed division algebra, for example a real number, a complex number, or a quaternion. A closely related but distinct notation is the use of vertical bars for either the Euclidean norm or sup norm of a vector, although double vertical bars with subscripts ($\|\cdot\|_2$ and $\|\cdot\|_\infty$, respectively) are a more common and less ambiguous notation.
Definition and properties
Real numbers
For any real number $x$, the absolute value or modulus of $x$ is denoted $|x|$, with a vertical bar on each side of the quantity, and is defined as
$$|x| = \begin{cases} x, & \text{if } x \ge 0 \\ -x, & \text{if } x < 0. \end{cases}$$
The absolute value of $x$ is thus always either a positive number or zero, but never negative. When $x$ itself is negative ($x < 0$), then its absolute value is necessarily positive ($|x| = -x > 0$).
From an analytic geometry point of view, the absolute value of a real number is that number's distance from zero along the real number line, and more generally the absolute value of the difference of two real numbers (their absolute difference) is the distance between them. The notion of an abstract distance function in mathematics can be seen to be a generalisation of the absolute value of the difference (see "Distance" below).
Since the square root symbol represents the unique positive square root, when applied to a positive number, it follows that
$$|x| = \sqrt{x^2}.$$
This is equivalent to the definition above, and may be used as an alternative definition of the absolute value of real numbers.
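A small numerical check (a Python sketch, not part of the article) of the equivalence between the piecewise definition and the square-root characterisation:

import math

def abs_via_sqrt(x):
    # |x| = sqrt(x^2), relying on the principal (positive) square root
    return math.sqrt(x * x)

print(abs_via_sqrt(-3.0), abs(-3.0))  # 3.0 3.0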
https://en.wikipedia.org/wiki/Algebraically%20closed%20field
In mathematics, a field $F$ is algebraically closed if every non-constant polynomial in $F[x]$ (the univariate polynomial ring with coefficients in $F$) has a root in $F$.
Examples
As an example, the field of real numbers is not algebraically closed, because the polynomial equation $x^2 + 1 = 0$ has no solution in real numbers, even though all its coefficients (1 and 0) are real. The same argument proves that no subfield of the real field is algebraically closed; in particular, the field of rational numbers is not algebraically closed. By contrast, the fundamental theorem of algebra states that the field of complex numbers is algebraically closed. Another example of an algebraically closed field is the field of (complex) algebraic numbers.
No finite field F is algebraically closed, because if a1, a2, ..., an are the elements of F, then the polynomial (x − a1)(x − a2) ⋯ (x − an) + 1
has no zero in F. However, the union of all finite fields of a fixed characteristic p is an algebraically closed field, which is, in fact, the algebraic closure of the field with p elements.
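A small numerical check of this argument (a Python sketch, not part of the article) over the finite field GF(5): the polynomial (x − a1)(x − a2) ⋯ (x − a5) + 1 evaluates to 1 at every element, so it has no zero there:

p = 5  # work in GF(5) = {0, 1, 2, 3, 4}

def f(x):
    # (x - 0)(x - 1)(x - 2)(x - 3)(x - 4) + 1, reduced mod p
    prod = 1
    for a in range(p):
        prod = (prod * (x - a)) % p
    return (prod + 1) % p

print([f(x) for x in range(p)])  # [1, 1, 1, 1, 1] -- no roots, so GF(5) is not algebraically closed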
Equivalent properties
Given a field F, the assertion "F is algebraically closed" is equivalent to other assertions:
The only irreducible polynomials are those of degree one
The field F is algebraically closed if and only if the only irreducible polynomials in the polynomial ring F[x] are those of degree one.
The assertion "the polynomials of degree one are irreducible" is trivially true for any field. If F is algebraically closed and p(x) is an irreducible polynomial of F[x], then it has some root a and therefore p(x) is a multiple of x − a. Since p(x) is irreducible, this means that p(x) = k(x − a), for some k ∈ F \ {0}. On the other hand, if F is not algebraically closed, then there is some non-constant polynomial p(x) in F[x] without roots in F. Let q(x) be some irreducible factor of p(x). Since p(x) has no roots in F, q(x) also has no roots in F. Therefore, q(x) has degree greater than one, since every first degree polynomial has one root in F.
Every polynomial is a product of first degree polynomials
The field F is algebraically closed if and only if every polynomial p(x) of degree n ≥ 1, with coefficients in F, splits into linear factors. In other words, there are elements k, x1, x2, ..., xn of the field F such that p(x) = k(x − x1)(x − x2) ⋯ (x − xn).
If F has this property, then clearly every non-constant polynomial in F[x] has some root in F; in other words, F is algebraically closed. On the other hand, that the property stated here holds for F if F is algebraically closed follows from the previous property together with the fact that, for any field K, any polynomial in K[x] can be written as a product of irreducible polynomials.
Polynomials of prime degree have roots
If every polynomial over F of prime degree has a root in F, then every non-constant polynomial has a root in F. It follows that a field is algebraically closed if and only if every polynomial over F of prime degree has a root in F.
https://en.wikipedia.org/wiki/Algorithms%20for%20calculating%20variance
Algorithms for calculating variance play a major role in computational statistics. A key difficulty in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical instability as well as to arithmetic overflow when dealing with large values.
Naïve algorithm
A formula for calculating the variance of an entire population of size N is:
$$\sigma^2 = \frac{\sum_{i=1}^{N} x_i^2 - \left(\sum_{i=1}^{N} x_i\right)^2 / N}{N}.$$
Using Bessel's correction to calculate an unbiased estimate of the population variance from a finite sample of n observations, the formula is:
$$s^2 = \frac{\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2 / n}{n - 1}.$$
Therefore, a naïve algorithm to calculate the estimated variance is given by the following:
Let n ← 0, Sum ← 0, SumSq ← 0
For each datum x:
    n ← n + 1
    Sum ← Sum + x
    SumSq ← SumSq + x × x
Var = (SumSq − (Sum × Sum) / n) / (n − 1)
This algorithm can easily be adapted to compute the variance of a finite population: simply divide by n instead of n − 1 on the last line.
Because SumSq and (Sum × Sum)/n can be very similar numbers, cancellation can lead to the precision of the result being much less than the inherent precision of the floating-point arithmetic used to perform the computation. Thus this algorithm should not be used in practice, and several alternate, numerically stable algorithms have been proposed. The problem is particularly bad if the standard deviation is small relative to the mean.
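A minimal Python sketch of the naïve algorithm (with made-up data, not part of the article) that exhibits the cancellation when the standard deviation is small relative to the mean:

def naive_variance(data):
    n, total, total_sq = 0, 0.0, 0.0
    for x in data:
        n += 1
        total += x
        total_sq += x * x
    # the two terms below are nearly equal for data with a large mean
    return (total_sq - total * total / n) / (n - 1)

data = [1e9 + x for x in (4.0, 7.0, 13.0, 16.0)]  # exact sample variance is 30.0
print(naive_variance(data))  # typically far from 30.0 in double precision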
Computing shifted data
The variance is invariant with respect to changes in a location parameter, a property which can be used to avoid the catastrophic cancellation in this formula. Subtracting a constant $K$ from every data point leaves the variance unchanged, which leads to the new formula
$$s^2 = \frac{\sum_{i=1}^{n} (x_i - K)^2 - \left(\sum_{i=1}^{n} (x_i - K)\right)^2 / n}{n - 1}.$$
The closer $K$ is to the mean value, the more accurate the result will be, but just choosing a value inside the range of samples will guarantee the desired stability. If the values $(x_i - K)$ are small, then there are no problems with the sum of their squares; on the contrary, if they are large, it necessarily means that the variance is large as well. In any case the second term in the formula is always smaller than the first one, therefore no cancellation may occur.
If just the first sample is taken as $K$, the algorithm can be written in the Python programming language as
def shifted_data_variance(data):
    if len(data) < 2:
        return 0.0
    K = data[0]
    n = Ex = Ex2 = 0.0
    for x in data:
        n += 1
        Ex += x - K
        Ex2 += (x - K) ** 2
    variance = (Ex2 - Ex**2 / n) / (n - 1)
    # use n instead of (n - 1) if you want to compute the exact variance of the given data
    # use (n - 1) if data are samples of a larger population
    return variance
This formula also facilitates incremental computation, which can be expressed as
K = Ex = Ex2 = 0.0
n = 0

def add_variable(x):
    global K, n, Ex, Ex2
    if n == 0:
        K = x
    n += 1
    Ex += x - K
    Ex2 += (x - K) ** 2

def remove_variable(x):
    global K, n, Ex, Ex2
    n -= 1
    Ex -= x - K
    Ex2 -= (x - K) ** 2

def get_mean():
    global K, n, Ex
    return K + Ex / n

def get_variance():
    global n, Ex, Ex2
    return (Ex2 - Ex**2 / n) / (n - 1)
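An illustrative run of the incremental functions above (example values only):

for v in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    add_variable(v)
print(get_mean())      # 5.0
print(get_variance())  # 32/7 ≈ 4.571, the sample variance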
Two-pass algorithm
An alternative approach, using a different formula for the variance, f
https://en.wikipedia.org/wiki/Algebraic%20number
An algebraic number is a number that is a root of a non-zero polynomial in one variable with integer (or, equivalently, rational) coefficients. For example, the golden ratio, $(1 + \sqrt{5})/2$, is an algebraic number, because it is a root of the polynomial $x^2 - x - 1$. That is, it is a value for x for which the polynomial evaluates to zero. As another example, the complex number $1 + i$ is algebraic because it is a root of $x^4 + 4$.
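A quick numerical check (a Python sketch, not part of the article) that $1 + i$ is a root of $x^4 + 4$, using Python's built-in complex arithmetic:

z = 1 + 1j
print(z**4 + 4)  # 0j, so the polynomial vanishes at 1 + i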
All integers and rational numbers are algebraic, as are all roots of integers. Real and complex numbers that are not algebraic, such as $\pi$ and $e$, are called transcendental numbers.
The set of algebraic numbers is countably infinite and has measure zero in the Lebesgue measure as a subset of the uncountable complex numbers. In that sense, almost all complex numbers are transcendental.
Examples
All rational numbers are algebraic. Any rational number, expressed as the quotient of an integer $a$ and a (non-zero) natural number $b$, satisfies the above definition, because $x = a/b$ is the root of a non-zero polynomial, namely $bx - a$.
Quadratic irrational numbers, irrational solutions of a quadratic polynomial $ax^2 + bx + c$ with integer coefficients $a$, $b$, and $c$, are algebraic numbers. If the quadratic polynomial is monic ($a = 1$), the roots are further qualified as quadratic integers.
Gaussian integers, complex numbers $a + bi$ for which both $a$ and $b$ are integers, are also quadratic integers. This is because $a + bi$ and $a - bi$ are the two roots of the quadratic $x^2 - 2ax + a^2 + b^2$.
A constructible number can be constructed from a given unit length using a straightedge and compass. It includes all quadratic irrational roots, all rational numbers, and all numbers that can be formed from these using the basic arithmetic operations and the extraction of square roots. (By designating cardinal directions for $+1$, $-1$, $+i$, and $-i$, complex numbers with constructible real and imaginary parts are considered constructible.)
Any expression formed from algebraic numbers using any combination of the basic arithmetic operations and extraction of $n$th roots gives another algebraic number.
Polynomial roots that cannot be expressed in terms of the basic arithmetic operations and extraction of $n$th roots (such as the roots of $x^5 - x + 1$) are algebraic as well. That happens with many but not all polynomials of degree 5 or higher.
Values of trigonometric functions of rational multiples of $\pi$ (except when undefined): for example, $\cos(\pi/7)$, $\cos(3\pi/7)$, and $\cos(5\pi/7)$ satisfy $8x^3 - 4x^2 - 4x + 1 = 0$. This polynomial is irreducible over the rationals and so the three cosines are conjugate algebraic numbers. Likewise, $\tan(3\pi/16)$, $\tan(7\pi/16)$, $\tan(11\pi/16)$, and $\tan(15\pi/16)$ satisfy the irreducible polynomial $x^4 - 4x^3 - 6x^2 + 4x + 1$, and so are conjugate algebraic integers.
Some but not all irrational numbers are algebraic:
The numbers $\sqrt{2}$ and $\sqrt[3]{3}/2$ are algebraic since they are roots of polynomials $x^2 - 2$ and $8x^3 - 3$, respectively.
The golden ratio $\varphi$ is algebraic since it is a root of the polynomial $x^2 - x - 1$.
The numbers $\pi$ and e are not algebraic numbers (see the Lindemann–Weierstrass theorem).
Properties
If a polynomial with rational coefficients is multiplied through by the least common denominator, the resulting polynomial with integer coefficients has the same roots. This shows
https://en.wikipedia.org/wiki/Automorphism
In mathematics, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object.
Definition
In the context of abstract algebra, a mathematical object is an algebraic structure such as a group, ring, or vector space. An automorphism is simply a bijective homomorphism of an object with itself. (The definition of a homomorphism depends on the type of algebraic structure; see, for example, group homomorphism, ring homomorphism, and linear operator.)
The identity morphism (identity mapping) is called the trivial automorphism in some contexts. Respectively, other (non-identity) automorphisms are called nontrivial automorphisms.
The exact definition of an automorphism depends on the type of "mathematical object" in question and what, precisely, constitutes an "isomorphism" of that object. The most general setting in which these words have meaning is an abstract branch of mathematics called category theory. Category theory deals with abstract objects and morphisms between those objects.
In category theory, an automorphism is an endomorphism (i.e., a morphism from an object to itself) which is also an isomorphism (in the categorical sense of the word, meaning there exists a right and left inverse endomorphism).
This is a very abstract definition since, in category theory, morphisms are not necessarily functions and objects are not necessarily sets. In most concrete settings, however, the objects will be sets with some additional structure and the morphisms will be functions preserving that structure.
Automorphism group
If the automorphisms of an object $X$ form a set (instead of a proper class), then they form a group under composition of morphisms. This group is called the automorphism group of $X$.
Closure: composition of two automorphisms is another automorphism.
Associativity: it is part of the definition of a category that composition of morphisms is associative.
Identity: the identity is the identity morphism from an object to itself, which is an automorphism.
Inverses: by definition every isomorphism has an inverse that is also an isomorphism, and since the inverse is also an endomorphism of the same object it is an automorphism.
The automorphism group of an object X in a category C is denoted AutC(X), or simply Aut(X) if the category is clear from context.
Examples
In set theory, an arbitrary permutation of the elements of a set X is an automorphism. The automorphism group of X is also called the symmetric group on X.
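A minimal Python sketch (not part of the article): for a finite set with no additional structure, every permutation is an automorphism, so the automorphism group has n! elements:

from itertools import permutations

X = [1, 2, 3]
autos = list(permutations(X))  # each tuple encodes a bijection from X to itself
print(len(autos))  # 6 == 3!, the order of the symmetric group on X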
In elementary arithmetic, the set of integers, Z, considered as a group under addition, has a unique nontrivial automorphism: negation. Considered as a ring, however, it has only the trivial automorphism. Generally speaking, negation is an automorphism of any abelian group, but not of a ring or field.
https://en.wikipedia.org/wiki/Antisymmetric%20relation
In mathematics, a binary relation $R$ on a set $X$ is antisymmetric if there is no pair of distinct elements of $X$ each of which is related by $R$ to the other. More formally, $R$ is antisymmetric precisely if for all $a, b \in X$:
$$aRb \text{ and } bRa \implies a = b,$$
or equivalently,
$$aRb \text{ and } a \neq b \implies \neg(bRa).$$
The definition of antisymmetry says nothing about whether $aRa$ actually holds or not for any $a$. An antisymmetric relation $R$ on a set $X$ may be reflexive (that is, $aRa$ for all $a \in X$), irreflexive (that is, $aRa$ for no $a \in X$), or neither reflexive nor irreflexive. A relation is asymmetric if and only if it is both antisymmetric and irreflexive.
Examples
The divisibility relation on the natural numbers is an important example of an antisymmetric relation. In this context, antisymmetry means that the only way each of two numbers can be divisible by the other is if the two are, in fact, the same number; equivalently, if $m$ and $n$ are distinct and $m$ is a factor of $n$, then $n$ cannot be a factor of $m$. For example, 12 is divisible by 4, but 4 is not divisible by 12.
The usual order relation $\leq$ on the real numbers is antisymmetric: if for two real numbers $x$ and $y$ both inequalities $x \leq y$ and $y \leq x$ hold, then $x$ and $y$ must be equal. Similarly, the subset order $\subseteq$ on the subsets of any given set is antisymmetric: given two sets $A$ and $B$, if every element in $A$ also is in $B$ and every element in $B$ is also in $A$, then $A$ and $B$ must contain all the same elements and therefore be equal: $A \subseteq B$ and $B \subseteq A$ implies $A = B$.
A real-life example of a relation that is typically antisymmetric is "paid the restaurant bill of" (understood as restricted to a given occasion). Typically, some people pay their own bills, while others pay for their spouses or friends. As long as no two people pay each other's bills, the relation is antisymmetric.
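A brute-force check of the definition on a finite carrier set (a Python sketch, not part of the article): a relation is antisymmetric when rel(a, b) and rel(b, a) can only hold together if a == b:

def is_antisymmetric(rel, elems):
    return all(a == b or not (rel(a, b) and rel(b, a))
               for a in elems for b in elems)

divides = lambda a, b: b % a == 0
print(is_antisymmetric(divides, range(1, 30)))          # True: divisibility
print(is_antisymmetric(lambda a, b: a <= b, range(9)))  # True: the usual order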
Properties
Partial and total orders are antisymmetric by definition. A relation can be both symmetric and antisymmetric (in this case, it must be coreflexive), and there are relations which are neither symmetric nor antisymmetric (for example, the "preys on" relation on biological species).
Antisymmetry is different from asymmetry: a relation is asymmetric if and only if it is antisymmetric and irreflexive.
See also
Symmetry in mathematics
https://en.wikipedia.org/wiki/Angle
In Euclidean geometry, an angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle.
Angles formed by two rays are also known as plane angles as they lie in the plane that contains the rays. Angles are also formed by the intersection of two planes; these are called dihedral angles. Two intersecting curves may also define an angle, which is the angle of the rays lying tangent to the respective curves at their point of intersection.
The magnitude of an angle is called an angular measure or simply "angle". Angle of rotation is a measure conventionally defined as the ratio of a circular arc length to its radius, and may be a negative number. In the case of a geometric angle, the arc is centered at the vertex and delimited by the sides. In the case of a rotation, the arc is centered at the center of the rotation and delimited by any other point and its image by the rotation.
History and etymology
The word angle comes from the Latin word angulus, meaning "corner." Cognate words include the Greek ἀγκύλος (ankylos) meaning "crooked, curved" and the English word "ankle." Both are connected with the Proto-Indo-European root *ank-, meaning "to bend" or "bow."
Euclid defines a plane angle as the inclination to each other, in a plane, of two lines that meet each other and do not lie straight with respect to each other. According to the Neoplatonic metaphysician Proclus, an angle must be either a quality, a quantity, or a relationship. The first concept, angle as quality, was used by Eudemus of Rhodes, who regarded an angle as a deviation from a straight line; the second, angle as quantity, by Carpus of Antioch, who regarded it as the interval or space between the intersecting lines; Euclid adopted the third: angle as a relationship.
Identifying angles
In mathematical expressions, it is common to use Greek letters (α, β, γ, θ, φ, . . . ) as variables denoting the size of some angle (to avoid confusion with its other meaning, the symbol π is typically not used for this purpose). Lower case Roman letters (a, b, c, . . . ) are also used. In contexts where this is not confusing, an angle may be denoted by the upper case Roman letter denoting its vertex. See the figures in this article for examples.
The three defining points may also identify angles in geometric figures. For example, the angle with vertex A formed by the rays AB and AC (that is, the half-lines from point A through points B and C) is denoted ∠BAC or BÂC. Where there is no risk of confusion, the angle may sometimes be referred to by a single vertex alone (in this case, "angle A").
Potentially, an angle denoted as, say, ∠BAC might refer to any of four angles: the clockwise angle from B to C about A, the anticlockwise angle from B to C about A, the clockwise angle from C to B about A, or the anticlockwise angle from C to B about A, where the direction in which the angle is measured determines its sign (see the section on signed angles). However, in many geometrical situations, it is evi
https://en.wikipedia.org/wiki/Almost%20all
In mathematics, the term "almost all" means "all but a negligible quantity". More precisely, if $S$ is a set, "almost all elements of $S$" means "all elements of $S$ but those in a negligible subset of $S$". The meaning of "negligible" depends on the mathematical context; for instance, it can mean finite, countable, or null.
In contrast, "almost no" means "a negligible quantity"; that is, "almost no elements of $S$" means "a negligible quantity of elements of $S$".
Meanings in different areas of mathematics
Prevalent meaning
Throughout mathematics, "almost all" is sometimes used to mean "all (elements of an infinite set) except for finitely many". This use occurs in philosophy as well. Similarly, "almost all" can mean "all (elements of an uncountable set) except for countably many".
Examples:
Almost all positive integers are greater than 10^12.
Almost all prime numbers are odd (2 is the only exception).
Almost all polyhedra are irregular (as there are only nine exceptions: the five Platonic solids and the four Kepler–Poinsot polyhedra).
If P is a nonzero polynomial, then P(x) ≠ 0 for almost all x (if not all x).
Meaning in measure theory
When speaking about the reals, sometimes "almost all" can mean "all reals except for a null set". Similarly, if S is some set of reals, "almost all numbers in S" can mean "all numbers in S except for those in a null set". The real line can be thought of as a one-dimensional Euclidean space. In the more general case of an n-dimensional space (where n is a positive integer), these definitions can be generalised to "all points except for those in a null set" or "all points in S except for those in a null set" (this time, S is a set of points in the space). Even more generally, "almost all" is sometimes used in the sense of "almost everywhere" in measure theory, or in the closely related sense of "almost surely" in probability theory.
Examples:
In a measure space, such as the real line, countable sets are null. The set of rational numbers is countable, so almost all real numbers are irrational.
Georg Cantor's first set theory article proved that the set of algebraic numbers is countable as well, so almost all reals are transcendental.
Almost all reals are normal.
The Cantor set is also null. Thus, almost all reals are not in it even though it is uncountable.
The derivative of the Cantor function is 0 for almost all numbers in the unit interval. It follows from the previous example because the Cantor function is locally constant, and thus has derivative 0 outside the Cantor set.
Meaning in number theory
In number theory, "almost all positive integers" can mean "the positive integers in a set whose natural density is 1". That is, if A is a set of positive integers, and if the proportion of positive integers in A below n (out of all positive integers below n) tends to 1 as n tends to infinity, then almost all positive integers are in A.
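A quick numerical illustration of natural density (a Python sketch with an arbitrary example set, not part of the article): the density below n of A = {k : k > 100} tends to 1, so almost all positive integers are in A in this sense:

def density_below(pred, n):
    # fraction of the integers 1..n that satisfy the predicate
    return sum(1 for k in range(1, n + 1) if pred(k)) / n

for n in (10**3, 10**4, 10**5):
    print(n, density_below(lambda k: k > 100, n))  # 0.9, 0.99, 0.999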
More generally, let S be an infinite set of positive integers, such as the set of
https://en.wikipedia.org/wiki/Associative%20property
In mathematics, the associative property is a property of some binary operations, which means that rearranging the parentheses in an expression will not change the result. In propositional logic, associativity is a valid rule of replacement for expressions in logical proofs.
Within an expression containing two or more occurrences in a row of the same associative operator, the order in which the operations are performed does not matter as long as the sequence of the operands is not changed. That is (after rewriting the expression with parentheses and in infix notation if necessary), rearranging the parentheses in such an expression will not change its value. Consider the following equations:
$$(2 + 3) + 4 = 2 + (3 + 4) = 9$$
$$2 \times (3 \times 4) = (2 \times 3) \times 4 = 24$$
Even though the parentheses were rearranged on each line, the values of the expressions were not altered. Since this holds true when performing addition and multiplication on any real numbers, it can be said that "addition and multiplication of real numbers are associative operations".
Associativity is not the same as commutativity, which addresses whether the order of two operands affects the result. For example, the order does not matter in the multiplication of real numbers, that is, $a \times b = b \times a$, so we say that the multiplication of real numbers is a commutative operation. However, operations such as function composition and matrix multiplication are associative, but not (generally) commutative.
Associative operations are abundant in mathematics; in fact, many algebraic structures (such as semigroups and categories) explicitly require their binary operations to be associative.
However, many important and interesting operations are non-associative; some examples include subtraction, exponentiation, and the vector cross product. In contrast to the theoretical properties of real numbers, the addition of floating point numbers in computer science is not associative, and the choice of how to associate an expression can have a significant effect on rounding error.
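A short Python demonstration (a sketch, not part of the article) of the non-associativity of floating-point addition:

a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)                 # 0.6000000000000001
print(a + (b + c))                 # 0.6
print((a + b) + c == a + (b + c))  # False: the grouping changes the rounding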
Definition
Formally, a binary operation $*$ on a set $S$ is called associative if it satisfies the associative law:
$$(x * y) * z = x * (y * z) \quad \text{for all } x, y, z \in S.$$
Here, ∗ is used to replace the symbol of the operation, which may be any symbol, and even the absence of symbol (juxtaposition) as for multiplication.
The associative law can also be expressed in functional notation thus: $f(f(x, y), z) = f(x, f(y, z))$.
Generalized associative law
If a binary operation is associative, repeated application of the operation produces the same result regardless of how valid pairs of parentheses are inserted in the expression. This is called the generalized associative law. For instance, a product of four elements may be written, without changing the order of the factors, in five possible ways:
$$((ab)c)d, \quad (a(bc))d, \quad (ab)(cd), \quad a((bc)d), \quad a(b(cd))$$
If the product operation is associative, the generalized associative law says that all these expressions will yield the same result. So unless the expression with omitted parentheses already has a different meaning (see below), the parentheses can be considered unnecessary and "the" product can be written unambiguously as $abcd$.
https://en.wikipedia.org/wiki/Kolmogorov%20complexity
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy. It is named after Andrey Kolmogorov, who first published on the subject in 1963, and it is a generalization of classical information theory.
The notion of Kolmogorov complexity can be used to state and prove impossibility results akin to Cantor's diagonal argument, Gödel's incompleteness theorem, and Turing's halting problem.
In particular, no program P computing a lower bound for each text's Kolmogorov complexity can return a value essentially larger than P's own length (see the section below); hence no single program can compute the exact Kolmogorov complexity for infinitely many texts.
Definition
Consider the following two strings of 32 lowercase letters and digits:
abababababababababababababababab, and
4c1j5b2p0cv4w1x8rx2y39umgw5q85s7
The first string has a short English-language description, namely "write ab 16 times", which consists of 17 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, i.e., "write 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7" which has 38 characters. Hence the operation of writing the first string can be said to have "less complexity" than writing the second.
More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language (the sensitivity of complexity relative to the choice of description language is discussed below). It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings like the abab example above, whose Kolmogorov complexity is small relative to the string's size, are not considered to be complex.
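To make the length comparison concrete, here is a minimal Python sketch (not part of the article) using each string's shortest obvious Python expression as its "description":

s1 = "ab" * 16                           # a 9-character expression suffices
s2 = "4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"  # no obvious description shorter than the string
print(len('"ab" * 16'), len(s1))         # 9 32
print(len('"' + s2 + '"'), len(s2))      # 34 32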
The Kolmogorov complexity can be defined for any mathematical object, but for simplicity the scope of this article is restricted to strings. We must first specify a description language for strings. Such a description language can be based on any computer programming language, such as Lisp, Pascal, or Java. If P is a program which outputs a string x, then P is a description of x. The length of the description is just the length of P as a character string, multiplied by the number of bits in a character (e.g., 7 for ASCII).
We could, alternatively, choose an encoding for Turing machines, where an encoding is a function which associates to each Turing Machine M a bitstring <M>. If M is a Turing Machine which, on input w, outputs string x, then the concatenated string <M> w is a
https://en.wikipedia.org/wiki/Augustin-Louis%20Cauchy
Baron Augustin-Louis Cauchy (21 August 1789 – 23 May 1857) was a French mathematician, engineer, and physicist who made pioneering contributions to several branches of mathematics, including mathematical analysis and continuum mechanics. He was one of the first to state and rigorously prove theorems of calculus, rejecting the heuristic principle of the generality of algebra of earlier authors. He (nearly) single-handedly founded complex analysis and the study of permutation groups in abstract algebra.
A profound mathematician, Cauchy had a great influence over his contemporaries and successors; Hans Freudenthal stated: "More concepts and theorems have been named for Cauchy than for any other mathematician (in elasticity alone there are sixteen concepts and theorems named for Cauchy)." Cauchy was a prolific writer; he wrote approximately eight hundred research articles and five complete textbooks on a variety of topics in the fields of mathematics and mathematical physics.
Biography
Youth and education
Cauchy was the son of Louis François Cauchy (1760–1848) and Marie-Madeleine Desestre. Cauchy had two brothers: Alexandre Laurent Cauchy (1792–1857), who became a president of a division of the court of appeal in 1847 and a judge of the court of cassation in 1849, and Eugene François Cauchy (1802–1877), a publicist who also wrote several mathematical works.
Cauchy married Aloise de Bure in 1818. She was a close relative of the publisher who published most of Cauchy's works. They had two daughters, Marie Françoise Alicia (1819) and Marie Mathilde (1823).
Cauchy's father was a highly ranked official in the Parisian Police of the Ancien Régime, but lost this position due to the French Revolution (July 14, 1789), which broke out one month before Augustin-Louis was born. The Cauchy family survived the revolution and the following Reign of Terror (1793–94) by escaping to Arcueil, where Cauchy received his first education, from his father. After the execution of Robespierre (1794), it was safe for the family to return to Paris. There Louis-François Cauchy found himself a new bureaucratic job in 1800, and quickly moved up the ranks. When Napoleon Bonaparte came to power (1799), Louis-François Cauchy was further promoted, and became Secretary-General of the Senate, working directly under Laplace (who is now better known for his work on mathematical physics). The famous mathematician Lagrange was also a friend of the Cauchy family.
On Lagrange's advice, Augustin-Louis was enrolled in the École Centrale du Panthéon, the best secondary school of Paris at that time, in the fall of 1802. Most of the curriculum consisted of classical languages; the young and ambitious Cauchy, being a brilliant student, won many prizes in Latin and the humanities. In spite of these successes, Augustin-Louis chose an engineering career, and prepared himself for the entrance examination to the École Polytechnique.
In 1805, he placed second of 293 applicants on this exam
https://en.wikipedia.org/wiki/Archimedean%20solid
In geometry, an Archimedean solid is one of 13 convex polyhedra whose faces are regular polygons and whose vertices are all symmetric to each other. They were first enumerated by Archimedes. The convex polyhedra with regular faces and symmetric vertices (the convex uniform polyhedra) include also the five Platonic solids (which are composed of only one type of polygon) and the two infinite families of prisms and antiprisms; these are not counted as Archimedean solids. The pseudorhombicuboctahedron has regular faces, and vertices that are symmetric in a weaker sense; it is also not generally counted as an Archimedean solid. The Archimedean solids are a subset of the Johnson solids, whose regular polygonal faces do not need to meet in identical vertices.
In these polyhedra, the vertices are identical, in the sense that a global isometry of the entire solid takes any one vertex to any other. Branko Grünbaum observed that a 14th polyhedron, the elongated square gyrobicupola (or pseudo-rhombicuboctahedron), meets a weaker definition of an Archimedean solid, in which "identical vertices" means merely that the parts of the polyhedron near any two vertices look the same (they have the same shapes of faces meeting around each vertex in the same order and forming the same angles). Grünbaum pointed out a frequent error in which authors define Archimedean solids using some form of this local definition but omit the 14th polyhedron. If only 13 polyhedra are to be listed, the definition must use global symmetries of the polyhedron rather than local neighborhoods.
Prisms and antiprisms, whose symmetry groups are the dihedral groups, are generally not considered to be Archimedean solids, even though their faces are regular polygons and their symmetry groups act transitively on their vertices. Excluding these two infinite families, there are 13 Archimedean solids. All the Archimedean solids (but not the elongated square gyrobicupola) can be made via Wythoff constructions from the Platonic solids with tetrahedral, octahedral and icosahedral symmetry.
Origin of name
The Archimedean solids take their name from Archimedes, who discussed them in a now-lost work. Pappus refers to it, stating that Archimedes listed 13 polyhedra. During the Renaissance, artists and mathematicians valued pure forms with high symmetry, and by around 1620 Johannes Kepler had completed the rediscovery of the 13 polyhedra, as well as defining the prisms, antiprisms, and the non-convex solids known as Kepler–Poinsot polyhedra. (See the references for more information about the rediscovery of the Archimedean solids during the Renaissance.)
Kepler may have also found the elongated square gyrobicupola (pseudorhombicuboctahedron): at least, he once stated that there were 14 Archimedean solids. However, his published enumeration only includes the 13 uniform polyhedra, and the first clear statement of the pseudorhombicuboctahedron's existence was made in 1905, by Duncan Sommerville.
Classification
There are 13 Archimedean
https://en.wikipedia.org/wiki/Antiprism
In geometry, an antiprism or n-gonal antiprism is a polyhedron composed of two parallel direct copies (not mirror images) of an n-sided polygon, connected by an alternating band of 2n triangles. Antiprisms are represented by the Conway notation An.
Antiprisms are a subclass of prismatoids, and are a (degenerate) type of snub polyhedron.
Antiprisms are similar to prisms, except that the bases are twisted relatively to each other, and that the side faces (connecting the bases) are triangles, rather than quadrilaterals.
The dual polyhedron of an n-gonal antiprism is an n-gonal trapezohedron.
History
At the intersection of modern-day graph theory and coding theory, the triangulation of a set of points has interested mathematicians since Isaac Newton, who fruitlessly sought a mathematical proof of the kissing number problem in 1694. The existence of antiprisms was discussed, and their name was coined, by Johannes Kepler, though it is possible that they were previously known to Archimedes, as they satisfy the same conditions on faces and on vertices as the Archimedean solids. According to Ericson and Zinoviev, Harold Scott MacDonald Coxeter wrote at length on the topic, and was among the first to apply the mathematics of Victor Schlegel to this field.
Knowledge in this field is "quite incomplete" and "was obtained fairly recently", i.e. in the 20th century. For example, as of 2001 it had been proven for only a limited number of non-trivial cases that the n-gonal antiprism is the mathematically optimal arrangement of 2n points in the sense of maximizing the minimum Euclidean distance between any two points on the set: in 1943 by László Fejes Tóth for 4 and 6 points (digonal and trigonal antiprisms, which are Platonic solids); in 1951 by Kurt Schütte and Bartel Leendert van der Waerden for 8 points (tetragonal antiprism, which is not a cube).
The chemical structure of binary compounds has been remarked to be in the family of antiprisms; especially those of the family of boron hydrides (in 1975) and carboranes because they are isoelectronic. This is a mathematically real conclusion reached by studies of X-ray diffraction patterns, and stems from the 1971 work of Kenneth Wade, the nominative source for Wade's rules of polyhedral skeletal electron pair theory.
Rare-earth metals such as the lanthanides form antiprismatic compounds with some of the halides or some of the iodides. The study of crystallography is useful here. Some lanthanides, when arranged in peculiar antiprismatic structures with chlorine and water, can form molecule-based magnets.
Right antiprism
For an antiprism with regular n-gon bases, one usually considers the case where these two copies are twisted by an angle of 180/n degrees.
The axis of a regular polygon is the line perpendicular to the polygon plane and passing through the polygon centre.
For an antiprism with congruent regular n-gon bases, twisted by an angle of 180/n degrees, more regularity is obtained if the bases have the same axis: are coaxial; i.e. (for non-coplanar
https://en.wikipedia.org/wiki/Algebraic%20geometry
Algebraic geometry is a branch of mathematics which classically studies zeros of multivariate polynomials. Modern algebraic geometry is based on the use of abstract algebraic techniques, mainly from commutative algebra, for solving geometrical problems about these sets of zeros.
The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves, and quartic curves like lemniscates and Cassini ovals. These are plane algebraic curves. A point of the plane lies on an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of points of special interest like singular points, inflection points and points at infinity. More advanced questions involve the topology of the curve and the relationship between curves defined by different equations.
Algebraic geometry occupies a central place in modern mathematics and has multiple conceptual connections with such diverse fields as complex analysis, topology and number theory. As a study of systems of polynomial equations in several variables, the subject of algebraic geometry begins with finding specific solutions via equation solving, and then proceeds to understand the intrinsic properties of the totality of solutions of a system of equations. This understanding requires both conceptual theory and computational technique.
In the 20th century, algebraic geometry split into several subareas.
The mainstream of algebraic geometry is devoted to the study of the complex points of the algebraic varieties and more generally to the points with coordinates in an algebraically closed field.
Real algebraic geometry is the study of the real algebraic varieties.
Diophantine geometry and, more generally, arithmetic geometry is the study of algebraic varieties over fields that are not algebraically closed and, specifically, over fields of interest in algebraic number theory, such as the field of rational numbers, number fields, finite fields, function fields, and p-adic fields.
A large part of singularity theory is devoted to the singularities of algebraic varieties.
Computational algebraic geometry is an area that has emerged at the intersection of algebraic geometry and computer algebra, with the rise of computers. It consists mainly of algorithm design and software development for the study of properties of explicitly given algebraic varieties.
Much of the development of the mainstream of algebraic geometry in the 20th century occurred within an abstract algebraic framework, with increasing emphasis being placed on "intrinsic" properties of algebraic varieties not dependent on any particular way of embedding the variety in an ambient coordinate space; this parallels developments in topology, differential and complex geometry
|
https://en.wikipedia.org/wiki/Andr%C3%A9%20Weil
|
André Weil (6 May 1906 – 6 August 1998) was a French mathematician, known for his foundational work in number theory and algebraic geometry. He was one of the most influential mathematicians of the twentieth century. His influence is due both to his original contributions to a remarkably broad spectrum of mathematical theories, and to the mark he left on mathematical practice and style, through some of his own works as well as through the Bourbaki group, of which he was one of the principal founders.
Life
André Weil was born in Paris to agnostic Alsatian Jewish parents who fled the annexation of Alsace-Lorraine by the German Empire after the Franco-Prussian War in 1870–71. Simone Weil, who would later become a famous philosopher, was Weil's younger sister and only sibling. He studied in Paris, Rome and Göttingen and received his doctorate in 1928. While in Germany, Weil befriended Carl Ludwig Siegel. Starting in 1930, he spent two academic years at Aligarh Muslim University in India. Aside from mathematics, Weil held lifelong interests in classical Greek and Latin literature, in Hinduism and Sanskrit literature: he had taught himself Sanskrit in 1920. After teaching for one year at Aix-Marseille University, he taught for six years at University of Strasbourg. He married Éveline de Possel (née Éveline Gillet) in 1937.
Weil was in Finland when World War II broke out; he had been traveling in Scandinavia since April 1939. His wife Éveline returned to France without him. Weil was arrested in Finland at the outbreak of the Winter War on suspicion of spying; however, accounts of his life having been in danger were shown to be exaggerated. Weil returned to France via Sweden and the United Kingdom, and was detained at Le Havre in January 1940. He was charged with failure to report for duty, and was imprisoned in Le Havre and then Rouen. It was in the military prison in Bonne-Nouvelle, a district of Rouen, from February to May, that Weil completed the work that made his reputation. He was tried on 3 May 1940. Sentenced to five years, he requested to be attached to a military unit instead, and was given the chance to join a regiment in Cherbourg. After the fall of France in June 1940, he met up with his family in Marseille, where he arrived by sea. He then went to Clermont-Ferrand, where he managed to join his wife Éveline, who had been living in German-occupied France.
In January 1941, Weil and his family sailed from Marseille to New York. He spent the remainder of the war in the United States, where he was supported by the Rockefeller Foundation and the Guggenheim Foundation. For two years, he taught undergraduate mathematics at Lehigh University, where he was unappreciated, overworked and poorly paid, although he did not have to worry about being drafted, unlike his American students. He quit the job at Lehigh and moved to Brazil, where he taught at the Universidade de São Paulo from 1945 to 1947, working with Oscar Zariski. Weil and his wife h
|
https://en.wikipedia.org/wiki/Atle%20Selberg
|
Atle Selberg (14 June 1917 – 6 August 2007) was a Norwegian mathematician known for his work in analytic number theory and the theory of automorphic forms, and in particular for bringing them into relation with spectral theory. He was awarded the Fields Medal in 1950 and an honorary Abel Prize in 2002.
Early years
Selberg was born in Langesund, Norway, the son of teacher Anna Kristina Selberg and mathematician Ole Michael Ludvigsen Selberg. Two of his three brothers, Sigmund and Henrik, were also mathematicians. His other brother, Arne, was a professor of engineering.
While he was still at school he was influenced by the work of Srinivasa Ramanujan and he found an exact analytical formula for the partition function as suggested by the works of Ramanujan; however, this result was first published by Hans Rademacher.
He studied at the University of Oslo and completed his PhD in 1943.
World War II
During World War II, Selberg worked in isolation due to the German occupation of Norway. After the war, his accomplishments became known, including a proof that a positive proportion of the zeros of the Riemann zeta function lie on the critical line Re(s) = 1/2.
During the war, he fought against the German invasion of Norway, and was imprisoned several times.
Post-war in Norway
After the war, he turned to sieve theory, a previously neglected topic which Selberg's work brought into prominence. In a 1947 paper he introduced the Selberg sieve, a method well adapted in particular to providing auxiliary upper bounds, and which contributed to Chen's theorem, among other important results.
In 1948 Selberg submitted two papers to the Annals of Mathematics in which he proved by elementary means the theorems for primes in arithmetic progression and the density of primes. This challenged the widely held view of his time that certain theorems are only obtainable with the advanced methods of complex analysis. Both results were based on his work on the asymptotic formula
θ(x) log x + Σ_{p ≤ x} log p · θ(x/p) = 2x log x + O(x),
where
θ(x) = Σ_{p ≤ x} log p
for primes p. He established this result by elementary means in March 1948, and by July of that year, Selberg and Paul Erdős each obtained elementary proofs of the prime number theorem, both using the asymptotic formula above as a starting point. Circumstances leading up to the proofs, as well as publication disagreements, led to a bitter dispute between the two mathematicians.
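As an illustrative numerical check (our own sketch, not part of the historical account), one can compare the left-hand side of Selberg's asymptotic formula with its main term 2x log x; the ratio should tend to 1:

from math import log
from sympy import primerange

def theta(x):
    # Chebyshev's theta function: sum of log p over primes p <= x
    return sum(log(p) for p in primerange(2, int(x) + 1))

def selberg_lhs(x):
    # theta(x) log x + sum over primes p <= x of log p * theta(x/p)
    return theta(x) * log(x) + sum(log(p) * theta(x / p)
                                   for p in primerange(2, x + 1))

for x in (1000, 10000):
    print(x, selberg_lhs(x) / (2 * x * log(x)))  # ratios slowly approach 1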
For his fundamental accomplishments during the 1940s, Selberg received the 1950 Fields Medal.
Institute for Advanced Study
Selberg moved to the United States and worked as an associate professor at Syracuse University and later settled at the Institute for Advanced Study in Princeton, New Jersey in the 1950s, where he remained until his death. During the 1950s he worked on introducing spectral theory into number theory, culminating in his development of the Selberg trace formula, the most famous and influential of his results. In its simplest form, this establishes a duality between the lengths of closed geodesics on a compact Riemann
|
https://en.wikipedia.org/wiki/Andrew%20Wiles
|
Sir Andrew John Wiles (born 11 April 1953) is an English mathematician and a Royal Society Research Professor at the University of Oxford, specialising in number theory. He is best known for proving Fermat's Last Theorem, for which he was awarded the 2016 Abel Prize and the 2017 Copley Medal by the Royal Society. He was appointed Knight Commander of the Order of the British Empire in 2000, and in 2018, was appointed the first Regius Professor of Mathematics at Oxford. Wiles is also a 1997 MacArthur Fellow.
Education and early life
Wiles was born on 11 April 1953 in Cambridge, England, the son of Maurice Frank Wiles (1923–2005) and Patricia Wiles (née Mowll). From 1952 to 1955, his father worked as the chaplain at Ridley Hall, Cambridge, and later became the Regius Professor of Divinity at the University of Oxford.
Wiles began his formal schooling in Nigeria, while living there as a very young boy with his parents. However, according to letters written by his parents, for at least the first several months after he was supposed to be attending classes, he refused to go. From that fact, Wiles concluded that in his earliest years he was not enthusiastic about spending time in academic institutions. He trusts the letters, although he himself could not remember a time when he did not enjoy solving mathematical problems.
Wiles attended King's College School, Cambridge, and The Leys School, Cambridge. Wiles states that he came across Fermat's Last Theorem on his way home from school when he was 10 years old. He stopped at his local library where he found a book The Last Problem, by Eric Temple Bell, about the theorem. Fascinated by the existence of a theorem that was so easy to state that he, a ten-year-old, could understand it, but that no one had proven, he decided to be the first person to prove it. However, he soon realised that his knowledge was too limited, so he abandoned his childhood dream until it was brought back to his attention at the age of 33 by Ken Ribet's 1986 proof of the epsilon conjecture, which Gerhard Frey had previously linked to Fermat's famous equation.
Career and research
In 1974, Wiles earned his bachelor's degree in mathematics at Merton College, Oxford. Wiles's graduate research was guided by John Coates, beginning in the summer of 1975. Together they worked on the arithmetic of elliptic curves with complex multiplication by the methods of Iwasawa theory. He further worked with Barry Mazur on the main conjecture of Iwasawa theory over the rational numbers, and soon afterward, he generalised this result to totally real fields.
In 1980, Wiles earned a PhD while at Clare College, Cambridge. After a stay at the Institute for Advanced Study in Princeton, New Jersey, in 1981, Wiles became a Professor of Mathematics at Princeton University.
In 1985–86, Wiles was a Guggenheim Fellow at the Institut des Hautes Études Scientifiques near Paris and at the École Normale Supérieure.
From 1988 to 1990, Wiles was a Royal Society Research Professor at t
|
https://en.wikipedia.org/wiki/Alexander%20Grothendieck
|
Alexander Grothendieck (28 March 1928 – 13 November 2014) was a French mathematician who became the leading figure in the creation of modern algebraic geometry. His research extended the scope of the field and added elements of commutative algebra, homological algebra, sheaf theory, and category theory to its foundations, while his so-called "relative" perspective led to revolutionary advances in many areas of pure mathematics. He is considered by many to be the greatest mathematician of the twentieth century.
Grothendieck began his productive and public career as a mathematician in 1949. In 1958, he was appointed a research professor at the Institut des hautes études scientifiques (IHÉS) and remained there until 1970, when, driven by personal and political convictions, he left following a dispute over military funding. He received the Fields Medal in 1966 for advances in algebraic geometry, homological algebra, and K-theory. He later became professor at the University of Montpellier and, while still producing relevant mathematical work, he withdrew from the mathematical community and devoted himself to political and religious pursuits (first Buddhism and later, a more Catholic Christian vision). In 1991, he moved to the French village of Lasserre in the Pyrenees, where he lived in seclusion, still working tirelessly on mathematics and his philosophical and religious thoughts until his death in 2014.
Life
Family and childhood
Grothendieck was born in Berlin to anarchist parents. His father, Alexander "Sascha" Schapiro (also known as Alexander Tanaroff), had Hasidic Jewish roots and had been imprisoned in Russia before moving to Germany in 1922, while his mother, Johanna "Hanka" Grothendieck, came from a Protestant German family in Hamburg and worked as a journalist. As teenagers, both of his parents had broken away from their early backgrounds. At the time of his birth, Grothendieck's mother was married to the journalist Johannes Raddatz and initially, his birth name was recorded as "Alexander Raddatz." That marriage was dissolved in 1929 and Schapiro acknowledged his paternity, but never married Hanka Grothendieck. Grothendieck had a maternal sibling, his half sister Maidi.
Grothendieck lived with his parents in Berlin until the end of 1933, when his father moved to Paris to evade Nazism. His mother followed soon thereafter. Grothendieck was left in the care of Wilhelm Heydorn, a Lutheran pastor and teacher in Hamburg. According to Winfried Scharlau, during this time, his parents took part in the Spanish Civil War as non-combatant auxiliaries. However, others state that Schapiro fought in the anarchist militia.
World War II
In May 1939, Grothendieck was put on a train in Hamburg for France. Shortly afterward his father was interned in Le Vernet. He and his mother were then interned in various camps from 1940 to 1942 as "undesirable dangerous foreigners." The first camp was the Rieucros Camp, where his mother contracted the tubercul
|
https://en.wikipedia.org/wiki/Associative%20algebra
|
In mathematics, an associative algebra A over a commutative ring (often a field) K is a ring A together with a ring homomorphism from K into the center of A. This is thus an algebraic structure with an addition, a multiplication, and a scalar multiplication (the multiplication by the image of an element of K under the ring homomorphism). The addition and multiplication operations together give A the structure of a ring; the addition and scalar multiplication operations together give A the structure of a module or vector space over K. In this article we will also use the term K-algebra to mean an associative algebra over K. A standard first example of a K-algebra is a ring of square matrices over a commutative ring K, with the usual matrix multiplication.
A commutative algebra is an associative algebra that has a commutative multiplication, or, equivalently, an associative algebra that is also a commutative ring.
In this article associative algebras are assumed to have a multiplicative identity, denoted 1; they are sometimes called unital associative algebras for clarification. In some areas of mathematics this assumption is not made, and we will call such structures non-unital associative algebras. We will also assume that all rings are unital, and all ring homomorphisms are unital.
Every ring is an associative algebra over its center and over the integers.
Definition
Let R be a commutative ring (so R could be a field). An associative R-algebra (or more simply, an R-algebra) is a ring
that is also an R-module in such a way that the two additions (the ring addition and the module addition) are the same operation, and scalar multiplication satisfies
r · (xy) = (r · x)y = x(r · y)
for all r in R and x, y in the algebra. (This definition implies that the algebra, being a ring, is unital, since rings are supposed to have a multiplicative identity.)
Equivalently, an associative algebra A is a ring together with a ring homomorphism from R to the center of A. If f is such a homomorphism, the scalar multiplication is r · x = f(r)x (here the multiplication is the ring multiplication); if the scalar multiplication is given, the ring homomorphism is given by f(r) = r · 1, where 1 is the multiplicative identity of A. (See also below).
Every ring is an associative Z-algebra, where Z denotes the ring of the integers.
A commutative algebra is an associative algebra that is also a commutative ring.
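To make the equivalence concrete, here is a small numpy sketch (our own illustration, taking A to be the 2×2 real matrices and K = R): the structure map r ↦ rI lands in the center of A, and scalar multiplication is compatible with the ring product.

import numpy as np

# Structure map f: R -> Z(A), r |-> r * I; scalar matrices are central.
f = lambda r: r * np.eye(2)

x = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([[0.0, 1.0], [1.0, 0.0]])
r = 2.5

assert np.allclose(r * x, f(r) @ x)            # r.x equals f(r) times x
assert np.allclose(r * (x @ y), (r * x) @ y)   # r.(xy) = (r.x)y
assert np.allclose(r * (x @ y), x @ (r * y))   # r.(xy) = x(r.y)
print("2x2 real matrices form an associative R-algebra")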
As a monoid object in the category of modules
The definition is equivalent to saying that a unital associative R-algebra is a monoid object in R-Mod (the monoidal category of R-modules). By definition, a ring is a monoid object in the category of abelian groups; thus, the notion of an associative algebra is obtained by replacing the category of abelian groups with the category of modules.
Pushing this idea further, some authors have introduced a "generalized ring" as a monoid object in some other category that behaves like the category of modules. Indeed, this reinterpretation allows one to avoid making an explicit refere
|
https://en.wikipedia.org/wiki/Axiom%20of%20regularity
|
In mathematics, the axiom of regularity (also known as the axiom of foundation) is an axiom of Zermelo–Fraenkel set theory that states that every non-empty set A contains an element that is disjoint from A. In first-order logic, the axiom reads:
∀x (x ≠ ∅ → ∃y (y ∈ x ∧ y ∩ x = ∅))
The axiom of regularity together with the axiom of pairing implies that no set is an element of itself, and that there is no infinite sequence (a_n) such that a_{i+1} is an element of a_i for all i. With the axiom of dependent choice (which is a weakened form of the axiom of choice), this result can be reversed: if there are no such infinite sequences, then the axiom of regularity is true. Hence, in this context the axiom of regularity is equivalent to the sentence that there are no downward infinite membership chains.
The axiom is the contribution of von Neumann (1925); it was adopted in a formulation closer to the one found in contemporary textbooks by Zermelo (1930). Virtually all results in the branches of mathematics based on set theory hold even in the absence of regularity; see chapter 3 of Kunen (1980). However, regularity makes some properties of ordinals easier to prove; and it not only allows induction to be done on well-ordered sets but also on proper classes that are well-founded relational structures such as the lexicographical ordering on
Given the other axioms of Zermelo–Fraenkel set theory, the axiom of regularity is equivalent to the axiom of induction. The axiom of induction tends to be used in place of the axiom of regularity in intuitionistic theories (ones that do not accept the law of the excluded middle), where the two axioms are not equivalent.
In addition to omitting the axiom of regularity, non-standard set theories have indeed postulated the existence of sets that are elements of themselves.
Elementary implications of regularity
No set is an element of itself
Let A be a set, and apply the axiom of regularity to {A}, which is a set by the axiom of pairing. We see that there must be an element of {A} which is disjoint from {A}. Since the only element of {A} is A, it must be that A is disjoint from {A}. So, since A ∈ {A}, we cannot have A ∈ A (by the definition of disjoint).
No infinite descending sequence of sets exists
Suppose, to the contrary, that there is a function, f, on the natural numbers with f(n+1) an element of f(n) for each n. Define S = {f(n): n a natural number}, the range of f, which can be seen to be a set from the axiom schema of replacement. Applying the axiom of regularity to S, let B be an element of S which is disjoint from S. By the definition of S, B must be f(k) for some natural number k. However, we are given that f(k) contains f(k+1) which is also an element of S. So f(k+1) is in the intersection of f(k) and S. This contradicts the fact that they are disjoint sets. Since our supposition led to a contradiction, there must not be any such function, f.
The nonexistence of a set containing itself can be seen as a special case where the sequence is infinite and constant.
Notice that this argument only
|
https://en.wikipedia.org/wiki/Algebraic%20extension
|
In mathematics, an algebraic extension is a field extension L/K such that every element of the larger field L is algebraic over the smaller field K; that is, every element of L is a root of a non-zero polynomial with coefficients in K. A field extension that is not algebraic is said to be transcendental, and must contain transcendental elements, that is, elements that are not algebraic.
The algebraic extensions of the field of the rational numbers are called algebraic number fields and are the main objects of study of algebraic number theory. Another example of a common algebraic extension is the extension of the real numbers by the complex numbers.
Some properties
All transcendental extensions are of infinite degree. This in turn implies that all finite extensions are algebraic. The converse is not true however: there are infinite extensions which are algebraic. For instance, the field of all algebraic numbers is an infinite algebraic extension of the rational numbers.
Let L be an extension field of K, and a ∈ L. The smallest subfield of L that contains K and a is commonly denoted K(a). If a is algebraic over K, then the elements of K(a) can be expressed as polynomials in a with coefficients in K; that is, K(a) = K[a], the smallest ring containing K and a. In this case, K(a) is a finite extension of K (it is a finite dimensional K-vector space), and all its elements are algebraic over K. These properties do not hold if a is not algebraic. For example, Q(π) ≠ Q[π], and they are both infinite dimensional vector spaces over Q.
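For a concrete computation, sympy can find minimal polynomials; the following sketch (our own illustration) verifies that √2 is algebraic over Q and that other elements of Q(√2) are again algebraic:

from sympy import sqrt, minimal_polynomial, Symbol

x = Symbol('x')

# sqrt(2) is algebraic over Q: it is a root of x**2 - 2.
print(minimal_polynomial(sqrt(2), x))          # x**2 - 2

# Elements of Q(sqrt(2)) reduce to the form a + b*sqrt(2) because
# sqrt(2)**2 = 2, so Q(sqrt(2)) is a 2-dimensional Q-vector space,
# and every element is again algebraic over Q:
print(minimal_polynomial(1 + 3*sqrt(2), x))    # x**2 - 2*x - 17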
An algebraically closed field F has no proper algebraic extensions, that is, no algebraic extensions E with F < E. An example is the field of complex numbers. Every field has an algebraic extension which is algebraically closed (called its algebraic closure), but proving this in general requires some form of the axiom of choice.
An extension L/K is algebraic if and only if every sub K-algebra of L is a field.
Properties
The following three properties hold:
If E is an algebraic extension of F and F is an algebraic extension of K then E is an algebraic extension of K.
If E and F are algebraic extensions of K in a common overfield C, then the compositum EF is an algebraic extension of K.
If E is an algebraic extension of F and E > K > F then E is an algebraic extension of K.
These finitary results can be generalized using transfinite induction: the union of any chain of algebraic extensions over a base field is itself an algebraic extension over the same base field.
This fact, together with Zorn's lemma (applied to an appropriately chosen poset), establishes the existence of algebraic closures.
Generalizations
Model theory generalizes the notion of algebraic extension to arbitrary theories: an embedding of M into N is called an algebraic extension if for every x in N there is a formula p with parameters in M, such that p(x) is true and the set
{ y ∈ N : p(y) }
is finite. It turns out that applying this definition to the theory of fields gives the usual definition of algebraic extension. The Galois group of N over M can again be defined as the group of automorphisms, and it turns out that most of the theor
|
https://en.wikipedia.org/wiki/Analytic%20geometry
|
In mathematics, analytic geometry, also known as coordinate geometry or Cartesian geometry, is the study of geometry using a coordinate system. This contrasts with synthetic geometry.
Analytic geometry is used in physics and engineering, and also in aviation, rocketry, space science, and spaceflight. It is the foundation of most modern fields of geometry, including algebraic, differential, discrete and computational geometry.
Usually the Cartesian coordinate system is applied to manipulate equations for planes, straight lines, and circles, often in two and sometimes three dimensions. Geometrically, one studies the Euclidean plane (two dimensions) and Euclidean space (three dimensions). As taught in school books, analytic geometry can be explained more simply: it is concerned with defining and representing geometric shapes in a numerical way and extracting numerical information from shapes' numerical definitions and representations. That the algebra of the real numbers can be employed to yield results about the linear continuum of geometry relies on the Cantor–Dedekind axiom.
History
Ancient Greece
The Greek mathematician Menaechmus solved problems and proved theorems by using a method that had a strong resemblance to the use of coordinates and it has sometimes been maintained that he had introduced analytic geometry.
Apollonius of Perga, in On Determinate Section, dealt with problems in a manner that may be called an analytic geometry of one dimension, concerning the question of finding points on a line that were in a given ratio to the others. Apollonius in the Conics further developed a method that is so similar to analytic geometry that his work is sometimes thought to have anticipated the work of Descartes by some 1800 years. His application of reference lines, a diameter and a tangent is essentially no different from our modern use of a coordinate frame, where the distances measured along the diameter from the point of tangency are the abscissas, and the segments parallel to the tangent and intercepted between the axis and the curve are the ordinates. He further developed relations between the abscissas and the corresponding ordinates that are equivalent to rhetorical equations (expressed in words) of curves. However, although Apollonius came close to developing analytic geometry, he did not manage to do so since he did not take into account negative magnitudes, and in every case the coordinate system was superimposed upon a given curve a posteriori instead of a priori. That is, equations were determined by curves, but curves were not determined by equations. Coordinates, variables, and equations were subsidiary notions applied to a specific geometric situation.
Persia
The 11th-century Persian mathematician Omar Khayyam saw a strong relationship between geometry and algebra and was moving in the right direction when he helped close the gap between numerical and geometric algebra with his geometric solution of the general cubic equations, but the decisive step came la
|
https://en.wikipedia.org/wiki/Annals%20of%20Mathematics
|
The Annals of Mathematics is a mathematical journal published every two months by Princeton University and the Institute for Advanced Study.
History
The journal was established as The Analyst in 1874, with Joel E. Hendricks as the founding editor-in-chief. It was "intended to afford a medium for the presentation and analysis of any and all questions of interest or importance in pure and applied Mathematics, embracing especially all new and interesting discoveries in theoretical and practical astronomy, mechanical philosophy, and engineering". It was published in Des Moines, Iowa, and was the earliest American mathematics journal to be published continuously for more than a year or two. This incarnation of the journal ceased publication after its tenth year, in 1883, giving as an explanation Hendricks' declining health, but Hendricks made arrangements to have it taken over by new management, and it was continued from March 1884 as the Annals of Mathematics. The new incarnation of the journal was edited by Ormond Stone (University of Virginia). It moved to Harvard in 1899 before reaching its current home in Princeton in 1911.
An important period for the journal was 1928–1958 with Solomon Lefschetz as editor. During this time, it became an increasingly well-known and respected journal. Its rise, in turn, stimulated American mathematics. Norman Steenrod characterized Lefschetz' impact as editor as follows: "The importance to American mathematicians of a first-class journal is that it sets high standards for them to aim at. In this somewhat indirect manner, Lefschetz profoundly affected the development of mathematics in the United States."
Princeton University continued to publish the Annals on its own until 1933, when the Institute for Advanced Study took joint editorial control. Since 1998 it has been available in an electronic edition, alongside its regular print edition. The electronic edition was available without charge, as an open access journal, but since 2008 this is no longer the case. Issues from before 2003 were transferred to the non-free JSTOR archive, and articles are not freely available until 5 years after publication.
Editors
The current editors of the Annals of Mathematics are Helmut Hofer, Nick Katz, Sergiu Klainerman, Fernando Codá Marques, Assaf Naor, Peter Sarnak and Zoltán Szabó (all but Helmut Hofer from Princeton University, with Hofer being a professor at the Institute for Advanced Study and Peter Sarnak also being a professor there as a second affiliation).
Abstracting and indexing
The journal is abstracted and indexed in the Science Citation Index, Current Contents/Physical, Chemical & Earth Sciences, and Scopus. According to the Journal Citation Reports, the journal has a 2020 impact factor of 5.246, ranking it third out of 330 journals in the category "Mathematics".
|
https://en.wikipedia.org/wiki/Antiderivative
|
In calculus, an antiderivative, inverse derivative, primitive function, primitive integral or indefinite integral of a function f is a differentiable function F whose derivative is equal to the original function f. This can be stated symbolically as F′ = f. The process of solving for antiderivatives is called antidifferentiation (or indefinite integration), and its opposite operation is called differentiation, which is the process of finding a derivative. Antiderivatives are often denoted by capital Roman letters such as F and G.
Antiderivatives are related to definite integrals through the second fundamental theorem of calculus: the definite integral of a function over a closed interval where the function is Riemann integrable is equal to the difference between the values of an antiderivative evaluated at the endpoints of the interval.
In physics, antiderivatives arise in the context of rectilinear motion (e.g., in explaining the relationship between position, velocity and acceleration). The discrete equivalent of the notion of antiderivative is antidifference.
Examples
The function F(x) = x³/3 is an antiderivative of f(x) = x², since the derivative of x³/3 is x². And since the derivative of a constant is zero, x² will have an infinite number of antiderivatives, such as x³/3, x³/3 + 1, x³/3 − 2, etc. Thus, all the antiderivatives of x² can be obtained by changing the value of c in F(x) = x³/3 + c, where c is an arbitrary constant known as the constant of integration. Essentially, the graphs of antiderivatives of a given function are vertical translations of each other, with each graph's vertical location depending upon the value of c.
More generally, the power function f(x) = xⁿ has antiderivative F(x) = x^(n+1)/(n+1) if n ≠ −1, and F(x) = ln|x| if n = −1.
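These rules are easy to check with a computer algebra system; a minimal sympy sketch (our own illustration):

from sympy import symbols, integrate, diff

x, C = symbols('x C')

F = integrate(x**2, x)           # x**3/3 (sympy omits the constant c)
assert diff(F + C, x) == x**2    # any added constant differentiates away

print(integrate(x**4, x))        # x**5/5, the power rule for n != -1
print(integrate(1/x, x))         # log(x), the exceptional case n = -1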
In physics, the integration of acceleration yields velocity plus a constant. The constant is the initial velocity term that would be lost upon taking the derivative of velocity, because the derivative of a constant term is zero. This same pattern applies to further integrations and derivatives of motion (position, velocity, acceleration, and so on). Thus, integration produces the relations of acceleration, velocity and displacement:
∫ a dt = v + C_1, and, integrating again, ∫ v dt = s + C_2.
Uses and properties
Antiderivatives can be used to compute definite integrals, using the fundamental theorem of calculus: if F is an antiderivative of the integrable function f over the interval [a, b], then:
∫_a^b f(x) dx = F(b) − F(a).
Because of this, each of the infinitely many antiderivatives of a given function f may be called the "indefinite integral" of f and written using the integral symbol with no bounds:
∫ f(x) dx.
If F is an antiderivative of f, and the function f is defined on some interval, then every other antiderivative G of f differs from F by a constant: there exists a number C such that G(x) = F(x) + C for all x. C is called the constant of integration. If the domain of F is a disjoint union of two or more (open) intervals, then a different constant of integration may be chosen for each of the intervals. For instance
F(x) = −1/x + C_1 for x < 0, and F(x) = −1/x + C_2 for x > 0
is the most general antiderivative of f(x) = 1/x² on its natural domain (−∞, 0) ∪ (0, ∞).
Every continuous function has an antiderivative, and one antideriv
|
https://en.wikipedia.org/wiki/Convex%20uniform%20honeycomb
|
In geometry, a convex uniform honeycomb is a uniform tessellation which fills three-dimensional Euclidean space with non-overlapping convex uniform polyhedral cells.
Twenty-eight such honeycombs are known:
the familiar cubic honeycomb and 7 truncations thereof;
the alternated cubic honeycomb and 4 truncations thereof;
10 prismatic forms based on the uniform plane tilings (11 if including the cubic honeycomb);
5 modifications of some of the above by elongation and/or gyration.
They can be considered the three-dimensional analogue to the uniform tilings of the plane.
The Voronoi diagram of any lattice forms a convex uniform honeycomb in which the cells are zonohedra.
History
1900: Thorold Gosset enumerated the list of semiregular convex polytopes with regular cells (Platonic solids) in his publication On the Regular and Semi-Regular Figures in Space of n Dimensions, including one regular cubic honeycomb, and two semiregular forms with tetrahedra and octahedra.
1905: Alfredo Andreini enumerated 25 of these tessellations.
1991: Norman Johnson's manuscript Uniform Polytopes identified the list of 28.
1994: Branko Grünbaum, in his paper Uniform tilings of 3-space, also independently enumerated all 28, after discovering errors in Andreini's publication. He found that the 1905 paper, which listed 25, had 1 wrong and 4 missing. Grünbaum states in this paper that Norman Johnson deserves priority for achieving the same enumeration in 1991. He also mentions that I. Alexeyev of Russia had contacted him regarding a putative enumeration of these forms, but that Grünbaum was unable to verify this at the time.
2006: George Olshevsky, in his manuscript Uniform Panoploid Tetracombs, along with repeating the derived list of 11 convex uniform tilings and 28 convex uniform honeycombs, adds a further derived list of 143 convex uniform tetracombs (honeycombs of uniform 4-polytopes in 4-space).
Only 14 of the convex uniform polyhedra appear in these patterns:
three of the five Platonic solids (the tetrahedron, cube, and octahedron),
six of the thirteen Archimedean solids (the ones with reflective tetrahedral or octahedral symmetry), and
five of the infinite family of prisms (the 3-, 4-, 6-, 8-, and 12-gonal ones; the 4-gonal prism duplicates the cube).
The icosahedron, snub cube, and square antiprism appear in some alternations, but those honeycombs cannot be realised with all edges unit length.
Names
This set can be called the regular and semiregular honeycombs. It has been called the Archimedean honeycombs by analogy with the convex uniform (non-regular) polyhedra, commonly called Archimedean solids. Recently Conway has suggested naming the set as the Architectonic tessellations and the dual honeycombs as the Catoptric tessellations.
The individual honeycombs are listed with names given to them by Norman Johnson. (Some of the terms used below are defined in Uniform 4-polytope#Geometric derivations for 46 nonprismatic Wythoffian uniform 4-pol
|
https://en.wikipedia.org/wiki/Abelian%20group
|
In mathematics, an abelian group, also called a commutative group, is a group in which the result of applying the group operation to two group elements does not depend on the order in which they are written. That is, the group operation is commutative. With addition as an operation, the integers and the real numbers form abelian groups, and the concept of an abelian group may be viewed as a generalization of these examples. Abelian groups are named after early 19th century mathematician Niels Henrik Abel.
The concept of an abelian group underlies many fundamental algebraic structures, such as fields, rings, vector spaces, and algebras. The theory of abelian groups is generally simpler than that of their non-abelian counterparts, and finite abelian groups are very well understood and fully classified.
Definition
An abelian group is a set , together with an operation that combines any two elements and of to form another element of denoted . The symbol is a general placeholder for a concretely given operation. To qualify as an abelian group, the set and operation, , must satisfy four requirements known as the abelian group axioms (some authors include in the axioms some properties that belong to the definition of an operation: namely that the operation is defined for any ordered pair of elements of , that the result is well-defined, and that the result belongs to ):
Associativity For all , , and in , the equation holds.
Identity element There exists an element in , such that for all elements in , the equation holds.
Inverse element For each in there exists an element in such that , where is the identity element.
Commutativity For all , in , .
A group in which the group operation is not commutative is called a "non-abelian group" or "non-commutative group".
Facts
Notation
There are two main notational conventions for abelian groups – additive and multiplicative.
Generally, the multiplicative notation is the usual notation for groups, while the additive notation is the usual notation for modules and rings. The additive notation may also be used to emphasize that a particular group is abelian, whenever both abelian and non-abelian groups are considered; some notable exceptions are near-rings and partially ordered groups, where an operation is written additively even when non-abelian.
Multiplication table
To verify that a finite group is abelian, a table (matrix) – known as a Cayley table – can be constructed in a similar fashion to a multiplication table. If the group is G = {g_1 = e, g_2, …, g_n} under the operation ⋅, the (i, j) entry of this table contains the product g_i ⋅ g_j.
The group is abelian if and only if this table is symmetric about the main diagonal. This is true since the group is abelian iff g_i ⋅ g_j = g_j ⋅ g_i for all i, j, which is iff the (i, j) entry of the table equals the (j, i) entry for all i, j, i.e. the table is symmetric about the main diagonal.
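This symmetry test is straightforward to implement; a short Python sketch (our own, with hypothetical helper names cayley_table and is_abelian):

import itertools

def cayley_table(elements, op):
    # Entry (i, j) of the table is elements[i] op elements[j].
    return [[op(a, b) for b in elements] for a in elements]

def is_abelian(elements, op):
    # Abelian iff the Cayley table is symmetric about the main diagonal.
    t = cayley_table(elements, op)
    n = len(elements)
    return all(t[i][j] == t[j][i] for i in range(n) for j in range(n))

# The integers mod 5 under addition form an abelian group:
print(is_abelian(range(5), lambda a, b: (a + b) % 5))    # True

# The permutations of three letters under composition do not:
perms = list(itertools.permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
print(is_abelian(perms, compose))                        # False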
Examples
For the integers and the operation addition +, denoted (Z, +), the operation + combines any two integers to form a third integer, addit
|
https://en.wikipedia.org/wiki/Arithmetic%E2%80%93geometric%20mean
|
In mathematics, the arithmetic–geometric mean of two positive real numbers x and y is the mutual limit of a sequence of arithmetic means and a sequence of geometric means:
Begin the sequences with x and y: a_0 = x, g_0 = y.
Then define the two interdependent sequences (a_n) and (g_n) as
a_{n+1} = (a_n + g_n)/2,  g_{n+1} = √(a_n g_n).
These two sequences converge to the same number, the arithmetic–geometric mean of x and y; it is denoted by M(x, y), or sometimes by agm(x, y) or AGM(x, y).
The arithmetic–geometric mean is used in fast algorithms for exponential and trigonometric functions, as well as some mathematical constants, in particular, computing π.
The arithmetic–geometric mean can be extended to complex numbers and when the branches of the square root are allowed to be taken inconsistently, it is, in general, a multivalued function.
Example
To find the arithmetic–geometric mean of a_0 = 24 and g_0 = 6, iterate as follows:
a_1 = (24 + 6)/2 = 15,  g_1 = √(24 · 6) = 12,
and so on. The first five iterations give the following values:
n = 0: a_0 = 24.000000,  g_0 = 6.000000
n = 1: a_1 = 15.000000,  g_1 = 12.000000
n = 2: a_2 = 13.500000,  g_2 = 13.416408
n = 3: a_3 = 13.458204,  g_3 = 13.458139
n = 4: a_4 ≈ 13.45817148,  g_4 ≈ 13.45817148
The number of digits in which a_n and g_n agree approximately doubles with each iteration. The arithmetic–geometric mean of 24 and 6 is the common limit of these two sequences, which is approximately 13.4581714817.
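The iteration is equally short in code; a minimal Python sketch (ours) reproducing the example:

from math import sqrt

def agm(x, y, tol=1e-15):
    a, g = x, y
    while abs(a - g) > tol * a:
        a, g = (a + g) / 2, sqrt(a * g)   # both means updated simultaneously
    return a

print(agm(24, 6))   # 13.458171481725615, the common limit above

The quadratic convergence visible in the table means only a handful of iterations are ever needed at double precision.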
History
The first algorithm based on this sequence pair appeared in the works of Lagrange. Its properties were further analyzed by Gauss.
Properties
The geometric mean of two positive numbers is never bigger than the arithmetic mean (see inequality of arithmetic and geometric means). As a consequence, for n ≥ 1, (g_n) is an increasing sequence, (a_n) is a decreasing sequence, and g_n ≤ M(x, y) ≤ a_n. These are strict inequalities if x ≠ y.
M(x, y) is thus a number between the geometric and arithmetic mean of x and y; it is also between x and y.
If r ≥ 0, then M(rx, ry) = r M(x, y).
There is an integral-form expression for M(x, y):
M(x, y) = (π/2) / ∫_0^{π/2} dθ/√(x² cos²θ + y² sin²θ) = π(x + y) / (4 K((x − y)/(x + y))),
where K(k) is the complete elliptic integral of the first kind:
K(k) = ∫_0^{π/2} dθ/√(1 − k² sin²θ).
Indeed, since the arithmetic–geometric process converges so quickly, it provides an efficient way to compute elliptic integrals via this formula. In engineering, it is used for instance in elliptic filter design.
The arithmetic–geometric mean is connected to the Jacobi theta function by
which upon setting gives
Related concepts
The reciprocal of the arithmetic–geometric mean of 1 and the square root of 2 is called Gauss's constant, after Carl Friedrich Gauss.
In 1799, Gauss proved that
M(1, √2) = π/ϖ
where ϖ is the lemniscate constant.
In 1941, M(1, √2) (and hence Gauss's constant) was proven transcendental by Theodor Schneider. The set is algebraically independent over Q, but the set (where the prime denotes the derivative with respect to the second variable) is not algebraically independent over Q. In fact,
The geometric–harmonic mean GH can be calculated by an analogous method, using sequences of geometric and harmonic means; one finds that GH(x, y) = 1/M(1/x, 1/y) = xy/M(x, y).
The arithmetic–harmonic mean can be similarly defined, but takes the same value as the geometric mean (see section "Calculation" there).
The arithmetic–geometric mean can be used to compute – among others – logarithms, complete and incomplete elliptic integrals of the first and second kind, and Jacobi elliptic functions.
Proof of existence
From the ine
|
https://en.wikipedia.org/wiki/Asymptote
|
In analytic geometry, an asymptote () of a curve is a line such that the distance between the curve and the line approaches zero as one or both of the x or y coordinates tends to infinity. In projective geometry and related contexts, an asymptote of a curve is a line which is tangent to the curve at a point at infinity.
The word asymptote is derived from the Greek ἀσύμπτωτος (asumptōtos) which means "not falling together", from ἀ priv. + σύν "together" + πτωτ-ός "fallen". The term was introduced by Apollonius of Perga in his work on conic sections, but in contrast to its modern meaning, he used it to mean any line that does not intersect the given curve.
There are three kinds of asymptotes: horizontal, vertical and oblique. For curves given by the graph of a function y = f(x), horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to +∞ or −∞. Vertical asymptotes are vertical lines near which the function grows without bound. An oblique asymptote has a slope that is non-zero but finite, such that the graph of the function approaches it as x tends to +∞ or −∞.
More generally, one curve is a curvilinear asymptote of another (as opposed to a linear asymptote) if the distance between the two curves tends to zero as they tend to infinity, although the term asymptote by itself is usually reserved for linear asymptotes.
Asymptotes convey information about the behavior of curves in the large, and determining the asymptotes of a function is an important step in sketching its graph. The study of asymptotes of functions, construed in a broad sense, forms a part of the subject of asymptotic analysis.
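For graphs of elementary functions, asymptotes can be computed with limits; a small sympy sketch (our own illustration, using f(x) = (x² + 1)/x as the example curve):

from sympy import symbols, limit, oo

x = symbols('x')
f = (x**2 + 1) / x

# Oblique asymptote y = m*x + b: m = lim f(x)/x, b = lim (f(x) - m*x).
m = limit(f / x, x, oo)
b = limit(f - m * x, x, oo)
print(m, b)                                      # 1 0, so y = x

# Vertical asymptote at x = 0: the one-sided limits are infinite.
print(limit(f, x, 0, '+'), limit(f, x, 0, '-'))  # oo -oo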
Introduction
The idea that a curve may come arbitrarily close to a line without actually becoming the same may seem to counter everyday experience. The representations of a line and a curve as marks on a piece of paper or as pixels on a computer screen have a positive width. So if they were to be extended far enough they would seem to merge, at least as far as the eye could discern. But these are physical representations of the corresponding mathematical entities; the line and the curve are idealized concepts whose width is 0 (see Line). Therefore, the understanding of the idea of an asymptote requires an effort of reason rather than experience.
Consider the graph of the function f(x) = 1/x shown in this section. The coordinates of the points on the curve are of the form (x, 1/x) where x is a number other than 0. For example, the graph contains the points (1, 1), (2, 0.5), (5, 0.2), (10, 0.1), ... As the values of x become larger and larger, say 100, 1,000, 10,000 ..., putting them far to the right of the illustration, the corresponding values of y, .01, .001, .0001, ..., become infinitesimal relative to the scale shown. But no matter how large x becomes, its reciprocal 1/x is never 0, so the curve never actually touches the x-axis. Similarly, as the values of x become smaller and smaller, say .01, .001, .0001, ..., making them infinitesimal relative to the scale
|
https://en.wikipedia.org/wiki/Arithmetic
|
Arithmetic () is an elementary part of mathematics that consists of the study of the properties of the traditional operations on numbers—addition, subtraction, multiplication, division, exponentiation, and extraction of roots. In the 19th century, Italian mathematician Giuseppe Peano formalized arithmetic with his Peano axioms, which are highly important to the field of mathematical logic today.
History
The prehistory of arithmetic is limited to a small number of artifacts that may indicate the conception of addition and subtraction; the best-known is the Ishango bone from central Africa, dating from somewhere between 20,000 and 18,000 BC, although its interpretation is disputed.
The earliest written records indicate the Egyptians and Babylonians used all the elementary arithmetic operations: addition, subtraction, multiplication, and division, as early as 2000 BC. These artifacts do not always reveal the specific process used for solving problems, but the characteristics of the particular numeral system strongly influence the complexity of the methods. The hieroglyphic system for Egyptian numerals, like the later Roman numerals, descended from tally marks used for counting. In both cases, this origin resulted in values that used a decimal base but did not include positional notation. Complex calculations with Roman numerals required the assistance of a counting board (or the Roman abacus) to obtain the results.
Early number systems that included positional notation were not decimal; these include the sexagesimal (base 60) system for Babylonian numerals and the vigesimal (base 20) system that defined Maya numerals. Because of the place-value concept, the ability to reuse the same digits for different values contributed to simpler and more efficient methods of calculation.
The continuous historical development of modern arithmetic starts with the Hellenistic period of ancient Greece; it originated much later than the Babylonian and Egyptian examples. Prior to the works of Euclid around 300 BC, Greek studies in mathematics overlapped with philosophical and mystical beliefs. Nicomachus is an example of this viewpoint, using the earlier Pythagorean approach to numbers and their relationships to each other in his work, Introduction to Arithmetic.
Greek numerals were used by Archimedes, Diophantus, and others in a positional notation not very different from modern notation. The ancient Greeks lacked a symbol for zero until the Hellenistic period, and they used three separate sets of symbols as digits: one set for the units place, one for the tens place, and one for the hundreds. For the thousands place, they would reuse the symbols for the units place, and so on. Their addition algorithm was identical to the modern method, and their multiplication algorithm was only slightly different. Their long division algorithm was the same, and the digit-by-digit square root algorithm, popularly used as recently as the 20th century, was known to Archimedes (
|
https://en.wikipedia.org/wiki/Algebraic%20closure
|
In mathematics, particularly abstract algebra, an algebraic closure of a field K is an algebraic extension of K that is algebraically closed. It is one of many closures in mathematics.
Using Zorn's lemma or the weaker ultrafilter lemma, it can be shown that every field has an algebraic closure, and that the algebraic closure of a field K is unique up to an isomorphism that fixes every member of K. Because of this essential uniqueness, we often speak of the algebraic closure of K, rather than an algebraic closure of K.
The algebraic closure of a field K can be thought of as the largest algebraic extension of K.
To see this, note that if L is any algebraic extension of K, then the algebraic closure of L is also an algebraic closure of K, and so L is contained within the algebraic closure of K.
The algebraic closure of K is also the smallest algebraically closed field containing K,
because if M is any algebraically closed field containing K, then the elements of M that are algebraic over K form an algebraic closure of K.
The algebraic closure of a field K has the same cardinality as K if K is infinite, and is countably infinite if K is finite.
Examples
The fundamental theorem of algebra states that the algebraic closure of the field of real numbers is the field of complex numbers.
The algebraic closure of the field of rational numbers is the field of algebraic numbers.
There are many countable algebraically closed fields within the complex numbers, and strictly containing the field of algebraic numbers; these are the algebraic closures of transcendental extensions of the rational numbers, e.g. the algebraic closure of Q(π).
For a finite field of prime power order q, the algebraic closure is a countably infinite field that contains a copy of the field of order qn for each positive integer n (and is in fact the union of these copies).
Existence of an algebraic closure and splitting fields
Let S be the set of all monic irreducible polynomials in K[x].
For each f ∈ S of degree d, introduce new variables u_{f,1}, …, u_{f,d}.
Let R be the polynomial ring over K generated by the u_{f,i} for all f ∈ S and all i ≤ deg f. Write
f(x) − ∏_{i=1}^{d} (x − u_{f,i}) = Σ_{j<d} r_{f,j} x^j
with r_{f,j} ∈ R.
Let I be the ideal in R generated by the r_{f,j}. Since I is strictly smaller than R,
Zorn's lemma implies that there exists a maximal ideal M in R that contains I.
The field K1 = R/M has the property that every polynomial f with coefficients in K splits as the product of the x − u_{f,i}, and hence has all roots in K1. In the same way, an extension K2 of K1 can be constructed, etc. The union of all these extensions is the algebraic closure of K, because any polynomial with coefficients in this new field has its coefficients in some Kn with sufficiently large n, and then its roots are in Kn+1, and hence in the union itself.
It can be shown along the same lines that for any subset S of K[x], there exists a splitting field of S over K.
Separable closure
An algebraic closure Kalg of K contains a unique separable extension Ksep of K containing all (algebraic) separable extensions of K within Kalg.
|
https://en.wikipedia.org/wiki/Alternative%20algebra
|
In abstract algebra, an alternative algebra is an algebra in which multiplication need not be associative, only alternative. That is, one must have
x(xy) = (xx)y and (yx)x = y(xx)
for all x and y in the algebra.
Every associative algebra is obviously alternative, but so too are some strictly non-associative algebras such as the octonions.
The associator
Alternative algebras are so named because they are the algebras for which the associator is alternating. The associator is a trilinear map given by
[x, y, z] = (xy)z − x(yz).
By definition, a multilinear map is alternating if it vanishes whenever two of its arguments are equal. The left and right alternative identities for an algebra are equivalent to
[x, x, y] = 0 and [y, x, x] = 0.
Both of these identities together imply that
[x, y, x] = 0
for all x and y. This is equivalent to the flexible identity
(xy)x = x(yx).
The associator of an alternative algebra is therefore alternating. Conversely, any algebra whose associator is alternating is clearly alternative. By symmetry, any algebra which satisfies any two of:
left alternative identity: x(xy) = (xx)y
right alternative identity: (yx)x = y(xx)
flexible identity: (xy)x = x(yx)
is alternative and therefore satisfies all three identities.
An alternating associator is always totally skew-symmetric. That is,
[x_{σ(1)}, x_{σ(2)}, x_{σ(3)}] = sgn(σ) [x_1, x_2, x_3]
for any permutation σ. The converse holds so long as the characteristic of the base field is not 2.
Examples
Every associative algebra is alternative.
The octonions form a non-associative alternative algebra, a normed division algebra of dimension 8 over the real numbers.
More generally, any octonion algebra is alternative.
Non-examples
The sedenions and all higher Cayley–Dickson algebras lose alternativity.
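The contrast between the octonions and the sedenions can be checked numerically via the Cayley–Dickson construction; the following Python sketch is our own illustration (sign conventions for the construction vary between sources), testing the left alternative law x(xy) = (xx)y on random elements:

import random

# Cayley-Dickson doubling over the reals: elements are nested pairs with
# conjugate (a, b)* = (a*, -b) and product (a, b)(c, d) = (ac - d*b, da + bc*).
# Depth 3 gives an 8-dimensional algebra (octonions), depth 4 the sedenions.

def conj(x):
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def sub(x, y):
    return add(x, neg(y))

def mul(x, y):
    if not isinstance(x, tuple):       # x and y always have equal depth here
        return x * y
    (a, b), (c, d) = x, y
    return (sub(mul(a, c), mul(conj(d), b)), add(mul(d, a), mul(b, conj(c))))

def rand(depth):
    return random.uniform(-1, 1) if depth == 0 else (rand(depth - 1), rand(depth - 1))

def size(x):
    return abs(x) if not isinstance(x, tuple) else size(x[0]) + size(x[1])

def left_alternator(x, y):
    return sub(mul(x, mul(x, y)), mul(mul(x, x), y))   # x(xy) - (xx)y

for depth, name in ((3, "octonions"), (4, "sedenions")):
    x, y = rand(depth), rand(depth)
    # ~1e-16 rounding error for octonions; clearly nonzero for sedenions
    print(name, size(left_alternator(x, y)))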
Properties
Artin's theorem states that in an alternative algebra the subalgebra generated by any two elements is associative. Conversely, any algebra for which this is true is clearly alternative. It follows that expressions involving only two variables can be written unambiguously without parentheses in an alternative algebra. A generalization of Artin's theorem states that whenever three elements x, y, z in an alternative algebra associate (i.e., [x, y, z] = 0), the subalgebra generated by those elements is associative.
A corollary of Artin's theorem is that alternative algebras are power-associative, that is, the subalgebra generated by a single element is associative. The converse need not hold: the sedenions are power-associative but not alternative.
The Moufang identities
a(x(ay)) = (axa)y,
((xa)y)a = x(aya),
(ax)(ya) = a(xy)a
hold in any alternative algebra.
In a unital alternative algebra, multiplicative inverses are unique whenever they exist. Moreover, for any invertible element x and all y one has
y = x⁻¹(xy).
This is equivalent to saying the associator [x⁻¹, x, y] vanishes for all such x and y.
If and are invertible then is also invertible with inverse . The set of all invertible elements is therefore closed under multiplication and forms a Moufang loop. This loop of units in an alternative ring or algebra is analogous to the group of units in an associative ring or algebra.
Kleinfeld's theorem states that any simple non-associative alternative ring is a generali
|
https://en.wikipedia.org/wiki/Arithmetic%20function
|
In number theory, an arithmetic, arithmetical, or number-theoretic function is for most authors any function f(n) whose domain is the positive integers and whose range is a subset of the complex numbers. Hardy & Wright include in their definition the requirement that an arithmetical function "expresses some arithmetical property of n".
An example of an arithmetic function is the divisor function whose value at a positive integer n is equal to the number of divisors of n.
There is a larger class of number-theoretic functions that do not fit the above definition, for example, the prime-counting functions. This article provides links to functions of both classes.
Arithmetic functions are often extremely irregular (see table), but some of them have series expansions in terms of Ramanujan's sum.
Multiplicative and additive functions
An arithmetic function a is
completely additive if a(mn) = a(m) + a(n) for all natural numbers m and n;
completely multiplicative if a(mn) = a(m)a(n) for all natural numbers m and n;
Two whole numbers m and n are called coprime if their greatest common divisor is 1, that is, if there is no prime number that divides both of them.
Then an arithmetic function a is
additive if a(mn) = a(m) + a(n) for all coprime natural numbers m and n;
multiplicative if a(mn) = a(m)a(n) for all coprime natural numbers m and n.
Notation
In this article, Σ_p f(p) and Π_p f(p) mean that the sum or product is over all prime numbers:
Σ_p f(p) = f(2) + f(3) + f(5) + ⋯ and Π_p f(p) = f(2) f(3) f(5) ⋯
Similarly, Σ_{p^k} f(p^k) and Π_{p^k} f(p^k) mean that the sum or product is over all prime powers with strictly positive exponent (so k = 0 and hence 1 is not included):
Σ_{p^k} f(p^k) = f(2) + f(3) + f(4) + f(5) + f(7) + f(8) + f(9) + ⋯
The notations Σ_{d|n} f(d) and Π_{d|n} f(d) mean that the sum or product is over all positive divisors of n, including 1 and n. For example, if n = 12, then
Σ_{d|12} f(d) = f(1) + f(2) + f(3) + f(4) + f(6) + f(12).
The notations can be combined: Σ_{p|n} f(p) and Π_{p|n} f(p) mean that the sum or product is over all prime divisors of n. For example, if n = 18, then
Σ_{p|18} f(p) = f(2) + f(3),
and similarly Σ_{p^k|n} f(p^k) and Π_{p^k|n} f(p^k) mean that the sum or product is over all prime powers dividing n. For example, if n = 24, then
Σ_{p^k|24} f(p^k) = f(2) + f(3) + f(4) + f(8).
Ω(n), ω(n), νp(n) – prime power decomposition
The fundamental theorem of arithmetic states that any positive integer n can be represented uniquely as a product of powers of primes: n = p_1^{a_1} ⋯ p_k^{a_k}, where p_1 < p_2 < ⋯ < p_k are primes and the a_j are positive integers. (1 is given by the empty product.)
It is often convenient to write this as an infinite product over all the primes, where all but a finite number have a zero exponent. Define the p-adic valuation ν_p(n) to be the exponent of the highest power of the prime p that divides n. That is, if p is one of the p_i then ν_p(n) = a_i, otherwise it is zero. Then
n = Π_p p^{ν_p(n)}.
In terms of the above the prime omega functions ω and Ω are defined by
ω(n) = k,  Ω(n) = a_1 + a_2 + ⋯ + a_k.
To avoid repetition, whenever possible formulas for the functions listed in this article are given in terms of n and the corresponding pi, ai, ω, and Ω.
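The quantities ν_p(n), ω(n) and Ω(n), as well as the divisor sums defined next, are easy to compute from the factorization; a minimal sympy sketch (our own illustration):

from sympy import factorint, divisors

def nu(p, n):
    # nu_p(n): exponent of the highest power of the prime p dividing n
    return factorint(n).get(p, 0)

def omega(n):
    # omega(n): number of distinct prime factors of n
    return len(factorint(n))

def big_omega(n):
    # Omega(n): number of prime factors of n counted with multiplicity
    return sum(factorint(n).values())

def sigma(k, n):
    # sigma_k(n): sum of the k-th powers of the positive divisors of n
    return sum(d**k for d in divisors(n))

n = 24                                   # 24 = 2**3 * 3
print(factorint(n))                      # {2: 3, 3: 1}
print(omega(n), big_omega(n), nu(2, n))  # 2 4 3
print(sigma(1, n))                       # 1+2+3+4+6+8+12+24 = 60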
Multiplicative functions
σk(n), τ(n), d(n) – divisor sums
σk(n) is the sum of the kth powers of the positive divisors of n, including 1 and n, where k is a complex number.
σ1(n), the sum of the (positive) divis
|
https://en.wikipedia.org/wiki/Ascending%20chain%20condition
|
In mathematics, the ascending chain condition (ACC) and descending chain condition (DCC) are finiteness properties satisfied by some algebraic structures, most importantly ideals in certain commutative rings. These conditions played an important role in the development of the structure theory of commutative rings in the works of David Hilbert, Emmy Noether, and Emil Artin.
The conditions themselves can be stated in an abstract form, so that they make sense for any partially ordered set. This point of view is useful in abstract algebraic dimension theory due to Gabriel and Rentschler.
Definition
A partially ordered set (poset) P is said to satisfy the ascending chain condition (ACC) if no infinite strictly ascending sequence
$a_1 < a_2 < a_3 < \cdots$
of elements of P exists.
Equivalently, every weakly ascending sequence
$a_1 \leq a_2 \leq a_3 \leq \cdots$
of elements of P eventually stabilizes, meaning that there exists a positive integer n such that
$a_n = a_{n+1} = a_{n+2} = \cdots$
Similarly, P is said to satisfy the descending chain condition (DCC) if there is no infinite descending chain of elements of P. Equivalently, every weakly descending sequence
$a_1 \geq a_2 \geq a_3 \geq \cdots$
of elements of P eventually stabilizes.
Comments
Assuming the axiom of dependent choice, the descending chain condition on (possibly infinite) poset P is equivalent to P being well-founded: every nonempty subset of P has a minimal element (also called the minimal condition or minimum condition). A totally ordered set that is well-founded is a well-ordered set.
Similarly, the ascending chain condition is equivalent to P being converse well-founded (again, assuming dependent choice): every nonempty subset of P has a maximal element (the maximal condition or maximum condition).
Every finite poset satisfies both the ascending and descending chain conditions, and thus is both well-founded and converse well-founded.
Example
Consider the ring
$\mathbb{Z} = \{\ldots, -3, -2, -1, 0, 1, 2, 3, \ldots\}$
of integers. Each ideal of $\mathbb{Z}$ consists of all multiples of some number $n$. For example, the ideal
$6\mathbb{Z} = \{\ldots, -12, -6, 0, 6, 12, \ldots\}$
consists of all multiples of $6$. Let
$I = 2\mathbb{Z}$
be the ideal consisting of all multiples of $2$. The ideal $6\mathbb{Z}$ is contained inside the ideal $I = 2\mathbb{Z}$, since every multiple of $6$ is also a multiple of $2$. In turn, the ideal $2\mathbb{Z}$ is contained in the ideal $\mathbb{Z}$, since every multiple of $2$ is a multiple of $1$. However, at this point there is no larger ideal; we have "topped out" at $\mathbb{Z}$.
In general, if $I_1, I_2, I_3, \ldots$ are ideals of $\mathbb{Z}$ such that $I_1$ is contained in $I_2$, $I_2$ is contained in $I_3$, and so on, then there is some $n$ for which $I_n = I_m$ for all $m \geq n$. That is, after some point all the ideals are equal to each other. Therefore, the ideals of $\mathbb{Z}$ satisfy the ascending chain condition, where ideals are ordered by set inclusion. Hence $\mathbb{Z}$ is a Noetherian ring.
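This stabilization is concrete enough to check mechanically. In the sketch below (ours, not from the article), an ideal $n\mathbb{Z}$ is represented by its positive generator $n$; a chain $I_1 \subseteq I_2 \subseteq \cdots$ then corresponds to a sequence of generators in which each divides the one before it:

```python
def ascending_chain(generators):
    """Check g_i * Z is contained in g_{i+1} * Z, i.e. g_{i+1} divides g_i."""
    return all(b != 0 and a % b == 0 for a, b in zip(generators, generators[1:]))

chain = [18, 6, 2, 2, 2]   # 18Z ⊆ 6Z ⊆ 2Z ⊆ 2Z ⊆ 2Z
assert ascending_chain(chain)
# The generators can only strictly decrease while the chain strictly ascends,
# so a strictly ascending chain of ideals of Z must be finite.
```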
See also
Artinian
Ascending chain condition for principal ideals
Krull dimension
Maximal condition on congruences
Noetherian
Notes
Citations
References
External links
Commutative algebra
Order theory
Wellfoundedness
|
https://en.wikipedia.org/wiki/Baseball%20statistics
|
Baseball statistics play an important role in evaluating the progress of a player or team.
Since the flow of a baseball game has natural breaks to it, and normally players act individually rather than performing in clusters, the sport lends itself to easy record-keeping and statistics. Statistics have been recorded since the game's earliest beginnings as a distinct sport in the middle of the nineteenth century, and as such are extensively available from leagues such as the National Association of Professional Base Ball Players and the Negro leagues, although the consistency with which these records have been kept, the standards by which they were calculated, and their accuracy have varied.
Since the National League (which along with the American League constitutes contemporary Major League Baseball) was founded in 1876, statistics in the most elite levels of professional baseball have been kept to a reasonably consistent standard which has continually evolved in tandem with advancement in available technology.
Development
The practice of keeping records of player achievements was started in the 19th century by Henry Chadwick. Based on his experience with the sport of cricket, Chadwick devised the predecessors to modern-day statistics including batting average, runs scored, and runs allowed.
Traditionally, statistics such as batting average (the number of hits divided by the number of at bats) and earned run average (the average number of earned runs allowed by a pitcher per nine innings) have dominated attention in the statistical world of baseball. However, the recent advent of sabermetrics has created statistics drawing from a greater breadth of player performance measures and playing field variables. Sabermetrics and comparative statistics attempt to provide an improved measure of a player's performance and contributions to his team from year to year, frequently against a statistical performance average.
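The two traditional statistics just defined reduce to one-line formulas; a small Python sketch with made-up numbers (names and stat line are ours, not from the article):

```python
def batting_average(hits, at_bats):
    """Batting average: hits divided by at bats."""
    return hits / at_bats

def era(earned_runs, innings_pitched):
    """Earned run average: earned runs allowed per nine innings."""
    return 9 * earned_runs / innings_pitched

# Illustrative (invented) season line:
print(f"{batting_average(180, 600):.3f}")  # 0.300
print(f"{era(70, 210):.2f}")               # 3.00
```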
Comprehensive, historical baseball statistics were difficult for the average fan to access until 1951, when researcher Hy Turkin published The Complete Encyclopedia of Baseball. In 1969, Macmillan Publishing printed its first Baseball Encyclopedia, using a computer to compile statistics for the first time. Known as "Big Mac", the encyclopedia became the standard baseball reference until 1988, when Total Baseball was released by Warner Books using more sophisticated technology. The publication of Total Baseball led to the discovery of several "phantom ballplayers", such as Lou Proctor, who did not belong in official record books and were removed.
Use
Throughout modern baseball, a few core statistics have been traditionally referenced – batting average, RBI, and home runs. To this day, a player who leads the league in all of these three statistics earns the "Triple Crown". For pitchers, wins, ERA, and strikeouts are the most often-cited statistics, and a pitcher leading his league in these statistics may also be referred to a
|
https://en.wikipedia.org/wiki/List%20of%20Major%20League%20Baseball%20career%20total%20bases%20leaders
|
In baseball statistics, total bases (TB) is the number of bases a player has gained with hits. It is a weighted sum for which the weight value is 1 for a single, 2 for a double, 3 for a triple and 4 for a home run. Only bases attained from hits count toward this total. Reaching base by other means (such as a base on balls) or advancing further after the hit (such as when a subsequent batter gets a hit) does not increase the player's total bases.
The total bases divided by the number of at bats is the player's slugging average.
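A quick Python sketch of the two definitions above, with an invented stat line (names and numbers ours):

```python
def total_bases(singles, doubles, triples, home_runs):
    """Weighted sum of hits: 1, 2, 3 and 4 bases respectively."""
    return singles + 2 * doubles + 3 * triples + 4 * home_runs

def slugging(singles, doubles, triples, home_runs, at_bats):
    """Slugging average: total bases divided by at bats."""
    return total_bases(singles, doubles, triples, home_runs) / at_bats

# Illustrative line: 100 singles, 30 doubles, 5 triples, 25 HR in 500 AB
print(total_bases(100, 30, 5, 25))             # 275
print(f"{slugging(100, 30, 5, 25, 500):.3f}")  # 0.550
```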
Hank Aaron is the career leader in total bases with 6,856. Albert Pujols (6,211), Stan Musial (6,134), and Willie Mays (6,080) are the only other players with at least 6,000 career total bases.
As of October 2023, no active players are in the top 100 for career total bases. The active leader is Nelson Cruz, in 113th with 3,847.
Key
List
Stats updated as of October 1, 2023.
Notes
External links
Baseball Reference – Career Leaders & Records for Total Bases
Total
Major League Baseball statistics
|
https://en.wikipedia.org/wiki/Hit%20%28baseball%29
|
In baseball statistics, a hit (denoted by H), also called a base hit, is credited to a batter when the batter safely reaches or passes first base after hitting the ball into fair territory with neither the benefit of an error nor a fielder's choice.
Scoring a hit
To achieve a hit, the batter must reach first base before any fielder can either tag him with the ball, throw to another player protecting the base before the batter reaches it, or tag first base while carrying the ball. The hit is scored the moment the batter reaches first base safely; if he is put out while attempting to stretch his hit to a double or triple or home run on the same play, he still gets credit for a hit (according to the last base he reached safely on the play).
If a batter reaches first base because of offensive interference by a preceding runner (including if a preceding runner is hit by a batted ball), he is also credited with a hit.
Types of hits
A hit for one base is called a single, for two bases a double, and for three bases a triple. A home run is also scored as a hit. Doubles, triples, and home runs are also called extra base hits.
An "infield hit" is a hit where the ball does not leave the infield. Infield hits are uncommon by nature, and most often earned by speedy runners.
Pitching a no-hitter
A no-hitter is a game in which one of the teams prevented the other from getting a hit. Throwing a no-hitter is rare and considered an extraordinary accomplishment for a pitcher or pitching staff. In most cases in the professional game, no-hitters are accomplished by a single pitcher who throws a complete game. A pitcher who throws a no-hitter could still allow runners to reach base safely, by way of walks, errors, hit batsmen, or batter reaching base due to interference or obstruction. If the pitcher allows no runners to reach base in any manner whatsoever (hit, walk, hit batsman, error, etc.), the no-hitter is a perfect game.
1887 discrepancy
In 1887, Major League Baseball counted bases on balls (walks) as hits. The result was skyrocketing batting averages, including some near .500; Tip O'Neill of the St. Louis Browns batted .485 that season, which would still be a major league record if recognized. The experiment was abandoned the following season.
There is controversy regarding how the records of 1887 should be interpreted. The number of legitimate walks and at-bats are known for all players that year, so computing averages using the same method as in other years is straightforward. In 1968, Major League Baseball formed a Special Baseball Records Committee to resolve this (and other) issues. The Committee ruled that walks in 1887 should not be counted as hits. In 2000, Major League Baseball reversed its decision, ruling that the statistics which were recognized in each year's official records should stand, even in cases where they were later proven incorrect. Most current sources list O'Neill's 1887 average as .435, as calculated by omitting his walks
|
https://en.wikipedia.org/wiki/On-base%20percentage
|
In baseball statistics, on-base percentage (OBP) measures how frequently a batter reaches base. An official Major League Baseball (MLB) statistic since 1984, it is sometimes referred to as on-base average (OBA), as it is rarely presented as a true percentage.
Generally defined as "how frequently a batter reaches base per plate appearance", OBP is specifically calculated as the ratio of a batter's times on base (the sum of hits, bases on balls, and times hit by pitch) to the sum of at bats, bases on balls, hit by pitch, and sacrifice flies. OBP does not credit the batter for reaching base on fielding errors, fielder's choice, uncaught third strikes, fielder's obstruction, or catcher's interference.
OBP is added to slugging average (SLG) to determine on-base plus slugging (OPS).
The OBP of all batters faced by one pitcher or team is referred to as "on-base against".
On-base percentage is calculable for professional teams dating back to the first year of National Association of Professional Base Ball Players competition in 1871, because the component values of its formula have been recorded in box scores ever since.
History
The statistic was invented in the late 1940s by Brooklyn Dodgers statistician Allan Roth with then-Dodgers general manager Branch Rickey. In 1954, Rickey, who was then the general manager of the Pittsburgh Pirates, was featured in a Life Magazine graphic in which the formula for on-base percentage was shown as the first component of an all-encompassing "offense" equation. However, it was not named as on-base percentage, and there is little evidence that Roth's statistic was taken seriously at the time by the baseball community at large.
On-base percentage became an official MLB statistic in 1984. Its perceived importance jumped after the influential 2003 book Moneyball highlighted Oakland Athletics general manager Billy Beane's focus on the statistic. Many baseball observers, particularly those influenced by the field of sabermetrics, now consider on-base percentage superior to the statistic traditionally used to measure offensive skill, batting average, which accounts for hits but ignores other ways a batter can reach base.
Overview
Traditionally, players with the best on-base percentages bat as leadoff hitter, unless they are power hitters, who traditionally bat slightly lower in the batting order. The league average for on-base percentage in Major League Baseball has varied considerably over time; at its peak in the late 1990s, it was around .340, whereas it was typically .300 during the dead-ball era. On-base percentage can also vary quite considerably from player to player. The highest career OBP of a batter with more than 3,000 plate appearances is .482 by Ted Williams. The lowest is by Bill Bergen, who had an OBP of .194.
On-base percentage is calculated using this formula:
$\text{OBP} = \frac{H + BB + HBP}{AB + BB + HBP + SF}$
where
H = Hits
BB = Bases on Balls (Walks)
HBP = Times Hit By Pitch
AB = At bats
SF = Sacrifice flies
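A direct transcription of the formula into Python, with a made-up season line for illustration (names and numbers ours):

```python
def obp(h, bb, hbp, ab, sf):
    """On-base percentage: times on base over the defined plate appearances."""
    return (h + bb + hbp) / (ab + bb + hbp + sf)

# Illustrative (invented) season: 150 H, 80 BB, 5 HBP in 500 AB with 5 SF
print(f"{obp(150, 80, 5, 500, 5):.3f}")  # 0.398
```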
In certain unofficial calculations, the den
|
https://en.wikipedia.org/wiki/Binary
|
Binary may refer to:
Science and technology
Mathematics
Binary number, a representation of numbers using only two digits (0 and 1)
Binary function, a function that takes two arguments
Binary operation, a mathematical operation that takes two arguments
Binary relation, a relation involving two elements
Binary-coded decimal, a method for encoding decimal digits in binary sequences
Finger binary, a system for counting in binary numbers on the fingers of human hands
Computing
Binary code, the digital representation of text and data
Bit, or binary digit, the basic unit of information in computers
Binary file, composed of something other than human-readable text
Executable, a type of binary file that contains machine code for the computer to execute
Binary tree, a computer tree data structure in which each node has at most two children
Astronomy
Binary star, a star system with two stars in it
Binary planet, two planetary bodies of comparable mass orbiting each other
Binary asteroid, two asteroids orbiting each other
Biology
Binary fission, the splitting of a single-celled organism into two daughter cells
Chemistry
Binary phase, a chemical compound containing two different chemical elements
Arts and entertainment
Binary (comics), a superheroine in the Marvel Universe
Binary (Doctor Who audio)
Music
Binary form, a way of structuring a piece of music
Binary (Ani DiFranco album), 2017
Binary (Kay Tse album), 2008
"Binary" (song), a 2007 single by Assemblage 23
Novel
Binary (novel), a 1972 novel by Michael Crichton (writing as John Lange)
Binary, an evil organization in the novel InterWorld
Other uses
Binary opposition, polar opposites, often ignoring the middle ground
Gender binary, the classification of sex and gender into two distinct and disconnected forms of masculine and feminine
See also
Binary logic (disambiguation)
Binomial (disambiguation)
Boolean (disambiguation)
Secondary (disambiguation)
Ternary (disambiguation)
Unary (disambiguation)
|
https://en.wikipedia.org/wiki/Binomial%20distribution
|
In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: success (with probability p) or failure (with probability $q = 1 - p$). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.
The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used.
Definitions
Probability mass function
In general, if the random variable X follows the binomial distribution with parameters $n \in \mathbb{N}$ and $p \in [0, 1]$, we write $X \sim B(n, p)$. The probability of getting exactly $k$ successes in $n$ independent Bernoulli trials (with the same rate $p$) is given by the probability mass function:
$f(k, n, p) = \Pr(X = k) = \binom{n}{k} p^k (1 - p)^{n - k}$
for $k = 0, 1, 2, \ldots, n$, where
$\binom{n}{k} = \frac{n!}{k!\,(n - k)!}$
is the binomial coefficient, hence the name of the distribution. The formula can be understood as follows: $k$ successes occur with probability $p^k$ and $n - k$ failures occur with probability $(1 - p)^{n - k}$. However, the $k$ successes can occur anywhere among the $n$ trials, and there are $\binom{n}{k}$ different ways of distributing $k$ successes in a sequence of $n$ trials.
In creating reference tables for binomial distribution probability, usually the table is filled in up to $n/2$ values. This is because for $k > n/2$, the probability can be calculated by its complement as
$f(k, n, p) = f(n - k, n, 1 - p).$
Looking at the expression $f(k, n, p)$ as a function of $k$, there is a $k$ value that maximizes it. This $k$ value can be found by calculating
$\frac{f(k + 1, n, p)}{f(k, n, p)} = \frac{(n - k)\,p}{(k + 1)(1 - p)}$
and comparing it to 1. There is always an integer $M$ that satisfies
$(n + 1)p - 1 \leq M < (n + 1)p.$
$f(k, n, p)$ is monotone increasing for $k < M$ and monotone decreasing for $k > M$, with the exception of the case where $(n + 1)p$ is an integer. In this case, there are two values for which $f$ is maximal: $(n + 1)p$ and $(n + 1)p - 1$. $M$ is the most probable outcome (that is, the most likely, although this can still be unlikely overall) of the Bernoulli trials and is called the mode.
Example
Suppose a biased coin comes up heads with probability 0.3 when tossed. The probability of seeing exactly 4 heads in 6 tosses is
$f(4, 6, 0.3) = \binom{6}{4} 0.3^4 (1 - 0.3)^{6 - 4} = 15 \cdot 0.0081 \cdot 0.49 \approx 0.0595.$
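This arithmetic, and the incomplete-beta identity for the cumulative distribution function quoted in the next section, can be spot-checked numerically. The sketch below (ours) assumes SciPy is available for its regularized incomplete beta function betainc:

```python
from math import comb
from scipy.special import betainc  # regularized incomplete beta I_x(a, b)

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials: C(n,k) p^k (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    """Pr(X <= k) by direct summation of the mass function."""
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

print(binom_pmf(4, 6, 0.3))  # 0.059535

# Spot-check the identity F(k; n, p) = I_{1-p}(n - k, k + 1) for k < n.
k, n, p = 4, 6, 0.3
assert abs(binom_cdf(k, n, p) - betainc(n - k, k + 1, 1 - p)) < 1e-12
```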
Cumulative distribution function
The cumulative distribution function can be expressed as:
$F(k; n, p) = \Pr(X \leq k) = \sum_{i = 0}^{\lfloor k \rfloor} \binom{n}{i} p^i (1 - p)^{n - i},$
where $\lfloor k \rfloor$ is the "floor" under $k$, i.e. the greatest integer less than or equal to $k$.
It can also be represented in terms of the regularized incomplete beta function, as follows:
$F(k; n, p) = I_{1 - p}(n - k,\, k + 1) = (n - k) \binom{n}{k} \int_0^{1 - p} t^{n - k - 1} (1 - t)^k \, dt,$
which
|
https://en.wikipedia.org/wiki/Biostatistics
|
Biostatistics (also known as biometry) is a branch of statistics that applies statistical methods to a wide range of topics in biology. It encompasses the design of biological experiments, the collection and analysis of data from those experiments and the interpretation of the results.
History
Biostatistics and genetics
Biostatistical modeling forms an important part of numerous modern biological theories. Genetics studies, since their beginning, used statistical concepts to understand observed experimental results. Some genetics scientists even contributed statistical advances with the development of methods and tools. Gregor Mendel started the genetics studies investigating genetics segregation patterns in families of peas and used statistics to explain the collected data. In the early 1900s, after the rediscovery of Mendel's Mendelian inheritance work, there were gaps in understanding between genetics and evolutionary Darwinism. Francis Galton tried to expand Mendel's discoveries with human data and proposed a different model with fractions of the heredity coming from each ancestor, composing an infinite series. He called this the theory of the "Law of Ancestral Heredity". His ideas were strongly disputed by William Bateson, who followed Mendel's conclusion that genetic inheritance was exclusively from the parents, half from each of them. This led to a vigorous debate between the biometricians, who supported Galton's ideas, such as Raphael Weldon, Arthur Dukinfield Darbishire and Karl Pearson, and the Mendelians, who supported Bateson's (and Mendel's) ideas, such as Charles Davenport and Wilhelm Johannsen. Later, biometricians could not reproduce Galton's conclusions in different experiments, and Mendel's ideas prevailed. By the 1930s, models built on statistical reasoning had helped to resolve these differences and to produce the neo-Darwinian modern evolutionary synthesis.
Solving these differences also allowed the definition of the concept of population genetics and brought together genetics and evolution. The three leading figures in the establishment of population genetics and this synthesis all relied on statistics and developed its use in biology.
Ronald Fisher worked alongside statistician Betty Allan developing several basic statistical methods in support of his work studying the crop experiments at Rothamsted Research, published in Fisher's books Statistical Methods for Research Workers (1925) and The Genetical Theory of Natural Selection (1930), as well as Allan's scientific papers. Fisher went on to give many contributions to genetics and statistics. Some of them include the ANOVA, p-value concepts, Fisher's exact test and Fisher's equation for population dynamics. He is credited for the sentence "Natural selection is a mechanism for generating an exceedingly high degree of improbability".
Sewall G. Wright developed F-statistics and methods of computing them, and defined the inbreeding coefficient.
J. B. S. Haldane's book, The Causes of Evoluti
|
https://en.wikipedia.org/wiki/Binary%20relation
|
In mathematics, a binary relation associates elements of one set, called the domain, with elements of another set, called the codomain. A binary relation over sets $X$ and $Y$ is a set of ordered pairs $(x, y)$ consisting of elements $x$ from $X$ and $y$ from $Y$. It is a generalization of the more widely understood idea of a unary function. It encodes the common concept of relation: an element $x$ is related to an element $y$, if and only if the pair $(x, y)$ belongs to the set of ordered pairs that defines the binary relation. A binary relation is the most studied special case $n = 2$ of an $n$-ary relation over sets $X_1, \ldots, X_n$, which is a subset of the Cartesian product $X_1 \times \cdots \times X_n$.
An example of a binary relation is the "divides" relation over the set of prime numbers $\mathbb{P}$ and the set of integers $\mathbb{Z}$, in which each prime $p$ is related to each integer $z$ that is a multiple of $p$, but not to an integer that is not a multiple of $p$. In this relation, for instance, the prime number 2 is related to numbers such as −4, 0, 6, 10, but not to 1 or 9, just as the prime number 3 is related to 0, 6, and 9, but not to 4 or 13.
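A finite fragment of this "divides" relation can be written down explicitly as a set of ordered pairs; a small Python sketch (illustrative, not from the article):

```python
primes = {2, 3, 5}
integers = set(range(-4, 14))

# The relation as a set of ordered pairs (p, z) with p dividing z.
divides = {(p, z) for p in primes for z in integers if z % p == 0}

assert (2, 6) in divides and (3, 9) in divides
assert (2, 1) not in divides and (3, 13) not in divides
```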
Binary relations are used in many branches of mathematics to model a wide variety of concepts. These include, among others:
the "is greater than", "is equal to", and "divides" relations in arithmetic;
the "is congruent to" relation in geometry;
the "is adjacent to" relation in graph theory;
the "is orthogonal to" relation in linear algebra.
A function may be defined as a special kind of binary relation. Binary relations are also heavily used in computer science.
A binary relation over sets $X$ and $Y$ is an element of the power set of $X \times Y$. Since the latter set is ordered by inclusion (⊆), each relation has a place in the lattice of subsets of $X \times Y$. A binary relation is called a homogeneous relation when $X = Y$. A binary relation is also called a heterogeneous relation when it is not necessary that $X = Y$.
Since relations are sets, they can be manipulated using set operations, including union, intersection, and complementation, and satisfying the laws of an algebra of sets. Beyond that, operations like the converse of a relation and the composition of relations are available, satisfying the laws of a calculus of relations, for which there are textbooks by Ernst Schröder, Clarence Lewis, and Gunther Schmidt. A deeper analysis of relations involves decomposing them into subsets called concepts, and placing them in a complete lattice.
In some systems of axiomatic set theory, relations are extended to classes, which are generalizations of sets. This extension is needed for, among other things, modeling the concepts of "is an element of" or "is a subset of" in set theory, without running into logical inconsistencies such as Russell's paradox.
The terms correspondence, dyadic relation and two-place relation are synonyms for binary relation, though some authors use the term "binary relation" for any subset of a Cartesian product $X \times Y$ without reference to $X$ and $Y$, and reserve the term "correspondence" for a binary relation with reference to $X$ and $Y$.
Defini
|
https://en.wikipedia.org/wiki/Binary%20function
|
In mathematics, a binary function (also called bivariate function, or function of two variables) is a function that takes two inputs.
Precisely stated, a function $f$ is binary if there exist sets $X, Y, Z$ such that
$f \colon X \times Y \to Z$
where $X \times Y$ is the Cartesian product of $X$ and $Y$.
Alternative definitions
Set-theoretically, a binary function can be represented as a subset of the Cartesian product $X \times Y \times Z$, where $(x, y, z)$ belongs to the subset if and only if $f(x, y) = z$.
Conversely, a subset $R$ defines a binary function if and only if for any $x \in X$ and $y \in Y$, there exists a unique $z \in Z$ such that $(x, y, z)$ belongs to $R$.
$f(x, y)$ is then defined to be this $z$.
Alternatively, a binary function may be interpreted as simply a function from $X \times Y$ to $Z$.
Even when thought of this way, however, one generally writes $f(x, y)$ instead of $f((x, y))$.
(That is, the same pair of parentheses is used to indicate both function application and the formation of an ordered pair.)
Examples
Division of whole numbers can be thought of as a function. If $\mathbb{Z}$ is the set of integers, $\mathbb{N}^+$ is the set of natural numbers (except for zero), and $\mathbb{Q}$ is the set of rational numbers, then division is a binary function $f : \mathbb{Z} \times \mathbb{N}^+ \to \mathbb{Q}$.
Another example is that of inner products, or more generally functions of the form $(x, y) \mapsto x^{\mathrm{T}} M y$, where $x$, $y$ are real-valued vectors of appropriate size and $M$ is a matrix. If $M$ is a positive definite matrix, this yields an inner product.
Functions of two real variables
Functions whose domain is a subset of $\mathbb{R}^2$ are often also called functions of two variables even if their domain does not form a rectangle and thus is not the Cartesian product of two sets.
Restrictions to ordinary functions
In turn, one can also derive ordinary functions of one variable from a binary function.
Given any element $x \in X$, there is a function $f_x$, or $f(x, \cdot)$, from $Y$ to $Z$, given by $f_x(y) := f(x, y)$.
Similarly, given any element $y \in Y$, there is a function $f^y$, or $f(\cdot, y)$, from $X$ to $Z$, given by $f^y(x) := f(x, y)$. In computer science, this identification between a function from $X \times Y$ to $Z$ and a function from $X$ to $Z^Y$, where $Z^Y$ is the set of all functions from $Y$ to $Z$, is called currying.
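A minimal Python sketch of currying (function names ours): the binary function f is identified with a function that returns the one-variable section $f_x$ described above:

```python
from typing import Callable

def curry(f: Callable[[int, int], int]) -> Callable[[int], Callable[[int], int]]:
    """Identify f : X x Y -> Z with a function X -> (Y -> Z)."""
    return lambda x: (lambda y: f(x, y))

def f(x: int, y: int) -> int:
    return x * 10 + y

f_3 = curry(f)(3)              # the section f_x with x = 3 held fixed
assert f_3(7) == f(3, 7) == 37
```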
Generalisations
The various concepts relating to functions can also be generalised to binary functions.
For example, the division example above is surjective (or onto) because every rational number may be expressed as a quotient of an integer and a natural number.
This example is injective in each input separately, because the functions $f_x$ (for $x \neq 0$) and $f^y$ are injective.
However, it's not injective in both variables simultaneously, because (for example) $f(2, 4) = f(1, 2) = \tfrac{1}{2}$.
One can also consider partial binary functions, which may be defined only for certain values of the inputs.
For example, the division example above may also be interpreted as a partial binary function from Z and N to Q, where N is the set of all natural numbers, including zero.
But this function is undefined when the second input is zero.
A binary operation is a binary function where the sets X, Y, and Z are all equal; binary operations are often used to define algebraic structures.
In linear algebra, a bilinear transformation is a binary function where the sets X, Y, a
|
https://en.wikipedia.org/wiki/Binary%20operation
|
In mathematics, a binary operation or dyadic operation is a rule for combining two elements (called operands) to produce another element. More formally, a binary operation is an operation of arity two.
More specifically, an internal binary operation on a set is a binary operation whose two domains and the codomain are the same set. Examples include the familiar arithmetic operations of addition, subtraction, and multiplication. Other examples are readily found in different areas of mathematics, such as vector addition, matrix multiplication, and conjugation in groups.
An operation of arity two that involves several sets is sometimes also called a binary operation. For example, scalar multiplication of vector spaces takes a scalar and a vector to produce a vector, and scalar product takes two vectors to produce a scalar. Such binary operations may also be called binary functions.
Binary operations are the keystone of most structures that are studied in algebra, in particular in semigroups, monoids, groups, rings, fields, and vector spaces.
Terminology
More precisely, a binary operation on a set $S$ is a mapping of the elements of the Cartesian product $S \times S$ to $S$:
$f \colon S \times S \to S.$
Because the result of performing the operation on a pair of elements of is again an element of , the operation is called a closed (or internal) binary operation on (or sometimes expressed as having the property of closure).
If $f$ is not a function but a partial function, then $f$ is called a partial binary operation. For instance, division of real numbers is a partial binary operation, because one can't divide by zero: $\frac{a}{0}$ is undefined for every real number $a$. In both model theory and classical universal algebra, binary operations are required to be defined on all elements of $S \times S$. However, partial algebras generalize universal algebras to allow partial operations.
Sometimes, especially in computer science, the term binary operation is used for any binary function.
Properties and examples
Typical examples of binary operations are the addition ($+$) and multiplication ($\times$) of numbers and matrices as well as composition of functions on a single set.
For instance,
On the set of real numbers $\mathbb{R}$, $f(a, b) = a + b$ is a binary operation since the sum of two real numbers is a real number.
On the set of natural numbers $\mathbb{N}$, $f(a, b) = a + b$ is a binary operation since the sum of two natural numbers is a natural number. This is a different binary operation than the previous one since the sets are different.
On the set $M(2, \mathbb{R})$ of $2 \times 2$ matrices with real entries, $f(A, B) = A + B$ is a binary operation since the sum of two such matrices is a $2 \times 2$ matrix.
On the set $M(2, \mathbb{R})$ of $2 \times 2$ matrices with real entries, $f(A, B) = AB$ is a binary operation since the product of two such matrices is a $2 \times 2$ matrix.
For a given set $S$, let $F$ be the set of all functions $h \colon S \to S$. Define $f \colon F \times F \to F$ by $f(g_1, g_2)(x) = (g_1 \circ g_2)(x) = g_1(g_2(x))$ for all $x \in S$, the composition of the two functions $g_1$ and $g_2$ in $F$. Then $f$ is a binary operation since the composition of the two functions is again a function on the set $S$ (that is, a member of $F$).
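The closure property that distinguishes an internal binary operation can be checked by brute force on small finite sets; a Python sketch with made-up examples (ours, not from the article):

```python
def closed_under(op, s):
    """Check that a binary operation op maps S x S into S (closure)."""
    return all(op(a, b) in s for a in s for b in s)

z5 = set(range(5))
add_mod5 = lambda a, b: (a + b) % 5
assert closed_under(add_mod5, z5)            # an internal binary operation

naturals = set(range(50))
subtract = lambda a, b: a - b
assert not closed_under(subtract, naturals)  # subtraction is not closed on N
```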
Many binary operations of interest in both algebra and
|
https://en.wikipedia.org/wiki/Boolean%20algebra%20%28structure%29
|
In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets, or its elements can be viewed as generalized truth values. It is also a special case of a De Morgan algebra and a Kleene algebra (with involution).
Every Boolean algebra gives rise to a Boolean ring, and vice versa, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨). However, the theory of Boolean rings has an inherent asymmetry between the two operators, while the axioms and theorems of Boolean algebra express the symmetry of the theory described by the duality principle.
History
The term "Boolean algebra" honors George Boole (1815–1864), a self-educated English mathematician. He introduced the algebraic system initially in a small pamphlet, The Mathematical Analysis of Logic, published in 1847 in response to an ongoing public controversy between Augustus De Morgan and William Hamilton, and later as a more substantial book, The Laws of Thought, published in 1854. Boole's formulation differs from that described above in some important respects. For example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s, and with Garrett Birkhoff's 1940 Lattice Theory. In the 1960s, Paul Cohen, Dana Scott, and others found deep new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing and Boolean-valued models.
Definition
A Boolean algebra is a set A, equipped with two binary operations ∧ (called "meet" or "and"), ∨ (called "join" or "or"), a unary operation ¬ (called "complement" or "not") and two elements 0 and 1 in A (called "bottom" and "top", or "least" and "greatest" element, also denoted by the symbols ⊥ and ⊤, respectively), such that for all elements a, b and c of A, the following axioms hold:
a ∨ (b ∨ c) = (a ∨ b) ∨ c and a ∧ (b ∧ c) = (a ∧ b) ∧ c (associativity)
a ∨ b = b ∨ a and a ∧ b = b ∧ a (commutativity)
a ∨ (a ∧ b) = a and a ∧ (a ∨ b) = a (absorption)
a ∨ 0 = a and a ∧ 1 = a (identity)
a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c) and a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c) (distributivity)
a ∨ ¬a = 1 and a ∧ ¬a = 0 (complements)
Note, however, that the absorption law and even the associativity law can be excluded from the set of axioms as they can be derived from the other axioms (see Proven properties).
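The axioms are easy to verify exhaustively for the two-element Boolean algebra $\{0, 1\}$, with meet, join, and complement given by the usual bit operations. A Python sketch (illustrative only; it checks one distributivity and one absorption law for brevity):

```python
from itertools import product

# The two-element Boolean algebra: A = {0, 1}.
A = (0, 1)
meet = lambda a, b: a & b
join = lambda a, b: a | b
comp = lambda a: 1 - a

for a, b, c in product(A, repeat=3):
    assert join(a, join(b, c)) == join(join(a, b), c)           # associativity
    assert join(a, b) == join(b, a)                             # commutativity
    assert join(a, meet(a, b)) == a                             # absorption
    assert join(a, 0) == a and meet(a, 1) == a                  # identity
    assert meet(a, join(b, c)) == join(meet(a, b), meet(a, c))  # distributivity
    assert join(a, comp(a)) == 1 and meet(a, comp(a)) == 0      # complements
```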
A Bool
|
https://en.wikipedia.org/wiki/Banach%20space
|
In mathematics, more specifically in functional analysis, a Banach space (pronounced ) is a complete normed vector space. Thus, a Banach space is a vector space with a metric that allows the computation of vector length and distance between vectors and is complete in the sense that a Cauchy sequence of vectors always converges to a well-defined limit that is within the space.
Banach spaces are named after the Polish mathematician Stefan Banach, who introduced this concept and studied it systematically in 1920–1922 along with Hans Hahn and Eduard Helly.
Maurice René Fréchet was the first to use the term "Banach space" and Banach in turn then coined the term "Fréchet space".
Banach spaces originally grew out of the study of function spaces by Hilbert, Fréchet, and Riesz earlier in the century. Banach spaces play a central role in functional analysis. In other areas of analysis, the spaces under study are often Banach spaces.
Definition
A Banach space is a complete normed space
A normed space is a pair
$(X, \|\cdot\|)$
consisting of a vector space $X$ over a scalar field $\mathbb{K}$ (where $\mathbb{K}$ is commonly $\mathbb{R}$ or $\mathbb{C}$) together with a distinguished norm $\|\cdot\| : X \to \mathbb{R}$. Like all norms, this norm induces a translation invariant distance function, called the canonical or (norm) induced metric, defined for all vectors $x, y \in X$ by
$d(x, y) := \|x - y\|.$
This makes $X$ into a metric space $(X, d)$.
A sequence $x_1, x_2, \ldots$ is called Cauchy in $(X, d)$ if for every real $r > 0$ there exists some index $N$ such that
$d(x_n, x_m) = \|x_n - x_m\| < r$
whenever $m$ and $n$ are greater than $N$.
The normed space $(X, \|\cdot\|)$ is called a Banach space and the canonical metric $d$ is called a complete metric if $(X, d)$ is a complete metric space, which by definition means for every Cauchy sequence $x_1, x_2, \ldots$ in $(X, d)$ there exists some $x \in X$ such that
$\lim_{n \to \infty} x_n = x,$
where because $d(x_n, x) = \|x_n - x\|$, this sequence's convergence to $x$ can equivalently be expressed as:
$\lim_{n \to \infty} \|x_n - x\| = 0.$
The norm $\|\cdot\|$ of a normed space $(X, \|\cdot\|)$ is called a complete norm if $(X, \|\cdot\|)$ is a Banach space.
L-semi-inner product
For any normed space $(X, \|\cdot\|)$ there exists an L-semi-inner product $\langle \cdot, \cdot \rangle$ on $X$ such that $\|x\| = \sqrt{\langle x, x \rangle}$ for all $x \in X$; in general, there may be infinitely many L-semi-inner products that satisfy this condition. L-semi-inner products are a generalization of inner products, which are what fundamentally distinguish Hilbert spaces from all other Banach spaces. This shows that all normed spaces (and hence all Banach spaces) can be considered as being generalizations of (pre-)Hilbert spaces.
Characterization in terms of series
The vector space structure allows one to relate the behavior of Cauchy sequences to that of converging series of vectors.
A normed space $X$ is a Banach space if and only if each absolutely convergent series in $X$ converges in $X$; that is, if $\sum_{n=1}^{\infty} \|v_n\| < \infty$ implies that $\sum_{n=1}^{\infty} v_n$ converges in $X$.
Topology
The canonical metric $d$ of a normed space $(X, \|\cdot\|)$ induces the usual metric topology on $X$, which is referred to as the canonical or norm induced topology.
Every normed space is automatically assumed to carry this Hausdorff topology, unless indicated otherwise.
With this topology, every Banach space is a Baire space, although there exist normed spaces that are Baire but not Banach. The norm is always a continuous function with respect to the topology that it induces.
The open and closed ball
|
https://en.wikipedia.org/wiki/Borsuk%E2%80%93Ulam%20theorem
|
In mathematics, the Borsuk–Ulam theorem states that every continuous function from an n-sphere into Euclidean n-space maps some pair of antipodal points to the same point. Here, two points on a sphere are called antipodal if they are in exactly opposite directions from the sphere's center.
Formally: if $f : S^n \to \mathbb{R}^n$ is continuous then there exists an $x \in S^n$ such that: $f(-x) = f(x)$.
The case can be illustrated by saying that there always exist a pair of opposite points on the Earth's equator with the same temperature. The same is true for any circle. This assumes the temperature varies continuously in space, which is, however, not always the case.
The case is often illustrated by saying that at any moment, there is always a pair of antipodal points on the Earth's surface with equal temperatures and equal barometric pressures, assuming that both parameters vary continuously in space. Since temperature, pressure or other such physical variables do not necessarily vary continuously, the predictions of the theorem are unlikely to be true in some necessary sense (as following from a mathematical necessity).
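For the circle, such a pair can even be located numerically: the auxiliary function $g(\theta) = f(\theta) - f(\theta + \pi)$ satisfies $g(\theta + \pi) = -g(\theta)$, so it changes sign on $[0, \pi]$ and the intermediate value theorem yields an antipodal pair with equal values. A Python sketch with a made-up continuous temperature profile (entirely illustrative):

```python
import math

def f(theta):
    """A made-up continuous 'temperature' along the equator."""
    return 10 * math.sin(theta) + 3 * math.cos(2 * theta) + 2 * math.cos(theta) + 15

def g(theta):
    """g(theta + pi) = -g(theta), so g changes sign on [0, pi]."""
    return f(theta) - f(theta + math.pi)

# Bisection on [0, pi]: g(0) and g(pi) = -g(0) have opposite signs.
lo, hi = 0.0, math.pi
for _ in range(60):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid

theta = (lo + hi) / 2
assert abs(f(theta) - f(theta + math.pi)) < 1e-9  # antipodal, equal temperature
```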
The Borsuk–Ulam theorem has several equivalent statements in terms of odd functions. Recall that $S^n$ is the $n$-sphere and $B^n$ is the $n$-ball:
If $g : S^n \to \mathbb{R}^n$ is a continuous odd function, then there exists an $x \in S^n$ such that: $g(x) = 0$.
If $g : B^n \to \mathbb{R}^n$ is a continuous function which is odd on $S^{n-1}$ (the boundary of $B^n$), then there exists an $x \in B^n$ such that: $g(x) = 0$.
History
According to Matoušek (2003), the first historical mention of the statement of the Borsuk–Ulam theorem appears in Lyusternik & Shnirel'man (1930). The first proof was given by Borsuk (1933), where the formulation of the problem was attributed to Stanisław Ulam. Since then, many alternative proofs have been found by various authors.
Equivalent statements
The following statements are equivalent to the Borsuk–Ulam theorem.
With odd functions
A function $g$ is called odd (aka antipodal or antipode-preserving) if for every $x$: $g(-x) = -g(x)$.
The Borsuk–Ulam theorem is equivalent to the following statement: A continuous odd function from an n-sphere into Euclidean n-space has a zero. PROOF:
If the theorem is correct, then it is specifically correct for odd functions, and for an odd function, $g(-x) = g(x)$ iff $g(x) = 0$. Hence every odd continuous function has a zero.
For every continuous function $f$, the following function is continuous and odd: $g(x) = f(x) - f(-x)$. If every odd continuous function has a zero, then $g$ has a zero, and therefore, $f(x) = f(-x)$. Hence the theorem is correct.
With retractions
Define a retraction as a function $h : S^n \to S^{n-1}$. The Borsuk–Ulam theorem is equivalent to the following claim: there is no continuous odd retraction.
Proof: If the theorem is correct, then every continuous odd function from $S^n$ must include 0 in its range. However, $0 \notin S^{n-1}$, so there cannot be a continuous odd function whose range is $S^{n-1}$.
Conversely, if it is incorrect, then there is a continuous odd function $g : S^n \to \mathbb{R}^n$ with no zeroes. Then we can construct another odd function $h : S^n \to S^{n-1}$ by:
$h(x) = \frac{g(x)}{|g(x)|};$
since $g$ has no zeroes, $h$ is well-defined and continuous. Thus we have a continuous odd retraction.
Proofs
1-dimensional c
|
https://en.wikipedia.org/wiki/BQP
|
In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue to the complexity class BPP.
A decision problem is a member of BQP if there exists a quantum algorithm (an algorithm that runs on a quantum computer) that solves the decision problem with high probability and is guaranteed to run in polynomial time. A run of the algorithm will correctly solve the decision problem with a probability of at least 2/3.
Definition
BQP can be viewed as the languages associated with certain bounded-error uniform families of quantum circuits. A language $L$ is in BQP if and only if there exists a polynomial-time uniform family of quantum circuits $\{Q_n : n \in \mathbb{N}\}$, such that
For all $n \in \mathbb{N}$, $Q_n$ takes $n$ qubits as input and outputs 1 bit
For all $x$ in $L$, $\Pr(Q_{|x|}(x) = 1) \geq \tfrac{2}{3}$
For all $x$ not in $L$, $\Pr(Q_{|x|}(x) = 0) \geq \tfrac{2}{3}$
Alternatively, one can define BQP in terms of quantum Turing machines. A language L is in BQP if and only if there exists a polynomial quantum Turing machine that accepts L with an error probability of at most 1/3 for all instances.
Similarly to other "bounded error" probabilistic classes the choice of 1/3 in the definition is arbitrary. We can run the algorithm a constant number of times and take a majority vote to achieve any desired probability of correctness less than 1, using the Chernoff bound. The complexity class is unchanged by allowing error as high as $1/2 - n^{-c}$ on the one hand, or requiring error as small as $2^{-n^c}$ on the other hand, where $c$ is any positive constant, and $n$ is the length of input.
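The amplification argument can be simulated classically by treating a BQP algorithm as a biased coin that answers correctly with probability 2/3. A Python sketch (purely illustrative) showing the majority-vote error shrinking as the number of repetitions grows:

```python
import random

def majority_vote_error(p_correct, repetitions, trials=20000, seed=0):
    """Estimate the error probability of a majority vote over repeated runs."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        correct = sum(rng.random() < p_correct for _ in range(repetitions))
        if correct <= repetitions // 2:   # majority answered incorrectly
            errors += 1
    return errors / trials

for reps in (1, 5, 15, 45):               # odd counts avoid ties
    print(reps, majority_vote_error(2 / 3, reps))  # error decays with reps
```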
A complete problem for Promise-BQP
Similar to the notion of NP-completeness and other complete problems, we can define a complete problem as a problem that is in Promise-BQP and that every problem in Promise-BQP reduces to it in polynomial time.
Here is an intuitive problem that is complete for efficient quantum computation, which stems directly from the definition of Promise-BQP. Note that for technical reasons, completeness proofs focus on the promise problem version of BQP. We show that the problem below is complete for the Promise-BQP complexity class (and not for the total BQP complexity class having a trivial promise, for which no complete problems are known).
APPROX-QCIRCUIT-PROB problem
Given a description of a quantum circuit $C$ acting on $n$ qubits with $m$ gates, where $m$ is a polynomial in $n$ and each gate acts on one or two qubits, and two numbers $\alpha, \beta \in [0, 1]$ with $\alpha > \beta$, distinguish between the following two cases:
measuring the first qubit of the state $C|0\rangle^{\otimes n}$ yields $|1\rangle$ with probability $\geq \alpha$
measuring the first qubit of the state $C|0\rangle^{\otimes n}$ yields $|1\rangle$ with probability $\leq \beta$
Here, there is a promise on the inputs as the problem does not specify the behavior if an instance is not covered by these two cases.
Claim. Any BQP problem reduces to APPROX-QCIRCUIT-PROB.
Proof.
Suppose we have an algorithm that solves APPROX-QCIRCUIT-PROB, i.e., given a quantum
|
https://en.wikipedia.org/wiki/Brouwer%20fixed-point%20theorem
|
Brouwer's fixed-point theorem is a fixed-point theorem in topology, named after L. E. J. (Bertus) Brouwer. It states that for any continuous function $f$ mapping a nonempty compact convex set to itself, there is a point $x_0$ such that $f(x_0) = x_0$. The simplest forms of Brouwer's theorem are for continuous functions from a closed interval in the real numbers to itself or from a closed disk to itself. A more general form than the latter is for continuous functions from a nonempty convex compact subset of Euclidean space to itself.
Among hundreds of fixed-point theorems, Brouwer's is particularly well known, due in part to its use across numerous fields of mathematics. In its original field, this result is one of the key theorems characterizing the topology of Euclidean spaces, along with the Jordan curve theorem, the hairy ball theorem, the invariance of dimension and the Borsuk–Ulam theorem. This gives it a place among the fundamental theorems of topology. The theorem is also used for proving deep results about differential equations and is covered in most introductory courses on differential geometry. It appears in unlikely fields such as game theory. In economics, Brouwer's fixed-point theorem and its extension, the Kakutani fixed-point theorem, play a central role in the proof of existence of general equilibrium in market economies as developed in the 1950s by economics Nobel prize winners Kenneth Arrow and Gérard Debreu.
The theorem was first studied in view of work on differential equations by the French mathematicians around Henri Poincaré and Charles Émile Picard. Proving results such as the Poincaré–Bendixson theorem requires the use of topological methods. This work at the end of the 19th century opened into several successive versions of the theorem. The case of differentiable mappings of the -dimensional closed ball was first proved in 1910 by Jacques Hadamard and the general case for continuous mappings by Brouwer in 1911.
Statement
The theorem has several formulations, depending on the context in which it is used and its degree of generalization. The simplest is sometimes given as follows:
In the plane: Every continuous function from a closed disk to itself has at least one fixed point.
This can be generalized to an arbitrary finite dimension:
In Euclidean space: Every continuous function from a closed ball of a Euclidean space into itself has a fixed point.
A slightly more general version is as follows:
Convex compact set: Every continuous function from a nonempty convex compact subset K of a Euclidean space to K itself has a fixed point.
An even more general form is better known under a different name:
Schauder fixed point theorem: Every continuous function from a nonempty convex compact subset K of a Banach space to K itself has a fixed point.
Importance of the pre-conditions
The theorem holds only for functions that are endomorphisms (functions that have the same set as the domain and codomain) and for nonempty sets that are compact (thus,
|
https://en.wikipedia.org/wiki/Boltzmann%20distribution
|
In statistical mechanics and mathematics, a Boltzmann distribution (also called Gibbs distribution) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form:
$p_i \propto \exp\left(-\frac{\varepsilon_i}{kT}\right)$
where $p_i$ is the probability of the system being in state $i$, $\exp$ is the exponential function, $\varepsilon_i$ is the energy of that state, and a constant $kT$ of the distribution is the product of the Boltzmann constant $k$ and thermodynamic temperature $T$. The symbol $\propto$ denotes proportionality (see below for the proportionality constant).
The term system here has a wide meaning; it can range from a collection of 'sufficient number' of atoms or a single atom to a macroscopic system such as a natural gas storage tank. Therefore the Boltzmann distribution can be used to solve a wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied.
The ratio of probabilities of two states is known as the Boltzmann factor and characteristically only depends on the states' energy difference:
$\frac{p_i}{p_j} = \exp\left(\frac{\varepsilon_j - \varepsilon_i}{kT}\right)$
The Boltzmann distribution is named after Ludwig Boltzmann who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium. Boltzmann's statistical work is borne out in his paper "On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium".
The distribution was later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902.
The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution or Maxwell-Boltzmann statistics. The Boltzmann distribution gives the probability that a system will be in a certain state as a function of that state's energy, while the Maxwell-Boltzmann distributions give the probabilities of particle speeds or energies in ideal gases. The distribution of energies in a one-dimensional gas however, does follow the Boltzmann distribution.
The distribution
The Boltzmann distribution is a probability distribution that gives the probability of a certain state as a function of that state's energy and temperature of the system to which the distribution is applied. It is given as
$p_i = \frac{1}{Q} \exp\left(-\frac{\varepsilon_i}{kT}\right) = \frac{\exp\left(-\varepsilon_i / kT\right)}{\sum_{j=1}^{M} \exp\left(-\varepsilon_j / kT\right)}$
where:
$\exp$ is the exponential function,
$p_i$ is the probability of state $i$,
$\varepsilon_i$ is the energy of state $i$,
$k$ is the Boltzmann constant,
$T$ is the absolute temperature of the system,
$M$ is the number of all states accessible to the system of interest,
$Q$ (denoted by some authors by $Z$) is the normalization denominator, which is the canonical partition function $Q = \sum_{j=1}^{M} \exp\left(-\varepsilon_j / kT\right)$. It results from the constraint that the probabilities of all accessible states must add up to 1.
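Numerically the distribution is just a normalized array of exponential weights; a short Python sketch (the energy values are made up, in the same units as $kT$):

```python
import numpy as np

def boltzmann(energies, kT):
    """Probabilities p_i = exp(-E_i / kT) / Q, with Q the partition function."""
    weights = np.exp(-np.asarray(energies) / kT)
    return weights / weights.sum()

E = [0.0, 0.5, 1.0]                        # made-up energy levels
p = boltzmann(E, kT=0.5)
assert np.isclose(p.sum(), 1.0)            # normalization
assert p[0] > p[1] > p[2]                  # lower energy, higher probability
# Boltzmann factor: the ratio depends only on the energy difference.
assert np.isclose(p[1] / p[0], np.exp(-(E[1] - E[0]) / 0.5))
```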
Using Lagrange multipliers, one can prove that the Boltzmann distribution is the distribution that maximizes the entropy
$S(p_1, \ldots, p_M) = -k \sum_{i=1}^{M} p_i \log p_i$
subject to the normalization constraint that $\sum_{i=1}^{M} p_i = 1$
|
https://en.wikipedia.org/wiki/Bill%20Schelter
|
William Frederick Schelter (1947 – July 30, 2001) was a professor of mathematics at The University of Texas at Austin and a Lisp developer and programmer. Schelter is credited with the development of the GNU Common Lisp (GCL) implementation of Common Lisp and the GPL'd version of the computer algebra system Macsyma called Maxima. Schelter authored Austin Kyoto Common Lisp (AKCL) under contract with IBM. AKCL formed the foundation for Axiom, another computer algebra system. AKCL eventually became GNU Common Lisp. He is also credited with the first port of the GNU C compiler to the Intel 386 architecture, used in the original implementation of the Linux kernel.
Schelter obtained his Ph.D. at McGill University in 1972. His mathematical specialties were noncommutative ring theory and computational algebra and its applications, including automated theorem proving in geometry.
In the summer of 2001, age 54, he died suddenly of a heart attack while traveling in Russia.
References
S. Chou and W. Schelter. Proving Geometry Theorems with Rewrite Rules, Journal of Automated Reasoning, 1986.
External links
Maxima homepage. Maxima is now available under GPL.
1947 births
2001 deaths
Lisp (programming language) people
20th-century American mathematicians
Computer programmers
University of Texas at Austin faculty
McGill University Faculty of Science alumni
|
https://en.wikipedia.org/wiki/Borel%20measure
|
In mathematics, specifically in measure theory, a Borel measure on a topological space is a measure that is defined on all open sets (and thus on all Borel sets). Some authors require additional restrictions on the measure, as described below.
Formal definition
Let $X$ be a locally compact Hausdorff space, and let $\mathfrak{B}(X)$ be the smallest σ-algebra that contains the open sets of $X$; this is known as the σ-algebra of Borel sets. A Borel measure is any measure $\mu$ defined on the σ-algebra of Borel sets. A few authors require in addition that $\mu$ is locally finite, meaning that $\mu(C) < \infty$ for every compact set $C$. If a Borel measure $\mu$ is both inner regular and outer regular, it is called a regular Borel measure. If $\mu$ is both inner regular, outer regular, and locally finite, it is called a Radon measure.
On the real line
The real line $\mathbb{R}$ with its usual topology is a locally compact Hausdorff space; hence we can define a Borel measure on it. In this case, $\mathfrak{B}(\mathbb{R})$ is the smallest σ-algebra that contains the open intervals of $\mathbb{R}$. While there are many Borel measures μ, the choice of Borel measure that assigns $\mu((a, b]) = b - a$ for every half-open interval $(a, b]$ is sometimes called "the" Borel measure on $\mathbb{R}$. This measure turns out to be the restriction to the Borel σ-algebra of the Lebesgue measure $\lambda$, which is a complete measure and is defined on the Lebesgue σ-algebra. The Lebesgue σ-algebra is actually the completion of the Borel σ-algebra, which means that it is the smallest σ-algebra that contains all the Borel sets and can be equipped with a complete measure. Also, the Borel measure and the Lebesgue measure coincide on the Borel sets (i.e., $\lambda(E) = \mu(E)$ for every Borel measurable set $E$, where $\mu$ is the Borel measure described above).
Product spaces
If X and Y are second-countable, Hausdorff topological spaces, then the set of Borel subsets of their product coincides with the product of the sets of Borel subsets of X and Y. That is, the Borel functor
from the category of second-countable Hausdorff spaces to the category of measurable spaces preserves finite products.
Applications
Lebesgue–Stieltjes integral
The Lebesgue–Stieltjes integral is the ordinary Lebesgue integral with respect to a measure known as the Lebesgue–Stieltjes measure, which may be associated to any function of bounded variation on the real line. The Lebesgue–Stieltjes measure is a regular Borel measure, and conversely every regular Borel measure on the real line is of this kind.
Laplace transform
One can define the Laplace transform of a finite Borel measure μ on the real line by the Lebesgue integral
$(\mathcal{L}\mu)(s) = \int_{[0, \infty)} e^{-st} \, d\mu(t).$
An important special case is where μ is a probability measure or, even more specifically, the Dirac delta function. In operational calculus, the Laplace transform of a measure is often treated as though the measure came from a distribution function f. In that case, to avoid potential confusion, one often writes
$(\mathcal{L}f)(s) = \int_{0^-}^{\infty} e^{-st} f(t) \, dt,$
where the lower limit of $0^-$ is shorthand notation for
$\lim_{\varepsilon \downarrow 0} \int_{-\varepsilon}^{\infty}.$
This limit emphasizes that any point mass located at 0 is entirely capture
|
https://en.wikipedia.org/wiki/Bilinear%20map
|
In mathematics, a bilinear map is a function combining elements of two vector spaces to yield an element of a third vector space, and is linear in each of its arguments. Matrix multiplication is an example.
Definition
Vector spaces
Let $V$, $W$ and $X$ be three vector spaces over the same base field $F$. A bilinear map is a function
$B : V \times W \to X$
such that for all $w \in W$, the map $B_w$
$v \mapsto B(v, w)$
is a linear map from $V$ to $X$, and for all $v \in V$, the map $B_v$
$w \mapsto B(v, w)$
is a linear map from $W$ to $X$. In other words, when we hold the first entry of the bilinear map fixed while letting the second entry vary, the result is a linear operator, and similarly for when we hold the second entry fixed.
Such a map satisfies the following properties.
For any $\lambda \in F$, $B(\lambda v, w) = B(v, \lambda w) = \lambda B(v, w).$
The map is additive in both components: if $v_1, v_2 \in V$ and $w_1, w_2 \in W$, then $B(v_1 + v_2, w) = B(v_1, w) + B(v_2, w)$ and $B(v, w_1 + w_2) = B(v, w_1) + B(v, w_2).$
If $V = W$ and we have $B(v, w) = B(w, v)$ for all $v, w \in V$, then we say that $B$ is symmetric. If $X$ is the base field $F$, then the map is called a bilinear form, which are well-studied (for example: scalar product, inner product, and quadratic form).
Modules
The definition works without any changes if instead of vector spaces over a field F, we use modules over a commutative ring R. It generalizes to n-ary functions, where the proper term is multilinear.
For non-commutative rings $R$ and $S$, a left $R$-module $M$ and a right $S$-module $N$, a bilinear map is a map $B : M \times N \to T$ with $T$ an $(R, S)$-bimodule, and for which any $n$ in $N$, $m \mapsto B(m, n)$ is an $R$-module homomorphism, and for any $m$ in $M$, $n \mapsto B(m, n)$ is an $S$-module homomorphism. This satisfies
B(r ⋅ m, n) = r ⋅ B(m, n)
B(m, n ⋅ s) = B(m, n) ⋅ s
for all m in M, n in N, r in R and s in S, as well as B being additive in each argument.
Properties
An immediate consequence of the definition is that $B(v, w) = 0_X$ whenever $v = 0_V$ or $w = 0_W$. This may be seen by writing the zero vector $0_V$ as $0 \cdot 0_V$ (and similarly for $0_W$) and moving the scalar 0 "outside", in front of B, by linearity.
The set of all bilinear maps is a linear subspace of the space (viz. vector space, module) of all maps from $V \times W$ into $X$.
If $V$, $W$, $X$ are finite-dimensional, then so is $L(V, W; X)$, the space of bilinear maps. For $X = F$, that is, bilinear forms, the dimension of this space is $\dim V \times \dim W$ (while the space $L(V \times W; F)$ of linear forms is of dimension $\dim V + \dim W$). To see this, choose a basis for $V$ and $W$; then each bilinear map can be uniquely represented by the matrix $B(e_i, f_j)$, and vice versa.
Now, if $X$ is a space of higher dimension, we obviously have $\dim L(V, W; X) = \dim V \times \dim W \times \dim X$.
Examples
Matrix multiplication is a bilinear map $M(m, n) \times M(n, p) \to M(m, p)$.
If a vector space $V$ over the real numbers $\mathbb{R}$ carries an inner product, then the inner product is a bilinear map $V \times V \to \mathbb{R}$; its codomain $\mathbb{R}$ is a one-dimensional vector space.
In general, for a vector space $V$ over a field $F$, a bilinear form on $V$ is the same as a bilinear map $V \times V \to F$.
If $V$ is a vector space with dual space $V^*$, then the application operator, $b(f, v) = f(v)$, is a bilinear map from $V^* \times V$ to the base field.
Let $V$ and $W$ be vector spaces over the same base field $F$. If $f$ is a member of $V^*$ and $g$ a member of $W^*$, then $b(v, w) = f(v)\,g(w)$ defines a bilinear map $V \times W \to F$.
The cross product in $\mathbb{R}^3$ is a bilinear map $\mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}^3$.
Let $B : V \times W \to X$ be a bilinear map, and $L : U \to W$ be a linear map; then $(v, u) \mapsto B(v, Lu)$ is a bilinear map on $V \times U$.
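For bilinear forms on $\mathbb{R}^n$ given by a matrix $M$, $B(x, y) = x^{\mathrm{T}} M y$, linearity in each slot is easy to confirm numerically; a NumPy sketch with arbitrary made-up data (ours, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))

def B(x, y):
    """Bilinear form B(x, y) = x^T M y on R^3."""
    return x @ M @ y

x1, x2, y = rng.standard_normal((3, 3))   # three arbitrary vectors in R^3
lam = 2.5
# Linear in the first slot with the second held fixed, and symmetrically.
assert np.isclose(B(lam * x1 + x2, y), lam * B(x1, y) + B(x2, y))
assert np.isclose(B(y, lam * x1 + x2), lam * B(y, x1) + B(y, x2))
```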
Continuity and separate continuity
Suppose and are topolo
|
https://en.wikipedia.org/wiki/Bra%E2%80%93ket%20notation
|
Bra–ket notation, also called Dirac notation, is a notation for linear algebra and linear operators on complex vector spaces together with their dual space both in the finite-dimensional and infinite-dimensional case. It is specifically designed to ease the types of calculations that frequently come up in quantum mechanics. Its use in quantum mechanics is quite widespread.
Bra-ket notation was created by Paul Dirac in his 1939 publication A New Notation for Quantum Mechanics. The notation was introduced as an easier way to write quantum mechanical expressions. The name comes from the English word "Bracket".
Quantum mechanics
In quantum mechanics, bra–ket notation is used ubiquitously to denote quantum states. The notation uses angle brackets, ⟨ and ⟩, and a vertical bar |, to construct "bras" and "kets".
A ket is of the form |v⟩. Mathematically it denotes a vector, v, in an abstract (complex) vector space V, and physically it represents a state of some quantum system.
A bra is of the form ⟨f|. Mathematically it denotes a linear form f : V → C, i.e. a linear map that maps each vector in V to a number in the complex plane C. Letting the linear functional ⟨f| act on a vector |v⟩ is written as ⟨f|v⟩.
Assume that on V there exists an inner product (·, ·) with antilinear first argument, which makes V an inner product space. Then with this inner product each vector φ can be identified with a corresponding linear form, by placing the vector in the anti-linear first slot of the inner product: (φ, ·). The correspondence between these notations is then (φ, ψ) ≡ ⟨φ|ψ⟩. The linear form ⟨φ| is a covector to |φ⟩, and the set of all covectors forms a subspace of the dual vector space V∗ of the initial vector space V. The purpose of this linear form can now be understood in terms of making projections onto the state φ, to find how linearly dependent two states are, etc.
For the vector space C^n, kets can be identified with column vectors, and bras with row vectors. Combinations of bras, kets, and linear operators are interpreted using matrix multiplication. If C^n has the standard Hermitian inner product (v, w) = v†w, under this identification, the identification of kets and bras and vice versa provided by the inner product is taking the Hermitian conjugate (denoted †).
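In this finite-dimensional picture the bra–ket correspondence is literally the conjugate transpose, which a few lines of numpy make explicit (the state vector below is an arbitrary example, not a physically meaningful state):

    import numpy as np

    # A ket |v> as a column vector in C^2
    ket_v = np.array([[1.0 + 1.0j],
                      [2.0 - 1.0j]])

    # The corresponding bra <v| is the Hermitian conjugate (conjugate transpose)
    bra_v = ket_v.conj().T

    # <v|v> is an ordinary matrix product giving the squared norm:
    # |1+i|^2 + |2-i|^2 = 2 + 5 = 7
    print((bra_v @ ket_v).item())      # (7+0j)

    # |v><v| is the outer product, a 2x2 matrix (a projector after normalization)
    print(ket_v @ bra_v)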
It is common to suppress the vector or linear form from the bra–ket notation and only use a label inside the typography for the bra or ket. For example, the spin operator σ_z on a two-dimensional space Δ of spinors has eigenvalues ±1/2 with eigenspinors ψ₊ and ψ₋. In bra–ket notation, this is typically denoted as ψ₊ = |+⟩ and ψ₋ = |−⟩. As above, kets and bras with the same label are interpreted as kets and bras corresponding to each other using the inner product. In particular, when also identified with row and column vectors, kets and bras with the same label are identified with Hermitian-conjugate column and row vectors.
Bra–ket notation was effectively established in 1939 by Paul Dirac; it is thus also known as Dirac notation, despite the notation having a precursor in Hermann Grassmann's use of for inner
|
https://en.wikipedia.org/wiki/Banach%20algebra
|
In mathematics, especially functional analysis, a Banach algebra, named after Stefan Banach, is an associative algebra A over the real or complex numbers (or over a non-Archimedean complete normed field) that at the same time is also a Banach space, that is, a normed space that is complete in the metric induced by the norm. The norm is required to satisfy ‖x y‖ ≤ ‖x‖ ‖y‖ for all x, y in A.
This ensures that the multiplication operation is continuous.
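The submultiplicativity requirement is easy to check numerically for a concrete norm; a small sketch using the Frobenius norm on 2×2 complex matrices (a standard submultiplicative norm, with random matrices serving only as test data):

    import numpy as np

    rng = np.random.default_rng(0)
    for _ in range(1000):
        x = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        y = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        # Submultiplicativity: ||xy|| <= ||x|| ||y|| (tolerance absorbs rounding)
        assert np.linalg.norm(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y) + 1e-12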
A Banach algebra is called unital if it has an identity element for the multiplication whose norm is 1, and commutative if its multiplication is commutative.
Any Banach algebra A (whether it has an identity element or not) can be embedded isometrically into a unital Banach algebra A_e so as to form a closed ideal of A_e. Often one assumes a priori that the algebra under consideration is unital: for one can develop much of the theory by considering A_e and then applying the outcome in the original algebra. However, this is not the case all the time. For example, one cannot define all the trigonometric functions in a Banach algebra without identity.
The theory of real Banach algebras can be very different from the theory of complex Banach algebras. For example, the spectrum of an element of a nontrivial complex Banach algebra can never be empty, whereas in a real Banach algebra it could be empty for some elements.
Banach algebras can also be defined over fields of p-adic numbers. This is part of p-adic analysis.
Examples
The prototypical example of a Banach algebra is C_0(X), the space of (complex-valued) continuous functions, defined on a locally compact Hausdorff space X, that vanish at infinity. C_0(X) is unital if and only if X is compact. The complex conjugation being an involution, C_0(X) is in fact a C*-algebra. More generally, every C*-algebra is a Banach algebra by definition.
The set of real (or complex) numbers is a Banach algebra with norm given by the absolute value.
The set of all real or complex -by- matrices becomes a unital Banach algebra if we equip it with a sub-multiplicative matrix norm.
Take the Banach space R^n (or C^n) with norm ‖x‖ = max |x_i| and define multiplication componentwise: (x_1, …, x_n)(y_1, …, y_n) = (x_1 y_1, …, x_n y_n).
The quaternions form a 4-dimensional real Banach algebra, with the norm being given by the absolute value of quaternions.
The algebra of all bounded real- or complex-valued functions defined on some set (with pointwise multiplication and the supremum norm) is a unital Banach algebra.
The algebra of all bounded continuous real- or complex-valued functions on some locally compact space (again with pointwise operations and supremum norm) is a Banach algebra.
The algebra of all continuous linear operators on a Banach space E (with functional composition as multiplication and the operator norm as norm) is a unital Banach algebra. The set of all compact operators on E is a Banach algebra and a closed ideal. It is without identity if dim E = ∞.
If G is a locally compact Hausdorff topological group and μ is its Haar measure, then the Banach space L^1(G) of all μ-integrable functions on G becomes a Banac
|
https://en.wikipedia.org/wiki/Binomial%20coefficient
|
In mathematics, the binomial coefficients are the positive integers that occur as coefficients in the binomial theorem. Commonly, a binomial coefficient is indexed by a pair of integers n ≥ k ≥ 0 and is written C(n, k). It is the coefficient of the x^k term in the polynomial expansion of the binomial power (1 + x)^n; this coefficient can be computed by the multiplicative formula
C(n, k) = (n × (n − 1) × ⋯ × (n − k + 1)) / (k × (k − 1) × ⋯ × 1),
which using factorial notation can be compactly expressed as
C(n, k) = n! / (k! (n − k)!).
For example, the fourth power of 1 + x is
(1 + x)^4 = C(4, 0) + C(4, 1) x + C(4, 2) x^2 + C(4, 3) x^3 + C(4, 4) x^4 = 1 + 4x + 6x^2 + 4x^3 + x^4,
and the binomial coefficient C(4, 2) = (4 × 3) / (2 × 1) = 4! / (2! 2!) = 6 is the coefficient of the x^2 term.
Arranging the numbers C(n, 0), C(n, 1), …, C(n, n) in successive rows for n = 0, 1, 2, … gives a triangular array called Pascal's triangle, satisfying the recurrence relation
C(n, k) = C(n − 1, k − 1) + C(n − 1, k).
The binomial coefficients occur in many areas of mathematics, and especially in combinatorics. The symbol C(n, k) is usually read as "n choose k" because there are C(n, k) ways to choose an (unordered) subset of k elements from a fixed set of n elements. For example, there are C(4, 2) = 6 ways to choose 2 elements from {1, 2, 3, 4}, namely {1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4} and {3, 4}.
The binomial coefficients can be generalized to C(z, k) for any complex number z and integer k ≥ 0, and many of their properties continue to hold in this more general form.
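Both the multiplicative formula and the Pascal's-triangle recurrence translate directly into code; a short Python sketch cross-checking them against the standard library (the function names are illustrative only):

    from functools import lru_cache
    from math import comb

    def binom_multiplicative(n, k):
        """Multiplicative formula: n(n-1)...(n-k+1) / k!, exact at every step."""
        result = 1
        for i in range(1, k + 1):
            result = result * (n - i + 1) // i   # division is always exact here
        return result

    @lru_cache(maxsize=None)
    def binom_recurrence(n, k):
        """Pascal's triangle recurrence C(n, k) = C(n-1, k-1) + C(n-1, k)."""
        if k < 0 or k > n:
            return 0
        if k == 0 or k == n:
            return 1
        return binom_recurrence(n - 1, k - 1) + binom_recurrence(n - 1, k)

    assert binom_multiplicative(4, 2) == binom_recurrence(4, 2) == comb(4, 2) == 6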
History and notation
Andreas von Ettingshausen introduced the notation in 1826, although the numbers were known centuries earlier (see Pascal's triangle). In about 1150, the Indian mathematician Bhaskaracharya gave an exposition of binomial coefficients in his book Līlāvatī.
Alternative notations include C(n, k), nCk, C^n_k, C_{n,k}, and C_n^k, in all of which the C stands for combinations or choices. Many calculators use variants of the C notation because they can represent it on a single-line display. In this form the binomial coefficients are easily compared to k-permutations of n, written as P(n, k), etc.
Definition and interpretations
For natural numbers (taken to include 0) n and k, the binomial coefficient C(n, k) can be defined as the coefficient of the monomial X^k in the expansion of (1 + X)^n. The same coefficient also occurs (if k ≤ n) in the binomial formula
(x + y)^n = Σ_{k=0}^{n} C(n, k) x^{n−k} y^k
(valid for any elements x, y of a commutative ring),
which explains the name "binomial coefficient".
Another occurrence of this number is in combinatorics, where it gives the number of ways, disregarding order, that k objects can be chosen from among n objects; more formally, the number of k-element subsets (or k-combinations) of an n-element set. This number can be seen as equal to the one of the first definition, independently of any of the formulas below to compute it: if in each of the n factors of the power one temporarily labels the term X with an index i (running from 1 to n), then each subset of k indices gives after expansion a contribution Xk, and the coefficient of that monomial in the result will be the number of such subsets. This shows in particular that is a natural number for any natural numbers n and k. There are many other combinatorial interpretations of binomial coefficients (counting problems for which the answer is given by a binomial coefficient expression), for instance the number of words formed of n bits (digits 0
|
https://en.wikipedia.org/wiki/Binomial%20theorem
|
In elementary algebra, the binomial theorem (or binomial expansion) describes the algebraic expansion of powers of a binomial. According to the theorem, it is possible to expand the polynomial (x + y)^n into a sum involving terms of the form a x^b y^c, where the exponents b and c are nonnegative integers with b + c = n, and the coefficient a of each term is a specific positive integer depending on n and b. For example, for n = 4,
(x + y)^4 = x^4 + 4x^3 y + 6x^2 y^2 + 4x y^3 + y^4.
The coefficient a in the term a x^b y^c is known as the binomial coefficient C(n, b) or C(n, c) (the two have the same value). These coefficients for varying n and b can be arranged to form Pascal's triangle. These numbers also occur in combinatorics, where C(n, b) gives the number of different combinations of b elements that can be chosen from an n-element set. Therefore C(n, b) is often pronounced as "n choose b".
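The expansion is easy to verify numerically, since both sides of the theorem can be evaluated at any point; a minimal Python check (the sample point (3, 7) is arbitrary):

    from math import comb

    # Coefficients of (x + y)^4: [C(4,0), ..., C(4,4)] = [1, 4, 6, 4, 1]
    print([comb(4, k) for k in range(5)])

    # Check the theorem at (x, y) = (3, 7): (3 + 7)^4 = 10000
    x, y, n = 3, 7, 4
    assert (x + y) ** n == sum(comb(n, k) * x ** (n - k) * y ** k for k in range(n + 1))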
History
Special cases of the binomial theorem were known since at least the 4th century BC when Greek mathematician Euclid mentioned the special case of the binomial theorem for exponent 2. Greek mathematician Diophantus cubed various binomials, including x − 1. Indian mathematician Aryabhata's method for finding cube roots, from around 510 CE, suggests that he knew the binomial formula for exponent 3.
Binomial coefficients, as combinatorial quantities expressing the number of ways of selecting k objects out of n without replacement, were of interest to ancient Indian mathematicians. The earliest known reference to this combinatorial problem is the Chandaḥśāstra by the Indian lyricist Pingala (c. 200 BC), which contains a method for its solution. The commentator Halayudha from the 10th century AD explains this method. By the 6th century AD, the Indian mathematicians probably knew how to express this as a quotient n! / ((n − k)! k!), and a clear statement of this rule can be found in the 12th century text Lilavati by Bhaskara.
The first formulation of the binomial theorem and the table of binomial coefficients, to our knowledge, can be found in a work by Al-Karaji, quoted by Al-Samaw'al in his "al-Bahir". Al-Karaji described the triangular pattern of the binomial coefficients and also provided a mathematical proof of both the binomial theorem and Pascal's triangle, using an early form of mathematical induction. The Persian poet and mathematician Omar Khayyam was probably familiar with the formula to higher orders, although many of his mathematical works are lost. The binomial expansions of small degrees were known in the 13th century mathematical works of Yang Hui and also Chu Shih-Chieh. Yang Hui attributes the method to a much earlier 11th century text of Jia Xian, although those writings are now also lost.
In 1544, Michael Stifel introduced the term "binomial coefficient" and showed how to use them to express (1 + a)^n in terms of (1 + a)^{n−1}, via "Pascal's triangle". Blaise Pascal studied the eponymous triangle comprehensively in his Traité du triangle arithmétique. However, the pattern of numbers was already known to the European mathematicians of the late Renaissance, including Stifel, Niccolò Fontana Tartaglia,
|
https://en.wikipedia.org/wiki/Bernoulli%27s%20inequality
|
In mathematics, Bernoulli's inequality (named after Jacob Bernoulli) is an inequality that approximates exponentiations of 1 + x. It is often employed in real analysis. It has several useful variants:
Integer exponent
Case 1: (1 + x)^r ≥ 1 + rx for every integer r ≥ 1 and real number x ≥ −1. The inequality is strict if x ≠ 0 and r ≥ 2.
Case 2: (1 + x)^r ≥ 1 + rx for every integer r ≥ 0 and every real number x ≥ −2.
Case 3: (1 + x)^r ≥ 1 + rx for every even integer r ≥ 0 and every real number x.
Real exponent
(1 + x)^r ≥ 1 + rx for every real number r ≥ 1 and x ≥ −1. The inequality is strict if x ≠ 0 and r ≠ 1.
(1 + x)^r ≤ 1 + rx for every real number 0 ≤ r ≤ 1 and x ≥ −1.
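Both real-exponent variants can be spot-checked numerically over random samples; a quick sketch (the sampling ranges are arbitrary, and a small tolerance absorbs floating-point rounding):

    import random

    random.seed(1)
    for _ in range(10_000):
        x = random.uniform(-1.0, 5.0)
        r = random.uniform(1.0, 10.0)
        # (1 + x)^r >= 1 + r*x for r >= 1 and x >= -1
        assert (1 + x) ** r >= 1 + r * x - 1e-9
        s = random.uniform(0.0, 1.0)
        # The inequality reverses for exponents between 0 and 1
        assert (1 + x) ** s <= 1 + s * x + 1e-9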
History
Jacob Bernoulli first published the inequality in his treatise "Positiones Arithmeticae de Seriebus Infinitis" (Basel, 1689), where he used the inequality often.
According to Joseph E. Hofmann, Über die Exercitatio Geometrica des M. A. Ricci (1963), p. 177, the inequality is actually due to Sluse in his Mesolabum (1668 edition), Chapter IV "De maximis & minimis".
Proof for integer exponent
The first case has a simple inductive proof:
Suppose the statement is true for r = k: (1 + x)^k ≥ 1 + kx.
Then it follows that (1 + x)^{k+1} = (1 + x)(1 + x)^k ≥ (1 + x)(1 + kx) = 1 + (k + 1)x + kx^2 ≥ 1 + (k + 1)x.
Bernoulli's inequality can be proved for case 2, in which r is a non-negative integer and x ≥ −2, using mathematical induction in the following form:
we prove the inequality for r ∈ {0, 1},
from validity for some r we deduce validity for r + 2.
For r = 0,
(1 + x)^0 ≥ 1 + 0 · x is equivalent to 1 ≥ 1, which is true.
Similarly, for r = 1 we have (1 + x)^1 = 1 + x ≥ 1 + x.
Now suppose the statement is true for r = k: (1 + x)^k ≥ 1 + kx.
Then it follows that (1 + x)^{k+2} = (1 + x)^k (1 + x)^2 ≥ (1 + kx)(1 + 2x + x^2) = 1 + (k + 2)x + x^2 + kx^2(x + 2) ≥ 1 + (k + 2)x,
since x^2 ≥ 0 as well as x + 2 ≥ 0. By the modified induction we conclude the statement is true for every non-negative integer r.
By noting that if x < −2, then 1 + rx is negative for r ≥ 1 while (1 + x)^r ≥ 0 for even r, case 3 follows.
Generalizations
Generalization of exponent
The exponent r can be generalized to an arbitrary real number as follows: if x > −1, then
(1 + x)^r ≥ 1 + rx for r ≤ 0 or r ≥ 1, and
(1 + x)^r ≤ 1 + rx for 0 ≤ r ≤ 1.
This generalization can be proved by comparing derivatives. The strict versions of these inequalities require x ≠ 0 and r ∉ {0, 1}.
Generalization of base
Instead of (1 + x)^n the inequality holds also in the form (1 + x_1)(1 + x_2)⋯(1 + x_r) ≥ 1 + x_1 + x_2 + ⋯ + x_r, where x_1, x_2, …, x_r are real numbers, all greater than −1, all with the same sign. Bernoulli's inequality is a special case when x_1 = x_2 = ⋯ = x_r = x. This generalized inequality can be proved by mathematical induction.
In the first step we take r = 1. In this case the inequality 1 + x_1 ≥ 1 + x_1 is obviously true.
In the second step we assume validity of the inequality for r numbers and deduce validity for r + 1 numbers.
We assume that (1 + x_1)(1 + x_2)⋯(1 + x_r) ≥ 1 + x_1 + x_2 + ⋯ + x_r is valid. After multiplying both sides with the positive number 1 + x_{r+1} we get:
(1 + x_1)(1 + x_2)⋯(1 + x_r)(1 + x_{r+1}) ≥ (1 + x_1 + x_2 + ⋯ + x_r)(1 + x_{r+1}).
As all the x_i have the same sign, the products x_i x_{r+1} are all positive numbers. So the quantity on the right-hand side can be bounded as follows:
(1 + x_1 + ⋯ + x_r)(1 + x_{r+1}) = 1 + x_1 + ⋯ + x_r + x_{r+1} + (x_1 + ⋯ + x_r) x_{r+1} ≥ 1 + x_1 + ⋯ + x_r + x_{r+1},
which is what was to be shown.
Related inequalities
The following inequality estimates the r-th power of 1 + x from the other side. For any real numbers x and r with r > 0, one has
(1 + x)^r ≤ e^{rx},
where e = 2.718.... This may be proved using the inequality (1 + 1/k)^k < e.
Alternative form
An alternative form of Bernoulli's inequality for t ≥ 1 and 0 ≤ x ≤ 1 is:
(1 − x)^t ≥ 1 − xt.
This can be proved (for any integer t ≥ 1) by using the formula for geometric series (using y = 1 − x):
t = 1 + 1 + ⋯ + 1 ≥ 1 + y + y^2 + ⋯ + y^{t−1} = (1 − y^t) / (1 − y),
or equivalently xt ≥ 1 − (1 − x)^t.
Alternative proofs
Arithmetic and geometric means
An elementary proof for 0 ≤ r ≤ 1 and x ≥ −1 can be given usi
|
https://en.wikipedia.org/wiki/Bayesian%20probability
|
Bayesian probability is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief.
The Bayesian interpretation of probability can be seen as an extension of propositional logic that enables reasoning with hypotheses; that is, with propositions whose truth or falsity is unknown. In the Bayesian view, a probability is assigned to a hypothesis, whereas under frequentist inference, a hypothesis is typically tested without being assigned a probability.
Bayesian probability belongs to the category of evidential probabilities; to evaluate the probability of a hypothesis, the Bayesian probabilist specifies a prior probability. This, in turn, is then updated to a posterior probability in the light of new, relevant data (evidence). The Bayesian interpretation provides a standard set of procedures and formulae to perform this calculation.
The term Bayesian derives from the 18th-century mathematician and theologian Thomas Bayes, who provided the first mathematical treatment of a non-trivial problem of statistical data analysis using what is now known as Bayesian inference. Mathematician Pierre-Simon Laplace pioneered and popularized what is now called Bayesian probability.
Bayesian methodology
Bayesian methods are characterized by concepts and procedures as follows:
The use of random variables, or more generally unknown quantities, to model all sources of uncertainty in statistical models including uncertainty resulting from lack of information (see also aleatoric and epistemic uncertainty).
The need to determine the prior probability distribution taking into account the available (prior) information.
The sequential use of Bayes' theorem: as more data become available, calculate the posterior distribution using Bayes' theorem; subsequently, the posterior distribution becomes the next prior.
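The sequential use of Bayes' theorem just described is straightforward to demonstrate in code. In the sketch below (the coin-bias hypotheses and flip data are invented for the example), likelihood × prior is renormalized after each observation, and each posterior becomes the next prior:

    # Three candidate coin biases (probability of heads), uniform prior over them
    hypotheses = [0.3, 0.5, 0.7]
    prior = {h: 1 / 3 for h in hypotheses}

    def update(prior, heads):
        """One application of Bayes' theorem for a single coin flip."""
        unnorm = {h: p * (h if heads else 1 - h) for h, p in prior.items()}
        z = sum(unnorm.values())                 # marginal likelihood of the flip
        return {h: p / z for h, p in unnorm.items()}

    for flip in [True, True, False, True]:       # observed data, one flip at a time
        prior = update(prior, flip)              # posterior becomes the next prior

    print(prior)   # probability mass has shifted toward the 0.7-bias hypothesis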
While for the frequentist, a hypothesis is a proposition (which must be either true or false) so that the frequentist probability of a hypothesis is either 0 or 1, in Bayesian statistics, the probability that can be assigned to a hypothesis can also be in a range from 0 to 1 if the truth value is uncertain.
Objective and subjective Bayesian probabilities
Broadly speaking, there are two interpretations of Bayesian probability. For objectivists, who interpret probability as an extension of logic, probability quantifies the reasonable expectation that everyone (even a "robot") who shares the same knowledge should share in accordance with the rules of Bayesian statistics, which can be justified by Cox's theorem. For subjectivists, probability corresponds to a personal belief. Rationality and coherence allow for substantial variation within the constraints they pose; the constraints are justified by the Dutch book argument or by decision theory and de Finetti's theorem. The obj
|
https://en.wikipedia.org/wiki/Naive%20set%20theory
|
Naive set theory is any of several theories of sets used in the discussion of the foundations of mathematics.
Unlike axiomatic set theories, which are defined using formal logic, naive set theory is defined informally, in natural language. It describes the aspects of mathematical sets familiar in discrete mathematics (for example Venn diagrams and symbolic reasoning about their Boolean algebra), and suffices for the everyday use of set theory concepts in contemporary mathematics.
Sets are of great importance in mathematics; in modern formal treatments, most mathematical objects (numbers, relations, functions, etc.) are defined in terms of sets. Naive set theory suffices for many purposes, while also serving as a stepping stone towards more formal treatments.
Method
A naive theory in the sense of "naive set theory" is a non-formalized theory, that is, a theory that uses natural language to describe sets and operations on sets. The words and, or, if ... then, not, for some, for every are treated as in ordinary mathematics. As a matter of convenience, use of naive set theory and its formalism prevails even in higher mathematics – including in more formal settings of set theory itself.
The first development of set theory was a naive set theory. It was created at the end of the 19th century by Georg Cantor as part of his study of infinite sets and developed by Gottlob Frege in his Grundgesetze der Arithmetik.
Naive set theory may refer to several very distinct notions. It may refer to
Informal presentation of an axiomatic set theory, e.g. as in Naive Set Theory by Paul Halmos.
Early or later versions of Georg Cantor's theory and other informal systems.
Decidedly inconsistent theories (whether axiomatic or not), such as a theory of Gottlob Frege that yielded Russell's paradox, and theories of Giuseppe Peano and Richard Dedekind.
Paradoxes
The assumption that any property may be used to form a set, without restriction, leads to paradoxes. One common example is Russell's paradox: there is no set consisting of "all sets that do not contain themselves". Thus consistent systems of naive set theory must include some limitations on the principles which can be used to form sets.
Cantor's theory
Some believe that Georg Cantor's set theory was not actually implicated in the set-theoretic paradoxes (see Frápolli 1991). One difficulty in determining this with certainty is that Cantor did not provide an axiomatization of his system. By 1899, Cantor was aware of some of the paradoxes following from unrestricted interpretation of his theory, for instance Cantor's paradox and the Burali-Forti paradox, and did not believe that they discredited his theory. Cantor's paradox can actually be derived from the above (false) assumption—that any property P(x) may be used to form a set—using for P(x) "x is a cardinal number". Frege explicitly axiomatized a theory in which a formalized version of naive set theory can be interpreted, and it is this formal theory which Bertrand R
|
https://en.wikipedia.org/wiki/B%C3%A9zout%27s%20identity
|
In mathematics, Bézout's identity (also called Bézout's lemma), named after Étienne Bézout who proved it for polynomials, is the following theorem: if a and b are integers with greatest common divisor d, then there exist integers x and y such that ax + by = d; moreover, the integers of the form az + bt are exactly the multiples of d.
Here the greatest common divisor of 0 and 0 is taken to be 0. The integers x and y are called Bézout coefficients for (a, b); they are not unique. A pair of Bézout coefficients can be computed by the extended Euclidean algorithm, and this pair is, in the case of integers, one of the two pairs such that |x| ≤ |b/d| and |y| ≤ |a/d|; equality occurs only if one of a and b is a multiple of the other.
As an example, the greatest common divisor of 15 and 69 is 3, and 3 can be written as a combination of 15 and 69 as 3 = 15 × (−9) + 69 × 2, with Bézout coefficients −9 and 2.
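A pair of Bézout coefficients falls out of the extended Euclidean algorithm mentioned above; a compact Python version reproducing the example (the function name is illustrative only):

    def extended_gcd(a, b):
        """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
        old_r, r = a, b
        old_x, x = 1, 0
        old_y, y = 0, 1
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_x, x = x, old_x - q * x
            old_y, y = y, old_y - q * y
        return old_r, old_x, old_y

    g, x, y = extended_gcd(15, 69)
    print(g, x, y)                     # 3 -9 2, matching the example above
    assert 15 * x + 69 * y == g == 3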
Many other theorems in elementary number theory, such as Euclid's lemma or the Chinese remainder theorem, result from Bézout's identity.
A Bézout domain is an integral domain in which Bézout's identity holds. In particular, Bézout's identity holds in principal ideal domains. Every theorem that results from Bézout's identity is thus true in all principal ideal domains.
Structure of solutions
If a and b are not both zero and one pair of Bézout coefficients (x, y) has been computed (for example, using the extended Euclidean algorithm), all pairs can be represented in the form
(x − k b/d, y + k a/d),
where k is an arbitrary integer, d is the greatest common divisor of a and b, and the fractions simplify to integers.
If a and b are both nonzero, then exactly two of these pairs of Bézout coefficients satisfy
|x| ≤ |b/d| and |y| ≤ |a/d|,
and equality may occur only if one of a and b divides the other.
This relies on a property of Euclidean division: given two non-zero integers c and d, if d does not divide c, there is exactly one pair (q, r) such that c = qd + r and 0 < r < |d|, and another one such that c = qd + r and −|d| < r < 0.
The two pairs of small Bézout's coefficients are obtained from the given one (x, y) by choosing for k in the above formula either of the two integers next to x/(b/d).
The extended Euclidean algorithm always produces one of these two minimal pairs.
Example
Let a = 12 and b = 42; then d = gcd(12, 42) = 6. Then the following Bézout's identities are had; the minimal pairs are (4, −1) and (−3, 1):
12 × (−10) + 42 × 3 = 6
12 × (−3) + 42 × 1 = 6
12 × 4 + 42 × (−1) = 6
12 × 11 + 42 × (−3) = 6
12 × 18 + 42 × (−5) = 6
If (x, y) = (18, −5) is the original pair of Bézout coefficients, then (18 − k × 42/6, −5 + k × 12/6) = (18 − 7k, −5 + 2k) yields the minimal pairs via k = 2, respectively k = 3; that is, (18 − 14, −5 + 4) = (4, −1), and (18 − 21, −5 + 6) = (−3, 1).
Proof
Given any nonzero integers a and b, let S = {ax + by | x, y ∈ Z and ax + by > 0}. The set S is nonempty since it contains either a or −a (with x = ±1 and y = 0). Since S is a nonempty set of positive integers, it has a minimum element d = as + bt, by the well-ordering principle. To prove that d is the greatest common divisor of a and b, it must be proven that d is a common divisor of a and b, and that for any other common divisor c, one has c ≤ d.
The Euclidean division of a by d may be written
a = dq + r with 0 ≤ r < d.
The remainder r is in S ∪ {0}, because
r = a − qd = a − q(as + bt) = a(1 − qs) − bqt.
Thus r is of the form ax + by, and hence r ∈ S ∪ {0}. However, 0 ≤ r < d, and d is the smallest positive integer in S: the remainder r can therefore not be in S, making r necessarily 0. This implies that d is a divisor of a. Similarly d is also a divisor of b, and therefore d is a common divisor of a and b.
Now, let c be any common divisor of a and b; that is,
|
https://en.wikipedia.org/wiki/Bernoulli%20number
|
In mathematics, the Bernoulli numbers are a sequence of rational numbers which occur frequently in analysis. The Bernoulli numbers appear in (and can be defined by) the Taylor series expansions of the tangent and hyperbolic tangent functions, in Faulhaber's formula for the sum of m-th powers of the first n positive integers, in the Euler–Maclaurin formula, and in expressions for certain values of the Riemann zeta function.
The values of the first 20 Bernoulli numbers are given in the adjacent table. Two conventions are used in the literature, denoted here by B_n^− and B_n^+; they differ only for n = 1, where B_1^− = −1/2 and B_1^+ = +1/2. For every odd n > 1, B_n = 0. For every even n > 0, B_n is negative if n is divisible by 4 and positive otherwise. The Bernoulli numbers are special values of the Bernoulli polynomials B_n(x), with B_n^− = B_n(0) and B_n^+ = B_n(1).
The Bernoulli numbers were discovered around the same time by the Swiss mathematician Jacob Bernoulli, after whom they are named, and independently by Japanese mathematician Seki Takakazu. Seki's discovery was posthumously published in 1712 in his work Katsuyō Sanpō; Bernoulli's, also posthumously, in his Ars Conjectandi of 1713. Ada Lovelace's note G on the Analytical Engine from 1842 describes an algorithm for generating Bernoulli numbers with Babbage's machine. As a result, the Bernoulli numbers have the distinction of being the subject of the first published complex computer program.
Notation
The superscript ± used in this article distinguishes the two sign conventions for Bernoulli numbers. Only the n = 1 term is affected:
B_n^− with B_1^− = −1/2 is the sign convention prescribed by NIST and most modern textbooks.
B_n^+ with B_1^+ = +1/2 was used in the older literature, and (since 2022) by Donald Knuth following Peter Luschny's "Bernoulli Manifesto".
In the formulas below, one can switch from one sign convention to the other with the relation B_n^+ = (−1)^n B_n^−, or for integer n = 2 or greater, simply ignore it.
Since B_n = 0 for all odd n > 1, and many formulas only involve even-index Bernoulli numbers, a few authors write "B_n" instead of B_{2n}. This article does not follow that notation.
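The B^− convention is also the natural output of the classical recurrence Σ_{j=0}^{m} C(m+1, j) B_j = 0 for m > 0, with B_0 = 1; a short exact-arithmetic Python sketch:

    from fractions import Fraction
    from math import comb

    def bernoulli(n):
        """B_0 .. B_n in the B^- convention (B_1 = -1/2), via the recurrence
        sum_{j=0}^{m} C(m+1, j) * B_j = 0 for m > 0, solved for B_m."""
        B = [Fraction(1)]
        for m in range(1, n + 1):
            acc = sum(comb(m + 1, j) * B[j] for j in range(m))
            B.append(-acc / (m + 1))
        return B

    print(bernoulli(8))
    # values: 1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30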
History
Early history
The Bernoulli numbers are rooted in the early history of the computation of sums of integer powers, which have been of interest to mathematicians since antiquity.
Methods to calculate the sum of the first n positive integers, the sum of the squares and of the cubes of the first n positive integers were known, but there were no real 'formulas', only descriptions given entirely in words. Among the great mathematicians of antiquity to consider this problem were Pythagoras (c. 572–497 BCE, Greece), Archimedes (287–212 BCE, Italy), Aryabhata (b. 476, India), Abu Bakr al-Karaji (d. 1019, Persia) and Abu Ali al-Hasan ibn al-Hasan ibn al-Haytham (965–1039, Iraq).
During the late sixteenth and early seventeenth centuries mathematicians made significant progress. In the West Thomas Harriot (1560–1621) of England, Johann Faulhaber (1580–1635) of Germany, Pierre de Fermat (1601–1665) and fellow French mathematician Blaise Pa
|
https://en.wikipedia.org/wiki/Balance
|
Balance may refer to:
Common meanings
Balance (ability) in biomechanics
Balance (accounting)
Balance or weighing scale
Balance, as in equality (mathematics) or equilibrium
Arts and entertainment
Film
Balance (1983 film), a Bulgarian film
Balance (1989 film), a short animated film
La Balance, a 1982 French film
Television
Balance: Television for Living Well, a Canadian television talk show
"The Balance" (Roswell), an episode of the television series Roswell
"The Balance", an episode of the animated series Justice League
Music
Performers
Balance (band), a 1980s pop-rock group
Albums
Balance (Akrobatik album), 2003
Balance (Kim-Lian album), 2004
Balance (Leo Kottke album), 1978
Balance (Joe Morris album), 2014
Balance (Swollen Members album), 1999
Balance (Ty Tabor album), 2008
Balance (Van Halen album), 1995
Balance (Armin van Buuren album), 2019
The Balance, a 2019 album by Catfish and the Bottlemen
Songs
"Balance", a song by Axium from The Story Thus Far
"Balance", a song by Band-Maid from Unleash
"Balance", a song by Ed Sheeran from - (Deluxe vinyl edition)
"The Balance", a Moody Blues song on the 1970 album A Question of Balance
Other
Balance (game design), the concept and the practice of tuning relationships between a game's component systems
Balance (installation), a 2013 glazed ceramic installation by Tim Ryan
Balance (puzzle), a mathematical puzzle
"Balance", a poem by Patti Smith from the book kodak
Government and law
BALANCE Act (Benefit Authors without Limiting Advancement or Net Consumer Expectations Act), a proposed US federal legislation
Balance (apportionment), a criterion for fair allocation of seats among parties or states
Other uses
Balance (advertisement), a 1989 award-winning television advertisement for the Lexus LS 400
Balance (metaphysics), a desirable point between two or more opposite forces
Balance (stereo), the amount of signal from each channel reproduced in a stereo audio recording
The Balance, a personal finance website owned by Dotdash
See also
Balancing (disambiguation)
Balanced, a wine tasting descriptor
|
https://en.wikipedia.org/wiki/Combinatorics
|
Combinatorics is an area of mathematics primarily concerned with counting, both as a means and an end in obtaining results, and certain properties of finite structures. It is closely related to many other areas of mathematics and has many applications ranging from logic to statistical physics and from evolutionary biology to computer science.
Combinatorics is well known for the breadth of the problems it tackles. Combinatorial problems arise in many areas of pure mathematics, notably in algebra, probability theory, topology, and geometry, as well as in its many application areas. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general theoretical methods were developed, making combinatorics into an independent branch of mathematics in its own right. One of the oldest and most accessible parts of combinatorics is graph theory, which by itself has numerous natural connections to other areas. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms.
A mathematician who studies combinatorics is called a combinatorialist.
Definition
The full scope of combinatorics is not universally agreed upon. According to H.J. Ryser, a definition of the subject is difficult because it crosses so many mathematical subdivisions. Insofar as an area can be described by the types of problems it addresses, combinatorics is involved with:
the enumeration (counting) of specified structures, sometimes referred to as arrangements or configurations in a very general sense, associated with finite systems,
the existence of such structures that satisfy certain given criteria,
the construction of these structures, perhaps in many ways, and
optimization: finding the "best" structure or solution among several possibilities, be it the "largest", "smallest" or satisfying some other optimality criterion.
Leon Mirsky has said: "combinatorics is a range of linked studies which have something in common and yet diverge widely in their objectives, their methods, and the degree of coherence they have attained." One way to define combinatorics is, perhaps, to describe its subdivisions with their problems and techniques. This is the approach that is used below. However, there are also purely historical reasons for including or not including some topics under the combinatorics umbrella. Although primarily concerned with finite systems, some combinatorial questions and techniques can be extended to an infinite (specifically, countable) but discrete setting.
History
Basic combinatorial concepts and enumerative results appeared throughout the ancient world. Indian physician Sushruta asserts in Sushruta Samhita that 63 combinations can be made out of 6 different tastes, taken one at a time, two at a time, etc., thus computing all 2^6 − 1 possibilities. Greek historian Plutarch discusse
|
https://en.wikipedia.org/wiki/Calculus
|
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape, and algebra is the study of generalizations of arithmetic operations.
It has two major branches, differential calculus and integral calculus; the former concerns instantaneous rates of change, and the slopes of curves, while the latter concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus, and they make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit.
Infinitesimal calculus was developed independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Later work, including codifying the idea of limits, put these developments on a more solid conceptual footing. Today, calculus has widespread uses in science, engineering, and social science.
Etymology
In mathematics education, calculus denotes courses of elementary mathematical analysis, which are mainly devoted to the study of functions and limits. The word calculus is Latin for "small pebble" (the diminutive of calx, meaning "stone"), a meaning which still persists in medicine. Because such pebbles were used for counting out distances, tallying votes, and doing abacus arithmetic, the word came to mean a method of computation. In this sense, it was used in English at least as early as 1672, several years before the publications of Leibniz and Newton.
In addition to differential calculus and integral calculus, the term is also used for naming specific methods of calculation and related theories that seek to model a particular concept in terms of mathematics. Examples of this convention include propositional calculus, Ricci calculus, calculus of variations, lambda calculus, and process calculus. Furthermore, the term "calculus" has variously been applied in ethics and philosophy, for such systems as Bentham's felicific calculus, and the ethical calculus.
History
Modern calculus was developed in 17th-century Europe by Isaac Newton and Gottfried Wilhelm Leibniz (independently of each other, first publishing around the same time) but elements of it first appeared in ancient Egypt and later Greece, then in China and the Middle East, and still later again in medieval Europe and India.
Ancient precursors
Egypt
Calculations of volume and area, one goal of integral calculus, can be found in the Egyptian Moscow papyrus ( BC), but the formulae are simple instructions, with no indication as to how they were obtained.
Greece
Laying the foundations for integral calculus and foreshadowing the concept of the limit, ancient Greek mathematician Eudoxus of Cnidus ( – 337 BC) developed the method of exhaustion to prove the formulas for cone and pyramid volumes.
During the Hellenistic period, this method was further developed by Archimedes ( – ), who combined it with a concept of the indivisibles—a precursor to
|
https://en.wikipedia.org/wiki/Demographics%20of%20Canada
|
Statistics Canada conducts a country-wide census that collects demographic data every five years on the first and sixth year of each decade. The 2021 Canadian census enumerated a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. Between 2011 and May 2016, Canada's population grew by 1.7 million people, with immigrants accounting for two-thirds of the increase. Between 1990 and 2008, the population increased by 5.6 million, equivalent to 20.4 percent overall growth. The main driver of population growth is immigration and, to a lesser extent, natural growth.
Canada has one of the highest per-capita immigration rates in the world, driven mainly by economic policy and, to a lesser extent, family reunification. In 2021, a total of 405,330 immigrants were admitted to Canada, mainly from Asia. New immigrants settle mostly in major urban areas such as Toronto, Montreal, and Vancouver. Canada also accepts large numbers of refugees, accounting for over 10 percent of annual global refugee resettlements.
Population
The 2021 Canadian census had a total population count of 36,991,981 individuals, making up approximately 0.5% of the world's total population. A population estimate for 2023 put the total number of people in Canada at 40,097,761.
Demographic statistics according to the World Population Review in 2022.
One birth every minute
One death every 2 minutes
One net migrant every 2 minutes
Net gain of one person every minute
Death rate
8.12 deaths/1,000 population (2022 est.) Country comparison to the world: 81st
Net migration rate
5.46 migrant(s)/1,000 population (2022 est.) Country comparison to the world: 21st
Urbanization
urban population: 81.8% of total population (2022)
rate of urbanization: 0.95% annual rate of change (2020–25 est.)
Provinces and territories
Population distribution
The vast majority of Canadians are positioned in a discontinuous band within approximately 300 km of the southern border with the United States; the most populated province is Ontario, followed by Quebec and British Columbia.
Sources: Statistics Canada
Cities
Census metropolitan areas
Fertility rate
The total fertility rate is the ratio of the number of children born in a specific year cohort to the total number of women who can give birth in the country.
In 1971, the birth rate for the first time dipped below replacement and since then has not rebounded.
Canada’s fertility rate hit a record low of 1.4 children born per woman in 2020, below the population replacement level, which stands at 2.1 births per woman. In 2020, Canada also experienced the country’s lowest number of births in 15 years, also seeing the largest annual drop in childbirths (-3.6%) in a quarter of a century. The total birth rate is 10.17 births/1,000 population in 2022.
Mother's mean age at first birth
Canada is among late-childbearing countries, with the average age of mothers at the first birth being 31.3 years in 2020.
Population projec
|
https://en.wikipedia.org/wiki/Car%20%28disambiguation%29
|
A car is a wheeled motor vehicle used for transporting passengers.
Car(s), CAR(s), or The Car(s) may also refer to:
Computing
C.a.R. (Z.u.L.), geometry software
CAR and CDR, commands in LISP computer programming
Clock with Adaptive Replacement, a page replacement algorithm
Computer-assisted reporting
Computer-assisted reviewing
Economics
Capital adequacy ratio, a ratio of a bank's capital to its risk
Cost accrual ratio, an accounting formula
Cumulative abnormal return
Cumulative average return, a financial concept related to the time value of money
Film and television
Cars (franchise), a Disney/Pixar film series
Cars (film), a 2006 computer-animated film from Disney and Pixar
The Car (1977 film), an American horror film
Car, a BBC Two television ident first aired in 1993 (see BBC Two '1991–2001' idents)
The Car (1997 film), a Malayalam film
"The Car" (The Assistants episode)
Literature
Car (magazine), a British auto-enthusiast publication
The Car (novel), a novel by Gary Paulsen
Military
Canadian Airborne Regiment, a Canadian Forces formation
Colt Automatic Rifle, a 5.56mm NATO firearm
Combat Action Ribbon, a United States military decoration
U.S. Army Combat Arms Regimental System, a 1950s reorganisation of the regiments of the US Army
Conflict Armament Research, a UK-based investigative organization that tracks the supply of armaments into conflict-affected areas
Music
The Cars, an American band
Albums
Peter Gabriel (1977 album) or Car
The Cars (album), a 1978 album by The Cars
Cars (soundtrack), the soundtrack to the 2006 film
Cars (Now, Now Every Children album), 2009
Cars, a 2011 album by Kris Delmhorst
C.A.R. (album), a 2012 album by Serengeti
The Car (album), a 2022 album by Arctic Monkeys
Songs
"The Car" (song), a song by Jeff Carson
"Cars" (song), a 1979 single by Gary Numan
"Car", a 1994 song by Built to Spill from There's Nothing Wrong with Love
Paintings
Cars (painting), a series of paintings by Andy Warhol
The Car (Brack), a 1955 painting by John Brack
People
Car (surname)
Cars (surname)
Places
Car, Azerbaijan, a village
Čar, a village in Serbia
Cars, Gironde, France, a commune
Les Cars, Haute-Vienne, France, a commune
Central African Republic
Central Asian Republics
Cordillera Administrative Region, Philippines
County Carlow, Ireland, Chapman code
Science
Canonical anticommutation relation
Carina (constellation)
Chimeric antigen receptor, artificial T cell receptors
Coherent anti-Stokes Raman spectroscopy
Constitutive androstane receptor
Cortisol awakening response, on waking from sleep
Coxsackievirus and adenovirus receptor, a protein
Sports
Carolina Hurricanes, a National Hockey League team
Carolina Panthers, a National Football League team
Club Always Ready, a Bolivian football club from La Paz
Rugby Africa, formerly known as Confederation of African Rugby
Transportation
Railroad car
Canada Atlantic Railway, 1879–1914
Canadian Atlantic Railway, 1986–1994
C
|
https://en.wikipedia.org/wiki/Conditional
|
Conditional (if then) may refer to:
Causal conditional, if X then Y, where X is a cause of Y
Conditional probability, the probability of an event A given that another event B has occurred
Conditional proof, in logic: a proof that asserts a conditional, and proves that the antecedent leads to the consequent
Strict conditional, in philosophy, logic, and mathematics
Material conditional, in propositional calculus, or logical calculus in mathematics
Relevance conditional, in relevance logic
Conditional (computer programming), a statement or expression in computer programming languages
A conditional expression in computer programming languages such as ?:
Conditions in a contract
Grammar and linguistics
Conditional mood (or conditional tense), a verb form in many languages
Conditional sentence, a sentence type used to refer to hypothetical situations and their consequences
Indicative conditional, a conditional sentence expressing "if A then B" in a natural language
Counterfactual conditional, a conditional sentence indicating what would be the case if its antecedent were true
Other
"Conditional" (Laura Mvula song)
Conditional jockey, an apprentice jockey in British or Irish National Hunt racing
Conditional short-circuit current
Conditional Value-at-Risk
See also
Condition (disambiguation)
Conditional statement (disambiguation)
|
https://en.wikipedia.org/wiki/Cone%20%28disambiguation%29
|
A cone is a basic geometrical shape.
Cone may also refer to:
Mathematics
Cone (category theory)
Cone (formal languages)
Cone (graph theory), a graph in which one vertex is adjacent to all others
Cone (linear algebra), a subset of vector space
Mapping cone (homological algebra)
Cone (topology)
Conic bundle, a concept in algebraic geometry
Conical surface, generated by a moving line with one fixed point
Projective cone, the union of all lines that intersect a projective subspace and an arbitrary subset of some other disjoint subspace
Computing
Cone tracing, a derivative of the ray-tracing algorithm that replaces rays, which have no thickness, with cones
Second-order cone programming, a library of routines that implements a predictor corrector variant of the semidefinite programming algorithm
Astronomy
Cone Nebula (also known as NGC 2264), an H II region in the constellation of Monoceros
Ionization cone, cones of material extending out from spiral galaxies
Engineering and physical science
Antenna blind cone, the volume of space that cannot be scanned by an antenna
Carbon nanocones, conical structures which are made predominantly from carbon and which have at least one dimension of the order one micrometer or smaller
Cone algorithm identifies surface particles quickly and accurately for three-dimensional clusters composed of discrete particles
Cone beam reconstruction, a method of X-ray scanning in microtomography
Cone calorimeter, a modern device used to study the fire behavior of small samples of various materials in condensed phase
Cone clutch serves the same purpose as a disk or plate clutch
Cone of depression occurs in an aquifer when groundwater is pumped from a well
Cone penetration test (CPT), an in situ testing method used to determine the geotechnical engineering properties of soils
Cone Penetrometer apparatus, an alternative method to the Casagrande Device in measuring the Liquid Limit of a soil sample
Conical intersection of two potential energy surfaces of the same spatial and spin symmetries
Conical measure, a type of graduated laboratory glassware with a conical cup and a notch on the top to facilitate pouring of liquids
Conical mill (or conical screen mill), a machine used to reduce the size of material in a uniform manner
Conical pendulum, a weight (or bob) fixed on the end of a string (or rod) suspended from a pivot
Conical scanning, a system used in early radar units to improve their accuracy
Helical cone beam computed tomography, a type of three-dimensional computed tomography
Hertzian cone, the cone of force that propagates through a brittle, amorphous or cryptocrystalline solid material from a point of impact
Nose cone, used to refer to the forwardmost section of a rocket, guided missile or aircraft
Pyrometric cone, pyrometric devices that are used to gauge time and temperature during the firing of ceramic materials
Roller cone bit, a drill bit used for drilling through rock, for example when drilling for oil and gas
Skid c
|
https://en.wikipedia.org/wiki/Combination
|
In mathematics, a combination is a selection of items from a set that has distinct members, such that the order of selection does not matter (unlike permutations). For example, given three fruits, say an apple, an orange and a pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange. More formally, a k-combination of a set S is a subset of k distinct elements of S. So, two combinations are identical if and only if each combination has the same members. (The arrangement of the members in each set does not matter.) If the set has n elements, the number of k-combinations, denoted by C(n, k), is equal to the binomial coefficient
C(n, k) = (n × (n − 1) × ⋯ × (n − k + 1)) / (k × (k − 1) × ⋯ × 1),
which can be written using factorials as n! / (k! (n − k)!) whenever k ≤ n, and which is zero when k > n. This formula can be derived from the fact that each k-combination of a set S of n members has k! permutations, so P(n, k) = C(n, k) × k! or C(n, k) = P(n, k) / k!. The set of all k-combinations of a set S is often denoted by the binomial-coefficient symbol with S in place of n.
A combination is a combination of n things taken k at a time without repetition. To refer to combinations in which repetition is allowed, the terms k-combination with repetition, k-multiset, or k-selection, are often used. If, in the above example, it were possible to have two of any one kind of fruit there would be 3 more 2-selections: one with two apples, one with two oranges, and one with two pears.
Although the set of three fruits was small enough to write a complete list of combinations, this becomes impractical as the size of the set increases. For example, a poker hand can be described as a 5-combination (k = 5) of cards from a 52 card deck (n = 52). The 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1 / 2,598,960.
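Both counts in this section are one call away in Python's standard library, and itertools can enumerate the combinations themselves (the fruit names echo the earlier example):

    from itertools import combinations
    from math import comb

    # The three 2-combinations of {apple, orange, pear}
    print(list(combinations(["apple", "orange", "pear"], 2)))

    # Number of 5-card poker hands from a 52-card deck, and the chance of one hand
    print(comb(52, 5))          # 2598960
    print(1 / comb(52, 5))      # ~3.85e-07, i.e. 1 in 2,598,960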
Number of k-combinations
The number of k-combinations from a given set S of n elements is often denoted in elementary combinatorics texts by C(n, k), or by a variation such as C^n_k, nCk, C_{n,k}, or even C_n^k (the last form is standard in French, Romanian, Russian, Chinese and Polish texts). The same number however occurs in many other mathematical contexts, where it is denoted by the binomial-coefficient symbol (often read as "n choose k"); notably it occurs as a coefficient in the binomial formula, hence its name binomial coefficient. One can define C(n, k) for all natural numbers k at once by the relation
(1 + X)^n = Σ_{k≥0} C(n, k) X^k,
from which it is clear that
C(n, 0) = C(n, n) = 1,
and further,
C(n, k) = 0 for k > n.
To see that these coefficients count k-combinations from S, one can first consider a collection of n distinct variables X_s labeled by the elements s of S, and expand the product over all elements of S:
∏_{s∈S} (1 + X_s);
it has 2^n distinct terms corresponding to all the subsets of S, each subset giving the product of the corresponding variables X_s. Now setting all of the X_s equal to the unlabeled variable X, so that the product becomes (1 + X)^n, the term for each k-combination from S becomes X^k, so that the coefficient of that power in the result equals the num
|
https://en.wikipedia.org/wiki/Condom
|
A condom is a sheath-shaped barrier device used during sexual intercourse to reduce the probability of pregnancy or a sexually transmitted infection (STI). There are both male and female condoms.
The male condom is rolled onto an erect penis before intercourse and works by forming a physical barrier which blocks semen from entering the body of a sexual partner. Male condoms are typically made from latex and, less commonly, from polyurethane, polyisoprene, or lamb intestine. Male condoms have the advantages of ease of use, ease of access, and few side effects. Individuals with latex allergy should use condoms made from a material other than latex, such as polyurethane. Female condoms are typically made from polyurethane and may be used multiple times.
With proper use—and use at every act of intercourse—women whose partners use male condoms experience a 2% per-year pregnancy rate. With typical use, the rate of pregnancy is 18% per-year. Their use greatly decreases the risk of gonorrhea, chlamydia, trichomoniasis, hepatitis B, and HIV/AIDS. To a lesser extent, they also protect against genital herpes, human papillomavirus (HPV), and syphilis.
Condoms as a method of preventing STIs have been used since at least 1564. Rubber condoms became available in 1855, followed by latex condoms in the 1920s. It is on the World Health Organization's List of Essential Medicines. As of 2019, globally around 21% of those using birth control use the condom, making it the second-most common method after female sterilization (24%). Rates of condom use are highest in East and Southeast Asia, Europe and North America. About six to nine billion are sold a year.
Medical uses
Birth control
The effectiveness of condoms, as of most forms of contraception, can be assessed two ways. Perfect use or method effectiveness rates only include people who use condoms properly and consistently. Actual use, or typical use effectiveness rates are of all condom users, including those who use condoms incorrectly or do not use condoms at every act of intercourse. Rates are generally presented for the first year of use. Most commonly the Pearl Index is used to calculate effectiveness rates, but some studies use decrement tables.
The typical use pregnancy rate among condom users varies depending on the population being studied, ranging from 10 to 18% per year. The perfect use pregnancy rate of condoms is 2% per year. Condoms may be combined with other forms of contraception (such as spermicide) for greater protection.
Sexually transmitted infections
Condoms are widely recommended for the prevention of sexually transmitted infections (STIs). They have been shown to be effective in reducing infection rates in both men and women. While not perfect, the condom is effective at reducing the transmission of organisms that cause AIDS, genital herpes, cervical cancer, genital warts, syphilis, chlamydia, gonorrhea, and other diseases. Condoms are often recommended as an adjunct to more effect
|
https://en.wikipedia.org/wiki/Continuum%20hypothesis
|
In mathematics, specifically set theory, the continuum hypothesis (abbreviated CH) is a hypothesis about the possible sizes of infinite sets. It states that
there is no set whose cardinality is strictly between that of the integers and the real numbers,
or equivalently, that
any subset of the real numbers is either finite, countably infinite, or has the same cardinality as the real numbers.
In Zermelo–Fraenkel set theory with the axiom of choice (ZFC), this is equivalent to the following equation in aleph numbers: 2^ℵ₀ = ℵ₁, or even shorter with beth numbers: ℶ₁ = ℵ₁.
The continuum hypothesis was advanced by Georg Cantor in 1878, and establishing its truth or falsehood is the first of Hilbert's 23 problems presented in 1900. The answer to this problem is independent of ZFC, so that either the continuum hypothesis or its negation can be added as an axiom to ZFC set theory, with the resulting theory being consistent if and only if ZFC is consistent. This independence was proved in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940.
The name of the hypothesis comes from the term the continuum for the real numbers.
History
Cantor believed the continuum hypothesis to be true and for many years tried in vain to prove it. It became the first on David Hilbert's list of important open questions that was presented at the International Congress of Mathematicians in the year 1900 in Paris. Axiomatic set theory was at that point not yet formulated.
Kurt Gödel proved in 1940 that the negation of the continuum hypothesis, i.e., the existence of a set with intermediate cardinality, could not be proved in standard set theory. The second half of the independence of the continuum hypothesis – i.e., unprovability of the nonexistence of an intermediate-sized set – was proved in 1963 by Paul Cohen.
Cardinality of infinite sets
Two sets are said to have the same cardinality or cardinal number if there exists a bijection (a one-to-one correspondence) between them. Intuitively, for two sets S and T to have the same cardinality means that it is possible to "pair off" elements of S with elements of T in such a fashion that every element of S is paired off with exactly one element of T and vice versa. Hence, the set {banana, apple, pear} has the same cardinality as {yellow, red, green}.
With infinite sets such as the set of integers or rational numbers, the existence of a bijection between two sets becomes more difficult to demonstrate. The rational numbers seemingly form a counterexample to the continuum hypothesis: the integers form a proper subset of the rationals, which themselves form a proper subset of the reals, so intuitively, there are more rational numbers than integers and more real numbers than rational numbers. However, this intuitive analysis is flawed; it does not take proper account of the fact that all three sets are infinite. It turns out the rational numbers can actually be placed in one-to-one correspondence with the integers, and therefore the set of rational numbers is the same size (cardinality) as the set of integers: they are both countable sets.
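One way to make the countability of the rationals concrete is an explicit enumeration without repetitions. The Python sketch below uses the Calkin–Wilf recurrence, chosen purely as an illustration (the article's claim does not depend on this particular bijection): starting from 1, it visits every positive rational exactly once.

    from fractions import Fraction
    from math import floor

    def first_rationals(n_terms):
        """First n_terms of the Calkin-Wilf sequence, which enumerates every
        positive rational exactly once: q' = 1 / (2*floor(q) - q + 1)."""
        q, out = Fraction(1), []
        for _ in range(n_terms):
            out.append(q)
            q = 1 / (2 * floor(q) - q + 1)
        return out

    print(first_rationals(8))
    # 1, 1/2, 2, 1/3, 3/2, 2/3, 3, 1/4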
Cantor gave two proofs that the cardinality of the set of integers is strictly smaller than that of t
|
https://en.wikipedia.org/wiki/Cumulative%20distribution%20function
|
In probability theory and statistics, the cumulative distribution function (CDF) of a real-valued random variable X, or just distribution function of X, evaluated at x, is the probability that X will take a value less than or equal to x.
Every probability distribution supported on the real numbers, discrete or "mixed" as well as continuous, is uniquely identified by a right-continuous monotone increasing function (a càdlàg function) F : R → [0, 1] satisfying lim_{x→−∞} F(x) = 0 and lim_{x→∞} F(x) = 1.
In the case of a scalar continuous distribution, it gives the area under the probability density function from minus infinity to x. Cumulative distribution functions are also used to specify the distribution of multivariate random variables.
Definition
The cumulative distribution function of a real-valued random variable X is the function given by
F_X(x) = P(X ≤ x),
where the right-hand side represents the probability that the random variable X takes on a value less than or equal to x.
The probability that X lies in the semi-closed interval (a, b], where a < b, is therefore
P(a < X ≤ b) = F_X(b) − F_X(a).
In the definition above, the "less than or equal to" sign, "≤", is a convention, not a universally used one (e.g. Hungarian literature uses "<"), but the distinction is important for discrete distributions. The proper use of tables of the binomial and Poisson distributions depends upon this convention. Moreover, important formulas like Paul Lévy's inversion formula for the characteristic function also rely on the "less than or equal" formulation.
If treating several random variables X, Y, … etc. the corresponding letters are used as subscripts while, if treating only one, the subscript is usually omitted. It is conventional to use a capital F for a cumulative distribution function, in contrast to the lower-case f used for probability density functions and probability mass functions. This applies when discussing general distributions: some specific distributions have their own conventional notation, for example the normal distribution uses Φ and φ instead of F and f, respectively.
The probability density function of a continuous random variable can be determined from the cumulative distribution function by differentiating using the Fundamental Theorem of Calculus; i.e. given F(x),
f(x) = dF(x)/dx,
as long as the derivative exists.
The CDF of a continuous random variable X can be expressed as the integral of its probability density function f_X as follows:
F_X(x) = ∫_{−∞}^{x} f_X(t) dt.
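These relations are easy to check numerically. A small Python sketch for the normal distribution, whose CDF has the well-known closed form Φ(x) = (1 + erf(x/√2))/2 (the evaluation points are arbitrary):

    from math import erf, exp, pi, sqrt

    def normal_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def normal_pdf(x):
        return exp(-0.5 * x * x) / sqrt(2.0 * pi)

    # P(a < X <= b) = F(b) - F(a): a standard normal lies in (-1, 1] w.p. ~0.6827
    print(normal_cdf(1.0) - normal_cdf(-1.0))

    # Differentiating the CDF recovers the PDF (finite-difference check at x = 0.3)
    h = 1e-6
    assert abs((normal_cdf(0.3 + h) - normal_cdf(0.3 - h)) / (2 * h)
               - normal_pdf(0.3)) < 1e-6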
In the case of a random variable X which has a distribution having a discrete component at a value b,
P(X = b) = F_X(b) − lim_{x→b⁻} F_X(x).
If F_X is continuous at b, this equals zero and there is no discrete component at b.
Properties
Every cumulative distribution function F_X is non-decreasing and right-continuous, which makes it a càdlàg function. Furthermore,
lim_{x→−∞} F_X(x) = 0 and lim_{x→∞} F_X(x) = 1.
Every function with these four properties is a CDF, i.e., for every such function, a random variable can be defined such that the function is the cumulative distribution function of that random variable.
If X is a purely discrete random variable, then it attains values x_1, x_2, … with probability p_i = P(X = x_i), and the CDF of X will be discontinuous at
|
https://en.wikipedia.org/wiki/Central%20tendency
|
In statistics, a central tendency (or measure of central tendency) is a central or typical value for a probability distribution.
Colloquially, measures of central tendency are often called averages. The term central tendency dates from the late 1920s.
The most common measures of central tendency are the arithmetic mean, the median, and the mode. A middle tendency can be calculated for either a finite set of values or for a theoretical distribution, such as the normal distribution. Occasionally authors use central tendency to denote "the tendency of quantitative data to cluster around some central value."
The central tendency of a distribution is typically contrasted with its dispersion or variability; dispersion and central tendency are the most frequently characterized properties of distributions. Analysts may judge whether data have a strong or a weak central tendency based on their dispersion.
Measures
The following may be applied to one-dimensional data. Depending on the circumstances, it may be appropriate to transform the data before calculating a central tendency. Examples are squaring the values or taking logarithms. Whether a transformation is appropriate and what it should be, depend heavily on the data being analyzed.
Arithmetic mean (or simply, mean): the sum of all measurements divided by the number of observations in the data set.
Median: the middle value that separates the higher half from the lower half of the data set. The median and the mode are the only measures of central tendency that can be used for ordinal data, in which values are ranked relative to each other but are not measured absolutely.
Mode: the most frequent value in the data set. This is the only central tendency measure that can be used with nominal data, which have purely qualitative category assignments.
Generalized mean: a generalization of the Pythagorean means, specified by an exponent.
Geometric mean: the nth root of the product of the data values, where there are n of these. This measure is valid only for data that are measured absolutely on a strictly positive scale.
Harmonic mean: the reciprocal of the arithmetic mean of the reciprocals of the data values. This measure too is valid only for data that are measured absolutely on a strictly positive scale.
Weighted arithmetic mean: an arithmetic mean that incorporates weighting to certain data elements.
Truncated mean (or trimmed mean): the arithmetic mean of data values after a certain number or proportion of the highest and lowest data values have been discarded.
Interquartile mean: a truncated mean based on data within the interquartile range.
Midrange: the arithmetic mean of the maximum and minimum values of a data set.
Midhinge: the arithmetic mean of the first and third quartiles.
Quasi-arithmetic mean: a generalization of the generalized mean, specified by a continuous injective function.
Trimean: the weighted arithmetic mean of the median and two quartiles.
Winsorized mean: an arithmetic mean in which extreme
|
https://en.wikipedia.org/wiki/Cluster%20sampling
|
In statistics, cluster sampling is a sampling plan used when mutually homogeneous yet internally heterogeneous groupings are evident in a statistical population. It is often used in marketing research.
In this sampling plan, the total population is divided into these groups (known as clusters) and a simple random sample of the groups is selected. The elements in each cluster are then sampled. If all elements in each sampled cluster are sampled, then this is referred to as a "one-stage" cluster sampling plan. If a simple random subsample of elements is selected within each of these groups, this is referred to as a "two-stage" cluster sampling plan. A common motivation for cluster sampling is to reduce the total number of interviews and costs given the desired accuracy. For a fixed sample size, the expected random error is smaller when most of the variation in the population is present internally within the groups, and not between the groups.
Cluster elemental
The population within a cluster should ideally be as heterogeneous as possible, but there should be homogeneity between clusters. Each cluster should be a small-scale representation of the total population. The clusters should be mutually exclusive and collectively exhaustive. A random sampling technique is then used on any relevant clusters to choose which clusters to include in the study. In single-stage cluster sampling, all the elements from each of the selected clusters are sampled. In two-stage cluster sampling, a random sampling technique is applied to the elements from each of the selected clusters.
The main difference between cluster sampling and stratified sampling is that in cluster sampling the cluster is treated as the sampling unit so sampling is done on a population of clusters (at least in the first stage). In stratified sampling, the sampling is done on elements within each stratum. In stratified sampling, a random sample is drawn from each of the strata, whereas in cluster sampling only the selected clusters are sampled. A common motivation for cluster sampling is to reduce costs by increasing sampling efficiency. This contrasts with stratified sampling where the motivation is to increase precision.
There is also multistage cluster sampling, where at least two stages are taken in selecting elements from clusters.
When clusters are of different sizes
Without modifying the estimated parameter, cluster sampling is unbiased when the clusters are approximately the same size. In this case, the parameter is computed by combining all the selected clusters. When the clusters are of different sizes there are several options:
One method is to sample clusters and then survey all elements in that cluster. Another method is a two-stage method of sampling a fixed proportion of units (be it 5% or 50%, or another number, depending on cost considerations) from within each of the selected clusters. Relying on the sample drawn from these options will yield an unbiased estimator. Howe
|
https://en.wikipedia.org/wiki/Complex%20number
|
In mathematics, a complex number is an element of a number system that extends the real numbers with a specific element denoted $i$, called the imaginary unit and satisfying the equation $i^2 = -1$; every complex number can be expressed in the form $a + bi$, where $a$ and $b$ are real numbers. Because no real number satisfies the above equation, $i$ was called an imaginary number by René Descartes. For the complex number $a + bi$, $a$ is called the real part, and $b$ is called the imaginary part. The set of complex numbers is denoted by either of the symbols $\mathbb{C}$ or $\mathbf{C}$. Despite the historical nomenclature "imaginary", complex numbers are regarded in the mathematical sciences as just as "real" as the real numbers and are fundamental in many aspects of the scientific description of the natural world.
Complex numbers allow solutions to all polynomial equations, even those that have no solutions in real numbers. More precisely, the fundamental theorem of algebra asserts that every non-constant polynomial equation with real or complex coefficients has a solution which is a complex number. For example, the equation
$$(x + 1)^2 = -9$$
has no real solution, since the square of a real number cannot be negative, but has the two nonreal complex solutions $-1 + 3i$ and $-1 - 3i$.
Addition, subtraction and multiplication of complex numbers can be naturally defined by using the rule $i^2 = -1$ combined with the associative, commutative, and distributive laws. Every nonzero complex number has a multiplicative inverse. This makes the complex numbers a field that has the real numbers as a subfield. The complex numbers also form a real vector space of dimension two, with $\{1, i\}$ as a standard basis.
This standard basis makes the complex numbers a Cartesian plane, called the complex plane. This allows a geometric interpretation of the complex numbers and their operations, and conversely expressing in terms of complex numbers some geometric properties and constructions. For example, the real numbers form the real line which is identified to the horizontal axis of the complex plane. The complex numbers of absolute value one form the unit circle. The addition of a complex number is a translation in the complex plane, and the multiplication by a complex number is a similarity centered at the origin. The complex conjugation is the reflection symmetry with respect to the real axis. The complex absolute value is a Euclidean norm.
In summary, the complex numbers form a rich structure that is simultaneously an algebraically closed field, a commutative algebra over the reals, and a Euclidean vector space of dimension two.
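As a brief aside (not part of the standard exposition), these field operations are exactly what Python's built-in complex type implements, writing the imaginary unit as j:

z = 2 + 3j
w = 1 - 1j
print(z + w)   # (3+2j)
print(z * w)   # (5+1j), since (2+3j)(1-1j) = 2 - 2j + 3j - 3j**2 = 5 + j
print(1 / z)   # the multiplicative inverse of z, about (0.1538-0.2308j)
print(abs(z))  # the absolute value sqrt(2**2 + 3**2) ≈ 3.6056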
Definition
A complex number is a number of the form $a + bi$, where $a$ and $b$ are real numbers, and $i$ is an indeterminate satisfying $i^2 = -1$. For example, $2 + 3i$ is a complex number.
This way, a complex number is defined as a polynomial with real coefficients in the single indeterminate $i$, for which the relation $i^2 + 1 = 0$ is imposed. Based on this definition, complex numbers can be added and multiplied, using the addition and multiplication for polynomials. The relation $i^2 + 1 = 0$ induces the equalities $i^{4k} = 1$, $i^{4k+1} = i$, $i^{4k+2} = -1$, and $i^{4k+3} = -i$, which hold for all integers $k$.
|
https://en.wikipedia.org/wiki/Circumference
|
In geometry, the circumference (from Latin circumferens, meaning "carrying around") is the perimeter of a circle or ellipse. The circumference is the arc length of the circle, as if it were opened up and straightened out to a line segment. More generally, the perimeter is the curve length around any closed figure.
Circumference may also refer to the circle itself, that is, the locus corresponding to the edge of a disk.
The circumference of a sphere is the circumference, or length, of any one of its great circles.
Circle
The circumference of a circle is the distance around it, but if, as in many elementary treatments, distance is defined in terms of straight lines, this cannot be used as a definition. Under these circumstances, the circumference of a circle may be defined as the limit of the perimeters of inscribed regular polygons as the number of sides increases without bound. The term circumference is used when measuring physical objects, as well as when considering abstract geometric forms.
Relationship with π
The circumference of a circle is related to one of the most important mathematical constants. This constant, pi, is represented by the Greek letter $\pi$. The first few decimal digits of the numerical value of $\pi$ are 3.141592653589793 ... Pi is defined as the ratio of a circle's circumference $C$ to its diameter $d$:
$$\pi = \frac{C}{d}.$$
Or, equivalently, as the ratio of the circumference to twice the radius. The above formula can be rearranged to solve for the circumference:
$$C = \pi d = 2\pi r.$$
The ratio of the circle's circumference to its radius is called the circle constant, and is equivalent to $2\pi$. The value $2\pi$ is also the amount of radians in one turn. The use of the mathematical constant $\pi$ is ubiquitous in mathematics, engineering, and science.
In Measurement of a Circle, written circa 250 BCE, Archimedes showed that this ratio ($C/d$, since he did not use the name $\pi$) was greater than $3\tfrac{10}{71}$ but less than $3\tfrac{1}{7}$ by calculating the perimeters of an inscribed and a circumscribed regular polygon of 96 sides. This method for approximating $\pi$ was used for centuries, obtaining more accuracy by using polygons of larger and larger numbers of sides. The last such calculation was performed in 1630 by Christoph Grienberger, who used polygons with $10^{40}$ sides.
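Archimedes' side-doubling scheme is easy to reproduce. The following Python lines are a modern sketch of the idea, not Archimedes' own computation (the starting hexagon perimeters are exact; the rest is floating point):

import math

# Perimeters of circumscribed (a) and inscribed (b) regular n-gons
# around a circle of diameter 1, starting from the regular hexagon.
a = 2 * math.sqrt(3)  # circumscribed hexagon
b = 3.0               # inscribed hexagon
n = 6
while n < 96:
    a = 2 * a * b / (a + b)  # perimeter of the circumscribed 2n-gon
    b = math.sqrt(a * b)     # perimeter of the inscribed 2n-gon
    n *= 2
print(b, a)  # 3.1410... < pi < 3.1427..., consistent with 3 10/71 and 3 1/7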
Ellipse
Circumference is used by some authors to denote the perimeter of an ellipse. There is no general formula for the circumference of an ellipse in terms of the semi-major and semi-minor axes of the ellipse that uses only elementary functions. However, there are approximate formulas in terms of these parameters. One such approximation, due to Euler (1773), for the canonical ellipse
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1,$$
is
$$C \approx \pi \sqrt{2\left(a^2 + b^2\right)}.$$
Some lower and upper bounds on the circumference of the canonical ellipse with $a \geq b$ are:
$$2\pi b \leq C \leq 2\pi a,$$
$$\pi (a + b) \leq C \leq 4(a + b),$$
$$4\sqrt{a^2 + b^2} \leq C \leq \pi \sqrt{2\left(a^2 + b^2\right)}.$$
Here the upper bound $2\pi a$ is the circumference of a circumscribed concentric circle passing through the endpoints of the ellipse's major axis, and the lower bound $4\sqrt{a^2 + b^2}$ is the perimeter of an inscribed rhombus with vertices at the endpoints of the major and minor axes.
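The exact circumference can be computed from the complete elliptic integral of the second kind, available in SciPy as scipy.special.ellipe. A hedged sketch comparing it with Euler's approximation (the axis lengths are arbitrary example values):

import math
from scipy.special import ellipe  # complete elliptic integral of the 2nd kind

a, b = 5.0, 3.0                # semi-major and semi-minor axes, a >= b
m = 1 - (b / a) ** 2           # squared eccentricity of the ellipse
exact = 4 * a * ellipe(m)      # C = 4a E(m)
euler = math.pi * math.sqrt(2 * (a**2 + b**2))  # Euler's 1773 approximation
print(exact, euler)            # ≈ 25.527 versus ≈ 25.907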
The circumference of an ellipse can be expressed
|
https://en.wikipedia.org/wiki/Study%20heterogeneity
|
In statistics, (between-) study heterogeneity is a phenomenon that commonly occurs when attempting to undertake a meta-analysis. In a simplistic scenario, studies whose results are to be combined in the meta-analysis would all be undertaken in the same way and to the same experimental protocols. Differences between outcomes would only be due to measurement error (and studies would hence be homogeneous). Study heterogeneity denotes the variability in outcomes that goes beyond what would be expected (or could be explained) due to measurement error alone.
Introduction
Meta-analysis is a method used to combine the results of different trials in order to obtain a quantitative synthesis. The size of individual clinical trials is often too small to detect treatment effects reliably. Meta-analysis increases the power of statistical analyses by pooling the results of all available trials.
As one tries to use meta-analysis to estimate a combined effect from a group of similar studies, the effects found in the individual studies need to be similar enough that one can be confident that a combined estimate will be a meaningful description of the set of studies. However, the individual estimates of treatment effect will vary by chance; some variation is expected due to observational error. Any excess variation (whether it is apparent or detectable or not) is called (statistical) heterogeneity.
The presence of some heterogeneity is not unusual, e.g., analogous effects are also commonly encountered even within studies, in multicenter trials (between-center heterogeneity).
Reasons for the additional variability are usually differences in the studies themselves, the investigated populations, treatment schedules, endpoint definitions, or other circumstances ("clinical diversity"), or the way data were analyzed, what models were employed, or whether estimates have been adjusted in some way ("methodological diversity"). Different types of effect measures (e.g., odds ratio vs. relative risk) may also be more or less susceptible to heterogeneity.
Modeling
In case the origin of heterogeneity can be identified and may be attributed to certain study features, the analysis may be stratified (by considering subgroups of studies, which would then hopefully be more homogeneous), or by extending the analysis to a meta-regression, accounting for (continuous or categorical) moderator variables. Unfortunately, literature-based meta-analysis may often not allow for gathering data on all (potentially) relevant moderators.
In addition, heterogeneity is usually accommodated by using a random effects model, in which the heterogeneity then constitutes a variance component. The model represents the lack of knowledge about why treatment effects may differ by treating the (potential) differences as unknowns. The centre of this symmetric distribution describes the average of the effects, while its width describes the degree of heterogeneity. The obvious and conventional choice of d
|
https://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel%20method
|
In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a system of linear equations. It is named after the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonal, convergence is only guaranteed if the matrix is either strictly diagonally dominant, or symmetric and positive definite. The method was only mentioned in a private letter from Gauss to his student Gerling in 1823; a publication was not delivered before 1874, by Seidel.
Description
Let $A\mathbf{x} = \mathbf{b}$ be a square system of $n$ linear equations, where $A$ is an $n \times n$ matrix and $\mathbf{x}$ and $\mathbf{b}$ are vectors of length $n$.
When $A$ and $\mathbf{b}$ are known, and $\mathbf{x}$ is unknown, we can use the Gauss–Seidel method to approximate $\mathbf{x}$. The vector $\mathbf{x}^{(0)}$ denotes our initial guess for $\mathbf{x}$ (often $x^{(0)}_i = 0$ for $i = 1, \dots, n$). We denote $\mathbf{x}^{(k)}$ as the $k$-th approximation or iteration of $\mathbf{x}$, and $\mathbf{x}^{(k+1)}$ is the next (or $k+1$) iteration of $\mathbf{x}$.
Matrix-based formula
The solution is obtained iteratively via
$$\mathbf{x}^{(k+1)} = L_*^{-1}\left(\mathbf{b} - U\mathbf{x}^{(k)}\right),$$
where the matrix $A$ is decomposed into a lower triangular component $L_*$, and a strictly upper triangular component $U$ such that $A = L_* + U$. More specifically, the decomposition of $A$ into $L_*$ and $U$ is given by:
$$A = \underbrace{\begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}}_{L_*} + \underbrace{\begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \\ 0 & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}}_{U}.$$
Why the matrix-based formula works
The system of linear equations may be rewritten as:
$$L_* \mathbf{x} = \mathbf{b} - U\mathbf{x}.$$
The Gauss–Seidel method now solves the left hand side of this expression for $\mathbf{x}$, using the previous value of $\mathbf{x}$ on the right hand side. Analytically, this may be written as:
$$\mathbf{x}^{(k+1)} = L_*^{-1}\left(\mathbf{b} - U\mathbf{x}^{(k)}\right).$$
Element-based formula
However, by taking advantage of the triangular form of $L_*$, the elements of $\mathbf{x}^{(k+1)}$ can be computed sequentially for each row using forward substitution:
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)}\right), \qquad i = 1, 2, \dots, n.$$
Notice that the formula uses two summations per iteration, which can be expressed as one summation $\sum_{j \neq i} a_{ij} x_j$ that uses the most recently calculated iteration of each $x_j$. The procedure is generally continued until the changes made by an iteration are below some tolerance, such as a sufficiently small residual.
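The forward-substitution sweep translates directly into code. A minimal NumPy sketch — assuming a nonzero diagonal and, for guaranteed convergence, a strictly diagonally dominant or symmetric positive-definite matrix; the function name and test system are ad hoc:

import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=1000):
    # Element-based Gauss-Seidel iteration for Ax = b.
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s1 = A[i, :i] @ x[:i]          # entries already updated this sweep
            s2 = A[i, i+1:] @ x_old[i+1:]  # entries from the previous iteration
            x[i] = (b[i] - s1 - s2) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

A = np.array([[4.0, 1.0], [2.0, 3.0]])
b = np.array([1.0, 2.0])
print(gauss_seidel(A, b))  # close to np.linalg.solve(A, b) = [0.1, 0.6]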
Discussion
The element-wise formula for the Gauss–Seidel method is similar to that of the Jacobi method.
The computation of $x_i^{(k+1)}$ uses the elements of $\mathbf{x}^{(k+1)}$ that have already been computed, and only the elements of $\mathbf{x}^{(k)}$ that have not yet been computed in the $(k+1)$-th iteration. This means that, unlike the Jacobi method, only one storage vector is required as elements can be overwritten as they are computed, which can be advantageous for very large problems.
However, unlike the Jacobi method, the computations for each element are generally much harder to implement in parallel, since they can have a very long critical path, and are thus most feasible for sparse matrices. Furthermore, the values at each iteration are dependent on the order of the original equations.
Gauss–Seidel is the same as successive over-relaxation with $\omega = 1$.
Convergence
The convergence properties of the Gauss–Seidel method are dependent on the matrix A. Namely, the procedure is known to converge if either:
$A$ is symmetric positive-definite, or
$A$ is strictly or irreducibly diagonally dominant.
|
https://en.wikipedia.org/wiki/Truncated%205-cell
|
In geometry, a truncated 5-cell is a uniform 4-polytope (4-dimensional uniform polytope) formed as the truncation of the regular 5-cell.
There are two degrees of truncations, including a bitruncation.
Truncated 5-cell
The truncated 5-cell, truncated pentachoron or truncated 4-simplex is bounded by 10 cells: 5 tetrahedra, and 5 truncated tetrahedra. Each vertex is surrounded by 3 truncated tetrahedra and one tetrahedron; the vertex figure is an elongated tetrahedron.
Construction
The truncated 5-cell may be constructed from the 5-cell by truncating its vertices at 1/3 of its edge length. This transforms the 5 tetrahedral cells into truncated tetrahedra, and introduces 5 new tetrahedral cells positioned near the original vertices.
Structure
The truncated tetrahedra are joined to each other at their hexagonal faces, and to the tetrahedra at their triangular faces.
Seen in a configuration matrix, all incidence counts between elements are shown. The diagonal f-vector numbers are derived through the Wythoff construction, dividing the full group order of a subgroup order by removing one mirror at a time.
Projections
The truncated tetrahedron-first Schlegel diagram projection of the truncated 5-cell into 3-dimensional space has the following structure:
The projection envelope is a truncated tetrahedron.
One of the truncated tetrahedral cells projects onto the entire envelope.
One of the tetrahedral cells projects onto a tetrahedron lying at the center of the envelope.
Four flattened tetrahedra are joined to the triangular faces of the envelope, and connected to the central tetrahedron via 4 radial edges. These are the images of the remaining 4 tetrahedral cells.
Between the central tetrahedron and the 4 hexagonal faces of the envelope are 4 irregular truncated tetrahedral volumes, which are the images of the 4 remaining truncated tetrahedral cells.
This layout of cells in projection is analogous to the layout of faces in the face-first projection of the truncated tetrahedron into 2-dimensional space. The truncated 5-cell is the 4-dimensional analogue of the truncated tetrahedron.
Images
Alternate names
Truncated pentatope
Truncated 4-simplex
Truncated pentachoron (Acronym: tip) (Jonathan Bowers)
Coordinates
The Cartesian coordinates for the vertices of an origin-centered truncated 5-cell having edge length 2 are:
More simply, the vertices of the truncated 5-cell can be constructed on a hyperplane in 5-space as permutations of (0,0,0,1,2) or of (0,1,2,2,2). These coordinates come from positive orthant facets of the truncated pentacross and bitruncated penteract respectively.
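As an aside, the permutation construction is easy to enumerate programmatically; a small Python check (illustrative only):

from itertools import permutations

# Distinct permutations of (0,0,0,1,2): coordinates for the vertices of a
# truncated 5-cell in the hyperplane x1+x2+x3+x4+x5 = 3 of 5-space.
vertices = sorted(set(permutations((0, 0, 0, 1, 2))))
print(len(vertices))  # 20, the vertex count of the truncated 5-cell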
Related polytopes
The convex hull of the truncated 5-cell and its dual (assuming that they are congruent) is a nonuniform polychoron composed of 60 cells: 10 tetrahedra, 20 octahedra (as triangular antiprisms), 30 tetrahedra (as tetragonal disphenoids), and 40 vertices. Its vertex figure is a hexakis triangular cupola.
Vertex figure Bitruncated 5-cell
The
|
https://en.wikipedia.org/wiki/Jacobi%20method
|
In numerical linear algebra, the Jacobi method (a.k.a. the Jacobi iteration method) is an iterative algorithm for determining the solutions of a strictly diagonally dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. The method is named after Carl Gustav Jacob Jacobi.
Description
Let $A\mathbf{x} = \mathbf{b}$ be a square system of $n$ linear equations, where $A$ is an $n \times n$ matrix and $\mathbf{x}$ and $\mathbf{b}$ are vectors of length $n$.
When $A$ and $\mathbf{b}$ are known, and $\mathbf{x}$ is unknown, we can use the Jacobi method to approximate $\mathbf{x}$. The vector $\mathbf{x}^{(0)}$ denotes our initial guess for $\mathbf{x}$ (often $x^{(0)}_i = 0$ for $i = 1, \dots, n$). We denote $\mathbf{x}^{(k)}$ as the k-th approximation or iteration of $\mathbf{x}$, and $\mathbf{x}^{(k+1)}$ is the next (or k+1) iteration of $\mathbf{x}$.
Matrix-based formula
Then A can be decomposed into a diagonal component D, a lower triangular part L and an upper triangular part U:
$$A = D + L + U.$$
The solution is then obtained iteratively via
$$\mathbf{x}^{(k+1)} = D^{-1}\left(\mathbf{b} - (L + U)\mathbf{x}^{(k)}\right).$$
Element-based formula
The element-based formula for each row $i$ is thus:
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\left(b_i - \sum_{j \neq i} a_{ij} x_j^{(k)}\right), \qquad i = 1, 2, \dots, n.$$
The computation of $x_i^{(k+1)}$ requires each element in $\mathbf{x}^{(k)}$ except itself. Unlike the Gauss–Seidel method, we can't overwrite $x_i^{(k)}$ with $x_i^{(k+1)}$, as that value will be needed by the rest of the computation. The minimum amount of storage is two vectors of size $n$.
Algorithm
Input: initial guess x(0) to the solution, (diagonally dominant) matrix A, right-hand side vector b, convergence criterion
Output: solution x
Comments: pseudocode based on the element-based formula above
k = 0
while convergence not reached do
    for i := 1 step until n do
        σ = 0
        for j := 1 step until n do
            if j ≠ i then
                σ = σ + aij xj(k)
            end
        end
        xi(k+1) = (bi − σ) / aii
    end
    increment k
end
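A direct NumPy transcription of this pseudocode — a sketch assuming a diagonally dominant A; the function name is ad hoc:

import numpy as np

def jacobi(A, b, x0, tol=1e-10, max_iter=1000):
    D = np.diag(A)           # the diagonal entries a_ii
    R = A - np.diagflat(D)   # the remainder L + U
    x = x0.astype(float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D   # x^(k+1) = D^{-1} (b - (L+U) x^(k))
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x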
Convergence
The standard convergence condition (for any iterative method) is when the spectral radius of the iteration matrix is less than 1:
$$\rho\left(D^{-1}(L + U)\right) < 1.$$
A sufficient (but not necessary) condition for the method to converge is that the matrix A is strictly or irreducibly diagonally dominant. Strict row diagonal dominance means that for each row, the absolute value of the diagonal term is greater than the sum of absolute values of other terms:
$$|a_{ii}| > \sum_{j \neq i} |a_{ij}|.$$
The Jacobi method sometimes converges even if these conditions are not satisfied.
Note that the Jacobi method does not converge for every symmetric positive-definite matrix. For example,
Examples
Example 1
A linear system of the form $A\mathbf{x} = \mathbf{b}$ with initial estimate $\mathbf{x}^{(0)}$ is given by
$$A = \begin{bmatrix} 2 & 1 \\ 5 & 7 \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} 11 \\ 13 \end{bmatrix}, \qquad \mathbf{x}^{(0)} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$
We use the equation $\mathbf{x}^{(k+1)} = D^{-1}(\mathbf{b} - (L+U)\mathbf{x}^{(k)})$, described above, to estimate $\mathbf{x}$. First, we rewrite the equation in a more convenient form $\mathbf{x}^{(k+1)} = T\mathbf{x}^{(k)} + C$, where $T = -D^{-1}(L+U)$ and $C = D^{-1}\mathbf{b}$. From the known values
$$D^{-1} = \begin{bmatrix} 1/2 & 0 \\ 0 & 1/7 \end{bmatrix}, \qquad L = \begin{bmatrix} 0 & 0 \\ 5 & 0 \end{bmatrix}, \qquad U = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},$$
we determine $T$ as
$$T = \begin{bmatrix} 0 & -1/2 \\ -5/7 & 0 \end{bmatrix}.$$
Further, $C$ is found as
$$C = \begin{bmatrix} 11/2 \\ 13/7 \end{bmatrix}.$$
With $T$ and $C$ calculated, we estimate $\mathbf{x}$ as $\mathbf{x}^{(1)} = T\mathbf{x}^{(0)} + C$:
$$\mathbf{x}^{(1)} = \begin{bmatrix} 5.0 \\ 8/7 \end{bmatrix} \approx \begin{bmatrix} 5 \\ 1.143 \end{bmatrix}.$$
The next iteration yields
$$\mathbf{x}^{(2)} = T\mathbf{x}^{(1)} + C = \begin{bmatrix} 69/14 \\ -12/7 \end{bmatrix} \approx \begin{bmatrix} 4.929 \\ -1.714 \end{bmatrix}.$$
This process is repeated until convergence (i.e., until $\|A\mathbf{x}^{(k)} - \mathbf{b}\|$ is small). The solution after 25 iterations is
$$\mathbf{x} = \begin{bmatrix} 7.111 \\ -3.222 \end{bmatrix}.$$
Example 2
Suppose we are given the following linear system:
$$\begin{align} 10x_1 - x_2 + 2x_3 &= 6, \\ -x_1 + 11x_2 - x_3 + 3x_4 &= 25, \\ 2x_1 - x_2 + 10x_3 - x_4 &= -11, \\ 3x_2 - x_3 + 8x_4 &= 15. \end{align}$$
If we choose $(0, 0, 0, 0)$ as the initial approximation, then the first approximate solution is given by
$$x_1 = 6/10 = 0.6, \quad x_2 = 25/11 \approx 2.2727, \quad x_3 = -11/10 = -1.1, \quad x_4 = 15/8 = 1.875.$$
Using the approximations obtained, the iterative procedure is repeated until t
|
https://en.wikipedia.org/wiki/Polite%20number
|
In number theory, a polite number is a positive integer that can be written as the sum of two or more consecutive positive integers. A positive integer which is not polite is called impolite. The impolite numbers are exactly the powers of two, and the polite numbers are the natural numbers that are not powers of two.
Polite numbers have also been called staircase numbers because the Young diagrams which represent graphically the partitions of a polite number into consecutive integers (in the French notation of drawing these diagrams) resemble staircases. If all numbers in the sum are strictly greater than one, the numbers so formed are also called trapezoidal numbers because they represent patterns of points arranged in a trapezoid.
The problem of representing numbers as sums of consecutive integers and of counting the number of representations of this type has been studied by Sylvester, Mason, Leveque, and many other more recent authors. The polite numbers describe the possible numbers of sides of the Reinhardt polygons.
Examples and characterization
The first few polite numbers are
3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, ... .
The impolite numbers are exactly the powers of two. It follows from the Lambek–Moser theorem that the nth polite number is f(n + 1), where
$$f(n) = n + \left\lfloor \log_2 \left(n + \log_2 n \right) \right\rfloor.$$
Politeness
The politeness of a positive number is defined as the number of ways it can be expressed as the sum of consecutive integers. For every x, the politeness of x equals the number of odd divisors of x that are greater than one.
The politeness of the numbers 1, 2, 3, ... is
0, 0, 1, 0, 1, 1, 1, 0, 2, 1, 1, 1, 1, 1, 3, 0, 1, 2, 1, 1, 3, ... .
For instance, the politeness of 9 is 2 because it has two odd divisors, 3 and 9, and two polite representations
9 = 2 + 3 + 4 = 4 + 5;
the politeness of 15 is 3 because it has three odd divisors, 3, 5, and 15, and (as is familiar to cribbage players) three polite representations
15 = 4 + 5 + 6 = 1 + 2 + 3 + 4 + 5 = 7 + 8.
An easy way of calculating the politeness of a positive number is to decompose the number into its prime factors, take the powers of all prime factors greater than 2, add 1 to all of them, multiply the numbers thus obtained with each other, and subtract 1. For instance, 90 has politeness 5 because $90 = 2 \times 3^2 \times 5$; the powers of 3 and 5 are respectively 2 and 1, and applying this method $(2 + 1)(1 + 1) - 1 = 5$.
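A short Python sketch of this rule — equivalently, counting the odd divisors greater than one (the function name is ad hoc):

def politeness(x):
    # Count odd divisors of x greater than one: first strip the factor 2^k,
    # then count all divisors of the remaining odd part and exclude 1.
    while x % 2 == 0:
        x //= 2
    count, d = 0, 1
    while d * d <= x:
        if x % d == 0:
            count += 1 if d * d == x else 2
        d += 1
    return count - 1

print([politeness(n) for n in range(1, 16)])
# [0, 0, 1, 0, 1, 1, 1, 0, 2, 1, 1, 1, 1, 1, 3] — matching the sequence above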
Construction of polite representations from odd divisors
To see the connection between odd divisors and polite representations, suppose a number $x$ has the odd divisor $y > 1$. Then $y$ consecutive integers centered on $x/y$ (so that their average value is $x/y$) have $x$ as their sum:
$$x = \left(\frac{x}{y} - \frac{y-1}{2}\right) + \cdots + \frac{x}{y} + \cdots + \left(\frac{x}{y} + \frac{y-1}{2}\right).$$
Some of the terms in this sum may be zero or negative. However, if a term is zero it can be omitted and any negative terms may be used to cancel positive ones, leading to a polite representation for x. (The requirement that y > 1 corresponds
|
https://en.wikipedia.org/wiki/Cartan%E2%80%93Dieudonn%C3%A9%20theorem
|
In mathematics, the Cartan–Dieudonné theorem, named after Élie Cartan and Jean Dieudonné, establishes that every orthogonal transformation in an n-dimensional symmetric bilinear space can be described as the composition of at most n reflections.
The notion of a symmetric bilinear space is a generalization of Euclidean space whose structure is defined by a symmetric bilinear form (which need not be positive definite, so is not necessarily an inner product – for instance, a pseudo-Euclidean space is also a symmetric bilinear space). The orthogonal transformations in the space are those automorphisms which preserve the value of the bilinear form between every pair of vectors; in Euclidean space, this corresponds to preserving distances and angles. These orthogonal transformations form a group under composition, called the orthogonal group.
For example, in the two-dimensional Euclidean plane, every orthogonal transformation is either a reflection across a line through the origin or a rotation about the origin (which can be written as the composition of two reflections). Any arbitrary composition of such rotations and reflections can be rewritten as a composition of no more than 2 reflections. Similarly, in three-dimensional Euclidean space, every orthogonal transformation can be described as a single reflection, a rotation (2 reflections), or an improper rotation (3 reflections). In four dimensions, double rotations are added that represent 4 reflections.
Formal statement
Let $(V, b)$ be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group $O(V, b)$ is a composition of at most n reflections.
See also
Indefinite orthogonal group
Coordinate rotations and reflections
Householder reflections
References
Theorems in group theory
Bilinear forms
|
https://en.wikipedia.org/wiki/Mechanically%20interlocked%20molecular%20architectures
|
In chemistry, mechanically interlocked molecular architectures (MIMAs) are molecules that are connected as a consequence of their topology. This connection of molecules is analogous to keys on a keychain loop. The keys are not directly connected to the keychain loop but they cannot be separated without breaking the loop. On the molecular level, the interlocked molecules cannot be separated without the breaking of the covalent bonds that comprise the conjoined molecules; this is referred to as a mechanical bond. Examples of mechanically interlocked molecular architectures include catenanes, rotaxanes, molecular knots, and molecular Borromean rings. Work in this area was recognized with the 2016 Nobel Prize in Chemistry to Bernard L. Feringa, Jean-Pierre Sauvage, and J. Fraser Stoddart.
The synthesis of such entangled architectures has been made efficient by combining supramolecular chemistry with traditional covalent synthesis, however mechanically interlocked molecular architectures have properties that differ from both "supramolecular assemblies" and "covalently bonded molecules". The terminology "mechanical bond" has been coined to describe the connection between the components of mechanically interlocked molecular architectures. Although research into mechanically interlocked molecular architectures is primarily focused on artificial compounds, many examples have been found in biological systems including: cystine knots, cyclotides or lasso-peptides such as microcin J25 which are proteins, and a variety of peptides.
Residual topology
Residual topology is a descriptive stereochemical term to classify a number of intertwined and interlocked molecules, which cannot be disentangled in an experiment without breaking of covalent bonds, while the strict rules of mathematical topology allow such a disentanglement. Examples of such molecules are rotaxanes, catenanes with covalently linked rings (so-called pretzelanes), and open knots (pseudoknots) which are abundant in proteins.
The term "residual topology" was suggested on account of a striking similarity of these compounds to the well-established topologically nontrivial species, such as catenanes and knotanes (molecular knots). The idea of residual topological isomerism introduces a handy scheme of modifying the molecular graphs and generalizes former efforts of systemization of mechanically bound and bridged molecules.
History
Experimentally the first examples of mechanically interlocked molecular architectures appeared in the 1960s, with catenanes being synthesized by Wasserman and Schill and rotaxanes by Harrison and Harrison. The chemistry of MIMAs came of age when Sauvage pioneered their synthesis using templating methods. In the early 1990s the usefulness and even the existence of MIMAs were challenged. The latter concern was addressed by the X-ray crystallographer and structural chemist David Williams. Two postdoctoral researchers who took on the challenge of producing [5]catenane (olympiada
|
https://en.wikipedia.org/wiki/Li%27s%20criterion
|
In number theory, Li's criterion is a particular statement about the positivity of a certain sequence that is equivalent to the Riemann hypothesis. The criterion is named after Xian-Jin Li, who presented it in 1997. In 1999, Enrico Bombieri and Jeffrey C. Lagarias provided a generalization, showing that Li's positivity condition applies to any collection of points that lie on the Re(s) = 1/2 axis.
Definition
The Riemann ξ function is given by
$$\xi(s) = \tfrac{1}{2} s (s - 1) \pi^{-s/2} \Gamma\left(\tfrac{s}{2}\right) \zeta(s),$$
where ζ is the Riemann zeta function. Consider the sequence
$$\lambda_n = \frac{1}{(n-1)!} \frac{d^n}{ds^n} \left[ s^{n-1} \log \xi(s) \right] \Bigg|_{s=1}.$$
Li's criterion is then the statement that
the Riemann hypothesis is equivalent to the statement that $\lambda_n > 0$ for every positive integer $n$.
The numbers $\lambda_n$ (sometimes defined with a slightly different normalization) are called Keiper–Li coefficients or Li coefficients. They may also be expressed in terms of the non-trivial zeros of the Riemann zeta function:
$$\lambda_n = \sum_{\rho} \left[ 1 - \left( 1 - \frac{1}{\rho} \right)^n \right],$$
where the sum extends over ρ, the non-trivial zeros of the zeta function. This conditionally convergent sum should be understood in the sense that is usually used in number theory, namely, that
$$\sum_{\rho} = \lim_{N \to \infty} \sum_{|\operatorname{Im}(\rho)| \leq N}.$$
(Re(s) and Im(s) denote the real and imaginary parts of s, respectively.)
The positivity of $\lambda_n$ has been verified by direct computation for large initial ranges of $n$.
Proof
Note that .
Then, starting with an entire function , let .
vanishes when . Hence, is holomorphic on the unit disk iff .
Write the Taylor series . Since
we have
so that
.
Finally, if each zero comes paired with its complex conjugate , then we may combine terms to get
The condition then becomes equivalent to . The right-hand side of () is obviously nonnegative when both and . Conversely, ordering the by , we see that the largest term () dominates the sum as , and hence becomes negative sometimes.
A generalization
Bombieri and Lagarias demonstrate that a similar criterion holds for any collection of complex numbers, and is thus not restricted to the Riemann hypothesis. More precisely, let R = {ρ} be any collection of complex numbers ρ, not containing ρ = 1, which satisfies
Then one may make several equivalent statements about such a set. One such statement is the following:
One has for every ρ if and only if
for all positive integers n.
One may make a more interesting statement, if the set R obeys a certain functional equation under the replacement s ↦ 1 − s. Namely, if, whenever ρ is in R, then both the complex conjugate and are in R, then Li's criterion can be stated as:
One has Re(ρ) = 1/2 for every ρ if and only if
for all positive integers n.
Bombieri and Lagarias also show that Li's criterion follows from Weil's criterion for the Riemann hypothesis.
References
Zeta and L-functions
|
https://en.wikipedia.org/wiki/Pi-system
|
In mathematics, a π-system (or pi-system) on a set $\Omega$ is a collection $P$ of certain subsets of $\Omega$, such that
$P$ is non-empty.
If $A, B \in P$ then $A \cap B \in P$.
That is, $P$ is a non-empty family of subsets of $\Omega$ that is closed under non-empty finite intersections.
The importance of π-systems arises from the fact that if two probability measures agree on a π-system, then they agree on the σ-algebra generated by that π-system. Moreover, if other properties, such as equality of integrals, hold for the π-system, then they hold for the generated σ-algebra as well. This is the case whenever the collection of subsets for which the property holds is a π-system. π-systems are also useful for checking independence of random variables.
This is desirable because in practice, π-systems are often simpler to work with than σ-algebras. For example, it may be awkward to work with σ-algebras generated by infinitely many sets. So instead we may examine the union of all σ-algebras generated by finitely many sets. This forms a π-system that generates the desired σ-algebra. Another example is the collection of all intervals of the real line, along with the empty set, which is a π-system that generates the very important Borel σ-algebra of subsets of the real line.
Definitions
A π-system is a non-empty collection of sets $P$ that is closed under non-empty finite intersections, which is equivalent to $P$ containing the intersection of any two of its elements.
If every set in this π-system is a subset of $\Omega$ then it is called a π-system on $\Omega$.
For any non-empty family $\Sigma$ of subsets of $\Omega$, there exists a π-system $\mathcal{I}_\Sigma$, called the π-system generated by $\Sigma$, that is the unique smallest π-system of $\Omega$ containing every element of $\Sigma$.
It is equal to the intersection of all π-systems containing $\Sigma$, and can be explicitly described as the set of all possible non-empty finite intersections of elements of $\Sigma$:
$$\left\{ E_1 \cap \cdots \cap E_n : 1 \leq n \in \mathbb{N} \text{ and } E_1, \ldots, E_n \in \Sigma \right\}.$$
A non-empty family of sets has the finite intersection property if and only if the π-system it generates does not contain the empty set as an element.
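As a concrete finite illustration (a sketch, not standard notation), the generated π-system can be computed by brute force as all intersections of non-empty finite subfamilies:

from itertools import combinations

def generated_pi_system(family):
    # Pi-system generated by `family`: every intersection of a
    # non-empty finite subfamily of the given sets.
    sets = [frozenset(s) for s in family]
    return {frozenset.intersection(*combo)
            for r in range(1, len(sets) + 1)
            for combo in combinations(sets, r)}

P = generated_pi_system([{1, 2, 3}, {2, 3, 4}, {3, 4, 5}])
print(sorted(map(sorted, P)))
# [[1, 2, 3], [2, 3], [2, 3, 4], [3], [3, 4], [3, 4, 5]]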
Examples
For any real numbers and the intervals form a -system, and the intervals form a -system if the empty set is also included.
The topology (collection of open subsets) of any topological space is a -system.
Every filter is a -system. Every -system that doesn't contain the empty set is a prefilter (also known as a filter base).
For any measurable function the set defines a -system, and is called the -system by (Alternatively, defines a -system generated by )
If and are -systems for and respectively, then is a -system for the Cartesian product
Every -algebra is a -system.
Relationship to λ-systems
A λ-system on $\Omega$ is a set $D$ of subsets of $\Omega$, satisfying
$\Omega \in D$,
if $A \in D$ then $\Omega \setminus A \in D$,
if $A_1, A_2, A_3, \ldots$ is a sequence of (pairwise) disjoint subsets in $D$ then $\bigcup_{n=1}^{\infty} A_n \in D$.
Whilst it is true that any σ-algebra satisfies the properties of being both a π-system and a λ-system, it is not true that any π-system is a λ-system, and moreover it is not true that any π-system is a σ-algebra. However, a useful classification is that any set system which is both a λ-system and a π-system is a σ-algebra.
|
https://en.wikipedia.org/wiki/Structure%20%28mathematical%20logic%29
|
In universal algebra and in model theory, a structure consists of a set along with a collection of finitary operations and relations that are defined on it.
Universal algebra studies structures that generalize the algebraic structures such as groups, rings, fields and vector spaces. The term universal algebra is used for structures of first-order theories with no relation symbols. Model theory has a different scope that encompasses more arbitrary first-order theories, including foundational structures such as models of set theory.
From the model-theoretic point of view, structures are the objects used to define the semantics of first-order logic, cf. also Tarski's theory of truth or Tarskian semantics.
For a given theory in model theory, a structure is called a model if it satisfies the defining axioms of that theory, although it is sometimes disambiguated as a semantic model when one discusses the notion in the more general setting of mathematical models. Logicians sometimes refer to structures as "interpretations", whereas the term "interpretation" generally has a different (although related) meaning in model theory, see interpretation (model theory).
In database theory, structures with no functions are studied as models for relational databases, in the form of relational models.
History
In the context of mathematical logic, the term "model" was first applied in 1940 by the philosopher Willard Van Orman Quine, in a reference to mathematician Richard Dedekind (1831 – 1916), a pioneer in the development of set theory. Since the 19th century, one main method for proving the consistency of a set of axioms has been to provide a model for it.
Definition
Formally, a structure can be defined as a triple $\mathcal{A} = (A, \sigma, I)$ consisting of a domain $A$, a signature $\sigma$, and an interpretation function $I$ that indicates how the signature is to be interpreted on the domain. To indicate that a structure has a particular signature $\sigma$ one can refer to it as a $\sigma$-structure.
Domain
The domain of a structure is an arbitrary set; it is also called the underlying set of the structure, its carrier (especially in universal algebra), its universe (especially in model theory, cf. universe), or its domain. In classical first-order logic, the definition of a structure prohibits the empty domain.
Sometimes the notation $\operatorname{dom}(\mathcal{A})$ or $|\mathcal{A}|$ is used for the domain of $\mathcal{A}$, but often no notational distinction is made between a structure and its domain (that is, the same symbol refers both to the structure and its domain.)
Signature
The signature of a structure consists of:
a set $S$ of function symbols and relation symbols, along with
a function $\operatorname{ar} \colon S \to \mathbb{N}$ that ascribes to each symbol $s$ a natural number $\operatorname{ar}(s)$.
The natural number $\operatorname{ar}(s)$ of a symbol $s$ is called the arity of $s$ because it is the arity of the interpretation of $s$.
Since the signatures that arise in algebra often contain only function symbols, a signature with no relation symbols is called an algebraic signature. A structure with such a signature is also called an algebra; this should not be confused with the
|
https://en.wikipedia.org/wiki/Grand%20antiprism
|
In geometry, the grand antiprism or pentagonal double antiprismoid is a uniform 4-polytope (4-dimensional uniform polytope) bounded by 320 cells: 20 pentagonal antiprisms, and 300 tetrahedra. It is an anomalous, non-Wythoffian uniform 4-polytope, discovered in 1965 by Conway and Guy. Topologically, under its highest symmetry, the pentagonal antiprisms have D5d symmetry and there are two types of tetrahedra, one with S4 symmetry and one with Cs symmetry.
Alternate names
Pentagonal double antiprismoid Norman W. Johnson
Gap (Jonathan Bowers: for grand antiprism)
Structure
20 stacked pentagonal antiprisms occur in two disjoint rings of 10 antiprisms each. The antiprisms in each ring are joined to each other via their pentagonal faces. The two rings are mutually perpendicular, in a structure similar to a duoprism.
The 300 tetrahedra join the two rings to each other, and are laid out in a 2-dimensional arrangement topologically equivalent to the 2-torus and the ridge of the duocylinder. These can be further divided into three sets. 100 face mate to one ring, 100 face mate to the other ring, and 100 are centered at the exact midpoint of the duocylinder and edge mate to both rings. This latter set forms a flat torus and can be "unrolled" into a flat 10×10 square array of tetrahedra that meet only at their edges and vertices. See figure below.
In addition the 300 tetrahedra can be partitioned into 10 disjoint Boerdijk–Coxeter helices of 30 cells each that close back on each other. The two pentagonal antiprism tubes, plus the 10 BC helices, form an irregular discrete Hopf fibration of the grand antiprism that Hopf maps to the faces of a pentagonal antiprism. The two tubes map to the two pentagonal faces and the 10 BC helices map to the 10 triangular faces.
The structure of the grand antiprism is analogous to that of the 3-dimensional antiprisms. However, the grand antiprism is the only convex uniform analogue of the antiprism in 4 dimensions (although the 16-cell may be regarded as a regular analogue of the digonal antiprism). The only nonconvex uniform 4-dimensional antiprism analogue uses pentagrammic crossed-antiprisms instead of pentagonal antiprisms, and is called the pentagrammic double antiprismoid.
Vertex figure
The vertex figure of the grand antiprism is a sphenocorona or dissected regular icosahedron: a regular icosahedron with two adjacent vertices removed. In their place 8 triangles are replaced by a pair of trapezoids, edge lengths φ, 1, 1, 1 (where φ is the golden ratio), joined together along their edge of length φ, to give a tetradecahedron whose faces are the 2 trapezoids and the 12 remaining equilateral triangles.
Construction
The grand antiprism can be constructed by diminishing the 600-cell: subtracting 20 pyramids whose bases are three-dimensional pentagonal antiprisms. Conversely, the two rings of pentagonal antiprisms in the grand antiprism may be triangulated by 10 tetrahedra joined to the triangular faces of each
|
https://en.wikipedia.org/wiki/Mixture%20%28probability%29
|
In probability theory and statistics, a mixture is a probabilistic combination of two or more probability distributions. The concept arises mostly in two contexts:
A mixture defining a new probability distribution from some existing ones, as in a mixture distribution or a compound distribution. Here a major problem often is to derive the properties of the resulting distribution.
A mixture used as a statistical model such as is often used for statistical classification. The model may represent the population from which observations arise as a mixture of several components, and the problem is that of a mixture model, in which the task is to infer from which of a discrete set of sub-populations each observation originated.
See also
Mixture distribution
Compound distribution
Mixture model
classification
Cluster analysis
References
Probability theory
Compound probability distributions
Statistical classification
|
https://en.wikipedia.org/wiki/Fermat%20cubic
|
In geometry, the Fermat cubic, named after Pierre de Fermat, is a surface defined by
$$x^3 + y^3 + z^3 = 1.$$
Methods of algebraic geometry provide the following parameterization of Fermat's cubic:
In projective space the Fermat cubic is given by
$$w^3 + x^3 + y^3 + z^3 = 0.$$
The 27 lines lying on the Fermat cubic are easy to describe explicitly: they are the 9 lines of the form (w : aw : y : by) where a and b are fixed numbers with cube −1, and their 18 conjugates under permutations of coordinates.
Real points of Fermat cubic surface.
References
Algebraic surfaces
|
https://en.wikipedia.org/wiki/Hong%20Kong%20Mathematical%20High%20Achievers%20Selection%20Contest
|
Hong Kong Mathematical High Achievers Selection Contest (HKMHASC, Traditional Chinese: 香港青少年數學精英選拔賽) is a yearly mathematics competition for students of or below Secondary 3 in Hong Kong, jointly organized by Po Leung Kuk and the Hong Kong Association of Science and Mathematics Education since the academic year 1998–1999. In recent years, more than 250 secondary schools have participated.
Format and Scoring
Each participating school may send at most 5 students into the contest. There is one paper, divided into Part A and Part B, with two hours given. Part A is usually made up of 14–18 easier questions, carrying one mark each. In Part A, only answers are required. Part B is usually made up of 2–4 problems of varying difficulty, which may carry different numbers of marks, from 4 to 8. In Part B, working is required and marked. No calculators or calculation-assisting equipment (e.g. printed mathematical tables) are allowed.
Awards and Further Training
Awards are given according to the total mark. The top 40 contestants are given the First Honour Award (一等獎), the next 80 the Second Honour Award (二等獎), and the Third Honour Award (三等獎) for the next 120. Moreover, the top 4 can obtain an award, namely the Champion and the 1st, 2nd and 3rd Runner-up.
Group Awards are given to schools, according to the sum of marks of the 3 contestants with highest mark. The first 4 are given the honour of Champion and 1st, 2nd and 3rd Runner-up. The honour of Top 10 (首十名最佳成績) is given to the 5th-10th, and Group Merit Award (團體優異獎) is given to the next 10.
First Honour Award achievers would receive further training. Eight students with best performance will be chosen to participate in the Invitational World Youth Mathematics Inter-City Competition (IWYMIC).
List of Past Champions (1999-2019)
98-99: Queen Elizabeth School, Ying Wa College
99-00: Queen's College
00-01: La Salle College
01-02: St. Paul's College
02-03: Queen's College
03-04: La Salle College
04-05: La Salle College
05-06: La Salle College
06-07: La Salle College
07-08: La Salle College
08-09: Diocesan Boys' School
09-10: St. Paul's Co-educational College
10-11: La Salle College
11-12: La Salle College
12-13: Queen Elizabeth School
13-14: Po Leung Kuk Centenary Li Shiu Chung Memorial College
14-15: Queen's College
15-16: Pui Ching Middle School
16-17: La Salle College
17-18: Queen's College
18-19: La Salle College
22-23: Diocesan Boys' School
Performance by school
See also
List of mathematics competitions
Hong Kong Mathematics Olympiad
Invitational World Youth Mathematics Inter-City Competition
Education in Hong Kong
Po Leung Kuk
Hong Kong Association of Science and Mathematics Education
External links
Official website (in Traditional Chinese)
Competitions in Hong Kong
Mathematics competitions
|
https://en.wikipedia.org/wiki/Tensor%20bundle
|
In mathematics, the tensor bundle of a manifold is the direct sum of all tensor products of the tangent bundle and the cotangent bundle of that manifold. To do calculus on the tensor bundle a connection is needed, except for the special case of the exterior derivative of antisymmetric tensors.
Definition
A tensor bundle is a fiber bundle where the fiber is a tensor product of any number of copies of the tangent space and/or cotangent space of the base space, which is a manifold. As such, the fiber is a vector space and the tensor bundle is a special kind of vector bundle.
References
See also
Vector bundles
|
https://en.wikipedia.org/wiki/Higman%27s%20embedding%20theorem
|
In group theory, Higman's embedding theorem states that every finitely generated recursively presented group R can be embedded as a subgroup of some finitely presented group G. This is a result of Graham Higman from the 1960s.
On the other hand, it is an easy theorem that every finitely generated subgroup of a finitely presented group is recursively presented, so the recursively presented finitely generated groups are (up to isomorphism) exactly the finitely generated subgroups of finitely presented groups.
Since every countable group is a subgroup of a finitely generated group, the theorem can be restated for those groups.
As a corollary, there is a universal finitely presented group that contains all finitely presented groups as subgroups (up to isomorphism); in fact, its finitely generated subgroups are exactly the finitely generated recursively presented groups (again, up to isomorphism).
Higman's embedding theorem also implies the Novikov-Boone theorem (originally proved in the 1950s by other methods) about the existence of a finitely presented group with algorithmically undecidable word problem. Indeed, it is fairly easy to construct a finitely generated recursively presented group with undecidable word problem. Then any finitely presented group that contains this group as a subgroup will have undecidable word problem as well.
The usual proof of the theorem uses a sequence of HNN extensions starting with R and ending with a group G which can be shown to have a finite presentation.
References
Infinite group theory
Theorems in group theory
|
https://en.wikipedia.org/wiki/Defense-independent%20ERA
|
In baseball statistics, defense-independent ERA (dERA) is a statistic that projects what a pitcher's earned run average (ERA) would have been, if not for the effects of defense and luck on the actual games in which he pitched. The statistic was first devised by Voros McCracken in 1999.
Method
Version 2.0 of dERA uses the following statistics:
Batters faced (BFP)
Home runs allowed (HR)
Base on balls (BB)
Intentional base on balls (IBB)
Strikeouts (K)
Hit by pitch (HB)
0) Multiply BFP by .0074 to get the number of intentional walks allowed (dIBB).
1) Divide HB by BFP-IBB. Call this $HB. Then multiply $HB by BFP-dIBB. This number gives the DIPS number of Hit Batsmen (dHB).
2) Divide (BB-IBB) by (BFP-IBB-HB), and call this number $BB. Multiply BFP by 0.0074, and call this dIBB.
2a) Then multiply $BB by (BFP-dIBB-dHB). Take this number and add IBB. This number is now the DIPS number of total walks allowed (dBB).
3) Divide K by (BFP-HB-BB) and call this number $K. Remember this number for later.
3a) Multiply $K by (BFP-dBB-dHB). This gives the DIPS number of strikeouts (dK).
4) Divide HR by (BFP-HB-BB-K) and call this number $HR. Remember this number for later.
4a) Multiply $HR by (BFP-dBB-dHB-dK). This gives the DIPS number of Home Runs (dHR).
5) Calculate the number of 'Balls Hit in the Field of Play'. This is BFP-dHR-dBB-dK-dHB.
6) Estimate hits per balls in the field of play ($H):
6a) Take the number 0.304396 and subtract 0.010830 if the pitcher is strictly a knuckleball pitcher. If not keep the 0.304396 number.
6b) Take the result from the last step and add 0.002321 if the pitcher is left-handed, if not keep the number from the above step.
6c) Take the $K figure from above and multiply it by 0.04782. Subtract this number from the number in 6b.
6d) Take the $HR figure from way above and multiply it by 0.08095. Subtract this number from the number in 6c.
6e) Whatever remains is the $H figure.
7) To get the projected number of Hits Allowed (DIPS 'Hits Allowed', or dH), multiply $H by the number of balls hit in the field of play (BHFP).
7a) Add this number to dHR. This number is the DIPS total of Hits Allowed (dH).
8) Take BFP-dBB-dHB-dK-dH and multiply that number by 1.048. Add dK to that number. Take that number and divide by 3. This is the DIPS total of Innings Pitched (dIP).
9) Sum the following products:
(dH-dHR)*0.49674
dHR*1.294375
(dBB-dIBB)*0.3325
dIBB*0.0864336
dK*(-0.084691)
dHB*0.3077
(BFP-dHB-dBB-dK-dH)*(-0.082927)
The sum of all of these is the DIPS total of earned runs (dER).
10) Calculate ERA as usual: (9*dER)/dIP. This is the DIPS ERA (dERA).
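The steps above chain together mechanically. The following Python sketch is one possible transcription (the function and variable names are ad hoc, and the redundant recomputation of dIBB in step 2 is omitted):

def dera(bfp, hr, bb, ibb, k, hb, knuckleballer=False, lefty=False):
    dibb = bfp * 0.0074                       # step 0
    dhb = hb / (bfp - ibb) * (bfp - dibb)     # step 1
    bb_rate = (bb - ibb) / (bfp - ibb - hb)   # step 2 ($BB)
    dbb = bb_rate * (bfp - dibb - dhb) + ibb  # step 2a
    k_rate = k / (bfp - hb - bb)              # step 3 ($K)
    dk = k_rate * (bfp - dbb - dhb)           # step 3a
    hr_rate = hr / (bfp - hb - bb - k)        # step 4 ($HR)
    dhr = hr_rate * (bfp - dbb - dhb - dk)    # step 4a
    bhfp = bfp - dhr - dbb - dk - dhb         # step 5
    h_rate = 0.304396                         # step 6
    if knuckleballer:
        h_rate -= 0.010830                    # step 6a
    if lefty:
        h_rate += 0.002321                    # step 6b
    h_rate -= 0.04782 * k_rate                # step 6c
    h_rate -= 0.08095 * hr_rate               # step 6d
    dh = h_rate * bhfp + dhr                  # steps 7 and 7a
    dip = ((bfp - dbb - dhb - dk - dh) * 1.048 + dk) / 3   # step 8
    der = ((dh - dhr) * 0.49674 + dhr * 1.294375           # step 9
           + (dbb - dibb) * 0.3325 + dibb * 0.0864336
           - dk * 0.084691 + dhb * 0.3077
           - (bfp - dhb - dbb - dk - dh) * 0.082927)
    return 9 * der / dip                      # step 10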
See also
Earned run
Earned run average
Component ERA
PERA
QERA
References
External links
Defense Independent Pitching Stats
Pitching statistics
|