text
| source
|
---|---|
In mathematics, the magnitude or size of a mathematical object is a property which determines whether the object is larger or smaller than other objects of the same kind. More formally, an object's magnitude is the displayed result of an ordering (or ranking) of the class of objects to which it belongs. Magnitude as a concept dates to Ancient Greece and has been applied as a measure of distance from one object to another. For numbers, the absolute value of a number is commonly applied as the measure of units between a number and zero. In vector spaces, the Euclidean norm is a measure of magnitude used to define a distance between two points in space. In physics, magnitude can be defined as quantity or distance. An order of magnitude is typically defined as a unit of distance between one number and another's numerical places on the decimal scale. == History == Ancient Greeks distinguished between several types of magnitude, including: Positive fractions Line segments (ordered by length) Plane figures (ordered by area) Solids (ordered by volume) Angles (ordered by angular magnitude) They proved that the first two could not be the same, or even isomorphic systems of magnitude. They did not consider negative magnitudes to be meaningful, and magnitude is still primarily used in contexts in which zero is either the smallest size or less than all possible sizes. == Numbers == The magnitude of any number x {\displaystyle x} is usually called its absolute value or modulus, denoted by | x | {\displaystyle |x|} . === Real numbers === The absolute value of a real number r is defined by: | r | = r , if r ≥ 0 {\displaystyle \left|r\right|=r,{\text{ if }}r{\text{ ≥ }}0} | r | = − r , if r < 0. {\displaystyle \left|r\right|=-r,{\text{ if }}r<0.} Absolute value may also be thought of as the number's distance from zero on the real number line. For example, the absolute value of both 70 and −70 is 70. === Complex numbers === A complex number z may be viewed as the position of a point P in a 2-dimensional space, called the complex plane. The absolute value (or modulus) of z may be thought of as the distance of P from the origin of that space. The formula for the absolute value of z = a + bi is similar to that for the Euclidean norm of a vector in a 2-dimensional Euclidean space: | z | = a 2 + b 2 {\displaystyle \left|z\right|={\sqrt {a^{2}+b^{2}}}} where the real numbers a and b are the real part and the imaginary part of z, respectively. For instance, the modulus of −3 + 4i is ( − 3 ) 2 + 4 2 = 5 {\displaystyle {\sqrt {(-3)^{2}+4^{2}}}=5} . Alternatively, the magnitude of a complex number z may be defined as the square root of the product of itself and its complex conjugate, z ¯ {\displaystyle {\bar {z}}} , where for any complex number z = a + b i {\displaystyle z=a+bi} , its complex conjugate is z ¯ = a − b i {\displaystyle {\bar {z}}=a-bi} . | z | = z z ¯ = ( a + b i ) ( a − b i ) = a 2 − a b i + a b i − b 2 i 2 = a 2 + b 2 {\displaystyle \left|z\right|={\sqrt {z{\bar {z}}}}={\sqrt {(a+bi)(a-bi)}}={\sqrt {a^{2}-abi+abi-b^{2}i^{2}}}={\sqrt {a^{2}+b^{2}}}} (where i 2 = − 1 {\displaystyle i^{2}=-1} ). == Vector spaces == === Euclidean vector space === A Euclidean vector represents the position of a point P in a Euclidean space. Geometrically, it can be described as an arrow from the origin of the space (vector tail) to that point (vector tip). 
Mathematically, a vector x in an n-dimensional Euclidean space can be defined as an ordered list of n real numbers (the Cartesian coordinates of P): x = [x1, x2, ..., xn]. Its magnitude or length, denoted by ‖ x ‖ {\displaystyle \|x\|} , is most commonly defined as its Euclidean norm (or Euclidean length): ‖ x ‖ = x 1 2 + x 2 2 + ⋯ + x n 2 . {\displaystyle \|\mathbf {x} \|={\sqrt {x_{1}^{2}+x_{2}^{2}+\cdots +x_{n}^{2}}}.} For instance, in a 3-dimensional space, the magnitude of [3, 4, 12] is 13 because 3 2 + 4 2 + 12 2 = 169 = 13. {\displaystyle {\sqrt {3^{2}+4^{2}+12^{2}}}={\sqrt {169}}=13.} This is equivalent to the square root of the dot product of the vector with itself: ‖ x ‖ = x ⋅ x . {\displaystyle \|\mathbf {x} \|={\sqrt {\mathbf {x} \cdot \mathbf {x} }}.} The Euclidean norm of a vector is just a special case of Euclidean distance: the distance between its tail and its tip. Two similar notations are used for the Euclidean norm of a vector x: ‖ x ‖ , {\displaystyle \left\|\mathbf {x} \right\|,} | x | . {\displaystyle \left|\mathbf {x} \right|.} A disadvantage of the second notation is that it can also be used to denote the absolute value of scalars and the determinants of matrices, which introduces an element of ambiguity. === Normed vector spaces === By definition, all Euclidean vectors have a magnitude (see above). However, a vector in an abstract vector space does not possess a magnitude. A vector space endowed with a norm, such as the Euclidean space, is called a normed vector space. The norm of a vector v in a normed vector space can be considered to be the magnitude of v. === Pseudo-Euclidean space === In a pseudo-Euclidean space, the magnitude of a vector is the value of the quadratic form for that vector. == Logarithmic magnitudes == When comparing magnitudes, a logarithmic scale is often used. Examples include the loudness of a sound (measured in decibels), the brightness of a star, and the Richter scale of earthquake intensity. Logarithmic magnitudes can be negative. In the natural sciences, a logarithmic magnitude is typically referred to as a level. == Order of magnitude == Orders of magnitude denote differences in numeric quantities, usually measurements, by a factor of 10—that is, a difference of one digit in the location of the decimal point. == Other mathematical measures == == See also == Number sense Vector notation Set size == References ==
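As an illustration of the absolute value, complex modulus, and Euclidean norm described above, a minimal sketch in Python (the language and function names are illustrative additions, not part of the source article):

import math

def complex_modulus(z: complex) -> float:
    # |z| = sqrt(a^2 + b^2), equivalently the square root of z times its conjugate
    return math.sqrt((z * z.conjugate()).real)

def euclidean_norm(x) -> float:
    # ||x|| = sqrt(x1^2 + ... + xn^2), the square root of the dot product of x with itself
    return math.sqrt(sum(xi * xi for xi in x))

print(abs(-70))                    # 70, the distance of -70 from zero
print(complex_modulus(-3 + 4j))    # 5.0, as in the example above
print(euclidean_norm([3, 4, 12]))  # 13.0, since sqrt(169) = 13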
|
https://en.wikipedia.org/wiki/Magnitude_(mathematics)
|
Order in mathematics may refer to: == Set theory == Total order and partial order, a binary relation generalizing the usual ordering of numbers and of words in a dictionary Ordered set Order in Ramsey theory, uniform structures in consequence to critical set cardinality == Algebra == Order (group theory), the cardinality of a group or period of an element Order of a polynomial (disambiguation) Order of a square matrix, its dimension Order (ring theory), an algebraic structure Ordered group Ordered field == Analysis == Order (differential equation) or order of highest derivative, of a differential equation Leading-order terms NURBS order, a number one greater than the degree of the polynomial representation of a non-uniform rational B-spline Order of convergence, a measurement of convergence Order of derivation Order of an entire function Order of a power series, the lowest degree of its terms Ordered list, a sequence or tuple Orders of approximation in Big O notation Z-order (curve), a space-filling curve == Arithmetic == Multiplicative order in modular arithmetic Order of operations Orders of magnitude, a class of scale or magnitude of any amount == Combinatorics == Order in the Josephus permutation Ordered selections and partitions of the twelvefold way in combinatorics Ordered set, a bijection, cyclic order, or permutation Weak order of permutations == Fractals == Complexor, or complex order in fractals Order of extension in Lakes of Wada Order of fractal dimension (Rényi dimensions) Orders of construction in the Pythagoras tree == Geometry == Long-range aperiodic order, in pinwheel tiling, for instance == Graphs == Graph order, the number of nodes in a graph First order and second order logic of graphs Topological ordering of directed acyclic graphs Degeneracy ordering of undirected graphs Elimination ordering of chordal graphs Order, the complexity of a structure within a graph: see haven (graph theory) and bramble (graph theory) == Logic == In logic, model theory and type theory: Zeroth-order logic First-order logic Second-order logic Higher-order logic == Order theory == Order (journal), an academic journal on order theory Dense order, a total order wherein between any unequal pair of elements there is always an intervening element in the order Glossary of order theory Lexicographical order, an ordering method on sequences analogous to alphabetical order on words List of order topics, list of order theory topics Order theory, study of various binary relations known as orders Order topology, a topology of total order for totally ordered sets Ordinal numbers, numbers assigned to sets based on their set-theoretic order Partial order, often called just "order" in order theory texts, a transitive antisymmetric relation Total order, a partial order that is also total, in that either the relation or its inverse holds between any unequal elements == Statistics == Order statistics First-order statistics, e.g., arithmetic mean, median, quantiles Second-order statistics, e.g., correlation, power spectrum, variance Higher-order statistics, e.g., bispectrum, kurtosis, skewness
|
https://en.wikipedia.org/wiki/Order_(mathematics)
|
Mathematics competitions or mathematical olympiads are competitive events where participants complete a math test. These tests may require multiple choice or numeric answers, or a detailed written solution or proof. == International mathematics competitions == Championnat International de Jeux Mathématiques et Logiques — for all ages, mainly for French-speaking countries, but participation is not limited by language. China Girls Mathematical Olympiad (CGMO) — held annually for teams of girls representing different regions within China and a few other countries. European Girls' Mathematical Olympiad (EGMO) — since April 2012 Integration Bee — competition in integral calculus held in various institutions of higher learning in the United States and some other countries International Mathematical Modeling Challenge — team contest for high school students International Mathematical Olympiad (IMO) — the oldest international Olympiad, occurring annually since 1959. International Mathematics Competition for University Students (IMC) — international competition for undergraduate students. Mathematical Contest in Modeling (MCM) — team contest for undergraduates Mathematical Kangaroo — worldwide competition. Mental Calculation World Cup — contest for the best mental calculators Primary Mathematics World Contest (PMWC) — worldwide competition Rocket City Math League (RCML) — Competition run by students at Virgil I. Grissom High School with levels ranging from Explorer (Pre-Algebra) to Discovery (Comprehensive) Romanian Master of Mathematics and Sciences — Olympiad for the selection of the top 20 countries in the last IMO. Tournament of the Towns — worldwide competition. == Multinational regional mathematics competitions == Asian Pacific Mathematics Olympiad (APMO) — Pacific rim Balkan Mathematical Olympiad — for students from Balkan area Baltic Way — Baltic area ICAS-Mathematics (formerly Australasian Schools Mathematics Assessment) Mediterranean Mathematics Competition. Olympiad for countries in the Mediterranean zone. 
Noetic Learning math contest — United States and Canada (primary schools) Nordic Mathematical Contest (NMC) — the five Nordic countries North East Asian Mathematics Competition (NEAMC) — North-East Asia Pan African Mathematics Olympiads (PAMO) South East Asian Mathematics Competition (SEAMC) — South-East Asia William Lowell Putnam Mathematical Competition — United States and Canada == National mathematics olympiads == === Australia === Australian Mathematics Competition === Bangladesh === Bangladesh Mathematical Olympiad (Jatiyo Gonit Utshob) === Belgium === Olympiade Mathématique Belge — competition for French-speaking students in Belgium Vlaamse Wiskunde Olympiade — competition for Dutch-speaking students in Belgium === Brazil === Olimpíada Brasileira de Matemática (OBM) — national competition open to all students from sixth grade to university Olimpíada Brasileira de Matemática das Escolas Públicas (OBMEP) — national competition open to public-school students from fourth grade to high school === Canada === Canadian Open Mathematics Challenge — Canada's premier national mathematics competition open to any student with an interest in and grasp of high school math and organised by Canadian Mathematical Society Canadian Mathematical Olympiad — competition whose top performers represent Canada at the International Mathematical Olympiad The Centre for Education in Mathematics and Computing (CEMC) based out of the University of Waterloo hosts long-standing national competitions for grade levels 7–12 MathChallengers (formerly MathCounts BC) — for eighth, ninth, and tenth grade students === China === Chinese Mathematical Olympiad (CMO) === France === Concours général — competition whose mathematics portion is open to twelfth grade students === Hong Kong === Hong Kong Mathematics Olympiad Hong Kong Mathematical High Achievers Selection Contest — for students from Form 1 to Form 3 Pui Ching Invitational Mathematics Competition Primary Mathematics World Contest === Hungary === Miklós Schweitzer Competition Középiskolai Matematikai Lapok — correspondence competition for students from 9th–12th grade National Secondary School Academic Competition – Mathematics === India === Indian National Mathematical Olympiad Science Olympiad Foundation - Conducts Mathematics Olympiads === Indonesia === National Science Olympiad (Olimpiade Sains Nasional) — includes mathematics along with various science topics === Kenya === Moi National Mathematics Contest — prepared and hosted by Mang'u High School but open to students from all Kenyan high schools === Nigeria === Cowbellpedia. This contest is sponsored by Promasidor Nigeria. It is open to students from eight to eighteen, at public and private schools in Nigeria. === Philippines === Philippine Math Olympiad, the selection event of the Mathematical Society of the Philippines for the Philippine IMO team === Saudi Arabia === KFUPM mathematics olympiad – organized by King Fahd University of Petroleum and Minerals (KFUPM). === Singapore === Singapore Mathematical Olympiad (SMO) — organized by the Singapore Mathematical Society, the competition is open to all pre-university students in Singapore. === South Africa === University of Cape Town Mathematics Competition — open to students in grades 8 through 12 in the Western Cape province. 
=== United States === ==== National elementary school competitions (K–5) and higher ==== Math League (grades 4–12) Mathematical Olympiads for Elementary and Middle Schools (MOEMS) (grades 4–6 and 7–8) Noetic Learning math contest (grades 2-8) Pi Math Contest (for elementary, middle and high school students) ==== National middle school competitions (grades 6–8) and lower/higher ==== American Mathematics Contest 8 (AMC->8), formerly the American Junior High School Mathematics Examination (AJHSME) Math League (grades 4–12) MATHCOUNTS Mathematical Olympiads for Elementary and Middle Schools (MOEMS) Noetic Learning math contest (grades 2-8) Pi Math Contest (for elementary, middle and high school students) Rocket City Math League (pre-algebra to calculus) United States of America Mathematical Talent Search (USAMTS) ==== National high school competitions (grade 9–12) and lower ==== American Invitational Mathematics Examination (AIME) American Mathematics Contest 10 (AMC10) American Mathematics Contest 12 (AMC12), formerly the American High School Mathematics Examination (AHSME) American Regions Mathematics League (ARML) Harvard-MIT Mathematics Tournament (HMMT) iTest High School Mathematical Contest in Modeling (HiMCM) Math League (grades 4–12) Math-O-Vision (grades 9–12) Math Prize for Girls MathWorks Math Modeling Challenge Mu Alpha Theta Pi Math Contest (for elementary, middle and high school students) United States of America Mathematical Olympiad (USAMO) United States of America Mathematical Talent Search (USAMTS) Rocket City Math League (pre-algebra to calculus) ==== National college competitions ==== AMATYC Mathematics Contest Mathematical Contest in Modeling (MCM) William Lowell Putnam Mathematical Competition === Spain === Liga Matemática, mathematics competition organized by the National Association of Mathematics Students === Vietnam === Kì thi Học sinh giỏi Quốc gia môn Toán — hosted annually by Vietnamese Ministry of Education and Training for high-schoolers around the nation. Consists of 2 rounds, the best-scorers of Round 1 will proceed to Round 2 in order to qualify for the country's various math Olympiads teams. == See also == Mathematical software Mathethon - computer-based math competition == References ==
|
https://en.wikipedia.org/wiki/List_of_mathematics_competitions
|
Lottery mathematics is used to calculate probabilities of winning or losing a lottery game. It is based primarily on combinatorics, particularly the twelvefold way and combinations without replacement. It can also be used to analyze coincidences that happen in lottery drawings, such as repeated numbers appearing across different draws. == Choosing 6 from 49 == In a typical 6/49 game, each player chooses six distinct numbers from a range of 1–49. If the six numbers on a ticket match the numbers drawn by the lottery, the ticket holder is a jackpot winner—regardless of the order of the numbers. The probability of this happening is 1 in 13,983,816. The chance of winning can be demonstrated as follows: The first number drawn has a 1 in 49 chance of matching. When the draw comes to the second number, there are now only 48 balls left in the bag, because the balls are drawn without replacement. So there is now a 1 in 48 chance of predicting this number. Thus for each of the 49 ways of choosing the first number there are 48 different ways of choosing the second. This means that the probability of correctly predicting 2 numbers drawn from 49 in the correct order is calculated as 1 in 49 × 48. On drawing the third number there are only 47 ways of choosing the number; but we could have arrived at this point in any of 49 × 48 ways, so the chances of correctly predicting 3 numbers drawn from 49, again in the correct order, is 1 in 49 × 48 × 47. This continues until the sixth number has been drawn, giving the final calculation, 49 × 48 × 47 × 46 × 45 × 44, which can also be written as 49 ! ( 49 − 6 ) ! {\displaystyle {49! \over (49-6)!}} or 49 factorial divided by 43 factorial or FACT(49)/FACT(43) or simply PERM(49,6) . 608281864034267560872252163321295376887552831379210240000000000 / 60415263063373835637355132068513997507264512000000000 = 10068347520 This works out to 10,068,347,520, which is much bigger than the ~14 million stated above. Perm(49,6)=10068347520 and 49 nPr 6 =10068347520. However, the order of the 6 numbers is not significant for the payout. That is, if a ticket has the numbers 1, 2, 3, 4, 5, and 6, it wins as long as all the numbers 1 through 6 are drawn, no matter what order they come out in. Accordingly, given any combination of 6 numbers, there are 6 × 5 × 4 × 3 × 2 × 1 = 6! or 720 orders in which they can be drawn. Dividing 10,068,347,520 by 720 gives 13,983,816, also written as 49 ! 6 ! ∗ ( 49 − 6 ) ! {\displaystyle {49! \over 6!*(49-6)!}} , or COMBIN(49,6) or 49 nCr 6 or more generally as ( n k ) = n ! k ! ( n − k ) ! {\displaystyle {n \choose k}={n! \over k!(n-k)!}} , where n is the number of alternatives and k is the number of choices. Further information is available at binomial coefficient and multinomial coefficient. This function is called the combination function, COMBIN(n,k). For the rest of this article, we will use the notation ( n k ) {\displaystyle {n \choose k}} . "Combination" means the group of numbers selected, irrespective of the order in which they are drawn. A combination of numbers is usually presented in ascending order. An eventual 7th drawn number, the reserve or bonus, is presented at the end. An alternative method of calculating the odds is to note that the probability of the first ball corresponding to one of the six chosen is 6/49; the probability of the second ball corresponding to one of the remaining five chosen is 5/48; and so on. 
This yields a final formula of ( n k ) = ( 49 6 ) = 49 6 ∗ 48 5 ∗ 47 4 ∗ 46 3 ∗ 45 2 ∗ 44 1 {\displaystyle {n \choose k}={49 \choose 6}={49 \over 6}*{48 \over 5}*{47 \over 4}*{46 \over 3}*{45 \over 2}*{44 \over 1}} A 7th ball is often drawn as a reserve (bonus) ball; in the past it offered only a second chance to get 5+1 numbers correct with 6 numbers played. == Odds of getting other possibilities in choosing 6 from 49 == One must divide the number of combinations producing the given result by the total number of possible combinations (for example, ( 49 6 ) = 13 , 983 , 816 {\displaystyle {49 \choose 6}=13,983,816} ). The numerator equates to the number of ways to select the winning numbers multiplied by the number of ways to select the losing numbers. For a score of n (for example, if 3 choices match three of the 6 balls drawn, then n = 3), ( 6 n ) {\displaystyle {6 \choose n}} counts the number of ways of selecting n winning numbers from the 6 winning numbers. This means that there are 6 - n losing numbers, which are chosen from the 43 losing numbers in ( 43 6 − n ) {\displaystyle {43 \choose 6-n}} ways. The total number of combinations giving that result is, as stated above, the first number multiplied by the second. The expression is therefore ( 6 n ) ( 43 6 − n ) ( 49 6 ) {\displaystyle {6 \choose n}{43 \choose 6-n} \over {49 \choose 6}} . This can be written in a general form for all lotteries as: ( K B ) ( N − K K − B ) ( N K ) {\displaystyle {K \choose B}{N-K \choose K-B} \over {N \choose K}} where N {\displaystyle N} is the number of balls in the lottery, K {\displaystyle K} is the number of balls in a single ticket, and B {\displaystyle B} is the number of matching balls for a winning ticket. The generalisation of this formula is called the hypergeometric distribution. This gives the following results: When a 7th number is drawn as a bonus number, there are 49!/(6! × 1! × 42!) = COMBIN(49,6) × COMBIN(43,1) = 601,304,088 different possible drawing results. You would expect to score 3 of 6 or better once in around 36.19 drawings. Notice that it takes a "3 if 6" wheel of 163 combinations to be sure of at least one 3/6 score. 1/p changes when several distinct combinations are played together; such play is mostly about winning something, not just the jackpot. == Ensuring to win the jackpot == There is only one known way to ensure winning the jackpot: buy at least one lottery ticket for every possible number combination. For example, one has to buy 13,983,816 different tickets to be sure of winning the jackpot in a 6/49 game. Lottery organizations have laws, rules and safeguards in place to prevent gamblers from executing such an operation. Further, just winning the jackpot by buying every possible combination does not guarantee that one will break even or make a profit. If p {\displaystyle p} is the probability of winning; c t {\displaystyle c_{t}} the cost of a ticket; c l {\displaystyle c_{l}} the cost of obtaining a ticket (e.g. including the logistics); c f {\displaystyle c_{f}} the one-time costs of the operation (such as setting up and conducting the operation); then the jackpot m j {\displaystyle m_{j}} should contain at least m j ≥ c f + c t + c l p {\displaystyle m_{j}\geq c_{f}+{\frac {c_{t}+c_{l}}{p}}} to have a chance to at least break even.
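As a check on the jackpot count and match odds above, a short sketch in Python (illustrative only; math.comb and math.perm are the standard-library combination and permutation functions):

from math import comb, factorial, perm

ordered = perm(49, 6)               # 49!/43! = 10,068,347,520 ordered draws
jackpot = ordered // factorial(6)   # divide by the 720 orderings -> 13,983,816
assert jackpot == comb(49, 6)

def match_probability(n):
    # Hypergeometric probability of matching exactly n of the 6 winning numbers
    return comb(6, n) * comb(43, 6 - n) / comb(49, 6)

for n in range(7):
    print(f"exactly {n} matches: 1 in {1 / match_probability(n):,.2f}")
# e.g. exactly 3 matches comes out to about 1 in 56.66, consistent with the
# powerball discussion below.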
The above theoretical "chance to break-even" point is slightly offset by the sum ∑ i m i {\displaystyle \sum _{i}{}m_{i}} of the minor wins also included in all the lottery tickets: m j ≥ c f + c t + c l p − ∑ i m i {\displaystyle m_{j}\geq c_{f}+{\frac {c_{t}+c_{l}}{p}}-\sum _{i}{}m_{i}} Still, even if the above relation is satisfied, it does not guarantee to break even. The payout depends on the number of winning tickets for all the prizes n x {\displaystyle n_{x}} , resulting in the relation m j n j ≥ c f + c t + c l p − ∑ i m i n i {\displaystyle {\frac {m_{j}}{n_{j}}}\geq c_{f}+{\frac {c_{t}+c_{l}}{p}}-\sum _{i}{}{\frac {m_{i}}{n_{i}}}} In probably the only known successful operations the threshold to execute an operation was set at three times the cost of the tickets alone for unknown reasons m j ≥ 3 × c t p {\displaystyle m_{j}\geq 3\times {\frac {c_{t}}{p}}} I.e. n j p c t ( c f + c t + c l p − ∑ i m i n i ) ≪ 3 {\displaystyle {\frac {n_{j}p}{c_{t}}}\left(c_{f}+{\frac {c_{t}+c_{l}}{p}}-\sum _{i}{}{\frac {m_{i}}{n_{i}}}\right)\ll 3} This does, however, not eliminate all risks to make no profit. The success of the operations still depended on a bit of luck. In addition, in one operation the logistics failed and not all combinations could be obtained. This added the risk of not even winning the jackpot at all. == Powerballs and bonus balls == Many lotteries have a Powerball (or "bonus ball"). If the powerball is drawn from a pool of numbers different from the main lottery, the odds are multiplied by the number of powerballs. For example, in the 6 from 49 lottery, given 10 powerball numbers, then the odds of getting a score of 3 and the powerball would be 1 in 56.66 × 10, or 566.6 (the probability would be divided by 10, to give an exact value of 8815 4994220 {\textstyle {\frac {8815}{4994220}}} ). Another example of such a game is Mega Millions, albeit with different jackpot odds. Where more than 1 powerball is drawn from a separate pool of balls to the main lottery (for example, in the EuroMillions game), the odds of the different possible powerball matching scores are calculated using the method shown in the "other scores" section above (in other words, the powerballs are like a mini-lottery in their own right), and then multiplied by the odds of achieving the required main-lottery score. If the powerball is drawn from the same pool of numbers as the main lottery, then, for a given target score, the number of winning combinations includes the powerball. For games based on the Canadian lottery (such as the lottery of the United Kingdom), after the 6 main balls are drawn, an extra ball is drawn from the same pool of balls, and this becomes the powerball (or "bonus ball"). An extra prize is given for matching 5 balls and the bonus ball. As described in the "other scores" section above, the number of ways one can obtain a score of 5 from a single ticket is ( 6 5 ) ( 43 1 ) = 258 {\textstyle {6 \choose 5}{43 \choose 1}=258} . Since the number of remaining balls is 43, and the ticket has 1 unmatched number remaining, 1/43 of these 258 combinations will match the next ball drawn (the powerball), leaving 258/43 = 6 ways of achieving it. Therefore, the odds of getting a score of 5 and the powerball are 6 ( 49 6 ) = 1 2 , 330 , 636 {\textstyle {6 \over {49 \choose 6}}={1 \over 2,330,636}} . 
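The score-5-plus-bonus figure just derived can be reproduced with a brief Python sketch (illustrative only):

from math import comb

total = comb(49, 6)                # 13,983,816 possible main draws
match5 = comb(6, 5) * comb(43, 1)  # 258 ways to match exactly 5 of the 6 main numbers

match5_and_bonus = match5 / 43     # only 1 of the 43 remaining balls is on the ticket -> 6 ways
print(total / match5_and_bonus)    # 2330636.0, i.e. odds of 1 in 2,330,636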
Of the 258 combinations that match 5 of the main 6 balls, in 42/43 of them the remaining number will not match the powerball, giving odds of 258 ⋅ 42 43 ( 49 6 ) = 3 166 , 474 ≈ 1.802 × 10 − 5 {\textstyle {{258\cdot {\frac {42}{43}}} \over {49 \choose 6}}={\frac {3}{166,474}}\approx 1.802\times 10^{-5}} for obtaining a score of 5 without matching the powerball. Using the same principle, the odds of getting a score of 2 and the powerball are ( 6 2 ) ( 43 4 ) = 1 , 851 , 150 {\textstyle {6 \choose 2}{43 \choose 4}=1,\!851,\!150} for the score of 2 multiplied by the probability of one of the remaining four numbers matching the bonus ball, which is 4/43. Since 1 , 851 , 150 ⋅ 4 43 = 172 , 200 {\textstyle 1,851,150\cdot {\frac {4}{43}}=172,\!200} , the probability of obtaining the score of 2 and the bonus ball is 172 , 200 ( 49 6 ) = 1025 83237 = 1.231 % {\textstyle {\frac {172,200}{49 \choose 6}}={\frac {1025}{83237}}=1.231\%} , approximate decimal odds of 1 in 81.2. The general formula for B {\displaystyle B} matching balls in a N {\displaystyle N} choose K {\displaystyle K} lottery with one bonus ball from the N {\displaystyle N} pool of balls is: K − B N − K ( K B ) ( N − K K − B ) ( N K ) {\displaystyle {\frac {{\frac {K-B}{N-K}}{K \choose B}{N-K \choose K-B}}{N \choose K}}} The general formula for B {\displaystyle B} matching balls in a N {\displaystyle N} choose K {\displaystyle K} lottery with zero bonus ball from the N {\displaystyle N} pool of balls is: N − K − K + B N − K ( K B ) ( N − K K − B ) ( N K ) {\displaystyle {N-K-K+B \over N-K}{K \choose B}{N-K \choose K-B} \over {N \choose K}} The general formula for B {\displaystyle B} matching balls in a N {\displaystyle N} choose K {\displaystyle K} lottery with one bonus ball from a separate pool of P {\displaystyle P} balls is: 1 P ( K B ) ( N − K K − B ) ( N K ) {\displaystyle {1 \over P}{K \choose B}{N-K \choose K-B} \over {N \choose K}} The general formula for B {\displaystyle B} matching balls in a N {\displaystyle N} choose K {\displaystyle K} lottery with no bonus ball from a separate pool of P {\displaystyle P} balls is: P − 1 P ( K B ) ( N − K K − B ) ( N K ) {\displaystyle {P-1 \over P}{K \choose B}{N-K \choose K-B} \over {N \choose K}} == Minimum number of tickets for a match == It is a hard (and often open) problem to calculate the minimum number of tickets one needs to purchase to guarantee that at least one of these tickets matches at least 2 numbers. In the 5-from-90 lotto, the minimum number of tickets that can guarantee a ticket with at least 2 matches is 100. == Coincidences involving lottery numbers == Coincidences in lottery drawings often capture our imagination and can make news headlines as they seemingly highlight patterns in what should be entirely random outcomes. For example, repeated numbers appearing across different draws may appear on the surface to be too implausible to be by pure chance. For instance, on September 6, 2009, the six numbers 4, 15, 23, 24, 35, and 42 were drawn from 49 in the Bulgarian national 6/49 lottery, and in the very next drawing on September 10th, the same six numbers were drawn again. Lottery mathematics can be used to analyze these extraordinary events. == Information theoretic results == As a discrete probability space, the probability of any particular lottery outcome is atomic, meaning it is greater than zero. Therefore, the probability of any event is the sum of probabilities of the outcomes of the event. 
This makes it easy to calculate quantities of interest from information theory. For example, the information content of any event is easy to calculate, by the formula I ( E ) := − log [ Pr ( E ) ] = − log ( P ) . {\displaystyle \operatorname {I} (E):=-\log {\left[\Pr {\left(E\right)}\right]}=-\log {\left(P\right)}.} In particular, the information content of outcome x {\displaystyle x} of discrete random variable X {\displaystyle X} is I X ( x ) := − log [ p X ( x ) ] = log ( 1 p X ( x ) ) . {\displaystyle \operatorname {I} _{X}(x):=-\log {\left[p_{X}{\left(x\right)}\right]}=\log {\left({\frac {1}{p_{X}{\left(x\right)}}}\right)}.} For example, winning in the example § Choosing 6 from 49 above is a Bernoulli-distributed random variable X {\displaystyle X} with a 1/13,983,816 chance of winning ("success") We write X ∼ B e r n o u l l i ( p ) = B ( 1 , p ) {\textstyle X\sim \mathrm {Bernoulli} \!\left(p\right)=\mathrm {B} \!\left(1,p\right)} with p = 1 13 , 983 , 816 {\textstyle p={\tfrac {1}{13,983,816}}} and q = 13 , 983 , 815 13 , 983 , 816 {\textstyle q={\tfrac {13,983,815}{13,983,816}}} . The information content of winning is I X ( win ) = − log 2 p X ( win ) = − log 2 1 13 , 983 , 816 ≈ 23.73725 {\displaystyle \operatorname {I} _{X}({\text{win}})=-\log _{2}{p_{X}{({\text{win}})}}=-\log _{2}\!{\tfrac {1}{13,983,816}}\approx 23.73725} shannons or bits of information. (See units of information for further explanation of terminology.) The information content of losing is I X ( lose ) = − log 2 p X ( lose ) = − log 2 13 , 983 , 815 13 , 983 , 816 ≈ 1.0317 × 10 − 7 shannons . {\displaystyle {\begin{aligned}\operatorname {I} _{X}({\text{lose}})&=-\log _{2}{p_{X}{({\text{lose}})}}=-\log _{2}\!{\tfrac {13,983,815}{13,983,816}}\\&\approx 1.0317\times 10^{-7}{\text{ shannons}}.\end{aligned}}} The information entropy of a lottery probability distribution is also easy to calculate as the expected value of the information content. H ( X ) = ∑ x − p X ( x ) log p X ( x ) = ∑ x p X ( x ) I X ( x ) = d e f E [ I X ( x ) ] {\displaystyle {\begin{alignedat}{2}\mathrm {H} (X)&=\sum _{x}{-p_{X}{\left(x\right)}\log {p_{X}{\left(x\right)}}}\ &=\sum _{x}{p_{X}{\left(x\right)}\operatorname {I} _{X}(x)}\\&{\overset {\underset {\mathrm {def} }{}}{=}}\ \mathbb {E} {\left[\operatorname {I} _{X}(x)\right]}\end{alignedat}}} Oftentimes the random variable of interest in the lottery is a Bernoulli trial. In this case, the Bernoulli entropy function may be used. Using X {\displaystyle X} representing winning the 6-of-49 lottery, the Shannon entropy of 6-of-49 above is H ( X ) = − p log ( p ) − q log ( q ) = − 1 13 , 983 , 816 log 1 13 , 983 , 816 − 13 , 983 , 815 13 , 983 , 816 log 13 , 983 , 815 13 , 983 , 816 ≈ 1.80065 × 10 − 6 shannons. {\displaystyle {\begin{aligned}\mathrm {H} (X)&=-p\log(p)-q\log(q)=-{\tfrac {1}{13,983,816}}\log \!{\tfrac {1}{13,983,816}}-{\tfrac {13,983,815}{13,983,816}}\log \!{\tfrac {13,983,815}{13,983,816}}\\&\approx 1.80065\times 10^{-6}{\text{ shannons.}}\end{aligned}}} == References == == External links == Euler's Analysis of the Genoese Lottery – Convergence (August 2010), Mathematical Association of America Lottery Mathematics – INFAROM Publishing 13,983,816 and the Lottery – YouTube video with James Clewett, Numberphile, March 2012
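The information-theoretic figures above can be reproduced with a short Python sketch (illustrative only):

from math import comb, log2

p = 1 / comb(49, 6)   # probability of winning the 6/49 jackpot
q = 1 - p             # probability of losing

print(-log2(p))                    # ~23.73725 shannons: information content of winning
print(-log2(q))                    # ~1.0317e-07 shannons: information content of losing
print(-p * log2(p) - q * log2(q))  # ~1.80065e-06 shannons: binary entropy of the draw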
|
https://en.wikipedia.org/wiki/Lottery_mathematics
|
In mathematics, a degenerate case is a limiting case of a class of objects which appears to be qualitatively different from (and usually simpler than) the rest of the class; "degeneracy" is the condition of being a degenerate case. The definitions of many classes of composite or structured objects often implicitly include inequalities. For example, the angles and the side lengths of a triangle are supposed to be positive. The limiting cases, where one or several of these inequalities become equalities, are degeneracies. In the case of triangles, one has a degenerate triangle if at least one side length or angle is zero. Equivalently, it becomes a "line segment". Often, the degenerate cases are the exceptional cases where changes to the usual dimension or the cardinality of the object (or of some part of it) occur. For example, a triangle is an object of dimension two, and a degenerate triangle is contained in a line, which makes its dimension one. This is similar to the case of a circle, whose dimension shrinks from two to zero as it degenerates into a point. As another example, the solution set of a system of equations that depends on parameters generally has a fixed cardinality and dimension, but cardinality and/or dimension may be different for some exceptional values, called degenerate cases. In such a degenerate case, the solution set is said to be degenerate. For some classes of composite objects, the degenerate cases depend on the properties that are specifically studied. In particular, the class of objects may often be defined or characterized by systems of equations. In most scenarios, a given class of objects may be defined by several different systems of equations, and these different systems of equations may lead to different degenerate cases, while characterizing the same non-degenerate cases. This may be the reason for which there is no general definition of degeneracy, despite the fact that the concept is widely used and defined (if needed) in each specific situation. A degenerate case thus has special features which makes it non-generic, or a special case. However, not all non-generic or special cases are degenerate. For example, right triangles, isosceles triangles and equilateral triangles are non-generic and non-degenerate. In fact, degenerate cases often correspond to singularities, either in the object or in some configuration space. For example, a conic section is degenerate if and only if it has singular points (e.g., point, line, intersecting lines). == In geometry == === Conic section === A degenerate conic is a conic section (a second-degree plane curve, defined by a polynomial equation of degree two) that fails to be an irreducible curve. A point is a degenerate circle, namely one with radius 0. The line is a degenerate case of a parabola if the parabola resides on a tangent plane. In inversive geometry, a line is a degenerate case of a circle, with infinite radius. Two parallel lines also form a degenerate parabola. A line segment can be viewed as a degenerate case of an ellipse in which the semiminor axis goes to zero, the foci go to the endpoints, and the eccentricity goes to one. A circle can be thought of as a degenerate ellipse, as the eccentricity approaches 0 and the foci merge. An ellipse can also degenerate into a single point. A hyperbola can degenerate into two lines crossing at a point, through a family of hyperbolae having those lines as common asymptotes. 
=== Triangle === A degenerate triangle is a "flat" triangle in the sense that it is contained in a line segment. It has thus collinear vertices and zero area. If the three vertices are all distinct, it has two 0° angles and one 180° angle. If two vertices are equal, it has one 0° angle and two undefined angles. If all three vertices are equal, all three angles are undefined. === Rectangle === A rectangle with one pair of opposite sides of length zero degenerates to a line segment, with zero area. If both of the rectangle's pairs of opposite sides have length zero, the rectangle degenerates to a point. === Hyperrectangle === A hyperrectangle is the n-dimensional analog of a rectangle. If its sides along any of the n axes has length zero, it degenerates to a lower-dimensional hyperrectangle, all the way down to a point if the sides aligned with every axis have length zero. === Convex polygon === A convex polygon is degenerate if at least two consecutive sides coincide at least partially, or at least one side has zero length, or at least one angle is 180°. Thus a degenerate convex polygon of n sides looks like a polygon with fewer sides. In the case of triangles, this definition coincides with the one that has been given above. === Convex polyhedron === A convex polyhedron is degenerate if either two adjacent facets are coplanar or two edges are aligned. In the case of a tetrahedron, this is equivalent to saying that all of its vertices lie in the same plane, giving it a volume of zero. === Standard torus === In contexts where self-intersection is allowed, a double-covered sphere is a degenerate standard torus where the axis of revolution passes through the center of the generating circle, rather than outside it. A torus degenerates to a circle when its minor radius goes to 0. === Sphere === When the radius of a sphere goes to zero, the resulting degenerate sphere of zero volume is a point. === Other === See general position for other examples. == Elsewhere == A set containing a single point is a degenerate continuum. Objects such as the digon and monogon can be viewed as degenerate cases of polygons: valid in a general abstract mathematical sense, but not part of the original Euclidean conception of polygons. A random variable which can only take one value has a degenerate distribution; if that value is the real number 0, then its probability density is the Dirac delta function. A root of a polynomial is sometimes said to be degenerate if it is a multiple root, since generically the n roots of an nth degree polynomial are all distinct. This usage carries over to eigenproblems: a degenerate eigenvalue is a multiple root of the characteristic polynomial. In quantum mechanics, any such multiplicity in the eigenvalues of the Hamiltonian operator gives rise to degenerate energy levels. Usually any such degeneracy indicates some underlying symmetry in the system. == See also == Degeneracy (graph theory) Degenerate form Trivial (mathematics) Pathological (mathematics) Vacuous truth == References ==
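As an illustration of the degenerate triangle described above, a small Python sketch (illustrative only) that flags collinear vertices via a zero cross product, i.e. zero area:

def is_degenerate_triangle(a, b, c, tol=1e-12):
    # Twice the signed area of triangle abc; zero means the vertices are collinear.
    twice_area = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return abs(twice_area) <= tol

print(is_degenerate_triangle((0, 0), (1, 1), (2, 2)))  # True: a "flat" triangle on a line
print(is_degenerate_triangle((0, 0), (1, 0), (0, 1)))  # False: an ordinary triangle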
|
https://en.wikipedia.org/wiki/Degeneracy_(mathematics)
|
In mathematics, a projection is an idempotent mapping of a set (or other mathematical structure) into a subset (or sub-structure). In this case, idempotent means that projecting twice is the same as projecting once. The restriction to a subspace of a projection is also called a projection, even if the idempotence property is lost. An everyday example of a projection is the casting of shadows onto a plane (sheet of paper): the projection of a point is its shadow on the sheet of paper, and the projection (shadow) of a point on the sheet of paper is that point itself (idempotency). The shadow of a three-dimensional sphere is a disk. Originally, the notion of projection was introduced in Euclidean geometry to denote the projection of the three-dimensional Euclidean space onto a plane in it, like the shadow example. The two main projections of this kind are: The projection from a point onto a plane or central projection: If C is a point, called the center of projection, then the projection of a point P different from C onto a plane that does not contain C is the intersection of the line CP with the plane. The points P such that the line CP is parallel to the plane does not have any image by the projection, but one often says that they project to a point at infinity of the plane (see Projective geometry for a formalization of this terminology). The projection of the point C itself is not defined. The projection parallel to a direction D, onto a plane or parallel projection: The image of a point P is the intersection of the plane with the line parallel to D passing through P. See Affine space § Projection for an accurate definition, generalized to any dimension. The concept of projection in mathematics is a very old one, and most likely has its roots in the phenomenon of the shadows cast by real-world objects on the ground. This rudimentary idea was refined and abstracted, first in a geometric context and later in other branches of mathematics. Over time different versions of the concept developed, but today, in a sufficiently abstract setting, we can unify these variations. In cartography, a map projection is a map of a part of the surface of the Earth onto a plane, which, in some cases, but not always, is the restriction of a projection in the above meaning. The 3D projections are also at the basis of the theory of perspective. The need for unifying the two kinds of projections and of defining the image by a central projection of any point different of the center of projection are at the origin of projective geometry. == Definition == Generally, a mapping where the domain and codomain are the same set (or mathematical structure) is a projection if the mapping is idempotent, which means that a projection is equal to its composition with itself. A projection may also refer to a mapping which has a right inverse. Both notions are strongly related, as follows. Let p be an idempotent mapping from a set A into itself (thus p ∘ p = p) and B = p(A) be the image of p. If we denote by π the map p viewed as a map from A onto B and by i the injection of B into A (so that p = i ∘ π), then we have π ∘ i = IdB (so that π has a right inverse). Conversely, if π has a right inverse i, then π ∘ i = IdB implies that i ∘ π ∘ i ∘ π = i ∘ IdB ∘ π = i ∘ π; that is, p = i ∘ π is idempotent. 
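A minimal sketch of this factorization in Python (illustrative only; the sets and maps are toy examples, not part of the source article):

# p projects a point of A = Z x Z onto the x-axis; its image is B = Z x {0}.
def p(point):
    x, _ = point
    return (x, 0)

def pi(point):   # p viewed as a map from A onto its image B
    return p(point)

def i(point):    # the injection of B back into A (plain inclusion here)
    return point

sample = (3, 7)
assert p(p(sample)) == p(sample)   # idempotence: p o p = p
assert i(pi(sample)) == p(sample)  # the factorization p = i o pi
assert pi(i((5, 0))) == (5, 0)     # pi o i is the identity on B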
== Applications == The original notion of projection has been extended or generalized to various mathematical situations, frequently, but not always, related to geometry, for example: In set theory: An operation typified by the j-th projection map, written projj, that takes an element x = (x1, ..., xj, ..., xn) of the Cartesian product X1 × ⋯ × Xj × ⋯ × Xn to the value projj(x) = xj. This map is always surjective and, when each space Xk has a topology, this map is also continuous and open. A mapping that takes an element to its equivalence class under a given equivalence relation is known as the canonical projection. The evaluation map sends a function f to the value f(x) for a fixed x. The space of functions YX can be identified with the Cartesian product ∏ i ∈ X Y {\textstyle \prod _{i\in X}Y} , and the evaluation map is a projection map from the Cartesian product. For relational databases and query languages, the projection is a unary operation written as Π a 1 , … , a n ( R ) {\displaystyle \Pi _{a_{1},\ldots ,a_{n}}(R)} where a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} is a set of attribute names. The result of such projection is defined as the set that is obtained when all tuples in R are restricted to the set { a 1 , … , a n } {\displaystyle \{a_{1},\ldots ,a_{n}\}} . R is a database-relation. In spherical geometry, projection of a sphere upon a plane was used by Ptolemy (~150) in his Planisphaerium. The method is called stereographic projection and uses a plane tangent to a sphere and a pole C diametrically opposite the point of tangency. Any point P on the sphere besides C determines a line CP intersecting the plane at the projected point for P. The correspondence makes the sphere a one-point compactification for the plane when a point at infinity is included to correspond to C, which otherwise has no projection on the plane. A common instance is the complex plane where the compactification corresponds to the Riemann sphere. Alternatively, a hemisphere is frequently projected onto a plane using the gnomonic projection. In linear algebra, a linear transformation that remains unchanged if applied twice: p(u) = p(p(u)). In other words, an idempotent operator. For example, the mapping that takes a point (x, y, z) in three dimensions to the point (x, y, 0) is a projection. This type of projection naturally generalizes to any number of dimensions n for the domain and k ≤ n for the codomain of the mapping. See Orthogonal projection, Projection (linear algebra). In the case of orthogonal projections, the space admits a decomposition as a product, and the projection operator is a projection in that sense as well. In differential topology, any fiber bundle includes a projection map as part of its definition. Locally at least this map looks like a projection map in the sense of the product topology and is therefore open and surjective. In topology, a retraction is a continuous map r: X → X which restricts to the identity map on its image. This satisfies a similar idempotency condition r2 = r and can be considered a generalization of the projection map. The image of a retraction is called a retract of the original space. A retraction which is homotopic to the identity is known as a deformation retraction. This term is also used in category theory to refer to any split epimorphism. The scalar projection (or resolute) of one vector onto another. In category theory, the above notion of Cartesian product of sets can be generalized to arbitrary categories. 
The product of some objects has a canonical projection morphism to each factor. Special cases include the projection from the Cartesian product of sets, the product topology of topological spaces (which is always surjective and open), or from the direct product of groups, etc. Although these morphisms are often epimorphisms and even surjective, they do not have to be. == References == == Further reading == Craig, Thomas (1882) A Treatise on Projections from University of Michigan Historical Math Collection. Henrici, Olaus Magnus Friedrich (1911). "Projection" . Encyclopædia Britannica. Vol. 22 (11th ed.). pp. 427–434.
|
https://en.wikipedia.org/wiki/Projection_(mathematics)
|
In mathematics, a singleton (also known as a unit set or one-point set) is a set with exactly one element. For example, the set { 0 } {\displaystyle \{0\}} is a singleton whose single element is 0 {\displaystyle 0} . == Properties == Within the framework of Zermelo–Fraenkel set theory, the axiom of regularity guarantees that no set is an element of itself. This implies that a singleton is necessarily distinct from the element it contains, thus 1 and { 1 } {\displaystyle \{1\}} are not the same thing, and the empty set is distinct from the set containing only the empty set. A set such as { { 1 , 2 , 3 } } {\displaystyle \{\{1,2,3\}\}} is a singleton as it contains a single element (which itself is a set, but not a singleton). A set is a singleton if and only if its cardinality is 1. In von Neumann's set-theoretic construction of the natural numbers, the number 1 is defined as the singleton { 0 } . {\displaystyle \{0\}.} In axiomatic set theory, the existence of singletons is a consequence of the axiom of pairing: for any set A, the axiom applied to A and A asserts the existence of { A , A } , {\displaystyle \{A,A\},} which is the same as the singleton { A } {\displaystyle \{A\}} (since it contains A, and no other set, as an element). If A is any set and S is any singleton, then there exists precisely one function from A to S, the function sending every element of A to the single element of S. Thus every singleton is a terminal object in the category of sets. A singleton has the property that every function from it to any arbitrary set is injective. The only non-singleton set with this property is the empty set. Every singleton set is an ultra prefilter. If X {\displaystyle X} is a set and x ∈ X {\displaystyle x\in X} then the upward closure of { x } {\displaystyle \{x\}} in X , {\displaystyle X,} which is the set { S ⊆ X : x ∈ S } , {\displaystyle \{S\subseteq X:x\in S\},} is a principal ultrafilter on X {\displaystyle X} . Moreover, every principal ultrafilter on X {\displaystyle X} is necessarily of this form. The ultrafilter lemma implies that non-principal ultrafilters exist on every infinite set (these are called free ultrafilters). Every net valued in a singleton subset of X {\displaystyle X} is an ultranet in X . {\displaystyle X.} The Bell number integer sequence counts the number of partitions of a set (OEIS: A000110); if singletons are excluded, the numbers are smaller (OEIS: A000296). == In category theory == Structures built on singletons often serve as terminal objects or zero objects of various categories: The statement above shows that the singleton sets are precisely the terminal objects in the category Set of sets. No other sets are terminal. Any singleton admits a unique topological space structure (both subsets are open). These singleton topological spaces are terminal objects in the category of topological spaces and continuous functions. No other spaces are terminal in that category. Any singleton admits a unique group structure (the unique element serving as identity element). These singleton groups are zero objects in the category of groups and group homomorphisms. No other groups are terminal in that category. == Definition by indicator functions == Let S be a class defined by an indicator function b : X → { 0 , 1 } . {\displaystyle b:X\to \{0,1\}.} Then S is called a singleton if and only if there is some y ∈ X {\displaystyle y\in X} such that for all x ∈ X , {\displaystyle x\in X,} b ( x ) = ( x = y ) .
{\displaystyle b(x)=(x=y).} == Definition in Principia Mathematica == The following definition was introduced in Principia Mathematica by Whitehead and Russell ι {\displaystyle \iota } ‘ x = y ^ ( y = x ) {\displaystyle x={\hat {y}}(y=x)} Df. The symbol ι {\displaystyle \iota } ‘ x {\displaystyle x} denotes the singleton { x } {\displaystyle \{x\}} and y ^ ( y = x ) {\displaystyle {\hat {y}}(y=x)} denotes the class of objects identical with x {\displaystyle x} aka { y : y = x } {\displaystyle \{y:y=x\}} . This occurs as a definition in the introduction, which, in places, simplifies the argument in the main text, where it occurs as proposition 51.01 (p. 357 ibid.). The proposition is subsequently used to define the cardinal number 1 as 1 = α ^ ( ( ∃ x ) α = ι {\displaystyle 1={\hat {\alpha }}((\exists x)\alpha =\iota } ‘ x ) {\displaystyle x)} Df. That is, 1 is the class of singletons. This is definition 52.01 (p. 363 ibid.) == See also == Class (set theory) – Collection of sets in mathematics that can be defined based on a property of its members Isolated point – Point of a subset S around which there are no other points of S Uniqueness quantification – Logical quantifier Urelement – Concept in set theory == References ==
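A small Python sketch of the indicator-function definition above (illustrative only; it checks the finite case by counting the elements that satisfy b):

def is_singleton(X, b):
    # The class defined by b is a singleton iff exactly one y in X satisfies b,
    # i.e. iff b(x) = (x = y) for that single y.
    return sum(1 for x in X if b(x)) == 1

X = range(10)
print(is_singleton(X, lambda x: x == 4))      # True: b defines the singleton {4}
print(is_singleton(X, lambda x: x % 2 == 0))  # False: several elements satisfy b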
|
https://en.wikipedia.org/wiki/Singleton_(mathematics)
|
In mathematics, a set is a collection of different things; these things are called elements or members of the set and are typically mathematical objects of any kind: numbers, symbols, points in space, lines, other geometric shapes, variables, or even other sets. A set may be finite or infinite, depending whether the number of its elements is finite or not. There is a unique set with no elements, called the empty set; a set with a single element is a singleton. Sets are ubiquitous in modern mathematics. Indeed, set theory, more specifically Zermelo–Fraenkel set theory, has been the standard way to provide rigorous foundations for all branches of mathematics since the first half of the 20th century. == Context == Before the end of the 19th century, sets were not studied specifically, and were not clearly distinguished from sequences. Most mathematicians considered infinity as potential—meaning that it is the result of an endless process—and were reluctant to consider infinite sets, that is sets whose number of members is not a natural number. Specifically, a line was not considered as the set of its points, but as a locus where points may be located. The mathematical study of infinite sets began with Georg Cantor (1845–1918). This provided some counterintuitive facts and paradoxes. For example, the number line has an infinite number of elements that is strictly larger than the infinite number of natural numbers, and any line segment has the same number of elements as the whole space. Also, Russell's paradox implies that the phrase "the set of all sets" is self-contradictory. Together with other counterintuitive results, this led to the foundational crisis of mathematics, which was eventually resolved with the general adoption of Zermelo–Fraenkel set theory as a robust foundation of set theory and all mathematics. Meanwhile, sets started to be widely used in all mathematics. In particular, algebraic structures and mathematical spaces are typically defined in terms of sets. Also, many older mathematical results are restated in terms of sets. For example, Euclid's theorem is often stated as "the set of the prime numbers is infinite". This wide use of sets in mathematics was prophesied by David Hilbert when saying: "No one will drive us from the paradise which Cantor created for us." Generally, the common usage of sets in mathematics does not require the full power of Zermelo–Fraenkel set theory. In mathematical practice, sets can be manipulated independently of the logical framework of this theory. The object of this article is to summarize the manipulation rules and properties of sets that are commonly used in mathematics, without reference to any logical framework. For the branch of mathematics that studies sets, see Set theory; for an informal presentation of the corresponding logical framework, see Naive set theory; for a more formal presentation, see Axiomatic set theory and Zermelo–Fraenkel set theory. == Basic notions == In mathematics, a set is a collection of different things. These things are called elements or members of the set and are typically mathematical objects of any kind such as numbers, symbols, points in space, lines, other geometrical shapes, variables, functions, or even other sets. A set may also be called a collection or family, especially when its elements are themselves sets; this may avoid the confusion between the set and its members, and may make reading easier. 
A set may be specified either by listing its elements or by a property that characterizes its elements, such as for the set of the prime numbers or the set of all students in a given class. If x {\displaystyle x} is an element of a set S {\displaystyle S} , one says that x {\displaystyle x} belongs to S {\displaystyle S} or is in S {\displaystyle S} , and this is written as x ∈ S {\displaystyle x\in S} . The statement " y {\displaystyle y} is not in S {\displaystyle S\,} " is written as y ∉ S {\displaystyle y\not \in S} , which can also be read as "y is not in B". For example, if Z {\displaystyle \mathbb {Z} } is the set of the integers, one has − 3 ∈ Z {\displaystyle -3\in \mathbb {Z} } and 1.5 ∉ Z {\displaystyle 1.5\not \in \mathbb {Z} } . Each set is uniquely characterized by its elements. In particular, two sets that have precisely the same elements are equal (they are the same set). This property, called extensionality, can be written in formula as A = B ⟺ ∀ x ( x ∈ A ⟺ x ∈ B ) . {\displaystyle A=B\iff \forall x\;(x\in A\iff x\in B).} This implies that there is only one set with no element, the empty set (or null set) that is denoted ∅ , ∅ {\displaystyle \varnothing ,\emptyset } , or { } . {\displaystyle \{\,\}.} A singleton is a set with exactly one element. If x {\displaystyle x} is this element, the singleton is denoted { x } . {\displaystyle \{x\}.} If x {\displaystyle x} is itself a set, it must not be confused with { x } . {\displaystyle \{x\}.} For example, ∅ {\displaystyle \emptyset } is a set with no elements, while { ∅ } {\displaystyle \{\emptyset \}} is a singleton with ∅ {\displaystyle \emptyset } as its unique element. A set is finite if there exists a natural number n {\displaystyle n} such that the n {\displaystyle n} first natural numbers can be put in one to one correspondence with the elements of the set. In this case, one says that n {\displaystyle n} is the number of elements of the set. A set is infinite if such an n {\displaystyle n} does not exist. The empty set is a finite set with 0 {\displaystyle 0} elements. The natural numbers form an infinite set, commonly denoted N {\displaystyle \mathbb {N} } . Other examples of infinite sets include number sets that contain the natural numbers, real vector spaces, curves and most sorts of spaces. == Specifying a set == Extensionality implies that for specifying a set, one has either to list its elements or to provide a property that uniquely characterizes the set elements. === Roster notation === Roster or enumeration notation is a notation introduced by Ernst Zermelo in 1908 that specifies a set by listing its elements between braces, separated by commas. For example, one knows that { 4 , 2 , 1 , 3 } {\displaystyle \{4,2,1,3\}} and { blue, white, red } {\displaystyle \{{\text{blue, white, red}}\}} denote sets and not tuples because of the enclosing braces. Above notations { } {\displaystyle \{\,\}} and { x } {\displaystyle \{x\}} for the empty set and for a singleton are examples of roster notation. When specifying sets, it only matters whether each distinct element is in the set or not; this means a set does not change if elements are repeated or arranged in a different order. For example, { 1 , 2 , 3 , 4 } = { 4 , 2 , 1 , 3 } = { 4 , 2 , 4 , 3 , 1 , 3 } . 
{\displaystyle \{1,2,3,4\}=\{4,2,1,3\}=\{4,2,4,3,1,3\}.} When there is a clear pattern for generating all set elements, one can use ellipses for abbreviating the notation, such as in { 1 , 2 , 3 , … , 1000 } {\displaystyle \{1,2,3,\ldots ,1000\}} for the positive integers not greater than 1000 {\displaystyle 1000} . Ellipses allow also expanding roster notation to some infinite sets. For example, the set of all integers can be denoted as { … , − 3 , − 2 , − 1 , 0 , 1 , 2 , 3 , … } {\displaystyle \{\ldots ,-3,-2,-1,0,1,2,3,\ldots \}} or { 0 , 1 , − 1 , 2 , − 2 , 3 , − 3 , … } . {\displaystyle \{0,1,-1,2,-2,3,-3,\ldots \}.} === Set-builder notation === Set-builder notation specifies a set as being the set of all elements that satisfy some logical formula. More precisely, if P ( x ) {\displaystyle P(x)} is a logical formula depending on a variable x {\displaystyle x} , which evaluates to true or false depending on the value of x {\displaystyle x} , then { x ∣ P ( x ) } {\displaystyle \{x\mid P(x)\}} or { x : P ( x ) } {\displaystyle \{x:P(x)\}} denotes the set of all x {\displaystyle x} for which P ( x ) {\displaystyle P(x)} is true. For example, a set F can be specified as follows: F = { n ∣ n is an integer, and 0 ≤ n ≤ 19 } . {\displaystyle F=\{n\mid n{\text{ is an integer, and }}0\leq n\leq 19\}.} In this notation, the vertical bar "|" is read as "such that", and the whole formula can be read as "F is the set of all n such that n is an integer in the range from 0 to 19 inclusive". Some logical formulas, such as S is a set {\displaystyle \color {red}{S{\text{ is a set}}}} or S is a set and S ∉ S {\displaystyle \color {red}{S{\text{ is a set and }}S\not \in S}} cannot be used in set-builder notation because there is no set for which the elements are characterized by the formula. There are several ways for avoiding the problem. One may prove that the formula defines a set; this is often almost immediate, but may be very difficult. One may also introduce a larger set U {\displaystyle U} that must contain all elements of the specified set, and write the notation as { x ∣ x ∈ U and ... } {\displaystyle \{x\mid x\in U{\text{ and ...}}\}} or { x ∈ U ∣ ... } . {\displaystyle \{x\in U\mid {\text{ ...}}\}.} One may also define U {\displaystyle U} once for all and take the convention that every variable that appears on the left of the vertical bar of the notation represents an element of U {\displaystyle U} . This amounts to say that x ∈ U {\displaystyle x\in U} is implicit in set-builder notation. In this case, U {\displaystyle U} is often called the domain of discourse or a universe. For example, with the convention that a lower case Latin letter may represent a real number and nothing else, the expression { x ∣ x ∉ Q } {\displaystyle \{x\mid x\not \in \mathbb {Q} \}} is an abbreviation of { x ∈ R ∣ x ∉ Q } , {\displaystyle \{x\in \mathbb {R} \mid x\not \in \mathbb {Q} \},} which defines the irrational numbers. == Subsets == A subset of a set B {\displaystyle B} is a set A {\displaystyle A} such that every element of A {\displaystyle A} is also an element of B {\displaystyle B} . If A {\displaystyle A} is a subset of B {\displaystyle B} , one says commonly that A {\displaystyle A} is contained in B {\displaystyle B} , B {\displaystyle B} contains A {\displaystyle A} , or B {\displaystyle B} is a superset of A {\displaystyle A} . This denoted A ⊆ B {\displaystyle A\subseteq B} and B ⊇ A {\displaystyle B\supseteq A} . 
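Roster notation, set-builder notation over a finite domain of discourse, and the subset relation just defined all have direct counterparts in Python; a minimal sketch, with an arbitrary finite range standing in for the universe U, is:

```python
# Roster notation: repetition and order of elements do not matter.
print({1, 2, 3, 4} == {4, 2, 1, 3} == {4, 2, 4, 3, 1, 3})   # True

# Set-builder notation over an explicit finite domain of discourse U:
# F = { n in U | 0 <= n <= 19 }, cf. the example above.
U = range(-50, 51)
F = {n for n in U if 0 <= n <= 19}
print(len(F), min(F), max(F))    # 20 0 19

# Subset relation: A is a subset of B iff every element of A is also an element of B.
A = {1, 3}
B = {1, 2, 3, 4}
print(A <= B, A.issubset(B))     # True True
print(B <= B, B < B)             # True False (B is a subset, but not a proper subset, of itself)
```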
However many authors use A ⊂ B {\displaystyle A\subset B} and B ⊃ A {\displaystyle B\supset A} instead. The definition of a subset can be expressed in notation as A ⊆ B if and only if ∀ x ( x ∈ A ⟹ x ∈ B ) . {\displaystyle A\subseteq B\quad {\text{if and only if}}\quad \forall x\;(x\in A\implies x\in B).} A set A {\displaystyle A} is a proper subset of a set B {\displaystyle B} if A ⊆ B {\displaystyle A\subseteq B} and A ≠ B {\displaystyle A\neq B} . This is denoted A ⊂ B {\displaystyle A\subset B} and B ⊃ A {\displaystyle B\supset A} . When A ⊂ B {\displaystyle A\subset B} is used for the subset relation, or in case of possible ambiguity, one uses commonly A ⊊ B {\displaystyle A\subsetneq B} and B ⊋ A {\displaystyle B\supsetneq A} . The relationship between sets established by ⊆ is called inclusion or containment. Equality between sets can be expressed in terms of subsets. Two sets are equal if and only if they contain each other: that is, A ⊆ B and B ⊆ A is equivalent to A = B. The empty set is a subset of every set: ∅ ⊆ A. Examples: The set of all humans is a proper subset of the set of all mammals. {1, 3} ⊂ {1, 2, 3, 4}. {1, 2, 3, 4} ⊆ {1, 2, 3, 4} == Basic operations == There are several standard operations that produce new sets from given sets, in the same way as addition and multiplication produce new numbers from given numbers. The operations that are considered in this section are those such that all elements of the produced sets belong to a previously defined set. These operations are commonly illustrated with Euler diagrams and Venn diagrams. The main basic operations on sets are the following ones. === Intersection === The intersection of two sets A {\displaystyle A} and B {\displaystyle B} is a set denoted A ∩ B {\displaystyle A\cap B} whose elements are those elements that belong to both A {\displaystyle A} and B {\displaystyle B} . That is, A ∩ B = { x ∣ x ∈ A ∧ x ∈ B } , {\displaystyle A\cap B=\{x\mid x\in A\land x\in B\},} where ∧ {\displaystyle \land } denotes the logical and. Intersection is associative and commutative; this means that for proceeding a sequence of intersections, one may proceed in any order, without the need of parentheses for specifying the order of operations. Intersection has no general identity element. However, if one restricts intersection to the subsets of a given set U {\displaystyle U} , intersection has U {\displaystyle U} as identity element. If S {\displaystyle {\mathcal {S}}} is a nonempty set of sets, its intersection, denoted ⋂ A ∈ S A , {\textstyle \bigcap _{A\in {\mathcal {S}}}A,} is the set whose elements are those elements that belong to all sets in S {\displaystyle {\mathcal {S}}} . That is, ⋂ A ∈ S A = { x ∣ ( ∀ A ∈ S ) x ∈ A } . {\displaystyle \bigcap _{A\in {\mathcal {S}}}A=\{x\mid (\forall A\in {\mathcal {S}})\;x\in A\}.} These two definitions of the intersection coincide when S {\displaystyle {\mathcal {S}}} has two elements. === Union === The union of two sets A {\displaystyle A} and B {\displaystyle B} is a set denoted A ∪ B {\displaystyle A\cup B} whose elements are those elements that belong to A {\displaystyle A} or B {\displaystyle B} or both. That is, A ∪ B = { x ∣ x ∈ A ∨ x ∈ B } , {\displaystyle A\cup B=\{x\mid x\in A\lor x\in B\},} where ∨ {\displaystyle \lor } denotes the logical or. Union is associative and commutative; this means that for proceeding a sequence of intersections, one may proceed in any order, without the need of parentheses for specifying the order of operations. 
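A short sketch of these two operations with Python's set operators (& for intersection, | for union); the intersection and union of a nonempty finite family correspond to set.intersection and set.union applied to all of its members (the example sets are arbitrary):

```python
A = {1, 2, 3, 4}
B = {3, 4, 5}
C = {4, 5, 6}

print(A & B)      # {3, 4}            intersection
print(A | B)      # {1, 2, 3, 4, 5}   union

# Associativity and commutativity: grouping and order do not matter.
print((A & B) & C == A & (B & C) == C & B & A)   # True
print((A | B) | C == A | (B | C) == C | B | A)   # True

# Intersection and union of a nonempty family of sets.
family = [A, B, C]
print(set.intersection(*family))   # {4}
print(set.union(*family))          # {1, 2, 3, 4, 5, 6}
```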
The empty set is an identity element for the union operation. If S {\displaystyle {\mathcal {S}}} is a set of sets, its union, denoted ⋃ A ∈ S A , {\textstyle \bigcup _{A\in {\mathcal {S}}}A,} is the set whose elements are those elements that belong to at least one set in S {\displaystyle {\mathcal {S}}} . That is, ⋃ A ∈ S A = { x ∣ ( ∃ A ∈ S ) x ∈ A } . {\displaystyle \bigcup _{A\in {\mathcal {S}}}A=\{x\mid (\exists A\in {\mathcal {S}})\;x\in A\}.} These two definitions of the union coincide when S {\displaystyle {\mathcal {S}}} has two elements. === Set difference === The set difference of two sets A {\displaystyle A} and B {\displaystyle B} , is a set, denoted A ∖ B {\displaystyle A\setminus B} or A − B {\displaystyle A-B} , whose elements are those elements that belong to A {\displaystyle A} , but not to B {\displaystyle B} . That is, A ∖ B = { x ∣ x ∈ A ∧ x ∉ B } , {\displaystyle A\setminus B=\{x\mid x\in A\land x\not \in B\},} where ∧ {\displaystyle \land } denotes the logical and. When B ⊆ A {\displaystyle B\subseteq A} the difference A ∖ B {\displaystyle A\setminus B} is also called the complement of B {\displaystyle B} in A {\displaystyle A} . When all sets that are considered are subsets of a fixed universal set U {\displaystyle U} , the complement U ∖ A {\displaystyle U\setminus A} is often called the absolute complement of A {\displaystyle A} . The symmetric difference of two sets A {\displaystyle A} and B {\displaystyle B} , denoted A Δ B {\displaystyle A\,\Delta \,B} , is the set of those elements that belong to A or B but not to both: A Δ B = ( A ∖ B ) ∪ ( B ∖ A ) . {\displaystyle A\,\Delta \,B=(A\setminus B)\cup (B\setminus A).} === Algebra of subsets === The set of all subsets of a set U {\displaystyle U} is called the powerset of U {\displaystyle U} , often denoted P ( U ) {\displaystyle {\mathcal {P}}(U)} . The powerset is an algebraic structure whose main operations are union, intersection, set difference, symmetric difference and absolute complement (complement in U {\displaystyle U} ). The powerset is a Boolean ring that has the symmetric difference as addition, the intersection as multiplication, the empty set as additive identity, U {\displaystyle U} as multiplicative identity, and complement as additive inverse. The powerset is also a Boolean algebra for which the join ∨ {\displaystyle \lor } is the union ∪ {\displaystyle \cup } , the meet ∧ {\displaystyle \land } is the intersection ∩ {\displaystyle \cap } , and the negation is the set complement. As every Boolean algebra, the power set is also a partially ordered set for set inclusion. It is also a complete lattice. The axioms of these structures induce many identities relating subsets, which are detailed in the linked articles. == Functions == A function from a set A—the domain—to a set B—the codomain—is a rule that assigns to each element of A a unique element of B. For example, the square function maps every real number x to x2. Functions can be formally defined in terms of sets by means of their graph, which are subsets of the Cartesian product (see below) of the domain and the codomain. Functions are fundamental for set theory, and examples are given in following sections. === Indexed families === Intuitively, an indexed family is a set whose elements are labelled with the elements of another set, the index set. These labels allow the same element to occur several times in the family. Formally, an indexed family is a function that has the index set as its domain. 
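Since an indexed family is a function on the index set, it can be modelled directly as a dictionary from indices to values; unlike a set, the family may then contain the same value several times. A small sketch (the particular family is chosen for illustration only):

```python
# An indexed family as a function from the index set to values,
# modelled here as a Python dict {index: value}.
I = {1, 2, 3}
a = {1: 'b', 2: 2, 3: 'b'}       # the family (a_i) with a_1 = 'b', a_2 = 2, a_3 = 'b'

print(a.keys() == I)             # True: the domain of the family is the index set
print(set(a.values()))           # {2, 'b'} (printed in some order): as a set, 'b' occurs only once

# Read off in the natural order of the indices, the family is a 3-tuple.
triple = tuple(a[i] for i in sorted(I))
print(triple)                    # ('b', 2, 'b')
```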
Generally, the usual functional notation f ( x ) {\displaystyle f(x)} is not used for indexed families. Instead, the element of the index set is written as a subscript of the name of the family, such as in a i {\displaystyle a_{i}} . When the index set is { 1 , 2 } {\displaystyle \{1,2\}} , an indexed family is called an ordered pair. When the index set is the set of the n {\displaystyle n} first natural numbers, an indexed family is called an n {\displaystyle n} -tuple. When the index set is the set of all natural numbers an indexed family is called a sequence. In all these cases, the natural order of the natural numbers allows omitting indices for explicit indexed families. For example, ( b , 2 , b ) {\displaystyle (b,2,b)} denotes the 3-tuple A {\displaystyle A} such that A 1 = b , A 2 = 2 , A 3 = b {\displaystyle A_{1}=b,A_{2}=2,A_{3}=b} . The above notations ⋃ A ∈ S A {\textstyle \bigcup _{A\in {\mathcal {S}}}A} and ⋂ A ∈ S A {\textstyle \bigcap _{A\in {\mathcal {S}}}A} are commonly replaced with a notation involving indexed families, namely ⋃ i ∈ I A i = { x ∣ ( ∃ i ∈ I ) x ∈ A i } {\displaystyle \bigcup _{i\in {\mathcal {I}}}A_{i}=\{x\mid (\exists i\in {\mathcal {I}})\;x\in A_{i}\}} and ⋂ i ∈ I A i = { x ∣ ( ∀ i ∈ I ) x ∈ A i } . {\displaystyle \bigcap _{i\in {\mathcal {I}}}A_{i}=\{x\mid (\forall i\in {\mathcal {I}})\;x\in A_{i}\}.} The formulas of the above sections are special cases of the formulas for indexed families, where S = I {\displaystyle {\mathcal {S}}={\mathcal {I}}} and i = A = A i {\displaystyle i=A=A_{i}} . The formulas remain correct, even in the case where A i = A j {\displaystyle A_{i}=A_{j}} for some i ≠ j {\displaystyle i\neq j} , since A = A ∪ A = A ∩ A . {\displaystyle A=A\cup A=A\cap A.} == External operations == In § Basic operations, all elements of sets produced by set operations belong to previously defined sets. In this section, other set operations are considered, which produce sets whose elements can be outside all previously considered sets. These operations are Cartesian product, disjoint union, set exponentiation and power set. === Cartesian product === The Cartesian product of two sets has already be used for defining functions. Given two sets A 1 {\displaystyle A_{1}} and A 2 {\displaystyle A_{2}} , their Cartesian product, denoted A 1 × A 2 {\displaystyle A_{1}\times A_{2}} is the set formed by all ordered pairs ( a 1 , a 2 ) {\displaystyle (a_{1},a_{2})} such that a 1 ∈ A 1 {\displaystyle a_{1}\in A_{1}} and a i ∈ A 1 {\displaystyle a_{i}\in A_{1}} ; that is, A 1 × A 2 = { ( a 1 , a 2 ) ∣ a 1 ∈ A 1 ∧ a 2 ∈ A 2 } . {\displaystyle A_{1}\times A_{2}=\{(a_{1},a_{2})\mid a_{1}\in A_{1}\land a_{2}\in A_{2}\}.} This definition does not supposes that the two sets are different. In particular, A × A = { ( a 1 , a 2 ) ∣ a 1 ∈ A ∧ a 2 ∈ A } . {\displaystyle A\times A=\{(a_{1},a_{2})\mid a_{1}\in A\land a_{2}\in A\}.} Since this definition involves a pair of indices (1,2), it generalizes straightforwardly to the Cartesian product or direct product of any indexed family of sets: ∏ i ∈ I A i = { ( a i ) i ∈ I ∣ ( ∀ i ∈ I ) a i ∈ A i } . {\displaystyle \prod _{i\in {\mathcal {I}}}A_{i}=\{(a_{i})_{i\in {\mathcal {I}}}\mid (\forall i\in {\mathcal {I}})\;a_{i}\in A_{i}\}.} That is, the elements of the Cartesian product of a family of sets are all families of elements such that each one belongs to the set of the same index. The fact that, for every indexed family of nonempty sets, the Cartesian product is a nonempty set is insured by the axiom of choice. 
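For finite sets the Cartesian product, together with the indexed unions and intersections written above, can be computed directly; the following sketch uses itertools.product with an arbitrary example family. For a finite family of nonempty sets the product is plainly nonempty, so the axiom of choice is only needed for infinite families.

```python
from itertools import product

A1 = {'a', 'b'}
A2 = {0, 1, 2}

# Cartesian product A1 x A2: all ordered pairs (a1, a2) with a1 in A1 and a2 in A2.
pairs = set(product(A1, A2))
print(len(pairs))                              # 6 = |A1| * |A2|

# An indexed family {A_i} with its union, intersection and Cartesian product.
family = {1: {1, 2, 3}, 2: {2, 3, 4}, 3: {3, 4, 5}}
union = set().union(*family.values())          # union over i in I of A_i
intersection = set.intersection(*family.values())   # intersection over i in I (I nonempty)
print(union, intersection)                     # {1, 2, 3, 4, 5} {3}

# Each element of the product picks one element from every A_i.
print(len(list(product(*family.values()))))    # 27 = 3 * 3 * 3
```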
=== Set exponentiation === Given two sets E {\displaystyle E} and F {\displaystyle F} , the set exponentiation, denoted F E {\displaystyle F^{E}} , is the set that has as elements all functions from E {\displaystyle E} to F {\displaystyle F} . Equivalently, F E {\displaystyle F^{E}} can be viewed as the Cartesian product of a family, indexed by E {\displaystyle E} , of sets that are all equal to F {\displaystyle F} . This explains the terminology and the notation, since exponentiation with integer exponents is a product where all factors are equal to the base. === Power set === The power set of a set E {\displaystyle E} is the set that has all subsets of E {\displaystyle E} as elements, including the empty set and E {\displaystyle E} itself. It is often denoted P ( E ) {\displaystyle {\mathcal {P}}(E)} . For example, P ( { 1 , 2 , 3 } ) = { ∅ , { 1 } , { 2 } , { 3 } , { 1 , 2 } , { 1 , 3 } , { 2 , 3 } , { 1 , 2 , 3 } } . {\displaystyle {\mathcal {P}}(\{1,2,3\})=\{\emptyset ,\{1\},\{2\},\{3\},\{1,2\},\{1,3\},\{2,3\},\{1,2,3\}\}.} There is a natural one-to-one correspondence (bijection) between the subsets of E {\displaystyle E} and the functions from E {\displaystyle E} to { 0 , 1 } {\displaystyle \{0,1\}} ; this correspondence associates to each subset the function that takes the value 1 {\displaystyle 1} on the subset and 0 {\displaystyle 0} elsewhere. Because of this correspondence, the power set of E {\displaystyle E} is commonly identified with a set exponentiation: P ( E ) = { 0 , 1 } E . {\displaystyle {\mathcal {P}}(E)=\{0,1\}^{E}.} In this notation, { 0 , 1 } {\displaystyle \{0,1\}} is often abbreviated as 2 {\displaystyle 2} , which gives P ( E ) = 2 E . {\displaystyle {\mathcal {P}}(E)=2^{E}.} In particular, if E {\displaystyle E} has n {\displaystyle n} elements, then 2 E {\displaystyle 2^{E}} has 2 n {\displaystyle 2^{n}} elements. === Disjoint union === The disjoint union of two or more sets is similar to the union, but, if two sets have elements in common, these elements are considered as distinct in the disjoint union. This is obtained by labelling the elements by the indexes of the set they are coming from. The disjoint union of two sets A {\displaystyle A} and B {\displaystyle B} is commonly denoted A ⊔ B {\displaystyle A\sqcup B} and is thus defined as A ⊔ B = { ( a , i ) ∣ ( i = 1 ∧ a ∈ A ) ∨ ( i = 2 ∧ a ∈ B } . {\displaystyle A\sqcup B=\{(a,i)\mid (i=1\land a\in A)\lor (i=2\land a\in B\}.} If A = B {\displaystyle A=B} is a set with n {\displaystyle n} elements, then A ∪ A = A {\displaystyle A\cup A=A} has n {\displaystyle n} elements, while A ⊔ A {\displaystyle A\sqcup A} has 2 n {\displaystyle 2n} elements. The disjoint union of two sets is a particular case of the disjoint union of an indexed family of sets, which is defined as ⨆ i ∈ I = { ( a , i ) ∣ i ∈ I ∧ a ∈ A i } . {\displaystyle \bigsqcup _{i\in {\mathcal {I}}}=\{(a,i)\mid i\in {\mathcal {I}}\land a\in A_{i}\}.} The disjoint union is the coproduct in the category of sets. Therefore the notation ∐ i ∈ I = { ( a , i ) ∣ i ∈ I ∧ a ∈ A i } {\displaystyle \coprod _{i\in {\mathcal {I}}}=\{(a,i)\mid i\in {\mathcal {I}}\land a\in A_{i}\}} is commonly used. 
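These external operations are easy to make concrete for small finite sets: functions E to F as dictionaries, the power set via itertools.combinations, and the disjoint union by tagging each element with the index of the set it comes from. A sketch with arbitrarily chosen small sets:

```python
from itertools import combinations, product

E = {'x', 'y'}
F = {0, 1, 2}

# Set exponentiation F^E: all functions from E to F, as dicts; there are |F| ** |E| of them.
functions = [dict(zip(sorted(E), values)) for values in product(F, repeat=len(E))]
print(len(functions))                    # 9 = 3 ** 2

# Power set P(E): all subsets of E, including the empty set and E itself; |P(E)| = 2 ** |E|.
elems = sorted(E)
powerset = [set(c) for r in range(len(elems) + 1) for c in combinations(elems, r)]
print(powerset)                          # [set(), {'x'}, {'y'}, {'x', 'y'}]
print(len(powerset) == 2 ** len(E))      # True; each subset corresponds to a function E -> {0, 1}

# Disjoint union: tag each element with the index of the set it comes from.
A = {1, 2}
B = {2, 3}
disjoint_union = {(a, 1) for a in A} | {(b, 2) for b in B}
print(len(A | B), len(disjoint_union))   # 3 4 -- the common element 2 is counted twice in A disjoint-union B
```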
==== Internal disjoint union ==== Given an indexed family of sets ( A i ) i ∈ I {\displaystyle (A_{i})_{i\in {\mathcal {I}}}} , there is a natural map ⨆ i ∈ I A i → ⋃ i ∈ I A i ( a , i ) ↦ a , {\displaystyle {\begin{aligned}\bigsqcup _{i\in {\mathcal {I}}}A_{i}&\to \bigcup _{i\in {\mathcal {I}}}A_{i}\\(a,i)&\mapsto a,\end{aligned}}} which consists in "forgetting" the indices. This maps is always surjective; it is bijective if and only if the A i {\displaystyle A_{i}} are pairwise disjoint, that is, all intersections of two sets of the family are empty. In this case, ⋃ i ∈ I A i {\textstyle \bigcup _{i\in {\mathcal {I}}}A_{i}} and ⨆ i ∈ I A i {\textstyle \bigsqcup _{i\in {\mathcal {I}}}A_{i}} are commonly identified, and one says that their union is the disjoint union of the members of the family. If a set is the disjoint union of a family of subsets, one says also that the family is a partition of the set. == Cardinality == Informally, the cardinality of a set S, often denoted |S|, is the number of its members. This number is the natural number n {\displaystyle n} when there is a bijection between the set that is considered and the set { 1 , 2 , … , n } {\displaystyle \{1,2,\ldots ,n\}} of the n {\displaystyle n} first natural numbers. The cardinality of the empty set is 0 {\displaystyle 0} . A set with the cardinality of a natural number is called a finite set which is true for both cases. Otherwise, one has an infinite set. The fact that natural numbers measure the cardinality of finite sets is the basis of the concept of natural number, and predates for several thousands years the concept of sets. A large part of combinatorics is devoted to the computation or estimation of the cardinality of finite sets. === Infinite cardinalities === The cardinality of an infinite set is commonly represented by a cardinal number, exactly as the number of elements of a finite set is represented by a natural numbers. The definition of cardinal numbers is too technical for this article; however, many properties of cardinalities can be dealt without referring to cardinal numbers, as follows. Two sets S {\displaystyle S} and T {\displaystyle T} have the same cardinality if there exists a one-to-one correspondence (bijection) between them. This is denoted | S | = | T | , {\displaystyle |S|=|T|,} and would be an equivalence relation on sets, if a set of all sets would exist. For example, the natural numbers and the even natural numbers have the same cardinality, since multiplication by two provides such a bijection. Similarly, the interval ( − 1 , 1 ) {\displaystyle (-1,1)} and the set of all real numbers have the same cardinality, a bijection being provided by the function x ↦ tan ( π x / 2 ) {\displaystyle x\mapsto \tan(\pi x/2)} . Having the same cardinality of a proper subset is a characteristic property of infinite sets: a set is infinite if and only if it has the same cardinality as one of its proper subsets. So, by the above example, the natural numbers form an infinite set. Besides equality, there is a natural inequality between cardinalities: a set S {\displaystyle S} has a cardinality smaller than or equal to the cardinality of another set T {\displaystyle T} if there is an injection frome S {\displaystyle S} to T {\displaystyle T} . This is denoted | S | ≤ | T | . {\displaystyle |S|\leq |T|.} Schröder–Bernstein theorem implies that | S | ≤ | T | {\displaystyle |S|\leq |T|} and | T | ≤ | S | {\displaystyle |T|\leq |S|} imply | S | = | T | . 
{\displaystyle |S|=|T|.} Also, one has | S | ≤ | T | , {\displaystyle |S|\leq |T|,} if and only if there is a surjection from T {\displaystyle T} to S {\displaystyle S} . For every two sets S {\displaystyle S} and T {\displaystyle T} , one has either | S | ≤ | T | {\displaystyle |S|\leq |T|} or | T | ≤ | S | . {\displaystyle |T|\leq |S|.} So, inequality of cardinalities is a total order. The cardinality of the set N {\displaystyle \mathbb {N} } of the natural numbers, denoted | N | = ℵ 0 , {\displaystyle |\mathbb {N} |=\aleph _{0},} is the smallest infinite cardinality. This means that if S {\displaystyle S} is a set of natural numbers, then either S {\displaystyle S} is finite or | S | = | N | . {\displaystyle |S|=|\mathbb {N} |.} Sets with cardinality less than or equal to | N | = ℵ 0 {\displaystyle |\mathbb {N} |=\aleph _{0}} are called countable sets; these are either finite sets or countably infinite sets (sets of cardinality ℵ 0 {\displaystyle \aleph _{0}} ); some authors use "countable" to mean "countably infinite". Sets with cardinality strictly greater than ℵ 0 {\displaystyle \aleph _{0}} are called uncountable sets. Cantor's diagonal argument shows that, for every set S {\displaystyle S} , its power set (the set of its subsets) 2 S {\displaystyle 2^{S}} has a greater cardinality: | S | < | 2 S | . {\displaystyle |S|<\left|2^{S}\right|.} This implies that there is no greatest cardinality. === Cardinality of the real numbers === The cardinality of set of the real numbers is called the cardinality of the continuum and denoted c {\displaystyle {\mathfrak {c}}} . (The term "continuum" referred to the real line before the 20th century, when the real line was not commonly viewed as a set of numbers.) Since, as seen above, the real line R {\displaystyle \mathbb {R} } has the same cardinality of an open interval, every subset of R {\displaystyle \mathbb {R} } that contains a nonempty open interval has also the cardinality c {\displaystyle {\mathfrak {c}}} . One has c = 2 ℵ 0 , {\displaystyle {\mathfrak {c}}=2^{\aleph _{0}},} meaning that the cardinality of the real numbers equals the cardinality of the power set of the natural numbers. In particular, c > ℵ 0 . {\displaystyle {\mathfrak {c}}>\aleph _{0}.} When published in 1878 by Georg Cantor, this result was so astonishing that it was refused by mathematicians, and several tens years were needed before its common acceptance. It can be shown that c {\displaystyle {\mathfrak {c}}} is also the cardinality of the entire plane, and of any finite-dimensional Euclidean space. The continuum hypothesis, was a conjecture formulated by Georg Cantor in 1878 that there is no set with cardinality strictly between ℵ 0 {\displaystyle \aleph _{0}} and c {\displaystyle {\mathfrak {c}}} . In 1963, Paul Cohen proved that the continuum hypothesis is independent of the axioms of Zermelo–Fraenkel set theory with the axiom of choice. This means that if the most widely used set theory is consistent (that is not self-contradictory), then the same is true for both the set theory with the continuum hypothesis added as a further axiom, and the set theory with the negation of the continuum hypothesis added. == Axiom of choice == Informally, the axiom of choice says that, given any family of nonempty sets, one can choose simultaneously an element in each of them. Formulated this way, acceptability of this axiom sets a foundational logical question, because of the difficulty of conceiving an infinite instantaneous action. 
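For a finite family the choice involved is unproblematic and can be written down explicitly: a choice function picks one element from each nonempty member of the family, and the tuple of chosen elements is one point of the Cartesian product. The axiom is only needed to postulate such a function for arbitrary, possibly infinite, families where no explicit rule may be available. A finite sketch (the family is arbitrary):

```python
# A choice function on a finite indexed family of nonempty sets:
# it assigns to each index i some element of A_i.
family = {1: {3, 5}, 2: {'a'}, 3: {0, 2, 4}}

choice = {i: next(iter(A_i)) for i, A_i in family.items()}
print(all(choice[i] in family[i] for i in family))   # True

# Equivalently, (choice[1], choice[2], choice[3]) is one element of the
# Cartesian product of the family, which is therefore nonempty.
```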
However, there are several equivalent formulations that are much less controversial and have strong consequences in many areas of mathematics. In the present days, the axiom of choice is thus commonly accepted in mainstream mathematics. A more formal statement of the axiom of choice is: the Cartesian product of every indexed family of nonempty sets is non empty. Other equivalent forms are described in the following subsections. === Zorn's lemma === Zorn's lemma is an assertion that is equivalent to the axiom of choice under the other axioms of set theory, and is easier to use in usual mathematics. Let S {\displaystyle S} be a partial ordered set. A chain in S {\displaystyle S} is a subset that is totally ordered under the induced order. Zorn's lemma states that, if every chain in S {\displaystyle S} has an upper bound in S {\displaystyle S} , then S {\displaystyle S} has (at least) a maximal element, that is, an element that is not smaller than another element of S {\displaystyle S} . In most uses of Zorn's lemma, S {\displaystyle S} is a set of sets, the order is set inclusion, and the upperbound of a chain is taken as the union of its members. An example of use of Zorn's lemma, is the proof that every vector space has a basis. Here the elements of S {\displaystyle S} are linearly independent subsets of the vector space. The union of a chain of elements of S {\displaystyle S} is linearly independent, since an infinite set is linearly independent if and only if each finite subset is, and every finite subset of the union of a chain must be included in a member of the chain. So, there exist a maximal linearly independent set. This linearly independant set must span the vector space because of maximality, and is therefore a basis. Another classical use of Zorn's lemma is the proof that every proper ideal—that is, an ideal that is not the whole ring—of a ring is contained in a maximal ideal. Here, S {\displaystyle S} is the set of the proper ideals containing the given ideal. The union of chain of ideals is an ideal, since the axioms of an ideal involve a finite number of elements. The union of a chain of proper ideals is a proper ideal, since otherwise 1 {\displaystyle 1} would belong to the union, and this implies that it would belong to a member of the chain. === Transfinite induction === The axiom of choice is equivalent with the fact that a well-order can be defined on every set, where a well-order is a total order such that every nonempty subset has a least element. Simple examples of well-ordered sets are the natural numbers (with the natural order), and, for every n, the set of the n-tuples of natural numbers, with the lexicographic order. Well-orders allow a generalization of mathematical induction, which is called transfinite induction. Given a property (predicate) P ( n ) {\displaystyle P(n)} depending on a natural number, mathematical induction is the fact that for proving that P ( n ) {\displaystyle P(n)} is always true, it suffice to prove that for every n {\displaystyle n} , ( m < n ⟹ P ( m ) ) ⟹ P ( n ) . {\displaystyle (m<n\implies P(m))\implies P(n).} Transfinite induction is the same, replacing natural numbers by the elements of a well-ordered set. 
Often, a proof by transfinite induction easier if three cases are proved separately, the two first cases being the same as for usual induction: P ( 0 ) {\displaystyle P(0)} is true, where 0 {\displaystyle 0} denotes the least element of the well-ordered set P ( x ) ⟹ P ( S ( x ) ) , {\displaystyle P(x)\implies P(S(x)),\quad } where S ( x ) {\displaystyle S(x)} denotes the successor of x {\displaystyle x} , that is the least element that is greater than x {\displaystyle x} ( ∀ y ; y < x ⟹ P ( y ) ) ⟹ P ( x ) , {\displaystyle (\forall y;\;y<x\implies P(y))\implies P(x),\quad } when x {\displaystyle x} is not a successor. Transfinite induction is fundamental for defining ordinal numbers and cardinal numbers. == See also == == Notes == == Citations == == References == Dauben, Joseph W. (1979). Georg Cantor: His Mathematics and Philosophy of the Infinite. Boston: Harvard University Press. ISBN 0-691-02447-2. Halmos, Paul R. (1960). Naive Set Theory. Princeton, N.J.: Van Nostrand. ISBN 0-387-90092-6. {{cite book}}: ISBN / Date incompatibility (help) Stoll, Robert R. (1979). Set Theory and Logic. Mineola, N.Y.: Dover Publications. ISBN 0-486-63829-4. Velleman, Daniel (2006). How To Prove It: A Structured Approach. Cambridge University Press. ISBN 0-521-67599-5. Gödel, Kurt (9 November 1938). "The Consistency of the Axiom of Choice and of the Generalized Continuum-Hypothesis". Proceedings of the National Academy of Sciences of the United States of America. 24 (12): 556–557. Bibcode:1938PNAS...24..556G. doi:10.1073/pnas.24.12.556. PMC 1077160. PMID 16577857. Cohen, Paul (1963b). "The Independence of the Axiom of Choice" (PDF). Stanford University Libraries. Archived (PDF) from the original on 2022-10-09. Retrieved 2019-03-22. == External links == The dictionary definition of set at Wiktionary Cantor's "Beiträge zur Begründung der transfiniten Mengenlehre" (in German)
|
https://en.wikipedia.org/wiki/Set_(mathematics)
|
Everyday Mathematics is a pre-K and elementary school mathematics curriculum, developed by the University of Chicago School Mathematics Project (not to be confused with the University of Chicago School of Mathematics). The program, now published by McGraw-Hill Education, has sparked debate. == Company History == Everyday Mathematics curriculum was developed by the University of Chicago School Math Project (or UCSMP ) which was founded in 1983. Work on it started in the summer of 1985. The 1st edition was released in 1998 and the 2nd in 2002. A third edition was released in 2007 and a fourth in 2014-2015. A new one was released in 2020, dropping Pre-K. For Pre-K, schools use a 2012 Pre-K version. == Curriculum structure == Below is an outline of the components of EM as they are generally seen throughout the curriculum. Lessons A typical lesson outlined in one of the teacher’s manuals includes three parts Teaching the Lesson—Provides main instructional activities for the lesson. Ongoing Learning and Practice—Supports previously introduced concepts and skills; essential for maintaining skills. Differentiation Options—Includes options for supporting the needs of all students; usually an extension of Part 1, Teaching the Lesson. Daily Routines Every day, there are certain things that each EM lesson requires the student to do routinely. These components can be dispersed throughout the day or they can be part of the main math lesson. Math Messages—These are problems, displayed in a manner chosen by the teacher, that students complete before the lesson and then discuss as an opener to the main lesson. Mental Math and Reflexes—These are brief (no longer than 5 min) sessions “…designed to strengthen children's number sense and to review and advance essential basic skills…” (Program Components 2003). Math Boxes—These are pages intended to have students routinely practice problems independently. Home Links—Everyday homework is sent home. They are called Home Links. They are meant to reinforce instruction as well as connect home to the work at school. Supplemental Aspects Beyond the components already listed, there are supplemental resources to the program. The two most common are games and explorations. Games— “…Everyday Mathematics sees games as enjoyable ways to practice number skills, especially those that help children develop fact power…” (Program Components 2003). Therefore, authors of the series have interwoven games throughout daily lessons and activities. == Scientific support for the curriculum == What Works Clearinghouse ( or WWC ) reviewed the evidence in support of the Everyday Mathematics program. Of the 61 pieces of evidence submitted by the publisher, 57 did not meet the WWC minimum standards for scientific evidence, four met evidence standards with reservations, and one of those four showed a statistically significant positive effect. Based on the four studies considered, the WWC gave Everyday Math a rating of "Potentially Positive Effect" with the four studies showing a mean improvement in elementary math achievement (versus unspecified alternative programs) of 6 percentile rank points with a range of -7 to +14 percentile rank points, on a scale from -50 to +50. == Criticism == After the first edition was released, it became part of a nationwide controversy over reform mathematics. In October 1999, US Department of Education issued a report labeling Everyday Mathematics one of five "promising" new math programs. 
The debate has continued at the state and local level as school districts across the country consider the adoption of Everyday Math. Two states where the controversy has attracted national attention are California and Texas. California has one of the most rigorous textbook adoption processes and in January 2001 rejected Everyday Mathematics for failing to meet state content standards. Everyday Math stayed off the California textbook lists until 2007 when the publisher released a California version of the 3rd edition that is supplemented with more traditional arithmetic, reigniting debate at the local level. In late 2007, the Texas State Board of Education took the unusual step of rejecting the 3rd edition of Everyday Math after earlier editions had been in use in more than 70 districts across the state. The fact that they singled out Everyday Math while approving all 162 other books and educational materials raised questions about the board's legal powers. The state of Texas dropped Everyday Mathematics, saying it was leaving public school graduates unprepared for college. == Notes == Additional references == External links == Everyday Mathematics microsite (University of Chicago) Everyday Mathematics microsite (McGraw Hill Education)
|
https://en.wikipedia.org/wiki/Everyday_Mathematics
|
In mathematics, especially representation theory, a quiver is another name for a multidigraph; that is, a directed graph where loops and multiple arrows between two vertices are allowed. Quivers are commonly used in representation theory: a representation V of a quiver assigns a vector space V(x) to each vertex x of the quiver and a linear map V(a) to each arrow a. In category theory, a quiver can be understood to be the underlying structure of a category, but without composition or a designation of identity morphisms. That is, there is a forgetful functor from Cat (the category of categories) to Quiv (the category of multidigraphs). Its left adjoint is a free functor which, from a quiver, makes the corresponding free category. == Definition == A quiver Γ consists of: The set V of vertices of Γ The set E of edges of Γ Two functions: s : E → V {\displaystyle s:E\to V} giving the start or source of the edge, and another function, t : E → V {\displaystyle t:E\to V} giving the target of the edge. This definition is identical to that of a multidigraph. A morphism of quivers is a mapping from vertices to vertices which takes directed edges to directed edges. Formally, if Γ = ( V , E , s , t ) {\displaystyle \Gamma =(V,E,s,t)} and Γ ′ = ( V ′ , E ′ , s ′ , t ′ ) {\displaystyle \Gamma '=(V',E',s',t')} are two quivers, then a morphism m = ( m v , m e ) {\displaystyle m=(m_{v},m_{e})} of quivers consists of two functions m v : V → V ′ {\displaystyle m_{v}:V\to V'} and m e : E → E ′ {\displaystyle m_{e}:E\to E'} such that the following diagrams commute: That is, m v ∘ s = s ′ ∘ m e {\displaystyle m_{v}\circ s=s'\circ m_{e}} and m v ∘ t = t ′ ∘ m e {\displaystyle m_{v}\circ t=t'\circ m_{e}} == Category-theoretic definition == The above definition is based in set theory; the category-theoretic definition generalizes this into a functor from the free quiver to the category of sets. The free quiver (also called the walking quiver, Kronecker quiver, 2-Kronecker quiver or Kronecker category) Q is a category with two objects, and four morphisms: The objects are V and E. The four morphisms are s : E → V , {\displaystyle s:E\to V,} t : E → V , {\displaystyle t:E\to V,} and the identity morphisms i d V : V → V {\displaystyle \mathrm {id} _{V}:V\to V} and i d E : E → E . {\displaystyle \mathrm {id} _{E}:E\to E.} That is, the free quiver is the category E s ⇉ t V {\displaystyle E\;{\begin{matrix}s\\[-6pt]\rightrightarrows \\[-4pt]t\end{matrix}}\;V} A quiver is then a functor Γ : Q → S e t {\displaystyle \Gamma :Q\to \mathbf {Set} } . (That is to say, Γ {\displaystyle \Gamma } specifies two sets Γ ( V ) {\displaystyle \Gamma (V)} and Γ ( E ) {\displaystyle \Gamma (E)} , and two functions Γ ( s ) , Γ ( t ) : Γ ( E ) ⟶ Γ ( V ) {\displaystyle \Gamma (s),\Gamma (t)\colon \Gamma (E)\longrightarrow \Gamma (V)} ; this is the full extent of what it means to be a functor from Q {\displaystyle Q} to S e t {\displaystyle \mathbf {Set} } .) More generally, a quiver in a category C is a functor Γ : Q → C . {\displaystyle \Gamma :Q\to C.} The category Quiv(C) of quivers in C is the functor category where: objects are functors Γ : Q → C , {\displaystyle \Gamma :Q\to C,} morphisms are natural transformations between functors. Note that Quiv is the category of presheaves on the opposite category Qop. 
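Concretely, the set-theoretic definition of a quiver, two sets with two functions s and t, is a very small data structure. The following Python sketch (the class and field names are illustrative, not standard library code) encodes a quiver and checks the commuting-square condition for a morphism of quivers:

```python
from dataclasses import dataclass

@dataclass
class Quiver:
    V: set    # vertices
    E: set    # edges (arrows)
    s: dict   # source map  s : E -> V
    t: dict   # target map  t : E -> V

# The 2-Kronecker quiver: two vertices and two parallel arrows from vertex 1 to vertex 2.
K2 = Quiver(V={1, 2}, E={'a', 'b'}, s={'a': 1, 'b': 1}, t={'a': 2, 'b': 2})

def is_morphism(q1, q2, mv, me):
    """mv maps vertices, me maps edges; check mv.s = s'.me and mv.t = t'.me."""
    return all(mv[q1.s[e]] == q2.s[me[e]] and mv[q1.t[e]] == q2.t[me[e]]
               for e in q1.E)

# The identity maps always give a morphism from a quiver to itself.
print(is_morphism(K2, K2, mv={1: 1, 2: 2}, me={'a': 'a', 'b': 'b'}))   # True
# Swapping the two parallel arrows is also a quiver morphism.
print(is_morphism(K2, K2, mv={1: 1, 2: 2}, me={'a': 'b', 'b': 'a'}))   # True
```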
== Path algebra == If Γ is a quiver, then a path in Γ is a sequence of arrows a n a n − 1 … a 3 a 2 a 1 {\displaystyle a_{n}a_{n-1}\dots a_{3}a_{2}a_{1}} such that the head of ai+1 is the tail of ai for i = 1, …, n−1, using the convention of concatenating paths from right to left. Note that a path in graph theory has a stricter definition, and that this concept instead coincides with what in graph theory is called a walk. If K is a field then the quiver algebra or path algebra K Γ is defined as a vector space having all the paths (of length ≥ 0) in the quiver as basis (including, for each vertex i of the quiver Γ, a trivial path ei of length 0; these paths are not assumed to be equal for different i), and multiplication given by concatenation of paths. If two paths cannot be concatenated because the end vertex of the first is not equal to the starting vertex of the second, their product is defined to be zero. This defines an associative algebra over K. This algebra has a unit element if and only if the quiver has only finitely many vertices. In this case, the modules over K Γ are naturally identified with the representations of Γ. If the quiver has infinitely many vertices, then K Γ has an approximate identity given by e F := ∑ v ∈ F 1 v {\textstyle e_{F}:=\sum _{v\in F}1_{v}} where F ranges over finite subsets of the vertex set of Γ. If the quiver has finitely many vertices and arrows, and the end vertex and starting vertex of any path are always distinct (i.e. Q has no oriented cycles), then K Γ is a finite-dimensional hereditary algebra over K. Conversely, if K is algebraically closed, then any finite-dimensional, hereditary, associative algebra over K is Morita equivalent to the path algebra of its Ext quiver (i.e., they have equivalent module categories). == Representations of quivers == A representation of a quiver Q is an association of an R-module to each vertex of Q, and a morphism between each module for each arrow. A representation V of a quiver Q is said to be trivial if V ( x ) = 0 {\displaystyle V(x)=0} for all vertices x in Q. A morphism, f : V → V ′ , {\displaystyle f:V\to V',} between representations of the quiver Q, is a collection of linear maps f ( x ) : V ( x ) → V ′ ( x ) {\displaystyle f(x):V(x)\to V'(x)} such that for every arrow a in Q from x to y, V ′ ( a ) f ( x ) = f ( y ) V ( a ) , {\displaystyle V'(a)f(x)=f(y)V(a),} i.e. the squares that f forms with the arrows of V and V' all commute. A morphism, f, is an isomorphism, if f (x) is invertible for all vertices x in the quiver. With these definitions the representations of a quiver form a category. If V and W are representations of a quiver Q, then the direct sum of these representations, V ⊕ W , {\displaystyle V\oplus W,} is defined by ( V ⊕ W ) ( x ) = V ( x ) ⊕ W ( x ) {\displaystyle (V\oplus W)(x)=V(x)\oplus W(x)} for all vertices x in Q and ( V ⊕ W ) ( a ) {\displaystyle (V\oplus W)(a)} is the direct sum of the linear mappings V(a) and W(a). A representation is said to be decomposable if it is isomorphic to the direct sum of non-zero representations. A categorical definition of a quiver representation can also be given. The quiver itself can be considered a category, where the vertices are objects and paths are morphisms. Then a representation of Q is just a covariant functor from this category to the category of finite dimensional vector spaces. Morphisms of representations of Q are precisely natural transformations between the corresponding functors. 
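Over a field such as the real numbers, a finite-dimensional representation amounts to a choice of a matrix for each arrow, and the morphism condition is a commuting square of matrices that can be checked numerically. A sketch with NumPy, where the quiver, dimensions and matrices are chosen arbitrarily for illustration:

```python
import numpy as np

# Quiver with two vertices x -> y and a single arrow a.
# Representation V: V(x) = R^2, V(y) = R^3, and V(a) a 3x2 matrix.
Va = np.array([[1., 0.], [0., 1.], [2., 3.]])    # V(a) : V(x) -> V(y)

# A second representation W with the same dimension vector.
Wa = np.array([[1., 0.], [0., 1.], [0., 0.]])    # W(a) : W(x) -> W(y)

# A candidate morphism f : V -> W is a linear map at each vertex ...
fx = np.eye(2)                                               # f(x) : V(x) -> W(x)
fy = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 0.]])    # f(y) : V(y) -> W(y)

# ... and it is a morphism iff the square commutes: W(a) f(x) = f(y) V(a).
print(np.allclose(Wa @ fx, fy @ Va))             # True for this particular choice
```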
For a finite quiver Γ (a quiver with finitely many vertices and edges), let K Γ be its path algebra. Let ei denote the trivial path at vertex i. Then we can associate to the vertex i the projective K Γ-module K Γei consisting of linear combinations of paths which have starting vertex i. This corresponds to the representation of Γ obtained by putting a copy of K at each vertex which lies on a path starting at i and 0 on each other vertex. To each edge joining two copies of K we associate the identity map. This theory was related to cluster algebras by Derksen, Weyman, and Zelevinsky. == Quiver with relations == To enforce commutativity of some squares inside a quiver a generalization is the notion of quivers with relations (also named bound quivers). A relation on a quiver Q is a K linear combination of paths from Q. A quiver with relation is a pair (Q, I) with Q a quiver and I ⊆ K Γ {\displaystyle I\subseteq K\Gamma } an ideal of the path algebra. The quotient K Γ / I is the path algebra of (Q, I). === Quiver Variety === Given the dimensions of the vector spaces assigned to every vertex, one can form a variety which characterizes all representations of that quiver with those specified dimensions, and consider stability conditions. These give quiver varieties, as constructed by King (1994). == Gabriel's theorem == A quiver is of finite type if it has only finitely many isomorphism classes of indecomposable representations. Gabriel (1972) classified all quivers of finite type, and also their indecomposable representations. More precisely, Gabriel's theorem states that: A (connected) quiver is of finite type if and only if its underlying graph (when the directions of the arrows are ignored) is one of the ADE Dynkin diagrams: An, Dn, E6, E7, E8. The indecomposable representations are in a one-to-one correspondence with the positive roots of the root system of the Dynkin diagram. Dlab & Ringel (1973) found a generalization of Gabriel's theorem in which all Dynkin diagrams of finite dimensional semisimple Lie algebras occur. This was generalized to all quivers and their corresponding Kac–Moody algebras by Victor Kac. == See also == ADE classification Adhesive category Assembly theory Graph algebra Group ring Incidence algebra Quiver diagram Semi-invariant of a quiver Toric variety Derived noncommutative algebraic geometry - Quivers help encode the data of derived noncommutative schemes == References == === Books === Kirillov, Alexander (2016), Quiver Representations and Quiver Varieties, American Mathematical Society, ISBN 978-1-4704-2307-0 === Lecture Notes === Crawley-Boevey, William, Lectures on Representations of Quivers (PDF), archived from the original on 2017-08-20{{citation}}: CS1 maint: bot: original URL status unknown (link) Quiver representations in toric geometry === Research === Projective toric varieties as fine moduli spaces of quiver representations == Sources == Derksen, Harm; Weyman, Jerzy (February 2005), "Quiver Representations" (PDF), Notices of the American Mathematical Society, 52 (2) Dlab, Vlastimil; Ringel, Claus Michael (1973), On algebras of finite representation type, Carleton Mathematical Lecture Notes, vol. 2, Department of Mathematics, Carleton Univ., Ottawa, Ont., MR 0347907 Crawley-Boevey, William (1992), Notes on Quiver Representations (PDF), Oxford University, archived from the original (PDF) on 2011-07-24, retrieved 2007-02-17 Gabriel, Peter (1972), "Unzerlegbare Darstellungen. 
I", Manuscripta Mathematica, 6 (1): 71–103, doi:10.1007/BF01298413, ISSN 0025-2611, MR 0332887. Victor Kac, "Root systems, representations of quivers and invariant theory". Invariant theory (Montecatini, 1982), pp. 74–108, Lecture Notes in Math. 996, Springer-Verlag, Berlin 1983. ISBN 3-540-12319-9 King, Alastair (1994), "Moduli of representations of finite-dimensional algebras", Quart. J. Math., 45 (180): 515–530, doi:10.1093/qmath/45.4.515 Savage, Alistair (2006) [2005], "Finite-dimensional algebras and quivers", in Francoise, J.-P.; Naber, G. L.; Tsou, S.T. (eds.), Encyclopedia of Mathematical Physics, vol. 2, Elsevier, pp. 313–320, arXiv:math/0505082, Bibcode:2005math......5082S Simson, Daniel; Skowronski, Andrzej; Assem, Ibrahim (2007), Elements of the Representation Theory of Associative Algebras, Cambridge University Press, ISBN 978-0-521-88218-7 Bernšteĭn, I. N.; Gelʹfand, I. M.; Ponomarev, V. A., "Coxeter functors, and Gabriel's theorem" (Russian), Uspekhi Mat. Nauk 28 (1973), no. 2(170), 19–33. Translation on Bernstein's website. Quiver at the nLab
|
https://en.wikipedia.org/wiki/Quiver_(mathematics)
|
The number 𝜏 ( ; spelled out as tau) is a mathematical constant that is the ratio of a circle's circumference to its radius. It is approximately equal to 6.28 and exactly equal to 2π. 𝜏 and π are both circle constants relating the circumference of a circle to its linear dimension: the radius in the case of 𝜏; the diameter in the case of π. While π is used almost exclusively in mainstream mathematical education and practice, it has been proposed, most notably by Michael Hartl in 2010, that 𝜏 should be used instead. Hartl and other proponents argue that 𝜏 is the more natural circle constant and its use leads to conceptually simpler and more intuitive mathematical notation. Critics have responded that the benefits of using 𝜏 over π are trivial and that given the ubiquity and historical significance of π a change is unlikely to occur. The proposal did not initially gain widespread acceptance in the mathematical community, but awareness of 𝜏 has become more widespread, having been added to several major programming languages and calculators. == Fundamentals == === Definition === 𝜏 is commonly defined as the ratio of a circle's circumference C {\textstyle {C}} to its radius r {\textstyle {r}} : τ = C r {\displaystyle \tau ={\frac {C}{r}}} A circle is defined as a closed curve formed by the set of all points in a plane that are a given distance from a fixed point, where the given distance is called the radius. The distance around the circle is the circumference, and the ratio C r {\textstyle {\frac {C}{r}}} is constant regardless of the circle's size. Thus, 𝜏 denotes the fixed relationship between the circumference of any circle and the fundamental defining property of that circle, the radius. === Units of angle === When radians are used as the unit of angular measure there are 𝜏 radians in one full turn of a circle, and the radian angle is aligned with the proportion of a full turn around the circle: τ 8 {\textstyle {\frac {\tau }{8}}} rad is an eighth of a turn; 3 τ 4 {\textstyle {\frac {3\tau }{4}}} rad is three-quarters of a turn. === Relationship to π === As 𝜏 is exactly equal to 2π it shares many of the properties of π including being both an irrational and transcendental number. == History == The proposal to use the Greek letter 𝜏 as a circle constant representing 2π dates to Michael Hartl's 2010 publication, The Tau Manifesto, although the symbol had been independently suggested earlier by Joseph Lindenburg (c.1990), John Fisher (2004) and Peter Harremoës (2010). Hartl offered two reasons for the choice of notation. First, τ is the number of radians in one turn, and both τ and turn begin with a sound. Second, τ visually resembles π, whose association with the circle constant is unavoidable. === Earlier proposals === There had been a number of earlier proposals for a new circle constant equal to 2π, together with varying suggestions for its name and symbol. In 2001, Robert Palais of the University of Utah proposed that π was "wrong" as the fundamental circle constant arguing instead that 2π was the proper value. His proposal used a "π with three legs" symbol to denote the constant ( π π = 2 π {\displaystyle \pi \!\;\!\!\!\pi =2\pi } ), and referred to angles as fractions of a "turn" ( 1 4 π π = 1 4 t u r n {\displaystyle {\tfrac {1}{4}}\pi \!\;\!\!\!\pi ={\tfrac {1}{4}}\,\mathrm {turn} } ). Palais stated that the word "turn" served as both the name of the new constant and a reference to the ordinary language meaning of turn. In 2008, Robert P. 
Crease proposed defining a constant as the ratio of circumference to radius, an idea supported by John Horton Conway. Crease used the Greek letter psi: ψ = 2 π {\displaystyle \psi =2\pi } . The same year, Thomas Colignatus proposed the uppercase Greek letter theta, Θ, to represent 2π due to its visual resemblance of a circle. For a similar reason another proposal suggested the Phoenician and Hebrew letter teth, 𐤈 or ט, (from which the letter theta was derived), due to its connection with wheels and circles in ancient cultures. === Use of the symbol π to represent 6.28 === The meaning of the symbol π {\displaystyle \pi } was not originally defined as the ratio of circumference to diameter, and at times was used in representations of the 6.28...constant. Early works in circle geometry used the letter π to designate the perimeter (i.e., circumference) in different fractional representations of circle constants and in 1697 David Gregory used π/ρ (pi over rho) to denote the perimeter divided by the radius (6.28...). Subsequently π came to be used as a single symbol to represent the ratios in whole. Leonhard Euler initially used the single letter π was to denote the constant 6.28... in his 1727 Essay Explaining the Properties of Air. Euler would later use the letter π for 3.14... in his 1736 Mechanica and 1748 Introductio in analysin infinitorum, though defined as half the circumference of a circle of radius 1 rather than the ratio of circumference to diameter. Elsewhere in Mechanica, Euler instead used the letter π for one-fourth of the circumference of a unit circle, or 1.57... . Usage of the letter π, sometimes for 3.14... and other times for 6.28..., became widespread, with the definition varying as late as 1761; afterward, π was standardized as being equal to 3.14... . == Notion using 𝜏 == Proponents argue that while use of 𝜏 in place of 2π does not change any of the underlying mathematics, it does lead to simpler and more intuitive notation in many areas. Michael Hartl's Tau Manifesto gives many examples of formulas that are asserted to be clearer where τ is used instead of π. === Units of angle === Hartl and Robert Palais have argued that 𝜏 allows radian angles to be expressed more directly and in a way that makes clear the link between the radian measure and rotation around the unit circle. For instance, 3τ/4 rad can be easily interpreted as 3/4 of a turn around the unit circle in contrast with the numerically equal 3π/2 rad, where the meaning could be obscured, particularly for children and students of mathematics. Critics have responded that a full rotation is not necessarily the correct or fundamental reference measure for angles and two other possibilities, the right angle and straight angle, each have historical precedent. Euclid used the right angle as the basic unit of angle, and David Butler has suggested that τ/4 = π/2 ≈ 1.57, which he denotes with the Greek letter η (eta), should be seen as the fundamental circle constant. === Trigonometric Functions === Hartl has argued that the periodic trigonometric functions are simplified using 𝜏 as it aligns the function argument (radians) with the function period: sin θ repeats with period T = τ rad, reaches a maximum at T/4=τ/4 rad and a minimum at 3T/4=3τ/4 rad. === Area of a circle === Critics have argued that the formula for the area of a circle is more complicated when restated as A = 1/2𝜏r2. 
Hartl and others respond that the 1/2 factor is meaningful, arising from either integration or geometric proofs for the area of a circle as half the circumference times the radius. === Euler's identity === A common criticism of τ is that Euler's identity, eiπ + 1 = 0, sometimes claimed to be "the most beautiful theorem in mathematics" is made less elegant rendered as eiτ/2 + 1 = 0. Hartl has asserted that eiτ = 1 (which he also called "Euler's identity") is more fundamental and meaningful. John Conway noted that Euler's identity is a specific case of the general formula of the nth roots of unity, n√1 = eiτk/n (k = 1,2,..,n), which he maintained is preferable and more economical than Euler's. === Comparison of identities === The following table shows how various identities appear when τ = 2π is used instead of π. For a more complete list, see List of formulae involving π. == In culture == 𝜏 has made numerous appearances in culture. It is celebrated annually on June 28, known as Tau Day. Supporters of 𝜏 are called tauists. 𝜏 has been covered in videos by Vi Hart, Numberphile, SciShow, Steve Mould, Khan Academy, and 3Blue1Brown, and it has appeared in the comics xkcd, Saturday Morning Breakfast Cereal, and Sally Forth. The Massachusetts Institute of Technology usually announces admissions on March 14 at 6:28 p.m., which is on Pi Day at Tau Time. Peter Harremoës has used τ in a mathematical research article which was granted Editor's award of the year. == In programming languages and calculators == The following table documents various programming languages that have implemented the circle constant for converting between turns and radians. All of the languages below support the name "Tau" in some casing, but Processing also supports "TWO_PI" and Raku also supports the symbol "τ" for accessing the same value. The constant τ is made available in the Google calculator, Desmos graphing calculator, and the iPhone's Convert Angle option expresses the turn as τ. == Notes == == References == == External links == The Tau Manifesto
|
https://en.wikipedia.org/wiki/Tau_(mathematics)
|
In mathematics and other fields, a lemma (pl.: lemmas or lemmata) is a generally minor, proven proposition which is used to prove a larger statement. For that reason, it is also known as a "helping theorem" or an "auxiliary theorem". In many cases, a lemma derives its importance from the theorem it aims to prove; however, a lemma can also turn out to be more important than originally thought. == Etymology == From the Ancient Greek λῆμμα, (perfect passive εἴλημμαι) something received or taken. Thus something taken for granted in an argument. == Comparison with theorem == There is no formal distinction between a lemma and a theorem, only one of intention (see Theorem terminology). However, a lemma can be considered a minor result whose sole purpose is to help prove a more substantial theorem – a step in the direction of proof. == Well-known lemmas == Some powerful results in mathematics are known as lemmas, first named for their originally minor purpose. These include, among others: While these results originally seemed too simple or too technical to warrant independent interest, they have eventually turned out to be central to the theories in which they occur. == See also == == Notes == == References == == External links == Doron Zeilberger, Opinion 82: A Good Lemma is Worth a Thousand Theorems This article incorporates material from Lemma on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
|
https://en.wikipedia.org/wiki/Lemma_(mathematics)
|
In mathematics, and particularly in set theory, category theory, type theory, and the foundations of mathematics, a universe is a collection that contains all the entities one wishes to consider in a given situation. In set theory, universes are often classes that contain (as elements) all sets for which one hopes to prove a particular theorem. These classes can serve as inner models for various axiomatic systems such as ZFC or Morse–Kelley set theory. Universes are of critical importance to formalizing concepts in category theory inside set-theoretical foundations. For instance, the canonical motivating example of a category is Set, the category of all sets, which cannot be formalized in a set theory without some notion of a universe. In type theory, a universe is a type whose elements are types. == In a specific context == Perhaps the simplest version is that any set can be a universe, so long as the object of study is confined to that particular set. If the object of study is formed by the real numbers, then the real line R, which is the real number set, could be the universe under consideration. Implicitly, this is the universe that Georg Cantor was using when he first developed modern naive set theory and cardinality in the 1870s and 1880s in applications to real analysis. The only sets that Cantor was originally interested in were subsets of R. This concept of a universe is reflected in the use of Venn diagrams. In a Venn diagram, the action traditionally takes place inside a large rectangle that represents the universe U. One generally says that sets are represented by circles; but these sets can only be subsets of U. The complement of a set A is then given by that portion of the rectangle outside of A's circle. Strictly speaking, this is the relative complement U \ A of A relative to U; but in a context where U is the universe, it can be regarded as the absolute complement AC of A. Similarly, there is a notion of the nullary intersection, that is the intersection of zero sets (meaning no sets, not null sets). Without a universe, the nullary intersection would be the set of absolutely everything, which is generally regarded as impossible; but with the universe in mind, the nullary intersection can be treated as the set of everything under consideration, which is simply U. These conventions are quite useful in the algebraic approach to basic set theory, based on Boolean lattices. Except in some non-standard forms of axiomatic set theory (such as New Foundations), the class of all sets is not a Boolean lattice (it is only a relatively complemented lattice). In contrast, the class of all subsets of U, called the power set of U, is a Boolean lattice. The absolute complement described above is the complement operation in the Boolean lattice; and U, as the nullary intersection, serves as the top element (or nullary meet) in the Boolean lattice. Then De Morgan's laws, which deal with complements of meets and joins (which are unions in set theory) apply, and apply even to the nullary meet and the nullary join (which is the empty set). == In ordinary mathematics == However, once subsets of a given set X (in Cantor's case, X = R) are considered, the universe may need to be a set of subsets of X. (For example, a topology on X is a set of subsets of X.) The various sets of subsets of X will not themselves be subsets of X but will instead be subsets of PX, the power set of X. 
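Relative to an explicitly chosen finite universe these conventions become simple computations: the absolute complement is the relative complement in U, De Morgan's laws can be verified directly, and a collection of subsets of X (such as a topology) is seen to live inside the power set of X rather than inside X itself. A small sketch with arbitrary example sets:

```python
# A finite universe U, in the sense of the Venn-diagram convention, and two subsets.
U = set(range(10))
A = {1, 2, 3}
B = {3, 4, 5}

def complement(S):
    return U - S            # absolute complement of S, i.e. U \ S

# De Morgan's laws in the Boolean lattice of subsets of U.
print(complement(A | B) == complement(A) & complement(B))   # True
print(complement(A & B) == complement(A) | complement(B))   # True

# A set of subsets of X (here: a topology on X) is not itself a subset of X;
# its members are elements of the power set of X.
X = {1, 2}
topology = {frozenset(), frozenset({1}), frozenset({1, 2})}
print(all(T <= X for T in topology))   # True: every member is a subset of X
print(topology <= X)                   # False: the topology is not a subset of X
```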
This may be continued; the object of study may next consist of such sets of subsets of X, and so on, in which case the universe will be P(PX). In another direction, the binary relations on X (subsets of the Cartesian product X × X) may be considered, or functions from X to itself, requiring universes like P(X × X) or XX. Thus, even if the primary interest is X, the universe may need to be considerably larger than X. Following the above ideas, one may want the superstructure over X as the universe. This can be defined by structural recursion as follows: Let S0X be X itself. Let S1X be the union of X and PX. Let S2X be the union of S1X and P(S1X). In general, let Sn+1X be the union of SnX and P(SnX). Then the superstructure over X, written SX, is the union of S0X, S1X, S2X, and so on; or S X := ⋃ i = 0 ∞ S i X . {\displaystyle \mathbf {S} X:=\bigcup _{i=0}^{\infty }\mathbf {S} _{i}X{\mbox{.}}\!} No matter what set X is the starting point, the empty set {} will belong to S1X. The empty set is the von Neumann ordinal [0]. Then {[0]}, the set whose only element is the empty set, will belong to S2X; this is the von Neumann ordinal [1]. Similarly, {[1]} will belong to S3X, and thus so will {[0],[1]}, as the union of {[0]} and {[1]}; this is the von Neumann ordinal [2]. Continuing this process, every natural number is represented in the superstructure by its von Neumann ordinal. Next, if x and y belong to the superstructure, then so does {{x},{x,y}}, which represents the ordered pair (x,y). Thus the superstructure will contain the various desired Cartesian products. Then the superstructure also contains functions and relations, since these may be represented as subsets of Cartesian products. The process also gives ordered n-tuples, represented as functions whose domain is the von Neumann ordinal [n], and so on. So if the starting point is just X = {}, a great deal of the sets needed for mathematics appear as elements of the superstructure over {}. But each of the elements of S{} will be a finite set. Each of the natural numbers belongs to it, but the set N of all natural numbers does not (although it is a subset of S{}). In fact, the superstructure over {} consists of all of the hereditarily finite sets. As such, it can be considered the universe of finitist mathematics. Speaking anachronistically, one could suggest that the 19th-century finitist Leopold Kronecker was working in this universe; he believed that each natural number existed but that the set N (a "completed infinity") did not. However, S{} is unsatisfactory for ordinary mathematicians (who are not finitists), because even though N may be available as a subset of S{}, still the power set of N is not. In particular, arbitrary sets of real numbers are not available. So it may be necessary to start the process all over again and form S(S{}). However, to keep things simple, one can take the set N of natural numbers as given and form SN, the superstructure over N. This is often considered the universe of ordinary mathematics. The idea is that all of the mathematics that is ordinarily studied refers to elements of this universe. For example, any of the usual constructions of the real numbers (say by Dedekind cuts) belongs to SN. Even non-standard analysis can be done in the superstructure over a non-standard model of the natural numbers. There is a slight shift in philosophy from the previous section, where the universe was any set U of interest. 
There, the sets being studied were subsets of the universe; now, they are members of the universe. Thus although P(SX) is a Boolean lattice, what is relevant is that SX itself is not. Consequently, it is rare to apply the notions of Boolean lattices and Venn diagrams directly to the superstructure universe as they were to the power-set universes of the previous section. Instead, one can work with the individual Boolean lattices PA, where A is any relevant set belonging to SX; then PA is a subset of SX (and in fact belongs to SX). In Cantor's case X = R in particular, arbitrary sets of real numbers are not available, so there it may indeed be necessary to start the process all over again. == In set theory == It is possible to give a precise meaning to the claim that SN is the universe of ordinary mathematics; it is a model of Zermelo set theory, the axiomatic set theory originally developed by Ernst Zermelo in 1908. Zermelo set theory was successful precisely because it was capable of axiomatising "ordinary" mathematics, fulfilling the programme begun by Cantor over 30 years earlier. But Zermelo set theory proved insufficient for the further development of axiomatic set theory and other work in the foundations of mathematics, especially model theory. For a dramatic example, the description of the superstructure process above cannot itself be carried out in Zermelo set theory. The final step, forming S as an infinitary union, requires the axiom of replacement, which was added to Zermelo set theory in 1922 to form Zermelo–Fraenkel set theory, the set of axioms most widely accepted today. So while ordinary mathematics may be done in SN, discussion of SN goes beyond the "ordinary", into metamathematics. But if high-powered set theory is brought in, the superstructure process above reveals itself to be merely the beginning of a transfinite recursion. Going back to X = {}, the empty set, and introducing the (standard) notation Vi for Si{}, V0 = {}, V1 = P{}, and so on as before. But what used to be called "superstructure" is now just the next item on the list: Vω, where ω is the first infinite ordinal number. This can be extended to arbitrary ordinal numbers: V i := ⋃ j < i P V j {\displaystyle V_{i}:=\bigcup _{j<i}\mathbf {P} V_{j}\!} defines Vi for any ordinal number i. The union of all of the Vi is the von Neumann universe V: V := ⋃ i V i {\displaystyle V:=\bigcup _{i}V_{i}\!} . Every individual Vi is a set, but their union V is a proper class. The axiom of foundation, which was added to ZF set theory at around the same time as the axiom of replacement, says that every set belongs to V. Kurt Gödel's constructible universe L and the axiom of constructibility Inaccessible cardinals yield models of ZF and sometimes additional axioms, and are equivalent to the existence of the Grothendieck universe set == In predicate calculus == In an interpretation of first-order logic, the universe (or domain of discourse) is the set of individuals (individual constants) over which the quantifiers range. A proposition such as ∀x (x2 ≠ 2) is ambiguous, if no domain of discourse has been identified. In one interpretation, the domain of discourse could be the set of real numbers; in another interpretation, it could be the set of natural numbers. If the domain of discourse is the set of real numbers, the proposition is false, with x = √2 as counterexample; if the domain is the set of naturals, the proposition is true, since 2 is not the square of any natural number. 
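As an illustrative sketch (ours, not from the article), the dependence of such a proposition on the domain of discourse can be imitated in Python; the helper name holds_on and the sampling bound 100 are our own choices, and the floating-point check only approximates the real counterexample x = √2:
import math

def holds_on(domain, predicate):
    # brute-force check of a universally quantified statement over a finite sample of the domain
    return all(predicate(x) for x in domain)

def prop(x):
    return x * x != 2

print(holds_on(range(100), prop))   # True: no natural number squares to 2, so the sample finds no counterexample

x = math.sqrt(2)                    # over the reals the proposition fails
print(math.isclose(x * x, 2.0))     # True up to rounding, exhibiting x = sqrt(2) as a counterexample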
== In category theory == There is another approach to universes which is historically connected with category theory. This is the idea of a Grothendieck universe. Roughly speaking, a Grothendieck universe is a set inside which all the usual operations of set theory can be performed. This version of a universe is defined to be any set for which the following axioms hold: x ∈ u ∈ U {\displaystyle x\in u\in U} implies x ∈ U {\displaystyle x\in U} u ∈ U {\displaystyle u\in U} and v ∈ U {\displaystyle v\in U} imply {u,v}, (u,v), and u × v ∈ U {\displaystyle u\times v\in U} . x ∈ U {\displaystyle x\in U} implies P x ∈ U {\displaystyle {\mathcal {P}}x\in U} and ∪ x ∈ U {\displaystyle \cup x\in U} ω ∈ U {\displaystyle \omega \in U} (here ω = { 0 , 1 , 2 , . . . } {\displaystyle \omega =\{0,1,2,...\}} is the set of all finite ordinals.) if f : a → b {\displaystyle f:a\to b} is a surjective function with a ∈ U {\displaystyle a\in U} and b ⊂ U {\displaystyle b\subset U} , then b ∈ U {\displaystyle b\in U} . The most common use of a Grothendieck universe U is to take U as a replacement for the category of all sets. One says that a set S is U-small if S ∈U, and U-large otherwise. The category U-Set of all U-small sets has as objects all U-small sets and as morphisms all functions between these sets. Both the object set and the morphism set are sets, so it becomes possible to discuss the category of "all" sets without invoking proper classes. Then it becomes possible to define other categories in terms of this new category. For example, the category of all U-small categories is the category of all categories whose object set and whose morphism set are in U. Then the usual arguments of set theory are applicable to the category of all categories, and one does not have to worry about accidentally talking about proper classes. Because Grothendieck universes are extremely large, this suffices in almost all applications. Often when working with Grothendieck universes, mathematicians assume the Axiom of Universes: "For any set x, there exists a universe U such that x ∈U." The point of this axiom is that any set one encounters is then U-small for some U, so any argument done in a general Grothendieck universe can be applied. This axiom is closely related to the existence of strongly inaccessible cardinals. == In type theory == In some type theories, especially in systems with dependent types, types themselves can be regarded as terms. There is a type called the universe (often denoted U {\displaystyle {\mathcal {U}}} ) which has types as its elements. To avoid paradoxes such as Girard's paradox (an analogue of Russell's paradox for type theory), type theories are often equipped with a countably infinite hierarchy of such universes, with each universe being a term of the next one. There are at least two kinds of universes that one can consider in type theory: Russell-style universes (named after Bertrand Russell) and Tarski-style universes (named after Alfred Tarski). A Russell-style universe is a type whose terms are types. A Tarski-style universe is a type together with an interpretation operation allowing us to regard its terms as types. For example: The openendedness of Martin-Löf type theory is particularly manifest in the introduction of so-called universes. Type universes encapsulate the informal notion of reflection whose role may be explained as follows. 
During the course of developing a particular formalization of type theory, the type theorist may look back over the rules for types, say C, which have been introduced hitherto and perform the step of recognizing that they are valid according to Martin-Löf’s informal semantics of meaning explanation. This act of ‘introspection’ is an attempt to become aware of the conceptions which have governed our constructions in the past. It gives rise to a “reflection principle which roughly speaking says whatever we are used to doing with types can be done inside a universe” (Martin-Löf 1975, 83). On the formal level, this leads to an extension of the existing formalization of type theory in that the type forming capacities of C become enshrined in a type universe UC mirroring C. == See also == Conglomerate (mathematics) Domain of discourse Grothendieck universe Herbrand universe Free object Open formula Space (mathematics) == Notes == == References == Mac Lane, Saunders (1998). Categories for the Working Mathematician. Springer-Verlag New York, Inc. == External links == "Universe", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Universal Set". MathWorld.
|
https://en.wikipedia.org/wiki/Universe_(mathematics)
|
In mathematics, the concept of a measure is a generalization and formalization of geometrical measures (length, area, volume) and other common notions, such as magnitude, mass, and probability of events. These seemingly distinct concepts have many similarities and can often be treated together in a single mathematical context. Measures are foundational in probability theory, integration theory, and can be generalized to assume negative values, as with electrical charge. Far-reaching generalizations (such as spectral measures and projection-valued measures) of measure are widely used in quantum physics and physics in general. The intuition behind this concept dates back to ancient Greece, when Archimedes tried to calculate the area of a circle. But it was not until the late 19th and early 20th centuries that measure theory became a branch of mathematics. The foundations of modern measure theory were laid in the works of Émile Borel, Henri Lebesgue, Nikolai Luzin, Johann Radon, Constantin Carathéodory, and Maurice Fréchet, among others. == Definition == Let X {\displaystyle X} be a set and Σ {\displaystyle \Sigma } a σ-algebra over X . {\displaystyle X.} A set function μ {\displaystyle \mu } from Σ {\displaystyle \Sigma } to the extended real number line is called a measure if the following conditions hold: Non-negativity: For all E ∈ Σ , μ ( E ) ≥ 0. {\displaystyle E\in \Sigma ,\ \ \mu (E)\geq 0.} μ ( ∅ ) = 0. {\displaystyle \mu (\varnothing )=0.} Countable additivity (or σ-additivity): For all countable collections { E k } k = 1 ∞ {\displaystyle \{E_{k}\}_{k=1}^{\infty }} of pairwise disjoint sets in Σ, μ ( ⋃ k = 1 ∞ E k ) = ∑ k = 1 ∞ μ ( E k ) . {\displaystyle \mu {\left(\bigcup _{k=1}^{\infty }E_{k}\right)}=\sum _{k=1}^{\infty }\mu (E_{k}).} If at least one set E {\displaystyle E} has finite measure, then the requirement μ ( ∅ ) = 0 {\displaystyle \mu (\varnothing )=0} is met automatically due to countable additivity: μ ( E ) = μ ( E ∪ ∅ ) = μ ( E ) + μ ( ∅ ) , {\displaystyle \mu (E)=\mu (E\cup \varnothing )=\mu (E)+\mu (\varnothing ),} and therefore μ ( ∅ ) = 0. {\displaystyle \mu (\varnothing )=0.} If the condition of non-negativity is dropped, and μ {\displaystyle \mu } takes on at most one of the values of ± ∞ , {\displaystyle \pm \infty ,} then μ {\displaystyle \mu } is called a signed measure. The pair ( X , Σ ) {\displaystyle (X,\Sigma )} is called a measurable space, and the members of Σ {\displaystyle \Sigma } are called measurable sets. A triple ( X , Σ , μ ) {\displaystyle (X,\Sigma ,\mu )} is called a measure space. A probability measure is a measure with total measure one – that is, μ ( X ) = 1. {\displaystyle \mu (X)=1.} A probability space is a measure space with a probability measure. For measure spaces that are also topological spaces various compatibility conditions can be placed for the measure and the topology. Most measures met in practice in analysis (and in many cases also in probability theory) are Radon measures. Radon measures have an alternative definition in terms of linear functionals on the locally convex topological vector space of continuous functions with compact support. This approach is taken by Bourbaki (2004) and a number of other sources. For more details, see the article on Radon measures. == Instances == Some important measures are listed here. The counting measure is defined by μ ( S ) {\displaystyle \mu (S)} = number of elements in S . 
{\displaystyle S.} The Lebesgue measure on R {\displaystyle \mathbb {R} } is a complete translation-invariant measure on a σ-algebra containing the intervals in R {\displaystyle \mathbb {R} } such that μ ( [ 0 , 1 ] ) = 1 {\displaystyle \mu ([0,1])=1} ; and every other measure with these properties extends the Lebesgue measure. Circular angle measure is invariant under rotation, and hyperbolic angle measure is invariant under squeeze mapping. The Haar measure for a locally compact topological group is a generalization of the Lebesgue measure (and also of counting measure and circular angle measure) and has similar uniqueness properties. Every (pseudo) Riemannian manifold ( M , g ) {\displaystyle (M,g)} has a canonical measure μ g {\displaystyle \mu _{g}} that in local coordinates x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} looks like | det g | d n x {\displaystyle {\sqrt {\left|\det g\right|}}d^{n}x} where d n x {\displaystyle d^{n}x} is the usual Lebesgue measure. The Hausdorff measure is a generalization of the Lebesgue measure to sets with non-integer dimension, in particular, fractal sets. Every probability space gives rise to a measure which takes the value 1 on the whole space (and therefore takes all its values in the unit interval [0, 1]). Such a measure is called a probability measure or distribution. See the list of probability distributions for instances. The Dirac measure δa (cf. Dirac delta function) is given by δa(S) = χS(a), where χS is the indicator function of S . {\displaystyle S.} The measure of a set is 1 if it contains the point a {\displaystyle a} and 0 otherwise. Other 'named' measures used in various theories include: Borel measure, Jordan measure, ergodic measure, Gaussian measure, Baire measure, Radon measure, Young measure, and Loeb measure. In physics an example of a measure is spatial distribution of mass (see for example, gravity potential), or another non-negative extensive property, conserved (see conservation law for a list of these) or not. Negative values lead to signed measures, see "generalizations" below. Liouville measure, known also as the natural volume form on a symplectic manifold, is useful in classical statistical and Hamiltonian mechanics. Gibbs measure is widely used in statistical mechanics, often under the name canonical ensemble. Measure theory is used in machine learning. One example is the Flow Induced Probability Measure in GFlowNet. == Basic properties == Let μ {\displaystyle \mu } be a measure. === Monotonicity === If E 1 {\displaystyle E_{1}} and E 2 {\displaystyle E_{2}} are measurable sets with E 1 ⊆ E 2 {\displaystyle E_{1}\subseteq E_{2}} then μ ( E 1 ) ≤ μ ( E 2 ) . {\displaystyle \mu (E_{1})\leq \mu (E_{2}).} === Measure of countable unions and intersections === ==== Countable subadditivity ==== For any countable sequence E 1 , E 2 , E 3 , … {\displaystyle E_{1},E_{2},E_{3},\ldots } of (not necessarily disjoint) measurable sets E n {\displaystyle E_{n}} in Σ : {\displaystyle \Sigma :} μ ( ⋃ i = 1 ∞ E i ) ≤ ∑ i = 1 ∞ μ ( E i ) . {\displaystyle \mu \left(\bigcup _{i=1}^{\infty }E_{i}\right)\leq \sum _{i=1}^{\infty }\mu (E_{i}).} ==== Continuity from below ==== If E 1 , E 2 , E 3 , … {\displaystyle E_{1},E_{2},E_{3},\ldots } are measurable sets that are increasing (meaning that E 1 ⊆ E 2 ⊆ E 3 ⊆ … {\displaystyle E_{1}\subseteq E_{2}\subseteq E_{3}\subseteq \ldots } ) then the union of the sets E n {\displaystyle E_{n}} is measurable and μ ( ⋃ i = 1 ∞ E i ) = lim i → ∞ μ ( E i ) = sup i ≥ 1 μ ( E i ) . 
{\displaystyle \mu \left(\bigcup _{i=1}^{\infty }E_{i}\right)~=~\lim _{i\to \infty }\mu (E_{i})=\sup _{i\geq 1}\mu (E_{i}).} ==== Continuity from above ==== If E 1 , E 2 , E 3 , … {\displaystyle E_{1},E_{2},E_{3},\ldots } are measurable sets that are decreasing (meaning that E 1 ⊇ E 2 ⊇ E 3 ⊇ … {\displaystyle E_{1}\supseteq E_{2}\supseteq E_{3}\supseteq \ldots } ) then the intersection of the sets E n {\displaystyle E_{n}} is measurable; furthermore, if at least one of the E n {\displaystyle E_{n}} has finite measure then μ ( ⋂ i = 1 ∞ E i ) = lim i → ∞ μ ( E i ) = inf i ≥ 1 μ ( E i ) . {\displaystyle \mu \left(\bigcap _{i=1}^{\infty }E_{i}\right)=\lim _{i\to \infty }\mu (E_{i})=\inf _{i\geq 1}\mu (E_{i}).} This property is false without the assumption that at least one of the E n {\displaystyle E_{n}} has finite measure. For instance, for each n ∈ N , {\displaystyle n\in \mathbb {N} ,} let E n = [ n , ∞ ) ⊆ R , {\displaystyle E_{n}=[n,\infty )\subseteq \mathbb {R} ,} which all have infinite Lebesgue measure, but the intersection is empty. == Other properties == === Completeness === A measurable set X {\displaystyle X} is called a null set if μ ( X ) = 0. {\displaystyle \mu (X)=0.} A subset of a null set is called a negligible set. A negligible set need not be measurable, but every measurable negligible set is automatically a null set. A measure is called complete if every negligible set is measurable. A measure can be extended to a complete one by considering the σ-algebra of subsets Y {\displaystyle Y} which differ by a negligible set from a measurable set X , {\displaystyle X,} that is, such that the symmetric difference of X {\displaystyle X} and Y {\displaystyle Y} is contained in a null set. One defines μ ( Y ) {\displaystyle \mu (Y)} to equal μ ( X ) . {\displaystyle \mu (X).} === "Dropping the Edge" === If f : X → [ 0 , + ∞ ] {\displaystyle f:X\to [0,+\infty ]} is ( Σ , B ( [ 0 , + ∞ ] ) ) {\displaystyle (\Sigma ,{\cal {B}}([0,+\infty ]))} -measurable, then μ { x ∈ X : f ( x ) ≥ t } = μ { x ∈ X : f ( x ) > t } {\displaystyle \mu \{x\in X:f(x)\geq t\}=\mu \{x\in X:f(x)>t\}} for almost all t ∈ [ − ∞ , ∞ ] . {\displaystyle t\in [-\infty ,\infty ].} This property is used in connection with Lebesgue integral. === Additivity === Measures are required to be countably additive. However, the condition can be strengthened as follows. For any set I {\displaystyle I} and any set of nonnegative r i , i ∈ I {\displaystyle r_{i},i\in I} define: ∑ i ∈ I r i = sup { ∑ i ∈ J r i : | J | < ∞ , J ⊆ I } . {\displaystyle \sum _{i\in I}r_{i}=\sup \left\lbrace \sum _{i\in J}r_{i}:|J|<\infty ,J\subseteq I\right\rbrace .} That is, we define the sum of the r i {\displaystyle r_{i}} to be the supremum of all the sums of finitely many of them. A measure μ {\displaystyle \mu } on Σ {\displaystyle \Sigma } is κ {\displaystyle \kappa } -additive if for any λ < κ {\displaystyle \lambda <\kappa } and any family of disjoint sets X α , α < λ {\displaystyle X_{\alpha },\alpha <\lambda } the following hold: ⋃ α ∈ λ X α ∈ Σ {\displaystyle \bigcup _{\alpha \in \lambda }X_{\alpha }\in \Sigma } μ ( ⋃ α ∈ λ X α ) = ∑ α ∈ λ μ ( X α ) . {\displaystyle \mu \left(\bigcup _{\alpha \in \lambda }X_{\alpha }\right)=\sum _{\alpha \in \lambda }\mu \left(X_{\alpha }\right).} The second condition is equivalent to the statement that the ideal of null sets is κ {\displaystyle \kappa } -complete. 
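To make the definition and the basic properties above concrete, here is a minimal Python sketch checking the counting measure on a four-element set; on a finite space countable additivity reduces to the finite additivity checked below, and the helper names powerset and counting_measure are our own, not from the article:
from itertools import chain, combinations

X = {1, 2, 3, 4}

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def counting_measure(A):
    return len(A)                     # number of elements of A

sigma_algebra = powerset(X)           # the full power set is a sigma-algebra on X

assert counting_measure(frozenset()) == 0                                  # mu(empty set) = 0
assert all(counting_measure(A) >= 0 for A in sigma_algebra)                # non-negativity

# additivity on pairwise disjoint measurable sets
assert all(
    counting_measure(A | B) == counting_measure(A) + counting_measure(B)
    for A in sigma_algebra for B in sigma_algebra if not (A & B)
)

# monotonicity: A a subset of B implies mu(A) <= mu(B)
assert all(
    counting_measure(A) <= counting_measure(B)
    for A in sigma_algebra for B in sigma_algebra if A <= B
)
print("counting measure on a 4-element set satisfies the checked properties")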
=== Sigma-finite measures === A measure space ( X , Σ , μ ) {\displaystyle (X,\Sigma ,\mu )} is called finite if μ ( X ) {\displaystyle \mu (X)} is a finite real number (rather than ∞ {\displaystyle \infty } ). Nonzero finite measures are analogous to probability measures in the sense that any finite measure μ {\displaystyle \mu } is proportional to the probability measure 1 μ ( X ) μ . {\displaystyle {\frac {1}{\mu (X)}}\mu .} A measure μ {\displaystyle \mu } is called σ-finite if X {\displaystyle X} can be decomposed into a countable union of measurable sets of finite measure. Analogously, a set in a measure space is said to have a σ-finite measure if it is a countable union of sets with finite measure. For example, the real numbers with the standard Lebesgue measure are σ-finite but not finite. Consider the closed intervals [ k , k + 1 ] {\displaystyle [k,k+1]} for all integers k ; {\displaystyle k;} there are countably many such intervals, each has measure 1, and their union is the entire real line. Alternatively, consider the real numbers with the counting measure, which assigns to each finite set of reals the number of points in the set. This measure space is not σ-finite, because every set with finite measure contains only finitely many points, and it would take uncountably many such sets to cover the entire real line. The σ-finite measure spaces have some very convenient properties; σ-finiteness can be compared in this respect to the Lindelöf property of topological spaces. They can be also thought of as a vague generalization of the idea that a measure space may have 'uncountable measure'. === Strictly localizable measures === === Semifinite measures === Let X {\displaystyle X} be a set, let A {\displaystyle {\cal {A}}} be a sigma-algebra on X , {\displaystyle X,} and let μ {\displaystyle \mu } be a measure on A . {\displaystyle {\cal {A}}.} We say μ {\displaystyle \mu } is semifinite to mean that for all A ∈ μ pre { + ∞ } , {\displaystyle A\in \mu ^{\text{pre}}\{+\infty \},} P ( A ) ∩ μ pre ( R > 0 ) ≠ ∅ . {\displaystyle {\cal {P}}(A)\cap \mu ^{\text{pre}}(\mathbb {R} _{>0})\neq \emptyset .} Semifinite measures generalize sigma-finite measures, in such a way that some big theorems of measure theory that hold for sigma-finite but not arbitrary measures can be extended with little modification to hold for semifinite measures. (To-do: add examples of such theorems; cf. the talk page.) ==== Basic examples ==== Every sigma-finite measure is semifinite. Assume A = P ( X ) , {\displaystyle {\cal {A}}={\cal {P}}(X),} let f : X → [ 0 , + ∞ ] , {\displaystyle f:X\to [0,+\infty ],} and assume μ ( A ) = ∑ a ∈ A f ( a ) {\displaystyle \mu (A)=\sum _{a\in A}f(a)} for all A ⊆ X . {\displaystyle A\subseteq X.} We have that μ {\displaystyle \mu } is sigma-finite if and only if f ( x ) < + ∞ {\displaystyle f(x)<+\infty } for all x ∈ X {\displaystyle x\in X} and f pre ( R > 0 ) {\displaystyle f^{\text{pre}}(\mathbb {R} _{>0})} is countable. We have that μ {\displaystyle \mu } is semifinite if and only if f ( x ) < + ∞ {\displaystyle f(x)<+\infty } for all x ∈ X . {\displaystyle x\in X.} Taking f = X × { 1 } {\displaystyle f=X\times \{1\}} above (so that μ {\displaystyle \mu } is counting measure on P ( X ) {\displaystyle {\cal {P}}(X)} ), we see that counting measure on P ( X ) {\displaystyle {\cal {P}}(X)} is sigma-finite if and only if X {\displaystyle X} is countable; and semifinite (without regard to whether X {\displaystyle X} is countable). 
(Thus, counting measure, on the power set P ( X ) {\displaystyle {\cal {P}}(X)} of an arbitrary uncountable set X , {\displaystyle X,} gives an example of a semifinite measure that is not sigma-finite.) Let d {\displaystyle d} be a complete, separable metric on X , {\displaystyle X,} let B {\displaystyle {\cal {B}}} be the Borel sigma-algebra induced by d , {\displaystyle d,} and let s ∈ R > 0 . {\displaystyle s\in \mathbb {R} _{>0}.} Then the Hausdorff measure H s | B {\displaystyle {\cal {H}}^{s}|{\cal {B}}} is semifinite. Let d {\displaystyle d} be a complete, separable metric on X , {\displaystyle X,} let B {\displaystyle {\cal {B}}} be the Borel sigma-algebra induced by d , {\displaystyle d,} and let s ∈ R > 0 . {\displaystyle s\in \mathbb {R} _{>0}.} Then the packing measure H s | B {\displaystyle {\cal {H}}^{s}|{\cal {B}}} is semifinite. ==== Involved example ==== The zero measure is sigma-finite and thus semifinite. In addition, the zero measure is clearly less than or equal to μ . {\displaystyle \mu .} It can be shown there is a greatest measure with these two properties: We say the semifinite part of μ {\displaystyle \mu } to mean the semifinite measure μ sf {\displaystyle \mu _{\text{sf}}} defined in the above theorem. We give some nice, explicit formulas, which some authors may take as definition, for the semifinite part: μ sf = ( sup { μ ( B ) : B ∈ P ( A ) ∩ μ pre ( R ≥ 0 ) } ) A ∈ A . {\displaystyle \mu _{\text{sf}}=(\sup\{\mu (B):B\in {\cal {P}}(A)\cap \mu ^{\text{pre}}(\mathbb {R} _{\geq 0})\})_{A\in {\cal {A}}}.} μ sf = ( sup { μ ( A ∩ B ) : B ∈ μ pre ( R ≥ 0 ) } ) A ∈ A } . {\displaystyle \mu _{\text{sf}}=(\sup\{\mu (A\cap B):B\in \mu ^{\text{pre}}(\mathbb {R} _{\geq 0})\})_{A\in {\cal {A}}}\}.} μ sf = μ | μ pre ( R > 0 ) ∪ { A ∈ A : sup { μ ( B ) : B ∈ P ( A ) } = + ∞ } × { + ∞ } ∪ { A ∈ A : sup { μ ( B ) : B ∈ P ( A ) } < + ∞ } × { 0 } . {\displaystyle \mu _{\text{sf}}=\mu |_{\mu ^{\text{pre}}(\mathbb {R} _{>0})}\cup \{A\in {\cal {A}}:\sup\{\mu (B):B\in {\cal {P}}(A)\}=+\infty \}\times \{+\infty \}\cup \{A\in {\cal {A}}:\sup\{\mu (B):B\in {\cal {P}}(A)\}<+\infty \}\times \{0\}.} Since μ sf {\displaystyle \mu _{\text{sf}}} is semifinite, it follows that if μ = μ sf {\displaystyle \mu =\mu _{\text{sf}}} then μ {\displaystyle \mu } is semifinite. It is also evident that if μ {\displaystyle \mu } is semifinite then μ = μ sf . {\displaystyle \mu =\mu _{\text{sf}}.} ==== Non-examples ==== Every 0 − ∞ {\displaystyle 0-\infty } measure that is not the zero measure is not semifinite. (Here, we say 0 − ∞ {\displaystyle 0-\infty } measure to mean a measure whose range lies in { 0 , + ∞ } {\displaystyle \{0,+\infty \}} : ( ∀ A ∈ A ) ( μ ( A ) ∈ { 0 , + ∞ } ) . {\displaystyle (\forall A\in {\cal {A}})(\mu (A)\in \{0,+\infty \}).} ) Below we give examples of 0 − ∞ {\displaystyle 0-\infty } measures that are not zero measures. Let X {\displaystyle X} be nonempty, let A {\displaystyle {\cal {A}}} be a σ {\displaystyle \sigma } -algebra on X , {\displaystyle X,} let f : X → { 0 , + ∞ } {\displaystyle f:X\to \{0,+\infty \}} be not the zero function, and let μ = ( ∑ x ∈ A f ( x ) ) A ∈ A . {\displaystyle \mu =(\sum _{x\in A}f(x))_{A\in {\cal {A}}}.} It can be shown that μ {\displaystyle \mu } is a measure. μ = { ( ∅ , 0 ) } ∪ ( A ∖ { ∅ } ) × { + ∞ } . {\displaystyle \mu =\{(\emptyset ,0)\}\cup ({\cal {A}}\setminus \{\emptyset \})\times \{+\infty \}.} X = { 0 } , {\displaystyle X=\{0\},} A = { ∅ , X } , {\displaystyle {\cal {A}}=\{\emptyset ,X\},} μ = { ( ∅ , 0 ) , ( X , + ∞ ) } . 
{\displaystyle \mu =\{(\emptyset ,0),(X,+\infty )\}.} Let X {\displaystyle X} be uncountable, let A {\displaystyle {\cal {A}}} be a σ {\displaystyle \sigma } -algebra on X , {\displaystyle X,} let C = { A ∈ A : A is countable } {\displaystyle {\cal {C}}=\{A\in {\cal {A}}:A{\text{ is countable}}\}} be the countable elements of A , {\displaystyle {\cal {A}},} and let μ = C × { 0 } ∪ ( A ∖ C ) × { + ∞ } . {\displaystyle \mu ={\cal {C}}\times \{0\}\cup ({\cal {A}}\setminus {\cal {C}})\times \{+\infty \}.} It can be shown that μ {\displaystyle \mu } is a measure. ==== Involved non-example ==== Measures that are not semifinite are very wild when restricted to certain sets. Every measure is, in a sense, semifinite once its 0 − ∞ {\displaystyle 0-\infty } part (the wild part) is taken away. We say the 0 − ∞ {\displaystyle \mathbf {0-\infty } } part of μ {\displaystyle \mu } to mean the measure μ 0 − ∞ {\displaystyle \mu _{0-\infty }} defined in the above theorem. Here is an explicit formula for μ 0 − ∞ {\displaystyle \mu _{0-\infty }} : μ 0 − ∞ = ( sup { μ ( B ) − μ sf ( B ) : B ∈ P ( A ) ∩ μ sf pre ( R ≥ 0 ) } ) A ∈ A . {\displaystyle \mu _{0-\infty }=(\sup\{\mu (B)-\mu _{\text{sf}}(B):B\in {\cal {P}}(A)\cap \mu _{\text{sf}}^{\text{pre}}(\mathbb {R} _{\geq 0})\})_{A\in {\cal {A}}}.} ==== Results regarding semifinite measures ==== Let F {\displaystyle \mathbb {F} } be R {\displaystyle \mathbb {R} } or C , {\displaystyle \mathbb {C} ,} and let T : L F ∞ ( μ ) → ( L F 1 ( μ ) ) ∗ : g ↦ T g = ( ∫ f g d μ ) f ∈ L F 1 ( μ ) . {\displaystyle T:L_{\mathbb {F} }^{\infty }(\mu )\to \left(L_{\mathbb {F} }^{1}(\mu )\right)^{*}:g\mapsto T_{g}=\left(\int fgd\mu \right)_{f\in L_{\mathbb {F} }^{1}(\mu )}.} Then μ {\displaystyle \mu } is semifinite if and only if T {\displaystyle T} is injective. (This result has import in the study of the dual space of L 1 = L F 1 ( μ ) {\displaystyle L^{1}=L_{\mathbb {F} }^{1}(\mu )} .) Let F {\displaystyle \mathbb {F} } be R {\displaystyle \mathbb {R} } or C , {\displaystyle \mathbb {C} ,} and let T {\displaystyle {\cal {T}}} be the topology of convergence in measure on L F 0 ( μ ) . {\displaystyle L_{\mathbb {F} }^{0}(\mu ).} Then μ {\displaystyle \mu } is semifinite if and only if T {\displaystyle {\cal {T}}} is Hausdorff. (Johnson) Let X {\displaystyle X} be a set, let A {\displaystyle {\cal {A}}} be a sigma-algebra on X , {\displaystyle X,} let μ {\displaystyle \mu } be a measure on A , {\displaystyle {\cal {A}},} let Y {\displaystyle Y} be a set, let B {\displaystyle {\cal {B}}} be a sigma-algebra on Y , {\displaystyle Y,} and let ν {\displaystyle \nu } be a measure on B . {\displaystyle {\cal {B}}.} If μ , ν {\displaystyle \mu ,\nu } are both not a 0 − ∞ {\displaystyle 0-\infty } measure, then both μ {\displaystyle \mu } and ν {\displaystyle \nu } are semifinite if and only if ( μ × cld ν ) {\displaystyle (\mu \times _{\text{cld}}\nu )} ( A × B ) = μ ( A ) ν ( B ) {\displaystyle (A\times B)=\mu (A)\nu (B)} for all A ∈ A {\displaystyle A\in {\cal {A}}} and B ∈ B . {\displaystyle B\in {\cal {B}}.} (Here, μ × cld ν {\displaystyle \mu \times _{\text{cld}}\nu } is the measure defined in Theorem 39.1 in Berberian '65.) === Localizable measures === Localizable measures are a special case of semifinite measures and a generalization of sigma-finite measures. Let X {\displaystyle X} be a set, let A {\displaystyle {\cal {A}}} be a sigma-algebra on X , {\displaystyle X,} and let μ {\displaystyle \mu } be a measure on A . 
{\displaystyle {\cal {A}}.} Let F {\displaystyle \mathbb {F} } be R {\displaystyle \mathbb {R} } or C , {\displaystyle \mathbb {C} ,} and let T : L F ∞ ( μ ) → ( L F 1 ( μ ) ) ∗ : g ↦ T g = ( ∫ f g d μ ) f ∈ L F 1 ( μ ) . {\displaystyle T:L_{\mathbb {F} }^{\infty }(\mu )\to \left(L_{\mathbb {F} }^{1}(\mu )\right)^{*}:g\mapsto T_{g}=\left(\int fgd\mu \right)_{f\in L_{\mathbb {F} }^{1}(\mu )}.} Then μ {\displaystyle \mu } is localizable if and only if T {\displaystyle T} is bijective (if and only if L F ∞ ( μ ) {\displaystyle L_{\mathbb {F} }^{\infty }(\mu )} "is" L F 1 ( μ ) ∗ {\displaystyle L_{\mathbb {F} }^{1}(\mu )^{*}} ). === s-finite measures === A measure is said to be s-finite if it is a countable sum of finite measures. S-finite measures are more general than sigma-finite ones and have applications in the theory of stochastic processes. == Non-measurable sets == If the axiom of choice is assumed to be true, it can be proved that not all subsets of Euclidean space are Lebesgue measurable; examples of such sets include the Vitali set, and the non-measurable sets postulated by the Hausdorff paradox and the Banach–Tarski paradox. == Generalizations == For certain purposes, it is useful to have a "measure" whose values are not restricted to the non-negative reals or infinity. For instance, a countably additive set function with values in the (signed) real numbers is called a signed measure, while such a function with values in the complex numbers is called a complex measure. Observe, however, that complex measure is necessarily of finite variation, hence complex measures include finite signed measures but not, for example, the Lebesgue measure. Measures that take values in Banach spaces have been studied extensively. A measure that takes values in the set of self-adjoint projections on a Hilbert space is called a projection-valued measure; these are used in functional analysis for the spectral theorem. When it is necessary to distinguish the usual measures which take non-negative values from generalizations, the term positive measure is used. Positive measures are closed under conical combination but not general linear combination, while signed measures are the linear closure of positive measures. More generally see measure theory in topological vector spaces. Another generalization is the finitely additive measure, also known as a content. This is the same as a measure except that instead of requiring countable additivity we require only finite additivity. Historically, this definition was used first. It turns out that in general, finitely additive measures are connected with notions such as Banach limits, the dual of L ∞ {\displaystyle L^{\infty }} and the Stone–Čech compactification. All these are linked in one way or another to the axiom of choice. Contents remain useful in certain technical problems in geometric measure theory; this is the theory of Banach measures. A charge is a generalization in both directions: it is a finitely additive, signed measure. (Cf. ba space for information about bounded charges, where we say a charge is bounded to mean its range its a bounded subset of R.) == See also == == Notes == == Bibliography == == References == == External links == "Measure", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Tutorial: Measure Theory for Dummies
|
https://en.wikipedia.org/wiki/Measure_(mathematics)
|
In mathematics, an identity is an equality relating one mathematical expression A to another mathematical expression B, such that A and B (which might contain some variables) produce the same value for all values of the variables within a certain domain of discourse. In other words, A = B is an identity if A and B define the same functions, and an identity is an equality between functions that are differently defined. For example, ( a + b ) 2 = a 2 + 2 a b + b 2 {\displaystyle (a+b)^{2}=a^{2}+2ab+b^{2}} and cos 2 θ + sin 2 θ = 1 {\displaystyle \cos ^{2}\theta +\sin ^{2}\theta =1} are identities. Identities are sometimes indicated by the triple bar symbol ≡ instead of =, the equals sign. Formally, an identity is a universally quantified equality. == Common identities == === Algebraic identities === Certain identities, such as a + 0 = a {\displaystyle a+0=a} and a + ( − a ) = 0 {\displaystyle a+(-a)=0} , form the basis of algebra, while other identities, such as ( a + b ) 2 = a 2 + 2 a b + b 2 {\displaystyle (a+b)^{2}=a^{2}+2ab+b^{2}} and a 2 − b 2 = ( a + b ) ( a − b ) {\displaystyle a^{2}-b^{2}=(a+b)(a-b)} , can be useful in simplifying algebraic expressions and expanding them. === Trigonometric identities === Geometrically, trigonometric identities are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities involving both angles and side lengths of a triangle. Only the former are covered in this article. These identities are useful whenever expressions involving trigonometric functions need to be simplified. Another important application is the integration of non-trigonometric functions: a common technique which involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity. One of the most prominent examples of trigonometric identities involves the equation sin 2 θ + cos 2 θ = 1 , {\displaystyle \sin ^{2}\theta +\cos ^{2}\theta =1,} which is true for all real values of θ {\displaystyle \theta } . On the other hand, the equation cos θ = 1 {\displaystyle \cos \theta =1} is only true for certain values of θ {\displaystyle \theta } , not all. For example, this equation is true when θ = 0 , {\displaystyle \theta =0,} but false when θ = 2 {\displaystyle \theta =2} . Another group of trigonometric identities concerns the so-called addition/subtraction formulas (e.g. the double-angle identity sin ( 2 θ ) = 2 sin θ cos θ {\displaystyle \sin(2\theta )=2\sin \theta \cos \theta } , the addition formula for tan ( x + y ) {\displaystyle \tan(x+y)} ), which can be used to break down expressions of larger angles into those with smaller constituents. === Exponential identities === The following identities hold for all integer exponents, provided that the base is non-zero: b m + n = b m ⋅ b n ( b m ) n = b m ⋅ n ( b ⋅ c ) n = b n ⋅ c n {\displaystyle {\begin{aligned}b^{m+n}&=b^{m}\cdot b^{n}\\(b^{m})^{n}&=b^{m\cdot n}\\(b\cdot c)^{n}&=b^{n}\cdot c^{n}\end{aligned}}} Unlike addition and multiplication, exponentiation is not commutative. For example, 2 + 3 = 3 + 2 = 5 and 2 · 3 = 3 · 2 = 6, but 2^3 = 8 whereas 3^2 = 9. Also unlike addition and multiplication, exponentiation is not associative either. For example, (2 + 3) + 4 = 2 + (3 + 4) = 9 and (2 · 3) · 4 = 2 · (3 · 4) = 24, but (2^3)^4 = 8^4 = 4,096 whereas 2^(3^4) = 2^81 = 2,417,851,639,229,258,349,412,352.
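These exponent laws and the non-associativity example can be checked directly; the following short Python check is purely illustrative, and the particular values 5, 7, 3 and 4 are arbitrary choices of ours rather than anything from the article:
b, c, m, n = 5, 7, 3, 4

assert b ** (m + n) == b ** m * b ** n        # b^(m+n) = b^m * b^n
assert (b ** m) ** n == b ** (m * n)          # (b^m)^n = b^(m*n)
assert (b * c) ** n == b ** n * c ** n        # (b*c)^n = b^n * c^n

# exponentiation is neither commutative nor associative
assert 2 ** 3 == 8 and 3 ** 2 == 9
assert (2 ** 3) ** 4 == 4096
assert 2 ** (3 ** 4) == 2 ** 81 == 2417851639229258349412352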
When no parentheses are written, by convention the order is top-down, not bottom-up: b p q := b ( p q ) , {\displaystyle b^{p^{q}}:=b^{(p^{q})},} whereas ( b p ) q = b p ⋅ q . {\displaystyle (b^{p})^{q}=b^{p\cdot q}.} === Logarithmic identities === Several important formulas, sometimes called logarithmic identities or log laws, relate logarithms to one another: ==== Product, quotient, power and root ==== The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the ratio of two numbers is the difference of the logarithms. The logarithm of the pth power of a number is p times the logarithm of the number itself; the logarithm of a pth root is the logarithm of the number divided by p. The following table lists these identities with examples. Each of the identities can be derived after substitution of the logarithm definitions x = b log b x , {\displaystyle x=b^{\log _{b}x},} and/or y = b log b y , {\displaystyle y=b^{\log _{b}y},} in the left hand sides. ==== Change of base ==== The logarithm logb(x) can be computed from the logarithms of x and b with respect to an arbitrary base k using the following formula: log b ( x ) = log k ( x ) log k ( b ) . {\displaystyle \log _{b}(x)={\frac {\log _{k}(x)}{\log _{k}(b)}}.} Typical scientific calculators calculate the logarithms to bases 10 and e. Logarithms with respect to any base b can be determined using either of these two logarithms by the previous formula: log b ( x ) = log 10 ( x ) log 10 ( b ) = log e ( x ) log e ( b ) . {\displaystyle \log _{b}(x)={\frac {\log _{10}(x)}{\log _{10}(b)}}={\frac {\log _{e}(x)}{\log _{e}(b)}}.} Given a number x and its logarithm logb(x) to an unknown base b, the base is given by: b = x 1 log b ( x ) . {\displaystyle b=x^{\frac {1}{\log _{b}(x)}}.} === Hyperbolic function identities === The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule states that one can convert any trigonometric identity into a hyperbolic identity by expanding it completely in terms of integer powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term which contains a product of an even number of hyperbolic sines. The Gudermannian function gives a direct relationship between the trigonometric functions and the hyperbolic ones that does not involve complex numbers. == Logic and universal algebra == Formally, an identity is a true universally quantified formula of the form ∀ x 1 , … , x n : s = t , {\displaystyle \forall x_{1},\ldots ,x_{n}:s=t,} where s and t are terms with no other free variables than x 1 , … , x n . {\displaystyle x_{1},\ldots ,x_{n}.} The quantifier prefix ∀ x 1 , … , x n {\displaystyle \forall x_{1},\ldots ,x_{n}} is often left implicit, when it is stated that the formula is an identity. For example, the axioms of a monoid are often given as the formulas ∀ x , y , z : x ∗ ( y ∗ z ) = ( x ∗ y ) ∗ z , ∀ x : x ∗ 1 = x , ∀ x : 1 ∗ x = x , {\displaystyle \forall x,y,z:x*(y*z)=(x*y)*z,\quad \forall x:x*1=x,\quad \forall x:1*x=x,} or, shortly, x ∗ ( y ∗ z ) = ( x ∗ y ) ∗ z , x ∗ 1 = x , 1 ∗ x = x . {\displaystyle x*(y*z)=(x*y)*z,\qquad x*1=x,\qquad 1*x=x.} So, these formulas are identities in every monoid. As for any equality, the formulas without quantifier are often called equations. In other words, an identity is an equation that is true for all values of the variables. 
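As a sketch of this "true for all values" reading, and assuming the SymPy library is available, such identities can be verified symbolically rather than by testing particular values; the symbol names below are our own choices:
import sympy as sp

a, b, x, k, theta = sp.symbols('a b x k theta', positive=True)

# algebraic identity: (a + b)^2 = a^2 + 2ab + b^2
assert sp.expand((a + b) ** 2 - (a ** 2 + 2 * a * b + b ** 2)) == 0

# trigonometric identity: sin^2(theta) + cos^2(theta) = 1
assert sp.simplify(sp.sin(theta) ** 2 + sp.cos(theta) ** 2 - 1) == 0

# change of base: log_b(x) = log_k(x) / log_k(b)
assert sp.simplify(sp.log(x, b) - sp.log(x, k) / sp.log(b, k)) == 0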
== See also == Accounting identity List of mathematical identities Law (mathematics) == References == === Notes === === Citations === === Sources === == External links == The Encyclopedia of Equation Online encyclopedia of mathematical identities (archived) A Collection of Algebraic Identities Archived 2011-10-01 at the Wayback Machine
|
https://en.wikipedia.org/wiki/Identity_(mathematics)
|
In mathematics, two non-zero real numbers a and b are said to be commensurable if their ratio a/b is a rational number; otherwise a and b are called incommensurable. (Recall that a rational number is one that is equivalent to the ratio of two integers.) There is a more general notion of commensurability in group theory. For example, the numbers 3 and 2 are commensurable because their ratio, 3/2, is a rational number. The numbers 3 {\displaystyle {\sqrt {3}}} and 2 3 {\displaystyle 2{\sqrt {3}}} are also commensurable because their ratio, 3 2 3 = 1 2 {\textstyle {\frac {\sqrt {3}}{2{\sqrt {3}}}}={\frac {1}{2}}} , is a rational number. However, the numbers 3 {\textstyle {\sqrt {3}}} and 2 are incommensurable because their ratio, 3 2 {\textstyle {\frac {\sqrt {3}}{2}}} , is an irrational number. More generally, it is immediate from the definition that if a and b are any two non-zero rational numbers, then a and b are commensurable; it is also immediate that if a is any irrational number and b is any non-zero rational number, then a and b are incommensurable. On the other hand, if both a and b are irrational numbers, then a and b may or may not be commensurable. == History of the concept == The Pythagoreans are credited with the proof of the existence of irrational numbers. When the ratio of the lengths of two line segments is irrational, the line segments themselves (not just their lengths) are also described as being incommensurable. A separate, more general and circuitous ancient Greek doctrine of proportionality for geometric magnitude was developed in Book V of Euclid's Elements in order to allow proofs involving incommensurable lengths, thus avoiding arguments which applied only to a historically restricted definition of number. Euclid's notion of commensurability is anticipated in passing in the discussion between Socrates and the slave boy in Plato's dialogue entitled Meno, in which Socrates uses the boy's own inherent capabilities to solve a complex geometric problem through the Socratic Method. He develops a proof which is, for all intents and purposes, very Euclidean in nature and speaks to the concept of incommensurability. The usage primarily comes from translations of Euclid's Elements, in which two line segments a and b are called commensurable precisely if there is some third segment c that can be laid end-to-end a whole number of times to produce a segment congruent to a, and also, with a different whole number, a segment congruent to b. Euclid did not use any concept of real number, but he used a notion of congruence of line segments, and of one such segment being longer or shorter than another. That a/b is rational is a necessary and sufficient condition for the existence of some real number c, and integers m and n, such that a = mc and b = nc. Assuming for simplicity that a and b are positive, one can say that a ruler, marked off in units of length c, could be used to measure out both a line segment of length a, and one of length b. That is, there is a common unit of length in terms of which a and b can both be measured; this is the origin of the term. Otherwise the pair a and b are incommensurable. == In group theory == In group theory, two subgroups Γ1 and Γ2 of a group G are said to be commensurable if the intersection Γ1 ∩ Γ2 is of finite index in both Γ1 and Γ2. Example: Let a and b be nonzero real numbers. 
Then the subgroup of the real numbers R generated by a is commensurable with the subgroup generated by b if and only if the real numbers a and b are commensurable, in the sense that a/b is rational. Thus the group-theoretic notion of commensurability generalizes the concept for real numbers. There is a similar notion for two groups which are not given as subgroups of the same group. Two groups G1 and G2 are (abstractly) commensurable if there are subgroups H1 ⊂ G1 and H2 ⊂ G2 of finite index such that H1 is isomorphic to H2. == In topology == Two path-connected topological spaces are sometimes said to be commensurable if they have homeomorphic finite-sheeted covering spaces. Depending on the type of space under consideration, one might want to use homotopy equivalences or diffeomorphisms instead of homeomorphisms in the definition. If two spaces are commensurable, then their fundamental groups are commensurable. Example: any two closed surfaces of genus at least 2 are commensurable with each other. == References ==
|
https://en.wikipedia.org/wiki/Commensurability_(mathematics)
|
Philosophy of mathematics is the branch of philosophy that deals with the nature of mathematics and its relationship to other areas of philosophy, particularly epistemology and metaphysics. Central questions include whether mathematical objects are purely abstract entities or are in some way concrete, and what relationship such objects have with physical reality. Major themes dealt with in the philosophy of mathematics include: reality (whether mathematics is a pure product of the human mind or has some reality by itself); logic and rigor; the relationship with physical reality; the relationship with science; the relationship with applications; mathematical truth; and the nature of mathematics as a human activity (science, art, game, or all of these together). == Major themes == === Reality === === Logic and rigor === Mathematical reasoning requires rigor. This means that the definitions must be absolutely unambiguous and the proofs must be reducible to a succession of applications of syllogisms or inference rules, without any appeal to empirical evidence or intuition. The rules of rigorous reasoning were established by the ancient Greek philosophers under the name of logic. Logic is not specific to mathematics, but, in mathematics, the standard of rigor is much higher than elsewhere. For many centuries, logic, although used for mathematical proofs, belonged to philosophy and was not specifically studied by mathematicians. Around the end of the 19th century, several paradoxes called into question the logical foundations of mathematics, and consequently the validity of the whole of mathematics. This has been called the foundational crisis of mathematics. Some of these paradoxes consist of results that seem to contradict common intuition, such as the possibility of constructing consistent non-Euclidean geometries in which the parallel postulate fails, the Weierstrass function that is continuous but nowhere differentiable, and Georg Cantor's study of infinite sets, which led to the consideration of several sizes of infinity (infinite cardinals). Even more strikingly, Russell's paradox shows that the phrase "the set of all sets" is self-contradictory. Several methods have been proposed to solve the problem by changing the logical framework, such as constructive mathematics and intuitionistic logic. Roughly speaking, the first requires that every existence theorem be accompanied by an explicit example, and the second excludes the law of excluded middle and double negation elimination from mathematical reasoning. These logics have fewer inference rules than classical logic. On the other hand, the classical logic then in use was a first-order logic, in which, roughly speaking, quantifiers range over individual objects and cannot be applied to sets of such objects. As a consequence, a sentence such as "every non-empty set of natural numbers has a least element" cannot even be expressed in such a formalization. This led to the introduction of higher-order logics, which are now commonly used in mathematics. The problems of the foundations of mathematics were eventually resolved with the rise of mathematical logic as a new area of mathematics. In this framework, a mathematical or logical theory consists of a formal language that defines the well-formed assertions, a set of basic assertions called axioms, and a set of inference rules that allow new assertions to be derived from one or several known ones.
A theorem of such a theory is either an axiom or an assertion that can be obtained from previously known theorems by the application of an inference rule. The Zermelo–Fraenkel set theory with the axiom of choice, generally called ZFC, is a higher-order logic in which all mathematics have been restated; it is used implicitely in all mathematics texts that do not specify explicitly on which foundations they are based. Moreover, the other proposed foundations can be modeled and studied inside ZFC. It results that "rigor" is no more a relevant concept in mathematics, as a proof is either correct or erroneous, and a "rigorous proof" is simply a pleonasm. Where a special concept of rigor comes into play is in the socialized aspects of a proof. In particular, proofs are rarely written in full details, and some steps of a proof are generally considered as trivial, easy, or straightforward, and therefore left to the reader. As most proof errors occur in these skipped steps, a new proof requires to be verified by other specialists of the subject, and can be considered as reliable only after having been accepted by the community of the specialists, which may need several years. Also, the concept of "rigor" may remain useful for teaching to beginners what is a mathematical proof. === Relationship with sciences === Mathematics is used in most sciences for modeling phenomena, which then allows predictions to be made from experimental laws. The independence of mathematical truth from any experimentation implies that the accuracy of such predictions depends only on the adequacy of the model. Inaccurate predictions, rather than being caused by invalid mathematical concepts, imply the need to change the mathematical model used. For example, the perihelion precession of Mercury could only be explained after the emergence of Einstein's general relativity, which replaced Newton's law of gravitation as a better mathematical model. There is still a philosophical debate whether mathematics is a science. However, in practice, mathematicians are typically grouped with scientists, and mathematics shares much in common with the physical sciences. Like them, it is falsifiable, which means in mathematics that if a result or a theory is wrong, this can be proved by providing a counterexample. Similarly as in science, theories and results (theorems) are often obtained from experimentation. In mathematics, the experimentation may consist of computation on selected examples or of the study of figures or other representations of mathematical objects (often mind representations without physical support). For example, when asked how he came about his theorems, Gauss once replied "durch planmässiges Tattonieren" (through systematic experimentation). However, some authors emphasize that mathematics differs from the modern notion of science by not relying on empirical evidence. ==== Unreasonable effectiveness ==== The unreasonable effectiveness of mathematics is a phenomenon that was named and first made explicit by physicist Eugene Wigner. It is the fact that many mathematical theories (even the "purest") have applications outside their initial object. These applications may be completely outside their initial area of mathematics, and may concern physical phenomena that were completely unknown when the mathematical theory was introduced. Examples of unexpected applications of mathematical theories can be found in many areas of mathematics. 
A notable example is the prime factorization of natural numbers that was discovered more than 2,000 years before its common use for secure internet communications through the RSA cryptosystem. A second historical example is the theory of ellipses. They were studied by the ancient Greek mathematicians as conic sections (that is, intersections of cones with planes). It was almost 2,000 years later that Johannes Kepler discovered that the trajectories of the planets are ellipses. In the 19th century, the internal development of geometry (pure mathematics) led to definition and study of non-Euclidean geometries, spaces of dimension higher than three and manifolds. At this time, these concepts seemed totally disconnected from the physical reality, but at the beginning of the 20th century, Albert Einstein developed the theory of relativity that uses fundamentally these concepts. In particular, spacetime of special relativity is a non-Euclidean space of dimension four, and spacetime of general relativity is a (curved) manifold of dimension four. A striking aspect of the interaction between mathematics and physics is when mathematics drives research in physics. This is illustrated by the discoveries of the positron and the baryon Ω − . {\displaystyle \Omega ^{-}.} In both cases, the equations of the theories had unexplained solutions, which led to conjecture of the existence of an unknown particle, and the search for these particles. In both cases, these particles were discovered a few years later by specific experiments. == History == The origin of mathematics is of arguments and disagreements. Whether the birth of mathematics was by chance or induced by necessity during the development of similar subjects, such as physics, remains an area of contention. Many thinkers have contributed their ideas concerning the nature of mathematics. Today, some philosophers of mathematics aim to give accounts of this form of inquiry and its products as they stand, while others emphasize a role for themselves that goes beyond simple interpretation to critical analysis. There are traditions of mathematical philosophy in both Western philosophy and Eastern philosophy. Western philosophies of mathematics go as far back as Pythagoras, who described the theory "everything is mathematics" (mathematicism), Plato, who paraphrased Pythagoras, and studied the ontological status of mathematical objects, and Aristotle, who studied logic and issues related to infinity (actual versus potential). Greek philosophy on mathematics was strongly influenced by their study of geometry. For example, at one time, the Greeks held the opinion that 1 (one) was not a number, but rather a unit of arbitrary length. A number was defined as a multitude. Therefore, 3, for example, represented a certain multitude of units, and was thus "truly" a number. At another point, a similar argument was made that 2 was not a number but a fundamental notion of a pair. These views come from the heavily geometric straight-edge-and-compass viewpoint of the Greeks: just as lines drawn in a geometric problem are measured in proportion to the first arbitrarily drawn line, so too are the numbers on a number line measured in proportion to the arbitrary first "number" or "one". These earlier Greek ideas of numbers were later upended by the discovery of the irrationality of the square root of two. 
Hippasus, a disciple of Pythagoras, showed that the diagonal of a unit square was incommensurable with its (unit-length) edge: in other words he proved there was no existing (rational) number that accurately depicts the proportion of the diagonal of the unit square to its edge. This caused a significant re-evaluation of Greek philosophy of mathematics. According to legend, fellow Pythagoreans were so traumatized by this discovery that they murdered Hippasus to stop him from spreading his heretical idea. Simon Stevin was one of the first in Europe to challenge Greek ideas in the 16th century. Beginning with Leibniz, the focus shifted strongly to the relationship between mathematics and logic. This perspective dominated the philosophy of mathematics through the time of Boole, Frege and Russell, but was brought into question by developments in the late 19th and early 20th centuries. === Contemporary philosophy === A perennial issue in the philosophy of mathematics concerns the relationship between logic and mathematics at their joint foundations. While 20th-century philosophers continued to ask the questions mentioned at the outset of this article, the philosophy of mathematics in the 20th century was characterized by a predominant interest in formal logic, set theory (both naive set theory and axiomatic set theory), and foundational issues. It is a profound puzzle that on the one hand mathematical truths seem to have a compelling inevitability, but on the other hand the source of their "truthfulness" remains elusive. Investigations into this issue are known as the foundations of mathematics program. At the start of the 20th century, philosophers of mathematics were already beginning to divide into various schools of thought about all these questions, broadly distinguished by their pictures of mathematical epistemology and ontology. Three schools, formalism, intuitionism, and logicism, emerged at this time, partly in response to the increasingly widespread worry that mathematics as it stood, and analysis in particular, did not live up to the standards of certainty and rigor that had been taken for granted. Each school addressed the issues that came to the fore at that time, either attempting to resolve them or claiming that mathematics is not entitled to its status as our most trusted knowledge. Surprising and counter-intuitive developments in formal logic and set theory early in the 20th century led to new questions concerning what was traditionally called the foundations of mathematics. As the century unfolded, the initial focus of concern expanded to an open exploration of the fundamental axioms of mathematics, the axiomatic approach having been taken for granted since the time of Euclid around 300 BCE as the natural basis for mathematics. Notions of axiom, proposition and proof, as well as the notion of a proposition being true of a mathematical object (see Assignment), were formalized, allowing them to be treated mathematically. The Zermelo–Fraenkel axioms for set theory were formulated which provided a conceptual framework in which much mathematical discourse would be interpreted. In mathematics, as in physics, new and unexpected ideas had arisen and significant changes were coming. With Gödel numbering, propositions could be interpreted as referring to themselves or other propositions, enabling inquiry into the consistency of mathematical theories. 
This reflective critique in which the theory under review "becomes itself the object of a mathematical study" led Hilbert to call such study metamathematics or proof theory. At the middle of the century, a new mathematical theory was created by Samuel Eilenberg and Saunders Mac Lane, known as category theory, and it became a new contender for the natural language of mathematical thinking. As the 20th century progressed, however, philosophical opinions diverged as to just how well-founded were the questions about foundations that were raised at the century's beginning. Hilary Putnam summed up one common view of the situation in the last third of the century by saying: When philosophy discovers something wrong with science, sometimes science has to be changed—Russell's paradox comes to mind, as does Berkeley's attack on the actual infinitesimal—but more often it is philosophy that has to be changed. I do not think that the difficulties that philosophy finds with classical mathematics today are genuine difficulties; and I think that the philosophical interpretations of mathematics that we are being offered on every hand are wrong, and that "philosophical interpretation" is just what mathematics doesn't need.: 169–170 Philosophy of mathematics today proceeds along several different lines of inquiry, by philosophers of mathematics, logicians, and mathematicians, and there are many schools of thought on the subject. The schools are addressed separately in the next section, and their assumptions explained. == Contemporary schools of thought == Contemporary schools of thought in the philosophy of mathematics include: artistic, Platonism, mathematicism, logicism, formalism, conventionalism, intuitionism, constructivism, finitism, structuralism, embodied mind theories (Aristotelian realism, psychologism, empiricism), fictionalism, social constructivism, and non-traditional schools. However, many of these schools of thought are mutually compatible. For example, most living mathematicians are at once Platonists and formalists, attach great importance to aesthetics, and consider that axioms should be chosen for the results they produce, not for their coherence with human intuitions about reality (conventionalism). === Artistic === This is the view that mathematics is the aesthetic combination of assumptions, and therefore an art. A famous mathematician who held this view was the British mathematician G. H. Hardy. For Hardy, in his book A Mathematician's Apology, the definition of mathematics was more like the aesthetic combination of concepts. === Platonism === Platonism holds that mathematical objects, such as numbers and sets, are abstract entities that exist independently of human minds, language, and practices, so that mathematical truths are discovered rather than invented. === Mathematicism === Max Tegmark's mathematical universe hypothesis (or mathematicism) goes further than Platonism in asserting that not only do all mathematical objects exist, but nothing else does. Tegmark's sole postulate is: All structures that exist mathematically also exist physically. That is, in the sense that "in those [worlds] complex enough to contain self-aware substructures [they] will subjectively perceive themselves as existing in a physically 'real' world". === Logicism === Logicism is the thesis that mathematics is reducible to logic, and hence nothing but a part of logic.: 41 Logicists hold that mathematics can be known a priori, but suggest that our knowledge of mathematics is just part of our knowledge of logic in general, and is thus analytic, not requiring any special faculty of mathematical intuition. 
In this view, logic is the proper foundation of mathematics, and all mathematical statements are necessary logical truths. Rudolf Carnap (1931) presents the logicist thesis in two parts: The concepts of mathematics can be derived from logical concepts through explicit definitions. The theorems of mathematics can be derived from logical axioms through purely logical deduction. Gottlob Frege was the founder of logicism. In his seminal Grundgesetze der Arithmetik (Basic Laws of Arithmetic) he built up arithmetic from a system of logic with a general principle of comprehension, which he called "Basic Law V" (for concepts F and G, the extension of F equals the extension of G if and only if for all objects a, Fa if and only if Ga), a principle that he took to be acceptable as part of logic. Frege's construction was flawed. Bertrand Russell discovered that Basic Law V is inconsistent (this is Russell's paradox). Frege abandoned his logicist program soon after this, but it was continued by Russell and Whitehead. They attributed the paradox to "vicious circularity" and built up what they called ramified type theory to deal with it. In this system, they were eventually able to build up much of modern mathematics, but in an altered and excessively complex form (for example, there were different natural numbers in each type, and there were infinitely many types). They also had to make several compromises in order to develop much of mathematics, such as the "axiom of reducibility". Even Russell said that this axiom did not really belong to logic. Modern logicists (like Bob Hale, Crispin Wright, and perhaps others) have returned to a program closer to Frege's. They have abandoned Basic Law V in favor of abstraction principles such as Hume's principle (the number of objects falling under the concept F equals the number of objects falling under the concept G if and only if the extension of F and the extension of G can be put into one-to-one correspondence). Frege required Basic Law V to be able to give an explicit definition of the numbers, but all the properties of numbers can be derived from Hume's principle. This would not have been enough for Frege because (to paraphrase him) it does not exclude the possibility that the number 3 is in fact Julius Caesar. In addition, many of the weakened principles that they have had to adopt to replace Basic Law V no longer seem so obviously analytic, and thus purely logical. === Formalism === Formalism holds that mathematical statements may be thought of as statements about the consequences of certain string manipulation rules. For example, in the "game" of Euclidean geometry (which is seen as consisting of some strings called "axioms", and some "rules of inference" to generate new strings from given ones), one can prove that the Pythagorean theorem holds (that is, one can generate the string corresponding to the Pythagorean theorem). According to formalism, mathematical truths are not about numbers and sets and triangles and the like—in fact, they are not "about" anything at all. Another version of formalism is known as deductivism. In deductivism, the Pythagorean theorem is not an absolute truth, but a relative one: what is asserted is that it follows deductively from the appropriate axioms. The same is held to be true for all other mathematical statements. Formalism need not mean that mathematics is nothing more than a meaningless symbolic game. It is usually hoped that there exists some interpretation in which the rules of the game hold. (Compare this position to structuralism.) 
But it does allow the working mathematician to continue in his or her work and leave such problems to the philosopher or scientist. Many formalists would say that in practice, the axiom systems to be studied will be suggested by the demands of science or other areas of mathematics. A major early proponent of formalism was David Hilbert, whose program was intended to be a complete and consistent axiomatization of all of mathematics. Hilbert aimed to show the consistency of mathematical systems from the assumption that the "finitary arithmetic" (a subsystem of the usual arithmetic of the positive integers, chosen to be philosophically uncontroversial) was consistent. Hilbert's goals of creating a system of mathematics that is both complete and consistent were seriously undermined by the second of Gödel's incompleteness theorems, which states that sufficiently expressive consistent axiom systems can never prove their own consistency. Since any such axiom system would contain the finitary arithmetic as a subsystem, Gödel's theorem implied that it would be impossible to prove the system's consistency relative to that (since it would then prove its own consistency, which Gödel had shown was impossible). Thus, in order to show that any axiomatic system of mathematics is in fact consistent, one needs to first assume the consistency of a system of mathematics that is in a sense stronger than the system to be proven consistent. Hilbert was initially a deductivist, but, as may be clear from above, he considered certain metamathematical methods to yield intrinsically meaningful results and was a realist with respect to the finitary arithmetic. Later, he held the opinion that there was no other meaningful mathematics whatsoever, regardless of interpretation. Other formalists, such as Rudolf Carnap, Alfred Tarski, and Haskell Curry, considered mathematics to be the investigation of formal axiom systems. Mathematical logicians study formal systems but are just as often realists as they are formalists. Formalists are relatively tolerant and inviting to new approaches to logic, non-standard number systems, new set theories, etc. The more games we study, the better. However, in all three of these examples, motivation is drawn from existing mathematical or philosophical concerns. The "games" are usually not arbitrary. The main critique of formalism is that the actual mathematical ideas that occupy mathematicians are far removed from the string manipulation games mentioned above. Formalism is thus silent on the question of which axiom systems ought to be studied, as none is more meaningful than another from a formalistic point of view. Recently, some formalist mathematicians have proposed that all of our formal mathematical knowledge should be systematically encoded in computer-readable formats, so as to facilitate automated proof checking of mathematical proofs and the use of interactive theorem proving in the development of mathematical theories and computer software. Because of their close connection with computer science, this idea is also advocated by mathematical intuitionists and constructivists in the "computability" tradition—see QED project for a general overview. === Conventionalism === The French mathematician Henri Poincaré was among the first to articulate a conventionalist view. Poincaré's use of non-Euclidean geometries in his work on differential equations convinced him that Euclidean geometry should not be regarded as a priori truth. 
He held that axioms in geometry should be chosen for the results they produce, not for their apparent coherence with human intuitions about the physical world. === Intuitionism === In mathematics, intuitionism is a program of methodological reform whose motto is that "there are no non-experienced mathematical truths" (L. E. J. Brouwer). From this springboard, intuitionists seek to reconstruct what they consider to be the corrigible portion of mathematics in accordance with Kantian concepts of being, becoming, intuition, and knowledge. Brouwer, the founder of the movement, held that mathematical objects arise from the a priori forms of the volitions that inform the perception of empirical objects. A major force behind intuitionism was L. E. J. Brouwer, who rejected the usefulness of formalized logic of any sort for mathematics. His student Arend Heyting postulated an intuitionistic logic, different from the classical Aristotelian logic; this logic does not contain the law of the excluded middle and therefore frowns upon proofs by contradiction. The axiom of choice is also rejected in most intuitionistic set theories, though in some versions it is accepted. In intuitionism, the term "explicit construction" is not cleanly defined, and that has led to criticisms. Attempts have been made to use the concepts of Turing machine or computable function to fill this gap, leading to the claim that only questions regarding the behavior of finite algorithms are meaningful and should be investigated in mathematics. This has led to the study of the computable numbers, first introduced by Alan Turing. Not surprisingly, then, this approach to mathematics is sometimes associated with theoretical computer science. ==== Constructivism ==== Like intuitionism, constructivism involves the regulative principle that only mathematical entities which can be explicitly constructed in a certain sense should be admitted to mathematical discourse. In this view, mathematics is an exercise of the human intuition, not a game played with meaningless symbols. Instead, it is about entities that we can create directly through mental activity. In addition, some adherents of these schools reject non-constructive proofs, such as using proof by contradiction when showing the existence of an object or when trying to establish the truth of some proposition. Important work was done by Errett Bishop, who managed to prove versions of the most important theorems in real analysis as constructive analysis in his 1967 Foundations of Constructive Analysis. ==== Finitism ==== Finitism is an extreme form of constructivism, according to which a mathematical object does not exist unless it can be constructed from natural numbers in a finite number of steps. In her book Philosophy of Set Theory, Mary Tiles characterized those who allow countably infinite objects as classical finitists, and those who deny even countably infinite objects as strict finitists. The most famous proponent of finitism was Leopold Kronecker, who said: God created the natural numbers, all else is the work of man. Ultrafinitism is an even more extreme version of finitism, which rejects not only infinities but finite quantities that cannot feasibly be constructed with available resources. Another variant of finitism is Euclidean arithmetic, a system developed by John Penn Mayberry in his book The Foundations of Mathematics in the Theory of Sets. 
Mayberry's system is Aristotelian in general inspiration and, despite his strong rejection of any role for operationalism or feasibility in the foundations of mathematics, comes to somewhat similar conclusions, such as, for instance, that super-exponentiation is not a legitimate finitary function. === Structuralism === Structuralism is a position holding that mathematical theories describe structures, and that mathematical objects are exhaustively defined by their places in such structures, consequently having no intrinsic properties. For instance, it would maintain that all that needs to be known about the number 1 is that it is the first whole number after 0. Likewise all the other whole numbers are defined by their places in a structure, the number line. Other examples of mathematical objects might include lines and planes in geometry, or elements and operations in abstract algebra. Structuralism is an epistemologically realistic view in that it holds that mathematical statements have an objective truth value. However, its central claim only relates to what kind of entity a mathematical object is, not to what kind of existence mathematical objects or structures have (not, in other words, to their ontology). The kind of existence mathematical objects have would clearly be dependent on that of the structures in which they are embedded; different sub-varieties of structuralism make different ontological claims in this regard. The ante rem structuralism ("before the thing") has a similar ontology to Platonism. Structures are held to have a real but abstract and immaterial existence. As such, it faces the standard epistemological problem of explaining the interaction between such abstract structures and flesh-and-blood mathematicians (see Benacerraf's identification problem). The in re structuralism ("in the thing") is the equivalent of Aristotelian realism. Structures are held to exist inasmuch as some concrete system exemplifies them. This incurs the usual issues that some perfectly legitimate structures might accidentally happen not to exist, and that a finite physical world might not be "big" enough to accommodate some otherwise legitimate structures. The post rem structuralism ("after the thing") is anti-realist about structures in a way that parallels nominalism. Like nominalism, the post rem approach denies the existence of abstract mathematical objects with properties other than their place in a relational structure. According to this view mathematical systems exist, and have structural features in common. If something is true of a structure, it will be true of all systems exemplifying the structure. However, it is merely instrumental to talk of structures being "held in common" between systems: they in fact have no independent existence. === Embodied mind theories === Embodied mind theories hold that mathematical thought is a natural outgrowth of the human cognitive apparatus which finds itself in our physical universe. For example, the abstract concept of number springs from the experience of counting discrete objects (requiring the human senses such as sight for detecting the objects, touch; and signalling from the brain). It is held that mathematics is not universal and does not exist in any real sense, other than in human brains. Humans construct, but do not discover, mathematics. 
On this view, the cognitive processes of pattern-finding and of distinguishing objects are also a subject for neuroscience, at least if mathematics is taken to be relevant to a natural world (as under realism, or some degree of it, as opposed to pure solipsism). The actual relevance of mathematics to reality, while accepted as a trustworthy approximation (it is also suggested that the evolution of perception, the body, and the senses may have been necessary for survival), is not necessarily accurate to a full realism, since it remains subject to flaws such as illusions, assumptions (including the foundations and axioms that humans have chosen for mathematics), generalisations, deception, and hallucinations. This may also raise questions about the compatibility of the modern scientific method with mathematics in general: while relatively reliable, the scientific method is still limited to what can be measured empirically, which may not be as reliable as previously assumed (see also "counterintuitive" concepts such as quantum nonlocality and action at a distance). Another issue is that no single numeral system is necessarily applicable to all problem solving. Subjects such as complex or imaginary numbers require specific extensions of the more commonly used axioms of mathematics; otherwise they cannot be adequately understood. Likewise, computer programmers may use hexadecimal for its "human-friendly" representation of binary-coded values, rather than decimal (which is convenient for counting because humans have ten fingers). The axioms and logical rules behind mathematics have also varied through time (consider, for example, the adoption and invention of zero). As perceptions produced by the human brain are subject to illusions, assumptions, deceptions, (induced) hallucinations, and cognitive errors, it can be questioned whether they are accurate or strictly indicative of truth (see also philosophy of being), and likewise what empiricism itself amounts to in relation to the universe and whether it is independent of the senses and the universe. The human mind has no special claim on reality or on approaches to it built out of math. If such constructs as Euler's identity are true, then they are true as a map of the human mind and cognition. Embodied mind theorists thus explain the effectiveness of mathematics: mathematics was constructed by the brain in order to be effective in this universe. The most accessible, famous, and infamous treatment of this perspective is Where Mathematics Comes From, by George Lakoff and Rafael E. Núñez. In addition, mathematician Keith Devlin has investigated similar concepts with his book The Math Instinct, as has neuroscientist Stanislas Dehaene with his book The Number Sense. For more on the philosophical ideas that inspired this perspective, see cognitive science of mathematics. ==== Aristotelian realism ==== Aristotelian realism holds that mathematics studies properties such as symmetry, continuity and order that can be literally realized in the physical world (or in any other world there might be). It contrasts with Platonism in holding that the objects of mathematics, such as numbers, do not exist in an "abstract" world but can be physically realized. For example, the number 4 is realized in the relation between a heap of parrots and the universal "being a parrot" that divides the heap into so many parrots. 
Aristotelian realism is defended by James Franklin and the Sydney School in the philosophy of mathematics and is close to the view of Penelope Maddy that when an egg carton is opened, a set of three eggs is perceived (that is, a mathematical entity realized in the physical world). A problem for Aristotelian realism is what account to give of higher infinities, which may not be realizable in the physical world. The Euclidean arithmetic developed by John Penn Mayberry in his book The Foundations of Mathematics in the Theory of Sets also falls into the Aristotelian realist tradition. Mayberry, following Euclid, considers numbers to be simply "definite multitudes of units" realized in nature—such as "the members of the London Symphony Orchestra" or "the trees in Birnam wood". Whether or not there are definite multitudes of units for which Euclid's Common Notion 5 (the whole is greater than the part) fails and which would consequently be reckoned as infinite is for Mayberry essentially a question about Nature and does not entail any transcendental suppositions. ==== Psychologism ==== Psychologism in the philosophy of mathematics is the position that mathematical concepts and/or truths are grounded in, derived from or explained by psychological facts (or laws). John Stuart Mill seems to have been an advocate of a type of logical psychologism, as were many 19th-century German logicians such as Sigwart and Erdmann as well as a number of psychologists, past and present: for example, Gustave Le Bon. Psychologism was famously criticized by Frege in his The Foundations of Arithmetic, and many of his works and essays, including his review of Husserl's Philosophy of Arithmetic. Edmund Husserl, in the first volume of his Logical Investigations, called "The Prolegomena of Pure Logic", criticized psychologism thoroughly and sought to distance himself from it. The "Prolegomena" is considered a more concise, fair, and thorough refutation of psychologism than the criticisms made by Frege, and also it is considered today by many as being a memorable refutation for its decisive blow to psychologism. Psychologism was also criticized by Charles Sanders Peirce and Maurice Merleau-Ponty. ==== Empiricism ==== Mathematical empiricism is a form of realism that denies that mathematics can be known a priori at all. It says that we discover mathematical facts by empirical research, just like facts in any of the other sciences. It is not one of the classical three positions advocated in the early 20th century, but primarily arose in the middle of the century. However, an important early proponent of a view like this was John Stuart Mill. Mill's view was widely criticized, because, according to critics, such as A.J. Ayer, it makes statements like "2 + 2 = 4" come out as uncertain, contingent truths, which we can only learn by observing instances of two pairs coming together and forming a quartet. Karl Popper was another philosopher to point out empirical aspects of mathematics, observing that "most mathematical theories are, like those of physics and biology, hypothetico-deductive: pure mathematics therefore turns out to be much closer to the natural sciences whose hypotheses are conjectures, than it seemed even recently." Popper also noted he would "admit a system as empirical or scientific only if it is capable of being tested by experience." Contemporary mathematical empiricism, formulated by W. V. O. 
Quine and Hilary Putnam, is primarily supported by the indispensability argument: mathematics is indispensable to all empirical sciences, and if we want to believe in the reality of the phenomena described by the sciences, we ought also believe in the reality of those entities required for this description. That is, since physics needs to talk about electrons to say why light bulbs behave as they do, then electrons must exist. Since physics needs to talk about numbers in offering any of its explanations, then numbers must exist. In keeping with Quine and Putnam's overall philosophies, this is a naturalistic argument. It argues for the existence of mathematical entities as the best explanation for experience, thus stripping mathematics of being distinct from the other sciences. Putnam strongly rejected the term "Platonist" as implying an over-specific ontology that was not necessary to mathematical practice in any real sense. He advocated a form of "pure realism" that rejected mystical notions of truth and accepted much quasi-empiricism in mathematics. This grew from the increasingly popular assertion in the late 20th century that no one foundation of mathematics could be ever proven to exist. It is also sometimes called "postmodernism in mathematics" although that term is considered overloaded by some and insulting by others. Quasi-empiricism argues that in doing their research, mathematicians test hypotheses as well as prove theorems. A mathematical argument can transmit falsity from the conclusion to the premises just as well as it can transmit truth from the premises to the conclusion. Putnam has argued that any theory of mathematical realism would include quasi-empirical methods. He proposed that an alien species doing mathematics might well rely on quasi-empirical methods primarily, being willing often to forgo rigorous and axiomatic proofs, and still be doing mathematics—at perhaps a somewhat greater risk of failure of their calculations. He gave a detailed argument for this in New Directions. Quasi-empiricism was also developed by Imre Lakatos. The most important criticism of empirical views of mathematics is approximately the same as that raised against Mill. If mathematics is just as empirical as the other sciences, then this suggests that its results are just as fallible as theirs, and just as contingent. In Mill's case the empirical justification comes directly, while in Quine's case it comes indirectly, through the coherence of our scientific theory as a whole, i.e. consilience after E.O. Wilson. Quine suggests that mathematics seems completely certain because the role it plays in our web of belief is extraordinarily central, and that it would be extremely difficult for us to revise it, though not impossible. For a philosophy of mathematics that attempts to overcome some of the shortcomings of Quine and Gödel's approaches by taking aspects of each see Penelope Maddy's Realism in Mathematics. Another example of a realist theory is the embodied mind theory. For experimental evidence suggesting that human infants can do elementary arithmetic, see Brian Butterworth. === Fictionalism === Mathematical fictionalism was brought to fame in 1980 when Hartry Field published Science Without Numbers, which rejected and in fact reversed Quine's indispensability argument. 
Where Quine suggested that mathematics was indispensable for our best scientific theories, and therefore should be accepted as a body of truths talking about independently existing entities, Field suggested that mathematics was dispensable, and therefore should be considered as a body of falsehoods not talking about anything real. He did this by giving a complete axiomatization of Newtonian mechanics with no reference to numbers or functions at all. He started with the "betweenness" of Hilbert's axioms to characterize space without coordinatizing it, and then added extra relations between points to do the work formerly done by vector fields. Hilbert's geometry is mathematical, because it talks about abstract points, but in Field's theory, these points are the concrete points of physical space, so no special mathematical objects at all are needed. Having shown how to do science without using numbers, Field proceeded to rehabilitate mathematics as a kind of useful fiction. He showed that mathematical physics is a conservative extension of his non-mathematical physics (that is, every physical fact provable in mathematical physics is already provable from Field's system), so that mathematics is a reliable process whose physical applications are all true, even though its own statements are false. Thus, when doing mathematics, we can see ourselves as telling a sort of story, talking as if numbers existed. For Field, a statement like "2 + 2 = 4" is just as fictitious as "Sherlock Holmes lived at 221B Baker Street"—but both are true according to the relevant fictions. Another fictionalist, Mary Leng, expresses the perspective succinctly by dismissing any seeming connection between mathematics and the physical world as "a happy coincidence". This rejection separates fictionalism from other forms of anti-realism, which see mathematics itself as artificial but still bounded or fitted to reality in some way. By this account, there are no metaphysical or epistemological problems special to mathematics. The only worries left are the general worries about non-mathematical physics, and about fiction in general. Field's approach has been very influential, but is widely rejected. This is in part because of the requirement of strong fragments of second-order logic to carry out his reduction, and because the statement of conservativity seems to require quantification over abstract models or deductions. === Social constructivism === Social constructivism sees mathematics primarily as a social construct, as a product of culture, subject to correction and change. Like the other sciences, mathematics is viewed as an empirical endeavor whose results are constantly evaluated and may be discarded. However, while on an empiricist view the evaluation is some sort of comparison with "reality", social constructivists emphasize that the direction of mathematical research is dictated by the fashions of the social group performing it or by the needs of the society financing it. However, although such external forces may change the direction of some mathematical research, there are strong internal constraints—the mathematical traditions, methods, problems, meanings and values into which mathematicians are enculturated—that work to conserve the historically defined discipline. This runs counter to the traditional beliefs of working mathematicians, that mathematics is somehow pure or objective. 
But social constructivists argue that mathematics is in fact grounded by much uncertainty: as mathematical practice evolves, the status of previous mathematics is cast into doubt, and is corrected to the degree it is required or desired by the current mathematical community. This can be seen in the development of analysis from reexamination of the calculus of Leibniz and Newton. They argue further that finished mathematics is often accorded too much status, and folk mathematics not enough, due to an overemphasis on axiomatic proof and peer review as practices. The social nature of mathematics is highlighted in its subcultures. Major discoveries can be made in one branch of mathematics and be relevant to another, yet the relationship goes undiscovered for lack of social contact between mathematicians. Social constructivists argue each speciality forms its own epistemic community and often has great difficulty communicating, or motivating the investigation of unifying conjectures that might relate different areas of mathematics. Social constructivists see the process of "doing mathematics" as actually creating the meaning, while social realists see a deficiency either of human capacity to abstractify, or of human's cognitive bias, or of mathematicians' collective intelligence as preventing the comprehension of a real universe of mathematical objects. Social constructivists sometimes reject the search for foundations of mathematics as bound to fail, as pointless or even meaningless. Contributions to this school have been made by Imre Lakatos and Thomas Tymoczko, although it is not clear that either would endorse the title. More recently Paul Ernest has explicitly formulated a social constructivist philosophy of mathematics. Some consider the work of Paul Erdős as a whole to have advanced this view (although he personally rejected it) because of his uniquely broad collaborations, which prompted others to see and study "mathematics as a social activity", e.g., via the Erdős number. Reuben Hersh has also promoted the social view of mathematics, calling it a "humanistic" approach, similar to but not quite the same as that associated with Alvin White; one of Hersh's co-authors, Philip J. Davis, has expressed sympathy for the social view as well. === Beyond the traditional schools === ==== Unreasonable effectiveness ==== Rather than focus on narrow debates about the true nature of mathematical truth, or even on practices unique to mathematicians such as the proof, a growing movement from the 1960s to the 1990s began to question the idea of seeking foundations or finding any one right answer to why mathematics works. The starting point for this was Eugene Wigner's famous 1960 paper "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", in which he argued that the happy coincidence of mathematics and physics being so well matched seemed to be unreasonable and hard to explain. ==== Popper's two senses of number statements ==== Realist and constructivist theories are normally taken to be contraries. However, Karl Popper argued that a number statement such as "2 apples + 2 apples = 4 apples" can be taken in two senses. In one sense it is irrefutable and logically true. In the second sense it is factually true and falsifiable. Another way of putting this is to say that a single number statement can express two propositions: one of which can be explained on constructivist lines; the other on realist lines. 
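To illustrate the first of these two senses, the purely formal one, here is a standard derivation of 2 + 2 = 4 from the usual recursive definition of addition on the natural numbers (a textbook-style sketch, not specific to Popper's own presentation), writing 2 = s(s(0)) and 4 = s(s(s(s(0)))) in successor notation:

```latex
\begin{align*}
&\text{Defining clauses: } a + 0 = a, \qquad a + s(b) = s(a + b).\\
2 + 2 &= s(s(0)) + s(s(0))\\
      &= s\bigl(s(s(0)) + s(0)\bigr)\\
      &= s\Bigl(s\bigl(s(s(0)) + 0\bigr)\Bigr)\\
      &= s(s(s(s(0)))) = 4.
\end{align*}
```

Read this way, the statement is irrefutable because it unwinds from the definitions alone. In the second, factual sense, "2 apples + 2 apples = 4 apples" is a claim about what happens when physical objects are collected together, and it could in principle fail for objects that merge or split when combined; it is in that sense falsifiable.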
==== Philosophy of language ==== Innovations in the philosophy of language during the 20th century renewed interest in whether mathematics is, as is often said, the language of science. Although some mathematicians and philosophers would accept the statement "mathematics is a language" (most consider that the language of mathematics is a part of mathematics to which mathematics cannot be reduced), linguists believe that the implications of such a statement must be considered. For example, the tools of linguistics are not generally applied to the symbol systems of mathematics, that is, mathematics is studied in a markedly different way from other languages. If mathematics is a language, it is a different type of language from natural languages. Indeed, because of the need for clarity and specificity, the language of mathematics is far more constrained than natural languages studied by linguists. However, the methods developed by Frege and Tarski for the study of mathematical language have been extended greatly by Tarski's student Richard Montague and other linguists working in formal semantics to show that the distinction between mathematical language and natural language may not be as great as it seems. Mohan Ganesalingam has analysed mathematical language using tools from formal linguistics. Ganesalingam notes that some features of natural language are not necessary when analysing mathematical language (such as tense), but many of the same analytical tools can be used (such as context-free grammars). One important difference is that mathematical objects have clearly defined types, which can be explicitly defined in a text: "Effectively, we are allowed to introduce a word in one part of a sentence, and declare its part of speech in another; and this operation has no analogue in natural language.": 251 == Arguments == === Indispensability argument for realism === This argument, associated with Willard Quine and Hilary Putnam, is considered by Stephen Yablo to be one of the most challenging arguments in favor of the acceptance of the existence of abstract mathematical entities, such as numbers and sets. The form of the argument is as follows. One must have ontological commitments to all entities that are indispensable to the best scientific theories, and to those entities only (commonly referred to as "all and only"). Mathematical entities are indispensable to the best scientific theories. Therefore, One must have ontological commitments to mathematical entities. The justification for the first premise is the most controversial. Both Putnam and Quine invoke naturalism to justify the exclusion of all non-scientific entities, and hence to defend the "only" part of "all and only". The assertion that "all" entities postulated in scientific theories, including numbers, should be accepted as real is justified by confirmation holism. Since theories are not confirmed in a piecemeal fashion, but as a whole, there is no justification for excluding any of the entities referred to in well-confirmed theories. This puts the nominalist who wishes to exclude the existence of sets and non-Euclidean geometry, but to include the existence of quarks and other undetectable entities of physics, for example, in a difficult position. === Epistemic argument against realism === The anti-realist "epistemic argument" against Platonism has been made by Paul Benacerraf and Hartry Field. Platonism posits that mathematical objects are abstract entities. 
By general agreement, abstract entities cannot interact causally with concrete, physical entities ("the truth-values of our mathematical assertions depend on facts involving Platonic entities that reside in a realm outside of space-time"). Whilst our knowledge of concrete, physical objects is based on our ability to perceive them, and therefore to causally interact with them, there is no parallel account of how mathematicians come to have knowledge of abstract objects. Another way of making the point is that if the Platonic world were to disappear, it would make no difference to the ability of mathematicians to generate proofs, etc., which is already fully accountable in terms of physical processes in their brains. Field developed his views into fictionalism. Benacerraf also developed the philosophy of mathematical structuralism, according to which there are no mathematical objects. Nonetheless, some versions of structuralism are compatible with some versions of realism. The argument hinges on the idea that a satisfactory naturalistic account of thought processes in terms of brain processes can be given for mathematical reasoning along with everything else. One line of defense is to maintain that this is false, so that mathematical reasoning uses some special intuition that involves contact with the Platonic realm. A modern form of this argument is given by Sir Roger Penrose. Another line of defense is to maintain that abstract objects are relevant to mathematical reasoning in a way that is non-causal, and not analogous to perception. This argument is developed by Jerrold Katz in his 2000 book Realistic Rationalism. A more radical defense is denial of physical reality, i.e. the mathematical universe hypothesis. In that case, a mathematician's knowledge of mathematics is one mathematical object making contact with another. == Aesthetics == Many practicing mathematicians have been drawn to their subject because of a sense of beauty they perceive in it. One sometimes hears the sentiment that mathematicians would like to leave philosophy to the philosophers and get back to mathematics—where, presumably, the beauty lies. In his work on the divine proportion, H.E. Huntley relates the feeling of reading and understanding someone else's proof of a theorem of mathematics to that of a viewer of a masterpiece of art—the reader of a proof has a similar sense of exhilaration at understanding as the original author of the proof, much as, he argues, the viewer of a masterpiece has a sense of exhilaration similar to the original painter or sculptor. Indeed, one can study mathematical and scientific writings as literature. Philip J. Davis and Reuben Hersh have commented that the sense of mathematical beauty is universal amongst practicing mathematicians. By way of example, they provide two proofs of the irrationality of √2. The first is the traditional proof by contradiction, ascribed to Euclid; the second is a more direct proof involving the fundamental theorem of arithmetic that, they argue, gets to the heart of the issue. Davis and Hersh argue that mathematicians find the second proof more aesthetically appealing because it gets closer to the nature of the problem. Paul Erdős was well known for his notion of a hypothetical "Book" containing the most elegant or beautiful mathematical proofs. There is not universal agreement that a result has one "most elegant" proof; Gregory Chaitin has argued against this idea. 
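As a concrete illustration of the contrast Davis and Hersh draw, here is a compressed sketch of the second kind of argument, run through unique prime factorization (one standard way of presenting it; their own wording may differ in detail):

```latex
\begin{align*}
&\text{Suppose } \sqrt{2} = p/q \text{ for positive integers } p, q. \text{ Then } p^2 = 2q^2.\\
&\text{In the prime factorization of } p^2 \text{ the prime } 2 \text{ occurs an even number of times,}\\
&\text{while in that of } 2q^2 \text{ it occurs an odd number of times.}\\
&\text{This contradicts the uniqueness of prime factorization, so } \sqrt{2} \text{ is irrational.}
\end{align*}
```

The contradiction here comes from a single structural fact about the integers, which is the sense in which this argument is said to get closer to the nature of the problem than the classical even/odd case analysis.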
Philosophers have sometimes criticized mathematicians' sense of beauty or elegance as being, at best, vaguely stated. By the same token, however, philosophers of mathematics have sought to characterize what makes one proof more desirable than another when both are logically sound. Another aspect of aesthetics concerning mathematics is mathematicians' views towards the possible uses of mathematics for purposes deemed unethical or inappropriate. The best-known exposition of this view occurs in G. H. Hardy's book A Mathematician's Apology, in which Hardy argues that pure mathematics is superior in beauty to applied mathematics precisely because it cannot be used for war and similar ends. == See also == === Related works === === Historical topics === History and philosophy of science History of mathematics History of philosophy === Journals === Philosophia Mathematica Philosophy of Mathematics Education Journal == Notes == == References == == Further reading == Benacerraf, Paul; Putnam, Hilary, eds. (1983). Philosophy of Mathematics, Selected Readings (2nd ed.). Cambridge University Press. ISBN 9781107268135. Hart, W. D. (1996). Wilbur Dyre Hart (ed.). The Philosophy of Mathematics. Oxford University Press. ISBN 9780198751199. Irvine, A., ed. (2009). The Philosophy of Mathematics. Handbook of the Philosophy of Science. North-Holland Elsevier. ISBN 9780080930589. Körner, Stephan (1960). The Philosophy of Mathematics, An Introduction. Harper Books. OCLC 1054045322. Russell, Bertrand (1993) [1919]. Introduction to Mathematical Philosophy. Routledge. ISBN 9780486277240. OCLC 1097317975. Shapiro, Stewart (2000). Thinking About Mathematics: The Philosophy of Mathematics. Oxford University Press. ISBN 9780192893062. == External links == Philosophy of mathematics at PhilPapers Philosophy of mathematics at the Indiana Philosophy Ontology Project Horsten, Leon. "Philosophy of Mathematics". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. "Philosophy of mathematics". Internet Encyclopedia of Philosophy. Mathematical Structuralism, Internet Encyclopaedia of Philosophy Abstractionism, Internet Encyclopaedia of Philosophy "Ludwig Wittgenstein: Later Philosophy of Mathematics". Internet Encyclopedia of Philosophy. The London Philosophy Study Guide Archived 2009-09-23 at the Wayback Machine offers many suggestions on what to read, depending on the student's familiarity with the subject: Philosophy of Mathematics Archived 2009-06-20 at the Wayback Machine Mathematical Logic Archived 2009-01-25 at the Wayback Machine Set Theory & Further Logic Archived 2009-02-27 at the Wayback Machine R.B. Jones' philosophy of mathematics page Corfield, David. "The Philosophy of Real Mathematics – Blog". Peirce, C.S. (1998). "22. New Elements (Καινα Στοιχεία)". In Peirce Edition Project (ed.). The Essential Peirce, Selected Philosophical Writings. Vol. 2 (1893–1913). Indiana University Press. pp. 300–324. ISBN 9780253007810.
|
https://en.wikipedia.org/wiki/Philosophy_of_mathematics
|
In mathematics, an inequality is a relation which makes a non-equal comparison between two numbers or other mathematical expressions. It is used most often to compare two numbers on the number line by their size. The main types of inequality are less than and greater than (denoted by < and >, respectively the less-than and greater-than signs). == Notation == There are several different notations used to represent different kinds of inequalities: The notation a < b means that a is less than b. The notation a > b means that a is greater than b. In either case, a is not equal to b. These relations are known as strict inequalities, meaning that a is strictly less than or strictly greater than b. Equality is excluded. In contrast to strict inequalities, there are two types of inequality relations that are not strict: The notation a ≤ b or a ⩽ b or a ≦ b means that a is less than or equal to b (or, equivalently, at most b, or not greater than b). The notation a ≥ b or a ⩾ b or a ≧ b means that a is greater than or equal to b (or, equivalently, at least b, or not less than b). In the 17th and 18th centuries, personal notations or typewriting signs were used to signal inequalities. For example, in 1670, John Wallis used a single horizontal bar above rather than below the < and >. Later, in 1734, ≦ and ≧, known as "less than (greater-than) over equal to" or "less than (greater than) or equal to with double horizontal bars", first appeared in Pierre Bouguer's work. After that, mathematicians simplified Bouguer's symbol to "less than (greater than) or equal to with one horizontal bar" (≤), or "less than (greater than) or slanted equal to" (⩽). The relation not greater than can also be represented by a ≯ b , {\displaystyle a\ngtr b,} the symbol for "greater than" bisected by a slash, "not". The same is true for not less than, a ≮ b . {\displaystyle a\nless b.} The notation a ≠ b means that a is not equal to b; this inequation is sometimes considered a form of strict inequality. It does not say that one is greater than the other; it does not even require a and b to be members of an ordered set. In the engineering sciences, a less formal use of the notation is to state that one quantity is "much greater" than another, normally by several orders of magnitude. The notation a ≪ b means that a is much less than b. The notation a ≫ b means that a is much greater than b. This implies that the lesser value can be neglected with little effect on the accuracy of an approximation (such as the case of the ultrarelativistic limit in physics). In all of the cases above, any two symbols mirroring each other are symmetrical; a < b and b > a are equivalent, etc. == Properties on the number line == Inequalities are governed by the following properties. All of these properties also hold if all of the non-strict inequalities (≤ and ≥) are replaced by their corresponding strict inequalities (< and >) and — in the case of applying a function — monotonic functions are limited to strictly monotonic functions. === Converse === The relations ≤ and ≥ are each other's converse, meaning that for any real numbers a and b: a ≤ b if and only if b ≥ a. === Transitivity === The transitive property of inequality states that for any real numbers a, b, c: if a ≤ b and b ≤ c, then a ≤ c. If either of the premises is a strict inequality, then the conclusion is a strict inequality: if a ≤ b and b < c, then a < c, and if a < b and b ≤ c, then a < c. === Addition and subtraction === A common constant c may be added to or subtracted from both sides of an inequality. 
So, for any real numbers a, b, c: if a ≤ b, then a + c ≤ b + c and a − c ≤ b − c. In other words, the inequality relation is preserved under addition (or subtraction) and the real numbers are an ordered group under addition. === Multiplication and division === The properties that deal with multiplication and division state that for any real numbers a, b and non-zero c: if a ≤ b and c > 0, then ac ≤ bc and a/c ≤ b/c, while if a ≤ b and c < 0, then ac ≥ bc and a/c ≥ b/c. In other words, the inequality relation is preserved under multiplication and division with a positive constant, but is reversed when a negative constant is involved. More generally, this applies for an ordered field. For more information, see § Ordered fields. === Additive inverse === The property for the additive inverse states that for any real numbers a and b: if a ≤ b, then −a ≥ −b. === Multiplicative inverse === If both numbers are positive, then the inequality relation between the multiplicative inverses is the opposite of that between the original numbers. More specifically, for any non-zero real numbers a and b that are both positive (or both negative): if a ≤ b, then 1/a ≥ 1/b. All of the cases for the signs of a and b can also be written in chained notation, as follows: if 0 < a ≤ b, then 1/a ≥ 1/b > 0; if a ≤ b < 0, then 0 > 1/a ≥ 1/b; and if a < 0 < b, then 1/a < 0 < 1/b. === Applying a function to both sides === Any monotonically increasing function, by its definition, may be applied to both sides of an inequality without breaking the inequality relation (provided that both expressions are in the domain of that function). However, applying a monotonically decreasing function to both sides of an inequality means the inequality relation would be reversed. The rules for the additive inverse, and the multiplicative inverse for positive numbers, are both examples of applying a monotonically decreasing function. If the inequality is strict (a < b, a > b) and the function is strictly monotonic, then the inequality remains strict. If only one of these conditions is strict, then the resultant inequality is non-strict. In fact, the rules for additive and multiplicative inverses are both examples of applying a strictly monotonically decreasing function. A few examples of this rule are: Raising both sides of an inequality to a power n > 0 (equiv., −n < 0), when a and b are positive real numbers: if 0 < a ≤ b, then a^n ≤ b^n and a^(−n) ≥ b^(−n). Taking the natural logarithm on both sides of an inequality, when a and b are positive real numbers: if 0 < a ≤ b, then ln(a) ≤ ln(b), and if 0 < a < b, then ln(a) < ln(b) (this is true because the natural logarithm is a strictly increasing function). == Formal definitions and generalizations == A (non-strict) partial order is a binary relation ≤ over a set P which is reflexive, antisymmetric, and transitive. That is, for all a, b, and c in P, it must satisfy the three following clauses: a ≤ a (reflexivity) if a ≤ b and b ≤ a, then a = b (antisymmetry) if a ≤ b and b ≤ c, then a ≤ c (transitivity) A set with a partial order is called a partially ordered set. Those are the very basic axioms that every kind of order has to satisfy. A strict partial order is a relation < that satisfies a ≮ a (irreflexivity), if a < b, then b ≮ a (asymmetry), if a < b and b < c, then a < c (transitivity), where ≮ means that < does not hold. Some types of partial orders are specified by adding further axioms, such as: Total order: For every a and b in P, a ≤ b or b ≤ a. Dense order: For all a and b in P for which a < b, there is a c in P such that a < c < b. Least-upper-bound property: Every non-empty subset of P with an upper bound has a least upper bound (supremum) in P. === Ordered fields === If (F, +, ×) is a field and ≤ is a total order on F, then (F, +, ×, ≤) is called an ordered field if and only if: a ≤ b implies a + c ≤ b + c; 0 ≤ a and 0 ≤ b implies 0 ≤ a × b. 
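As an informal sanity check of these two axioms on the rationals (a random spot check on exact values, purely illustrative and of course not a proof), one can test them with Python's standard fractions module; the helper random_rational below is a made-up name for this sketch:

```python
# Randomized spot check (not a proof) of the two ordered-field axioms on Q,
# using exact rational arithmetic from the standard library.
import random
from fractions import Fraction

def random_rational() -> Fraction:
    # Random rational with numerator in [-50, 50] and denominator in [1, 50].
    return Fraction(random.randint(-50, 50), random.randint(1, 50))

for _ in range(10_000):
    a, b, c = (random_rational() for _ in range(3))
    # Axiom 1: a <= b implies a + c <= b + c.
    if a <= b:
        assert a + c <= b + c
    # Axiom 2: 0 <= a and 0 <= b implies 0 <= a * b.
    if 0 <= a and 0 <= b:
        assert 0 <= a * b

print("No counterexamples found in 10,000 random trials.")
```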
Both ( Q , + , × , ≤ ) {\displaystyle (\mathbb {Q} ,+,\times ,\leq )} and ( R , + , × , ≤ ) {\displaystyle (\mathbb {R} ,+,\times ,\leq )} are ordered fields, but ≤ cannot be defined in order to make ( C , + , × , ≤ ) {\displaystyle (\mathbb {C} ,+,\times ,\leq )} an ordered field, because −1 is the square of i and would therefore be positive. Besides being an ordered field, R also has the least-upper-bound property. In fact, R can be defined as the only ordered field with that property. == Chained notation == The notation a < b < c stands for "a < b and b < c", from which, by the transitivity property above, it also follows that a < c. By the above laws, one can add the same number to, or subtract it from, all three terms, or multiply or divide all three terms by the same nonzero number and reverse all inequalities if that number is negative. Hence, for example, a < b + e < c is equivalent to a − e < b < c − e. This notation can be generalized to any number of terms: for instance, a1 ≤ a2 ≤ ... ≤ an means that ai ≤ ai+1 for i = 1, 2, ..., n − 1. By transitivity, this condition is equivalent to ai ≤ aj for any 1 ≤ i ≤ j ≤ n. When solving inequalities using chained notation, it is possible and sometimes necessary to evaluate the terms independently. For instance, to solve the inequality 4x < 2x + 1 ≤ 3x + 2, it is not possible to isolate x in any one part of the inequality through addition or subtraction. Instead, the inequalities must be solved independently, yielding x < 1/2 and x ≥ −1 respectively, which can be combined into the final solution −1 ≤ x < 1/2. Occasionally, chained notation is used with inequalities in different directions, in which case the meaning is the logical conjunction of the inequalities between adjacent terms. For example, the defining condition of a zigzag poset is written as a1 < a2 > a3 < a4 > a5 < a6 > ... . Mixed chained notation is used more often with compatible relations, like <, =, ≤. For instance, a < b = c ≤ d means that a < b, b = c, and c ≤ d. This notation exists in a few programming languages such as Python. In contrast, in programming languages that provide an ordering on the type of comparison results, such as C, even homogeneous chains may have a completely different meaning. == Sharp inequalities == An inequality is said to be sharp if it cannot be relaxed and still be valid in general. Formally, a universally quantified inequality φ is called sharp if, for every valid universally quantified inequality ψ, if ψ ⇒ φ holds, then ψ ⇔ φ also holds. For instance, the inequality ∀a ∈ R. a² ≥ 0 is sharp, whereas the inequality ∀a ∈ R. a² ≥ −1 is not sharp. == Inequalities between means == There are many inequalities between means. 
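One quick way to get a feel for such inequalities is to compute several standard means of a sample list and check their ordering numerically; the sketch below is an illustration only (the general chain and the definitions of the means are stated next), and its final assert uses exactly the Python chained-comparison notation discussed above:

```python
# Numerical illustration of the harmonic <= geometric <= arithmetic <= quadratic
# mean chain on one sample list of positive numbers (an example, not a proof).
from math import prod, sqrt

data = [1.5, 2.0, 4.0, 8.0]
n = len(data)

harmonic = n / sum(1 / x for x in data)
geometric = prod(data) ** (1 / n)
arithmetic = sum(data) / n
quadratic = sqrt(sum(x * x for x in data) / n)

print(harmonic, geometric, arithmetic, quadratic)
# Python reads this chain as a conjunction of the adjacent comparisons.
assert harmonic <= geometric <= arithmetic <= quadratic
```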
For example, for any positive numbers a1, a2, ..., an we have H ≤ G ≤ A ≤ Q , {\displaystyle H\leq G\leq A\leq Q,} where they represent the following means of the sequence: Harmonic mean : H = n 1 a 1 + 1 a 2 + ⋯ + 1 a n {\displaystyle H={\frac {n}{{\frac {1}{a_{1}}}+{\frac {1}{a_{2}}}+\cdots +{\frac {1}{a_{n}}}}}} Geometric mean : G = a 1 ⋅ a 2 ⋯ a n n {\displaystyle G={\sqrt[{n}]{a_{1}\cdot a_{2}\cdots a_{n}}}} Arithmetic mean : A = a 1 + a 2 + ⋯ + a n n {\displaystyle A={\frac {a_{1}+a_{2}+\cdots +a_{n}}{n}}} Quadratic mean : Q = a 1 2 + a 2 2 + ⋯ + a n 2 n {\displaystyle Q={\sqrt {\frac {a_{1}^{2}+a_{2}^{2}+\cdots +a_{n}^{2}}{n}}}} == Cauchy–Schwarz inequality == The Cauchy–Schwarz inequality states that for all vectors u and v of an inner product space it is true that | ⟨ u , v ⟩ | 2 ≤ ⟨ u , u ⟩ ⋅ ⟨ v , v ⟩ , {\displaystyle |\langle \mathbf {u} ,\mathbf {v} \rangle |^{2}\leq \langle \mathbf {u} ,\mathbf {u} \rangle \cdot \langle \mathbf {v} ,\mathbf {v} \rangle ,} where ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } is the inner product. Examples of inner products include the real and complex dot product; In Euclidean space Rn with the standard inner product, the Cauchy–Schwarz inequality is ( ∑ i = 1 n u i v i ) 2 ≤ ( ∑ i = 1 n u i 2 ) ( ∑ i = 1 n v i 2 ) . {\displaystyle {\biggl (}\sum _{i=1}^{n}u_{i}v_{i}{\biggr )}^{2}\leq {\biggl (}\sum _{i=1}^{n}u_{i}^{2}{\biggr )}{\biggl (}\sum _{i=1}^{n}v_{i}^{2}{\biggr )}.} == Power inequalities == A power inequality is an inequality containing terms of the form ab, where a and b are real positive numbers or variable expressions. They often appear in mathematical olympiads exercises. Examples: For any real x, e x ≥ 1 + x . {\displaystyle e^{x}\geq 1+x.} If x > 0 and p > 0, then x p − 1 p ≥ ln ( x ) ≥ 1 − 1 x p p . {\displaystyle {\frac {x^{p}-1}{p}}\geq \ln(x)\geq {\frac {1-{\frac {1}{x^{p}}}}{p}}.} In the limit of p → 0, the upper and lower bounds converge to ln(x). If x > 0, then x x ≥ ( 1 e ) 1 e . {\displaystyle x^{x}\geq \left({\frac {1}{e}}\right)^{\frac {1}{e}}.} If x > 0, then x x x ≥ x . {\displaystyle x^{x^{x}}\geq x.} If x, y, z > 0, then ( x + y ) z + ( x + z ) y + ( y + z ) x > 2. {\displaystyle \left(x+y\right)^{z}+\left(x+z\right)^{y}+\left(y+z\right)^{x}>2.} For any real distinct numbers a and b, e b − e a b − a > e ( a + b ) / 2 . {\displaystyle {\frac {e^{b}-e^{a}}{b-a}}>e^{(a+b)/2}.} If x, y > 0 and 0 < p < 1, then x p + y p > ( x + y ) p . {\displaystyle x^{p}+y^{p}>\left(x+y\right)^{p}.} If x, y, z > 0, then x x y y z z ≥ ( x y z ) ( x + y + z ) / 3 . {\displaystyle x^{x}y^{y}z^{z}\geq \left(xyz\right)^{(x+y+z)/3}.} If a, b > 0, then a a + b b ≥ a b + b a . {\displaystyle a^{a}+b^{b}\geq a^{b}+b^{a}.} If a, b > 0, then a e a + b e b ≥ a e b + b e a . {\displaystyle a^{ea}+b^{eb}\geq a^{eb}+b^{ea}.} If a, b, c > 0, then a 2 a + b 2 b + c 2 c ≥ a 2 b + b 2 c + c 2 a . {\displaystyle a^{2a}+b^{2b}+c^{2c}\geq a^{2b}+b^{2c}+c^{2a}.} If a, b > 0, then a b + b a > 1. {\displaystyle a^{b}+b^{a}>1.} == Well-known inequalities == Mathematicians often use inequalities to bound quantities for which exact formulas cannot be computed easily. Some inequalities are used so often that they have names: == Complex numbers and inequalities == The set of complex numbers C {\displaystyle \mathbb {C} } with its operations of addition and multiplication is a field, but it is impossible to define any relation ≤ so that ( C , + , × , ≤ ) {\displaystyle (\mathbb {C} ,+,\times ,\leq )} becomes an ordered field. 
To make ( C , + , × , ≤ ) {\displaystyle (\mathbb {C} ,+,\times ,\leq )} an ordered field, it would have to satisfy the following two properties: if a ≤ b, then a + c ≤ b + c; if 0 ≤ a and 0 ≤ b, then 0 ≤ ab. Because ≤ is a total order, for any number a, either 0 ≤ a or a ≤ 0 (in which case the first property above implies that 0 ≤ −a). In either case 0 ≤ a²; this means that i² > 0 and 1² > 0; so −1 > 0 and 1 > 0, which means (−1 + 1) > 0; contradiction. However, a relation ≤ can be defined so as to satisfy only the first property (namely, "if a ≤ b, then a + c ≤ b + c"). Sometimes the lexicographical order definition is used: a ≤ b if Re(a) < Re(b), or if Re(a) = Re(b) and Im(a) ≤ Im(b). It can easily be proven that for this definition a ≤ b implies a + c ≤ b + c. == Systems of inequalities == Systems of linear inequalities can be simplified by Fourier–Motzkin elimination. The cylindrical algebraic decomposition is an algorithm that allows testing whether a system of polynomial equations and inequalities has solutions, and, if solutions exist, describing them. The complexity of this algorithm is doubly exponential in the number of variables. It is an active research domain to design algorithms that are more efficient in specific cases. == See also == Binary relation Bracket (mathematics), for the use of similar ‹ and › signs as brackets Inclusion (set theory) Inequation Interval (mathematics) List of inequalities List of triangle inequalities Partially ordered set Relational operators, used in programming languages to denote inequality == References == == Sources == Hardy, G.; Littlewood, J. E.; Pólya, G. (1999). Inequalities. Cambridge Mathematical Library, Cambridge University Press. ISBN 0-521-05206-8. Beckenbach, E. F.; Bellman, R. (1975). An Introduction to Inequalities. Random House Inc. ISBN 0-394-01559-2. Drachman, Byron C.; Cloud, Michael J. (1998). Inequalities: With Applications to Engineering. Springer-Verlag. ISBN 0-387-98404-6. Grinshpan, A. Z. (2005), "General inequalities, consequences, and applications", Advances in Applied Mathematics, 34 (1): 71–100, doi:10.1016/j.aam.2004.05.001 Murray S. Klamkin. "'Quickie' inequalities" (PDF). Math Strategies. Archived (PDF) from the original on 2022-10-09. Arthur Lohwater (1982). "Introduction to Inequalities". Online e-book in PDF format. Harold Shapiro (2005). "Mathematical Problem Solving". The Old Problem Seminar. Kungliga Tekniska högskolan. "3rd USAMO". Archived from the original on 2008-02-03. Pachpatte, B. G. (2005). Mathematical Inequalities. North-Holland Mathematical Library. Vol. 67 (first ed.). Amsterdam, the Netherlands: Elsevier. ISBN 0-444-51795-2. ISSN 0924-6509. MR 2147066. Zbl 1091.26008. Ehrgott, Matthias (2005). Multicriteria Optimization. Springer-Berlin. ISBN 3-540-21398-8. Steele, J. Michael (2004). The Cauchy-Schwarz Master Class: An Introduction to the Art of Mathematical Inequalities. Cambridge University Press. ISBN 978-0-521-54677-5. == External links == "Inequality", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Graph of Inequalities by Ed Pegg, Jr. AoPS Wiki entry about Inequalities
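As a closing numerical illustration of two results discussed above, the chain of means H ≤ G ≤ A ≤ Q and the Cauchy–Schwarz inequality, here is a minimal Python sketch; it also uses the chained comparison syntax mentioned in the section on chained notation. The sample data are arbitrary.

import math
import random

random.seed(0)
a = [random.uniform(0.1, 10.0) for _ in range(20)]   # arbitrary positive numbers
n = len(a)

H = n / sum(1 / x for x in a)                  # harmonic mean
G = math.prod(a) ** (1 / n)                    # geometric mean
A = sum(a) / n                                 # arithmetic mean
Q = math.sqrt(sum(x * x for x in a) / n)       # quadratic mean

# Python's chained comparison expresses H <= G <= A <= Q directly.
assert H <= G <= A <= Q
print(f"H={H:.4f}  G={G:.4f}  A={A:.4f}  Q={Q:.4f}")

# Cauchy-Schwarz in R^n: (sum of u_i v_i)^2 <= (sum of u_i^2)(sum of v_i^2).
u = [random.uniform(-5, 5) for _ in range(n)]
v = [random.uniform(-5, 5) for _ in range(n)]
lhs = sum(ui * vi for ui, vi in zip(u, v)) ** 2
rhs = sum(ui * ui for ui in u) * sum(vi * vi for vi in v)
assert lhs <= rhs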
|
https://en.wikipedia.org/wiki/Inequality_(mathematics)
|
In mathematics, an involution, involutory function, or self-inverse function is a function f that is its own inverse, f(f(x)) = x for all x in the domain of f. Equivalently, applying f twice produces the original value. == General properties == Any involution is a bijection. The identity map is a trivial example of an involution. Examples of nontrivial involutions include negation (x ↦ −x), reciprocation (x ↦ 1/x), and complex conjugation (z ↦ z) in arithmetic; reflection, half-turn rotation, and circle inversion in geometry; complementation in set theory; and reciprocal ciphers such as the ROT13 transformation and the Beaufort polyalphabetic cipher. The composition g ∘ f of two involutions f and g is an involution if and only if they commute: g ∘ f = f ∘ g. == Involutions on finite sets == The number of involutions, including the identity involution, on a set with n = 0, 1, 2, ... elements is given by a recurrence relation found by Heinrich August Rothe in 1800: a 0 = a 1 = 1 {\displaystyle a_{0}=a_{1}=1} and a n = a n − 1 + ( n − 1 ) a n − 2 {\displaystyle a_{n}=a_{n-1}+(n-1)a_{n-2}} for n > 1. {\displaystyle n>1.} The first few terms of this sequence are 1, 1, 2, 4, 10, 26, 76, 232 (sequence A000085 in the OEIS); these numbers are called the telephone numbers, and they also count the number of Young tableaux with a given number of cells. The number an can also be expressed by non-recursive formulas, such as the sum a n = ∑ m = 0 ⌊ n 2 ⌋ n ! 2 m m ! ( n − 2 m ) ! . {\displaystyle a_{n}=\sum _{m=0}^{\lfloor {\frac {n}{2}}\rfloor }{\frac {n!}{2^{m}m!(n-2m)!}}.} The number of fixed points of an involution on a finite set and its number of elements have the same parity. Thus the number of fixed points of all the involutions on a given finite set have the same parity. In particular, every involution on an odd number of elements has at least one fixed point. This can be used to prove Fermat's two squares theorem. == Involution throughout the fields of mathematics == === Real-valued functions === The graph of an involution (on the real numbers) is symmetric across the line y = x. This is due to the fact that the inverse of any general function will be its reflection over the line y = x. This can be seen by "swapping" x with y. If, in particular, the function is an involution, then its graph is its own reflection. Some basic examples of involutions include the functions f ( x ) = a − x , f ( x ) = b x − a + a {\displaystyle {\begin{alignedat}{1}f(x)&=a-x\;,\\f(x)&={\frac {b}{x-a}}+a\end{alignedat}}} Besides, we can construct an involution by wrapping an involution g in a bijection h and its inverse ( h − 1 ∘ g ∘ h {\displaystyle h^{-1}\circ g\circ h} ). For instance : f ( x ) = 1 − x 2 on [ 0 ; 1 ] ( g ( x ) = 1 − x and h ( x ) = x 2 ) , f ( x ) = ln ( e x + 1 e x − 1 ) ( g ( x ) = x + 1 x − 1 = 2 x − 1 + 1 and h ( x ) = e x ) {\displaystyle {\begin{alignedat}{2}f(x)&={\sqrt {1-x^{2}}}\quad {\textrm {on}}\;[0;1]&{\bigl (}g(x)=1-x\quad {\textrm {and}}\quad h(x)=x^{2}{\bigr )},\\f(x)&=\ln \left({\frac {e^{x}+1}{e^{x}-1}}\right)&{\bigl (}g(x)={\frac {x+1}{x-1}}={\frac {2}{x-1}}+1\quad {\textrm {and}}\quad h(x)=e^{x}{\bigr )}\\\end{alignedat}}} === Euclidean geometry === A simple example of an involution of the three-dimensional Euclidean space is reflection through a plane. Performing a reflection twice brings a point back to its original coordinates. Another involution is reflection through the origin; not a reflection in the above sense, and so, a distinct example. 
These transformations are examples of affine involutions. === Projective geometry === An involution is a projectivity of period 2, that is, a projectivity that interchanges pairs of points. Any projectivity that interchanges two points is an involution. The three pairs of opposite sides of a complete quadrangle meet any line (not through a vertex) in three pairs of an involution. This theorem has been called Desargues's Involution Theorem. Its origins can be seen in Lemma IV of the lemmas to the Porisms of Euclid in Volume VII of the Collection of Pappus of Alexandria. If an involution has one fixed point, it has another, and consists of the correspondence between harmonic conjugates with respect to these two points. In this instance the involution is termed "hyperbolic", while if there are no fixed points it is "elliptic". In the context of projectivities, fixed points are called double points. Another type of involution occurring in projective geometry is a polarity, that is, a correlation of period 2. === Linear algebra === In linear algebra, an involution is a linear operator T on a vector space, such that T² = I. Except in characteristic 2, such operators are diagonalizable for a given basis with just 1s and −1s on the diagonal of the corresponding matrix. If the operator is orthogonal (an orthogonal involution), it is orthonormally diagonalizable. For example, suppose that a basis for a vector space V is chosen, and that e₁ and e₂ are basis elements. There exists a linear transformation f that sends e₁ to e₂, and sends e₂ to e₁, and that is the identity on all other basis vectors. It can be checked that f(f(x)) = x for all x in V. That is, f is an involution of V. For a specific basis, any linear operator can be represented by a matrix T. Every matrix has a transpose, obtained by swapping rows for columns. This transposition is an involution on the set of matrices. Since elementwise complex conjugation is an independent involution, the conjugate transpose or Hermitian adjoint is also an involution. The definition of involution extends readily to modules. Given a module M over a ring R, an R-endomorphism f of M is called an involution if f² is the identity homomorphism on M. Involutions are related to idempotents; if 2 is invertible then they correspond in a one-to-one manner. In functional analysis, Banach *-algebras and C*-algebras are special types of Banach algebras with involutions. === Quaternion algebra, groups, semigroups === In a quaternion algebra, an (anti-)involution is defined by the following axioms: if we consider a transformation x ↦ f ( x ) {\displaystyle x\mapsto f(x)} then it is an involution if f ( f ( x ) ) = x {\displaystyle f(f(x))=x} (it is its own inverse), f ( x 1 + x 2 ) = f ( x 1 ) + f ( x 2 ) {\displaystyle f(x_{1}+x_{2})=f(x_{1})+f(x_{2})} and f ( λ x ) = λ f ( x ) {\displaystyle f(\lambda x)=\lambda f(x)} (it is linear), and f ( x 1 x 2 ) = f ( x 1 ) f ( x 2 ) {\displaystyle f(x_{1}x_{2})=f(x_{1})f(x_{2})} An anti-involution does not obey the last axiom but instead satisfies f ( x 1 x 2 ) = f ( x 2 ) f ( x 1 ) {\displaystyle f(x_{1}x_{2})=f(x_{2})f(x_{1})} This latter law is sometimes called antidistributive. It also appears in groups as (xy)⁻¹ = y⁻¹x⁻¹. Taken as an axiom, it leads to the notion of semigroup with involution, of which there are natural examples that are not groups, for example square matrix multiplication (i.e. the full linear monoid) with transpose as the involution.
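The linear-algebra examples above are easy to verify numerically. The following minimal sketch (Python, assuming NumPy is available; the particular matrices are arbitrary illustrations) checks that a basis-swapping matrix and a reflection matrix satisfy T² = I with eigenvalues ±1, and that transposition is an involution on matrices:

import numpy as np

identity = np.eye(3)

# Swap the first two basis vectors and fix the third: an involution.
T_swap = np.array([[0.0, 1.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])

# Reflection through the plane z = 0: another involution.
T_reflect = np.diag([1.0, 1.0, -1.0])

for T in (T_swap, T_reflect):
    assert np.allclose(T @ T, identity)          # T squared is the identity
    print(np.sort(np.linalg.eigvals(T).real))    # eigenvalues are -1 and +1

# Transposition is an involution on matrices: transposing twice gives the matrix back.
A = np.arange(9.0).reshape(3, 3)
assert np.array_equal(A.T.T, A)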
=== Ring theory === In ring theory, the word involution is customarily taken to mean an antihomomorphism that is its own inverse function. Examples of involutions in common rings include complex conjugation on the complex plane (and its analogue in the split-complex numbers) and taking the transpose in a matrix ring. === Group theory === In group theory, an element of a group is an involution if it has order 2; that is, an involution is an element a such that a ≠ e and a² = e, where e is the identity element. Originally, this definition agreed with the first definition above, since members of groups were always bijections from a set into itself; that is, group was taken to mean permutation group. By the end of the 19th century, group was defined more broadly, and accordingly so was involution. A permutation is an involution if and only if it can be written as a finite product of disjoint transpositions. The involutions of a group have a large impact on the group's structure. The study of involutions was instrumental in the classification of finite simple groups. An element x of a group G is called strongly real if there is an involution t with xᵗ = x⁻¹ (where xᵗ = t⁻¹ ⋅ x ⋅ t). Coxeter groups are groups generated by a set S of involutions subject only to relations involving powers of pairs of elements of S. Coxeter groups can be used, among other things, to describe the possible regular polyhedra and their generalizations to higher dimensions. === Mathematical logic === The operation of complement in Boolean algebras is an involution. Accordingly, negation in classical logic satisfies the law of double negation: ¬¬A is equivalent to A. Generally in non-classical logics, negation that satisfies the law of double negation is called involutive. In algebraic semantics, such a negation is realized as an involution on the algebra of truth values. Examples of logics that have involutive negation are Kleene and Bochvar three-valued logics, Łukasiewicz many-valued logic, the fuzzy logic 'involutive monoidal t-norm logic' (IMTL), etc. Involutive negation is sometimes added as an additional connective to logics with non-involutive negation; this is usual, for example, in t-norm fuzzy logics. The involutiveness of negation is an important characterization property for logics and the corresponding varieties of algebras. For instance, involutive negation characterizes Boolean algebras among Heyting algebras. Correspondingly, classical Boolean logic arises by adding the law of double negation to intuitionistic logic. The same relationship also holds between MV-algebras and BL-algebras (and so correspondingly between Łukasiewicz logic and fuzzy logic BL), IMTL and MTL, and other pairs of important varieties of algebras (respectively, corresponding logics). In the study of binary relations, every relation has a converse relation. Since the converse of the converse is the original relation, the conversion operation is an involution on the category of relations. Binary relations are ordered through inclusion. While this ordering is reversed with the complementation involution, it is preserved under conversion. === Computer science === The XOR bitwise operation with a given value for one parameter is an involution on the other parameter. XOR masks were in some instances used to draw graphics on images in such a way that drawing them twice on the background reverts the background to its original state.
Two special cases of this, which are also involutions, are the bitwise NOT operation, which is XOR with an all-ones value, and stream cipher encryption, which is an XOR with a secret keystream. This predates binary computers; practically all mechanical cipher machines implement a reciprocal cipher, an involution on each typed-in letter. Instead of designing two kinds of machines, one for encrypting and one for decrypting, all the machines can be identical and can be set up (keyed) the same way. Another involution used in computers is an order-2 bitwise permutation. For example, a color value stored as integers in the form (R, G, B) could have its R and B components exchanged, yielding the form (B, G, R); applying the exchange twice restores the original: f(f(RGB)) = RGB and f(f(BGR)) = BGR. === Physics === The Legendre transformation, which converts between the Lagrangian and the Hamiltonian, is an involutive operation. Integrability, a central notion of physics and in particular of the subfield of integrable systems, is closely related to involution, for example in the context of Kramers–Wannier duality. == See also == Atbash Automorphism Idempotence ROT13 == References == == Further reading == Ell, Todd A.; Sangwine, Stephen J. (2007). "Quaternion involutions and anti-involutions". Computers & Mathematics with Applications. 53 (1): 137–143. arXiv:math/0506034. doi:10.1016/j.camwa.2006.10.029. S2CID 45639619. Knus, Max-Albert; Merkurjev, Alexander; Rost, Markus; Tignol, Jean-Pierre (1998), The book of involutions, Colloquium Publications, vol. 44, With a preface by J. Tits, Providence, RI: American Mathematical Society, ISBN 0-8218-0904-0, Zbl 0955.16001 "Involution", Encyclopedia of Mathematics, EMS Press, 2001 [1994] == External links == Media related to Involution at Wikimedia Commons
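The computer-science examples described above, XOR masking, bitwise NOT, and order-2 component permutations such as (R, G, B) ↦ (B, G, R), can each be checked in a few lines of Python. A minimal sketch (the mask and colour values are arbitrary):

# XOR with a fixed mask is an involution: applying it twice restores the input.
mask = 0b10110110
data = 0b01011100
assert (data ^ mask) ^ mask == data

# Bitwise NOT on 8-bit values is XOR with an all-ones mask, also an involution.
value = 0xA7
assert (value ^ 0xFF) ^ 0xFF == value

# Swapping the R and B channels of an (R, G, B) triple is an order-2 permutation.
def swap_rb(color):
    r, g, b = color
    return (b, g, r)

rgb = (200, 120, 30)
assert swap_rb(swap_rb(rgb)) == rgb
print(swap_rb(rgb))  # (30, 120, 200)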
|
https://en.wikipedia.org/wiki/Involution_(mathematics)
|
In mathematics, a transformation, transform, or self-map is a function f, usually with some geometrical underpinning, that maps a set X to itself, i.e. f: X → X. Examples include linear transformations of vector spaces and geometric transformations, which include projective transformations, affine transformations, and specific affine transformations, such as rotations, reflections and translations. == Partial transformations == While it is common to use the term transformation for any function of a set into itself (especially in terms like "transformation semigroup" and similar), there is an alternative convention in which the term "transformation" is reserved only for bijections. When such a narrower notion of transformation is generalized to partial functions, a partial transformation is a function f: A → B, where both A and B are subsets of some set X. == Algebraic structures == The set of all transformations on a given base set, together with function composition, forms a regular semigroup. == Combinatorics == For a finite set of cardinality n, there are nⁿ transformations and (n + 1)ⁿ partial transformations. == See also == Coordinate transformation Data transformation (statistics) Geometric transformation Infinitesimal transformation Linear transformation List of transforms Rigid transformation Transformation geometry Transformation semigroup Transformation group Transformation matrix == References == == External links == Media related to Transformation (function) at Wikimedia Commons
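As a brute-force check of the counts stated in the combinatorics section above, the following minimal Python sketch enumerates all transformations and partial transformations of small sets (the cutoff at four elements is arbitrary):

from itertools import product

def count_transformations(n):
    # Functions from an n-element set to itself, counted by direct enumeration.
    return sum(1 for _ in product(range(n), repeat=n))

def count_partial_transformations(n):
    # Partial functions: each element maps to one of the n values or is undefined,
    # with "undefined" modelled here by None.
    return sum(1 for _ in product(list(range(n)) + [None], repeat=n))

for n in range(1, 5):
    assert count_transformations(n) == n ** n
    assert count_partial_transformations(n) == (n + 1) ** n
    print(n, n ** n, (n + 1) ** n)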
|
https://en.wikipedia.org/wiki/Transformation_(function)
|
In mathematics, a surface is a mathematical model of the common concept of a surface. It is a generalization of a plane, but, unlike a plane, it may be curved; this is analogous to a curve generalizing a straight line. There are several more precise definitions, depending on the context and the mathematical tools that are used for the study. The simplest mathematical surfaces are planes and spheres in the Euclidean 3-space. The exact definition of a surface may depend on the context. Typically, in algebraic geometry, a surface may cross itself (and may have other singularities), while, in topology and differential geometry, it may not. A surface is a topological space of dimension two; this means that a moving point on a surface may move in two directions (it has two degrees of freedom). In other words, around almost every point, there is a coordinate patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles (ideally) a sphere, and latitude and longitude provide two-dimensional coordinates on it (except at the poles and along the 180th meridian). == Definitions == Often, a surface is defined by equations that are satisfied by the coordinates of its points. This is the case of the graph of a continuous function of two variables. The set of the zeros of a function of three variables is a surface, which is called an implicit surface. If the defining three-variate function is a polynomial, the surface is an algebraic surface. For example, the unit sphere is an algebraic surface, as it may be defined by the implicit equation x 2 + y 2 + z 2 − 1 = 0. {\displaystyle x^{2}+y^{2}+z^{2}-1=0.} A surface may also be defined as the image, in some space of dimension at least 3, of a continuous function of two variables (some further conditions are required to ensure that the image is not a curve). In this case, one says that one has a parametric surface, which is parametrized by these two variables, called parameters. For example, the unit sphere may be parametrized by the Euler angles, also called longitude u and latitude v by x = cos ( u ) cos ( v ) y = sin ( u ) cos ( v ) z = sin ( v ) . {\displaystyle {\begin{aligned}x&=\cos(u)\cos(v)\\y&=\sin(u)\cos(v)\\z&=\sin(v)\,.\end{aligned}}} Parametric equations of surfaces are often irregular at some points. For example, all but two points of the unit sphere, are the image, by the above parametrization, of exactly one pair of Euler angles (modulo 2π). For the remaining two points (the north and south poles), one has cos v = 0, and the longitude u may take any values. Also, there are surfaces for which there cannot exist a single parametrization that covers the whole surface. Therefore, one often considers surfaces which are parametrized by several parametric equations, whose images cover the surface. This is formalized by the concept of manifold: in the context of manifolds, typically in topology and differential geometry, a surface is a manifold of dimension two; this means that a surface is a topological space such that every point has a neighborhood which is homeomorphic to an open subset of the Euclidean plane (see Surface (topology) and Surface (differential geometry)). This allows defining surfaces in spaces of dimension higher than three, and even abstract surfaces, which are not contained in any other space. On the other hand, this excludes surfaces that have singularities, such as the vertex of a conical surface or points where a surface crosses itself. 
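The Euler-angle parametrization of the unit sphere given above is easy to check numerically. The following minimal Python sketch (the parameter grid is arbitrary) verifies that every image point satisfies x² + y² + z² = 1 and that the whole line v = π/2 collapses to the north pole, where the longitude u becomes irrelevant:

import math

def sphere_point(u, v):
    # Euler-angle parametrization: longitude u, latitude v.
    return (math.cos(u) * math.cos(v),
            math.sin(u) * math.cos(v),
            math.sin(v))

# Every parameter pair lands on the unit sphere.
for i in range(12):
    for j in range(12):
        u = 2 * math.pi * i / 12
        v = -math.pi / 2 + math.pi * j / 11
        x, y, z = sphere_point(u, v)
        assert abs(x * x + y * y + z * z - 1.0) < 1e-12

# At the north pole (v = pi/2) the longitude u is irrelevant: up to rounding,
# every value of u gives the same point (0, 0, 1).
north = {tuple(round(c, 9) for c in sphere_point(u, math.pi / 2))
         for u in (0.0, 1.0, 2.0, 3.0)}
assert len(north) == 1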
In classical geometry, a surface is generally defined as a locus of a point or a line. For example, a sphere is the locus of a point which is at a given distance from a fixed point, called the center; a conical surface is the locus of a line passing through a fixed point and crossing a curve; a surface of revolution is the locus of a curve rotating around a line. A ruled surface is the locus of a moving line satisfying some constraints; in modern terminology, a ruled surface is a surface that is a union of lines. == Terminology == There are several kinds of surfaces that are considered in mathematics. An unambiguous terminology is thus necessary to distinguish them when needed. A topological surface is a surface that is a manifold of dimension two (see § Topological surface). A differentiable surface is a surface that is a differentiable manifold (see § Differentiable surface). Every differentiable surface is a topological surface, but the converse is false. A "surface" is often implicitly supposed to be contained in a Euclidean space of dimension 3, typically R³. A surface that is contained in a projective space is called a projective surface (see § Projective surface). A surface that is not supposed to be included in another space is called an abstract surface. == Examples == The graph of a continuous function of two variables, defined over a connected open subset of R², is a topological surface. If the function is differentiable, the graph is a differentiable surface. A plane is both an algebraic surface and a differentiable surface. It is also a ruled surface and a surface of revolution. A circular cylinder (that is, the locus of a line crossing a circle and parallel to a given direction) is an algebraic surface and a differentiable surface. A circular cone (locus of a line crossing a circle, and passing through a fixed point, the apex, which is outside the plane of the circle) is an algebraic surface which is not a differentiable surface. If one removes the apex, the remainder of the cone is the union of two differentiable surfaces. The surface of a polyhedron is a topological surface, which is neither a differentiable surface nor an algebraic surface. A hyperbolic paraboloid (the graph of the function z = xy) is a differentiable surface and an algebraic surface. It is also a ruled surface, and, for this reason, is often used in architecture. A two-sheet hyperboloid is an algebraic surface and the union of two non-intersecting differentiable surfaces. == Parametric surface == A parametric surface is the image of an open subset of the Euclidean plane (typically R 2 {\displaystyle \mathbb {R} ^{2}} ) by a continuous function, in a topological space, generally a Euclidean space of dimension at least three. Usually the function is supposed to be continuously differentiable, and this will always be the case in this article. Specifically, a parametric surface in R 3 {\displaystyle \mathbb {R} ^{3}} is given by three functions of two variables u and v, called parameters x = f 1 ( u , v ) , y = f 2 ( u , v ) , z = f 3 ( u , v ) . 
{\displaystyle {\begin{aligned}x&=f_{1}(u,v),\\[4pt]y&=f_{2}(u,v),\\[4pt]z&=f_{3}(u,v)\,.\end{aligned}}} As the image of such a function may be a curve (for example, if the three functions are constant with respect to v), a further condition is required, generally that, for almost all values of the parameters, the Jacobian matrix [ ∂ f 1 ∂ u ∂ f 1 ∂ v ∂ f 2 ∂ u ∂ f 2 ∂ v ∂ f 3 ∂ u ∂ f 3 ∂ v ] {\displaystyle {\begin{bmatrix}{\dfrac {\partial f_{1}}{\partial u}}&{\dfrac {\partial f_{1}}{\partial v}}\\[6pt]{\dfrac {\partial f_{2}}{\partial u}}&{\dfrac {\partial f_{2}}{\partial v}}\\[6pt]{\dfrac {\partial f_{3}}{\partial u}}&{\dfrac {\partial f_{3}}{\partial v}}\end{bmatrix}}} has rank two. Here "almost all" means that the values of the parameters where the rank is two contain a dense open subset of the range of the parametrization. For surfaces in a space of higher dimension, the condition is the same, except for the number of columns of the Jacobian matrix. === Tangent plane and normal vector === A point p where the above Jacobian matrix has rank two is called regular, or, more properly, the parametrization is called regular at p. The tangent plane at a regular point p is the unique plane passing through p and having a direction parallel to the two row vectors of the Jacobian matrix. The tangent plane is an affine concept, because its definition is independent of the choice of a metric. In other words, any affine transformation maps the tangent plane to the surface at a point to the tangent plane to the image of the surface at the image of the point. The normal line at a point of a surface is the unique line passing through the point and perpendicular to the tangent plane; a normal vector is a vector which is parallel to the normal line. For other differential invariants of surfaces, in the neighborhood of a point, see Differential geometry of surfaces. === Irregular point and singular point === A point of a parametric surface which is not regular is irregular. There are several kinds of irregular points. It may occur that an irregular point becomes regular, if one changes the parametrization. This is the case of the poles in the parametrization of the unit sphere by Euler angles: it suffices to permute the role of the different coordinate axes for changing the poles. On the other hand, consider the circular cone of parametric equation x = t cos ( u ) y = t sin ( u ) z = t . {\displaystyle {\begin{aligned}x&=t\cos(u)\\y&=t\sin(u)\\z&=t\,.\end{aligned}}} The apex of the cone is the origin (0, 0, 0), and is obtained for t = 0. It is an irregular point that remains irregular, whichever parametrization is chosen (otherwise, there would exist a unique tangent plane). Such an irregular point, where the tangent plane is undefined, is said singular. There is another kind of singular points. There are the self-crossing points, that is the points where the surface crosses itself. In other words, these are the points which are obtained for (at least) two different values of the parameters. === Graph of a bivariate function === Let z = f(x, y) be a function of two real variables, a bivariate function. This is a parametric surface, parametrized as x = t y = u z = f ( t , u ) . {\displaystyle {\begin{aligned}x&=t\\y&=u\\z&=f(t,u)\,.\end{aligned}}} Every point of this surface is regular, as the two first columns of the Jacobian matrix form the identity matrix of rank two. === Rational surface === A rational surface is a surface that may be parametrized by rational functions of two variables. 
That is, if fi(t, u) are, for i = 0, 1, 2, 3, polynomials in two indeterminates, then the parametric surface, defined by x = f 1 ( t , u ) f 0 ( t , u ) , y = f 2 ( t , u ) f 0 ( t , u ) , z = f 3 ( t , u ) f 0 ( t , u ) , {\displaystyle {\begin{aligned}x&={\frac {f_{1}(t,u)}{f_{0}(t,u)}},\\[6pt]y&={\frac {f_{2}(t,u)}{f_{0}(t,u)}},\\[6pt]z&={\frac {f_{3}(t,u)}{f_{0}(t,u)}}\,,\end{aligned}}} is a rational surface. A rational surface is an algebraic surface, but most algebraic surfaces are not rational. == Implicit surface == An implicit surface in a Euclidean space (or, more generally, in an affine space) of dimension 3 is the set of the common zeros of a differentiable function of three variables f ( x , y , z ) = 0. {\displaystyle f(x,y,z)=0.} Implicit means that the equation defines implicitly one of the variables as a function of the other variables. This is made more exact by the implicit function theorem: if f(x0, y0, z0) = 0, and the partial derivative in z of f is not zero at (x0, y0, z0), then there exists a differentiable function φ(x, y) such that f ( x , y , φ ( x , y ) ) = 0 {\displaystyle f(x,y,\varphi (x,y))=0} in a neighbourhood of (x0, y0, z0). In other words, the implicit surface is the graph of a function near a point of the surface where the partial derivative in z is nonzero. An implicit surface has thus, locally, a parametric representation, except at the points of the surface where the three partial derivatives are zero. === Regular points and tangent plane === A point of the surface where at least one partial derivative of f is nonzero is called regular. At such a point ( x 0 , y 0 , z 0 ) {\displaystyle (x_{0},y_{0},z_{0})} , the tangent plane and the direction of the normal are well defined, and may be deduced, with the implicit function theorem from the definition given above, in § Tangent plane and normal vector. The direction of the normal is the gradient, that is the vector [ ∂ f ∂ x ( x 0 , y 0 , z 0 ) , ∂ f ∂ y ( x 0 , y 0 , z 0 ) , ∂ f ∂ z ( x 0 , y 0 , z 0 ) ] . {\displaystyle \left[{\frac {\partial f}{\partial x}}(x_{0},y_{0},z_{0}),{\frac {\partial f}{\partial y}}(x_{0},y_{0},z_{0}),{\frac {\partial f}{\partial z}}(x_{0},y_{0},z_{0})\right].} The tangent plane is defined by its implicit equation ∂ f ∂ x ( x 0 , y 0 , z 0 ) ( x − x 0 ) + ∂ f ∂ y ( x 0 , y 0 , z 0 ) ( y − y 0 ) + ∂ f ∂ z ( x 0 , y 0 , z 0 ) ( z − z 0 ) = 0. {\displaystyle {\frac {\partial f}{\partial x}}(x_{0},y_{0},z_{0})(x-x_{0})+{\frac {\partial f}{\partial y}}(x_{0},y_{0},z_{0})(y-y_{0})+{\frac {\partial f}{\partial z}}(x_{0},y_{0},z_{0})(z-z_{0})=0.} === Singular point === A singular point of an implicit surface (in R 3 {\displaystyle \mathbb {R} ^{3}} ) is a point of the surface where the implicit equation holds and the three partial derivatives of its defining function are all zero. Therefore, the singular points are the solutions of a system of four equations in three indeterminates. As most such systems have no solution, many surfaces do not have any singular point. A surface with no singular point is called regular or non-singular. The study of surfaces near their singular points and the classification of the singular points is singularity theory. A singular point is isolated if there is no other singular point in a neighborhood of it. Otherwise, the singular points may form a curve. This is in particular the case for self-crossing surfaces. 
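The gradient criterion just described can be checked on the two implicit surfaces already used as examples: the unit sphere x² + y² + z² − 1 = 0, which has no singular points, and the cone x² + y² − z² = 0, whose apex is singular. A minimal Python sketch with hand-coded partial derivatives:

def grad_sphere(x, y, z):
    # Gradient of f(x, y, z) = x^2 + y^2 + z^2 - 1 (the unit sphere).
    return (2 * x, 2 * y, 2 * z)

def grad_cone(x, y, z):
    # Gradient of g(x, y, z) = x^2 + y^2 - z^2 (a circular cone).
    return (2 * x, 2 * y, -2 * z)

# At the regular point (0, 0, 1) of the sphere, the gradient is nonzero and gives
# the direction of the normal; the tangent plane there is the plane z = 1.
p = (0.0, 0.0, 1.0)
assert p[0] ** 2 + p[1] ** 2 + p[2] ** 2 - 1 == 0
gx, gy, gz = grad_sphere(*p)
assert (gx, gy, gz) != (0.0, 0.0, 0.0)
# The direction (1, 0, 0) is orthogonal to the gradient, so it lies in the tangent plane.
assert gx * 1 + gy * 0 + gz * 0 == 0

# The apex (0, 0, 0) of the cone satisfies the equation, yet all three partial
# derivatives vanish there: it is a singular point.
assert grad_cone(0.0, 0.0, 0.0) == (0.0, 0.0, 0.0)

# Every other point of the cone, e.g. (3, 4, 5), is regular.
assert 3.0 ** 2 + 4.0 ** 2 - 5.0 ** 2 == 0
assert grad_cone(3.0, 4.0, 5.0) != (0.0, 0.0, 0.0)
print("normal direction at (0, 0, 1):", (gx, gy, gz))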
== Algebraic surface == Originally, an algebraic surface was a surface which could be defined by an implicit equation f ( x , y , z ) = 0 , {\displaystyle f(x,y,z)=0,} where f is a polynomial in three indeterminates, with real coefficients. The concept has been extended in several directions, by defining surfaces over arbitrary fields, and by considering surfaces in spaces of arbitrary dimension or in projective spaces. Abstract algebraic surfaces, which are not explicitly embedded in another space, are also considered. === Surfaces over arbitrary fields === Polynomials with coefficients in any field are accepted for defining an algebraic surface. However, the field of coefficients of a polynomial is not well defined, as, for example, a polynomial with rational coefficients may also be considered as a polynomial with real or complex coefficients. Therefore, the concept of point of the surface has been generalized in the following way. Given a polynomial f(x, y, z), let k be the smallest field containing the coefficients, and K be an algebraically closed extension of k, of infinite transcendence degree. Then a point of the surface is an element of K3 which is a solution of the equation f ( x , y , z ) = 0. {\displaystyle f(x,y,z)=0.} If the polynomial has real coefficients, the field K is the complex field, and a point of the surface that belongs to R 3 {\displaystyle \mathbb {R} ^{3}} (a usual point) is called a real point. A point that belongs to k3 is called rational over k, or simply a rational point, if k is the field of rational numbers. === Projective surface === A projective surface in a projective space of dimension three is the set of points whose homogeneous coordinates are zeros of a single homogeneous polynomial in four variables. More generally, a projective surface is a subset of a projective space, which is a projective variety of dimension two. Projective surfaces are strongly related to affine surfaces (that is, ordinary algebraic surfaces). One passes from a projective surface to the corresponding affine surface by setting to one some coordinate or indeterminate of the defining polynomials (usually the last one). Conversely, one passes from an affine surface to its associated projective surface (called projective completion) by homogenizing the defining polynomial (in case of surfaces in a space of dimension three), or by homogenizing all polynomials of the defining ideal (for surfaces in a space of higher dimension). === In higher dimensional spaces === One cannot define the concept of an algebraic surface in a space of dimension higher than three without a general definition of an algebraic variety and of the dimension of an algebraic variety. In fact, an algebraic surface is an algebraic variety of dimension two. More precisely, an algebraic surface in a space of dimension n is the set of the common zeros of at least n – 2 polynomials, but these polynomials must satisfy further conditions that may be not immediate to verify. Firstly, the polynomials must not define a variety or an algebraic set of higher dimension, which is typically the case if one of the polynomials is in the ideal generated by the others. Generally, n – 2 polynomials define an algebraic set of dimension two or higher. If the dimension is two, the algebraic set may have several irreducible components. If there is only one component the n – 2 polynomials define a surface, which is a complete intersection. 
If there are several components, then one needs further polynomials for selecting a specific component. Most authors consider as an algebraic surface only algebraic varieties of dimension two, but some also consider as surfaces all algebraic sets whose irreducible components have the dimension two. In the case of surfaces in a space of dimension three, every surface is a complete intersection, and a surface is defined by a single polynomial, which is irreducible or not, depending on whether non-irreducible algebraic sets of dimension two are considered as surfaces or not. == Topological surface == In topology, a surface is generally defined as a manifold of dimension two. This means that a topological surface is a topological space such that every point has a neighborhood that is homeomorphic to an open subset of a Euclidean plane. Every topological surface is homeomorphic to a polyhedral surface such that all facets are triangles. The combinatorial study of such arrangements of triangles (or, more generally, of higher-dimensional simplexes) is the starting object of algebraic topology. This allows the characterization of the properties of surfaces in terms of purely algebraic invariants, such as the genus and homology groups. The homeomorphism classes of surfaces have been completely described (see Surface (topology)). == Differentiable surface == == Fractal surface == == In computer graphics == == See also == Area element, the area of a differential element of a surface Coordinate surfaces Hypersurface Perimeter, a two-dimensional equivalent Polyhedral surface Shape Signed distance function Solid figure Surface area Surface patch Surface integral == Footnotes == == Notes == == Sources == Gauss, Carl Friedrich (1902), General Investigations of Curved Surfaces of 1825 and 1827, Princeton University Library
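The passage from an affine surface to its projective completion described above amounts to homogenizing the defining polynomial. The following minimal Python sketch performs that operation on a simple dictionary representation of polynomials (the representation and the example are illustrative choices):

def homogenize(poly):
    # poly maps exponent tuples in (x, y, z) to coefficients; the result appends
    # an exponent for the homogenizing variable w so that every monomial reaches
    # the total degree of the polynomial.
    degree = max(sum(exps) for exps in poly)
    return {exps + (degree - sum(exps),): c for exps, c in poly.items()}

# The unit sphere x^2 + y^2 + z^2 - 1 = 0 ...
sphere = {(2, 0, 0): 1, (0, 2, 0): 1, (0, 0, 2): 1, (0, 0, 0): -1}

# ... has projective completion x^2 + y^2 + z^2 - w^2 = 0.
print(homogenize(sphere))
# {(2, 0, 0, 0): 1, (0, 2, 0, 0): 1, (0, 0, 2, 0): 1, (0, 0, 0, 2): -1}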
|
https://en.wikipedia.org/wiki/Surface_(mathematics)
|
In mathematics, the term undefined refers to a value, function, or other expression that cannot be assigned a meaning within a specific formal system. Attempting to assign or use an undefined value within a particular formal system may produce contradictory or meaningless results within that system. In practice, mathematicians may use the term undefined to warn that a particular calculation or property can produce mathematically inconsistent results, and therefore should be avoided. Caution must be taken to avoid the use of such undefined values in a deduction or proof. Whether a particular function or value is undefined depends on the rules of the formal system in which it is used. For example, the imaginary number − 1 {\displaystyle {\sqrt {-1}}} is undefined within the set of real numbers. So it is meaningless to reason about the value solely within the discourse of real numbers. However, defining the imaginary number i {\displaystyle i} to be equal to − 1 {\displaystyle {\sqrt {-1}}} allows there to be a consistent system of mathematics known as the complex numbers. Therefore, within the discourse of complex numbers, − 1 {\displaystyle {\sqrt {-1}}} is in fact defined. Many new fields of mathematics have been created by taking previously undefined functions and values and assigning them new meanings. Mathematicians generally consider these innovations significant to the extent that they are both internally consistent and practically useful. For example, Ramanujan summation may seem unintuitive, as it works upon divergent series that assign finite values to apparently infinite sums such as 1 + 2 + 3 + 4 + ⋯. However, Ramanujan summation is useful for modelling a number of real-world phenomena, including the Casimir effect and bosonic string theory. A function may be said to be undefined outside of its domain. As one example, f ( x ) = 1 x {\textstyle f(x)={\frac {1}{x}}} is undefined when x = 0 {\displaystyle x=0} . As division by zero is undefined in algebra, x = 0 {\displaystyle x=0} is not part of the domain of f ( x ) {\displaystyle f(x)} . == Other shades of meaning == In some mathematical contexts, undefined can refer to a primitive notion which is not defined in terms of simpler concepts. For example, in Elements, Euclid defines a point merely as "that of which there is no part", and a line merely as "length without breadth". Although these terms are not further defined, Euclid uses them to construct more complex geometric concepts. Contrast also the term undefined behavior in computer science, where the term indicates that a function may produce or return any result, which may or may not be correct. == Common examples of undefined expressions == Many fields of mathematics refer to various kinds of expressions as undefined. Therefore, the following examples of undefined expressions are not exhaustive. === Division by zero === In arithmetic, and therefore algebra, division by zero is undefined. Use of a division by zero in an arithmetical calculation or proof can produce absurd or meaningless results. Assuming that division by zero exists can produce inconsistent logical results, such as the following fallacious "proof" that one is equal to two: let x = y; then x² = xy; subtracting y² from both sides gives x² − y² = xy − y², which factors as (x + y)(x − y) = y(x − y); dividing both sides by x − y gives x + y = y, and since x = y, it follows that 2y = y, hence 2 = 1. The above "proof" is not meaningful. Since we know that x = y, if we divide both sides of the equation by x − y, we divide both sides of the equation by zero. 
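The fallacy can be made concrete by carrying out the same steps numerically; the division step is exactly where the computation breaks down. A minimal Python sketch (the starting value is arbitrary):

x = y = 5                      # start from x = y

lhs = x * x - y * y            # x^2 - y^2, which factors as (x + y)(x - y)
rhs = x * y - y * y            # xy - y^2, which factors as y(x - y)
assert lhs == rhs == 0         # both sides are zero, since x = y

# The fallacious step divides both sides by (x - y), which is zero here.
try:
    print(lhs // (x - y), rhs // (x - y))
except ZeroDivisionError as err:
    print("invalid step:", err)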
This operation is undefined in arithmetic, and therefore deductions based on division by zero can be contradictory. If we assume that a non-zero answer n {\displaystyle n} exists, when some number k ∣ k ≠ 0 {\displaystyle k\mid k\neq 0} is divided by zero, then that would imply that k = n × 0 {\displaystyle k=n\times 0} . But there is no number, which when multiplied by zero, produces a number that is not zero. Therefore, our assumption is incorrect. === Zero to the power of zero === Depending on the particular context, mathematicians may refer to zero to the power of zero as undefined, indefinite, or equal to 1. Controversy exists as to which definitions are mathematically rigorous, and under what conditions. === The square root of a negative number === When restricted to the field of real numbers, the square root of a negative number is undefined, as no real number exists which, when squared, equals a negative number. Mathematicians, including Gerolamo Cardano, John Wallis, Leonhard Euler, and Carl Friedrich Gauss, explored formal definitions for the square roots of negative numbers, giving rise to the field of complex analysis. === In trigonometry === In trigonometry, for all n ∈ Z {\displaystyle n\in \mathbb {Z} } , the functions tan θ {\displaystyle \tan \theta } and sec θ {\displaystyle \sec \theta } are undefined for θ = π ( n − 1 2 ) {\textstyle \theta =\pi \left(n-{\frac {1}{2}}\right)} , while the functions cot θ {\displaystyle \cot \theta } and csc θ {\displaystyle \csc \theta } are undefined for all θ = π n {\displaystyle \theta =\pi n} . This is a consequence of the identities of these functions, which would imply a division by zero at those points. Also, arcsin k {\displaystyle \arcsin k} and arccos k {\displaystyle \arccos k} are both undefined when k > 1 {\displaystyle k>1} or k < − 1 {\displaystyle k<-1} , because the range of the sin {\displaystyle \sin } and cos {\displaystyle \cos } functions is between − 1 {\displaystyle -1} and 1 {\displaystyle 1} inclusive. === In complex analysis === In complex analysis, a point z {\displaystyle z} on the complex plane where a holomorphic function is undefined, is called a singularity. Some different types of singularities include: Removable singularities - in which the function can be extended holomorphically to z {\displaystyle z} Poles - in which the function can be extended meromorphically to z {\displaystyle z} Essential singularities - in which no meromorphic extension to z {\displaystyle z} can exist == Related terms == === Indeterminate === The term undefined should be contrasted with the term indeterminate. In the first case, undefined generally indicates that a value or property can have no meaningful definition. In the second case, indeterminate generally indicates that a value or property can have many meaningful definitions. Additionally, it seems to be generally accepted that undefined values may not be safely used within a particular formal system, whereas indeterminate values might be, depending on the relevant rules of the particular formal system. 
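Floating-point libraries reflect several of the undefined expressions listed above: division by zero and out-of-range arguments to the inverse trigonometric functions are rejected outright, while tan θ near its undefined points merely returns an enormous value, because π/2 itself is not exactly representable. A minimal Python sketch:

import math

# Division by zero is rejected.
try:
    1 / 0
except ZeroDivisionError:
    print("1/0 is undefined")

# arcsin and arccos are undefined outside [-1, 1].
try:
    math.asin(2)
except ValueError:
    print("asin(2) is undefined")

# tan is undefined at pi/2; the nearest representable argument gives a huge value.
print(math.tan(math.pi / 2))          # roughly 1.6e16, not a meaningful tangent

# IEEE-754 floats also provide NaN ("not a number") for undefined results.
print(float("nan") == float("nan"))   # False: NaN compares unequal even to itself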
== See also == Analytic function - a function locally given by a convergent power series, which may be useful for dealing with otherwise undefined values L'Hôpital's rule - a method in calculus for evaluating indeterminate forms Indeterminate form - a mathematical expression for which many assignments exist NaN - the IEEE-754 expression indicating that the result of a calculation is not a number Primitive notion - a concept that is not defined in terms of previously-defined concepts Singularity - a point at which a mathematical function ceases to be well-behaved == References == == Further reading == Smart, James R. (1988). Modern Geometries (3rd ed.). Brooks/Cole. ISBN 0-534-08310-2. Lo Bello, Anthony (2013). Origins of Mathematical Words. Johns Hopkins University Press. ISBN 978-1-4214-1098-2. == External links == Undefined and indeterminate - Functions and their graphs - Algebra II - Khan Academy on YouTube
|
https://en.wikipedia.org/wiki/Undefined_(mathematics)
|
Babylonian mathematics (also known as Assyro-Babylonian mathematics) is the mathematics developed or practiced by the people of Mesopotamia, as attested by sources mainly surviving from the Old Babylonian period (1830–1531 BC) to the Seleucid from the last three or four centuries BC. With respect to content, there is scarcely any difference between the two groups of texts. Babylonian mathematics remained constant, in character and content, for over a millennium. In contrast to the scarcity of sources in Egyptian mathematics, knowledge of Babylonian mathematics is derived from hundreds of clay tablets unearthed since the 1850s. Written in cuneiform, tablets were inscribed while the clay was moist, and baked hard in an oven or by the heat of the sun. The majority of recovered clay tablets date from 1800 to 1600 BC, and cover topics that include fractions, algebra, quadratic and cubic equations and the Pythagorean theorem. The Babylonian tablet YBC 7289 gives an approximation of 2 {\displaystyle {\sqrt {2}}} accurate to three significant sexagesimal digits (about six significant decimal digits). == Origins of Babylonian mathematics == Babylonian mathematics is a range of numeric and more advanced mathematical practices in the ancient Near East, written in cuneiform script. Study has historically focused on the First Babylonian dynasty old Babylonian period in the early second millennium BC due to the wealth of data available. There has been debate over the earliest appearance of Babylonian mathematics, with historians suggesting a range of dates between the 5th and 3rd millennia BC. Babylonian mathematics was primarily written on clay tablets in cuneiform script in the Akkadian or Sumerian languages. "Babylonian mathematics" is perhaps an unhelpful term since the earliest suggested origins date to the use of accounting devices, such as bullae and tokens, in the 5th millennium BC. == Babylonian numerals == The Babylonian system of mathematics was a sexagesimal (base 60) numeral system. From this we derive the modern-day usage of 60 seconds in a minute, 60 minutes in an hour, and 360 degrees in a circle. The Babylonians were able to make great advances in mathematics for two reasons. Firstly, the number 60 is a superior highly composite number, having factors of 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60 (including those that are themselves composite), facilitating calculations with fractions. Additionally, unlike the Egyptians and Romans, the Babylonians had a true place-value system, where digits written in the left column represented larger values (much as, in our base ten system, 734 = 7×100 + 3×10 + 4×1). == Old Babylonian mathematics (2000–1600 BC) == === Arithmetic === The Babylonians used pre-calculated tables to assist with arithmetic, including multiplication tables, tables of reciprocals, and tables of squares (or, by using the same table in the opposite way, tables of square roots). Their multiplication tables were not the 60 × 60 {\displaystyle 60\times 60} tables that one might expect by analogy to decimal multiplication tables. Instead, they kept only tables for multiplication by certain "principal numbers" (the regular numbers and 7). To calculate other products, they would split one of the numbers to be multiplied into a sum of principal numbers. Although many Babylonian tablets record exercises in multi-digit multiplication, these typically jump directly from the numbers being multiplied to their product, without showing intermediate values. 
Based on this, and on certain patterns of mistakes in some of these tablets, Jens Høyrup has suggested that long multiplication was performed in such a way that each step of the calculation erased the record of previous steps, as would happen using an abacus or counting board and would not happen with written long multiplication. A rare exception, "the only one of its kind known", is the Late Babylonian/Seleucid tablet BM 34601, which has been reconstructed as computing the square of a 13-digit sexagesimal number (the number 5 ⋅ 3 25 {\displaystyle 5\cdot 3^{25}} ) using a "slanting column of partial products" resembling modern long multiplication. The Babylonians did not have an algorithm for long division. Instead they based their method on the fact that: a b = a × 1 b {\displaystyle {\frac {a}{b}}=a\times {\frac {1}{b}}} together with a table of reciprocals. Numbers whose only prime factors are 2, 3 or 5 (known as 5-smooth or regular numbers) have finite reciprocals in sexagesimal notation, and tables with extensive lists of these reciprocals have been found. Reciprocals such as 1/7, 1/11, 1/13, etc. do not have finite representations in sexagesimal notation. To compute 1/13 or to divide a number by 13 the Babylonians would use an approximation such as: 1 13 = 7 91 = 7 × 1 91 ≈ 7 × 1 90 = 7 × 40 3600 = 280 3600 = 4 60 + 40 3600 . {\displaystyle {\frac {1}{13}}={\frac {7}{91}}=7\times {\frac {1}{91}}\approx 7\times {\frac {1}{90}}=7\times {\frac {40}{3600}}={\frac {280}{3600}}={\frac {4}{60}}+{\frac {40}{3600}}.} The Babylonian clay tablet YBC 7289 (c. 1800–1600 BC) gives an approximation of the square root of 2 in four sexagesimal figures, 𒐕 𒌋𒌋𒐼 𒐐𒐕 𒌋 = 1;24,51,10, which is accurate to about six decimal digits, and is the closest possible three-place sexagesimal representation of √2: 1 + 24 60 + 51 60 2 + 10 60 3 = 305470 216000 = 1.41421 296 ¯ . {\displaystyle 1+{\frac {24}{60}}+{\frac {51}{60^{2}}}+{\frac {10}{60^{3}}}={\frac {305470}{216000}}=1.41421{\overline {296}}.} === Algebra === As well as arithmetical calculations, Babylonian mathematicians also developed algebraic methods of solving equations. Once again, these were based on pre-calculated tables. To solve a quadratic equation, the Babylonians essentially used the standard quadratic formula. They considered quadratic equations of the form: x 2 + b x = c {\displaystyle \ x^{2}+bx=c} where b and c were not necessarily integers, but c was always positive. They knew that a solution to this form of equation is: x = − b 2 + ( b 2 ) 2 + c {\displaystyle x=-{\frac {b}{2}}+{\sqrt {\left({\frac {b}{2}}\right)^{2}+c}}} and they found square roots efficiently using division and averaging. Problems of this type included finding the dimensions of a rectangle given its area and the amount by which the length exceeds the width. Tables of values of n3 + n2 were used to solve certain cubic equations. For example, consider the equation: a x 3 + b x 2 = c . {\displaystyle \ ax^{3}+bx^{2}=c.} Multiplying the equation by a2 and dividing by b3 gives: ( a x b ) 3 + ( a x b ) 2 = c a 2 b 3 . {\displaystyle \left({\frac {ax}{b}}\right)^{3}+\left({\frac {ax}{b}}\right)^{2}={\frac {ca^{2}}{b^{3}}}.} Substituting y = ax/b gives: y 3 + y 2 = c a 2 b 3 {\displaystyle y^{3}+y^{2}={\frac {ca^{2}}{b^{3}}}} which could now be solved by looking up the n3 + n2 table to find the value closest to the right-hand side. The Babylonians accomplished this without algebraic notation, showing a remarkable depth of understanding. 
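Two of the computations described above translate directly into code: converting the sexagesimal digits 1;24,51,10 of YBC 7289 into a decimal approximation of √2, and applying the recipe x = −b/2 + √((b/2)² + c) to an equation of the form x² + bx = c. The following is a minimal Python sketch (the sample equation is arbitrary):

import math

def sexagesimal_to_float(digits):
    # Interpret [d0, d1, d2, ...] as d0 + d1/60 + d2/60^2 + ...
    return sum(d / 60 ** i for i, d in enumerate(digits))

# YBC 7289: 1;24,51,10 as an approximation of the square root of 2.
approx = sexagesimal_to_float([1, 24, 51, 10])
print(approx, math.sqrt(2), abs(approx - math.sqrt(2)))   # error below 1e-6

# Recipe for x^2 + b x = c with positive c.
def babylonian_quadratic(b, c):
    return -b / 2 + math.sqrt((b / 2) ** 2 + c)

x = babylonian_quadratic(b=6, c=16)       # solves x^2 + 6x = 16
assert abs(x ** 2 + 6 * x - 16) < 1e-12
print(x)                                  # 2.0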
However, they did not have a method for solving the general cubic equation. === Growth === Babylonians modeled exponential growth, constrained growth (via a form of sigmoid functions), and doubling time, the latter in the context of interest on loans. Clay tablets from c. 2000 BC include the exercise "Given an interest rate of 1/60 per month (no compounding), compute the doubling time." This yields an annual interest rate of 12/60 = 20%, and hence a doubling time of 100% growth/20% growth per year = 5 years. === Plimpton 322 === The Plimpton 322 tablet contains a list of "Pythagorean triples", i.e., integers ( a , b , c ) {\displaystyle (a,b,c)} such that a 2 + b 2 = c 2 {\displaystyle a^{2}+b^{2}=c^{2}} . The triples are too many and too large to have been obtained by brute force. Much has been written on the subject, including some speculation (perhaps anachronistic) as to whether the tablet could have served as an early trigonometrical table. Care must be exercised to see the tablet in terms of methods familiar or accessible to scribes at the time. [...] the question "how was the tablet calculated?" does not have to have the same answer as the question "what problems does the tablet set?" The first can be answered most satisfactorily by reciprocal pairs, as first suggested half a century ago, and the second by some sort of right-triangle problems. === Geometry === Babylonians knew the common rules for measuring volumes and areas. They measured the circumference of a circle as three times the diameter and the area as one-twelfth the square of the circumference, which would be correct if π is estimated as 3. They were aware that this was an approximation, and one Old Babylonian mathematical tablet excavated near Susa in 1936 (dated to between the 19th and 17th centuries BC) gives a better approximation of π as 25/8 = 3.125, about 0.5 percent below the exact value. The volume of a cylinder was taken as the product of the base and the height, however, the volume of the frustum of a cone or a square pyramid was incorrectly taken as the product of the height and half the sum of the bases. The Pythagorean rule was also known to the Babylonians. The "Babylonian mile" was a measure of distance equal to about 11.3 km (or about seven modern miles). This measurement for distances eventually was converted to a "time-mile" used for measuring the travel of the Sun, therefore, representing time. The Babylonian astronomers kept detailed records of the rising and setting of stars, the motion of the planets, and the solar and lunar eclipses, all of which required familiarity with angular distances measured on the celestial sphere. They also used a form of Fourier analysis to compute an ephemeris (table of astronomical positions), which was discovered in the 1950s by Otto Neugebauer. To make calculations of the movements of celestial bodies, the Babylonians used basic arithmetic and a coordinate system based on the ecliptic, the part of the heavens that the sun and planets travel through. Tablets kept in the British Museum provide evidence that the Babylonians even went so far as to have a concept of objects in an abstract mathematical space. The tablets date from between 350 and 50 BC, revealing that the Babylonians understood and used geometry even earlier than previously thought. The Babylonians used a method for estimating the area under a curve by drawing a trapezoid underneath, a technique previously believed to have originated in 14th century Europe. 
This method of estimation allowed them to, for example, find the distance Jupiter had traveled in a certain amount of time. == See also == Babylonia Babylonian astronomy History of mathematics Islamic mathematics for mathematics in Islamic Iraq/Mesopotamia == Notes == == References == Berriman, A. E. (1956). The Babylonian quadratic equation. Boyer, C. B. (1989). Merzbach, Uta C. (ed.). A History of Mathematics (2nd rev. ed.). New York: Wiley. ISBN 0-471-09763-2. (1991 pbk ed. ISBN 0-471-54397-7). Høyrup, Jens. "Pythagorean 'Rule' and 'Theorem' – Mirror of the Relation Between Babylonian and Greek Mathematics". In Renger, Johannes (ed.). Babylon: Focus mesopotamischer Geschichte, Wiege früher Gelehrsamkeit, Mythos in der Moderne. 2. Internationales Colloquium der Deutschen Orient-Gesellschaft 24.–26. März 1998 in Berlin (PDF). Berlin: Deutsche Orient-Gesellschaft / Saarbrücken: SDV Saarbrücker Druckerei und Verlag. pp. 393–407. Joseph, G. G. (2000). The Crest of the Peacock. Princeton University Press. ISBN 0-691-00659-8. Joyce, David E. (1995). "Plimpton 322". Neugebauer, Otto (1969). The Exact Sciences in Antiquity (2nd ed.). Dover Publications. ISBN 978-0-486-22332-2. Muroi, Kazuo (2022). "Sexagesimal Calculations in Ancient Sumer". arXiv:2207.12102 [math.HO]. O'Connor, J. J.; Robertson, E. F. (December 2000). "An overview of Babylonian mathematics". MacTutor History of Mathematics. Robson, Eleanor (2001). "Neither Sherlock Holmes nor Babylon: a reassessment of Plimpton 322". Historia Math. 28 (3): 167–206. doi:10.1006/hmat.2001.2317. MR 1849797. Robson, E. (2002). "Words and pictures: New light on Plimpton 322". American Mathematical Monthly. 109 (2). Washington: 105–120. doi:10.1080/00029890.2002.11919845. JSTOR 2695324. S2CID 33907668. Robson, E. (2008). Mathematics in Ancient Iraq: A Social History. Princeton University Press. Toomer, G. J. (1981). Hipparchus and Babylonian Astronomy.
|
https://en.wikipedia.org/wiki/Babylonian_mathematics
|
The International Mathematical Olympiad (IMO) is a mathematical olympiad for pre-university students, and is the oldest of the International Science Olympiads. It is widely regarded as the most prestigious mathematical competition in the world. The first IMO was held in Romania in 1959. It has since been held annually, except in 1980. More than 100 countries participate. Each country sends a team of up to six students, plus one team leader, one deputy leader, and observers. Awards are given to approximately the top-scoring 50% of the individual contestants. Teams are not officially recognized—all scores are given only to individual contestants, but team scoring is unofficially compared more than individual scores. == Question type == The content ranges from extremely difficult algebra and pre-calculus problems to problems in branches of mathematics not conventionally covered in secondary or high school and often not at university level either, such as projective and complex geometry, functional equations, combinatorics, and well-grounded number theory, of which extensive knowledge of theorems is required. Calculus, though allowed in solutions, is never required, as there is a principle that anyone with a basic understanding of mathematics should understand the problems, even if the solutions require a great deal more knowledge. Supporters of this principle claim that this allows more universality and creates an incentive to find elegant, deceptively simple-looking problems which nevertheless require a certain level of ingenuity, often times a great deal of ingenuity to net all points for a given IMO problem. == Selection process == The selection process differs by country, but it often consists of a series of tests which admit fewer students at each progressing test. Contestants must be under the age of 20 and must not be registered at any tertiary institution. Subject to these conditions, an individual may participate any number of times in the IMO. == History == The first IMO was held in Romania in 1959. Since then it has been held every year (except in 1980, when it was cancelled due to internal strife in Mongolia). It was initially founded for eastern European member countries of the Warsaw Pact, under the USSR bloc of influence, but later other countries participated as well. Because of this eastern origin, the IMOs were first hosted only in eastern European countries, and gradually spread to other nations. Sources differ about the cities hosting some of the early IMOs. This may be partly because leaders and students are generally housed at different locations, and partly because after the competition the students were sometimes based in multiple cities for the rest of the IMO. The exact dates cited may also differ, because of leaders arriving before the students, and at more recent IMOs the IMO Advisory Board arriving before the leaders. Several students, such as Lisa Sauermann, Peter Scholze, Reid W. Barton, Nicușor Dan (notably elected President of Romania in 2025) and Ciprian Manolescu have performed exceptionally well in the IMO, winning multiple gold medals. Others, such as Terence Tao, Artur Avila, Grigori Perelman, Ngô Bảo Châu, Peter Scholze and Maryam Mirzakhani have gone on to become notable mathematicians. Several former participants have won awards such as the Fields Medal. 
Shortly after the 2016 International Mathematical Olympiad in Hong Kong, North Korean child prodigy Ri Jong-yol made his way to the South Korean consulate general, where he sought refuge for two months. Chinese authorities eventually allowed him to leave Hong Kong on a flight to Seoul. He legally changed his name to Lee Jung-ho (이정호) after receiving South Korean citizenship. This is the only case of its kind in the IMO's history. == Scoring and format == The competition consists of 6 problems. The competition is held over two consecutive days with 3 problems each; each day the contestants have four-and-a-half hours to solve three problems. Each problem is worth 7 points for a maximum total score of 42 points. Calculators are banned. Protractors were banned relatively recently. Unlike other science olympiads, the IMO has no official syllabus and does not cover any university-level topics. The problems chosen are from various areas of secondary school mathematics, broadly classifiable as geometry, number theory, algebra, and combinatorics. They require no knowledge of higher mathematics such as calculus and analysis, and solutions are often elementary. However, they are usually disguised so as to make the solutions difficult. The problems given in the IMO are largely designed to require creativity and the ability to solve problems quickly. Thus, the prominently featured problems are algebraic inequalities, complex numbers, and construction-oriented geometrical problems, though in recent years, the latter has not been as popular as before because of the algorithmic use of theorems like Muirhead's inequality, and complex/analytic bashing to solve problems. Each participating country, other than the host country, may submit suggested problems to a problem selection committee provided by the host country, which reduces the submitted problems to a shortlist. The team leaders arrive at the IMO a few days in advance of the contestants and form the IMO jury which is responsible for all the formal decisions relating to the contest, starting with selecting the six problems from the shortlist. The jury aims to order the problems so that the order in increasing difficulty is Q1, Q4, Q2, Q5, Q3 and Q6, where the first day problems Q1, Q2, and Q3 are in increasing difficulty, and the second day problems Q4, Q5, Q6 are in increasing difficulty. The team leaders of all countries are given the problems in advance of the contestants, and thus, are kept strictly separated and observed. Each country's marks are agreed between that country's leader and deputy leader and coordinators provided by the host country (the leader of the team whose country submitted the problem in the case of the marks of the host country), subject to the decisions of the chief coordinator and ultimately a jury if any disputes cannot be resolved. == Selection process == The selection process for the IMO varies greatly by country. In some countries, especially those in East Asia, the selection process involves several tests of a difficulty comparable to the IMO itself. The Chinese contestants go through a camp. In others, such as the United States, possible participants go through a series of easier standalone competitions that gradually increase in difficulty. 
In the United States, the tests include the American Mathematics Competitions, the American Invitational Mathematics Examination, and the United States of America Junior Mathematical Olympiad/United States of America Mathematical Olympiad, each of which is a competition in its own right. For high scorers in the final competition for the team selection, there also is a summer camp, like that of China. In countries of the former Soviet Union and other eastern European countries, a team has in the past been chosen several years beforehand, and they are given special training specifically for the event. However, such methods have been discontinued in some countries. == Awards == The participants are ranked based on their individual scores. Medals are awarded to the highest ranked participants; slightly fewer than half of them receive a medal. The cutoffs (minimum scores required to receive a gold, silver, or bronze medal respectively) are then chosen so that the numbers of gold, silver and bronze medals awarded are approximately in the ratios 1:2:3. Participants who do not win a medal but who score 7 points on at least one problem receive an honorable mention. Special prizes may be awarded for solutions of outstanding elegance or involving good generalisations of a problem. This last happened in 1995 (Nikolay Nikolov, Bulgaria) and 2005 (Iurie Boreico), but was more frequent up to the early 1980s. The special prize in 2005 was awarded to Iurie Boreico, a student from Moldova, for his solution to Problem 3, a three variable inequality. The rule that at most half the contestants win a medal is sometimes broken if it would cause the total number of medals to deviate too much from half the number of contestants. This last happened in 2010 (when the choice was to give either 226 (43.71%) or 266 (51.45%) of the 517 contestants (excluding the 6 from North Korea — see below) a medal), 2012 (when the choice was to give either 226 (41.24%) or 277 (50.55%) of the 548 contestants a medal), and 2013, when the choice was to give either 249 (47.16%) or 278 (52.65%) of the 528 contestants a medal. In these cases, slightly more than half the contestants were awarded a medal. == Penalties and bans == North Korea was disqualified twice for cheating, once at the 32nd IMO in 1991 and again at the 51st IMO in 2010. However, the incident in 2010 was controversial. There have been other cases of cheating where contestants received penalties, although these cases were not officially disclosed. (For instance, at the 34th IMO in 1993, a contestant was disqualified for bringing a pocket book of formulas, and two contestants were awarded zero points on second day's paper for bringing calculators.) Russia has been banned from participating in the Olympiad since 2022 as a response to its invasion of Ukraine. Nonetheless, a limited number of students (specifically, 6) are allowed to take part in the competition and receive awards, but only remotely and with their results being excluded from the unofficial team ranking. Slightly more than a half of the IMO 2021 Jury members (59 out of 107) voted in support of the sanction proposed by the IMO Board. 
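The award rule described above, medals for at most about half of the contestants split roughly 1:2:3 between gold, silver, and bronze, can be illustrated with a short, purely schematic Python sketch. The function name and the toy score list are invented for illustration; the real jury must additionally handle ties and rounding when fixing the cutoffs.

```python
def medal_cutoffs(scores):
    """Schematic gold/silver/bronze cutoffs for a list of total scores (ties ignored)."""
    ranked = sorted(scores, reverse=True)
    medals = len(ranked) // 2          # at most about half of the contestants medal
    gold_n = medals // 6               # 1 : 2 : 3  ->  1/6, 2/6 and 3/6 of the medals
    silver_n = medals // 3
    gold_cut = ranked[gold_n - 1]
    silver_cut = ranked[gold_n + silver_n - 1]
    bronze_cut = ranked[medals - 1]
    return gold_cut, silver_cut, bronze_cut

# Toy data: 12 contestants scored out of 42.
print(medal_cutoffs([42, 39, 35, 31, 30, 28, 25, 21, 17, 14, 9, 7]))
# -> (42, 35, 28): one gold, two silvers, three bronzes among the twelve.
```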
== Summary == == Notable achievements == === National === The following nations have achieved the highest team score in the respective competition: China, 24 times: in 1989, 1990, 1992, 1993, 1995, 1997, 1999 (joint), 2000–2002, 2004–2006, 2008–2011, 2013, 2014, 2019 (joint), 2020–2023; Russia (including Soviet Union), 16 times: in 1963–1967, 1972–1974, 1976, 1979, 1984, 1986 (joint), 1988, 1991, 1999 (joint), 2007; United States, 9 times: in 1977, 1981, 1986 (joint), 1994, 2015, 2016, 2018, 2019 (joint), 2024; Hungary, 6 times: in 1961, 1962, 1969–1971, 1975; Romania, 5 times: in 1959, 1978, 1985, 1987, 1996; West Germany, twice: in 1982 and 1983; South Korea, twice: in 2012 and 2017; Bulgaria, once: in 2003; Iran, once: in 1998; East Germany, once: in 1968. The following nations have achieved an all-members-gold IMO with a full team: China, 15 times: in 1992, 1993, 1997, 2000–2002, 2004, 2006, 2009–2011, 2019, 2021–2023. United States, 4 times: in 1994, 2011, 2016, and 2019. South Korea, 3 times: in 2012, 2017, and 2019. Russia, twice: in 2002 and 2008. Bulgaria, once: in 2003. The only countries to have their entire team score perfectly in the IMO were the United States in 1994, China in 2022, and Luxembourg, whose 1-member team had a perfect score in 1981. The US's success earned a mention in TIME Magazine. Hungary won IMO 1975 in an unorthodox way when none of the eight team members received a gold medal (five silver, three bronze). The second-place team, East Germany, also did not have a single gold medal winner (four silver, four bronze). The current ten countries with the best all-time results are as follows: === Individual === Several individuals have consistently scored highly and/or earned medals on the IMO: Zhuo Qun Song (Canada) is the most highly decorated participant with five gold medals (including one perfect score in 2015) and one bronze medal. Reid Barton (United States) was the first participant to win a gold medal four times (1998–2001). Barton is also one of only eight four-time Putnam Fellows (2001–04). Christian Reiher (Germany), Lisa Sauermann (Germany), Teodor von Burg (Serbia), Nipun Pitimanaaree (Thailand) and Luke Robitaille (United States) are the only other participants to have won four gold medals (2000–03, 2008–11, 2009–12, 2010–13, 2011–14, and 2019–22 respectively); Reiher also received a bronze medal (1999), Sauermann a silver medal (2007), von Burg a silver medal (2008) and a bronze medal (2007), and Pitimanaaree a silver medal (2009). Wolfgang Burmeister (East Germany), Martin Härterich (West Germany), Iurie Boreico (Moldova), and Lim Jeck (Singapore) are the only other participants besides Reiher, Sauermann, von Burg, and Pitimanaaree to win five medals with at least three of them gold. Ciprian Manolescu (Romania) managed to write a perfect paper (42 points) for gold medal more times than anybody else in the history of the competition, doing it all three times he participated in the IMO (1995, 1996, 1997). Manolescu is also a three-time Putnam Fellow (1997, 1998, 2000). Eugenia Malinnikova (Soviet Union) is the highest-scoring female contestant in IMO history. She has 3 gold medals in IMO 1989 (41 points), IMO 1990 (42) and IMO 1991 (42), missing only 1 point in 1989 to precede Manolescu's achievement. Terence Tao (Australia) participated in IMO 1986, 1987 and 1988, winning bronze, silver and gold medals respectively. 
He won a gold medal when he just turned thirteen in IMO 1988, becoming the youngest person to receive a gold medal (Zhuo Qun Song of Canada also won a gold medal at age 13, in 2011, though he was older than Tao). Tao also holds the distinction of being the youngest medalist with his 1986 bronze medal, followed by 2009 bronze medalist Raúl Chávez Sarmiento (Peru), at the age of 10 and 11 respectively. Representing the United States, Noam Elkies won a gold medal with a perfect paper at the age of 14 in 1981. Both Elkies and Tao could have participated in the IMO multiple times following their success, but entered university and therefore became ineligible. == Gender gap and the launch of European Girls' Mathematical Olympiad == Over the years, since its inception to present, the IMO has attracted far more male contestants than female contestants. During the period 2000–2021, there were only 1,102 female contestants (9.2%) out of a total of 11,950 contestants. The gap is even more significant in terms of IMO gold medallists; from 1959 to 2021, there were 43 female (3.3%) and 1295 male gold medal winners. This gender gap in participation and in performance at the IMO level led to the establishment of the European Girls' Mathematical Olympiad (EGMO). == Media coverage == A documentary, "Hard Problems: The Road To The World's Toughest Math Contest" was made about the United States 2006 IMO team. A BBC documentary titled Beautiful Young Minds aired July 2007 about the IMO. A BBC fictional film titled X+Y released in September 2014 tells the story of an autistic boy who took part in the Olympiad. A book named Countdown by Steve Olson tells the story of the United States team's success in the 2001 Olympiad. == See also == List of International Mathematical Olympiads International Mathematics Competition for University Students (IMC) International Science Olympiad List of mathematics competitions Pan-African Mathematics Olympiads Junior Science Talent Search Examination Art of Problem Solving Mathcounts == Notes == == Citations == == References == Xu, Jiagu (2012). Lecture Notes on Mathematical Olympiad Courses, For Senior Section. World Scientific Publishing. ISBN 978-981-4368-94-0. Xiong, Bin; Lee, Peng Yee (2013). Mathematical Olympiad in China (2009-2010). World Scientific Publishing. ISBN 978-981-4390-21-7. Xu, Jiagu (2009). Lecture Notes on Mathematical Olympiad Courses, For Junior Section. World Scientific Publishing. ISBN 978-981-4293-53-2. Olson, Steve (2004). Count Down. Houghton Mifflin. ISBN 0-618-25141-3. Verhoeff, Tom (August 2002). The 43rd International Mathematical Olympiad: A Reflective Report on IMO 2002 (PDF). Computing Science Report, Vol. 2, No. 11. Faculty of Mathematics and Computing Science, Eindhoven University of Technology. Djukić, Dušan (2006). The IMO Compendium: A Collection of Problems Suggested for the International Olympiads, 1959–2004. Springer. ISBN 978-0-387-24299-6. Lord, Mary (23 July 2001). "Michael Jordans of math - U.S. Student whizzes stun the cipher world". U.S. News & World Report. 131 (3): 26. Saul, Mark (2003). "Mathematics in a Small Place: Notes on the Mathematics of Romania and Bulgaria" (PDF). Notices of the American Mathematical Society. 50: 561–565. Vakil, Ravi (1997). A Mathematical Mosaic: Patterns & Problem Solving. Brendan Kelly Publishing. p. 288. ISBN 978-1-895997-28-6. Liu, Andy (1998). Chinese Mathematics Competitions and Olympiads. AMT Publishing. ISBN 1-876420-00-6. 
== External links == Official IMO web site Archive to the IMO 1959–2003 problems and solutions Old central IMO web site
|
https://en.wikipedia.org/wiki/International_Mathematical_Olympiad
|
Informal mathematics, also called naïve mathematics, has historically been the predominant form of mathematics at most times and in most cultures, and is the subject of modern ethno-cultural studies of mathematics. The philosopher Imre Lakatos, in his Proofs and Refutations, aimed to sharpen the formulation of informal mathematics by reconstructing its role in nineteenth-century mathematical debates and concept formation, opposing the predominant assumptions of mathematical formalism. Informal mathematics may not distinguish between statements obtained by inductive reasoning (as in approximations which are deemed "correct" merely because they are useful) and statements derived by deductive reasoning. == Terminology == Informal mathematics means any informal mathematical practice, as used in everyday life or by aboriginal or ancient peoples, without historical or geographical limitation. Modern mathematics is, from that point of view, the exception: it emphasizes formal and rigorous proofs of all statements from given axioms, and can therefore usefully be called formal mathematics. Informal practices are usually understood intuitively and justified with examples; there are no axioms. This is of direct interest in anthropology and psychology: it casts light on the perceptions and agreements of other cultures. It is also of interest in developmental psychology as it reflects a naïve understanding of the relationships between numbers and things. Another term used for informal mathematics is folk mathematics, which is ambiguous; the mathematical folklore article is dedicated to the usage of that term among professional mathematicians. The field of naïve physics is concerned with similar understandings of physics. People use mathematics and physics in everyday life without really understanding (or caring) how mathematical and physical ideas were historically derived and justified. == History == There has long been a standard account of the development of geometry in ancient Egypt, followed by Greek mathematics and the emergence of deductive logic. The modern sense of the term mathematics, as meaning only those systems justified with reference to axioms, is however an anachronism if read back into history. Several ancient societies built impressive mathematical systems and carried out complex calculations based on proofless heuristics and practical approaches. Mathematical facts were accepted on a pragmatic basis. Empirical methods, as in science, provided the justification for a given technique. Commerce, engineering, calendar creation and the prediction of eclipses and stellar progression were practiced by ancient cultures on at least three continents. == See also == Ethnomathematics Folk psychology Mathematical Platonism Numeracy Pseudomathematics == References ==
|
https://en.wikipedia.org/wiki/Informal_mathematics
|
The Encyclopedia of Mathematics (also EOM and formerly Encyclopaedia of Mathematics) is a large reference work in mathematics. == Overview == The 2002 version contains more than 8,000 entries covering most areas of mathematics at a graduate level, and the presentation is technical in nature. The encyclopedia is edited by Michiel Hazewinkel and was published by Kluwer Academic Publishers until 2003, when Kluwer became part of Springer. The CD-ROM contains animations and three-dimensional objects. The encyclopedia has been translated from the Soviet Matematicheskaya entsiklopediya (1977) originally edited by Ivan Matveevich Vinogradov and extended with comments and three supplements adding several thousand articles. Until November 29, 2011, a static version of the encyclopedia could be browsed online free of charge. This URL now redirects to the new wiki incarnation of the EOM. == Encyclopedia of Mathematics wiki == A new dynamic version of the encyclopedia is now available as a public wiki online. This new wiki is a collaboration between Springer and the European Mathematical Society. This new version of the encyclopedia includes the entire contents of the previous online version, but all entries can now be publicly updated to include the newest advancements in mathematics. All entries will be monitored for content accuracy by members of an editorial board selected by the European Mathematical Society. == Versions == Vinogradov, I. M. (Ed.), Matematicheskaya entsiklopediya, Moscow, Sov. Entsiklopediya, 1977. Hazewinkel, M. (Ed.), Encyclopaedia of Mathematics (set), Kluwer, 1994 (ISBN 1-55608-010-7). Hazewinkel, M. (Ed.), Encyclopaedia of Mathematics, Vol. 1 (A–B), Kluwer, 1987 (ISBN 1-55608-000-X). Hazewinkel, M. (Ed.), Encyclopaedia of Mathematics, Vol. 2 (C), Kluwer, 1988 (ISBN 1-55608-001-8). Hazewinkel, M. (Ed.), Encyclopaedia of Mathematics, Vol. 3 (D–Fey), Kluwer, 1989 (ISBN 1-55608-002-6). Hazewinkel, M. (Ed.), Encyclopaedia of Mathematics, Vol. 4 (Fib–H), Kluwer, 1989 (ISBN 1-55608-003-4). Hazewinkel, M. (Ed.), Encyclopaedia of Mathematics, Vol. 5 (I–Lit), Kluwer, 1990 (ISBN 1-55608-004-2). Hazewinkel, M. (Ed.), Encyclopaedia of Mathematics, Vol. 6 (Lob–Opt), Kluwer, 1990 (ISBN 1-55608-005-0). Hazewinkel, M. (Ed.), Encyclopaedia of Mathematics, Vol. 7 (Orb–Ray), Kluwer, 1991 (ISBN 1-55608-006-9). Hazewinkel, M. (Ed.), Encyclopaedia of Mathematics, Vol. 8 (Rea–Sti), Kluwer, 1992 (ISBN 1-55608-007-7). Hazewinkel, M. (Ed.), Encyclopaedia of Mathematics, Vol. 9 (Sto–Zyg), Kluwer, 1993 (ISBN 1-55608-008-5). Hazewinkel, M. (Ed.), Encyclopaedia of Mathematics, Vol. 10 (Index), Kluwer, 1994 (ISBN 1-55608-009-3). Hazewinkel, M. (Ed.), Encyclopaedia of Mathematics, Supplement I, Kluwer, 1997 (ISBN 0-7923-4709-9). Hazewinkel, M. (Ed.), Encyclopaedia of Mathematics, Supplement II, Kluwer, 2000 (ISBN 0-7923-6114-8). Hazewinkel, M. (Ed.), Encyclopaedia of Mathematics, Supplement III, Kluwer, 2002 (ISBN 1-4020-0198-3). Hazewinkel, M. (Ed.), Encyclopaedia of Mathematics on CD-ROM, Kluwer, 1998 (ISBN 0-7923-4805-2). Encyclopedia of Mathematics, public wiki monitored by an editorial board under the management of the European Mathematical Society. == See also == List of online encyclopedias == References == == External links == Official website Publications by M. Hazewinkel at ResearchGate
|
https://en.wikipedia.org/wiki/Encyclopedia_of_Mathematics
|
Srinivasa Ramanujan Aiyangar (22 December 1887 – 26 April 1920) was an Indian mathematician. Often regarded as one of the greatest mathematicians of all time, though he had almost no formal training in pure mathematics, he made substantial contributions to mathematical analysis, number theory, infinite series, and continued fractions, including solutions to mathematical problems then considered unsolvable. Ramanujan initially developed his own mathematical research in isolation. According to Hans Eysenck, "he tried to interest the leading professional mathematicians in his work, but failed for the most part. What he had to show them was too novel, too unfamiliar, and additionally presented in unusual ways; they could not be bothered". Seeking mathematicians who could better understand his work, in 1913 he began a mail correspondence with the English mathematician G. H. Hardy at the University of Cambridge, England. Recognising Ramanujan's work as extraordinary, Hardy arranged for him to travel to Cambridge. In his notes, Hardy commented that Ramanujan had produced groundbreaking new theorems, including some that "defeated me completely; I had never seen anything in the least like them before", and some recently proven but highly advanced results. During his short life, Ramanujan independently compiled nearly 3,900 results (mostly identities and equations). Many were completely novel; his original and highly unconventional results, such as the Ramanujan prime, the Ramanujan theta function, partition formulae and mock theta functions, have opened entire new areas of work and inspired further research. Of his thousands of results, most have been proven correct. The Ramanujan Journal, a scientific journal, was established to publish work in all areas of mathematics influenced by Ramanujan, and his notebooks—containing summaries of his published and unpublished results—have been analysed and studied for decades since his death as a source of new mathematical ideas. As late as 2012, researchers continued to discover that mere comments in his writings about "simple properties" and "similar outputs" for certain findings were themselves profound and subtle number theory results that remained unsuspected until nearly a century after his death. He became one of the youngest Fellows of the Royal Society and only the second Indian member, and the first Indian to be elected a Fellow of Trinity College, Cambridge. In 1919, ill health—now believed to have been hepatic amoebiasis (a complication from episodes of dysentery many years previously)—compelled Ramanujan's return to India, where he died in 1920 at the age of 32. His last letters to Hardy, written in January 1920, show that he was still continuing to produce new mathematical ideas and theorems. His "lost notebook", containing discoveries from the last year of his life, caused great excitement among mathematicians when it was rediscovered in 1976. == Early life == Ramanujan (literally, "younger brother of Rama", a Hindu deity) was born on 22 December 1887 into a Tamil Brahmin Iyengar family in Erode, in present-day Tamil Nadu. His father, Kuppuswamy Srinivasa Iyengar, originally from Thanjavur district, worked as a clerk in a sari shop. His mother, Komalatammal, was a housewife and sang at a local temple. They lived in a small traditional home on Sarangapani Sannidhi Street in the town of Kumbakonam. The family home is now a museum. 
When Ramanujan was a year and a half old, his mother gave birth to a son, Sadagopan, who died less than three months later. In December 1889, Ramanujan contracted smallpox, but recovered, unlike the 4,000 others who died in a bad year in the Thanjavur district around this time. He moved with his mother to her parents' house in Kanchipuram, near Madras (now Chennai). His mother gave birth to two more children, in 1891 and 1894, both of whom died before their first birthdays. On 1 October 1892, Ramanujan was enrolled at the local school. After his maternal grandfather lost his job as a court official in Kanchipuram, Ramanujan and his mother moved back to Kumbakonam, and he was enrolled in Kangayan Primary School. When his paternal grandfather died, he was sent back to his maternal grandparents, then living in Madras. He did not like school in Madras, and tried to avoid attending. His family enlisted a local constable to make sure he attended school. Within six months, Ramanujan was back in Kumbakonam. Since Ramanujan's father was at work most of the day, his mother took care of the boy, and they had a close relationship. From her, he learned about tradition and puranas, to sing religious songs, to attend pujas at the temple, and to maintain particular eating habits—all part of Brahmin culture. At Kangayan Primary School, Ramanujan performed well. Just before turning 10, in November 1897, he passed his primary examinations in English, Tamil, geography, and arithmetic with the best scores in the district. That year, Ramanujan entered Town Higher Secondary School, where he encountered formal mathematics for the first time. A child prodigy by age 11, he had exhausted the mathematical knowledge of two college students who were lodgers at his home. He was later lent a book written by S. L. Loney on advanced trigonometry. He mastered this by the age of 13 while discovering sophisticated theorems on his own. By 14, he received merit certificates and academic awards that continued throughout his school career, and he assisted the school in the logistics of assigning its 1,200 students (each with differing needs) to its approximately 35 teachers. He completed mathematical exams in half the allotted time, and showed a familiarity with geometry and infinite series. Ramanujan was shown how to solve cubic equations in 1902. He would later develop his own method to solve the quartic. In 1903, he tried to solve the quintic, not knowing that it was impossible to solve with radicals. In 1903, when he was 16, Ramanujan obtained from a friend a library copy of A Synopsis of Elementary Results in Pure and Applied Mathematics, G. S. Carr's collection of 5,000 theorems. Ramanujan reportedly studied the contents of the book in detail. The next year, Ramanujan independently developed and investigated the Bernoulli numbers and calculated the Euler–Mascheroni constant up to 15 decimal places. His peers at the time said they "rarely understood him" and "stood in respectful awe" of him. When he graduated from Town Higher Secondary School in 1904, Ramanujan was awarded the K. Ranganatha Rao prize for mathematics by the school's headmaster, Krishnaswami Iyer. Iyer introduced Ramanujan as an outstanding student who deserved scores higher than the maximum. He received a scholarship to study at Government Arts College, Kumbakonam, but was so intent on mathematics that he could not focus on any other subjects and failed most of them, losing his scholarship in the process. 
In August 1905, Ramanujan ran away from home, heading towards Visakhapatnam, and stayed in Rajahmundry for about a month. He later enrolled at Pachaiyappa's College in Madras. There, he passed in mathematics, choosing only to attempt questions that appealed to him and leaving the rest unanswered, but performed poorly in other subjects, such as English, physiology, and Sanskrit. Ramanujan failed his Fellow of Arts exam in December 1906 and again a year later. Without an FA degree, he left college and continued to pursue independent research in mathematics, living in extreme poverty and often on the brink of starvation. In 1910, after a meeting between the 23-year-old Ramanujan and the founder of the Indian Mathematical Society, V. Ramaswamy Aiyer, Ramanujan began to get recognition in Madras's mathematical circles, leading to his inclusion as a researcher at the University of Madras. == Adulthood in India == On 14 July 1909, Ramanujan married Janaki (Janakiammal; 21 March 1899 – 13 April 1994), a girl his mother had selected for him a year earlier and who was ten years old when they married. It was not unusual then for marriages to be arranged with girls at a young age. Janaki was from Rajendram, a village close to Marudur (Karur district) Railway Station. Ramanujan's father did not participate in the marriage ceremony. As was common at that time, Janaki continued to stay at her maternal home for three years after marriage, until she reached puberty. In 1912, she and Ramanujan's mother joined Ramanujan in Madras. After the marriage, Ramanujan developed a hydrocele testis. The condition could be treated with a routine surgical operation that would release the blocked fluid in the scrotal sac, but his family could not afford the operation. In January 1910, a doctor volunteered to do the surgery at no cost. After his successful surgery, Ramanujan searched for a job. He stayed at a friend's house while he went from door to door around Madras looking for a clerical position. To make money, he tutored students at Presidency College who were preparing for their Fellow of Arts exam. In late 1910, Ramanujan was sick again. He feared for his health, and told his friend R. Radakrishna Iyer to "hand [his notebooks] over to Professor Singaravelu Mudaliar [the mathematics professor at Pachaiyappa's College] or to the British professor Edward B. Ross, of the Madras Christian College." After Ramanujan recovered and retrieved his notebooks from Iyer, he took a train from Kumbakonam to Villupuram, a city under French control. In 1912, Ramanujan moved with his wife and mother to a house in Saiva Muthaiah Mudali street, George Town, Madras, where they lived for a few months. In May 1913, upon securing a research position at Madras University, Ramanujan moved with his family to Triplicane. === Pursuit of career in mathematics === In 1910, Ramanujan met deputy collector V. Ramaswamy Aiyer, who founded the Indian Mathematical Society. Wishing for a job at the revenue department where Aiyer worked, Ramanujan showed him his mathematics notebooks. As Aiyer later recalled: I was struck by the extraordinary mathematical results contained in [the notebooks]. I had no mind to smother his genius by an appointment in the lowest rungs of the revenue department. Aiyer sent Ramanujan, with letters of introduction, to his mathematician friends in Madras. Some of them looked at his work and gave him letters of introduction to R. 
Ramachandra Rao, the district collector for Nellore and the secretary of the Indian Mathematical Society. Rao was impressed by Ramanujan's research but doubted that it was his own work. Ramanujan mentioned a correspondence he had with Professor Saldhana, a notable Bombay mathematician, in which Saldhana expressed a lack of understanding of his work but concluded that he was not a fraud. Ramanujan's friend C. V. Rajagopalachari tried to quell Rao's doubts about Ramanujan's academic integrity. Rao agreed to give him another chance, and listened as Ramanujan discussed elliptic integrals, hypergeometric series, and his theory of divergent series, which Rao said ultimately convinced him of Ramanujan's brilliance. When Rao asked him what he wanted, Ramanujan replied that he needed work and financial support. Rao consented and sent him to Madras. He continued his research with Rao's financial aid. With Aiyer's help, Ramanujan had his work published in the Journal of the Indian Mathematical Society. One of the first problems he posed in the journal was to find the value of: 1 + 2 1 + 3 1 + ⋯ {\displaystyle {\sqrt {1+2{\sqrt {1+3{\sqrt {1+\cdots }}}}}}} He waited for a solution to be offered in three issues, over six months, but failed to receive any. At the end, Ramanujan supplied an incomplete solution to the problem himself. On page 105 of his first notebook, he formulated an equation that could be used to solve the infinitely nested radicals problem. x + n + a = a x + ( n + a ) 2 + x a ( x + n ) + ( n + a ) 2 + ( x + n ) ⋯ {\displaystyle x+n+a={\sqrt {ax+(n+a)^{2}+x{\sqrt {a(x+n)+(n+a)^{2}+(x+n){\sqrt {\cdots }}}}}}} Using this equation, the answer to the question posed in the Journal was simply 3, obtained by setting x = 2, n = 1, and a = 0. Ramanujan wrote his first formal paper for the Journal on the properties of Bernoulli numbers. One property he discovered was that the denominators of the fractions of Bernoulli numbers (sequence A027642 in the OEIS) are always divisible by six. He also devised a method of calculating Bn based on previous Bernoulli numbers. One of these methods follows: It will be observed that if n is even but not equal to zero, Bn is a fraction and the numerator of Bn/n in its lowest terms is a prime number, the denominator of Bn contains each of the factors 2 and 3 once and only once, 2n(2n − 1)Bn/n is an integer and 2(2n − 1)Bn consequently is an odd integer. In his 17-page paper "Some Properties of Bernoulli's Numbers" (1911), Ramanujan gave three proofs, two corollaries and three conjectures. His writing initially had many flaws. As Journal editor M. T. Narayana Iyengar noted: Mr. Ramanujan's methods were so terse and novel and his presentation so lacking in clearness and precision, that the ordinary [mathematical reader], unaccustomed to such intellectual gymnastics, could hardly follow him. Ramanujan later wrote another paper and also continued to provide problems in the Journal. In early 1912, he got a temporary job in the Madras Accountant General's office, with a monthly salary of 20 rupees. He lasted only a few weeks. Toward the end of that assignment, he applied for a position under the Chief Accountant of the Madras Port Trust. In a letter dated 9 February 1912, Ramanujan wrote: Sir, I understand there is a clerkship vacant in your office, and I beg to apply for the same. I have passed the Matriculation Examination and studied up to the F.A. but was prevented from pursuing my studies further owing to several untoward circumstances. 
I have, however, been devoting all my time to Mathematics and developing the subject. I can say I am quite confident I can do justice to my work if I am appointed to the post. I therefore beg to request that you will be good enough to confer the appointment on me. Attached to his application was a recommendation from E. W. Middlemast, a mathematics professor at the Presidency College, who wrote that Ramanujan was "a young man of quite exceptional capacity in Mathematics". Three weeks after he applied, on 1 March, Ramanujan learned that he had been accepted as a Class III, Grade IV accounting clerk, making 30 rupees per month. At his office, Ramanujan easily and quickly completed the work he was given and spent his spare time doing mathematical research. Ramanujan's boss, Sir Francis Spring, and S. Narayana Iyer, a colleague who was also treasurer of the Indian Mathematical Society, encouraged Ramanujan in his mathematical pursuits. === Contacting British mathematicians === In the spring of 1913, Narayana Iyer, Ramachandra Rao and E. W. Middlemast tried to present Ramanujan's work to British mathematicians. M. J. M. Hill of University College London commented that Ramanujan's papers were riddled with holes. He said that although Ramanujan had "a taste for mathematics, and some ability", he lacked the necessary educational background and foundation to be accepted by mathematicians. Although Hill did not offer to take Ramanujan on as a student, he gave thorough and serious professional advice on his work. With the help of friends, Ramanujan drafted letters to leading mathematicians at Cambridge University. The first two professors, H. F. Baker and E. W. Hobson, returned Ramanujan's papers without comment. On 16 January 1913, Ramanujan wrote to G. H. Hardy, whom he knew from studying Orders of Infinity (1910). Coming from an unknown mathematician, the nine pages of mathematics made Hardy initially view Ramanujan's manuscripts as a possible fraud. Hardy recognised some of Ramanujan's formulae but others "seemed scarcely possible to believe".: 494 One of the theorems Hardy found amazing was on the bottom of page three (valid for 0 < a < b + 1/2): ∫ 0 ∞ 1 + x 2 ( b + 1 ) 2 1 + x 2 a 2 × 1 + x 2 ( b + 2 ) 2 1 + x 2 ( a + 1 ) 2 × ⋯ d x = π 2 × Γ ( a + 1 2 ) Γ ( b + 1 ) Γ ( b − a + 1 ) Γ ( a ) Γ ( b + 1 2 ) Γ ( b − a + 1 2 ) . {\displaystyle \int \limits _{0}^{\infty }{\frac {1+{\dfrac {x^{2}}{(b+1)^{2}}}}{1+{\dfrac {x^{2}}{a^{2}}}}}\times {\frac {1+{\dfrac {x^{2}}{(b+2)^{2}}}}{1+{\dfrac {x^{2}}{(a+1)^{2}}}}}\times \cdots \,dx={\frac {\sqrt {\pi }}{2}}\times {\frac {\Gamma \left(a+{\frac {1}{2}}\right)\Gamma (b+1)\Gamma (b-a+1)}{\Gamma (a)\Gamma \left(b+{\frac {1}{2}}\right)\Gamma \left(b-a+{\frac {1}{2}}\right)}}.} Hardy was also impressed by some of Ramanujan's other work relating to infinite series: 1 − 5 ( 1 2 ) 3 + 9 ( 1 × 3 2 × 4 ) 3 − 13 ( 1 × 3 × 5 2 × 4 × 6 ) 3 + ⋯ = 2 π {\displaystyle 1-5\left({\frac {1}{2}}\right)^{3}+9\left({\frac {1\times 3}{2\times 4}}\right)^{3}-13\left({\frac {1\times 3\times 5}{2\times 4\times 6}}\right)^{3}+\cdots ={\frac {2}{\pi }}} 1 + 9 ( 1 4 ) 4 + 17 ( 1 × 5 4 × 8 ) 4 + 25 ( 1 × 5 × 9 4 × 8 × 12 ) 4 + ⋯ = 2 2 π Γ 2 ( 3 4 ) . {\displaystyle 1+9\left({\frac {1}{4}}\right)^{4}+17\left({\frac {1\times 5}{4\times 8}}\right)^{4}+25\left({\frac {1\times 5\times 9}{4\times 8\times 12}}\right)^{4}+\cdots ={\frac {2{\sqrt {2}}}{{\sqrt {\pi }}\,\Gamma ^{2}\left({\frac {3}{4}}\right)}}.} The first result had already been determined by G. Bauer in 1859. 
The second was new to Hardy, and was derived from a class of functions called hypergeometric series, which had first been researched by Euler and Gauss. Hardy found these results "much more intriguing" than Gauss's work on integrals. After seeing Ramanujan's theorems on continued fractions on the last page of the manuscripts, Hardy said the theorems "defeated me completely; I had never seen anything in the least like them before", and that they "must be true, because, if they were not true, no one would have the imagination to invent them". Hardy asked a colleague, J. E. Littlewood, to take a look at the papers. Littlewood was amazed by Ramanujan's genius. After discussing the papers with Littlewood, Hardy concluded that the letters were "certainly the most remarkable I have received" and that Ramanujan was "a mathematician of the highest quality, a man of altogether exceptional originality and power".: 494–495 One colleague, E. H. Neville, later remarked that "No one who was in the mathematical circles in Cambridge at that time can forget the sensation caused by this letter... not one [theorem] could have been set in the most advanced mathematical examination in the world". On 8 February 1913, Hardy wrote Ramanujan a letter expressing interest in his work, adding that it was "essential that I should see proofs of some of your assertions". Before his letter arrived in Madras during the third week of February, Hardy contacted the Indian Office to plan for Ramanujan's trip to Cambridge. Secretary Arthur Davies of the Advisory Committee for Indian Students met with Ramanujan to discuss the overseas trip. In accordance with his Brahmin upbringing, Ramanujan refused to leave his country to "go to a foreign land", and his parents were also opposed for the same reason. Meanwhile, he sent Hardy a letter packed with theorems, writing, "I have found a friend in you who views my labour sympathetically." To supplement Hardy's endorsement, Gilbert Walker, a former mathematical lecturer at Trinity College, Cambridge, looked at Ramanujan's work and expressed amazement, urging the young man to spend time at Cambridge. As a result of Walker's endorsement, B. Hanumantha Rao, a mathematics professor at an engineering college, invited Ramanujan's colleague Narayana Iyer to a meeting of the Board of Studies in Mathematics to discuss "what we can do for S. Ramanujan". The board agreed to grant Ramanujan a monthly research scholarship of 75 rupees for the next two years at the University of Madras. While he was engaged as a research student, Ramanujan continued to submit papers to the Journal of the Indian Mathematical Society. In one instance, Iyer submitted some of Ramanujan's theorems on summation of series to the journal, adding, "The following theorem is due to S. Ramanujan, the mathematics student of Madras University." Later in November, British Professor Edward B. Ross of Madras Christian College, whom Ramanujan had met a few years before, stormed into his class one day with his eyes glowing, asking his students, "Does Ramanujan know Polish?" The reason was that in one paper, Ramanujan had anticipated the work of a Polish mathematician whose paper had just arrived in the day's mail. In his quarterly papers, Ramanujan drew up theorems to make definite integrals more easily solvable. Working off Giuliano Frullani's 1821 integral theorem, Ramanujan formulated generalisations that could be made to evaluate formerly unyielding integrals. 
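Ramanujan's infinitely nested radical from the Journal of the Indian Mathematical Society, quoted earlier, is easy to check numerically. The following Python sketch (the function name and the truncation depths are our own choices for illustration) cuts the nesting off at a finite depth and shows the value approaching 3, the answer obtained from his general identity with x = 2, n = 1, a = 0.

```python
import math

def truncated_radical(depth):
    """sqrt(1 + 2*sqrt(1 + 3*sqrt(1 + ...))) with the nesting cut off after `depth` levels."""
    value = 0.0
    for k in range(depth, 1, -1):      # build the expression from the inside out
        value = math.sqrt(1 + k * value)
    return value

for d in (5, 10, 20, 40):
    print(d, truncated_radical(d))     # the printed values tend to 3 as the depth grows
```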
Hardy's correspondence with Ramanujan soured after Ramanujan refused to come to England. Hardy enlisted a colleague lecturing in Madras, E. H. Neville, to mentor and bring Ramanujan to England. Neville asked Ramanujan why he would not go to Cambridge. Ramanujan apparently had now accepted the proposal; Neville said, "Ramanujan needed no converting" and "his parents' opposition had been withdrawn". Apparently, Ramanujan's mother had a vivid dream in which Ramanujan was surrounded by Europeans, and the family goddess, the deity of Namagiri, commanded her "to stand no longer between her son and the fulfilment of his life's purpose". On 17 March 1914, Ramanujan travelled to England by ship, leaving his wife to stay with his parents in India. == Life in England == Ramanujan departed from Madras aboard the S.S. Nevasa on 17 March 1914. When he disembarked in London on 14 April, Neville was waiting for him with a car. Four days later, Neville took him to his house on Chesterton Road in Cambridge. Ramanujan immediately began his work with Littlewood and Hardy. After six weeks, Ramanujan moved out of Neville's house and took up residence on Whewell's Court, a five-minute walk from Hardy's room. Hardy and Littlewood began to look at Ramanujan's notebooks. Hardy had already received 120 theorems from Ramanujan in the first two letters, but there were many more results and theorems in the notebooks. Hardy saw that some were wrong, others had already been discovered, and the rest were new breakthroughs. Ramanujan left a deep impression on Hardy and Littlewood. Littlewood commented, "I can believe that he's at least a Jacobi", while Hardy said he "can compare him only with Euler or Jacobi." Ramanujan spent nearly five years in Cambridge collaborating with Hardy and Littlewood, and published part of his findings there. Hardy and Ramanujan had highly contrasting personalities. Their collaboration was a clash of different cultures, beliefs, and working styles. In the previous few decades, the foundations of mathematics had come into question and the need for mathematically rigorous proofs was recognised. Hardy was an atheist and an apostle of proof and mathematical rigour, whereas Ramanujan was a deeply religious man who relied very strongly on his intuition and insights. Hardy tried his best to fill the gaps in Ramanujan's education and to mentor him in the need for formal proofs to support his results, without hindering his inspiration—a conflict that neither found easy. Ramanujan was awarded a Bachelor of Arts by Research degree (the predecessor of the PhD degree) in March 1916 for his work on highly composite numbers, sections of the first part of which had been published the preceding year in the Proceedings of the London Mathematical Society. The paper was more than 50 pages long and proved various properties of such numbers. Hardy disliked this topic area but remarked that though it engaged with what he called the 'backwater of mathematics', in it Ramanujan displayed 'extraordinary mastery over the algebra of inequalities'. On 6 December 1917, Ramanujan was elected to the London Mathematical Society. On 2 May 1918, he was elected a Fellow of the Royal Society, the second Indian admitted, after Ardaseer Cursetjee in 1841. At age 31, Ramanujan was one of the youngest Fellows in the Royal Society's history. He was elected "for his investigation in elliptic functions and the Theory of Numbers." On 13 October 1918, he was the first Indian to be elected a Fellow of Trinity College, Cambridge. 
== Illness and death == Ramanujan had numerous health problems throughout his life. His health worsened in England; possibly he was also less resilient due to the difficulty of keeping to the strict dietary requirements of his religion there and because of wartime rationing in 1914–18. He was diagnosed with tuberculosis and a severe vitamin deficiency, and confined to a sanatorium. He attempted suicide in late 1917 or early 1918 by jumping on the tracks of a London underground station. Scotland Yard arrested him for attempting suicide (which was a crime), but released him after Hardy intervened. In 1919, Ramanujan returned to Kumbakonam, Madras Presidency, where he died in 1920 aged 32. After his death, his brother Tirunarayanan compiled Ramanujan's remaining handwritten notes, consisting of formulae on singular moduli, hypergeometric series and continued fractions. In his last days, though in severe pain, "he continued doing his mathematics filling sheet after sheet with numbers", Janaki Ammal recounts. Ramanujan's widow, Smt. Janaki Ammal, moved to Bombay. In 1931, she returned to Madras and settled in Triplicane, where she supported herself on a pension from Madras University and income from tailoring. In 1950, she adopted a son, W. Narayanan, who eventually became an officer of the State Bank of India and raised a family. In her later years, she was granted a lifetime pension from Ramanujan's former employer, the Madras Port Trust, and pensions from, among others, the Indian National Science Academy and the state governments of Tamil Nadu, Andhra Pradesh and West Bengal. She continued to cherish Ramanujan's memory, and was active in efforts to increase his public recognition; prominent mathematicians, including George Andrews, Bruce C. Berndt and Béla Bollobás made it a point to visit her while in India. She died at her Triplicane residence in 1994. A 1994 analysis of Ramanujan's medical records and symptoms by D. A. B. Young concluded that his medical symptoms—including his past relapses, fevers, and hepatic conditions—were much closer to those of hepatic amoebiasis, an illness then widespread in Madras, than of tuberculosis. He had two episodes of dysentery before he left India. When not properly treated, amoebic dysentery can lie dormant for years and lead to hepatic amoebiasis, whose diagnosis was not then well established. At the time, if properly diagnosed, amoebiasis was a treatable and often curable disease; British soldiers who contracted it during the First World War were being successfully cured of amoebiasis around the time Ramanujan left England. == Personality and spiritual life == Ramanujan has been described as a person of a somewhat shy and quiet disposition, a dignified man with pleasant manners. He lived a simple life at Cambridge. Ramanujan's first Indian biographers describe him as a rigorously orthodox Hindu. He credited his acumen to his family goddess, Namagiri Thayar (Goddess Mahalakshmi) of Namakkal. He looked to her for inspiration in his work and said he dreamed of blood drops that symbolised her consort, Narasimha. Later he had visions of scrolls of complex mathematical content unfolding before his eyes. He often said, "An equation for me has no meaning unless it expresses a thought of God." Hardy cites Ramanujan as remarking that all religions seemed equally true to him. Hardy further argued that Ramanujan's religious belief had been romanticised by Westerners and overstated—in reference to his belief, not practice—by Indian biographers. 
At the same time, he remarked on Ramanujan's strict vegetarianism. Similarly, in an interview with Frontline, Berndt said, "Many people falsely promulgate mystical powers to Ramanujan's mathematical thinking. It is not true. He has meticulously recorded every result in his three notebooks," further speculating that Ramanujan worked out intermediate results on slate that he could not afford the paper to record more permanently. Berndt reported that Janaki said in 1984 that Ramanujan spent so much of his time on mathematics that he did not go to the temple, that she and her mother often fed him because he had no time to eat, and that most of the religious stories attributed to him originated with others. However, his orthopraxy was not in doubt. == Mathematical achievements == In mathematics, there is a distinction between insight and formulating or working through a proof. Ramanujan proposed an abundance of formulae that could be investigated later in depth. G. H. Hardy said that Ramanujan's discoveries are unusually rich and that there is often more to them than initially meets the eye. As a byproduct of his work, new directions of research were opened up. Examples of the most intriguing of these formulae include infinite series for π, one of which is given below: 1 π = 2 2 9801 ∑ k = 0 ∞ ( 4 k ) ! ( 1103 + 26390 k ) ( k ! ) 4 396 4 k . {\displaystyle {\frac {1}{\pi }}={\frac {2{\sqrt {2}}}{9801}}\sum _{k=0}^{\infty }{\frac {(4k)!(1103+26390k)}{(k!)^{4}396^{4k}}}.} This result is based on the negative fundamental discriminant d = −4 × 58 = −232 with class number h(d) = 2. Further, 26390 = 5 × 7 × 13 × 58 and 16 × 9801 = 3962, which is related to the fact that e π 58 = 396 4 − 104.000000177 … . {\textstyle e^{\pi {\sqrt {58}}}=396^{4}-104.000000177\dots .} This might be compared to Heegner numbers, which have class number 1 and yield similar formulae. Ramanujan's series for π converges extraordinarily rapidly and forms the basis of some of the fastest algorithms used to calculate π. Truncating the sum to the first term also gives the approximation 9801√2/4412 for π, which is correct to six decimal places; truncating it to the first two terms gives a value correct to 14 decimal places (see also the more general Ramanujan–Sato series). One of Ramanujan's remarkable capabilities was the rapid solution of problems, illustrated by the following anecdote about an incident in which P. C. Mahalanobis posed a problem: Imagine that you are on a street with houses marked 1 through n. There is a house in between (x) such that the sum of the house numbers to the left of it equals the sum of the house numbers to its right. If n is between 50 and 500, what are n and x?' This is a bivariate problem with multiple solutions. Ramanujan thought about it and gave the answer with a twist: He gave a continued fraction. The unusual part was that it was the solution to the whole class of problems. Mahalanobis was astounded and asked how he did it. 'It is simple. The minute I heard the problem, I knew that the answer was a continued fraction. Which continued fraction, I asked myself. Then the answer came to my mind', Ramanujan replied." 
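The convergence claims for the 1/π series quoted above are easy to verify numerically. The sketch below uses only the Python standard library; the function name ramanujan_pi, the 50-digit working precision, and the reference value of π are our own choices for the illustration.

```python
from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 50                 # work with 50 significant digits

PI_REF = Decimal("3.14159265358979323846264338327950288419716939937510")

def ramanujan_pi(terms):
    """Approximate pi by summing the first `terms` terms of Ramanujan's series for 1/pi."""
    s = Decimal(0)
    for k in range(terms):
        numerator = Decimal(factorial(4 * k) * (1103 + 26390 * k))
        denominator = Decimal(factorial(k)) ** 4 * Decimal(396) ** (4 * k)
        s += numerator / denominator
    inv_pi = Decimal(2) * Decimal(2).sqrt() / 9801 * s
    return 1 / inv_pi

for n in (1, 2, 3):
    print(n, "terms -> error", format(abs(ramanujan_pi(n) - PI_REF), ".2E"))
# one term is already correct to six decimal places, two terms to about fourteen
```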
His intuition also led him to derive some previously unknown identities, such as ( 1 + 2 ∑ n = 1 ∞ cos ( n θ ) cosh ( n π ) ) − 2 + ( 1 + 2 ∑ n = 1 ∞ cosh ( n θ ) cosh ( n π ) ) − 2 = 2 Γ 4 ( 3 4 ) π = 8 π 3 Γ 4 ( 1 4 ) {\displaystyle {\begin{aligned}&\left(1+2\sum _{n=1}^{\infty }{\frac {\cos(n\theta )}{\cosh(n\pi )}}\right)^{-2}+\left(1+2\sum _{n=1}^{\infty }{\frac {\cosh(n\theta )}{\cosh(n\pi )}}\right)^{-2}\\[6pt]={}&{\frac {2\Gamma ^{4}{\bigl (}{\frac {3}{4}}{\bigr )}}{\pi }}={\frac {8\pi ^{3}}{\Gamma ^{4}{\bigl (}{\frac {1}{4}}{\bigr )}}}\end{aligned}}} for all θ such that | ℜ ( θ ) | < π {\displaystyle |\Re (\theta )|<\pi } and | ℑ ( θ ) | < π {\displaystyle |\Im (\theta )|<\pi } , where Γ(z) is the gamma function, and related to a special value of the Dedekind eta function. Expanding into series of powers and equating coefficients of θ0, θ4, and θ8 gives some deep identities for the hyperbolic secant. In 1918, Hardy and Ramanujan studied the partition function P(n) extensively. They gave a non-convergent asymptotic series that permits exact computation of the number of partitions of an integer. In 1937, Hans Rademacher refined their formula to find an exact convergent series solution to this problem. Ramanujan and Hardy's work in this area gave rise to a powerful new method for finding asymptotic formulae called the circle method. In the last year of his life, Ramanujan discovered mock theta functions. For many years, these functions were a mystery, but they are now known to be the holomorphic parts of harmonic weak Maass forms. === The Ramanujan conjecture === Although there are numerous statements that could have borne the name Ramanujan conjecture, one was highly influential in later work. In particular, the connection of this conjecture with conjectures of André Weil in algebraic geometry opened up new areas of research. That Ramanujan conjecture is an assertion on the size of the tau-function, which has a generating function as the discriminant modular form Δ(q), a typical cusp form in the theory of modular forms. It was finally proven in 1973, as a consequence of Pierre Deligne's proof of the Weil conjectures. The reduction step involved is complicated. Deligne won a Fields Medal in 1978 for that work. In his paper "On certain arithmetical functions", Ramanujan defined the so-called delta-function, whose coefficients are called τ(n) (the Ramanujan tau function). He proved many congruences for these numbers, such as τ(p) ≡ 1 + p11 mod 691 for primes p. This congruence (and others like it that Ramanujan proved) inspired Jean-Pierre Serre (1954 Fields Medalist) to conjecture that there is a theory of Galois representations that "explains" these congruences and more generally all modular forms. Δ(z) is the first example of a modular form to be studied in this way. Deligne (in his Fields Medal-winning work) proved Serre's conjecture. The proof of Fermat's Last Theorem proceeds by first reinterpreting elliptic curves and modular forms in terms of these Galois representations. Without this theory, there would be no proof of Fermat's Last Theorem. === Ramanujan's notebooks === While still in Madras, Ramanujan recorded the bulk of his results in four notebooks of looseleaf paper. They were mostly written up without any derivations. This is probably the origin of the misapprehension that Ramanujan was unable to prove his results and simply thought up the final result directly. Mathematician Bruce C. 
Berndt, in his review of these notebooks and Ramanujan's work, says that Ramanujan most certainly was able to prove most of his results, but chose not to record the proofs in his notes. This may have been for any number of reasons. Since paper was very expensive, Ramanujan did most of his work and perhaps his proofs on slate, after which he transferred the final results to paper. At the time, slates were commonly used by mathematics students in the Madras Presidency. He was also quite likely to have been influenced by the style of G. S. Carr's book, which stated results without proofs. It is also possible that Ramanujan considered his work to be for his personal interest alone and therefore recorded only the results. The first notebook has 351 pages with 16 somewhat organised chapters and some unorganised material. The second has 256 pages in 21 chapters and 100 unorganised pages, and the third 33 unorganised pages. The results in his notebooks inspired numerous papers by later mathematicians trying to prove what he had found. Hardy himself wrote papers exploring material from Ramanujan's work, as did G. N. Watson, B. M. Wilson, and Bruce Berndt. In 1976, George Andrews rediscovered a fourth notebook with 87 unorganised pages, the so-called "lost notebook". == Hardy–Ramanujan number 1729 == The number 1729 is known as the Hardy–Ramanujan number after a famous visit by Hardy to see Ramanujan at a hospital. In Hardy's words: I remember once going to see him when he was ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavorable omen. "No", he replied, "it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways." Immediately before this anecdote, Hardy quoted Littlewood as saying, "Every positive integer was one of [Ramanujan's] personal friends." The two different ways are: 1729 = 1 3 + 12 3 = 9 3 + 10 3 . {\displaystyle 1729=1^{3}+12^{3}=9^{3}+10^{3}.} Generalisations of this idea have created the notion of "taxicab numbers". == Mathematicians' views of Ramanujan == In his obituary of Ramanujan, written for Nature in 1920, Hardy observed that Ramanujan's work primarily involved fields less known even among other pure mathematicians, concluding: His insight into formulae was quite amazing, and altogether beyond anything I have met with in any European mathematician. It is perhaps useless to speculate as to his history had he been introduced to modern ideas and methods at sixteen instead of at twenty-six. It is not extravagant to suppose that he might have become the greatest mathematician of his time. What he actually did is wonderful enough… when the researches which his work has suggested have been completed, it will probably seem a good deal more wonderful than it does to-day. Hardy further said: He combined a power of generalisation, a feeling for form, and a capacity for rapid modification of his hypotheses, that were often really startling, and made him, in his own peculiar field, without a rival in his day. The limitations of his knowledge were as startling as its profundity. Here was a man who could work out modular equations and theorems... to orders unheard of, whose mastery of continued fractions was... 
beyond that of any mathematician in the world, who had found for himself the functional equation of the zeta function and the dominant terms of many of the most famous problems in the analytic theory of numbers; and yet he had never heard of a doubly periodic function or of Cauchy's theorem, and had indeed but the vaguest idea of what a function of a complex variable was..." As an example, Hardy commented on 15 theorems in the first letter. Of those, the first 13 are correct and insightful, the 14th is incorrect but insightful, and the 15th is correct but misleading. (14): The coefficient of x n {\displaystyle x^{n}} in ( 1 − 2 x + 2 x 4 − 2 x 9 + ⋯ ) − 1 {\displaystyle \left(1-2x+2x^{4}-2x^{9}+\cdots \right)^{-1}} is the integer nearest to 1 4 n ( cosh ( π n ) − sinh ( π n ) π n ) . {\displaystyle {\frac {1}{4n}}\left(\cosh(\pi {\sqrt {n}})-{\frac {\sinh(\pi {\sqrt {n}})}{\pi {\sqrt {n}}}}\right).} This "was one of the most fruitful he ever made, since it ended by leading us to all our joint work on partitions". When asked about the methods Ramanujan used to arrive at his solutions, Hardy said they were "arrived at by a process of mingled argument, intuition, and induction, of which he was entirely unable to give any coherent account." He also said that he had "never met his equal, and can compare him only with Euler or Jacobi". Hardy thought Ramanujan worked in a 19th-century style, where arriving at correct formulas was more important than systematic formal theories. Hardy thought his achievements were greatest in algebra, especially hypergeometric series and continued fractions. It is possible that the great days of formulas are finished, and that Ramanujan ought to have been born 100 years ago; but he was by far the greatest formalist of his time. There have been a good many more important, and I suppose one must say greater, mathematicians than Ramanujan during the last 50 years, but not one who could stand up to him on his own ground. Playing the game of which he knew the rules, he could give any mathematician in the world fifteen. He discovered fewer new things in analysis, possibly because he lacked the formal education and did not find books to learn it from, but rediscovered many results, including the prime number theorem. In analysis, he worked on the elliptic functions and the analytic theory of numbers. In analytic number theory, he was as imaginative as usual, but much of what he imagined was wrong. Hardy blamed this on the inherent difficulty of analytic number theory, where imagination had led many great mathematicians astray. In analytic number theory, rigorous proof is more important than imagination, the opposite of Ramanujan's style. His "one great failure" is that he knew "nothing at all about the theory of analytic functions". Littlewood reportedly said that helping Ramanujan catch up with European mathematics beyond what was available in India was very difficult because each new point mentioned to Ramanujan caused him to produce original ideas that prevented Littlewood from continuing the lesson. K. Srinivasa Rao has said, "As for his place in the world of Mathematics, we quote Bruce C. Berndt: 'Paul Erdős has passed on to us Hardy's personal ratings of mathematicians. Suppose that we rate mathematicians on the basis of pure talent on a scale from 0 to 100. Hardy gave himself a score of 25, J. E. 
Littlewood 30, David Hilbert 80 and Ramanujan 100.'" During a May 2011 lecture at IIT Madras, Berndt said that over the last 40 years, as nearly all of Ramanujan's conjectures had been proven, there had been greater appreciation of Ramanujan's work and brilliance, and that Ramanujan's work was now pervading many areas of modern mathematics and physics. == Posthumous recognition == The year after his death, Nature listed Ramanujan among other distinguished scientists and mathematicians on a "Calendar of Scientific Pioneers" who had achieved eminence. Ramanujan's home state of Tamil Nadu celebrates 22 December (Ramanujan's birthday) as 'State IT Day'. Stamps picturing Ramanujan were issued by the government of India in 1962, 2011, 2012 and 2016. Since Ramanujan's centennial year, his birthday, 22 December, has been annually celebrated as Ramanujan Day by the Government Arts College, Kumbakonam, where he studied, and at the IIT Madras in Chennai. The International Centre for Theoretical Physics (ICTP) has created a prize in Ramanujan's name for young mathematicians from developing countries in cooperation with the International Mathematical Union, which nominates members of the prize committee. SASTRA University, a private university based in Tamil Nadu, has instituted the SASTRA Ramanujan Prize of US$10,000 to be given annually to a mathematician not exceeding age 32 for outstanding contributions in an area of mathematics influenced by Ramanujan. Based on the recommendations of a committee appointed by the University Grants Commission (UGC), Government of India, the Srinivasa Ramanujan Centre, established by SASTRA, has been declared an off-campus centre under the ambit of SASTRA University. House of Ramanujan Mathematics, a museum of Ramanujan's life and work, is also on this campus. SASTRA purchased and renovated the house where Ramanujan lived at Kumbakonam. In 2011, on the 125th anniversary of his birth, the Indian government declared that 22 December will be celebrated every year as National Mathematics Day. Then Indian Prime Minister Manmohan Singh also declared that 2012 would be celebrated as National Mathematics Year and 22 December as National Mathematics Day of India. Ramanujan IT City is an information technology (IT) special economic zone (SEZ) in Chennai that was built in 2011. Situated next to the Tidel Park, it includes 25 acres (10 ha) with two zones, with a total area of 5.7 million square feet (530,000 m²), including 4.5 million square feet (420,000 m²) of office space. == Commemorative postal stamps == Commemorative stamps released by India Post (by year): == In popular culture == The Man Who Loved Numbers is a 1988 PBS NOVA documentary about Ramanujan (S15, E9). The Man Who Knew Infinity is a 2015 film based on Kanigel's book of the same name. British actor Dev Patel portrays Ramanujan. Ramanujan, an Indo-British collaboration film chronicling Ramanujan's life, was released in 2014 by the independent film company Camphor Cinema. The cast and crew include director Gnana Rajasekaran, cinematographer Sunny Joseph and editor B. Lenin. Indian and English stars Abhinay Vaddi, Suhasini Maniratnam, Bhama, Kevin McGowan and Michael Lieber star in pivotal roles. Nandan Kudhyadi directed the Indian documentary films The Genius of Srinivasa Ramanujan (2013) and Srinivasa Ramanujan: The Mathematician and His Legacy (2016) about the mathematician. Ramanujan (The Man Who Reshaped 20th Century Mathematics), an Indian docudrama film directed by Akashdeep released in 2018. M. N.
Krish's thriller novel The Steradian Trail weaves Ramanujan and his accidental discovery into its plot connecting religion, mathematics, finance and economics. Partition, a play by Ira Hauptman about Hardy and Ramanujan, was first performed in 2013. The play First Class Man by Alter Ego Productions was based on David Freeman's First Class Man. The play centres around Ramanujan and his complex and dysfunctional relationship with Hardy. On 16 October 2011 it was announced that Roger Spottiswoode, best known for his James Bond film Tomorrow Never Dies, is working on the film version, starring Siddharth. A Disappearing Number is a British stage production by the company Complicite that explores the relationship between Hardy and Ramanujan. David Leavitt's novel The Indian Clerk explores the events following Ramanujan's letter to Hardy. Google honoured Ramanujan on his 125th birth anniversary by replacing its logo with a doodle on its home page. Ramanujan was mentioned in the 1997 film Good Will Hunting, in a scene where professor Gerald Lambeau (Stellan Skarsgård) explains to Sean Maguire (Robin Williams) the genius of Will Hunting (Matt Damon) by comparing him to Ramanujan. == Selected papers == == Further works of Ramanujan's mathematics == George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part I (Springer, 2005, ISBN 0-387-25529-X) George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part II, (Springer, 2008, ISBN 978-0-387-77765-8) George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part III, (Springer, 2012, ISBN 978-1-4614-3809-0) George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part IV, (Springer, 2013, ISBN 978-1-4614-4080-2) George E. Andrews and Bruce C. Berndt, Ramanujan's Lost Notebook: Part V, (Springer, 2018, ISBN 978-3-319-77832-7) M. P. Chaudhary, A simple solution of some integrals given by Srinivasa Ramanujan, (Resonance: J. Sci. Education – publication of Indian Academy of Science, 2008) M.P. Chaudhary, Mock theta functions to mock theta conjectures, SCIENTIA, Series A: Math. Sci., (22)(2012) 33–46. M.P. Chaudhary, On modular relations for the Roger-Ramanujan type identities, Pacific J. Appl. Math., 7(3)(2016) 177–184. == Selected publications on Ramanujan and his work == == Selected publications on works of Ramanujan == == See also == == Footnotes == == References == == External links == === Media links === Biswas, Soutik (16 March 2006). "Film to celebrate mathematics genius". BBC. Retrieved 24 August 2006. Feature Film on Mathematics Genius Ramanujan by Dev Benegal and Stephen Fry BBC radio programme about Ramanujan – episode 5 A biographical song about Ramanujan's life "Why Did This Mathematician's Equations Make Everyone So Angry?". Thoughty2. 11 April 2022. Retrieved 29 June 2022 – via YouTube. === Biographical links === Srinivasa Ramanujan at the Mathematics Genealogy Project O'Connor, John J.; Robertson, Edmund F., "Srinivasa Ramanujan", MacTutor History of Mathematics Archive, University of St Andrews Weisstein, Eric Wolfgang (ed.). "Ramanujan, Srinivasa (1887–1920)". ScienceWorld. A short biography of Ramanujan "Our Devoted Site for Great Mathematical Genius" === Other links === Wolfram, Stephen (27 April 2016). "Who Was Ramanujan?". Stephen Wolfram Writings. 
A Study Group For Mathematics: Srinivasa Ramanujan Iyengar The Ramanujan Journal – An international journal devoted to Ramanujan International Math Union Prizes, including a Ramanujan Prize Hindu.com: Norwegian and Indian mathematical geniuses, Ramanujan – Essays and Surveys Archived 6 November 2012 at the Wayback Machine, Ramanujan's growing influence, Ramanujan's mentor Hindu.com: The sponsor of Ramanujan Bruce C. Berndt; Robert A. Rankin (2000). "The Books Studied by Ramanujan in India". American Mathematical Monthly. 107 (7): 595–601. doi:10.2307/2589114. JSTOR 2589114. MR 1786233. "Ramanujan's mock theta function puzzle solved" Ramanujan's papers and notebooks Sample page from the second notebook Ramanujan on Fried Eye Clark, Alex. "163 and Ramanujan Constant". Numberphile. Brady Haran. Archived from the original on 4 February 2018. Retrieved 23 June 2018.
|
https://en.wikipedia.org/wiki/Srinivasa_Ramanujan
|
In mathematics, a submersion is a differentiable map between differentiable manifolds whose differential is everywhere surjective. It is a basic concept in differential topology, dual to that of an immersion. == Definition == Let M and N be differentiable manifolds, and let f : M → N {\displaystyle f\colon M\to N} be a differentiable map between them. The map f is a submersion at a point p ∈ M {\displaystyle p\in M} if its differential D f p : T p M → T f ( p ) N {\displaystyle Df_{p}\colon T_{p}M\to T_{f(p)}N} is a surjective linear map. In this case, p is called a regular point of the map f; otherwise, p is a critical point. A point q ∈ N {\displaystyle q\in N} is a regular value of f if all points p in the preimage f − 1 ( q ) {\displaystyle f^{-1}(q)} are regular points. A differentiable map f that is a submersion at each point p ∈ M {\displaystyle p\in M} is called a submersion. Equivalently, f is a submersion if its differential D f p {\displaystyle Df_{p}} has constant rank equal to the dimension of N. Some authors use the term critical point to describe a point where the rank of the Jacobian matrix of f at p is not maximal. Indeed, this is the more useful notion in singularity theory. If the dimension of M is greater than or equal to the dimension of N, then these two notions of critical point coincide. However, if the dimension of M is less than the dimension of N, all points are critical according to the definition above (the differential cannot be surjective), but the rank of the Jacobian may still be maximal (if it is equal to dim M). The definition given above is the more commonly used one, e.g., in the formulation of Sard's theorem. == Submersion theorem == Given a submersion f : M → N {\displaystyle f\colon M\to N} between smooth manifolds of dimensions m {\displaystyle m} and n {\displaystyle n} , for each x ∈ M {\displaystyle x\in M} there exist surjective charts ϕ : U → R m {\displaystyle \phi :U\to \mathbb {R} ^{m}} of M {\displaystyle M} around x {\displaystyle x} , and ψ : V → R n {\displaystyle \psi :V\to \mathbb {R} ^{n}} of N {\displaystyle N} around f ( x ) {\displaystyle f(x)} , such that f {\displaystyle f} restricts to a submersion f : U → V {\displaystyle f\colon U\to V} which, when expressed in coordinates as ψ ∘ f ∘ ϕ − 1 : R m → R n {\displaystyle \psi \circ f\circ \phi ^{-1}:\mathbb {R} ^{m}\to \mathbb {R} ^{n}} , becomes an ordinary orthogonal projection. As an application, for each p ∈ N {\displaystyle p\in N} the corresponding fiber of f {\displaystyle f} , denoted M p = f − 1 ( p ) {\displaystyle M_{p}=f^{-1}({p})} can be equipped with the structure of a smooth submanifold of M {\displaystyle M} whose dimension equals the difference of the dimensions of M {\displaystyle M} and N {\displaystyle N} . This theorem is a consequence of the inverse function theorem (see Inverse function theorem#Giving a manifold structure). For example, consider f : R 3 → R {\displaystyle f\colon \mathbb {R} ^{3}\to \mathbb {R} } given by f ( x , y , z ) = x 4 + y 4 + z 4 . {\displaystyle f(x,y,z)=x^{4}+y^{4}+z^{4}.} The Jacobian matrix is [ ∂ f ∂ x ∂ f ∂ y ∂ f ∂ z ] = [ 4 x 3 4 y 3 4 z 3 ] . {\displaystyle {\begin{bmatrix}{\frac {\partial f}{\partial x}}&{\frac {\partial f}{\partial y}}&{\frac {\partial f}{\partial z}}\end{bmatrix}}={\begin{bmatrix}4x^{3}&4y^{3}&4z^{3}\end{bmatrix}}.} This has maximal rank at every point except for ( 0 , 0 , 0 ) {\displaystyle (0,0,0)} .
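The rank computation in this example can also be checked mechanically. The sketch below assumes SymPy is available; the sample evaluation points are arbitrary choices for illustration, not taken from the source.

```python
import sympy as sp

# Sketch: where does f(x, y, z) = x**4 + y**4 + z**4 fail to be a submersion?
x, y, z = sp.symbols('x y z', real=True)
f = sp.Matrix([x**4 + y**4 + z**4])

J = f.jacobian([x, y, z])   # 1x3 matrix [4*x**3, 4*y**3, 4*z**3]
print(J)

# The differential is surjective onto R exactly where this 1x3 Jacobian has rank 1,
# i.e. where at least one entry is nonzero.
print(J.subs({x: 1, y: 2, z: -1}).rank())   # 1 -> regular point
print(J.subs({x: 0, y: 0, z: 0}).rank())    # 0 -> the only critical point, the origin
```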
Also, the fibers f − 1 ( { t } ) = { ( a , b , c ) ∈ R 3 : a 4 + b 4 + c 4 = t } {\displaystyle f^{-1}(\{t\})=\left\{(a,b,c)\in \mathbb {R} ^{3}:a^{4}+b^{4}+c^{4}=t\right\}} are empty for t < 0 {\displaystyle t<0} , and equal to a point when t = 0 {\displaystyle t=0} . Hence, we only have a smooth submersion f : R 3 ∖ { ( 0 , 0 , 0 ) } → R > 0 , {\displaystyle f\colon \mathbb {R} ^{3}\setminus \{(0,0,0)\}\to \mathbb {R} _{>0},} and the subsets M t = { ( a , b , c ) ∈ R 3 : a 4 + b 4 + c 4 = t } {\displaystyle M_{t}=\left\{(a,b,c)\in \mathbb {R} ^{3}:a^{4}+b^{4}+c^{4}=t\right\}} are two-dimensional smooth manifolds for t > 0 {\displaystyle t>0} . == Examples == Any projection π : R m + n → R n ⊂ R m + n {\displaystyle \pi \colon \mathbb {R} ^{m+n}\rightarrow \mathbb {R} ^{n}\subset \mathbb {R} ^{m+n}} Local diffeomorphisms Riemannian submersions The projection in a smooth vector bundle or a more general smooth fibration. The surjectivity of the differential is a necessary condition for the existence of a local trivialization. === Maps between spheres === A large class of examples of submersions are submersions between spheres of higher dimension, such as f : S n + k → S k {\displaystyle f:S^{n+k}\to S^{k}} whose fibers have dimension n {\displaystyle n} . This is because the fibers (inverse images of elements p ∈ S k {\displaystyle p\in S^{k}} ) are smooth manifolds of dimension n {\displaystyle n} . Then, if we take a path γ : I → S k {\displaystyle \gamma :I\to S^{k}} and take the pullback M I → S n + k ↓ ↓ f I → γ S k {\displaystyle {\begin{matrix}M_{I}&\to &S^{n+k}\\\downarrow &&\downarrow f\\I&{\xrightarrow {\gamma }}&S^{k}\end{matrix}}} we get an example of a special kind of bordism, called a framed bordism. In fact, the framed cobordism groups Ω n f r {\displaystyle \Omega _{n}^{fr}} are intimately related to the stable homotopy groups. === Families of algebraic varieties === Another large class of submersions is given by families of algebraic varieties π : X → S {\displaystyle \pi :{\mathfrak {X}}\to S} whose fibers are smooth algebraic varieties. If we consider the underlying manifolds of these varieties, we get smooth manifolds. For example, the Weierstrass family π : W → A 1 {\displaystyle \pi :{\mathcal {W}}\to \mathbb {A} ^{1}} of elliptic curves is a widely studied submersion because it includes many technical complexities used to demonstrate more complex theory, such as intersection homology and perverse sheaves. This family is given by W = { ( t , x , y ) ∈ A 1 × A 2 : y 2 = x ( x − 1 ) ( x − t ) } {\displaystyle {\mathcal {W}}=\left\{(t,x,y)\in \mathbb {A} ^{1}\times \mathbb {A} ^{2}:y^{2}=x(x-1)(x-t)\right\}} where A 1 {\displaystyle \mathbb {A} ^{1}} is the affine line and A 2 {\displaystyle \mathbb {A} ^{2}} is the affine plane. Since we are considering complex varieties, these are equivalently the spaces C , C 2 {\displaystyle \mathbb {C} ,\mathbb {C} ^{2}} of the complex line and the complex plane. Note that we should actually remove the points t = 0 , 1 {\displaystyle t=0,1} because there are singularities (since there is a double root). == Local normal form == If f: M → N is a submersion at p and f(p) = q ∈ N, then there exists an open neighborhood U of p in M, an open neighborhood V of q in N, and local coordinates (x1, …, xm) at p and (x1, …, xn) at q such that f(U) = V, and the map f in these local coordinates is the standard projection f ( x 1 , … , x n , x n + 1 , … , x m ) = ( x 1 , … , x n ) . 
{\displaystyle f(x_{1},\ldots ,x_{n},x_{n+1},\ldots ,x_{m})=(x_{1},\ldots ,x_{n}).} It follows that the full preimage f−1(q) in M of a regular value q in N under a differentiable map f: M → N is either empty or a differentiable manifold of dimension dim M − dim N, possibly disconnected. This is the content of the regular value theorem (also known as the submersion theorem). In particular, the conclusion holds for all q in N if the map f is a submersion. == Topological manifold submersions == Submersions are also well-defined for general topological manifolds. A topological manifold submersion is a continuous surjection f : M → N such that for all p in M, for some continuous charts ψ at p and φ at f(p), the map φ ∘ f ∘ ψ−1 is equal to the projection map from Rm to Rn, where m = dim(M) ≥ n = dim(N). == See also == Ehresmann's fibration theorem == Notes == == References == Arnold, Vladimir I.; Gusein-Zade, Sabir M.; Varchenko, Alexander N. (1985). Singularities of Differentiable Maps: Volume 1. Birkhäuser. ISBN 0-8176-3187-9. Bruce, James W.; Giblin, Peter J. (1984). Curves and Singularities. Cambridge University Press. ISBN 0-521-42999-4. MR 0774048. Crampin, Michael; Pirani, Felix Arnold Edward (1994). Applicable differential geometry. Cambridge, England: Cambridge University Press. ISBN 978-0-521-23190-9. do Carmo, Manfredo Perdigao (1994). Riemannian Geometry. ISBN 978-0-8176-3490-2. Frankel, Theodore (1997). The Geometry of Physics. Cambridge: Cambridge University Press. ISBN 0-521-38753-1. MR 1481707. Gallot, Sylvestre; Hulin, Dominique; Lafontaine, Jacques (2004). Riemannian Geometry (3rd ed.). Berlin, New York: Springer-Verlag. ISBN 978-3-540-20493-0. Kosinski, Antoni Albert (2007) [1993]. Differential manifolds. Mineola, New York: Dover Publications. ISBN 978-0-486-46244-8. Lang, Serge (1999). Fundamentals of Differential Geometry. Graduate Texts in Mathematics. New York: Springer. ISBN 978-0-387-98593-0. Sternberg, Shlomo Zvi (2012). Curvature in Mathematics and Physics. Mineola, New York: Dover Publications. ISBN 978-0-486-47855-5. == Further reading == https://mathoverflow.net/questions/376129/what-are-the-sufficient-and-necessary-conditions-for-surjective-submersions-to-b?rq=1
|
https://en.wikipedia.org/wiki/Submersion_(mathematics)
|
In mathematics, parity is the property of an integer of whether it is even or odd. An integer is even if it is divisible by 2, and odd if it is not. For example, −4, 0, and 82 are even numbers, while −3, 5, 23, and 69 are odd numbers. The above definition of parity applies only to integer numbers, hence it cannot be applied to numbers with decimals or fractions like 1/2 or 4.6978. See the section "Higher mathematics" below for some extensions of the notion of parity to a larger class of "numbers" or in other more general settings. Even and odd numbers have opposite parities, e.g., 22 (even number) and 13 (odd number) have opposite parities. In particular, the parity of zero is even. Any two consecutive integers have opposite parity. A number (i.e., integer) expressed in the decimal numeral system is even or odd according to whether its last digit is even or odd. That is, if the last digit is 1, 3, 5, 7, or 9, then it is odd; otherwise it is even—as the last digit of any even number is 0, 2, 4, 6, or 8. The same idea will work using any even base. In particular, a number expressed in the binary numeral system is odd if its last digit is 1; and it is even if its last digit is 0. In an odd base, the number is even according to the sum of its digits—it is even if and only if the sum of its digits is even. == Definition == An even number is an integer of the form x = 2 k {\displaystyle x=2k} where k is an integer; an odd number is an integer of the form x = 2 k + 1. {\displaystyle x=2k+1.} An equivalent definition is that an even number is divisible by 2: 2 | x {\displaystyle 2\ |\ x} and an odd number is not: 2 ⧸ | x {\displaystyle 2\not |\ x} The sets of even and odd numbers can be defined as following: { 2 k : k ∈ Z } {\displaystyle \{2k:k\in \mathbb {Z} \}} { 2 k + 1 : k ∈ Z } {\displaystyle \{2k+1:k\in \mathbb {Z} \}} The set of even numbers is a prime ideal of Z {\displaystyle \mathbb {Z} } and the quotient ring Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } is the field with two elements. Parity can then be defined as the unique ring homomorphism from Z {\displaystyle \mathbb {Z} } to Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } where odd numbers are 1 and even numbers are 0. The consequences of this homomorphism are covered below. == Properties == The following laws can be verified using the properties of divisibility. They are a special case of rules in modular arithmetic, and are commonly used to check if an equality is likely to be correct by testing the parity of each side. As with ordinary arithmetic, multiplication and addition are commutative and associative in modulo 2 arithmetic, and multiplication is distributive over addition. However, subtraction in modulo 2 is identical to addition, so subtraction also possesses these properties, which is not true for normal integer arithmetic. === Addition and subtraction === even ± even = even; even ± odd = odd; odd ± odd = even; === Multiplication === even × even = even; even × odd = even; odd × odd = odd; By construction in the previous section, the structure ({even, odd}, +, ×) is in fact the field with two elements. === Division === The division of two whole numbers does not necessarily result in a whole number. For example, 1 divided by 4 equals 1/4, which is neither even nor odd, since the concepts of even and odd apply only to integers. But when the quotient is an integer, it will be even if and only if the dividend has more factors of two than the divisor. 
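The digit-based tests described above (last digit in an even base, digit sum in an odd base) are easy to state in code. The following Python sketch cross-checks them against the definition n mod 2; the helper names digits and is_even_by_digits are invented for the example.

```python
def digits(n: int, base: int) -> list[int]:
    """Digits of |n| in the given base, most significant first."""
    n = abs(n)
    if n == 0:
        return [0]
    out = []
    while n:
        out.append(n % base)
        n //= base
    return out[::-1]

def is_even_by_digits(n: int, base: int) -> bool:
    ds = digits(n, base)
    if base % 2 == 0:
        return ds[-1] % 2 == 0   # even base: the last digit alone decides parity
    return sum(ds) % 2 == 0      # odd base: the parity of the digit sum decides

# Cross-check against the definition (n is even iff n % 2 == 0) in a few bases.
for base in (2, 3, 7, 10, 16):
    assert all(is_even_by_digits(n, base) == (n % 2 == 0) for n in range(-50, 200))
print("digit tests agree with n % 2 in bases 2, 3, 7, 10 and 16")
```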
== History == The ancient Greeks considered 1, the monad, to be neither fully odd nor fully even. Some of this sentiment survived into the 19th century: Friedrich Wilhelm August Fröbel's 1826 The Education of Man instructs the teacher to drill students with the claim that 1 is neither even nor odd, to which Fröbel attaches the philosophical afterthought, It is well to direct the pupil's attention here at once to a great far-reaching law of nature and of thought. It is this, that between two relatively different things or ideas there stands always a third, in a sort of balance, seeming to unite the two. Thus, there is here between odd and even numbers one number (one) which is neither of the two. Similarly, in form, the right angle stands between the acute and obtuse angles; and in language, the semi-vowels or aspirants between the mutes and vowels. A thoughtful teacher and a pupil taught to think for himself can scarcely help noticing this and other important laws. == Higher mathematics == === Higher dimensions and more general classes of numbers === Integer coordinates of points in Euclidean spaces of two or more dimensions also have a parity, usually defined as the parity of the sum of the coordinates. For instance, the face-centered cubic lattice and its higher-dimensional generalizations (the Dn lattices) consist of all of the integer points whose coordinates have an even sum. This feature also manifests itself in chess, where the parity of a square is indicated by its color: bishops are constrained to moving between squares of the same parity, whereas knights alternate parity between moves. This form of parity was famously used to solve the mutilated chessboard problem: if two opposite corner squares are removed from a chessboard, then the remaining board cannot be covered by dominoes, because each domino covers one square of each parity and there are two more squares of one parity than of the other. The parity of an ordinal number may be defined to be even if the number is a limit ordinal, or a limit ordinal plus a finite even number, and odd otherwise. Let R be a commutative ring and let I be an ideal of R whose index is 2. Elements of the coset 0 + I {\displaystyle 0+I} may be called even, while elements of the coset 1 + I {\displaystyle 1+I} may be called odd. As an example, let R = Z(2) be the localization of Z at the prime ideal (2). Then an element of R is even or odd if and only if its numerator is so in Z. === Number theory === The even numbers form an ideal in the ring of integers, but the odd numbers do not—this is clear from the fact that the identity element for addition, zero, is an element of the even numbers only. An integer is even if it is congruent to 0 modulo this ideal, in other words if it is congruent to 0 modulo 2, and odd if it is congruent to 1 modulo 2. All prime numbers are odd, with one exception: the prime number 2. All known perfect numbers are even; it is unknown whether any odd perfect numbers exist. Goldbach's conjecture states that every even integer greater than 2 can be represented as a sum of two prime numbers. Modern computer calculations have shown this conjecture to be true for integers up to at least 4 × 10¹⁸, but still no general proof has been found. === Group theory === The parity of a permutation (as defined in abstract algebra) is the parity of the number of transpositions into which the permutation can be decomposed. For example (ABC) to (BCA) is even because it can be done by swapping A and B then C and A (two transpositions). 
It can be shown that no permutation can be decomposed both in an even and in an odd number of transpositions. Hence the above is a suitable definition. In Rubik's Cube, Megaminx, and other twisting puzzles, the moves of the puzzle allow only even permutations of the puzzle pieces, so parity is important in understanding the configuration space of these puzzles. The Feit–Thompson theorem states that a finite group is always solvable if its order is an odd number. This is an example of odd numbers playing a role in an advanced mathematical theorem where the method of application of the simple hypothesis of "odd order" is far from obvious. === Analysis === The parity of a function describes how its values change when its arguments are exchanged with their negations. An even function, such as an even power of a variable, gives the same result for any argument as for its negation. An odd function, such as an odd power of a variable, gives for any argument the negation of its result when given the negation of that argument. It is possible for a function to be neither odd nor even, and for the case f(x) = 0, to be both odd and even. The Taylor series of an even function contains only terms whose exponent is an even number, and the Taylor series of an odd function contains only terms whose exponent is an odd number. === Combinatorial game theory === In combinatorial game theory, an evil number is a number that has an even number of 1's in its binary representation, and an odious number is a number that has an odd number of 1's in its binary representation; these numbers play an important role in the strategy for the game Kayles. The parity function maps a number to the number of 1's in its binary representation, modulo 2, so its value is zero for evil numbers and one for odious numbers. The Thue–Morse sequence, an infinite sequence of 0's and 1's, has a 0 in position i when i is evil, and a 1 in that position when i is odious. == Additional applications == In information theory, a parity bit appended to a binary number provides the simplest form of error detecting code. If a single bit in the resulting value is changed, then it will no longer have the correct parity: changing a bit in the original number gives it a different parity than the recorded one, and changing the parity bit while not changing the number it was derived from again produces an incorrect result. In this way, all single-bit transmission errors may be reliably detected. Some more sophisticated error detecting codes are also based on the use of multiple parity bits for subsets of the bits of the original encoded value. In wind instruments with a cylindrical bore and in effect closed at one end, such as the clarinet at the mouthpiece, the harmonics produced are odd multiples of the fundamental frequency. (With cylindrical pipes open at both ends, used for example in some organ stops such as the open diapason, the harmonics are even multiples of the same frequency for the given bore length, but this has the effect of the fundamental frequency being doubled and all multiples of this fundamental frequency being produced.) See harmonic series (music). In some countries, house numberings are chosen so that the houses on one side of a street have even numbers and the houses on the other side have odd numbers. Similarly, among United States numbered highways, even numbers primarily indicate east–west highways while odd numbers primarily indicate north–south highways. 
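Returning to the parity bit described under the applications above, the following Python sketch shows the mechanism concretely; the helper names parity, encode and check are invented for the example, and the sample word is arbitrary.

```python
def parity(bits: list[int]) -> int:
    return sum(bits) % 2            # 0 for an even number of 1s, 1 for an odd number

def encode(data: list[int]) -> list[int]:
    return data + [parity(data)]    # append a parity bit so the whole word has even parity

def check(word: list[int]) -> bool:
    return parity(word) == 0        # consistent words have even parity overall

word = encode([1, 0, 1, 1, 0, 0, 1])
assert check(word)

# Every single-bit error (in the data or in the parity bit itself) changes the parity.
for i in range(len(word)):
    corrupted = word.copy()
    corrupted[i] ^= 1
    assert not check(corrupted)

# Two errors can cancel out: a single parity bit only detects odd numbers of bit flips.
double = word.copy()
double[0] ^= 1
double[1] ^= 1
print("double error detected?", not check(double))   # False
```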
Among airline flight numbers, even numbers typically identify eastbound or northbound flights, and odd numbers typically identify westbound or southbound flights. == See also == Divisor Half-integer == References ==
|
https://en.wikipedia.org/wiki/Parity_(mathematics)
|
In mathematics, a rigid collection C of mathematical objects (for instance sets or functions) is one in which every c ∈ C is uniquely determined by less information about c than one would expect. The above statement does not define a mathematical property; instead, it describes in what sense the adjective "rigid" is typically used in mathematics, by mathematicians. == Examples == Some examples include: Harmonic functions on the unit disk are rigid in the sense that they are uniquely determined by their boundary values. Holomorphic functions are determined by the set of all derivatives at a single point. A smooth function from the real line to the complex plane is not, in general, determined by all its derivatives at a single point, but it is if we require additionally that it be possible to extend the function to one on a neighbourhood of the real line in the complex plane. The Schwarz lemma is an example of such a rigidity theorem. By the fundamental theorem of algebra, polynomials in C are rigid in the sense that any polynomial is completely determined by its values on any infinite set, say N, or the unit disk. By the previous example, a polynomial is also determined within the set of holomorphic functions by the finite set of its non-zero derivatives at any single point. Linear maps L(X, Y) between vector spaces X, Y are rigid in the sense that any L ∈ L(X, Y) is completely determined by its values on any set of basis vectors of X. Mostow's rigidity theorem, which states that the geometric structure of negatively curved manifolds is determined by their topological structure. A well-ordered set is rigid in the sense that the only (order-preserving) automorphism on it is the identity function. Consequently, an isomorphism between two given well-ordered sets will be unique. Cauchy's theorem on geometry of convex polytopes states that a convex polytope is uniquely determined by the geometry of its faces and combinatorial adjacency rules. Alexandrov's uniqueness theorem states that a convex polyhedron in three dimensions is uniquely determined by the metric space of geodesics on its surface. Rigidity results in K-theory show isomorphisms between various algebraic K-theory groups. Rigid groups in the inverse Galois problem. == Combinatorial use == In combinatorics, the term rigid is also used to define the notion of a rigid surjection, which is a surjection f : n → m {\displaystyle f:n\to m} for which the following equivalent conditions hold: For every i , j ∈ m {\displaystyle i,j\in m} , i < j ⟹ min f − 1 ( i ) < min f − 1 ( j ) {\displaystyle i<j\implies \min f^{-1}(i)<\min f^{-1}(j)} ; Considering f {\displaystyle f} as an n {\displaystyle n} -tuple ( f ( 0 ) , f ( 1 ) , … , f ( n − 1 ) ) {\displaystyle {\big (}f(0),f(1),\ldots ,f(n-1){\big )}} , the first occurrences of the elements in m {\displaystyle m} are in increasing order; f {\displaystyle f} maps initial segments of n {\displaystyle n} to initial segments of m {\displaystyle m} . This relates to the above definition of rigid, in that each rigid surjection f {\displaystyle f} uniquely defines, and is uniquely defined by, a partition of n {\displaystyle n} into m {\displaystyle m} pieces. Given a rigid surjection f {\displaystyle f} , the partition is defined by n = f − 1 ( 0 ) ⊔ ⋯ ⊔ f − 1 ( m − 1 ) {\displaystyle n=f^{-1}(0)\sqcup \cdots \sqcup f^{-1}(m-1)} . 
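A short Python sketch may help make this correspondence concrete: it checks the "first occurrences in increasing order" condition and lists the blocks f⁻¹(0), …, f⁻¹(m − 1) of the associated partition. The function names and the sample tuple are invented for the illustration.

```python
# Sketch: rigid surjections f : {0,...,n-1} -> {0,...,m-1}, given as the tuple
# (f(0), ..., f(n-1)), and the partition of n they determine.

def is_rigid_surjection(f: list[int], m: int) -> bool:
    if set(f) != set(range(m)):
        return False                                   # not a surjection onto {0,...,m-1}
    first = [f.index(j) for j in range(m)]             # first occurrence of each value
    return all(first[j] < first[j + 1] for j in range(m - 1))

def partition_from(f: list[int], m: int) -> list[set[int]]:
    # The blocks f^{-1}(0), ..., f^{-1}(m-1), in the order induced by f.
    return [{i for i, v in enumerate(f) if v == j} for j in range(m)]

f = [0, 0, 1, 0, 2, 1]                       # first occurrences of 0, 1, 2 at positions 0, 2, 4
print(is_rigid_surjection(f, 3))             # True
print(is_rigid_surjection([1, 0, 2, 0], 3))  # False: 1 appears before 0
print(partition_from(f, 3))                  # [{0, 1, 3}, {2, 5}, {4}]
```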
Conversely, given a partition of n = A 0 ⊔ ⋯ ⊔ A m − 1 {\displaystyle n=A_{0}\sqcup \cdots \sqcup A_{m-1}} , order the A i {\displaystyle A_{i}} by letting A i ≺ A j ⟺ min A i < min A j {\displaystyle A_{i}\prec A_{j}\iff \min A_{i}<\min A_{j}} . If n = B 0 ⊔ ⋯ ⊔ B m − 1 {\displaystyle n=B_{0}\sqcup \cdots \sqcup B_{m-1}} is now the ≺ {\displaystyle \prec } -ordered partition, the function f : n → m {\displaystyle f:n\to m} defined by f ( i ) = j ⟺ i ∈ B j {\displaystyle f(i)=j\iff i\in B_{j}} is a rigid surjection. == See also == Uniqueness theorem Structural rigidity, a mathematical theory describing the degrees of freedom of ensembles of rigid physical objects connected together by flexible hinges. Level structure (algebraic geometry) == References == This article incorporates material from rigid on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
|
https://en.wikipedia.org/wiki/Rigidity_(mathematics)
|
In contemporary education, mathematics education—known in Europe as the didactics or pedagogy of mathematics—is the practice of teaching, learning, and carrying out scholarly research into the transfer of mathematical knowledge. Although research into mathematics education is primarily concerned with the tools, methods, and approaches that facilitate practice or the study of practice, it also covers an extensive field of study encompassing a variety of different concepts, theories and methods. National and international organisations regularly hold conferences and publish literature in order to improve mathematics education. == History == === Ancient === Elementary mathematics were a core part of education in many ancient civilisations, including ancient Egypt, ancient Babylonia, ancient Greece, ancient Rome, and Vedic India. In most cases, formal education was only available to male children with sufficiently high status, wealth, or caste. The oldest known mathematics textbook is the Rhind papyrus, dated from circa 1650 BCE. ==== Pythagorean theorem ==== Historians of Mesopotamia have confirmed that use of the Pythagorean rule dates back to the Old Babylonian Empire (20th–16th centuries BC) and that it was being taught in scribal schools over one thousand years before the birth of Pythagoras. In Plato's division of the liberal arts into the trivium and the quadrivium, the quadrivium included the mathematical fields of arithmetic and geometry. This structure was continued in the structure of classical education that was developed in medieval Europe. The teaching of geometry was almost universally based on Euclid's Elements. Apprentices to trades such as masons, merchants, and moneylenders could expect to learn such practical mathematics as was relevant to their profession. === Medieval and early modern === In the Middle Ages, the academic status of mathematics declined, because it was strongly associated with trade and commerce, and considered somewhat un-Christian. Although it continued to be taught in European universities, it was seen as subservient to the study of natural, metaphysical, and moral philosophy. The first modern arithmetic curriculum (starting with addition, then subtraction, multiplication, and division) arose at reckoning schools in Italy in the 1300s. Spreading along trade routes, these methods were designed to be used in commerce. They contrasted with Platonic math taught at universities, which was more philosophical and concerned numbers as concepts rather than calculating methods. They also contrasted with mathematical methods learned by artisan apprentices, which were specific to the tasks and tools at hand. For example, the division of a board into thirds can be accomplished with a piece of string, instead of measuring the length and using the arithmetic operation of division. The first mathematics textbooks to be written in English and French were published by Robert Recorde, beginning with The Grounde of Artes in 1543. However, there are many different writings on mathematics and mathematics methodology that date back to 1800 BCE. These were mostly located in Mesopotamia, where the Sumerians were practicing multiplication and division. There are also artifacts demonstrating their methodology for solving equations like the quadratic equation. After the Sumerians, some of the most famous ancient works on mathematics came from Egypt in the form of the Rhind Mathematical Papyrus and the Moscow Mathematical Papyrus. 
The more famous Rhind Papyrus has been dated back to approximately 1650 BCE, but it is thought to be a copy of an even older scroll. This papyrus was essentially an early textbook for Egyptian students. The social status of mathematical study was improving by the seventeenth century, with the University of Aberdeen creating a Mathematics Chair in 1613, followed by the Chair in Geometry being set up in University of Oxford in 1619 and the Lucasian Chair of Mathematics being established by the University of Cambridge in 1662. === Modern === In the 18th and 19th centuries, the Industrial Revolution led to an enormous increase in urban populations. Basic numeracy skills, such as the ability to tell the time, count money, and carry out simple arithmetic, became essential in this new urban lifestyle. Within the new public education systems, mathematics became a central part of the curriculum from an early age. By the twentieth century, mathematics was part of the core curriculum in all developed countries. During the twentieth century, mathematics education was established as an independent field of research. Main events in this development include the following: In 1893, a Chair in mathematics education was created at the University of Göttingen, under the administration of Felix Klein. The International Commission on Mathematical Instruction (ICMI) was founded in 1908, and Felix Klein became the first president of the organisation. The professional periodical literature on mathematics education in the United States had generated more than 4,000 articles after 1920, so in 1941 William L. Schaaf published a classified index, sorting them into their various subjects. A renewed interest in mathematics education emerged in the 1960s, and the International Commission was revitalized. In 1968, the Shell Centre for Mathematical Education was established in Nottingham. The first International Congress on Mathematical Education (ICME) was held in Lyon in 1969. The second congress was in Exeter in 1972, and after that, it has been held every four years. Midway through the twentieth century, the cultural impact of the "electronic age" (McLuhan) was also taken up by educational theory and the teaching of mathematics. While previous approach focused on "working with specialized 'problems' in arithmetic", the emerging structural approach to knowledge had "small children meditating about number theory and 'sets'." Since the 1980s, there have been a number of efforts to reform the traditional curriculum, which focuses on continuous mathematics and relegates even some basic discrete concepts to advanced study, to better balance coverage of the continuous and discrete sides of the subject: In the 1980s and early 1990s, there was a push to make discrete mathematics more available at the post-secondary level; From the late 1980s into the new millennium, countries like the US began to identify and standardize sets of discrete mathematics topics for primary and secondary education; Concurrently, academics began compiling practical advice on introducing discrete math topics into the classroom; Researchers continued arguing the urgency of making the transition throughout the 2000s; and In parallel, some textbook authors began working on materials explicitly designed to provide more balance. Similar efforts are also underway to shift more focus to mathematical modeling as well as its relationship to discrete math. 
== Objectives == At different times and in different cultures and countries, mathematics education has attempted to achieve a variety of different objectives. These objectives have included: The teaching and learning of basic numeracy skills to all students The teaching of practical mathematics (arithmetic, elementary algebra, plane and solid geometry, trigonometry, probability, statistics) to most students, to equip them to follow a trade or craft and to understand mathematics commonly used in news and Internet (such as percentages, charts, probability, and statistics) The teaching of abstract mathematical concepts (such as set and function) at an early age The teaching of selected areas of mathematics (such as Euclidean geometry) as an example of an axiomatic system and a model of deductive reasoning The teaching of selected areas of mathematics (such as calculus) as an example of the intellectual achievements of the modern world The teaching of advanced mathematics to those students who wish to follow a career in science, technology, engineering, and mathematics (STEM) fields The teaching of heuristics and other problem-solving strategies to solve non-routine problems The teaching of mathematics in social sciences and actuarial sciences, as well as in some selected arts under liberal arts education in liberal arts colleges or universities == Methods == The method or methods used in any particular context are largely determined by the objectives that the relevant educational system is trying to achieve. Methods of teaching mathematics include the following: Computer-based math: an approach based on the use of mathematical software as the primary tool of computation. Computer-based mathematics education: involves the use of computers to teach mathematics. Mobile applications have also been developed to help students learn mathematics. Classical education: the teaching of mathematics within the quadrivium, part of the classical education curriculum of the Middle Ages, which was typically based on Euclid's Elements taught as a paradigm of deductive reasoning. Conventional approach: the gradual and systematic guiding through the hierarchy of mathematical notions, ideas and techniques. Starts with arithmetic and is followed by Euclidean geometry and elementary algebra taught concurrently. Requires the instructor to be well informed about elementary mathematics since didactic and curriculum decisions are often dictated by the logic of the subject rather than pedagogical considerations. Other methods emerge by emphasizing some aspects of this approach. Relational approach: uses class topics to solve everyday problems and relates the topic to current events. This approach focuses on the many uses of mathematics and helps students understand why they need to know it as well as helps them to apply mathematics to real-world situations outside of the classroom. Historical method: teaching the development of mathematics within a historical, social, and cultural context. Proponents argue it provides more human interest than the conventional approach. Discovery math: a constructivist method of teaching (discovery learning) mathematics which centres around problem-based or inquiry-based learning, with the use of open-ended questions and manipulative tools. This type of mathematics education was implemented in various parts of Canada beginning in 2005. Discovery-based mathematics is at the forefront of the Canadian "math wars" debate with many criticizing it for declining math scores. 
New Math: a method of teaching mathematics which focuses on abstract concepts such as set theory, functions, and bases other than ten. Adopted in the US as a response to the challenge of early Soviet technical superiority in space, it began to be challenged in the late 1960s. One of the most influential critiques of the New Math was Morris Kline's 1973 book Why Johnny Can't Add. The New Math method was the topic of one of Tom Lehrer's most popular parody songs, with his introductory remarks to the song: "...in the new approach, as you know, the important thing is to understand what you're doing, rather than to get the right answer." Recreational mathematics: mathematical problems that are fun can motivate students to learn mathematics and can increase their enjoyment of mathematics. Standards-based mathematics: a vision for pre-college mathematics education in the United States and Canada, focused on deepening student understanding of mathematical ideas and procedures, and formalized by the National Council of Teachers of Mathematics which created the Principles and Standards for School Mathematics. Mastery: an approach in which most students are expected to achieve a high level of competence before progressing. Problem solving: the cultivation of mathematical ingenuity, creativity, and heuristic thinking by setting students open-ended, unusual, and sometimes unsolved problems. The problems can range from simple word problems to problems from international mathematics competitions such as the International Mathematical Olympiad. Problem-solving is used as a means to build new mathematical knowledge, typically by building on students' prior understandings. Exercises: the reinforcement of mathematical skills by completing large numbers of exercises of a similar type, such as adding simple fractions or solving quadratic equations. Rote learning: the teaching of mathematical results, definitions and concepts by repetition and memorisation typically without meaning or supported by mathematical reasoning. A derisory term is drill and kill. In traditional education, rote learning is used to teach multiplication tables, definitions, formulas, and other aspects of mathematics. Math walk: a walk where experience of perceived objects and scenes is translated into mathematical language. == Content and age levels == Different levels of mathematics are taught at different ages and in somewhat different sequences in different countries. Sometimes a class may be taught at an earlier age than typical as a special or honors class. Elementary mathematics in most countries is taught similarly, though there are differences. Most countries tend to cover fewer topics in greater depth than in the United States. During the primary school years, children learn about whole numbers and arithmetic, including addition, subtraction, multiplication, and division. Comparisons and measurement are taught, in both numeric and pictorial form, as well as fractions and proportionality, patterns, and various topics related to geometry. At high school level in most of the US, algebra, geometry, and analysis (pre-calculus and calculus) are taught as separate courses in different years. On the other hand, in most other countries (and in a few US states), mathematics is taught as an integrated subject, with topics from all branches of mathematics studied every year; students thus undertake a pre-defined course - entailing several topics - rather than choosing courses à la carte as in the United States. 
Even in these cases, however, several "mathematics" options may be offered, selected based on the student's intended studies post high school. (In South Africa, for example, the options are Mathematics, Mathematical Literacy and Technical Mathematics.) Thus, a science-oriented curriculum typically overlaps the first year of university mathematics, and includes differential calculus and trigonometry at age 16–17 and integral calculus, complex numbers, analytic geometry, exponential and logarithmic functions, and infinite series in their final year of secondary school; Probability and statistics are similarly often taught. At college and university level, science and engineering students will be required to take multivariable calculus, differential equations, and linear algebra; at several US colleges, the minor or AS in mathematics substantively comprises these courses. Mathematics majors study additional areas of pure mathematics—and often applied mathematics—with the requirement of specified advanced courses in analysis and modern algebra. Other topics in pure mathematics include differential geometry, set theory, and topology. Applied mathematics may be taken as a major subject in its own right, covering partial differential equations, optimization, and numerical analysis among other topics. Courses here are also taught within other programs: for example, civil engineers may be required to study fluid mechanics, and "math for computer science" might include graph theory, permutation, probability, and formal mathematical proofs. Pure and applied math degrees often include modules in probability theory or mathematical statistics, as well as stochastic processes. (Theoretical) physics is mathematics-intensive, often overlapping substantively with the pure or applied math degree. Business mathematics is usually limited to introductory calculus and (sometimes) matrix calculations; economics programs additionally cover optimization, often differential equations and linear algebra, and sometimes analysis. Business and social science students also typically take statistics and probability courses. == Standards == Throughout most of history, standards for mathematics education were set locally, by individual schools or teachers, depending on the levels of achievement that were relevant to, realistic for, and considered socially appropriate for their pupils. In modern times, there has been a move towards regional or national standards, usually under the umbrella of a wider standard school curriculum. In England, for example, standards for mathematics education are set as part of the National Curriculum for England, while Scotland maintains its own educational system. Many other countries have centralized ministries which set national standards or curricula, and sometimes even textbooks. Ma (2000) summarized the research of others who found, based on nationwide data, that students with higher scores on standardized mathematics tests had taken more mathematics courses in high school. This led some states to require three years of mathematics instead of two. But because this requirement was often met by taking another lower-level mathematics course, the additional courses had a "diluted" effect in raising achievement levels. In North America, the National Council of Teachers of Mathematics (NCTM) published the Principles and Standards for School Mathematics in 2000 for the United States and Canada, which boosted the trend towards reform mathematics. 
In 2006, the NCTM released Curriculum Focal Points, which recommend the most important mathematical topics for each grade level through grade 8. However, these standards were guidelines to implement as American states and Canadian provinces chose. In 2010, the National Governors Association Center for Best Practices and the Council of Chief State School Officers published the Common Core State Standards for US states, which were subsequently adopted by most states. Adoption of the Common Core State Standards in mathematics is at the discretion of each state, and is not mandated by the federal government. "States routinely review their academic standards and may choose to change or add onto the standards to best meet the needs of their students." The NCTM has state affiliates that have different education standards at the state level. For example, Missouri has the Missouri Council of Teachers of Mathematics (MCTM) which has its pillars and standards of education listed on its website. The MCTM also offers membership opportunities to teachers and future teachers so that they can stay up to date on the changes in math educational standards. The Programme for International Student Assessment (PISA), created by the Organisation for Economic Co-operation and Development (OECD), is a global program studying the reading, science, and mathematics abilities of 15-year-old students. The first assessment was conducted in the year 2000 with 43 countries participating. PISA has repeated this assessment every three years to provide comparable data, helping to guide global education to better prepare youth for future economies. There have been many ramifications following the results of triennial PISA assessments due to implicit and explicit responses of stakeholders, which have led to education reform and policy change. == Research == According to Hiebert and Grouws, "Robust, useful theories of classroom teaching do not yet exist." However, there are useful theories on how children learn mathematics, and much research has been conducted in recent decades to explore how these theories can be applied to teaching. The following results are examples of some of the current findings in the field of mathematics education. === Important results === One of the strongest results in recent research is that the most important feature of effective teaching is giving students "the opportunity to learn". Teachers can set expectations, times, kinds of tasks, questions, acceptable answers, and types of discussions that will influence students' opportunities to learn. This must involve both skill efficiency and conceptual understanding. === Conceptual understanding === Two of the most important features of teaching in the promotion of conceptual understanding are attending explicitly to concepts and allowing students to struggle with important mathematics. Both of these features have been confirmed through a wide variety of studies. Explicit attention to concepts involves making connections between facts, procedures, and ideas. (This is often seen as one of the strong points in mathematics teaching in East Asian countries, where teachers typically devote about half of their time to making connections. At the other extreme is the US, where essentially no connections are made in school classrooms.) 
These connections can be made through explanation of the meaning of a procedure, questions comparing strategies and solutions of problems, noticing how one problem is a special case of another, reminding students of the main point, discussing how lessons connect, and so on. Deliberate, productive struggle with mathematical ideas refers to the fact that when students exert effort with important mathematical ideas, even if this struggle initially involves confusion and errors, the result is greater learning. This is true whether the struggle is due to intentionally challenging, well-implemented teaching, or unintentionally confusing, faulty teaching. === Formative assessment === Formative assessment is both the best and cheapest way to boost student achievement, student engagement, and teacher professional satisfaction. Results surpass those of reducing class size or increasing teachers' content knowledge. Effective assessment is based on clarifying what students should know, creating appropriate activities to obtain the evidence needed, giving good feedback, encouraging students to take control of their learning and letting students be resources for one another. === Homework === Homework assignments which lead students to practice past lessons or prepare for future lessons are more effective than those going over the current lesson. Students benefit from feedback. Students with learning disabilities or low motivation may profit from rewards. For younger children, homework helps with simple skills, but not with broader measures of achievement. === Students with difficulties === Students with genuine difficulties (unrelated to motivation or past instruction) struggle with basic facts, answer impulsively, struggle with mental representations, have poor number sense, and have poor short-term memory. Techniques that have been found productive for helping such students include peer-assisted learning, explicit teaching with visual aids, instruction informed by formative assessment, and encouraging students to think aloud. In particular, research surrounding students with disabilities in a mathematics classroom is mostly done by special education researchers. Some mathematics education researchers have called for more collaboration across disciplines to better understand supports that could be helpful to mathematics students with disabilities. === Algebraic reasoning === Elementary school children need to spend a long time learning to express algebraic properties without symbols before learning algebraic notation. When learning symbols, many students believe letters always represent unknowns and struggle with the concept of variable. They prefer arithmetic reasoning to algebraic equations for solving word problems. It takes time to move from arithmetic to algebraic generalizations to describe patterns. Students often have trouble with the minus sign and understand the equals sign to mean "the answer is...". === Cultural Equity === Despite the popular belief that mathematics is race-neutral, some research suggests that effective mathematics teaching of culturally diverse students requires a culturally relevant pedagogy that considers students' cultural backgrounds and experiences. The three criteria for culturally relevant pedagogy are academic success, cultural competence, and critical consciousness.
More recent research proposes that culturally sustaining pedagogy explicitly aims to perpetuate and foster cultural and linguistic pluralism within the educational system, ensuring that students can thrive while retaining their cultural identities. === Mathematics Teacher Education === Student teaching is a crucial part of a teacher candidate's path to becoming a teacher. Recommended reform in mathematics teacher education includes a focus on learning to anticipate, elicit, and use students’ mathematical thinking as the primary goal, as opposed to models with an over-emphasis on classroom management and survival. === Methodology === As with other educational research (and the social sciences in general), mathematics education research depends on both quantitative and qualitative studies. Quantitative research includes studies that use inferential statistics to answer specific questions, such as whether a certain teaching method gives significantly better results than the status quo. The best quantitative studies involve randomized trials where students or classes are randomly assigned different methods to test their effects. They depend on large samples to obtain statistically significant results. Qualitative research, such as case studies, action research, discourse analysis, and clinical interviews, depend on small but focused samples in an attempt to understand student learning and to look at how and why a given method gives the results it does. Such studies cannot conclusively establish that one method is better than another, as randomized trials can, but unless it is understood why treatment X is better than treatment Y, application of results of quantitative studies will often lead to "lethal mutations" of the finding in actual classrooms. Exploratory qualitative research is also useful for suggesting new hypotheses, which can eventually be tested by randomized experiments. Both qualitative and quantitative studies, therefore, are considered essential in education—just as in the other social sciences. Many studies are "mixed", simultaneously combining aspects of both quantitative and qualitative research, as appropriate. ==== Randomized trials ==== There has been some controversy over the relative strengths of different types of research. Because of an opinion that randomized trials provide clear, objective evidence on "what works", policymakers often consider only those studies. Some scholars have pushed for more random experiments in which teaching methods are randomly assigned to classes. In other disciplines concerned with human subjects—like biomedicine, psychology, and policy evaluation—controlled, randomized experiments remain the preferred method of evaluating treatments. Educational statisticians and some mathematics educators have been working to increase the use of randomized experiments to evaluate teaching methods. On the other hand, many scholars in educational schools have argued against increasing the number of randomized experiments, often because of philosophical objections, such as the ethical difficulty of randomly assigning students to various treatments when the effects of such treatments are not yet known to be effective, or the difficulty of assuring rigid control of the independent variable in fluid, real school settings. In the United States, the National Mathematics Advisory Panel (NMAP) published a report in 2008 based on studies, some of which used randomized assignment of treatments to experimental units, such as classrooms or students. 
The NMAP report's preference for randomized experiments received criticism from some scholars. In 2010, the What Works Clearinghouse (essentially the research arm for the Department of Education) responded to ongoing controversy by extending its research base to include non-experimental studies, including regression discontinuity designs and single-case studies. == Organizations == Advisory Committee on Mathematics Education American Mathematical Association of Two-Year Colleges Association of Teachers of Mathematics Canadian Mathematical Society C.D. Howe Institute Mathematical Association National Council of Teachers of Mathematics OECD International Association for the Evaluation of Educational Achievement Association of Mathematics Teacher Educators == See also == == References == Voit, Rita (14 February 2020). "Accelerated Math: What Every Parent Should Know". Resources by HEROES Academy. Retrieved 20 September 2023. == Further reading == == External links == History of Mathematical Education A quarter century of US 'math wars' and political partisanship. David Klein. California State University, Northridge, United States
|
https://en.wikipedia.org/wiki/Mathematics_education
|
A mathematical object is an abstract concept arising in mathematics. Typically, a mathematical object can be a value that can be assigned to a symbol, and therefore can be involved in formulas. Commonly encountered mathematical objects include numbers, expressions, shapes, functions, and sets. Mathematical objects can be very complex; for example, theorems, proofs, and even formal theories are considered as mathematical objects in proof theory. In the philosophy of mathematics, the concept of "mathematical objects" touches on topics of existence, identity, and the nature of reality. In metaphysics, objects are often considered entities that possess properties and can stand in various relations to one another. Philosophers debate whether mathematical objects have an independent existence outside of human thought (realism), or if their existence is dependent on mental constructs or language (idealism and nominalism). Objects can range from the concrete (such as physical objects, usually studied in applied mathematics) to the abstract (studied in pure mathematics). What constitutes an "object" is foundational to many areas of philosophy, from ontology (the study of being) to epistemology (the study of knowledge). In mathematics, objects are often seen as entities that exist independently of the physical world, raising questions about their ontological status. There are varying schools of thought which offer different perspectives on the matter, and many famous mathematicians and philosophers hold differing opinions on which is more correct. == In philosophy of mathematics == === Quine-Putnam indispensability === Quine-Putnam indispensability is an argument for the existence of mathematical objects based on their unreasonable effectiveness in the natural sciences. Every branch of science relies on large and often vastly different areas of mathematics. From physics' use of Hilbert spaces in quantum mechanics and differential geometry in general relativity to biology's use of chaos theory and combinatorics (see mathematical biology), not only does mathematics help with predictions, it also gives these areas an elegant language in which to express their ideas. Moreover, it is hard to imagine how areas like quantum mechanics and general relativity could have developed without the assistance of mathematics, and therefore, one could argue that mathematics is indispensable to these theories. It is because of this unreasonable effectiveness and indispensability of mathematics that philosophers Willard Quine and Hilary Putnam argue that we should believe that the mathematical objects on which these theories depend actually exist, that is, we ought to have an ontological commitment to them. The argument is described by the following syllogism: (Premise 1) We ought to have ontological commitment to all and only the entities that are indispensable to our best scientific theories. (Premise 2) Mathematical entities are indispensable to our best scientific theories. (Conclusion) We ought to have ontological commitment to mathematical entities. This argument resonates with a philosophy in applied mathematics called Naturalism (or sometimes Predicativism) which states that the only authoritative standards on existence are those of science. === Schools of thought === ==== Platonism ==== Platonism asserts that mathematical objects are real, abstract entities that exist independently of human thought, often in some Platonic realm.
Just as physical objects like electrons and planets exist, so do numbers and sets. And just as statements about electrons and planets are true or false because these objects have perfectly objective properties, so are statements about numbers and sets. Mathematicians discover these objects rather than invent them. (See also: Mathematical Platonism) Some notable platonists include: Plato: The ancient Greek philosopher who, though not a mathematician, laid the groundwork for Platonism by positing the existence of an abstract realm of perfect forms or ideas, which influenced later thinkers in mathematics. Kurt Gödel: A 20th-century logician and mathematician, Gödel was a strong proponent of mathematical Platonism, and his work in model theory was a major influence on modern platonism. Roger Penrose: A contemporary mathematical physicist, Penrose has argued for a Platonic view of mathematics, suggesting that mathematical truths exist in a realm of abstract reality that we discover. ==== Nominalism ==== Nominalism denies the independent existence of mathematical objects. Instead, it suggests that they are merely convenient fictions or shorthand for describing relationships and structures within our language and theories. Under this view, mathematical objects do not have an existence beyond the symbols and concepts we use. Some notable nominalists include: Nelson Goodman: A philosopher known for his work in the philosophy of science and nominalism. He argued against the existence of abstract objects, proposing instead that mathematical objects are merely a product of our linguistic and symbolic conventions. Hartry Field: A contemporary philosopher who has developed the form of nominalism called "fictionalism," which argues that mathematical statements are useful fictions that do not correspond to any actual abstract objects. ==== Logicism ==== Logicism asserts that all mathematical truths can be reduced to logical truths, and all objects forming the subject matter of those branches of mathematics are logical objects. In other words, mathematics is fundamentally a branch of logic, and all mathematical concepts, theorems, and truths can be derived from purely logical principles and definitions. Logicism faced challenges, particularly with the Russellian axioms, the Multiplicative axiom (now called the Axiom of Choice) and the Axiom of Infinity, and later with the discovery of Gödel's incompleteness theorems, which showed that any sufficiently powerful formal system (like those used to express arithmetic) cannot be both complete and consistent. This meant that not all mathematical truths could be derived purely from a logical system, undermining the logicist program. Some notable logicists include: Gottlob Frege: Frege is often regarded as the founder of logicism. In his work, Grundgesetze der Arithmetik (Basic Laws of Arithmetic), Frege attempted to show that arithmetic could be derived from logical axioms. He developed a formal system that aimed to express all of arithmetic in terms of logic. Frege's work laid the groundwork for much of modern logic and was highly influential, though it encountered difficulties, most notably Russell's paradox, which revealed inconsistencies in Frege's system. Bertrand Russell: Russell, along with Alfred North Whitehead, further developed logicism in their monumental work Principia Mathematica. They attempted to derive all of mathematics from a set of logical axioms, using a type theory to avoid the paradoxes that Frege's system encountered.
Although Principia Mathematica was enormously influential, the effort to reduce all of mathematics to logic was ultimately seen as incomplete. However, it did advance the development of mathematical logic and analytic philosophy. ==== Formalism ==== Mathematical formalism treats objects as symbols within a formal system. The focus is on the manipulation of these symbols according to specified rules, rather than on the objects themselves. One common understanding of formalism takes mathematics as not a body of propositions representing an abstract piece of reality but much more akin to a game, bringing with it no more ontological commitment to objects or properties than playing ludo or chess. In this view, mathematics is about the consistency of formal systems rather than the discovery of pre-existing objects. Some philosophers consider logicism to be a type of formalism. Some notable formalists include: David Hilbert: A leading mathematician of the early 20th century, Hilbert is one of the most prominent advocates of formalism as a foundation of mathematics (see Hilbert's program). He believed that mathematics is a system of formal rules and that its truth lies in the consistency of these rules rather than any connection to an abstract reality. Hermann Weyl: German mathematician and philosopher who, while not strictly a formalist, contributed to formalist ideas, particularly in his work on the foundations of mathematics. Freeman Dyson wrote that Weyl alone bore comparison with the "last great universal mathematicians of the nineteenth century", Henri Poincaré and David Hilbert. ==== Constructivism ==== Mathematical constructivism asserts that it is necessary to find (or "construct") a specific example of a mathematical object in order to prove that an example exists. Contrastingly, in classical mathematics, one can prove the existence of a mathematical object without "finding" that object explicitly, by assuming its non-existence and then deriving a contradiction from that assumption. Such a proof by contradiction might be called non-constructive, and a constructivist might reject it. The constructive viewpoint involves a verificational interpretation of the existential quantifier, which is at odds with its classical interpretation. There are many forms of constructivism. These include Brouwer's program of intuitionism, the finitism of Hilbert and Bernays, the constructive recursive mathematics of mathematicians Shanin and Markov, and Bishop's program of constructive analysis. Constructivism also includes the study of constructive set theories such as Constructive Zermelo–Fraenkel and the study of philosophy. Some notable constructivists include: L. E. J. Brouwer: Dutch mathematician and philosopher regarded as one of the greatest mathematicians of the 20th century, known for (among other things) pioneering the intuitionist movement in mathematical logic, and his opposition to David Hilbert's formalism movement (see: Brouwer–Hilbert controversy). Errett Bishop: American mathematician known for his work on analysis. He is best known for developing constructive analysis in his 1967 Foundations of Constructive Analysis, where he proved most of the important theorems in real analysis using constructivist methods. ==== Structuralism ==== Structuralism suggests that mathematical objects are defined by their place within a structure or system. The nature of a number, for example, is not tied to any particular thing, but to its role within the system of arithmetic.
In a sense, the thesis is that mathematical objects (if there are such objects) simply have no intrinsic nature. Some notable structuralists include: Paul Benacerraf: A philosopher known for his work in the philosophy of mathematics, particularly his paper "What Numbers Could Not Be," which argues for a structuralist view of mathematical objects. Stewart Shapiro: Another prominent philosopher who has developed and defended structuralism, especially in his book Philosophy of Mathematics: Structure and Ontology. === Objects versus mappings === Frege famously distinguished between functions and objects. According to his view, a function is a kind of ‘incomplete’ entity that maps arguments to values, and is denoted by an incomplete expression, whereas an object is a ‘complete’ entity and can be denoted by a singular term. Frege reduced properties and relations to functions and so these entities are not included among the objects. Some authors make use of Frege's notion of ‘object’ when discussing abstract objects. But though Frege's sense of ‘object’ is important, it is not the only way to use the term. Other philosophers include properties and relations among the abstract objects. And when the background context for discussing objects is type theory, properties and relations of higher type (e.g., properties of properties, and properties of relations) may all be considered ‘objects’. This latter use of ‘object’ is interchangeable with ‘entity.’ It is this broader interpretation that mathematicians mean when they use the term 'object'. == See also == Abstract object Exceptional object Impossible object List of mathematical objects List of mathematical shapes List of shapes List of surfaces List of two-dimensional geometric shapes Mathematical structure == Notes == == References == Citations Further reading Azzouni, J., 1994. Metaphysical Myths, Mathematical Practice. Cambridge University Press. Burgess, John, and Rosen, Gideon, 1997. A Subject with No Object. Oxford Univ. Press. Davis, Philip and Reuben Hersh, 1999 [1981]. The Mathematical Experience. Mariner Books: 156–62. Gold, Bonnie, and Simons, Roger A., 2011. Proof and Other Dilemmas: Mathematics and Philosophy. Mathematical Association of America. Hersh, Reuben, 1997. What is Mathematics, Really? Oxford University Press. Sfard, A., 2000, "Symbolizing mathematical reality into being, Or how mathematical discourse and mathematical objects create each other," in Cobb, P., et al., Symbolizing and communicating in mathematics classrooms: Perspectives on discourse, tools and instructional design. Lawrence Erlbaum. Stewart Shapiro, 2000. Thinking about mathematics: The philosophy of mathematics. Oxford University Press. == External links == Stanford Encyclopedia of Philosophy: "Abstract Objects"—by Gideon Rosen. Wells, Charles. "Mathematical Objects". AMOF: The Amazing Mathematical Object Factory Mathematical Object Exhibit
|
https://en.wikipedia.org/wiki/Mathematical_object
|
In mathematics, a variable (from Latin variabilis 'changeable') is a symbol, typically a letter, that refers to an unspecified mathematical object. One says colloquially that the variable represents or denotes the object, and that any valid candidate for the object is the value of the variable. The values a variable can take are usually of the same kind, often numbers. More specifically, the values involved may form a set, such as the set of real numbers. The object may not always exist, or it might be uncertain whether any valid candidate exists or not. For example, one could represent two integers by the variables p and q and require that the value of the square of p is twice the square of q, which in algebraic notation can be written p2 = 2 q2. A definitive proof that this relationship is impossible to satisfy when p and q are restricted to integer numbers isn't obvious, but it has been known since ancient times and has had a big influence on mathematics ever since. Originally, the term variable was used primarily for the argument of a function, in which case its value could be thought of as varying within the domain of the function. This is the motivation for the choice of the term. Also, variables are used for denoting values of functions, such as the symbol y in the equation y = f(x), where x is the argument and f denotes the function itself. A variable may represent an unspecified number that remains fixed during the resolution of a problem; in which case, it is often called a parameter. A variable may denote an unknown number that has to be determined; in which case, it is called an unknown; for example, in the quadratic equation ax2 + bx + c = 0, the variables a, b, c are parameters, and x is the unknown. Sometimes the same symbol can be used to denote both a variable and a constant, that is a well defined mathematical object. For example, the Greek letter π generally represents the number π, but has also been used to denote a projection. Similarly, the letter e often denotes Euler's number, but has been used to denote an unassigned coefficient for quartic function and higher degree polynomials. Even the symbol 1 has been used to denote an identity element of an arbitrary field. These two notions are used almost identically, therefore one usually must be told whether a given symbol denotes a variable or a constant. Variables are often used for representing matrices, functions, their arguments, sets and their elements, vectors, spaces, etc. In mathematical logic, a variable is a symbol that either represents an unspecified constant of the theory, or is being quantified over. == History == === Early history === The earliest uses of an "unknown quantity" date back to at least the Ancient Egyptians with the Moscow Mathematical Papyrus (c. 1500 BC) which described problems with unknowns rhetorically, called the "Aha problems". The "Aha problems" involve finding unknown quantities (referred to as aha, "stack") if the sum of the quantity and part(s) of it are given (The Rhind Mathematical Papyrus also contains four of these type of problems). For example, problem 19 asks one to calculate a quantity taken 1+1⁄2 times and added to 4 to make 10. In modern mathematical notation: 3/2x + 4 = 10. Around the same time in Mesopotamia, mathematics of the Old Babylonian period (c. 2000 BC – 1500 BC) was more advanced, also studying quadratic and cubic equations. In works of ancient greece such as Euclid's Elements (c. 300 BC), mathematics was described geometrically. 
For example, in the Elements, proposition 1 of Book II, Euclid includes the proposition: "If there be two straight lines, and one of them be cut into any number of segments whatever, the rectangle contained by the two straight lines is equal to the rectangles contained by the uncut straight line and each of the segments." This corresponds to the algebraic identity a(b + c) = ab + ac (distributivity), but is described entirely geometrically. Euclid and other Greek geometers also used single letters to refer to geometric points and shapes. This kind of algebra is now sometimes called Greek geometric algebra. Diophantus of Alexandria pioneered a form of syncopated algebra in his Arithmetica (c. 200 AD), which introduced symbolic manipulation of expressions with unknowns and powers, but without modern symbols for relations (such as equality or inequality) or exponents. An unknown number was called ζ {\displaystyle \zeta } . The square of ζ {\displaystyle \zeta } was Δ v {\displaystyle \Delta ^{v}} ; the cube was K v {\displaystyle K^{v}} ; the fourth power was Δ v Δ {\displaystyle \Delta ^{v}\Delta } ; and the fifth power was Δ K v {\displaystyle \Delta K^{v}} . So for example, what would be written in modern notation as: x 3 − 2 x 2 + 10 x − 1 , {\displaystyle x^{3}-2x^{2}+10x-1,} would be written in Diophantus's syncopated notation as: K υ α ¯ ζ ι ¯ ⋔ Δ υ β ¯ M α ¯ {\displaystyle \mathrm {K} ^{\upsilon }{\overline {\alpha }}\;\zeta {\overline {\iota }}\;\,\pitchfork \;\,\Delta ^{\upsilon }{\overline {\beta }}\;\mathrm {M} {\overline {\alpha }}\,\;} In the 7th century AD, Brahmagupta used different colours to represent the unknowns in algebraic equations in the Brāhmasphuṭasiddhānta. One section of this book is called "Equations of Several Colours". Greek and other ancient mathematical advances were often followed by long periods of stagnation, and so there were few revolutions in notation, but this began to change by the early modern period. === Early modern period === At the end of the 16th century, François Viète introduced the idea of representing known and unknown numbers by letters, nowadays called variables, and the idea of computing with them as if they were numbers—in order to obtain the result by a simple replacement. Viète's convention was to use consonants for known values, and vowels for unknowns. In 1637, René Descartes "invented the convention of representing unknowns in equations by x, y, and z, and knowns by a, b, and c". Contrary to Viète's convention, Descartes' is still commonly in use. The history of the letter x in math was discussed in an 1887 Scientific American article. Starting in the 1660s, Isaac Newton and Gottfried Wilhelm Leibniz independently developed the infinitesimal calculus, which essentially consists of studying how an infinitesimal variation of a time-varying quantity, called a fluent, induces a corresponding variation of another quantity which is a function of the first variable. Almost a century later, Leonhard Euler fixed the terminology of infinitesimal calculus, and introduced the notation y = f(x) for a function f, its variable x and its value y. Until the end of the 19th century, the word variable referred almost exclusively to the arguments and the values of functions. In the second half of the 19th century, it appeared that the foundation of infinitesimal calculus was not formalized enough to deal with apparent paradoxes such as a nowhere differentiable continuous function.
To solve this problem, Karl Weierstrass introduced a new formalism consisting of replacing the intuitive notion of limit by a formal definition. The older notion of limit was "when the variable x varies and tends toward a, then f(x) tends toward L", without any accurate definition of "tends". Weierstrass replaced this sentence by the formula ( ∀ ϵ > 0 ) ( ∃ η > 0 ) ( ∀ x ) | x − a | < η {\displaystyle (\forall \epsilon >0)(\exists \eta >0)(\forall x)\;|x-a|<\eta } ⇒ | L − f ( x ) | < ϵ , {\displaystyle \;\Rightarrow |L-f(x)|<\epsilon ,} in which none of the five variables is considered as varying. This static formulation led to the modern notion of variable, which is simply a symbol representing a mathematical object that either is unknown, or may be replaced by any element of a given set (e.g., the set of real numbers). == Notation == Variables are generally denoted by a single letter, most often from the Latin alphabet and less often from the Greek, which may be lowercase or capitalized. The letter may be followed by a subscript: a number (as in x2), another variable (xi), a word or abbreviation of a word as a label (xtotal) or a mathematical expression (x2i+1). Under the influence of computer science, some variable names in pure mathematics consist of several letters and digits. Following René Descartes (1596–1650), letters at the beginning of the alphabet such as a, b, c are commonly used for known values and parameters, and letters at the end of the alphabet such as x, y, z are commonly used for unknowns and variables of functions. In printed mathematics, the norm is to set variables and constants in an italic typeface. For example, a general quadratic function is conventionally written as ax2 + bx + c, where a, b and c are parameters (also called constants, because they are constant functions), while x is the variable of the function. A more explicit way to denote this function is x ↦ ax2 + bx + c, which clarifies the function-argument status of x and the constant status of a, b and c. Since c occurs in a term that is a constant function of x, it is called the constant term. Specific branches and applications of mathematics have specific naming conventions for variables. Variables with similar roles or meanings are often assigned consecutive letters or the same letter with different subscripts. For example, the three axes in 3D coordinate space are conventionally called x, y, and z. In physics, the names of variables are largely determined by the physical quantity they describe, but various naming conventions exist. A convention often followed in probability and statistics is to use X, Y, Z for the names of random variables, keeping x, y, z for variables representing corresponding better-defined values. === Conventional variable names === a, b, c, d (sometimes extended to e, f) for parameters or coefficients a0, a1, a2, ... 
for situations where distinct letters are inconvenient ai or ui for the ith term of a sequence or the ith coefficient of a series f, g, h for functions (as in f(x)) i, j, k (sometimes l or h) for varying integers or indices in an indexed family, or unit vectors l and w for the length and width of a figure l also for a line, or in number theory for a prime number not equal to p n (with m as a second choice) for a fixed integer, such as a count of objects or the degree of a polynomial p for a prime number or a probability q for a prime power or a quotient r for a radius, a remainder or a correlation coefficient t for time x, y, z for the three Cartesian coordinates of a point in Euclidean geometry or the corresponding axes z for a complex number, or in statistics a normal random variable α, β, γ, θ, φ for angle measures ε (with δ as a second choice) for an arbitrarily small positive number λ for an eigenvalue Σ (capital sigma) for a sum, or σ (lowercase sigma) in statistics for the standard deviation μ for a mean == Specific kinds of variables == It is common for variables to play different roles in the same mathematical formula, and names or qualifiers have been introduced to distinguish them. For example, the general cubic equation a x 3 + b x 2 + c x + d = 0 , {\displaystyle ax^{3}+bx^{2}+cx+d=0,} is interpreted as having five variables: four, a, b, c, d, which are taken to be given numbers and the fifth variable, x, is understood to be an unknown number. To distinguish them, the variable x is called an unknown, and the other variables are called parameters or coefficients, or sometimes constants, although this last terminology is incorrect for an equation, and should be reserved for the function defined by the left-hand side of this equation. In the context of functions, the term variable refers commonly to the arguments of the functions. This is typically the case in sentences like "function of a real variable", "x is the variable of the function f : x ↦ f(x)", "f is a function of the variable x" (meaning that the argument of the function is referred to by the variable x). In the same context, variables that are independent of x define constant functions and are therefore called constant. For example, a constant of integration is an arbitrary constant function that is added to a particular antiderivative to obtain the other antiderivatives. Because of the strong relationship between polynomials and polynomial functions, the term "constant" is often used to denote the coefficients of a polynomial, which are constant functions of the indeterminates. Other specific names for variables are: An unknown is a variable in an equation which has to be solved for. An indeterminate is a symbol, commonly called variable, that appears in a polynomial or a formal power series. Formally speaking, an indeterminate is not a variable, but a constant in the polynomial ring or the ring of formal power series. However, because of the strong relationship between polynomials or power series and the functions that they define, many authors consider indeterminates as a special kind of variables. A parameter is a quantity (usually a number) which is a part of the input of a problem, and remains constant during the whole solution of this problem. For example, in mechanics the mass and the size of a solid body are parameters for the study of its movement. In computer science, parameter has a different meaning and denotes an argument of a function. 
Free variables and bound variables A random variable is a kind of variable that is used in probability theory and its applications. All these denominations of variables are of semantic nature, and the way of computing with them (syntax) is the same for all. === Dependent and independent variables === In calculus and its application to physics and other sciences, it is rather common to consider a variable, say y, whose possible values depend on the value of another variable, say x. In mathematical terms, the dependent variable y represents the value of a function of x. To simplify formulas, it is often useful to use the same symbol for the dependent variable y and the function mapping x onto y. For example, the state of a physical system depends on measurable quantities such as the pressure, the temperature, the spatial position, ..., and all these quantities vary when the system evolves, that is, they are functions of time. In the formulas describing the system, these quantities are represented by variables which are dependent on time, and thus considered implicitly as functions of time. Therefore, in a formula, a dependent variable is a variable that is implicitly a function of another (or several other) variables. An independent variable is a variable that is not dependent. Whether a variable is dependent or independent often depends on the point of view and is not intrinsic. For example, in the notation f(x, y, z), the three variables may all be independent and the notation represents a function of three variables. On the other hand, if y and z depend on x (are dependent variables) then the notation represents a function of the single independent variable x. === Examples === If one defines a function f from the real numbers to the real numbers by f ( x ) = x 2 + sin ( x + 4 ) {\displaystyle f(x)=x^{2}+\sin(x+4)} then x is a variable standing for the argument of the function being defined, which can be any real number. In the identity ∑ i = 1 n i = n 2 + n 2 {\displaystyle \sum _{i=1}^{n}i={\frac {n^{2}+n}{2}}} the variable i is a summation variable which designates in turn each of the integers 1, 2, ..., n (it is also called an index because its variation is over a discrete set of values) while n is a parameter (it does not vary within the formula). In the theory of polynomials, a polynomial of degree 2 is generally denoted as ax2 + bx + c, where a, b and c are called coefficients (they are assumed to be fixed, i.e., parameters of the problem considered) while x is called a variable. When studying the associated polynomial function, this x stands for the function argument. When studying the polynomial as an object in itself, x is taken to be an indeterminate, and would often be written with a capital letter instead to indicate this status. ==== Example: the ideal gas law ==== Consider the equation describing the ideal gas law, P V = N k B T . {\displaystyle PV=Nk_{\text{B}}T.} This equation would generally be interpreted to have four variables and one constant. The constant is kB, the Boltzmann constant. One of the variables, N, the number of particles, is a positive integer (and therefore a discrete variable), while the other three, P, V and T, for pressure, volume and temperature, are continuous variables. One could rearrange this equation to obtain P as a function of the other variables, P ( V , N , T ) = N k B T V .
{\displaystyle P(V,N,T)={\frac {Nk_{\text{B}}T}{V}}.} Then P, as a function of the other variables, is the dependent variable, while its arguments, V, N and T, are independent variables. One could approach this function more formally and think about its domain and range: in function notation, here P is a function P : R > 0 × N × R > 0 → R {\displaystyle P:\mathbb {R} _{>0}\times \mathbb {N} \times \mathbb {R} _{>0}\rightarrow \mathbb {R} } . However, in an experiment, in order to determine the dependence of pressure on a single one of the independent variables, it is necessary to fix all but one of the variables, say T. This gives a function P ( T ) = N k B T V , {\displaystyle P(T)={\frac {Nk_{\text{B}}T}{V}},} where now N and V are also regarded as constants. Mathematically, this constitutes a partial application of the earlier function P. This illustrates how independent variables and constants are largely dependent on the point of view taken. One could even regard kB as a variable to obtain a function P ( V , N , T , k B ) = N k B T V . {\displaystyle P(V,N,T,k_{\text{B}})={\frac {Nk_{\text{B}}T}{V}}.} == Moduli spaces == Considering constants and variables can lead to the concept of moduli spaces. For illustration, consider the equation for a parabola, y = a x 2 + b x + c , {\displaystyle y=ax^{2}+bx+c,} where a, b, c, x and y are all considered to be real. The set of points (x, y) in the 2D plane satisfying this equation trace out the graph of a parabola. Here, a, b and c are regarded as constants, which specify the parabola, while x and y are variables. Then instead regarding a, b and c as variables, we observe that each set of 3-tuples (a, b, c) corresponds to a different parabola. That is, they specify coordinates on the 'space of parabolas': this is known as a moduli space of parabolas. == See also == Lambda calculus Observable variable Physical constant Propositional variable == References == == Bibliography ==
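The partial application described in the ideal gas law example above can be illustrated computationally. The following Python sketch is an illustrative addition rather than part of the original article; the function name pressure and the sample values of V, N and T are arbitrary choices for the example, and the Boltzmann constant is given its exact SI value.

from functools import partial

K_B = 1.380649e-23  # Boltzmann constant in joules per kelvin (exact SI value)

def pressure(V, N, T):
    # Ideal gas law rearranged for pressure: P = N * k_B * T / V.
    return N * K_B * T / V

# P viewed as a function of the three independent variables V, N and T.
p_all = pressure(V=1.0e-3, N=1.0e22, T=300.0)

# Fixing V and N turns P into a function of the single variable T:
# a partial application of the function above.
pressure_of_T = partial(pressure, V=1.0e-3, N=1.0e22)
p_partial = pressure_of_T(T=300.0)

assert p_all == p_partial  # same value, viewed with fewer free variables
print(p_all)  # roughly 4.1e4 pascals for these sample values

Fixing all but one of the independent variables with functools.partial mirrors the step in the article where N and V are also regarded as constants, leaving P as a function of T alone.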
|
https://en.wikipedia.org/wiki/Variable_(mathematics)
|
Mathematicism is 'the effort to employ the formal structure and rigorous method of mathematics as a model for the conduct of philosophy', or the epistemological view that reality is fundamentally mathematical. The term has been applied to a number of philosophers, including Pythagoras and René Descartes, although they did not use the term themselves. The role of mathematics in Western philosophy has grown and expanded from Pythagoras onwards. It is clear that numbers held a particular importance for the Pythagorean school, although it was the later work of Plato that attracts the label of mathematicism from modern philosophers. Furthermore, it is René Descartes who provides the first mathematical epistemology which he describes as a mathesis universalis, and which is also referred to as mathematicism. == Pythagoras == Although we do not have writings of Pythagoras himself, good evidence that he pioneered the concept of mathematicism is given by Plato, and summed up in the quotation often attributed to him that "everything is mathematics". Aristotle says of the Pythagorean school: The first to devote themselves to mathematics and to make them progress were the so-called Pythagoreans. They, devoted to this study, believed that the principles of mathematics were also the principles of all things that be. Now, since the principles of mathematics are numbers, and they thought they found in numbers, more than in fire and earth and water, similarities with things that are and that become (they judged, for example, that justice was a particular property of numbers, the soul and mind another, opportunity another, and similarly, so to say, anything else), and since furthermore they saw expressed by numbers the properties and the ratios of harmony, since finally everything in nature appeared to them to be similar to numbers, and numbers appeared to be first among all there is in nature, they thought that the elements of numbers were the elements of all that there is, and that the whole world was harmony and number. And all the properties they could find in numbers and in musical chords, corresponding to properties and parts of the sky, and in general to the whole cosmic order, they gathered and adapted to it. And if something was missing, they made an effort to introduce it, so that their tractation be complete. To clarify with an example: since ten seems to be a perfect number and to contain in itself the whole nature of numbers, they said that the bodies that move in the sky are also ten: and since one can only see nine, they added as tenth the anti-Earth. Further evidence for the views of Pythagoras and his school, although fragmentary and sometimes contradictory, comes from Alexander Polyhistor. Alexander tells us that central doctrines of the Pythagoreans were the harmony of numbers and the idea that the mathematical world has primacy over, or can account for the existence of, the physical world. According to Aristotle, the Pythagoreans used mathematics for solely mystical reasons, devoid of practical application. They believed that all things were made of numbers. The number one (the monad) represented the origin of all things and other numbers similarly had symbolic representations. Nevertheless, modern scholars debate whether this numerology was taught by Pythagoras himself or whether it was original to the later philosopher of the Pythagorean school, Philolaus of Croton.
Walter Burkert argues in his study Lore and Science in Ancient Pythagoreanism that the only mathematics the Pythagoreans ever actually engaged in was simple, proofless arithmetic, but that these arithmetic discoveries did contribute significantly to the beginnings of mathematics. == Plato == The Pythagorean school influenced the work of Plato. Mathematical Platonism is the metaphysical view that (a) there are abstract mathematical objects whose existence is independent of us, and (b) there are true mathematical sentences that provide true descriptions of such objects. The independence of the mathematical objects is such that they are non-physical and do not exist in space or time. Neither does their existence rely on thought or language. For this reason, mathematical proofs are discovered, not invented. The proof existed before its discovery, and merely became known to the one who discovered it. In summary, therefore, Mathematical Platonism can be reduced to three propositions: Existence: There are mathematical objects. Abstractness: Mathematical objects are abstract. Independence: Mathematical objects are independent of intelligent agents and their language, thought, and practices. Again, it is not clear to what extent Plato himself held these views, but they were associated with the Platonist school. Nevertheless, this was a significant progression in the ideas of mathematicism. Markus Gabriel refers to Plato in his Fields of Sense: A New Realist Ontology, and in so doing provides a definition for mathematicism. He says: Ultimately, set-theoretical ontology is a remainder of Platonic mathematicism. Let mathematicism from here on be the view that everything that exists can be studied mathematically either directly or indirectly. It is an instance of theory-reduction, that is, a claim to the effect that every vocabulary can be translated into that of mathematics such that this reduction grounds all derivative vocabulary and helps us understand it significantly better. He goes on, however, to show that the term need not be applied merely to the set-theoretical ontology that he takes issue with, but may be applied to other mathematical ontologies as well. Set-theoretical ontology is just one instance of mathematicism. Depending on one's preferred candidate for the most fundamental theory of quantifiable structure, one can wind up with a graph-theoretical mathematicism, a set-theoretical, category-theoretical, or some other (maybe hybrid) form of mathematicism. However, mathematicism is metaphysics, and metaphysics need not be associated with ontology. == René Descartes == Although mathematical methods of investigation have been used to establish meaning and analyse the world since Pythagoras, it was Descartes who pioneered the subject as epistemology, setting out Rules for the Direction of the Mind. He proposed that method, rather than intuition, should direct the mind, saying: So blind is the curiosity with which mortals are possessed that they often direct their minds down untrodden paths, in the groundless hope that they will chance upon what they are seeking, rather like someone who is consumed with such a senseless desire to discover treasure that he continually roams the streets to see if he can find any that a passerby might have dropped [...]
By 'a method' I mean reliable rules which are easy to apply, and such that if one follows them exactly, one will never take what is false to be true or fruitlessly expend one's mental efforts, but will gradually and constantly increase one's knowledge till one arrives at a true understanding of everything within one's capacity. In the discussion of Rule Four, Descartes describes what he calls mathesis universalis: Rule Four We need a method if we are to investigate the truth of things. [...] I began my investigation by inquiring what exactly is generally meant by the term 'mathematics' and why it is that, in addition to arithmetic and geometry, sciences such as astronomy, music, optics, mechanics, among others, are called branches of mathematics. [...] This made me realize that there must be a general science which explains all the points that can be raised concerning order and measure irrespective of the subject-matter, and that this science should be termed mathesis universalis — a venerable term with a well-established meaning — for it covers everything that entitles these other sciences to be called branches of mathematics. [...] The concept of mathesis universalis was, for Descartes, a universal science modeled on mathematics. It is this mathesis universalis that is referred to when writers speak of Descartes' mathematicism. Following Descartes, Leibniz attempted to derive connections between mathematical logic, algebra, infinitesimal calculus, combinatorics, and universal characteristics in an incomplete treatise titled "Mathesis Universalis", published in 1695. Following on from Leibniz, Benedict de Spinoza and then various 20th-century philosophers, including Bertrand Russell, Ludwig Wittgenstein, and Rudolf Carnap, have attempted to elaborate and develop Leibniz's work on mathematical logic, syntactic systems and their calculi and to resolve problems in the field of metaphysics. == Gottfried Leibniz == Leibniz attempted to work out the possible connections between mathematical logic, algebra, infinitesimal calculus, combinatorics, and universal characteristics in an incomplete treatise titled "Mathesis Universalis" in 1695. In his account of mathesis universalis, Leibniz proposed a dual method of universal synthesis and analysis for ascertaining truth, described in De Synthesi et Analysi universale seu Arte inveniendi et judicandi (1890). == Ludwig Wittgenstein == Perhaps one of the most prominent critics of the idea of mathesis universalis was Ludwig Wittgenstein, in his philosophy of mathematics. As anthropologist Emily Martin notes: Tackling mathematics, the realm of symbolic life perhaps most difficult to regard as contingent on social norms, Wittgenstein commented that people found the idea that numbers rested on conventional social understandings "unbearable". == Bertrand Russell and Alfred North Whitehead == The Principia Mathematica is a three-volume work on the foundations of mathematics written by the mathematicians Alfred North Whitehead and Bertrand Russell and published in 1910, 1912, and 1913. According to its introduction, this work had three aims: To analyze to the greatest possible extent the ideas and methods of mathematical logic and to minimize the number of primitive notions, axioms, and inference rules; To precisely express mathematical propositions in symbolic logic using the most convenient notation that precise expression allows; To solve the paradoxes that plagued logic and set theory at the turn of the 20th century, like Russell's paradox.
There is no doubt that Principia Mathematica is of great importance in the history of mathematics and philosophy: as Irvine has noted, it sparked interest in symbolic logic and advanced the subject by popularizing it; it showcased the powers and capacities of symbolic logic; and it showed how advances in philosophy of mathematics and symbolic logic could go hand-in-hand with tremendous fruitfulness. Indeed, the work was in part brought about by an interest in logicism, the view that all mathematical truths are logical truths. It was in part thanks to the advances made in Principia Mathematica that, despite its defects, numerous advances in meta-logic were made, including Gödel's incompleteness theorems. == Michel Foucault == In The Order of Things, Michel Foucault discusses mathesis as the conjunction point in the ordering of simple natures and algebra, paralleling his concept of taxinomia. Though omitting explicit references to universality, Foucault uses the term to organise and interpret all of human science, as is evident in the full title of his book: "The Order of Things: An Archaeology of the Human Sciences". == Tim Maudlin == Tim Maudlin's mathematical universe hypothesis attempts to construct "a rigorous mathematical structure using primitive terms that give a natural fit with physics" and to investigate why mathematics should provide such a powerful language for describing the physical world. According to Maudlin, "the most satisfying possible answer to such a question is: because the physical world literally has a mathematical structure". == See also == Digital Physics Mathematical Psychology Modern Platonism Unit-point atomism Wolfram Physics Project Mathematical universe hypothesis Characteristica universalis De Arte Combinatoria An Essay towards a Real Character, and a Philosophical Language Lingua generalis == References == == Bibliography == == External links == Media related to Mathematicism at Wikimedia Commons Raul Corazzon's Ontology web page: Mathesis Universalis with a bibliography "mathematicism". Britannica. "mathematicism". Collins Dictionary. "mathematicism". Oxford Living Dictionary. Archived from the original on 15 January 2018.
|
https://en.wikipedia.org/wiki/Mathematicism
|
In mathematics, an annulus (pl.: annuli or annuluses) is the region between two concentric circles. Informally, it is shaped like a ring or a hardware washer. The word "annulus" is borrowed from the Latin word anulus or annulus meaning 'little ring'. The adjectival form is annular (as in annular eclipse). The open annulus is topologically equivalent to both the open cylinder S1 × (0,1) and the punctured plane. == Area == The area of an annulus is the difference in the areas of the larger circle of radius R and the smaller one of radius r: A = π R 2 − π r 2 = π ( R 2 − r 2 ) = π ( R + r ) ( R − r ) . {\displaystyle A=\pi R^{2}-\pi r^{2}=\pi \left(R^{2}-r^{2}\right)=\pi (R+r)(R-r).} The area of an annulus is determined by the length of the longest line segment within the annulus, which is the chord tangent to the inner circle; call its length 2d. That can be shown using the Pythagorean theorem since this line is tangent to the smaller circle and perpendicular to its radius at that point, so d and r are sides of a right-angled triangle with hypotenuse R, and the area of the annulus is given by A = π ( R 2 − r 2 ) = π d 2 . {\displaystyle A=\pi \left(R^{2}-r^{2}\right)=\pi d^{2}.} The area can also be obtained via calculus by dividing the annulus up into an infinite number of annuli of infinitesimal width dρ and area 2πρ dρ and then integrating from ρ = r to ρ = R: A = ∫ r R 2 π ρ d ρ = π ( R 2 − r 2 ) . {\displaystyle A=\int _{r}^{R}\!\!2\pi \rho \,d\rho =\pi \left(R^{2}-r^{2}\right).} The area of an annulus sector (the region between two circular sectors with overlapping radii) of angle θ, with θ measured in radians, is given by A = θ 2 ( R 2 − r 2 ) . {\displaystyle A={\frac {\theta }{2}}\left(R^{2}-r^{2}\right).} == Complex structure == In complex analysis an annulus ann(a; r, R) in the complex plane is an open region defined as r < | z − a | < R . {\displaystyle r<|z-a|<R.} If r = 0 {\displaystyle r=0} , the region is known as the punctured disk (a disk with a point hole in the center) of radius R around the point a. As a subset of the complex plane, an annulus can be considered as a Riemann surface. The complex structure of an annulus depends only on the ratio r/R. Each annulus ann(a; r, R) can be holomorphically mapped to a standard one centered at the origin and with outer radius 1 by the map z ↦ z − a R . {\displaystyle z\mapsto {\frac {z-a}{R}}.} The inner radius is then r/R < 1. The Hadamard three-circle theorem is a statement about the maximum value a holomorphic function may take inside an annulus. The Joukowsky transform conformally maps an annulus onto an ellipse with a slit cut between foci. == See also == Annular cutter – Form of core drill Annulus theorem/conjecture – In mathematics, on the region between two well-behaved spheres Focaloid – Geometric shell bounded by two concentric, similar ellipses or ellipsoids Spherical shell – Three-dimensional geometric shape Torus – Doughnut-shaped surface of revolution == References == == External links == Annulus definition and properties With interactive animation Area of an annulus, formula With interactive animation
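As a numerical cross-check of the area formulas above, the following Python sketch is an illustrative addition rather than part of the original article; the function names are arbitrary. It verifies that the difference-of-squares form, the chord form, and a Riemann-sum approximation of the integral all agree for sample radii.

from math import pi, sqrt

def annulus_area(R, r):
    # Difference of the two disc areas: pi * (R**2 - r**2).
    return pi * (R**2 - r**2)

def annulus_area_from_chord(R, r):
    # Same area written as pi * d**2, where 2d is the longest chord,
    # the one tangent to the inner circle, so d = sqrt(R**2 - r**2).
    d = sqrt(R**2 - r**2)
    return pi * d**2

def annulus_area_by_integration(R, r, steps=100_000):
    # Midpoint Riemann sum for the integral of 2*pi*rho from rho = r to rho = R.
    drho = (R - r) / steps
    return sum(2 * pi * (r + (k + 0.5) * drho) * drho for k in range(steps))

R, r = 5.0, 3.0
a1 = annulus_area(R, r)
a2 = annulus_area_from_chord(R, r)
a3 = annulus_area_by_integration(R, r)
assert abs(a1 - a2) < 1e-9
assert abs(a1 - a3) < 1e-6
print(a1)  # 16*pi, roughly 50.27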
|
https://en.wikipedia.org/wiki/Annulus_(mathematics)
|
In mathematics, especially in the area of algebra known as group theory, the holomorph of a group G {\displaystyle G} , denoted Hol ( G ) {\displaystyle \operatorname {Hol} (G)} , is a group that simultaneously contains (copies of) G {\displaystyle G} and its automorphism group Aut ( G ) {\displaystyle \operatorname {Aut} (G)} . It provides interesting examples of groups, and allows one to treat group elements and group automorphisms in a uniform context. The holomorph can be described as a semidirect product or as a permutation group. == Hol(G) as a semidirect product == If Aut ( G ) {\displaystyle \operatorname {Aut} (G)} is the automorphism group of G {\displaystyle G} then Hol ( G ) = G ⋊ Aut ( G ) {\displaystyle \operatorname {Hol} (G)=G\rtimes \operatorname {Aut} (G)} where the multiplication is given by Typically, a semidirect product is given in the form G ⋊ ϕ A {\displaystyle G\rtimes _{\phi }A} where G {\displaystyle G} and A {\displaystyle A} are groups and ϕ : A → Aut ( G ) {\displaystyle \phi :A\rightarrow \operatorname {Aut} (G)} is a homomorphism and where the multiplication of elements in the semidirect product is given as ( g , a ) ( h , b ) = ( g ϕ ( a ) ( h ) , a b ) {\displaystyle (g,a)(h,b)=(g\phi (a)(h),ab)} which is well defined, since ϕ ( a ) ∈ Aut ( G ) {\displaystyle \phi (a)\in \operatorname {Aut} (G)} and therefore ϕ ( a ) ( h ) ∈ G {\displaystyle \phi (a)(h)\in G} . For the holomorph, A = Aut ( G ) {\displaystyle A=\operatorname {Aut} (G)} and ϕ {\displaystyle \phi } is the identity map, as such we suppress writing ϕ {\displaystyle \phi } explicitly in the multiplication given in equation (1) above. For example, G = C 3 = ⟨ x ⟩ = { 1 , x , x 2 } {\displaystyle G=C_{3}=\langle x\rangle =\{1,x,x^{2}\}} the cyclic group of order 3 Aut ( G ) = ⟨ σ ⟩ = { 1 , σ } {\displaystyle \operatorname {Aut} (G)=\langle \sigma \rangle =\{1,\sigma \}} where σ ( x ) = x 2 {\displaystyle \sigma (x)=x^{2}} Hol ( G ) = { ( x i , σ j ) } {\displaystyle \operatorname {Hol} (G)=\{(x^{i},\sigma ^{j})\}} with the multiplication given by: ( x i 1 , σ j 1 ) ( x i 2 , σ j 2 ) = ( x i 1 + i 2 2 j 1 , σ j 1 + j 2 ) {\displaystyle (x^{i_{1}},\sigma ^{j_{1}})(x^{i_{2}},\sigma ^{j_{2}})=(x^{i_{1}+i_{2}2^{^{j_{1}}}},\sigma ^{j_{1}+j_{2}})} where the exponents of x {\displaystyle x} are taken mod 3 and those of σ {\displaystyle \sigma } mod 2. Observe, for example ( x , σ ) ( x 2 , σ ) = ( x 1 + 2 ⋅ 2 , σ 2 ) = ( x 2 , 1 ) {\displaystyle (x,\sigma )(x^{2},\sigma )=(x^{1+2\cdot 2},\sigma ^{2})=(x^{2},1)} and this group is not abelian, as ( x 2 , σ ) ( x , σ ) = ( x , 1 ) {\displaystyle (x^{2},\sigma )(x,\sigma )=(x,1)} , so that Hol ( C 3 ) {\displaystyle \operatorname {Hol} (C_{3})} is a non-abelian group of order 6, which, by basic group theory, must be isomorphic to the symmetric group S 3 {\displaystyle S_{3}} . == Hol(G) as a permutation group == A group G acts naturally on itself by left and right multiplication, each giving rise to a homomorphism from G into the symmetric group on the underlying set of G. One homomorphism is defined as λ: G → Sym(G), λg(h) = g·h. That is, g is mapped to the permutation obtained by left-multiplying each element of G by g. Similarly, a second homomorphism ρ: G → Sym(G) is defined by ρg(h) = h·g−1, where the inverse ensures that ρgh(k) = ρg(ρh(k)). These homomorphisms are called the left and right regular representations of G. Each homomorphism is injective, a fact referred to as Cayley's theorem. 
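The semidirect-product multiplication for Hol(C3) described above, (x^i1, σ^j1)(x^i2, σ^j2) = (x^(i1 + i2·2^j1), σ^(j1 + j2)), can be checked by brute force. The following Python sketch is an illustrative addition rather than part of the original article; it encodes each element as a pair of exponents (i, j) reduced mod 3 and mod 2, and the function name hol_c3_mul is arbitrary.

from itertools import product

def hol_c3_mul(a, b):
    # (x**i1, sigma**j1) * (x**i2, sigma**j2) = (x**(i1 + i2*2**j1), sigma**(j1 + j2)),
    # with the exponent of x taken mod 3 and the exponent of sigma mod 2.
    (i1, j1), (i2, j2) = a, b
    return ((i1 + i2 * 2**j1) % 3, (j1 + j2) % 2)

elements = list(product(range(3), range(2)))  # the 6 elements of Hol(C3)

# The worked products from the text: (x, sigma)(x**2, sigma) = (x**2, 1)
# and (x**2, sigma)(x, sigma) = (x, 1), so the group is non-abelian.
assert hol_c3_mul((1, 1), (2, 1)) == (2, 0)
assert hol_c3_mul((2, 1), (1, 1)) == (1, 0)

# Brute-force associativity check over all triples of elements.
assert all(
    hol_c3_mul(hol_c3_mul(a, b), c) == hol_c3_mul(a, hol_c3_mul(b, c))
    for a in elements for b in elements for c in elements
)

print(len(elements))  # 6; the only non-abelian group of order 6 is S3

The assertions reproduce the products worked out in the text and confirm that the six pairs form a non-abelian group, consistent with Hol(C3) being isomorphic to S3. The left regular representation λ introduced above can likewise be illustrated on this same group, as in the example that follows.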
For example, if G = C3 = {1, x, x2 } is a cyclic group of order three, then λx(1) = x·1 = x, λx(x) = x·x = x2, and λx(x2) = x·x2 = 1, so λ(x) takes (1, x, x2) to (x, x2, 1). The image of λ is a subgroup of Sym(G) isomorphic to G, and its normalizer in Sym(G) is defined to be the holomorph N of G. For each n in N and g in G, there is an h in G such that n·λg = λh·n. If an element n of the holomorph fixes the identity of G, then for 1 in G, (n·λg)(1) = (λh·n)(1), but the left hand side is n(g), and the right side is h. In other words, if n in N fixes the identity of G, then for every g in G, n·λg = λn(g)·n. If g, h are elements of G, and n is an element of N fixing the identity of G, then applying this equality twice to n·λg·λh and once to the (equivalent) expression n·λgg gives that n(g)·n(h) = n(g·h). That is, every element of N that fixes the identity of G is in fact an automorphism of G. Such an n normalizes λG, and the only λg that fixes the identity is λ(1). Setting A to be the stabilizer of the identity, the subgroup generated by A and λG is semidirect product with normal subgroup λG and complement A. Since λG is transitive, the subgroup generated by λG and the point stabilizer A is all of N, which shows the holomorph as a permutation group is isomorphic to the holomorph as semidirect product. It is useful, but not directly relevant, that the centralizer of λG in Sym(G) is ρG, their intersection is ρ Z ( G ) = λ Z ( G ) {\displaystyle \rho _{Z(G)}=\lambda _{Z(G)}} , where Z(G) is the center of G, and that A is a common complement to both of these normal subgroups of N. == Properties == ρ(G) ∩ Aut(G) = 1 Aut(G) normalizes ρ(G) so that canonically ρ(G)Aut(G) ≅ G ⋊ Aut(G) Inn ( G ) ≅ Im ( g ↦ λ ( g ) ρ ( g ) ) {\displaystyle \operatorname {Inn} (G)\cong \operatorname {Im} (g\mapsto \lambda (g)\rho (g))} since λ(g)ρ(g)(h) = ghg−1 ( Inn ( G ) {\displaystyle \operatorname {Inn} (G)} is the group of inner automorphisms of G.) K ≤ G is a characteristic subgroup if and only if λ(K) ⊴ Hol(G) == References == Hall, Marshall Jr. (1959), The theory of groups, Macmillan, MR 0103215 Burnside, William (2004), Theory of Groups of Finite Order, 2nd ed., Dover, p. 87
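Returning to the Hol(C3) example worked out above, the multiplication rule can be sanity-checked by implementing it directly. The sketch below (an illustration added here, not taken from the references) encodes an element (x^i, σ^j) as the pair of exponents (i, j):

```python
from itertools import product

# Elements of Hol(C3) are pairs (i, j): x**i with i mod 3, sigma**j with j mod 2.
# sigma acts on C3 by x -> x**2, so sigma**j sends x**i to x**(i * 2**j).
def mul(a, b):
    (i1, j1), (i2, j2) = a, b
    return ((i1 + i2 * pow(2, j1, 3)) % 3, (j1 + j2) % 2)

elements = list(product(range(3), range(2)))   # 6 elements, as expected for S3

# The worked example from the text: (x, sigma)(x**2, sigma) = (x**2, 1)
assert mul((1, 1), (2, 1)) == (2, 0)
# ...while (x**2, sigma)(x, sigma) = (x, 1), so the group is non-abelian.
assert mul((2, 1), (1, 1)) == (1, 0)

# Sanity checks: closure and associativity over all triples.
assert all(mul(a, b) in elements for a in elements for b in elements)
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a in elements for b in elements for c in elements)
```

The two asserted products, together with the closure and associativity checks, reproduce the non-abelian order-6 behaviour described above.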
|
https://en.wikipedia.org/wiki/Holomorph_(mathematics)
|
In mathematics, an operation is a function from a set to itself. For example, an operation on real numbers will take in real numbers and return a real number. An operation can take zero or more input values (also called "operands" or "arguments") to a well-defined output value. The number of operands is the arity of the operation. The most commonly studied operations are binary operations (i.e., operations of arity 2), such as addition and multiplication, and unary operations (i.e., operations of arity 1), such as additive inverse and multiplicative inverse. An operation of arity zero, or nullary operation, is a constant. The mixed product is an example of an operation of arity 3, also called ternary operation. Generally, the arity is taken to be finite. However, infinitary operations are sometimes considered, in which case the "usual" operations of finite arity are called finitary operations. A partial operation is defined similarly to an operation, but with a partial function in place of a function. == Types of operation == There are two common types of operations: unary and binary. Unary operations involve only one value, such as negation and trigonometric functions. Binary operations, on the other hand, take two values, and include addition, subtraction, multiplication, division, and exponentiation. Operations can involve mathematical objects other than numbers. The logical values true and false can be combined using logic operations, such as and, or, and not. Vectors can be added and subtracted. Rotations can be combined using the function composition operation, performing the first rotation and then the second. Operations on sets include the binary operations union and intersection and the unary operation of complementation. Operations on functions include composition and convolution. Operations may not be defined for every possible value of its domain. For example, in the real numbers one cannot divide by zero or take square roots of negative numbers. The values for which an operation is defined form a set called its domain of definition or active domain. The set which contains the values produced is called the codomain, but the set of actual values attained by the operation is its codomain of definition, active codomain, image or range. For example, in the real numbers, the squaring operation only produces non-negative numbers; the codomain is the set of real numbers, but the range is the non-negative numbers. Operations can involve dissimilar objects: a vector can be multiplied by a scalar to form another vector (an operation known as scalar multiplication), and the inner product operation on two vectors produces a quantity that is scalar. An operation may or may not have certain properties, for example it may be associative, commutative, anticommutative, idempotent, and so on. The values combined are called operands, arguments, or inputs, and the value produced is called the value, result, or output. Operations can have fewer or more than two inputs (including the case of zero input and infinitely many inputs). An operator is similar to an operation in that it refers to the symbol or the process used to denote the operation. Hence, their point of view is different. 
For instance, one often speaks of "the operation of addition" or "the addition operation," when focusing on the operands and result, but one switches to "addition operator" (rarely "operator of addition") when focusing on the process, or from the more symbolic viewpoint, the function +: X × X → X (where X is a set such as the set of real numbers). == Definition == An n-ary operation ω on a set X is a function ω: Xn → X. The set Xn is called the domain of the operation, the output set is called the codomain of the operation, and the fixed non-negative integer n (the number of operands) is called the arity of the operation. Thus a unary operation has arity one, and a binary operation has arity two. An operation of arity zero, called a nullary operation, is simply an element of the codomain X. An n-ary operation can also be viewed as an (n + 1)-ary relation that is total on its n input domains and unique on its output domain. An n-ary partial operation ω from Xn to X is a partial function ω: Xn → X. An n-ary partial operation can also be viewed as an (n + 1)-ary relation that is unique on its output domain. The above describes what is usually called a finitary operation, referring to the finite number of operands (the value n). There are obvious extensions where the arity is taken to be an infinite ordinal or cardinal, or even an arbitrary set indexing the operands. Often, the use of the term operation implies that the domain of the function includes a power of the codomain (i.e. the Cartesian product of one or more copies of the codomain), although this is by no means universal, as in the case of the dot product, where vectors are multiplied and the result is a scalar. An n-ary operation ω: Xn → X is called an internal operation. An n-ary operation ω: Xi × S × Xn − i − 1 → X where 0 ≤ i < n is called an external operation by the scalar set or operator set S. In particular, for a binary operation, ω: S × X → X is called a left-external operation by S, and ω: X × S → X is called a right-external operation by S. An example of an internal operation is vector addition, where two vectors are added and the result is a vector. An example of an external operation is scalar multiplication, where a vector is multiplied by a scalar and the result is a vector. An n-ary multifunction or multioperation ω is a mapping from a Cartesian power of a set into the set of subsets of that set, formally ω : X n → P ( X ) {\displaystyle \omega :X^{n}\rightarrow {\mathcal {P}}(X)} . == See also == Finitary relation Hyperoperation Infix notation Operator (mathematics) Order of operations == References ==
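To make the definitions above concrete, the following Python sketch (illustrative only; the function names are chosen here and are not part of the article) models a binary internal operation, a nullary operation, a left-external operation by a scalar set, and a partial operation whose domain of definition is a proper subset of X²:

```python
# A binary (arity-2) internal operation on X = real numbers: omega: X**2 -> X
def add(a: float, b: float) -> float:
    return a + b

# A nullary (arity-0) operation is just a distinguished element of the codomain.
def zero() -> float:
    return 0.0

# A left-external binary operation by a scalar set S: omega: S x X -> X,
# here scalar multiplication of a 2-component vector.
def scale(s: float, v: tuple[float, float]) -> tuple[float, float]:
    return (s * v[0], s * v[1])

# A partial binary operation: real division is undefined when the divisor is 0,
# so its domain of definition is a proper subset of X**2.
def divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("division by zero is outside the domain of definition")
    return a / b

print(add(2.0, 3.0), zero(), scale(2.0, (1.0, -4.0)), divide(1.0, 4.0))
```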
|
https://en.wikipedia.org/wiki/Operation_(mathematics)
|
Symmetry occurs not only in geometry, but also in other branches of mathematics. Symmetry is a type of invariance: the property that a mathematical object remains unchanged under a set of operations or transformations. Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This can occur in many ways; for example, if X is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups. If the object X is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (i.e., an isometry). In general, every kind of structure in mathematics will have its own kind of symmetry, many of which are listed in the given points mentioned above. == Symmetry in geometry == The types of symmetry considered in basic geometry include reflectional symmetry, rotational symmetry, translational symmetry and glide reflection symmetry, which are described more fully in the main article Symmetry (geometry). == Symmetry in calculus == === Even and odd functions === ==== Even functions ==== Let f(x) be a real-valued function of a real variable, then f is even if the following equation holds for all x and -x in the domain of f: f ( x ) = f ( − x ) {\displaystyle f(x)=f(-x)} Geometrically speaking, the graph face of an even function is symmetric with respect to the y-axis, meaning that its graph remains unchanged after reflection about the y-axis. Examples of even functions include |x|, x2, x4, cos(x), and cosh(x). ==== Odd functions ==== Again, let f be a real-valued function of a real variable, then f is odd if the following equation holds for all x and -x in the domain of f: − f ( x ) = f ( − x ) {\displaystyle -f(x)=f(-x)} That is, f ( x ) + f ( − x ) = 0 . {\displaystyle f(x)+f(-x)=0\,.} Geometrically, the graph of an odd function has rotational symmetry with respect to the origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin. Examples of odd functions are x, x3, sin(x), sinh(x), and erf(x). === Integrating === The integral of an odd function from −A to +A is zero, provided that A is finite and that the function is integrable (e.g., has no vertical asymptotes between −A and A). The integral of an even function from −A to +A is twice the integral from 0 to +A, provided that A is finite and the function is integrable (e.g., has no vertical asymptotes between −A and A). This also holds true when A is infinite, but only if the integral converges. === Series === The Maclaurin series of an even function includes only even powers. The Maclaurin series of an odd function includes only odd powers. The Fourier series of a periodic even function includes only cosine terms. The Fourier series of a periodic odd function includes only sine terms. == Symmetry in linear algebra == === Symmetry in matrices === In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose (i.e., it is invariant under matrix transposition). Formally, matrix A is symmetric if A = A T . {\displaystyle A=A^{T}.} By the definition of matrix equality, which requires that the entries in all corresponding positions be equal, equal matrices must have the same dimensions (as matrices of different sizes or shapes cannot be equal). Consequently, only square matrices can be symmetric. 
The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if the entries are written as A = (aij), then aij = aji, for all indices i and j. For example, the following 3×3 matrix is symmetric: [ 1 7 3 7 4 − 5 3 − 5 6 ] {\displaystyle {\begin{bmatrix}1&7&3\\7&4&-5\\3&-5&6\end{bmatrix}}} Every square diagonal matrix is symmetric, since all off-diagonal entries are zero. Similarly, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative. In linear algebra, a real symmetric matrix represents a self-adjoint operator over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them. == Symmetry in abstract algebra == === Symmetric groups === The symmetric group Sn (on a finite set of n symbols) is the group whose elements are all the permutations of the n symbols, and whose group operation is the composition of such permutations, which are treated as bijective functions from the set of symbols to itself. Since there are n! (n factorial) possible permutations of a set of n symbols, it follows that the order (i.e., the number of elements) of the symmetric group Sn is n!. === Symmetric polynomials === A symmetric polynomial is a polynomial P(X1, X2, ..., Xn) in n variables, such that if any of the variables are interchanged, one obtains the same polynomial. Formally, P is a symmetric polynomial if for any permutation σ of the subscripts 1, 2, ..., n, one has P(Xσ(1), Xσ(2), ..., Xσ(n)) = P(X1, X2, ..., Xn). Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting. From this point of view, the elementary symmetric polynomials are the most fundamental symmetric polynomials. A theorem states that any symmetric polynomial can be expressed in terms of elementary symmetric polynomials, which implies that every symmetric polynomial expression in the roots of a monic polynomial can alternatively be given as a polynomial expression in the coefficients of the polynomial. ==== Examples ==== In two variables X1 and X2, one has symmetric polynomials such as: X 1 3 + X 2 3 − 7 {\displaystyle X_{1}^{3}+X_{2}^{3}-7} 4 X 1 2 X 2 2 + X 1 3 X 2 + X 1 X 2 3 + ( X 1 + X 2 ) 4 {\displaystyle 4X_{1}^{2}X_{2}^{2}+X_{1}^{3}X_{2}+X_{1}X_{2}^{3}+(X_{1}+X_{2})^{4}} and in three variables X1, X2 and X3, one has as a symmetric polynomial: X 1 X 2 X 3 − 2 X 1 X 2 − 2 X 1 X 3 − 2 X 2 X 3 {\displaystyle X_{1}X_{2}X_{3}-2X_{1}X_{2}-2X_{1}X_{3}-2X_{2}X_{3}\,} === Symmetric tensors === In mathematics, a symmetric tensor is tensor that is invariant under a permutation of its vector arguments: T ( v 1 , v 2 , … , v r ) = T ( v σ 1 , v σ 2 , … , v σ r ) {\displaystyle T(v_{1},v_{2},\dots ,v_{r})=T(v_{\sigma 1},v_{\sigma 2},\dots ,v_{\sigma r})} for every permutation σ of the symbols {1,2,...,r}. Alternatively, an rth order symmetric tensor represented in coordinates as a quantity with r indices satisfies T i 1 i 2 … i r = T i σ 1 i σ 2 … i σ r . 
{\displaystyle T_{i_{1}i_{2}\dots i_{r}}=T_{i_{\sigma 1}i_{\sigma 2}\dots i_{\sigma r}}.} The space of symmetric tensors of rank r on a finite-dimensional vector space is naturally isomorphic to the dual of the space of homogeneous polynomials of degree r on V. Over fields of characteristic zero, the graded vector space of all symmetric tensors can be naturally identified with the symmetric algebra on V. A related concept is that of the antisymmetric tensor or alternating form. Symmetric tensors occur widely in engineering, physics and mathematics. === Galois theory === Given a polynomial, it may be that some of the roots are connected by various algebraic equations. For example, it may be that for two of the roots, say A and B, that A2 + 5B3 = 7. The central idea of Galois theory is to consider those permutations (or rearrangements) of the roots having the property that any algebraic equation satisfied by the roots is still satisfied after the roots have been permuted. An important proviso is that we restrict ourselves to algebraic equations whose coefficients are rational numbers. Thus, Galois theory studies the symmetries inherent in algebraic equations. === Automorphisms of algebraic objects === In abstract algebra, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object. ==== Examples ==== In set theory, an arbitrary permutation of the elements of a set X is an automorphism. The automorphism group of X is also called the symmetric group on X. In elementary arithmetic, the set of integers, Z, considered as a group under addition, has a unique nontrivial automorphism: negation. Considered as a ring, however, it has only the trivial automorphism. Generally speaking, negation is an automorphism of any abelian group, but not of a ring or field. A group automorphism is a group isomorphism from a group to itself. Informally, it is a permutation of the group elements such that the structure remains unchanged. For every group G there is a natural group homomorphism G → Aut(G) whose image is the group Inn(G) of inner automorphisms and whose kernel is the center of G. Thus, if G has trivial center it can be embedded into its own automorphism group. In linear algebra, an endomorphism of a vector space V is a linear operator V → V. An automorphism is an invertible linear operator on V. When the vector space is finite-dimensional, the automorphism group of V is the same as the general linear group, GL(V). A field automorphism is a bijective ring homomorphism from a field to itself. In the cases of the rational numbers (Q) and the real numbers (R) there are no nontrivial field automorphisms. Some subfields of R have nontrivial field automorphisms, which however do not extend to all of R (because they cannot preserve the property of a number having a square root in R). In the case of the complex numbers, C, there is a unique nontrivial automorphism that sends R into R: complex conjugation, but there are infinitely (uncountably) many "wild" automorphisms (assuming the axiom of choice). Field automorphisms are important to the theory of field extensions, in particular Galois extensions. 
In the case of a Galois extension L/K the subgroup of all automorphisms of L fixing K pointwise is called the Galois group of the extension. == Symmetry in representation theory == === Symmetry in quantum mechanics: bosons and fermions === In quantum mechanics, bosons have representatives that are symmetric under permutation operators, and fermions have antisymmetric representatives. This implies the Pauli exclusion principle for fermions. In fact, the Pauli exclusion principle with a single-valued many-particle wavefunction is equivalent to requiring the wavefunction to be antisymmetric. An antisymmetric two-particle state is represented as a sum of states in which one particle is in state | x ⟩ {\displaystyle \scriptstyle |x\rangle } and the other in state | y ⟩ {\displaystyle \scriptstyle |y\rangle } : | ψ ⟩ = ∑ x , y A ( x , y ) | x , y ⟩ {\displaystyle |\psi \rangle =\sum _{x,y}A(x,y)|x,y\rangle } and antisymmetry under exchange means that A(x,y) = −A(y,x). This implies that A(x,x) = 0, which is Pauli exclusion. It is true in any basis, since unitary changes of basis keep antisymmetric matrices antisymmetric, although strictly speaking, the quantity A(x,y) is not a matrix but an antisymmetric rank-two tensor. Conversely, if the diagonal quantities A(x,x) are zero in every basis, then the wavefunction component: A ( x , y ) = ⟨ ψ | x , y ⟩ = ⟨ ψ | ( | x ⟩ ⊗ | y ⟩ ) {\displaystyle A(x,y)=\langle \psi |x,y\rangle =\langle \psi |(|x\rangle \otimes |y\rangle )} is necessarily antisymmetric. To prove it, consider the matrix element: ⟨ ψ | ( ( | x ⟩ + | y ⟩ ) ⊗ ( | x ⟩ + | y ⟩ ) ) {\displaystyle \langle \psi |((|x\rangle +|y\rangle )\otimes (|x\rangle +|y\rangle ))\,} This is zero, because the two particles have zero probability to both be in the superposition state | x ⟩ + | y ⟩ {\displaystyle \scriptstyle |x\rangle +|y\rangle } . But this is equal to ⟨ ψ | x , x ⟩ + ⟨ ψ | x , y ⟩ + ⟨ ψ | y , x ⟩ + ⟨ ψ | y , y ⟩ {\displaystyle \langle \psi |x,x\rangle +\langle \psi |x,y\rangle +\langle \psi |y,x\rangle +\langle \psi |y,y\rangle \,} The first and last terms on the right hand side are diagonal elements and are zero, and the whole sum is equal to zero. So the wavefunction matrix elements obey: ⟨ ψ | x , y ⟩ + ⟨ ψ | y , x ⟩ = 0 {\displaystyle \langle \psi |x,y\rangle +\langle \psi |y,x\rangle =0\,} . or A ( x , y ) = − A ( y , x ) {\displaystyle A(x,y)=-A(y,x)\,} == Symmetry in set theory == === Symmetric relation === We call a relation symmetric if every time the relation stands from A to B, it stands too from B to A. Note that symmetry is not the exact opposite of antisymmetry. == Symmetry in metric spaces == === Isometries of a space === An isometry is a distance-preserving map between metric spaces. Given a metric space, or a set and scheme for assigning distances between elements of the set, an isometry is a transformation which maps elements to another metric space such that the distance between the elements in the new metric space is equal to the distance between the elements in the original metric space. In a two-dimensional or three-dimensional space, two geometric figures are congruent if they are related by an isometry: related by either a rigid motion, or a composition of a rigid motion and a reflection. Up to a relation by a rigid motion, they are equal if related by a direct isometry. Isometries have been used to unify the working definition of symmetry in geometry and for functions, probability distributions, matrices, strings, graphs, etc. 
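As a small numerical illustration of the preceding discussion of isometries (a sketch added here, not part of the article), the Python code below checks that a rotation of the plane preserves the distance between two points, while a non-rigid map such as scaling does not:

```python
import math

def rotate(p, theta):
    """Rotate a point p = (x, y) about the origin by angle theta (radians)."""
    x, y = p
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def double(p):
    """Scaling by 2: a symmetry of some structures, but not an isometry."""
    return (2 * p[0], 2 * p[1])

p, q = (1.0, 2.0), (-3.0, 0.5)
theta = 0.7

# A rotation is an isometry: distances are preserved.
assert math.isclose(dist(p, q), dist(rotate(p, theta), rotate(q, theta)))

# Scaling doubles distances, so it is not an isometry.
assert not math.isclose(dist(p, q), dist(double(p), double(q)))
```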
== Symmetries of differential equations == A symmetry of a differential equation is a transformation that leaves the differential equation invariant. Knowledge of such symmetries may help solve the differential equation. A Lie symmetry of a system of differential equations is a continuous symmetry of the system. Knowledge of a Lie symmetry can be used to simplify an ordinary differential equation through reduction of order. For ordinary differential equations, knowledge of an appropriate set of Lie symmetries allows one to explicitly calculate a set of first integrals, yielding a complete solution without integration. Symmetries may be found by solving a related set of ordinary differential equations. Solving these equations is often much simpler than solving the original differential equations. == Symmetry in probability == In the case of a finite number of possible outcomes, symmetry with respect to permutations (relabelings) implies a discrete uniform distribution. In the case of a real interval of possible outcomes, symmetry with respect to interchanging sub-intervals of equal length corresponds to a continuous uniform distribution. In other cases, such as "taking a random integer" or "taking a random real number", there are no probability distributions at all that are symmetric with respect to relabellings or to exchange of equally long subintervals. Other reasonable symmetries do not single out one particular distribution, or in other words, there is not a unique probability distribution providing maximum symmetry. There is one type of isometry in one dimension that may leave the probability distribution unchanged: reflection in a point, for example zero. A possible symmetry for randomness with positive outcomes is that this reflection applies to the logarithm, i.e., the outcome and its reciprocal have the same distribution. However, this symmetry does not single out any particular distribution uniquely. For a "random point" in a plane or in space, one can choose an origin, and consider a probability distribution with circular or spherical symmetry, respectively. == See also == Use of symmetry in integration Invariance (mathematics) == References == == Bibliography == Weyl, Hermann (1989) [1952]. Symmetry. Princeton Science Library. Princeton University Press. ISBN 0-691-02374-3. Ronan, Mark (2006). Symmetry and the Monster. Oxford University Press. ISBN 978-0-19-280723-6. (Concise introduction for lay reader) du Sautoy, Marcus (2012). Finding Moonshine: A Mathematician's Journey Through Symmetry. Harper Collins. ISBN 978-0-00-738087-9.
|
https://en.wikipedia.org/wiki/Symmetry_in_mathematics
|
Discrete mathematics is the study of mathematical structures that can be considered "discrete" (in a way analogous to discrete variables, having a bijection with the set of natural numbers) rather than "continuous" (analogously to continuous functions). Objects studied in discrete mathematics include integers, graphs, and statements in logic. By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets (finite sets or sets with the same cardinality as the natural numbers). However, there is no exact definition of the term "discrete mathematics". The set of objects studied in discrete mathematics can be finite or infinite. The term finite mathematics is sometimes applied to parts of the field of discrete mathematics that deals with finite sets, particularly those areas relevant to business. Research in discrete mathematics increased in the latter half of the twentieth century partly due to the development of digital computers which operate in "discrete" steps and store data in "discrete" bits. Concepts and notations from discrete mathematics are useful in studying and describing objects and problems in branches of computer science, such as computer algorithms, programming languages, cryptography, automated theorem proving, and software development. Conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems. Although the main objects of study in discrete mathematics are discrete objects, analytic methods from "continuous" mathematics are often employed as well. In university curricula, discrete mathematics appeared in the 1980s, initially as a computer science support course; its contents were somewhat haphazard at the time. The curriculum has thereafter developed in conjunction with efforts by ACM and MAA into a course that is basically intended to develop mathematical maturity in first-year students; therefore, it is nowadays a prerequisite for mathematics majors in some universities as well. Some high-school-level discrete mathematics textbooks have appeared as well. At this level, discrete mathematics is sometimes seen as a preparatory course, like precalculus in this respect. The Fulkerson Prize is awarded for outstanding papers in discrete mathematics. == Topics == === Theoretical computer science === Theoretical computer science includes areas of discrete mathematics relevant to computing. It draws heavily on graph theory and mathematical logic. Included within theoretical computer science is the study of algorithms and data structures. Computability studies what can be computed in principle, and has close ties to logic, while complexity studies the time, space, and other resources taken by computations. Automata theory and formal language theory are closely related to computability. Petri nets and process algebras are used to model computer systems, and methods from discrete mathematics are used in analyzing VLSI electronic circuits. Computational geometry applies algorithms to geometrical problems and representations of geometrical objects, while computer image analysis applies them to representations of images. Theoretical computer science also includes the study of various continuous computational topics. 
=== Information theory === Information theory involves the quantification of information. Closely related is coding theory which is used to design efficient and reliable data transmission and storage methods. Information theory also includes continuous topics such as: analog signals, analog coding, analog encryption. === Logic === Logic is the study of the principles of valid reasoning and inference, as well as of consistency, soundness, and completeness. For example, in most systems of logic (but not in intuitionistic logic) Peirce's law (((P→Q)→P)→P) is a theorem. For classical logic, it can be easily verified with a truth table. The study of mathematical proof is particularly important in logic, and has accumulated to automated theorem proving and formal verification of software. Logical formulas are discrete structures, as are proofs, which form finite trees or, more generally, directed acyclic graph structures (with each inference step combining one or more premise branches to give a single conclusion). The truth values of logical formulas usually form a finite set, generally restricted to two values: true and false, but logic can also be continuous-valued, e.g., fuzzy logic. Concepts such as infinite proof trees or infinite derivation trees have also been studied, e.g. infinitary logic. === Set theory === Set theory is the branch of mathematics that studies sets, which are collections of objects, such as {blue, white, red} or the (infinite) set of all prime numbers. Partially ordered sets and sets with other relations have applications in several areas. In discrete mathematics, countable sets (including finite sets) are the main focus. The beginning of set theory as a branch of mathematics is usually marked by Georg Cantor's work distinguishing between different kinds of infinite set, motivated by the study of trigonometric series, and further development of the theory of infinite sets is outside the scope of discrete mathematics. Indeed, contemporary work in descriptive set theory makes extensive use of traditional continuous mathematics. === Combinatorics === Combinatorics studies the ways in which discrete structures can be combined or arranged. Enumerative combinatorics concentrates on counting the number of certain combinatorial objects - e.g. the twelvefold way provides a unified framework for counting permutations, combinations and partitions. Analytic combinatorics concerns the enumeration (i.e., determining the number) of combinatorial structures using tools from complex analysis and probability theory. In contrast with enumerative combinatorics which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae. Topological combinatorics concerns the use of techniques from topology and algebraic topology/combinatorial topology in combinatorics. Design theory is a study of combinatorial designs, which are collections of subsets with certain intersection properties. Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. Originally a part of number theory and analysis, partition theory is now considered a part of combinatorics or an independent field. Order theory is the study of partially ordered sets, both finite and infinite. 
=== Graph theory === Graph theory, the study of graphs and networks, is often considered part of combinatorics, but has grown large enough and distinct enough, with its own kind of problems, to be regarded as a subject in its own right. Graphs are one of the prime objects of study in discrete mathematics. They are among the most ubiquitous models of both natural and human-made structures. They can model many types of relations and process dynamics in physical, biological and social systems. In computer science, they can represent networks of communication, data organization, computational devices, the flow of computation, etc. In mathematics, they are useful in geometry and certain parts of topology, e.g. knot theory. Algebraic graph theory has close links with group theory and topological graph theory has close links to topology. There are also continuous graphs; however, for the most part, research in graph theory falls within the domain of discrete mathematics. === Number theory === Number theory is concerned with the properties of numbers in general, particularly integers. It has applications to cryptography and cryptanalysis, particularly with regard to modular arithmetic, diophantine equations, linear and quadratic congruences, prime numbers and primality testing. Other discrete aspects of number theory include geometry of numbers. In analytic number theory, techniques from continuous mathematics are also used. Topics that go beyond discrete objects include transcendental numbers, diophantine approximation, p-adic analysis and function fields. === Algebraic structures === Algebraic structures occur as both discrete examples and continuous examples. Discrete algebras include: Boolean algebra used in logic gates and programming; relational algebra used in databases; discrete and finite versions of groups, rings and fields are important in algebraic coding theory; discrete semigroups and monoids appear in the theory of formal languages. === Discrete analogues of continuous mathematics === There are many concepts and theories in continuous mathematics which have discrete versions, such as discrete calculus, discrete Fourier transforms, discrete geometry, discrete logarithms, discrete differential geometry, discrete exterior calculus, discrete Morse theory, discrete optimization, discrete probability theory, discrete probability distribution, difference equations, discrete dynamical systems, and discrete vector measures. ==== Calculus of finite differences, discrete analysis, and discrete calculus ==== In discrete calculus and the calculus of finite differences, a function defined on an interval of the integers is usually called a sequence. A sequence could be a finite sequence from a data source or an infinite sequence from a discrete dynamical system. Such a discrete function could be defined explicitly by a list (if its domain is finite), or by a formula for its general term, or it could be given implicitly by a recurrence relation or difference equation. Difference equations are similar to differential equations, but replace differentiation by taking the difference between adjacent terms; they can be used to approximate differential equations or (more often) studied in their own right. Many questions and methods concerning differential equations have counterparts for difference equations. For instance, where there are integral transforms in harmonic analysis for studying continuous functions or analogue signals, there are discrete transforms for discrete functions or digital signals. 
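As a minimal illustration of the preceding remarks on difference equations (a sketch added here; the forward-Euler scheme is a standard example and is not discussed in the article itself), the Python code below replaces the derivative in y′ = y, y(0) = 1 with a forward difference and compares the result at t = 1 with the exact value e:

```python
import math

# Difference equation approximating y' = y, y(0) = 1 (forward Euler):
#   y_{n+1} = y_n + h * y_n
def euler(h: float, steps: int) -> float:
    y = 1.0
    for _ in range(steps):
        y = y + h * y
    return y

t = 1.0
for n in (10, 100, 1000):
    approx = euler(t / n, n)
    print(n, approx, abs(approx - math.e))   # error shrinks roughly like 1/n
```

The error shrinks roughly in proportion to 1/n, the usual first-order behaviour of this difference scheme.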
As well as discrete metric spaces, there are more general discrete topological spaces, finite metric spaces, finite topological spaces. The time scale calculus is a unification of the theory of difference equations with that of differential equations, which has applications to fields requiring simultaneous modelling of discrete and continuous data. Another way of modeling such a situation is the notion of hybrid dynamical systems. ==== Discrete geometry ==== Discrete geometry and combinatorial geometry are about combinatorial properties of discrete collections of geometrical objects. A long-standing topic in discrete geometry is tiling of the plane. In algebraic geometry, the concept of a curve can be extended to discrete geometries by taking the spectra of polynomial rings over finite fields to be models of the affine spaces over that field, and letting subvarieties or spectra of other rings provide the curves that lie in that space. Although the space in which the curves appear has a finite number of points, the curves are not so much sets of points as analogues of curves in continuous settings. For example, every point of the form V ( x − c ) ⊂ Spec K [ x ] = A 1 {\displaystyle V(x-c)\subset \operatorname {Spec} K[x]=\mathbb {A} ^{1}} for K {\displaystyle K} a field can be studied either as Spec K [ x ] / ( x − c ) ≅ Spec K {\displaystyle \operatorname {Spec} K[x]/(x-c)\cong \operatorname {Spec} K} , a point, or as the spectrum Spec K [ x ] ( x − c ) {\displaystyle \operatorname {Spec} K[x]_{(x-c)}} of the local ring at (x-c), a point together with a neighborhood around it. Algebraic varieties also have a well-defined notion of tangent space called the Zariski tangent space, making many features of calculus applicable even in finite settings. ==== Discrete modelling ==== In applied mathematics, discrete modelling is the discrete analogue of continuous modelling. In discrete modelling, discrete formulae are fit to data. A common method in this form of modelling is to use recurrence relation. Discretization concerns the process of transferring continuous models and equations into discrete counterparts, often for the purposes of making calculations easier by using approximations. Numerical analysis provides an important example. == Challenges == The history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. In graph theory, much research was motivated by attempts to prove the four color theorem, first stated in 1852, but not proved until 1976 (by Kenneth Appel and Wolfgang Haken, using substantial computer assistance). In logic, the second problem on David Hilbert's list of open problems presented in 1900 was to prove that the axioms of arithmetic are consistent. Gödel's second incompleteness theorem, proved in 1931, showed that this was not possible – at least not within arithmetic itself. Hilbert's tenth problem was to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. In 1970, Yuri Matiyasevich proved that this could not be done. The need to break German codes in World War II led to advances in cryptography and theoretical computer science, with the first programmable digital electronic computer being developed at England's Bletchley Park with the guidance of Alan Turing and his seminal work, On Computable Numbers. 
The Cold War meant that cryptography remained important, with fundamental advances such as public-key cryptography being developed in the following decades. The telecommunications industry has also motivated advances in discrete mathematics, particularly in graph theory and information theory. Formal verification of statements in logic has been necessary for software development of safety-critical systems, and advances in automated theorem proving have been driven by this need. Computational geometry has been an important part of the computer graphics incorporated into modern video games and computer-aided design tools. Several fields of discrete mathematics, particularly theoretical computer science, graph theory, and combinatorics, are important in addressing the challenging bioinformatics problems associated with understanding the tree of life. Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP. The Clay Mathematics Institute has offered a $1 million USD prize for the first correct proof, along with prizes for six other mathematical problems. == See also == Outline of discrete mathematics Cyberchase, a show that teaches discrete mathematics to children == References == == Further reading == == External links == Discrete mathematics Archived 2011-08-29 at the Wayback Machine at the utk.edu Mathematics Archives, providing links to syllabi, tutorials, programs, etc. Iowa Central: Electrical Technologies Program Discrete mathematics for Electrical engineering.
|
https://en.wikipedia.org/wiki/Discrete_mathematics
|
Mathematical Reviews is a journal published by the American Mathematical Society (AMS) that contains brief synopses, and in some cases evaluations, of many articles in mathematics, statistics, and theoretical computer science. The AMS also publishes an associated online bibliographic database called MathSciNet, which contains an electronic version of Mathematical Reviews. == Reviews == Mathematical Reviews was founded by Otto E. Neugebauer in 1940 as an alternative to the German journal Zentralblatt für Mathematik, which Neugebauer had also founded a decade earlier, but which under the Nazis had begun censoring reviews by and of Jewish mathematicians. The goal of the new journal was to give reviews of every mathematical research publication. As of November 2007, the Mathematical Reviews database contained information on over 2.2 million articles. The authors of reviews are volunteers, usually chosen by the editors because of some expertise in the area of the article. It and Zentralblatt für Mathematik are the only comprehensive resources of this type. (The Mathematics section of Referativny Zhurnal is available only in Russian and is smaller in scale and difficult to access.) Often reviews give detailed summaries of the contents of the paper, sometimes with critical comments by the reviewer and references to related work. However, reviewers are not encouraged to criticize the paper, because the author does not have an opportunity to respond. The author's summary may be quoted when it is not possible to give an independent review, or when the summary is deemed adequate by the reviewer or the editors. Only bibliographic information may be given when a work is in an unusual language, when it is a brief paper in a conference volume, or when it is outside the primary scope of the Reviews. Originally the reviews were written in several languages, but later an "English only" policy was introduced. Selected reviews (called "featured reviews") were also published as a book by the AMS, but this program has been discontinued. == Online database == In 1980, all the contents of Mathematical Reviews since 1940 were integrated into an electronic searchable database. Eventually the contents became part of MathSciNet, which was officially launched in 1996. MathSciNet also has extensive citation information. == Mathematical citation quotient == Mathematical Reviews computes a mathematical citation quotient (MCQ) for each journal. Like the impact factor and other similar citation rates, this is a numerical statistic that measures the frequency of citations to a journal. The MCQ is calculated by counting the total number of citations into the journal that have been indexed by Mathematical Reviews over a five-year period, and dividing this total by the total number of papers published by the journal during that five-year period. For the period 2012 – 2014, the top five journals in Mathematical Reviews by MCQ were: Acta Numerica — MCQ 8.14 Publications Mathématiques de l'IHÉS — MCQ 5.06 Journal of the American Mathematical Society — MCQ 4.79 Annals of Mathematics — MCQ 4.60 Forum of Mathematics, Pi — MCQ 4.54 The "All Journal MCQ" is computed by considering all the journals indexed by Mathematical Reviews as a single meta-journal, which makes it possible to determine if a particular journal has a higher or lower MCQ than average. The 2018 All Journal MCQ is 0.41. 
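Since the MCQ is just a ratio of citations to papers over a five-year window, it can be computed in a few lines; the figures below are invented for illustration and are not actual journal data:

```python
def mcq(citations_in_window: int, papers_in_window: int) -> float:
    """Mathematical citation quotient: citations indexed by Mathematical Reviews
    to a journal over a five-year window, divided by the number of papers the
    journal published in that same window."""
    if papers_in_window <= 0:
        raise ValueError("the journal must have published at least one paper")
    return citations_in_window / papers_in_window

# Hypothetical journal: 1,230 indexed citations to 300 papers over five years.
print(round(mcq(1230, 300), 2))  # 4.1
```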
== Current Mathematical Publications == Current Mathematical Publications was a subject index in print format that published the newest and upcoming mathematical literature, chosen and indexed by Mathematical Reviews editors. It covered the period from 1965 until 2012, when it was discontinued. == See also == Referativnyi Zhurnal, published in former Soviet Union and now in Russia Zentralblatt MATH, published in Germany INSPEC Web of Science IEEE Xplore Current Index to Statistics == References == == External links == Mathematical Reviews database with access to the online search function for the database (for subscribers), and links to information about the service, such as the following: Mathematical Reviews editorial statement outlines the mission of Mathematical Reviews; Mathematical Reviews guide for reviewers, intended for both reviewers and users of Mathematical Reviews. Exceptional MathReviews collected by Kimball Martin and sorted by amusement factor.
|
https://en.wikipedia.org/wiki/Mathematical_Reviews
|
In mathematics and physics, the term generator or generating set may refer to any of a number of related concepts. The underlying concept in each case is that of a smaller set of objects, together with a set of operations that can be applied to it, that result in the creation of a larger collection of objects, called the generated set. The larger set is then said to be generated by the smaller set. It is commonly the case that the generating set has a simpler set of properties than the generated set, thus making it easier to discuss and examine. It is usually the case that properties of the generating set are in some way preserved by the act of generation; likewise, the properties of the generated set are often reflected in the generating set. == List of generators == A list of examples of generating sets follow. Generating set or spanning set of a vector space: a set that spans the vector space Generating set of a group: A subset of a group that is not contained in any subgroup of the group other than the entire group Generating set of a ring: A subset S of a ring A generates A if the only subring of A containing S is A Generating set of an ideal in a ring Generating set of a module A generator, in category theory, is an object that can be used to distinguish morphisms In topology, a collection of sets that generate the topology is called a subbase Generating set of a topological algebra: S is a generating set of a topological algebra A if the smallest closed subalgebra of A containing S is A Generating a σ-algebra by a collection of subsets == Differential equations == In the study of differential equations, and commonly those occurring in physics, one has the idea of a set of infinitesimal displacements that can be extended to obtain a manifold, or at least, a local part of it, by means of integration. The general concept is of using the exponential map to take the vectors in the tangent space and extend them, as geodesics, to an open set surrounding the tangent point. In this case, it is not unusual to call the elements of the tangent space the generators of the manifold. When the manifold possesses some sort of symmetry, there is also the related notion of a charge or current, which is sometimes also called the generator, although, strictly speaking, charges are not elements of the tangent space. Elements of the Lie algebra to a Lie group are sometimes referred to as "generators of the group," especially by physicists. The Lie algebra can be thought of as the infinitesimal vectors generating the group, at least locally, by means of the exponential map, but the Lie algebra does not form a generating set in the strict sense. In stochastic analysis, an Itō diffusion or more general Itō process has an infinitesimal generator. The generator of any continuous symmetry implied by Noether's theorem, the generators of a Lie group being a special case. In this case, a generator is sometimes called a charge or Noether charge, examples include: angular momentum as the generator of rotations, linear momentum as the generator of translations, electric charge being the generator of the U(1) symmetry group of electromagnetism, the color charges of quarks are the generators of the SU(3) color symmetry in quantum chromodynamics, More precisely, "charge" should apply only to the root system of a Lie group. == See also == Free object Generating function Lie theory Symmetry (physics) Supersymmetry Gauge theory Field (physics) == References == == External links == Generating Sets, K. Conrad
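As a concrete, deliberately small illustration of generating a larger object from a smaller set (a sketch added here; the construction below is not from the article), the following Python code computes the subgroup of the integers mod 12 generated by a given subset by repeatedly closing it under the group operation:

```python
def generated_subgroup(gens, n):
    """Subgroup of Z/nZ (under addition mod n) generated by the set gens."""
    subgroup = {0}                                     # the identity is always included
    frontier = set(gens) | {(-g) % n for g in gens}    # generators and their inverses
    while frontier:
        subgroup |= frontier
        # close under the group operation
        frontier = {(a + b) % n for a in subgroup for b in subgroup} - subgroup
    return sorted(subgroup)

print(generated_subgroup({4}, 12))      # [0, 4, 8]
print(generated_subgroup({4, 6}, 12))   # [0, 2, 4, 6, 8, 10]
print(generated_subgroup({5}, 12))      # all of Z/12Z, since gcd(5, 12) = 1
```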
|
https://en.wikipedia.org/wiki/Generator_(mathematics)
|
In mathematics, the adjective trivial is often used to refer to a claim or a case which can be readily obtained from context, or a particularly simple object possessing a given structure (e.g., group, topological space). The noun triviality usually refers to a simple technical aspect of some proof or definition. The origin of the term in mathematical language comes from the medieval trivium curriculum, which distinguishes from the more difficult quadrivium curriculum. The opposite of trivial is nontrivial, which is commonly used to indicate that an example or a solution is not simple, or that a statement or a theorem is not easy to prove. Triviality does not have a rigorous definition in mathematics. It is subjective, and often determined in a given situation by the knowledge and experience of those considering the case. == Trivial and nontrivial solutions == In mathematics, the term "trivial" is often used to refer to objects (e.g., groups, topological spaces) with a very simple structure. These include, among others: Empty set: the set containing no or null members Trivial group: the mathematical group containing only the identity element Trivial ring: a ring defined on a singleton set "Trivial" can also be used to describe solutions to an equation that have a very simple structure, but for the sake of completeness cannot be omitted. These solutions are called the trivial solutions. For example, consider the differential equation y ′ = y {\displaystyle y'=y} where y = y ( x ) {\displaystyle y=y(x)} is a function whose derivative is y ′ {\displaystyle y'} . The trivial solution is the zero function y ( x ) = 0 {\displaystyle y(x)=0} while a nontrivial solution is the exponential function y ( x ) = e x . {\displaystyle y(x)=e^{x}.} The differential equation f ″ ( x ) = − λ f ( x ) {\displaystyle f''(x)=-\lambda f(x)} with boundary conditions f ( 0 ) = f ( L ) = 0 {\displaystyle f(0)=f(L)=0} is important in mathematics and physics, as it could be used to describe a particle in a box in quantum mechanics, or a standing wave on a string. It always includes the solution f ( x ) = 0 {\displaystyle f(x)=0} , which is considered obvious and hence is called the "trivial" solution. In some cases, there may be other solutions (sinusoids), which are called "nontrivial" solutions. Similarly, mathematicians often describe Fermat's last theorem as asserting that there are no nontrivial integer solutions to the equation a n + b n = c n {\displaystyle a^{n}+b^{n}=c^{n}} , where n is greater than 2. Clearly, there are some solutions to the equation. For example, a = b = c = 0 {\displaystyle a=b=c=0} is a solution for any n, but such solutions are obvious and obtainable with little effort, and hence "trivial". == In mathematical reasoning == Trivial may also refer to any easy case of a proof, which for the sake of completeness cannot be ignored. For instance, proofs by mathematical induction have two parts: the "base case" which shows that the theorem is true for a particular initial value (such as n = 0 or n = 1), and the inductive step which shows that if the theorem is true for a certain value of n, then it is also true for the value n + 1. The base case is often trivial and is identified as such, although there are situations where the base case is difficult but the inductive step is trivial. Similarly, one might want to prove that some property is possessed by all the members of a certain set. 
The main part of the proof will consider the case of a nonempty set, and examine the members in detail; in the case where the set is empty, the property is trivially possessed by all the members of the empty set, since there are none (see vacuous truth for more). The judgement of whether a situation under consideration is trivial or not depends on who considers it since the situation is obviously true for someone who has sufficient knowledge or experience of it while to someone who has never seen this, it may be even hard to be understood so not trivial at all. And there can be an argument about how quickly and easily a problem should be recognized for the problem to be treated as trivial. The following examples show the subjectivity and ambiguity of the triviality judgement. Triviality also depends on context. A proof in functional analysis would probably, given a number, trivially assume the existence of a larger number. However, when proving basic results about the natural numbers in elementary number theory, the proof may very well hinge on the remark that any natural number has a successor – a statement which should itself be proved or be taken as an axiom so is not trivial (for more, see Peano's axioms). === Trivial proofs === In some texts, a trivial proof refers to a statement involving a material implication P→Q, where the consequent Q, is always true. Here, the proof follows immediately by virtue of the definition of material implication in which as the implication is true regardless of the truth value of the antecedent P if the consequent is fixed as true. A related concept is a vacuous truth, where the antecedent P in a material implication P→Q is false. In this case, the implication is always true regardless of the truth value of the consequent Q – again by virtue of the definition of material implication. == Humor == A common joke in the mathematical community is to say that "trivial" is synonymous with "proved"—that is, any theorem can be considered "trivial" once it is known to be proved as true. Two mathematicians who are discussing a theorem: the first mathematician says that the theorem is "trivial". In response to the other's request for an explanation, he then proceeds with twenty minutes of exposition. At the end of the explanation, the second mathematician agrees that the theorem is trivial. But can we say that this theorem is trivial even if it takes a lot of time and effort to prove it? When a mathematician says that a theorem is trivial, but he is unable to prove it by himself at the moment that he pronounces it as trivial, is the theorem trivial? Often, as a joke, a problem is referred to as "intuitively obvious". For example, someone experienced in calculus would consider the following statement trivial: ∫ 0 1 x 2 d x = 1 3 . {\displaystyle \int _{0}^{1}x^{2}\,dx={\frac {1}{3}}.} However, to someone with no knowledge of integral calculus, this is not obvious, so it is not trivial. == Examples == In number theory, it is often important to find factors of an integer number N. Any number N has four obvious factors: ±1 and ±N. These are called "trivial factors". Any other factor, if it exists, would be called "nontrivial". The homogeneous matrix equation A x = 0 {\displaystyle A\mathbf {x} =\mathbf {0} } , where A {\displaystyle A} is a fixed matrix, x {\displaystyle \mathbf {x} } is an unknown vector, and 0 {\displaystyle \mathbf {0} } is the zero vector, has an obvious solution x = 0 {\displaystyle \mathbf {x} =\mathbf {0} } . 
This is called the "trivial solution". Any other solutions, with x ≠ 0 {\displaystyle \mathbf {x} \neq \mathbf {0} } , are called "nontrivial". In group theory, there is a very simple group with just one element in it; this is often called the "trivial group". All other groups, which are more complicated, are called "nontrivial". In graph theory, the trivial graph is a graph which has only 1 vertex and no edge. Database theory has a concept called functional dependency, written X → Y {\displaystyle X\to Y} . The dependence X → Y {\displaystyle X\to Y} is true if Y is a subset of X, so this type of dependence is called "trivial". All other dependences, which are less obvious, are called "nontrivial". It can be shown that Riemann's zeta function has zeros at the negative even numbers −2, −4, … Though the proof is comparatively easy, this result would still not normally be called trivial; however, it is in this case, for its other zeros are generally unknown and have important applications and involve open questions (such as the Riemann hypothesis). Accordingly, the negative even numbers are called the trivial zeros of the function, while any other zeros are considered to be non-trivial. == See also == Degeneracy Initial and terminal objects List of mathematical jargon Pathological Trivialism Trivial measure Trivial representation Trivial topology == References == == External links == Trivial entry at MathWorld
|
https://en.wikipedia.org/wiki/Triviality_(mathematics)
|
In mathematics, value may refer to several, strongly related notions. In general, a mathematical value may be any definite mathematical object. In elementary mathematics, this is most often a number – for example, a real number such as π or an integer such as 42. The value of a variable or a constant is any number or other mathematical object assigned to it. Physical quantities have numerical values attached to units of measurement. The value of a mathematical expression is the object assigned to this expression when the variables and constants in it are assigned values. The value of a function, given the value(s) assigned to its argument(s), is the quantity assumed by the function for these argument values. For example, if the function f is defined by f(x) = 2x2 − 3x + 1, then assigning the value 3 to its argument x yields the function value 10, since f(3) = 2·32 − 3·3 + 1 = 10. If the variable, expression or function only assumes real values, it is called real-valued. Likewise, a complex-valued variable, expression or function only assumes complex values. == See also == Value function Value (computer science) Absolute value Truth value == References ==
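The worked example above, f(x) = 2x² − 3x + 1 with f(3) = 10, can be checked directly; the few lines below are an illustrative sketch rather than part of the article:

```python
def f(x: float) -> float:
    return 2 * x**2 - 3 * x + 1

assert f(3) == 10          # the value of f at the argument value 3
print(f(3), f(0), f(1.5))  # 10 1 1.0
```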
|
https://en.wikipedia.org/wiki/Value_(mathematics)
|
Connected Mathematics is a comprehensive mathematics program intended for U.S. students in grades 6–8. The curriculum design, text materials for students, and supporting resources for teachers were created and have been progressively refined by the Connected Mathematics Project (CMP) at Michigan State University with advice and contributions from many mathematics teachers, curriculum developers, mathematicians, and mathematics education researchers. The current third edition of Connected Mathematics is a major revision of the program to reflect new expectations of the Common Core State Standards for Mathematics and what the authors have learned from over twenty years of field experience by thousands of teachers working with millions of middle grades students. This CMP3 program is now published in paper and electronic form by Pearson Education. == Core principles == The first edition of Connected Mathematics, developed with financial support from the National Science Foundation, was designed to provide instructional materials for middle grades mathematics. It was based on the 1989 Curriculum and Evaluation Standards and the 1991 Professional Standards for Teaching Mathematics from the National Council of Teachers of Mathematics. These standards highlighted four core features of the curriculum: Comprehensive coverage of mathematical concepts and skills across four content strands—number, algebra, geometry and measurement, and probability and statistics. Connections between the concepts and methods of the four major content strands, and between the abstractions of mathematics and their applications in real-world problem contexts. Instructional materials that transform classrooms into dynamic environments where students learn by solving problems and sharing their thinking with others, while teachers encourage and support students to be curious, to ask questions, and to enjoy learning and using mathematics. Developing students' understanding of mathematical concepts, principles, procedures, and habits of mind, and fostering the disposition to use mathematical reasoning in making sense of new situations and solving problems. These principles have guided the development and refinement of the Connected Mathematics program for over twenty years. The first edition was published in 1995; a major revision, also supported by National Science Foundation funding, was published in 2006; and the current third edition was published in 2014. In the third edition, the collection of units was expanded to cover Common Core Standards for both grade eight and Algebra I. Each CMP grade level course aims to advance student understanding, skills, and problem-solving in every content strand, with increasing sophistication and challenge over the middle school grades. The problem tasks for students are designed to make connections within mathematics, between mathematics and other subject areas, and/or to real-world settings that appeal to students. Curriculum units consist of 3–5 investigations, each focused on a key mathematical idea; each investigation consists of several major problems that the teacher and students explore in class. Applications/Connections/Extensions problem sets are included for each investigation to help students practice, apply, connect, and extend essential understandings. 
While engaged in collaborative problem-solving and classroom discourse about mathematics, students are explicitly encouraged to reflect on their use of what the NCTM standards once called mathematical processes and now refer to as mathematical practices—making sense of problems and solving them, reasoning abstractly and quantitatively, constructing arguments and critiquing the reasoning of others, modeling with mathematics, using mathematical tools strategically, seeking and using structure, expressing regularity in repeated reasoning, and communicating ideas and results with precision. == Implementation challenges == The introduction of new curriculum content, instructional materials, and teaching methods is challenging in K–12 education. When the proposed changes contrast with long-standing traditional practice, it is common to hear concerns from parents, teachers, and other professionals, as well as from students who have been successful and comfortable in traditional classrooms. In recognition of this innovation challenge, the National Science Foundation complemented its investment in new curriculum materials with substantial investments in professional development for teachers. By funding state and urban systemic initiatives, local systemic change projects, and math-science partnership programs, as well as national centers for standards-based school mathematics curriculum dissemination and implementation, the NSF provided powerful support for the adoption and implementation of the various reform mathematics curricula developed during the standards era. In addition to those programs, for nearly twenty years, CMP has sponsored summer Getting to Know CMP institutes, workshops for leaders of CMP implementation, and an annual User's Conference for the sharing of implementation experiences and insights, all on the campus of Michigan State University. The whole reform curriculum effort has greatly enhanced the field's understanding of what works in that important and challenging process—the clearest message being that significant lasting change takes time, persistent effort, and coordination of work by teachers at all levels in a system. == Research findings == Connected Mathematics has become the most widely used of the middle school curriculum materials developed to implement the NCTM Standards. The effects of its use have been described in expository journal articles and evaluated in mathematics education research projects. Many of the research studies are master's or doctoral dissertation research projects focused on specific aspects of the CMP classroom experience and student learning. But there have also been a number of large-scale independent evaluations of the results of the program. In the large-scale controlled research studies the most common (but by no means universal) pattern of results has been better performance by CMP students on measures of conceptual understanding and problem solving and no significant difference between students of CMP and traditional curriculum materials on measures of routine skills and factual knowledge. 
For example, this pattern is what the LieCal project found from a longitudinal study comparing learning by students in CMP and traditional middle grades curricula: (1) Students did not sacrifice basic mathematical skills if they were taught using a standards-based or reform mathematics curriculum like CMP; (2) African American students experienced greater gains in symbol manipulation when they used a traditional curriculum; (3) the use of either the CMP or a non-CMP curriculum improved the mathematics achievement of all students, including students of color; (4) the use of CMP contributed to significantly higher problem-solving growth for all ethnic groups; and (5) a high level of conceptual emphasis in a classroom improved the students’ ability to represent problem situations. Perhaps the most telling result of all is reported in the 2008 study by James Tarr and colleagues at the University of Missouri. While finding no overall significant effects from use of reform or traditional curriculum materials, the study did discover effects favoring the NSF-funded curricula when those programs were implemented with high or even moderate levels of fidelity to Standards-based learning environments. That is, when the innovative programs are used as designed, they produce positive effects. == Historical controversy == Like other curricula designed and developed during the 1990s to implement the NCTM Standards, Connected Math was criticized by supporters of more traditional curricula. Critics made the following claims: Reform curricula like CMP pay too little attention to the development of basic computational skills in number and algebra; Student investigation and discovery of key mathematical concepts and skills might lead to critical gaps and misconceptions in their knowledge. Emphasis on mathematics in real-world contexts might cause students to miss abstractions and generalizations that are the powerful heart of the subject. The lack of explanatory prose in textbooks makes it hard for parents to help their children with homework and puts students with weak note-taking abilities, poor handwriting, slow handwriting, and attention deficits at a distinct disadvantage. Additionally, with limited explanatory written materials, students who miss one or more days of school will struggle to catch up on missed materials. Small-group learning is less efficient than teacher-led direct instructional methods, and the most able and interested students might be held back by having to collaborate with less able and motivated students. The CMP program does not take into account the needs of students with minor learning disabilities or other disabilities who might be integrated into general education classrooms but still need extra help and need associated or modified learning materials. The publishers and creators of CMP have stated that reassuring results from a variety of research projects blunted concerns about basic skill mastery, missing knowledge, and student misconceptions resulting from use of CMP and other reform curricula. However, many teachers and parents remain wary. == References == == External links == Connected Mathematics Project http://connectedmath.msu.edu/ Pearson http://www.connectedmathematics3.com Common Core State Standards http://www.corestandards.org/Math
|
https://en.wikipedia.org/wiki/Connected_Mathematics
|
In mathematics, the sign of a real number is its property of being either positive, negative, or 0. Depending on local conventions, zero may be considered as having its own unique sign, having no sign, or having both positive and negative sign. In some contexts, it makes sense to distinguish between a positive and a negative zero. In mathematics and physics, the phrase "change of sign" is associated with exchanging an object for its additive inverse (multiplication with −1, negation), an operation which is not restricted to real numbers. It applies among other objects to vectors, matrices, and complex numbers, which are not prescribed to be only either positive, negative, or zero. The word "sign" is also often used to indicate binary aspects of mathematical or scientific objects, such as odd and even (sign of a permutation), sense of orientation or rotation (cw/ccw), one sided limits, and other concepts described in § Other meanings below. == Sign of a number == Numbers from various number systems, like integers, rationals, complex numbers, quaternions, octonions, ... may have multiple attributes, that fix certain properties of a number. A number system that bears the structure of an ordered ring contains a unique number that when added with any number leaves the latter unchanged. This unique number is known as the system's additive identity element. For example, the integers has the structure of an ordered ring. This number is generally denoted as 0. Because of the total order in this ring, there are numbers greater than zero, called the positive numbers. Another property required for a ring to be ordered is that, for each positive number, there exists a unique corresponding number less than 0 whose sum with the original positive number is 0. These numbers less than 0 are called the negative numbers. The numbers in each such pair are their respective additive inverses. This attribute of a number, being exclusively either zero (0), positive (+), or negative (−), is called its sign, and is often encoded to the real numbers 0, 1, and −1, respectively (similar to the way the sign function is defined). Since rational and real numbers are also ordered rings (in fact ordered fields), the sign attribute also applies to these number systems. When a minus sign is used in between two numbers, it represents the binary operation of subtraction. When a minus sign is written before a single number, it represents the unary operation of yielding the additive inverse (sometimes called negation) of the operand. Abstractly then, the difference of two number is the sum of the minuend with the additive inverse of the subtrahend. While 0 is its own additive inverse (−0 = 0), the additive inverse of a positive number is negative, and the additive inverse of a negative number is positive. A double application of this operation is written as −(−3) = 3. The plus sign is predominantly used in algebra to denote the binary operation of addition, and only rarely to emphasize the positivity of an expression. In common numeral notation (used in arithmetic and elsewhere), the sign of a number is often made explicit by placing a plus or a minus sign before the number. For example, +3 denotes "positive three", and −3 denotes "negative three" (algebraically: the additive inverse of 3). Without specific context (or when no explicit sign is given), a number is interpreted per default as positive. 
This notation establishes a strong association of the minus sign "−" with negative numbers, and the plus sign "+" with positive numbers. === Sign of zero === Within the convention of zero being neither positive nor negative, a specific sign-value 0 may be assigned to the number value 0. This is exploited in the sgn {\displaystyle \operatorname {sgn} } -function, as defined for real numbers. In arithmetic, +0 and −0 both denote the same number 0. There is generally no danger of confusing the value with its sign, although the convention of assigning both signs to 0 does not immediately allow for this discrimination. In certain European countries, e.g. in Belgium and France, 0 is considered to be both positive and negative following the convention set forth by Nicolas Bourbaki. In some contexts, such as floating-point representations of real numbers within computers, it is useful to consider signed versions of zero, with signed zeros referring to different, discrete number representations (see signed number representations for more). The symbols +0 and −0 rarely appear as substitutes for 0+ and 0−, used in calculus and mathematical analysis for one-sided limits (right-sided limit and left-sided limit, respectively). This notation refers to the behaviour of a function as its real input variable approaches 0 along positive (resp., negative) values; the two limits need not exist or agree. === Terminology for signs === When 0 is said to be neither positive nor negative, the following phrases may refer to the sign of a number: A number is positive if it is greater than zero. A number is negative if it is less than zero. A number is non-negative if it is greater than or equal to zero. A number is non-positive if it is less than or equal to zero. When 0 is said to be both positive and negative, modified phrases are used to refer to the sign of a number: A number is strictly positive if it is greater than zero. A number is strictly negative if it is less than zero. A number is positive if it is greater than or equal to zero. A number is negative if it is less than or equal to zero. For example, the absolute value of a real number is always "non-negative", but is not necessarily "positive" in the first interpretation, whereas in the second interpretation, it is called "positive"—though not necessarily "strictly positive". The same terminology is sometimes used for functions that yield real or other signed values. For example, a function would be called a positive function if its values are positive for all arguments of its domain, or a non-negative function if all of its values are non-negative. === Complex numbers === The complex numbers cannot be ordered in a way compatible with their arithmetic, so they cannot carry the structure of an ordered ring, and, accordingly, cannot be partitioned into positive and negative complex numbers. They do, however, share an attribute with the reals, which is called absolute value or magnitude. Magnitudes are always non-negative real numbers, and to any non-zero number there belongs a positive real number, its absolute value. For example, the absolute value of −3 and the absolute value of 3 are both equal to 3. This is written in symbols as |−3| = 3 and |3| = 3. In general, any arbitrary real value can be specified by its magnitude and its sign. Using the standard encoding, any real value is given by the product of its magnitude and its sign. This relation can be generalized to define a sign for complex numbers.
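As a small illustration of this decomposition (a sketch in plain Python, not part of the standard exposition; the helper name is ours), a nonzero value factors as its sign times its magnitude, for real and complex numbers alike:

```python
import cmath

def sign(z):
    """Return z/|z| for nonzero z (real or complex), and 0 for z == 0."""
    return 0 if z == 0 else z / abs(z)

# Real case: the sign is -1.0, 0, or 1.0, and value = sign * magnitude.
x = -7.5
print(sign(x))                             # -1.0
print(sign(x) * abs(x))                    # -7.5

# Complex case: the "sign" z/|z| lies on the unit circle and carries the
# direction (argument) of z, while abs(z) carries its magnitude.
z = 3 - 4j
print(abs(z))                              # 5.0
print(sign(z))                             # (0.6-0.8j)
print(cmath.isclose(sign(z) * abs(z), z))  # True (up to rounding)
```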
Since the real and complex numbers both form a field and contain the positive reals, they also contain the reciprocals of the magnitudes of all non-zero numbers. This means that any non-zero number may be multiplied with the reciprocal of its magnitude, that is, divided by its magnitude. It is immediate that the quotient of any non-zero real number by its magnitude yields exactly its sign. By analogy, the sign of a complex number z can be defined as the quotient of z and its magnitude |z|. The sign of a complex number is the exponential of the product of its argument with the imaginary unit, and it represents in some sense its complex argument. This is to be compared to the sign of real numbers, except with e i π = − 1. {\displaystyle e^{i\pi }=-1.} For the definition of a complex sign function, see § Complex sign function below. === Sign functions === When dealing with numbers, it is often convenient to have their sign available as a number. This is accomplished by functions that extract the sign of any number, and map it to a predefined value before making it available for further calculations. For example, it might be advantageous to formulate an intricate algorithm for positive values only, and take care of the sign only afterwards. ==== Real sign function ==== The sign function or signum function extracts the sign of a real number, by mapping the set of real numbers to the set of the three reals { − 1 , 0 , 1 } . {\displaystyle \{-1,\;0,\;1\}.} It can be defined as follows: sgn : R → { − 1 , 0 , 1 } x ↦ sgn ( x ) = { − 1 if x < 0 , 0 if x = 0 , 1 if x > 0. {\displaystyle {\begin{aligned}\operatorname {sgn} :{}&\mathbb {R} \to \{-1,0,1\}\\&x\mapsto \operatorname {sgn}(x)={\begin{cases}-1&{\text{if }}x<0,\\~~\,0&{\text{if }}x=0,\\~~\,1&{\text{if }}x>0.\end{cases}}\end{aligned}}} Thus sgn(x) is 1 when x is positive, and sgn(x) is −1 when x is negative. For non-zero values of x, this function can also be defined by the formula sgn ( x ) = x | x | = | x | x , {\displaystyle \operatorname {sgn}(x)={\frac {x}{|x|}}={\frac {|x|}{x}},} where |x| is the absolute value of x. ==== Complex sign function ==== While a real number has a 1-dimensional direction, a complex number has a 2-dimensional direction. The complex sign function requires the magnitude of its argument z = x + iy, which can be calculated as | z | = z z ¯ = x 2 + y 2 . {\displaystyle |z|={\sqrt {z{\bar {z}}}}={\sqrt {x^{2}+y^{2}}}.} Analogous to above, the complex sign function extracts the complex sign of a complex number by mapping the set of non-zero complex numbers to the set of unimodular complex numbers, and 0 to 0: { z ∈ C : | z | = 1 } ∪ { 0 } . {\displaystyle \{z\in \mathbb {C} :|z|=1\}\cup \{0\}.} It may be defined as follows: Let z be also expressed by its magnitude and one of its arguments φ as z = |z|⋅e^(iφ), then sgn ( z ) = { 0 for z = 0 z | z | = e i φ otherwise . {\displaystyle \operatorname {sgn}(z)={\begin{cases}0&{\text{for }}z=0\\{\dfrac {z}{|z|}}=e^{i\varphi }&{\text{otherwise}}.\end{cases}}} This definition may also be recognized as a normalized vector, that is, a vector whose direction is unchanged, and whose length is fixed to unity. If the original value was written in polar form as (R, θ), then sign(R, θ) is (1, θ). Extension of sign() or signum() to any number of dimensions is obvious, but this has already been defined as normalizing a vector. == Signs per convention == In situations where there are exactly two possibilities on equal footing for an attribute, these are often labelled by convention as plus and minus, respectively.
In some contexts, the choice of this assignment (i.e., which range of values is considered positive and which negative) is natural, whereas in other contexts, the choice is arbitrary, making an explicit sign convention necessary, the only requirement being consistent use of the convention. === Sign of an angle === In many contexts, it is common to associate a sign with the measure of an angle, particularly an oriented angle or an angle of rotation. In such a situation, the sign indicates whether the angle is in the clockwise or counterclockwise direction. Though different conventions can be used, it is common in mathematics to have counterclockwise angles count as positive, and clockwise angles count as negative. It is also possible to associate a sign to an angle of rotation in three dimensions, assuming that the axis of rotation has been oriented. Specifically, a right-handed rotation around an oriented axis typically counts as positive, while a left-handed rotation counts as negative. An angle which is the negative of a given angle has an equal arc, but the opposite axis. === Sign of a change === When a quantity x changes over time, the change in the value of x is typically defined by the equation Δ x = x final − x initial . {\displaystyle \Delta x=x_{\text{final}}-x_{\text{initial}}.} Using this convention, an increase in x counts as positive change, while a decrease of x counts as negative change. In calculus, this same convention is used in the definition of the derivative. As a result, any increasing function has positive derivative, while any decreasing function has negative derivative. === Sign of a direction === When studying one-dimensional displacements and motions in analytic geometry and physics, it is common to label the two possible directions as positive and negative. Because the number line is usually drawn with positive numbers to the right, and negative numbers to the left, a common convention is for motions to the right to be given a positive sign, and for motions to the left to be given a negative sign. On the Cartesian plane, the rightward and upward directions are usually thought of as positive, with rightward being the positive x-direction, and upward being the positive y-direction. If a displacement vector is separated into its vector components, then the horizontal part will be positive for motion to the right and negative for motion to the left, while the vertical part will be positive for motion upward and negative for motion downward. Likewise, a negative speed (rate of change of displacement) implies a velocity in the opposite direction, i.e., receding instead of advancing; a special case is the radial speed. In 3D space, notions related to sign can be found in the two normal orientations and orientability in general. === Signedness in computing === In computing, an integer value may be either signed or unsigned, depending on whether the computer is keeping track of a sign for the number. By restricting an integer variable to non-negative values only, one more bit can be used for storing the value of a number. Because of the way integer arithmetic is done within computers, signed number representations usually do not store the sign as a single independent bit, instead using e.g. two's complement. In contrast, real numbers are stored and manipulated as floating point values. The floating point values are represented using three separate values, mantissa, exponent, and sign. 
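As an illustration of this three-part layout, the following sketch (standard-library Python; the helper name is ours) reads off the sign bit of an IEEE 754 double directly from its bit pattern:

```python
import math
import struct

def sign_bit(x: float) -> int:
    """Return the sign bit of an IEEE 754 double (1 means negative)."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    return bits >> 63

print(sign_bit(2.5), sign_bit(-2.5))    # 0 1
print(sign_bit(0.0), sign_bit(-0.0))    # 0 1  -- two distinct zeros

# The two zeros compare as equal...
print(0.0 == -0.0)                      # True
# ...but the distinction can still be detected, e.g. via copysign.
print(math.copysign(1.0, -0.0))         # -1.0
```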
Given this separate sign bit, it is possible to represent both positive and negative zero. Most programming languages normally treat positive zero and negative zero as equivalent values, albeit, they provide means by which the distinction can be detected. === Other meanings === In addition to the sign of a real number, the word sign is also used in various related ways throughout mathematics and other sciences: Words up to sign mean that, for a quantity q, it is known that either q = Q or q = −Q for certain Q. It is often expressed as q = ±Q. For real numbers, it means that only the absolute value |q| of the quantity is known. For complex numbers and vectors, a quantity known up to sign is a stronger condition than a quantity with known magnitude: aside Q and −Q, there are many other possible values of q such that |q| = |Q|. The sign of a permutation is defined to be positive if the permutation is even, and negative if the permutation is odd. In graph theory, a signed graph is a graph in which each edge has been marked with a positive or negative sign. In mathematical analysis, a signed measure is a generalization of the concept of measure in which the measure of a set may have positive or negative values. The concept of signed distance is used to convey side, inside or out. The ideas of signed area and signed volume are sometimes used when it is convenient for certain areas or volumes to count as negative. This is particularly true in the theory of determinants. In an (abstract) oriented vector space, each ordered basis for the vector space can be classified as either positively or negatively oriented. In a signed-digit representation, each digit of a number may have a positive or negative sign. In physics, any electric charge comes with a sign, either positive or negative. By convention, a positive charge is a charge with the same sign as that of a proton, and a negative charge is a charge with the same sign as that of an electron. == See also == Percent sign Plus–minus sign Positive element Signedness Symmetry in mathematics == References ==
|
https://en.wikipedia.org/wiki/Sign_(mathematics)
|
Mathematics emerged independently in China by the 11th century BCE. The Chinese independently developed a real number system that includes significantly large and negative numbers, more than one numeral system (binary and decimal), algebra, geometry, number theory and trigonometry. Since the Han dynasty, with Diophantine approximation as a prominent numerical method, the Chinese made substantial progress on polynomial evaluation. Algorithms like regula falsi and expressions like simple continued fractions have been widely used and well-documented ever since. They deliberately found the principal nth root of positive numbers and the roots of equations. The major texts from the period, The Nine Chapters on the Mathematical Art and the Book on Numbers and Computation, gave detailed processes for solving various mathematical problems in daily life. All procedures were computed using a counting board in both texts, and they included inverse elements as well as Euclidean divisions. The texts provide procedures similar to those of Gaussian elimination and Horner's method for linear algebra. The achievement of Chinese algebra reached a zenith in the 13th century during the Yuan dynasty with the development of tian yuan shu. As a result of obvious linguistic and geographic barriers, as well as content, Chinese mathematics and the mathematics of the ancient Mediterranean world are presumed to have developed more or less independently up to the time when The Nine Chapters on the Mathematical Art reached its final form, while the Book on Numbers and Computation and Huainanzi are roughly contemporary with classical Greek mathematics. Some exchange of ideas across Asia through known cultural exchanges from at least Roman times is likely. Frequently, elements of the mathematics of early societies correspond to rudimentary results found later in branches of modern mathematics such as geometry or number theory. The Pythagorean theorem, for example, has been attested to as early as the time of the Duke of Zhou. Knowledge of Pascal's triangle has also been shown to have existed in China centuries before Pascal, for example in the work of the Song-era polymath Shen Kuo. == Pre-imperial era == The earliest surviving evidence of Chinese mathematics dates to the Shang dynasty (1600–1050 BC). One of the oldest surviving mathematical works is the I Ching, which greatly influenced written literature during the Zhou dynasty (1050–256 BC). For mathematics, the book included a sophisticated use of hexagrams. As Leibniz pointed out, the I Ching (Yi Jing) contained elements of binary numbers. Since the Shang period, the Chinese had already fully developed a decimal system. Since early times, the Chinese understood basic arithmetic (which dominated far eastern history), algebra, equations, and negative numbers with counting rods. Although the Chinese were more focused on arithmetic and advanced algebra for astronomical uses, they were also the first to develop negative numbers, algebraic geometry, and the usage of decimals. Mathematics was one of the Six Arts that students were required to master during the Zhou dynasty (1122–256 BCE). Learning them all perfectly was required to be a perfect gentleman, comparable to the concept of a "renaissance man". The Six Arts have their roots in Confucian philosophy. The oldest extant work on geometry in China comes from the philosophical Mohist canon c. 330 BCE, compiled by the followers of Mozi (470–390 BCE). The Mo Jing described various aspects of many fields associated with physical science, and provided a small wealth of information on mathematics as well.
It provided an 'atomic' definition of the geometric point, stating that a line is separated into parts, and the part which has no remaining parts (i.e. cannot be divided into smaller parts) and thus forms the extreme end of a line is a point. Much like Euclid's first and third definitions and Plato's 'beginning of a line', the Mo Jing stated that "a point may stand at the end (of a line) or at its beginning like a head-presentation in childbirth. (As to its invisibility) there is nothing similar to it." Similar to the atomists of Democritus, the Mo Jing stated that a point is the smallest unit, and cannot be cut in half, since 'nothing' cannot be halved. It stated that two lines of equal length will always finish at the same place, while providing definitions for the comparison of lengths and for parallels, along with principles of space and bounded space. It also described the fact that planes without the quality of thickness cannot be piled up since they cannot mutually touch. The book provided word recognition for circumference, diameter, and radius, along with the definition of volume. Evidence for the early history of Chinese mathematical development is incomplete, and there are still debates about certain mathematical classics. For example, the Zhoubi Suanjing is traditionally dated to around 1200–1000 BC, yet many scholars believe it was written between 300 and 250 BCE. The Zhoubi Suanjing contains an in-depth proof of the Gougu Theorem (a special case of the Pythagorean theorem), but focuses more on astronomical calculations. However, the recent archaeological discovery of the Tsinghua Bamboo Slips, dated c. 305 BCE, has revealed some aspects of pre-Qin mathematics, such as the first known decimal multiplication table. The abacus was first mentioned in the second century BC, alongside 'calculation with rods' (suan zi) in which small bamboo sticks are placed in successive squares of a checkerboard. == Qin dynasty == Not much is known about Qin dynasty mathematics, or before, due to the burning of books and burying of scholars, circa 213–210 BC. Knowledge of this period can be determined from civil projects and historical evidence. The Qin dynasty created a standard system of weights. Civil projects of the Qin dynasty were significant feats of human engineering. Emperor Qin Shi Huang ordered many men to build large, life-sized statues for the palace tomb along with other temples and shrines, and the shape of the tomb was designed with geometric skills of architecture. It is certain that one of the greatest feats of human history, the Great Wall of China, required many mathematical techniques. All Qin dynasty buildings and grand projects used advanced computation formulas for volume, area and proportion. Qin bamboo slips purchased at the antiquarian market of Hong Kong by the Yuelu Academy, according to the preliminary reports, contain the earliest epigraphic sample of a mathematical treatise. == Han dynasty == In the Han dynasty, numbers were developed into a place value decimal system and used on a counting board with a set of counting rods called rod calculus, consisting of only nine symbols with a blank space on the counting board representing zero. Negative numbers and fractions were also incorporated into solutions of the great mathematical texts of the period. The mathematical texts of the time, the Book on Numbers and Computation and the Jiuzhang suanshu, solved basic arithmetic problems such as addition, subtraction, multiplication and division.
Furthermore, they gave the processes for square and cube root extraction, which were eventually applied to solving equations up to the third order. Both texts also made substantial progress in linear algebra, namely solving systems of equations with multiple unknowns. The value of pi is taken to be equal to three in both texts. However, the mathematicians Liu Xin (d. 23) and Zhang Heng (78–139) gave more accurate approximations for pi than the Chinese of previous centuries had used. Mathematics was developed to solve practical problems of the time, such as the division of land or problems related to the division of payment. The Chinese did not focus on theoretical proofs based on geometry or algebra in the modern sense of proving equations to find area or volume. The Book of Computations and The Nine Chapters on the Mathematical Art provide numerous practical examples that would be used in daily life. === Book on Numbers and Computation === The Book on Numbers and Computation is approximately seven thousand characters in length, written on 190 bamboo strips. It was discovered together with other writings in 1984 when archaeologists opened a tomb at Zhangjiashan in Hubei province. From documentary evidence this tomb is known to have been closed in 186 BC, early in the Western Han dynasty. While its relationship to the Nine Chapters is still under discussion by scholars, some of its contents are clearly paralleled there. The text of the Suan shu shu is however much less systematic than the Nine Chapters, and appears to consist of a number of more or less independent short sections of text drawn from a number of sources. The Book of Computations contains many prerequisites to problems that would be expanded upon in The Nine Chapters on the Mathematical Art. As an example of the elementary mathematics in the Suàn shù shū, the square root is approximated using the false position method, which says to "combine the excess and deficiency as the divisor; (taking) the deficiency numerator multiplied by the excess denominator and the excess numerator times the deficiency denominator, combine them as the dividend." Furthermore, The Book of Computations solves systems of two equations and two unknowns using the same false position method (a modern sketch of this rule appears below). === The Nine Chapters on the Mathematical Art === The Nine Chapters on the Mathematical Art dates archeologically to 179 CE, though it is traditionally dated to 1000 BCE; it was probably written as early as 300–200 BCE. Although the author(s) are unknown, they made a major contribution in the Eastern world. Problems are set up with questions immediately followed by answers and procedures. There are no formal mathematical proofs within the text, just a step-by-step procedure. The commentary of Liu Hui provided geometrical and algebraic proofs to the problems given within the text. The Nine Chapters on the Mathematical Art was one of the most influential of all Chinese mathematical books and it is composed of 246 problems. It was later incorporated into The Ten Computational Canons, which became the core of mathematical education in later centuries. This book includes 246 problems on surveying, agriculture, partnerships, engineering, taxation, calculation, the solution of equations, and the properties of right triangles. The Nine Chapters made significant additions to solving quadratic equations in a way similar to Horner's method. It also made advanced contributions to fangcheng, or what is now known as linear algebra.
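A modern reading of the excess-and-deficit rule quoted above can be sketched as follows (a reconstruction in present-day notation, not the original rod-calculus layout; the sample problem is an arbitrary illustration):

```python
def excess_deficit(f, x1, x2):
    """Ying buzu ("excess and deficit"): given a trial value x1 that
    overshoots (excess e1 = f(x1) > 0) and a trial value x2 that falls
    short (deficit d2 = -f(x2) > 0), combine them as the rule says:
    dividend = x1*d2 + x2*e1, divisor = e1 + d2."""
    e1, d2 = f(x1), -f(x2)
    return (x1 * d2 + x2 * e1) / (e1 + d2)

# A linear "goods and payments" style problem: find x with 5*x - 12 = 0.
f = lambda x: 5 * x - 12
print(excess_deficit(f, 3, 2))   # 2.4 (exact, because the problem is linear)
```

For a linear problem the rule returns the exact answer, which is consistent with its use on the practical problems of goods and payments described in these texts.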
Chapter seven solves systems of linear equations with two unknowns using the false position method, similar to The Book of Computations. Chapter eight deals with solving determinate and indeterminate simultaneous linear equations using positive and negative numbers, with one problem dealing with solving four equations in five unknowns. The Nine Chapters solves systems of equations using methods similar to the modern Gaussian elimination and back substitution. The version of The Nine Chapters that has served as the foundation for modern renditions was a result of the efforts of the scholar Dai Zhen. Transcribing the problems directly from the Yongle Encyclopedia, he then proceeded to make revisions to the original text, along with the inclusion of his own notes explaining his reasoning behind the alterations. His finished work would be first published in 1774, but a new revision would be published in 1776 to correct various errors as well as include a version of The Nine Chapters from the Southern Song that contained the commentaries of Liu Hui and Li Chunfeng. The final version of Dai Zhen's work would come in 1777, titled Ripple Pavilion, with this final rendition being widely distributed and coming to serve as the standard for modern versions of The Nine Chapters. However, this version has come under scrutiny from Guo Shuchen, alleging that the edited version still contains numerous errors and that not all of the original amendments were done by Dai Zhen himself. === Calculation of pi === Problems in The Nine Chapters on the Mathematical Art take pi to be equal to three in calculating problems related to circles and spheres, such as spherical surface area. There is no explicit formula given within the text for the calculation of pi to be three, but it is used throughout the problems of both The Nine Chapters on the Mathematical Art and the Artificer's Record, which was produced in the same time period. Historians believe that this figure of pi was calculated using the 3:1 relationship between the circumference and diameter of a circle. Some Han mathematicians attempted to improve this number, such as Liu Xin, who is believed to have estimated pi to be 3.154. Later, Liu Hui attempted to improve the calculation by calculating pi to be 3.141024. Liu calculated this number by inscribing polygons in a circle, starting from a hexagon and repeatedly doubling the number of sides, and using the inscribed polygons as lower bounds for the circle. Zu Chongzhi later refined the calculation of pi to 3.1415926 < π < 3.1415927 by using polygons with 24,576 sides. This level of accuracy would not be achieved in Europe until the 16th century. There is no explicit method or record of how he calculated this estimate. === Division and root extraction === Basic arithmetic processes such as addition, subtraction, multiplication and division were present before the Han dynasty. The Nine Chapters on the Mathematical Art takes these basic operations for granted and simply instructs the reader to perform them. Han mathematicians calculated square and cube roots in a manner similar to division, and problems on division and root extraction both occur in Chapter Four of The Nine Chapters on the Mathematical Art. Calculating the square and cube roots of numbers is done through successive approximation, the same as division, and often uses similar terms such as dividend (shi) and divisor (fa) throughout the process. This process of successive approximation was then extended to solving polynomial equations of the second and third order, such as x 2 + a = b {\displaystyle x^{2}+a=b} , using a method similar to Horner's method (a brief sketch of this nested evaluation scheme appears below).
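The nested evaluation that the comparison to Horner's method refers to can be illustrated with a short sketch (a modern rendering; the polynomial is an arbitrary example, not one from the Han texts):

```python
def horner(coeffs, x):
    """Evaluate a polynomial, given its coefficients in order of
    decreasing degree, using Horner's nested form:
    a_n*x^n + ... + a_0 == (...((a_n*x + a_{n-1})*x + ...)*x + a_0."""
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

# Evaluate p(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3 using only
# one multiplication and one addition per coefficient.
print(horner([2, -6, 2, -1], 3))   # 5
```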
The method was not extended to solve polynomials of the nth order during the Han dynasty; however, it was eventually used to solve such equations. === Linear algebra === The Book of Computations is the first known text to solve systems of equations with two unknowns. There are a total of three sets of problems within The Book of Computations involving solving systems of equations with the false position method, which again are put into practical terms. Chapter Seven of The Nine Chapters on the Mathematical Art also deals with solving a system of two equations with two unknowns with the false position method. To solve for the greater of the two unknowns, the false position method instructs the reader to cross-multiply the minor terms or zi (which are the values given for the excess and deficit) with the major terms mu. To solve for the lesser of the two unknowns, simply add the minor terms together. Chapter Eight of The Nine Chapters on the Mathematical Art deals with solving systems of simultaneous linear equations in several unknowns. This process is referred to as the "fangcheng procedure" throughout the chapter. Many historians chose to leave the term fangcheng untranslated due to conflicting evidence of what the term means. Today, many historians translate the word as linear algebra. In this chapter, the processes of Gaussian elimination and back-substitution are used to solve systems of equations with many unknowns. Problems were done on a counting board and included the use of negative numbers as well as fractions. The counting board was effectively a matrix, where the top line is the first variable of one equation and the bottom is the last. === Liu Hui's commentary on The Nine Chapters on the Mathematical Art === Liu Hui's commentary on The Nine Chapters on the Mathematical Art is the earliest edition of the original text available. Liu Hui is believed by most to have been a mathematician active shortly after the Han dynasty. Within his commentary, he qualified and proved some of the problems from either an algebraic or geometrical standpoint. For instance, throughout The Nine Chapters on the Mathematical Art, the value of pi is taken to be equal to three in problems regarding circles or spheres. In his commentary, Liu Hui finds a more accurate estimation of pi using the method of exhaustion. The method involves creating successive polygons within a circle so that eventually the area of a higher-order polygon will be identical to that of the circle. From this method, Liu Hui asserted that the value of pi is about 3.14. Liu Hui also presented a geometric proof of square and cube root extraction similar to the Greek method, which involved cutting a square or cube in any line or section and determining the square root through symmetry of the remaining rectangles. == Three Kingdoms, Jin, and Sixteen Kingdoms == In the third century Liu Hui wrote his commentary on the Nine Chapters and also wrote Haidao Suanjing, which dealt with using the Pythagorean theorem (already known from the Nine Chapters) and triple and quadruple triangulation for surveying; his accomplishments in mathematical surveying exceeded those in the West by a millennium. He was the first Chinese mathematician to calculate π=3.1416 with his π algorithm. He discovered the usage of Cavalieri's principle to find an accurate formula for the volume of a cylinder, and also developed elements of the infinitesimal calculus during the 3rd century CE. In the fifth century, another influential mathematician, Zu Chongzhi, introduced the Da Ming Li.
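Liu Hui's method of exhaustion described above, which Zu Chongzhi later pushed much further, can be sketched with modern floating-point arithmetic (the original computations were carried out by hand on a counting board; the code below works with perimeters rather than Liu Hui's areas, but follows the same polygon-doubling idea):

```python
import math

def inscribed_pi(doublings):
    """Lower bound on pi from the perimeter of a regular polygon
    inscribed in a unit circle, starting from a hexagon (side = 1)
    and doubling the number of sides the given number of times."""
    n, s = 6, 1.0                                 # hexagon: 6 sides of length 1
    for _ in range(doublings):
        s = math.sqrt(2 - math.sqrt(4 - s * s))   # side length of the 2n-gon
        n *= 2
    return n * s / 2                              # half the perimeter

print(inscribed_pi(5))    # 192-gon:   about 3.14145
print(inscribed_pi(11))   # 12288-gon: 3.1415926...
```

Doubling the sides eleven times from a hexagon reaches the 12,288-gon mentioned below and already yields the lower bound 3.1415926 that Zu Chongzhi reported.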
The Da Ming Li was specifically calculated to predict many cosmological cycles that occur over a period of time. Very little is known about Zu Chongzhi's life. The only surviving sources are found in the Book of Sui, from which we know that he came from a family of mathematicians spanning several generations. He used Liu Hui's pi-algorithm applied to a 12288-gon and obtained a value of pi to 7 accurate decimal places (between 3.1415926 and 3.1415927), which would remain the most accurate approximation of π available for the next 900 years. He also applied He Chengtian's interpolation method for approximating irrational numbers with fractions in his astronomical and mathematical works, obtaining 355 113 {\displaystyle {\tfrac {355}{113}}} as a good fractional approximation for pi; Yoshio Mikami commented that neither the Greeks, nor the Hindus, nor the Arabs knew of this fractional approximation to pi until the Dutch mathematician Adriaan Anthonisz rediscovered it in 1585: "the Chinese had therefore been possessed of this the most extraordinary of all fractional values over a whole millennium earlier than Europe". Along with his son, Zu Geng, Zu Chongzhi applied Cavalieri's principle to find an accurate solution for calculating the volume of the sphere. Besides containing formulas for the volume of the sphere, his book also included formulas of cubic equations and the accurate value of pi. His work, Zhui Shu, was dropped from the mathematics syllabus during the Song dynasty and was later lost. Many believe that Zhui Shu contained formulas and methods for linear and matrix algebra, an algorithm for calculating the value of π, and a formula for the volume of the sphere. The text is also thought to be associated with his astronomical methods of interpolation, which would contain knowledge similar to that of modern mathematics. A mathematical manual called the Sunzi mathematical classic, dated between 200 and 400 CE, contained the most detailed step-by-step description of multiplication and division algorithms with counting rods. Intriguingly, Sunzi may have influenced the development of place-value systems and the associated Galley division in the West. European sources learned place-value techniques in the 13th century, from a Latin translation of an early-9th-century work by Al-Khwarizmi. Khwarizmi's presentation is almost identical to the division algorithm in Sunzi, even regarding stylistic matters (for example, using blank spaces to represent trailing zeros); the similarity suggests that the results may not have been an independent discovery. Islamic commentators on Al-Khwarizmi's work believed that it primarily summarized Hindu knowledge; Al-Khwarizmi's failure to cite his sources makes it difficult to determine whether those sources had in turn learned the procedure from China. In the fifth century the manual called "Zhang Qiujian suanjing" discussed linear and quadratic equations. By this point the Chinese had the concept of negative numbers. == Tang dynasty == By the Tang dynasty, the study of mathematics was fairly standard in the great schools. The Ten Computational Canons was a collection of ten Chinese mathematical works, compiled by the early Tang dynasty mathematician Li Chunfeng (李淳風 602–670), as the official mathematical texts for imperial examinations in mathematics. The Sui dynasty and Tang dynasty ran the "School of Computations".
Wang Xiaotong was a great mathematician at the beginning of the Tang dynasty, and he wrote a book, the Jigu Suanjing (Continuation of Ancient Mathematics), in which numerical solutions of general cubic equations appear for the first time. The Tibetans obtained their first knowledge of mathematics (arithmetic) from China during the reign of Nam-ri srong btsan, who died in 630. The table of sines by the Indian mathematician Aryabhata was translated into the Chinese mathematical book the Kaiyuan Zhanjing, compiled in 718 AD during the Tang dynasty. Although the Chinese excelled in other fields of mathematics such as solid geometry, the binomial theorem, and complex algebraic formulas, early forms of trigonometry were not as widely appreciated as in contemporary Indian and Islamic mathematics. Yi Xing, the mathematician and Buddhist monk, was credited with calculating the tangent table. Instead, the early Chinese used an empirical substitute known as chong cha, though practical use of plane trigonometry involving the sine, the tangent, and the secant was known. Yi Xing was famed for his genius, and was known to have calculated the number of possible positions on a go board game (though without a symbol for zero he had difficulties expressing the number). == Song and Yuan dynasties == The Northern Song dynasty mathematician Jia Xian developed an additive-multiplicative method for the extraction of square and cube roots which implemented the "Horner" rule. Four outstanding mathematicians arose during the Song dynasty and Yuan dynasty, particularly in the twelfth and thirteenth centuries: Yang Hui, Qin Jiushao, Li Zhi (Li Ye), and Zhu Shijie. Yang Hui, Qin Jiushao, and Zhu Shijie all used the Horner-Ruffini method six hundred years before it appeared in Europe to solve certain types of simultaneous equations, roots, and quadratic, cubic, and quartic equations. Yang Hui was also the first person in history to discover and prove "Pascal's Triangle", along with its binomial proof (although the earliest mention of Pascal's triangle in China dates from before the eleventh century AD). Li Zhi, on the other hand, investigated a form of algebraic geometry based on tiān yuán shù. His book, Ceyuan haijing, revolutionized the idea of inscribing a circle in a triangle by recasting this geometric problem in algebraic terms instead of using the traditional method of the Pythagorean theorem. Guo Shoujing of this era also worked on spherical trigonometry for precise astronomical calculations. At this point in mathematical history, much of what would later appear in modern Western mathematics had already been discovered by Chinese mathematicians. Things grew quiet for a time until the thirteenth-century renaissance of Chinese mathematics. This saw Chinese mathematicians solving equations with methods Europe would not know until the eighteenth century. The high point of this era came with Zhu Shijie's two books Suanxue qimeng and the Jade Mirror of the Four Unknowns. In one case he reportedly gave a method equivalent to Gauss's pivotal condensation. Qin Jiushao (c. 1202 – 1261) was the first to introduce the zero symbol into Chinese mathematics. Before this innovation, blank spaces were used instead of zeros in the system of counting rods. One of the most important contributions of Qin Jiushao was his method of solving high order numerical equations. Referring to Qin's solution of a 4th order equation, Yoshio Mikami put it: "Who can deny the fact of Horner's illustrious process being used in China at least nearly six long centuries earlier than in Europe?" Qin also solved a 10th order equation.
Pascal's triangle was first illustrated in China by Yang Hui in his book Xiangjie Jiuzhang Suanfa (詳解九章算法), although it was described earlier around 1100 by Jia Xian. Although the Introduction to Computational Studies (算學啓蒙) written by Zhu Shijie (fl. 13th century) in 1299 contained nothing new in Chinese algebra, it had a great impact on the development of Japanese mathematics. === Algebra === ==== Ceyuan haijing ==== Ceyuan haijing (Chinese: 測圓海鏡; pinyin: Cèyuán Hǎijìng), or Sea-Mirror of the Circle Measurements, is a collection of 692 formulas and 170 problems related to the circle inscribed in a triangle, written by Li Zhi (or Li Ye) (1192–1272 AD). He used Tian yuan shu to convert intricate geometry problems into pure algebra problems. He then used fan fa, or Horner's method, to solve equations of degree as high as six, although he did not describe his method of solving equations. "Li Chih (or Li Yeh, 1192–1279), a mathematician of Peking who was offered a government post by Khublai Khan in 1260, but politely found an excuse to decline it. His Ts'e-yuan hai-ching (Sea-Mirror of the Circle Measurements) includes 170 problems dealing with[...]some of the problems leading to polynomial equations of sixth degree. Although he did not describe his method of solution of equations, it appears that it was not very different from that used by Chu Shih-chieh and Horner. Others who used the Horner method were Ch'in Chiu-shao (ca. 1202 – ca.1261) and Yang Hui (fl. ca. 1261–1275). ==== Jade Mirror of the Four Unknowns ==== The Jade Mirror of the Four Unknowns was written by Zhu Shijie in 1303 AD and marks the peak in the development of Chinese algebra. The four elements, called heaven, earth, man and matter, represented the four unknown quantities in his algebraic equations. It deals with simultaneous equations and with equations of degrees as high as fourteen. The author uses the method of fan fa, today called Horner's method, to solve these equations. There are many summation series equations given without proof in the Mirror. A few of the summation series are (a numerical check of these identities appears at the end of this section): 1 2 + 2 2 + 3 2 + ⋯ + n 2 = n ( n + 1 ) ( 2 n + 1 ) 3 ! {\displaystyle 1^{2}+2^{2}+3^{2}+\cdots +n^{2}={n(n+1)(2n+1) \over 3!}} 1 + 8 + 30 + 80 + ⋯ + n 2 ( n + 1 ) ( n + 2 ) 3 ! = n ( n + 1 ) ( n + 2 ) ( n + 3 ) ( 4 n + 1 ) 5 ! {\displaystyle 1+8+30+80+\cdots +{n^{2}(n+1)(n+2) \over 3!}={n(n+1)(n+2)(n+3)(4n+1) \over 5!}} ==== Mathematical Treatise in Nine Sections ==== The Mathematical Treatise in Nine Sections was written by the wealthy governor and minister Ch'in Chiu-shao (c. 1202 – c. 1261) and, with its invention of a method of solving simultaneous congruences, it marks the high point in Chinese indeterminate analysis. ==== Magic squares and magic circles ==== The earliest known magic squares of order greater than three are attributed to Yang Hui (fl. ca. 1261–1275), who worked with magic squares of order as high as ten. "The same "Horner" device was used by Yang Hui, about whose life almost nothing is known and whose work has survived only in part. Among his contributions that are extant are the earliest Chinese magic squares of order greater than three, including two each of orders four through eight and one each of orders nine and ten." He also worked with magic circles.
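The summation identities quoted above from the Jade Mirror can be checked numerically in a few lines (a quick sanity check, not a proof; the upper limit of 100 is arbitrary):

```python
from math import factorial

def identities_hold(n):
    # First identity: 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1) / 3!
    first = (sum(k * k for k in range(1, n + 1))
             == n * (n + 1) * (2 * n + 1) // factorial(3))
    # Second identity: sum of k^2 (k+1)(k+2) / 3! for k = 1..n
    #                  = n(n+1)(n+2)(n+3)(4n+1) / 5!
    second = (sum(k * k * (k + 1) * (k + 2) // factorial(3)
                  for k in range(1, n + 1))
              == n * (n + 1) * (n + 2) * (n + 3) * (4 * n + 1) // factorial(5))
    return first and second

print(all(identities_hold(n) for n in range(1, 100)))   # True
```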
=== Trigonometry === The embryonic state of trigonometry in China slowly began to change and advance during the Song dynasty (960–1279), when Chinese mathematicians began to place greater emphasis on the need for spherical trigonometry in calendar science and astronomical calculations. The polymath and official Shen Kuo (1031–1095) used trigonometric functions to solve mathematical problems of chords and arcs. Joseph W. Dauben notes that in Shen's "technique of intersecting circles" formula, he creates an approximation of the arc length s of a circular arc by s = c + 2v²/d, where d is the diameter, v is the versine, and c is the length of the chord subtending the arc. Sal Restivo writes that Shen's work on the lengths of arcs of circles provided the basis for spherical trigonometry developed in the 13th century by the mathematician and astronomer Guo Shoujing (1231–1316). Gauchet and Needham state that Guo used spherical trigonometry in his calculations to improve the Chinese calendar and astronomy. Along with a later 17th-century Chinese illustration of Guo's mathematical proofs, Needham writes: Guo used a quadrangular spherical pyramid, the basal quadrilateral of which consisted of one equatorial and one ecliptic arc, together with two meridian arcs, one of which passed through the summer solstice point...By such methods he was able to obtain the du lü (degrees of equator corresponding to degrees of ecliptic), the ji cha (values of chords for given ecliptic arcs), and the cha lü (difference between chords of arcs differing by 1 degree). Despite the achievements of Shen and Guo's work in trigonometry, another substantial work in Chinese trigonometry would not be published again until 1607, with the dual publication of Euclid's Elements by the Chinese official and astronomer Xu Guangqi (1562–1633) and the Italian Jesuit Matteo Ricci (1552–1610). == Ming dynasty == After the overthrow of the Yuan dynasty, China became suspicious of Mongol-favored knowledge. The court turned away from mathematics and physics in favor of botany and pharmacology. Imperial examinations included little mathematics, and what little they included ignored recent developments. Martzloff writes: At the end of the 16th century, Chinese autochthonous mathematics known by the Chinese themselves amounted to almost nothing, little more than calculation on the abacus, whilst in the 17th and 18th centuries nothing could be paralleled with the revolutionary progress in the theatre of European science. Moreover, at this same period, no one could report what had taken place in the more distant past, since the Chinese themselves only had a fragmentary knowledge of that. One should not forget that, in China itself, autochthonous mathematics was not rediscovered on a large scale prior to the last quarter of the 18th century. Correspondingly, scholars paid less attention to mathematics; preeminent mathematicians such as Gu Yingxiang and Tang Shunzhi appear to have been ignorant of the 'increase multiply' method. Without oral interlocutors to explicate them, the texts rapidly became incomprehensible; worse yet, most problems could be solved with more elementary methods. To the average scholar, then, tianyuan seemed numerology. When Wu Jing collated all the mathematical works of previous dynasties into The Annotations of Calculations in the Nine Chapters on the Mathematical Art, he omitted Tian yuan shu and the increase multiply method. Instead, mathematical progress became focused on computational tools. In the 15th century, the abacus came into its suan pan form.
Easy to use and carry, both fast and accurate, it rapidly overtook rod calculus as the preferred form of computation. Zhusuan, arithmetic calculation with the abacus, inspired multiple new works. Suanfa Tongzong (General Source of Computational Methods), a 17-volume work published in 1592 by Cheng Dawei, remained in use for over 300 years. Zhu Zaiyu, Prince of Zheng, used an 81-position abacus to calculate the square root and cube root of 2 to 25-figure accuracy, a precision that enabled his development of the equal-temperament system. In the late 16th century, Matteo Ricci decided to publish Western scientific works in order to establish a position at the Imperial Court. With the assistance of Xu Guangqi, he was able to translate Euclid's Elements using the same techniques used to teach classical Buddhist texts. Other missionaries followed his example, translating Western works on special functions (trigonometry and logarithms) that were neglected in the Chinese tradition. However, contemporary scholars found the emphasis on proofs, as opposed to solved problems, baffling, and most continued to work from classical texts alone. == Qing dynasty == Under the Kangxi Emperor, who learned Western mathematics from the Jesuits and was open to outside knowledge and ideas, Chinese mathematics enjoyed a brief period of official support. At Kangxi's direction, Mei Goucheng and three other outstanding mathematicians compiled a 53-volume work titled Shuli Jingyun ("The Essence of Mathematical Study"), printed in 1723, which gave a systematic introduction to western mathematical knowledge. At the same time, Mei Goucheng also compiled the Meishi Congshu Jiyang [The Compiled Works of Mei]. Meishi Congshu Jiyang was an encyclopedic summary of nearly all schools of Chinese mathematics at that time, but it also included the cross-cultural works of Mei Wending (1633–1721), Goucheng's grandfather. The enterprise sought to alleviate the difficulties for Chinese mathematicians working on Western mathematics in tracking down citations. In 1773, the Qianlong Emperor decided to compile the Complete Library of the Four Treasuries (or Siku Quanshu). Dai Zhen (1724–1777) selected and proofread The Nine Chapters on the Mathematical Art from the Yongle Encyclopedia and several other mathematical works from the Han and Tang dynasties. The long-missing mathematical works from the Song and Yuan dynasties, such as Si-yüan yü-jian and Ceyuan haijing, were also found and printed, which directly led to a wave of new research. The most extensively annotated works were Jiuzhang suanshu xicaotushuo (The Illustrations of Calculation Process for The Nine Chapters on the Mathematical Art) contributed by Li Huang and Siyuan yujian xicao (The Detailed Explanation of Si-yuan yu-jian) by Luo Shilin. == Western influences == In 1840, the First Opium War forced China to open its doors and look at the outside world, which also led to an influx of western mathematical studies at a rate unrivaled in the previous centuries. In 1852, the Chinese mathematician Li Shanlan and the British missionary Alexander Wylie co-translated the later nine volumes of Elements and 13 volumes on Algebra. With the assistance of Joseph Edkins, more works on astronomy and calculus soon followed. Chinese scholars were initially unsure how to approach the new works: was study of Western knowledge a form of submission to foreign invaders? But by the end of the century, it became clear that China could only begin to recover its sovereignty by incorporating Western works.
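As a modern echo of Zhu Zaiyu's 25-figure root extractions mentioned earlier in this section, the sketch below uses Python's decimal module to reach the equal-temperament semitone 2^(1/12) through one cube root and two square roots; it is an illustration under my own choice of precision and method, not a reconstruction of his abacus procedure.

```python
# 2**(1/12), the equal-temperament semitone ratio, to better than 25
# significant figures, using only a cube root and two square roots.
from decimal import Decimal, getcontext

getcontext().prec = 30                     # a few guard digits beyond 25

def cbrt(x):
    """Cube root of a positive Decimal via Newton's method."""
    y = Decimal(1)
    for _ in range(60):
        y = (2 * y + x / (y * y)) / 3
    return y

semitone = cbrt(Decimal(2)).sqrt().sqrt()  # 2**(1/12) = sqrt(sqrt(cbrt(2)))
print(semitone)                            # 1.059463094359295264561825...
```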
Chinese scholars, taught in Western missionary schools from (translated) Western texts, rapidly lost touch with the indigenous tradition. Those who were self-trained or in traditionalist circles nevertheless continued to work within the traditional framework of algorithmic mathematics without resorting to Western symbolism. Yet, as Martzloff notes, "from 1911 onwards, solely Western mathematics has been practised in China." === In modern China === Chinese mathematics experienced a great surge of revival following the establishment of a modern Chinese republic in 1912. Ever since then, modern Chinese mathematicians have made numerous achievements in various mathematical fields. Some famous modern ethnic Chinese mathematicians include: Shiing-Shen Chern was widely regarded as a leader in geometry and one of the greatest mathematicians of the 20th century and was awarded the Wolf Prize for his contributions to mathematics. Ky Fan made contributions to fixed point theory, in addition to influencing nonlinear functional analysis, which have found wide application in mathematical economics and game theory, potential theory, calculus of variations, and differential equations. Shing-Tung Yau, a Fields Medal laureate, has influenced both physics and mathematics, has been active at the interface between geometry and theoretical physics, and has subsequently been honored for these contributions. Terence Tao, a Fields Medal laureate and child prodigy of Chinese heritage, was the youngest participant in the history of the International Mathematical Olympiad at the age of 10, winning a bronze, silver, and gold medal. He remains the youngest winner of each of the three medals in the Olympiad's history. Yitang Zhang, a number theorist, established the first finite bound on gaps between prime numbers. Chen Jingrun, a number theorist, proved that every sufficiently large even number can be written as the sum of either two primes, or a prime and a semiprime, a result now called Chen's theorem. His work was important for research on Goldbach's conjecture. == People's Republic of China == In 1949, at the founding of the People's Republic of China, the government paid great attention to the cause of science even though the country was short of funds. The Chinese Academy of Sciences was established in November 1949. The Institute of Mathematics was formally established in July 1952. The Chinese Mathematical Society and its founding journals were then restored, and other specialized journals were added. In the 18 years after 1949, the number of papers published was more than three times the total published before 1949. Many of them not only filled gaps in China's past work, but also reached the world's advanced level. During the chaos of the Cultural Revolution, the sciences declined. In the field of mathematics, Chen Jingrun, Hua Luogeng, Zhang Guanghou and other mathematicians struggled to continue their work. After the catastrophe, with the publication of Guo Moruo's literary "Spring of Science", Chinese sciences and mathematics experienced a revival. In 1977, a new plan for the development of mathematics was formulated in Beijing: the work of the mathematical society resumed, its journals were published again, mathematics education was strengthened, and basic theoretical research was reinforced.
An important achievement by a Chinese mathematician in the field of dynamical systems is Xia Zhihong's 1988 proof of the Painlevé conjecture: for some initial states of N celestial bodies, one of the bodies escapes to infinity, or attains unbounded speed, within a finite time, that is, non-collision singularities exist. The Painlevé conjecture, proposed in 1895, is an important conjecture in the field of dynamical systems. A very important recent development for the 4-body problem is that Xue Jinxin and Dolgopyat proved the existence of a non-collision singularity in a simplified version of the 4-body system around 2013. In addition, in 2007, Shen Weixiao, Kozlovski, and van Strien proved the Real Fatou conjecture: real hyperbolic polynomials are dense in the space of real polynomials of fixed degree. This conjecture can be traced back to Fatou in the 1920s, and Smale later posed it in the 1960s. The proof of the Real Fatou conjecture is one of the most important developments in conformal dynamics in the past decade. === IMO performance === In comparison with other countries participating in the International Mathematical Olympiad, China has the highest team scores and has achieved the all-members-gold result with a full team the most times. == In education == The first reference to a book being used in learning mathematics in China is dated to the second century CE (Hou Hanshu: 24, 862; 35, 1207). Ma Xu, as a youth around 110 CE, and Zheng Xuan (127–200) both studied the Nine Chapters on Mathematical Procedures. Christopher Cullen claims that mathematics, in a manner akin to medicine, was taught orally. The stylistics of the Suàn shù shū from Zhangjiashan suggest that the text was assembled from various sources and then underwent codification. == See also == Chinese astronomy History of mathematics Indian mathematics Islamic mathematics Japanese mathematics List of Chinese discoveries List of Chinese mathematicians Numbers in Chinese culture == References == === Citations === === Works cited === == External links == Early mathematics texts (Chinese) - Chinese Text Project Overview of Chinese mathematics Chinese Mathematics Through the Han Dynasty Primer of Mathematics by Zhu Shijie
|
https://en.wikipedia.org/wiki/Chinese_mathematics
|
In mathematics, the support of a real-valued function f {\displaystyle f} is the subset of the function domain of elements that are not mapped to zero. If the domain of f {\displaystyle f} is a topological space, then the support of f {\displaystyle f} is instead defined as the smallest closed set containing all points not mapped to zero. This concept is used widely in mathematical analysis. == Formulation == Suppose that f : X → R {\displaystyle f:X\to \mathbb {R} } is a real-valued function whose domain is an arbitrary set X . {\displaystyle X.} The set-theoretic support of f , {\displaystyle f,} written supp ( f ) , {\displaystyle \operatorname {supp} (f),} is the set of points in X {\displaystyle X} where f {\displaystyle f} is non-zero: supp ( f ) = { x ∈ X : f ( x ) ≠ 0 } . {\displaystyle \operatorname {supp} (f)=\{x\in X\,:\,f(x)\neq 0\}.} The support of f {\displaystyle f} is the smallest subset of X {\displaystyle X} with the property that f {\displaystyle f} is zero on the subset's complement. If f ( x ) = 0 {\displaystyle f(x)=0} for all but a finite number of points x ∈ X , {\displaystyle x\in X,} then f {\displaystyle f} is said to have finite support. If the set X {\displaystyle X} has an additional structure (for example, a topology), then the support of f {\displaystyle f} is defined in an analogous way as the smallest subset of X {\displaystyle X} of an appropriate type such that f {\displaystyle f} vanishes in an appropriate sense on its complement. The notion of support also extends in a natural way to functions taking values in more general sets than R {\displaystyle \mathbb {R} } and to other objects, such as measures or distributions. == Closed support == The most common situation occurs when X {\displaystyle X} is a topological space (such as the real line or n {\displaystyle n} -dimensional Euclidean space) and f : X → R {\displaystyle f:X\to \mathbb {R} } is a continuous real- (or complex-) valued function. In this case, the support of f {\displaystyle f} , supp ( f ) {\displaystyle \operatorname {supp} (f)} , or the closed support of f {\displaystyle f} , is defined topologically as the closure (taken in X {\displaystyle X} ) of the subset of X {\displaystyle X} where f {\displaystyle f} is non-zero that is, supp ( f ) := cl X ( { x ∈ X : f ( x ) ≠ 0 } ) = f − 1 ( { 0 } c ) ¯ . {\displaystyle \operatorname {supp} (f):=\operatorname {cl} _{X}\left(\{x\in X\,:\,f(x)\neq 0\}\right)={\overline {f^{-1}\left(\{0\}^{\mathrm {c} }\right)}}.} Since the intersection of closed sets is closed, supp ( f ) {\displaystyle \operatorname {supp} (f)} is the intersection of all closed sets that contain the set-theoretic support of f . {\displaystyle f.} Note that if the function f : R n ⊇ X → R {\displaystyle f:\mathbb {R} ^{n}\supseteq X\to \mathbb {R} } is defined on an open subset X ⊆ R n {\displaystyle X\subseteq \mathbb {R} ^{n}} , then the closure is still taken with respect to X {\displaystyle X} and not with respect to the ambient R n {\displaystyle \mathbb {R} ^{n}} . 
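To make the set-theoretic definition above concrete, here is a tiny Python sketch; the particular function and finite domain are made up for illustration only.

```python
# Set-theoretic support of a function on a finite domain: the set of
# points where the function is non-zero.

def support(f, domain):
    return {x for x in domain if f(x) != 0}

f = lambda x: x * (x - 2)          # vanishes exactly at 0 and 2
print(support(f, range(-3, 4)))    # {-3, -2, -1, 1, 3} (set order may vary)
```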
For example, if f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is the function defined by f ( x ) = { 1 − x 2 if | x | < 1 0 if | x | ≥ 1 {\displaystyle f(x)={\begin{cases}1-x^{2}&{\text{if }}|x|<1\\0&{\text{if }}|x|\geq 1\end{cases}}} then supp ( f ) {\displaystyle \operatorname {supp} (f)} , the support of f {\displaystyle f} , or the closed support of f {\displaystyle f} , is the closed interval [ − 1 , 1 ] , {\displaystyle [-1,1],} since f {\displaystyle f} is non-zero on the open interval ( − 1 , 1 ) {\displaystyle (-1,1)} and the closure of this set is [ − 1 , 1 ] . {\displaystyle [-1,1].} The notion of closed support is usually applied to continuous functions, but the definition makes sense for arbitrary real or complex-valued functions on a topological space, and some authors do not require that f : X → R {\displaystyle f:X\to \mathbb {R} } (or f : X → C {\displaystyle f:X\to \mathbb {C} } ) be continuous. == Compact support == Functions with compact support on a topological space X {\displaystyle X} are those whose closed support is a compact subset of X . {\displaystyle X.} If X {\displaystyle X} is the real line, or n {\displaystyle n} -dimensional Euclidean space, then a function has compact support if and only if it has bounded support, since a subset of R n {\displaystyle \mathbb {R} ^{n}} is compact if and only if it is closed and bounded. For example, the function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } defined above is a continuous function with compact support [ − 1 , 1 ] . {\displaystyle [-1,1].} If f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } is a smooth function then because f {\displaystyle f} is identically 0 {\displaystyle 0} on the open subset R n ∖ supp ( f ) , {\displaystyle \mathbb {R} ^{n}\setminus \operatorname {supp} (f),} all of f {\displaystyle f} 's partial derivatives of all orders are also identically 0 {\displaystyle 0} on R n ∖ supp ( f ) . {\displaystyle \mathbb {R} ^{n}\setminus \operatorname {supp} (f).} The condition of compact support is stronger than the condition of vanishing at infinity. For example, the function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } defined by f ( x ) = 1 1 + x 2 {\displaystyle f(x)={\frac {1}{1+x^{2}}}} vanishes at infinity, since f ( x ) → 0 {\displaystyle f(x)\to 0} as | x | → ∞ , {\displaystyle |x|\to \infty ,} but its support R {\displaystyle \mathbb {R} } is not compact. Real-valued compactly supported smooth functions on a Euclidean space are called bump functions. Mollifiers are an important special case of bump functions as they can be used in distribution theory to create sequences of smooth functions approximating nonsmooth (generalized) functions, via convolution. In good cases, functions with compact support are dense in the space of functions that vanish at infinity, but this property requires some technical work to justify in a given example. As an intuition for more complex examples, and in the language of limits, for any ε > 0 , {\displaystyle \varepsilon >0,} any function f {\displaystyle f} on the real line R {\displaystyle \mathbb {R} } that vanishes at infinity can be approximated by choosing an appropriate compact subset C {\displaystyle C} of R {\displaystyle \mathbb {R} } such that | f ( x ) − I C ( x ) f ( x ) | < ε {\displaystyle \left|f(x)-I_{C}(x)f(x)\right|<\varepsilon } for all x ∈ X , {\displaystyle x\in X,} where I C {\displaystyle I_{C}} is the indicator function of C . 
{\displaystyle C.} Every continuous function on a compact topological space has compact support since every closed subset of a compact space is indeed compact. == Essential support == If X {\displaystyle X} is a topological measure space with a Borel measure μ {\displaystyle \mu } (such as R n , {\displaystyle \mathbb {R} ^{n},} or a Lebesgue measurable subset of R n , {\displaystyle \mathbb {R} ^{n},} equipped with Lebesgue measure), then one typically identifies functions that are equal μ {\displaystyle \mu } -almost everywhere. In that case, the essential support of a measurable function f : X → R {\displaystyle f:X\to \mathbb {R} } written e s s s u p p ( f ) , {\displaystyle \operatorname {ess\,supp} (f),} is defined to be the smallest closed subset F {\displaystyle F} of X {\displaystyle X} such that f = 0 {\displaystyle f=0} μ {\displaystyle \mu } -almost everywhere outside F . {\displaystyle F.} Equivalently, e s s s u p p ( f ) {\displaystyle \operatorname {ess\,supp} (f)} is the complement of the largest open set on which f = 0 {\displaystyle f=0} μ {\displaystyle \mu } -almost everywhere e s s s u p p ( f ) := X ∖ ⋃ { Ω ⊆ X : Ω is open and f = 0 μ -almost everywhere in Ω } . {\displaystyle \operatorname {ess\,supp} (f):=X\setminus \bigcup \left\{\Omega \subseteq X:\Omega {\text{ is open and }}f=0\,\mu {\text{-almost everywhere in }}\Omega \right\}.} The essential support of a function f {\displaystyle f} depends on the measure μ {\displaystyle \mu } as well as on f , {\displaystyle f,} and it may be strictly smaller than the closed support. For example, if f : [ 0 , 1 ] → R {\displaystyle f:[0,1]\to \mathbb {R} } is the Dirichlet function that is 0 {\displaystyle 0} on irrational numbers and 1 {\displaystyle 1} on rational numbers, and [ 0 , 1 ] {\displaystyle [0,1]} is equipped with Lebesgue measure, then the support of f {\displaystyle f} is the entire interval [ 0 , 1 ] , {\displaystyle [0,1],} but the essential support of f {\displaystyle f} is empty, since f {\displaystyle f} is equal almost everywhere to the zero function. In analysis one nearly always wants to use the essential support of a function, rather than its closed support, when the two sets are different, so e s s s u p p ( f ) {\displaystyle \operatorname {ess\,supp} (f)} is often written simply as supp ( f ) {\displaystyle \operatorname {supp} (f)} and referred to as the support. == Generalization == If M {\displaystyle M} is an arbitrary set containing zero, the concept of support is immediately generalizable to functions f : X → M . {\displaystyle f:X\to M.} Support may also be defined for any algebraic structure with identity (such as a group, monoid, or composition algebra), in which the identity element assumes the role of zero. For instance, the family Z N {\displaystyle \mathbb {Z} ^{\mathbb {N} }} of functions from the natural numbers to the integers is the uncountable set of integer sequences. The subfamily { f ∈ Z N : f has finite support } {\displaystyle \left\{f\in \mathbb {Z} ^{\mathbb {N} }:f{\text{ has finite support }}\right\}} is the countable set of all integer sequences that have only finitely many nonzero entries. Functions of finite support are used in defining algebraic structures such as group rings and free abelian groups. == In probability and measure theory == In probability theory, the support of a probability distribution can be loosely thought of as the closure of the set of possible values of a random variable having that distribution. 
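Following the informal description just above, here is a small Python sketch for the discrete case: the support is simply the set of values carrying positive probability. The probability mass function is a made-up example.

```python
# Support of a discrete distribution: the values with strictly positive
# probability mass. The pmf below is an invented example.
from fractions import Fraction

pmf = {0: Fraction(1, 2), 1: Fraction(1, 3), 2: Fraction(0), 3: Fraction(1, 6)}

support = {x for x, p in pmf.items() if p > 0}
print(support)                     # {0, 1, 3}
assert sum(pmf.values()) == 1      # sanity check: probabilities sum to one
```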
There are, however, some subtleties to consider when dealing with general distributions defined on a sigma algebra, rather than on a topological space. More formally, if X : Ω → R {\displaystyle X:\Omega \to \mathbb {R} } is a random variable on ( Ω , F , P ) {\displaystyle (\Omega ,{\mathcal {F}},P)} then the support of X {\displaystyle X} is the smallest closed set R X ⊆ R {\displaystyle R_{X}\subseteq \mathbb {R} } such that P ( X ∈ R X ) = 1. {\displaystyle P\left(X\in R_{X}\right)=1.} In practice however, the support of a discrete random variable X {\displaystyle X} is often defined as the set R X = { x ∈ R : P ( X = x ) > 0 } {\displaystyle R_{X}=\{x\in \mathbb {R} :P(X=x)>0\}} and the support of a continuous random variable X {\displaystyle X} is defined as the set R X = { x ∈ R : f X ( x ) > 0 } {\displaystyle R_{X}=\{x\in \mathbb {R} :f_{X}(x)>0\}} where f X ( x ) {\displaystyle f_{X}(x)} is a probability density function of X {\displaystyle X} (the set-theoretic support). Note that the word support can refer to the logarithm of the likelihood of a probability density function. == Support of a distribution == It is possible also to talk about the support of a distribution, such as the Dirac delta function δ ( x ) {\displaystyle \delta (x)} on the real line. In that example, we can consider test functions F , {\displaystyle F,} which are smooth functions with support not including the point 0. {\displaystyle 0.} Since δ ( F ) {\displaystyle \delta (F)} (the distribution δ {\displaystyle \delta } applied as linear functional to F {\displaystyle F} ) is 0 {\displaystyle 0} for such functions, we can say that the support of δ {\displaystyle \delta } is { 0 } {\displaystyle \{0\}} only. Since measures (including probability measures) on the real line are special cases of distributions, we can also speak of the support of a measure in the same way. Suppose that f {\displaystyle f} is a distribution, and that U {\displaystyle U} is an open set in Euclidean space such that, for all test functions ϕ {\displaystyle \phi } such that the support of ϕ {\displaystyle \phi } is contained in U , {\displaystyle U,} f ( ϕ ) = 0. {\displaystyle f(\phi )=0.} Then f {\displaystyle f} is said to vanish on U . {\displaystyle U.} Now, if f {\displaystyle f} vanishes on an arbitrary family U α {\displaystyle U_{\alpha }} of open sets, then for any test function ϕ {\displaystyle \phi } supported in ⋃ U α , {\textstyle \bigcup U_{\alpha },} a simple argument based on the compactness of the support of ϕ {\displaystyle \phi } and a partition of unity shows that f ( ϕ ) = 0 {\displaystyle f(\phi )=0} as well. Hence we can define the support of f {\displaystyle f} as the complement of the largest open set on which f {\displaystyle f} vanishes. For example, the support of the Dirac delta is { 0 } . {\displaystyle \{0\}.} == Singular support == In Fourier analysis in particular, it is interesting to study the singular support of a distribution. This has the intuitive interpretation as the set of points at which a distribution fails to be a smooth function. For example, the Fourier transform of the Heaviside step function can, up to constant factors, be considered to be 1 / x {\displaystyle 1/x} (a function) except at x = 0. 
{\displaystyle x=0.} While x = 0 {\displaystyle x=0} is clearly a special point, it is more precise to say that the transform of the distribution has singular support { 0 } {\displaystyle \{0\}} : it cannot accurately be expressed as a function in relation to test functions with support including 0. {\displaystyle 0.} It can be expressed as an application of a Cauchy principal value improper integral. For distributions in several variables, singular supports allow one to define wave front sets and understand Huygens' principle in terms of mathematical analysis. Singular supports may also be used to understand phenomena special to distribution theory, such as attempts to 'multiply' distributions (squaring the Dirac delta function fails – essentially because the singular supports of the distributions to be multiplied should be disjoint). == Family of supports == An abstract notion of family of supports on a topological space X , {\displaystyle X,} suitable for sheaf theory, was defined by Henri Cartan. In extending Poincaré duality to manifolds that are not compact, the 'compact support' idea enters naturally on one side of the duality; see for example Alexander–Spanier cohomology. Bredon, Sheaf Theory (2nd edition, 1997) gives these definitions. A family Φ {\displaystyle \Phi } of closed subsets of X {\displaystyle X} is a family of supports, if it is down-closed and closed under finite union. Its extent is the union over Φ . {\displaystyle \Phi .} A paracompactifying family of supports that satisfies further that any Y {\displaystyle Y} in Φ {\displaystyle \Phi } is, with the subspace topology, a paracompact space; and has some Z {\displaystyle Z} in Φ {\displaystyle \Phi } which is a neighbourhood. If X {\displaystyle X} is a locally compact space, assumed Hausdorff, the family of all compact subsets satisfies the further conditions, making it paracompactifying. == See also == Bounded function – A mathematical function the set of whose values is bounded Bump function – Smooth and compactly supported function Support of a module Titchmarsh convolution theorem == Citations == == References == Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
|
https://en.wikipedia.org/wiki/Support_(mathematics)
|
In mathematics, the oscillation of a function or a sequence is a number that quantifies how much that sequence or function varies between its extreme values as it approaches infinity or a point. As is the case with limits, there are several definitions that put the intuitive concept into a form suitable for a mathematical treatment: oscillation of a sequence of real numbers, oscillation of a real-valued function at a point, and oscillation of a function on an interval (or open set). == Definitions == === Oscillation of a sequence === Let ( a n ) {\displaystyle (a_{n})} be a sequence of real numbers. The oscillation ω ( a n ) {\displaystyle \omega (a_{n})} of that sequence is defined as the difference (possibly infinite) between the limit superior and limit inferior of ( a n ) {\displaystyle (a_{n})} : ω ( a n ) = lim sup n → ∞ a n − lim inf n → ∞ a n {\displaystyle \omega (a_{n})=\limsup _{n\to \infty }a_{n}-\liminf _{n\to \infty }a_{n}} . The oscillation is zero if and only if the sequence converges. It is undefined if lim sup n → ∞ {\displaystyle \limsup _{n\to \infty }} and lim inf n → ∞ {\displaystyle \liminf _{n\to \infty }} are both equal to +∞ or both equal to −∞, that is, if the sequence tends to +∞ or −∞. === Oscillation of a function on an open set === Let f {\displaystyle f} be a real-valued function of a real variable. The oscillation of f {\displaystyle f} on an interval I {\displaystyle I} in its domain is the difference between the supremum and infimum of f {\displaystyle f} : ω f ( I ) = sup x ∈ I f ( x ) − inf x ∈ I f ( x ) . {\displaystyle \omega _{f}(I)=\sup _{x\in I}f(x)-\inf _{x\in I}f(x).} More generally, if f : X → R {\displaystyle f:X\to \mathbb {R} } is a function on a topological space X {\displaystyle X} (such as a metric space), then the oscillation of f {\displaystyle f} on an open set U {\displaystyle U} is ω f ( U ) = sup x ∈ U f ( x ) − inf x ∈ U f ( x ) . {\displaystyle \omega _{f}(U)=\sup _{x\in U}f(x)-\inf _{x\in U}f(x).} === Oscillation of a function at a point === The oscillation of a function f {\displaystyle f} of a real variable at a point x 0 {\displaystyle x_{0}} is defined as the limit as ϵ → 0 {\displaystyle \epsilon \to 0} of the oscillation of f {\displaystyle f} on an ϵ {\displaystyle \epsilon } -neighborhood of x 0 {\displaystyle x_{0}} : ω f ( x 0 ) = lim ϵ → 0 ω f ( x 0 − ϵ , x 0 + ϵ ) . {\displaystyle \omega _{f}(x_{0})=\lim _{\epsilon \to 0}\omega _{f}(x_{0}-\epsilon ,x_{0}+\epsilon ).} This is the same as the difference between the limit superior and limit inferior of the function at x 0 {\displaystyle x_{0}} , provided the point x 0 {\displaystyle x_{0}} is not excluded from the limits. More generally, if f : X → R {\displaystyle f:X\to \mathbb {R} } is a real-valued function on a metric space, then the oscillation is ω f ( x 0 ) = lim ϵ → 0 ω f ( B ϵ ( x 0 ) ) . {\displaystyle \omega _{f}(x_{0})=\lim _{\epsilon \to 0}\omega _{f}(B_{\epsilon }(x_{0})).} == Examples == 1 x {\displaystyle {\frac {1}{x}}} has oscillation ∞ at x {\displaystyle x} = 0, and oscillation 0 at other finite x {\displaystyle x} and at −∞ and +∞. sin 1 x {\displaystyle \sin {\frac {1}{x}}} (the topologist's sine curve) has oscillation 2 at x {\displaystyle x} = 0, and 0 elsewhere. sin x {\displaystyle \sin x} has oscillation 0 at every finite x {\displaystyle x} , and 2 at −∞ and +∞. ( − 1 ) x {\displaystyle (-1)^{x}} or 1, −1, 1, −1, 1, −1... has oscillation 2. 
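The sequence examples above can be probed numerically; the rough Python sketch below approximates the oscillation, the difference between limsup and liminf, by taking the max and min over a long tail of each sequence. The tail lengths and helper name are my own choices.

```python
# Rough numerical estimate of the oscillation of a sequence: limsup minus
# liminf, approximated by the max and min over a long tail.
import math

def oscillation(seq, tail_start=10_000, tail_len=100_000):
    tail = [seq(n) for n in range(tail_start, tail_start + tail_len)]
    return max(tail) - min(tail)

print(oscillation(lambda n: math.sin(n)))   # close to 2
print(oscillation(lambda n: 1 / (n + 1)))   # close to 0: the sequence converges
print(oscillation(lambda n: (-1) ** n))     # exactly 2, the periodic example above
```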
In the last example the sequence is periodic, and any sequence that is periodic without being constant will have non-zero oscillation. However, non-zero oscillation does not usually indicate periodicity. Geometrically, the graph of an oscillating function on the real numbers follows some path in the xy-plane, without settling into ever-smaller regions. In well-behaved cases the path might look like a loop coming back on itself, that is, periodic behaviour; in the worst cases quite irregular movement covering a whole region. == Continuity == Oscillation can be used to define continuity of a function, and is easily equivalent to the usual ε-δ definition (in the case of functions defined everywhere on the real line): a function ƒ is continuous at a point x0 if and only if the oscillation is zero; in symbols, ω f ( x 0 ) = 0. {\displaystyle \omega _{f}(x_{0})=0.} A benefit of this definition is that it quantifies discontinuity: the oscillation gives how much the function is discontinuous at a point. For example, in the classification of discontinuities: in a removable discontinuity, the distance that the value of the function is off by is the oscillation; in a jump discontinuity, the size of the jump is the oscillation (assuming that the value at the point lies between these limits from the two sides); in an essential discontinuity, oscillation measures the failure of a limit to exist. This definition is useful in descriptive set theory to study the set of discontinuities and continuous points – the continuous points are the intersection of the sets where the oscillation is less than ε (hence a Gδ set) – and gives a very quick proof of one direction of the Lebesgue integrability condition. The oscillation is equivalent to the ε-δ definition by a simple re-arrangement, and by using a limit (lim sup, lim inf) to define oscillation: if (at a given point) for a given ε0 there is no δ that satisfies the ε-δ definition, then the oscillation is at least ε0, and conversely if for every ε there is a desired δ, the oscillation is 0. The oscillation definition can be naturally generalized to maps from a topological space to a metric space. == Generalizations == More generally, if f : X → Y is a function from a topological space X into a metric space Y, then the oscillation of f is defined at each x ∈ X by ω ( x ) = inf { d i a m ( f ( U ) ) ∣ U i s a n e i g h b o r h o o d o f x } {\displaystyle \omega (x)=\inf \left\{\mathrm {diam} (f(U))\mid U\mathrm {\ is\ a\ neighborhood\ of\ } x\right\}} == See also == Wave equation Wave envelope Grandi's series Bounded mean oscillation == References == == Further reading ==
|
https://en.wikipedia.org/wiki/Oscillation_(mathematics)
|
In geometry, a locus (plural: loci) (Latin word for "place", "location") is a set of all points (commonly, a line, a line segment, a curve or a surface), whose location satisfies or is determined by one or more specified conditions. The set of the points that satisfy some property is often called the locus of a point satisfying this property. The use of the singular in this formulation is a witness that, until the end of the 19th century, mathematicians did not consider infinite sets. Instead of viewing lines and curves as sets of points, they viewed them as places where a point may be located or may move. == History and philosophy == Until the beginning of the 20th century, a geometrical shape (for example a curve) was not considered as an infinite set of points; rather, it was considered as an entity on which a point may be located or on which it moves. Thus a circle in the Euclidean plane was defined as the locus of a point that is at a given distance of a fixed point, the center of the circle. In modern mathematics, similar concepts are more frequently reformulated by describing shapes as sets; for instance, one says that the circle is the set of points that are at a given distance from the center. In contrast to the set-theoretic view, the old formulation avoids considering infinite collections, as avoiding the actual infinite was an important philosophical position of earlier mathematicians. Once set theory became the universal basis over which the whole mathematics is built, the term of locus became rather old-fashioned. Nevertheless, the word is still widely used, mainly for a concise formulation, for example: Critical locus, the set of the critical points of a differentiable function. Zero locus or vanishing locus, the set of points where a function vanishes, in that it takes the value zero. Singular locus, the set of the singular points of an algebraic variety. Connectedness locus, the subset of the parameter set of a family of rational functions for which the Julia set of the function is connected. More recently, techniques such as the theory of schemes, and the use of category theory instead of set theory to give a foundation to mathematics, have returned to notions more like the original definition of a locus as an object in itself rather than as a set of points. == Examples in plane geometry == Examples from plane geometry include: The set of points equidistant from two points is a perpendicular bisector to the line segment connecting the two points. The set of points equidistant from two intersecting lines is the union of their two angle bisectors. All conic sections are loci: Circle: the set of points at constant distance (the radius) from a fixed point (the center). Parabola: the set of points equidistant from a fixed point (the focus) and a line (the directrix). Hyperbola: the set of points for each of which the absolute value of the difference between the distances to two given foci is a constant. Ellipse: the set of points for each of which the sum of the distances to two given foci is a constant Other examples of loci appear in various areas of mathematics. For example, in complex dynamics, the Mandelbrot set is a subset of the complex plane that may be characterized as the connectedness locus of a family of polynomial maps. 
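As a small symbolic check of the first item in the list above (the points equidistant from two points form the perpendicular bisector of the segment joining them), the sympy sketch below uses two arbitrarily chosen points; the coordinates are my own example, not from the text.

```python
# The locus |PA| = |PB| for A(0, 0), B(4, 2) is a line through the midpoint
# of AB whose normal is parallel to AB, i.e. the perpendicular bisector.
import sympy as sp

x, y = sp.symbols("x y", real=True)
A, B = (0, 0), (4, 2)

locus = sp.expand((x - A[0])**2 + (y - A[1])**2 - (x - B[0])**2 - (y - B[1])**2)
print(locus)                                   # 8*x + 4*y - 20: a straight line

mid = (sp.Rational(A[0] + B[0], 2), sp.Rational(A[1] + B[1], 2))
print(locus.subs({x: mid[0], y: mid[1]}))      # 0: the midpoint lies on the locus
# The line's normal vector (8, 4) is parallel to AB = (4, 2), so the locus
# is perpendicular to AB, as expected.
```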
== Proof of a locus == To prove a geometric shape is the correct locus for a given set of conditions, one generally divides the proof into two stages: the proof that all the points that satisfy the conditions are on the given shape, and the proof that all the points on the given shape satisfy the conditions. == Examples == === First example === Find the locus of a point P that has a given ratio of distances k = d1/d2 to two given points. In this example k = 3, A(−1, 0) and B(0, 2) are chosen as the fixed points. P(x, y) is a point of the locus ⇔ | P A | = 3 | P B | {\displaystyle \Leftrightarrow |PA|=3|PB|} ⇔ | P A | 2 = 9 | P B | 2 {\displaystyle \Leftrightarrow |PA|^{2}=9|PB|^{2}} ⇔ ( x + 1 ) 2 + ( y − 0 ) 2 = 9 ( x − 0 ) 2 + 9 ( y − 2 ) 2 {\displaystyle \Leftrightarrow (x+1)^{2}+(y-0)^{2}=9(x-0)^{2}+9(y-2)^{2}} ⇔ 8 ( x 2 + y 2 ) − 2 x − 36 y + 35 = 0 {\displaystyle \Leftrightarrow 8(x^{2}+y^{2})-2x-36y+35=0} ⇔ ( x − 1 8 ) 2 + ( y − 9 4 ) 2 = 45 64 . {\displaystyle \Leftrightarrow \left(x-{\frac {1}{8}}\right)^{2}+\left(y-{\frac {9}{4}}\right)^{2}={\frac {45}{64}}.} This equation represents a circle with center (1/8, 9/4) and radius 3 8 5 {\displaystyle {\tfrac {3}{8}}{\sqrt {5}}} . It is the circle of Apollonius defined by these values of k, A, and B. === Second example === A triangle ABC has a fixed side [AB] with length c. Determine the locus of the third vertex C such that the medians from A and C are orthogonal. Choose an orthonormal coordinate system such that A(−c/2, 0), B(c/2, 0). C(x, y) is the variable third vertex. The center of [BC] is M((2x + c)/4, y/2). The median from C has a slope y/x. The median AM has slope 2y/(2x + 3c). C(x, y) is a point of the locus ⇔ {\displaystyle \Leftrightarrow } the medians from A and C are orthogonal ⇔ y x ⋅ 2 y 2 x + 3 c = − 1 {\displaystyle \Leftrightarrow {\frac {y}{x}}\cdot {\frac {2y}{2x+3c}}=-1} ⇔ 2 y 2 + 2 x 2 + 3 c x = 0 {\displaystyle \Leftrightarrow 2y^{2}+2x^{2}+3cx=0} ⇔ x 2 + y 2 + ( 3 c / 2 ) x = 0 {\displaystyle \Leftrightarrow x^{2}+y^{2}+(3c/2)x=0} ⇔ ( x + 3 c / 4 ) 2 + y 2 = 9 c 2 / 16. {\displaystyle \Leftrightarrow (x+3c/4)^{2}+y^{2}=9c^{2}/16.} The locus of the vertex C is a circle with center (−3c/4, 0) and radius 3c/4. === Third example === A locus can also be defined by two associated curves depending on one common parameter. If the parameter varies, the intersection points of the associated curves describe the locus. In the figure, the points K and L are fixed points on a given line m. The line k is a variable line through K. The line l through L is perpendicular to k. The angle α {\displaystyle \alpha } between k and m is the parameter. k and l are associated lines depending on the common parameter. The variable intersection point S of k and l describes a circle. This circle is the locus of the intersection point of the two associated lines. === Fourth example === A locus of points need not be one-dimensional (as a circle, line, etc.). For example, the locus of the inequality 2x + 3y – 6 < 0 is the portion of the plane that is below the line of equation 2x + 3y – 6 = 0. == See also == Algebraic variety Curve Line (geometry) Set-builder notation Shape (geometry) == References ==
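To double-check the algebra of the first example (the circle of Apollonius with k = 3, A(−1, 0), B(0, 2)), here is a short sympy sketch; it merely verifies the computation already carried out above.

```python
# Verify that |PA|^2 = 9|PB|^2 reduces to the circle with center (1/8, 9/4)
# and squared radius 45/64, as derived in the first example.
import sympy as sp

x, y = sp.symbols("x y", real=True)
locus = sp.expand(9 * (x**2 + (y - 2)**2) - ((x + 1)**2 + y**2))
print(locus)   # 8*x**2 - 2*x + 8*y**2 - 36*y + 35

cx, cy, r2 = sp.Rational(1, 8), sp.Rational(9, 4), sp.Rational(45, 64)
circle = sp.expand((x - cx)**2 + (y - cy)**2 - r2)
assert sp.simplify(locus / 8 - circle) == 0   # same curve, up to the factor 8
```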
|
https://en.wikipedia.org/wiki/Locus_(mathematics)
|
In mathematics, a structure on a set (or on some sets) refers to providing or endowing it (or them) with certain additional features (e.g. an operation, relation, metric, or topology). Τhe additional features are attached or related to the set (or to the sets), so as to provide it (or them) with some additional meaning or significance. A partial list of possible structures are measures, algebraic structures (groups, fields, etc.), topologies, metric structures (geometries), orders, graphs, events, equivalence relations, differential structures, and categories. Sometimes, a set is endowed with more than one feature simultaneously, which allows mathematicians to study the interaction between the different structures more richly. For example, an ordering imposes a rigid form, shape, or topology on the set, and if a set has both a topology feature and a group feature, such that these two features are related in a certain way, then the structure becomes a topological group. Map between two sets with the same type of structure, which preserve this structure [morphism: structure in the domain is mapped properly to the (same type) structure in the codomain] is of special interest in many fields of mathematics. Examples are homomorphisms, which preserve algebraic structures; continuous functions, which preserve topological structures; and differentiable functions, which preserve differential structures. == History == In 1939, the French group with the pseudonym "Nicolas Bourbaki" saw structures as the root of mathematics. They first mentioned them in their "Fascicule" of Theory of Sets and expanded it into Chapter IV of the 1957 edition. They identified three mother structures: algebraic, topological, and order. == Example: the real numbers == The set of real numbers has several standard structures: An order: each number is either less than or greater than any other number. Algebraic structure: there are operations of addition and multiplication, the first of which makes it into a group and the pair of which together make it into a field. A measure: intervals of the real line have a specific length, which can be extended to the Lebesgue measure on many of its subsets. A metric: there is a notion of distance between points. A geometry: it is equipped with a metric and is flat. A topology: there is a notion of open sets. There are interfaces among these: Its order and, independently, its metric structure induce its topology. Its order and algebraic structure make it into an ordered field. Its algebraic structure and topology make it into a Lie group, a type of topological group. == See also == Abstract structure Isomorphism Equivalent definitions of mathematical structures Forgetful functor Intuitionistic type theory Mathematical object Algebraic structure Space (mathematics) Category (mathematics) == References == == Further reading == Bourbaki, Nikolas (1968). "Elements of Mathematics: Theory of Sets". Hermann, Addison-Wesley. pp. 259–346, 383–385. Foldes, Stephan (1994). Fundamental Structures of Algebra and Discrete Mathematics. Hoboken: John Wiley & Sons. ISBN 9781118031438. Hegedus, Stephen John; Moreno-Armella, Luis (2011). "The emergence of mathematical structures". Educational Studies in Mathematics. 77 (2): 369–388. doi:10.1007/s10649-010-9297-7. S2CID 119981368. Kolman, Bernard; Busby, Robert C.; Ross, Sharon Cutler (2000). Discrete mathematical structures (4th ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 978-0-13-083143-9. Malik, D.S.; Sen, M.K. (2004). 
Discrete mathematical structures : theory and applications. Australia: Thomson/Course Technology. ISBN 978-0-619-21558-3. Pudlák, Pavel (2013). "Mathematical structures". Logical foundations of mathematics and computational complexity a gentle introduction. Cham: Springer. pp. 2–24. ISBN 9783319001197. Senechal, M. (21 May 1993). "Mathematical Structures". Science. 260 (5111): 1170–1173. doi:10.1126/science.260.5111.1170. PMID 17806355. == External links == "Structure". PlanetMath. (provides a model theoretic definition.) Mathematical structures in computer science (journal)
|
https://en.wikipedia.org/wiki/Mathematical_structure
|
In mathematics, a plane is a two-dimensional space or flat surface that extends indefinitely. A plane is the two-dimensional analogue of a point (zero dimensions), a line (one dimension) and three-dimensional space. When working exclusively in two-dimensional Euclidean space, the definite article is used, so the Euclidean plane refers to the whole space. Several notions of a plane may be defined. The Euclidean plane follows Euclidean geometry, and in particular the parallel postulate. A projective plane may be constructed by adding "points at infinity" where two otherwise parallel lines would intersect, so that every pair of lines intersects in exactly one point. The elliptic plane may be further defined by adding a metric to the real projective plane. One may also conceive of a hyperbolic plane, which obeys hyperbolic geometry and has a negative curvature. Abstractly, one may forget all structure except the topology, producing the topological plane, which is homeomorphic to an open disk. Viewing the plane as an affine space produces the affine plane, which lacks a notion of distance but preserves the notion of collinearity. Conversely, in adding more structure, one may view the plane as a 1-dimensional complex manifold, called the complex line. Many fundamental tasks in mathematics, geometry, trigonometry, graph theory, and graphing are performed in a two-dimensional or planar space. == Euclidean plane == === Embedding in three-dimensional space === == Elliptic plane == == Projective plane == == Further generalizations == In addition to its familiar geometric structure, with isomorphisms that are isometries with respect to the usual inner product, the plane may be viewed at various other levels of abstraction. Each level of abstraction corresponds to a specific category. At one extreme, all geometrical and metric concepts may be dropped to leave the topological plane, which may be thought of as an idealized homotopically trivial infinite rubber sheet, which retains a notion of proximity, but has no distances. The topological plane has a concept of a linear path, but no concept of a straight line. The topological plane, or its equivalent the open disc, is the basic topological neighborhood used to construct surfaces (or 2-manifolds) classified in low-dimensional topology. Isomorphisms of the topological plane are all continuous bijections. The topological plane is the natural context for the branch of graph theory that deals with planar graphs, and results such as the four color theorem. The plane may also be viewed as an affine space, whose isomorphisms are combinations of translations and non-singular linear maps. From this viewpoint there are no distances, but collinearity and ratios of distances on any line are preserved. Differential geometry views a plane as a 2-dimensional real manifold, a topological plane which is provided with a differential structure. Again in this case, there is no notion of distance, but there is now a concept of smoothness of maps, for example a differentiable or smooth path (depending on the type of differential structure applied). The isomorphisms in this case are bijections with the chosen degree of differentiability. In the opposite direction of abstraction, we may apply a compatible field structure to the geometric plane, giving rise to the complex plane and the major area of complex analysis. The complex field has only two isomorphisms that leave the real line fixed, the identity and conjugation. 
In the same way as in the real case, the plane may also be viewed as the simplest, one-dimensional (in terms of complex dimension, over the complex numbers) complex manifold, sometimes called the complex line. However, this viewpoint contrasts sharply with the case of the plane as a 2-dimensional real manifold. The isomorphisms are all conformal bijections of the complex plane, but the only possibilities are maps that correspond to the composition of a multiplication by a complex number and a translation. In addition, the Euclidean geometry (which has zero curvature everywhere) is not the only geometry that the plane may have. The plane may be given a spherical geometry by using the stereographic projection. This can be thought of as placing a sphere tangent to the plane (just like a ball on the floor), removing the top point, and projecting the sphere onto the plane from this point. This is one of the projections that may be used in making a flat map of part of the Earth's surface. The resulting geometry has constant positive curvature. Alternatively, the plane can also be given a metric which gives it constant negative curvature giving the hyperbolic plane. The latter possibility finds an application in the theory of special relativity in the simplified case where there are two spatial dimensions and one time dimension. (The hyperbolic plane is a timelike hypersurface in three-dimensional Minkowski space.) == Topological and differential geometric notions == The one-point compactification of the plane is homeomorphic to a sphere (see stereographic projection); the open disk is homeomorphic to a sphere with the "north pole" missing; adding that point completes the (compact) sphere. The result of this compactification is a manifold referred to as the Riemann sphere or the complex projective line. The projection from the Euclidean plane to a sphere without a point is a diffeomorphism and even a conformal map. The plane itself is homeomorphic (and diffeomorphic) to an open disk. For the hyperbolic plane such diffeomorphism is conformal, but for the Euclidean plane it is not. == See also == Affine plane Half-plane Hyperbolic geometry == References ==
|
https://en.wikipedia.org/wiki/Plane_(mathematics)
|
In mathematics, a domino is a polyomino of order 2, that is, a polygon in the plane made of two equal-sized squares connected edge-to-edge. When rotations and reflections are not considered to be distinct shapes, there is only one free domino. Since it has reflection symmetry, it is also the only one-sided domino (with reflections considered distinct). When rotations are also considered distinct, there are two fixed dominoes: The second one can be created by rotating the one above by 90°. In a wider sense, the term domino is sometimes understood to mean a tile of any shape. == Packing and tiling == Dominos can tile the plane in a countably infinite number of ways. The number of tilings of a 2×n rectangle with dominoes is F n {\displaystyle F_{n}} , the nth Fibonacci number. Domino tilings figure in several celebrated problems, including the Aztec diamond problem in which large diamond-shaped regions have a number of tilings equal to a power of two, with most tilings appearing random within a central circular region and having a more regular structure outside of this "arctic circle", and the mutilated chessboard problem, in which removing two opposite corners from a chessboard makes it impossible to tile with dominoes. == See also == Dominoes, a set of domino-shaped gaming pieces Tatami, Japanese domino-shaped floor mats == References ==
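As a quick check of the Fibonacci count stated in the Packing and tiling section, the short Python sketch below counts tilings of a 2×n rectangle with the standard recurrence (a tiling ends in one vertical domino or in two stacked horizontal dominoes); the function name is my own.

```python
# Count tilings of a 2 x n rectangle by dominoes: a tiling ends either with
# one vertical domino (a 2 x (n-1) board remains) or with two horizontal
# dominoes (a 2 x (n-2) board remains), giving the Fibonacci recurrence.

def domino_tilings(n):
    a, b = 1, 1            # counts for widths 0 and 1
    for _ in range(n):
        a, b = b, a + b
    return a               # count for width n

print([domino_tilings(n) for n in range(1, 9)])
# [1, 2, 3, 5, 8, 13, 21, 34] -- consecutive Fibonacci numbers, as stated above
```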
|
https://en.wikipedia.org/wiki/Domino_(mathematics)
|
In mathematics, a reflection (also spelled reflexion) is a mapping from a Euclidean space to itself that is an isometry with a hyperplane as the set of fixed points; this set is called the axis (in dimension 2) or plane (in dimension 3) of reflection. The image of a figure by a reflection is its mirror image in the axis or plane of reflection. For example the mirror image of the small Latin letter p for a reflection with respect to a vertical axis (a vertical reflection) would look like q. Its image by reflection in a horizontal axis (a horizontal reflection) would look like b. A reflection is an involution: when applied twice in succession, every point returns to its original location, and every geometrical object is restored to its original state. The term reflection is sometimes used for a larger class of mappings from a Euclidean space to itself, namely the non-identity isometries that are involutions. The set of fixed points (the "mirror") of such an isometry is an affine subspace, but is possibly smaller than a hyperplane. For instance a reflection through a point is an involutive isometry with just one fixed point; the image of the letter p under it would look like a d. This operation is also known as a central inversion (Coxeter 1969, §7.2), and exhibits Euclidean space as a symmetric space. In a Euclidean vector space, the reflection in the point situated at the origin is the same as vector negation. Other examples include reflections in a line in three-dimensional space. Typically, however, unqualified use of the term "reflection" means reflection in a hyperplane. Some mathematicians use "flip" as a synonym for "reflection". == Construction == In a plane (or, respectively, 3-dimensional) geometry, to find the reflection of a point drop a perpendicular from the point to the line (plane) used for reflection, and extend it the same distance on the other side. To find the reflection of a figure, reflect each point in the figure. To reflect point P through the line AB using compass and straightedge, proceed as follows (see figure): Step 1 (red): construct a circle with center at P and some fixed radius r to create points A′ and B′ on the line AB, which will be equidistant from P. Step 2 (green): construct circles centered at A′ and B′ having radius r. P and Q will be the points of intersection of these two circles. Point Q is then the reflection of point P through line AB. == Properties == The matrix for a reflection is orthogonal with determinant −1 and eigenvalues −1, 1, 1, ..., 1. The product of two such matrices is a special orthogonal matrix that represents a rotation. Every rotation is the result of reflecting in an even number of reflections in hyperplanes through the origin, and every improper rotation is the result of reflecting in an odd number. Thus reflections generate the orthogonal group, and this result is known as the Cartan–Dieudonné theorem. Similarly the Euclidean group, which consists of all isometries of Euclidean space, is generated by reflections in affine hyperplanes. In general, a group generated by reflections in affine hyperplanes is known as a reflection group. The finite groups generated in this way are examples of Coxeter groups. 
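The statement above that every rotation is a product of reflections can be checked numerically in the plane; in the sketch below (helper names are mine) two reflections across lines through the origin compose to a rotation by twice the angle between the lines.

```python
# Two reflections across lines through the origin compose to a rotation by
# twice the angle between the lines; each reflection matrix has det -1.
import numpy as np

def line_reflection(theta):
    """Matrix reflecting the plane across the line at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

R1 = line_reflection(0.0)            # reflection across the x-axis
R2 = line_reflection(np.pi / 6)      # reflection across the line at 30 degrees
composite = R2 @ R1

angle = np.pi / 3                    # twice the angle between the two lines
rotation = np.array([[np.cos(angle), -np.sin(angle)],
                     [np.sin(angle),  np.cos(angle)]])
print(np.allclose(composite, rotation))               # True
print(np.linalg.det(R1), np.linalg.det(composite))    # about -1.0 and 1.0
```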
== Reflection across a line in the plane == Reflection across an arbitrary line through the origin in two dimensions can be described by the following formula Ref l ( v ) = 2 v ⋅ l l ⋅ l l − v , {\displaystyle \operatorname {Ref} _{l}(v)=2{\frac {v\cdot l}{l\cdot l}}l-v,} where v {\displaystyle v} denotes the vector being reflected, l {\displaystyle l} denotes any vector in the line across which the reflection is performed, and v ⋅ l {\displaystyle v\cdot l} denotes the dot product of v {\displaystyle v} with l {\displaystyle l} . Note the formula above can also be written as Ref l ( v ) = 2 Proj l ( v ) − v , {\displaystyle \operatorname {Ref} _{l}(v)=2\operatorname {Proj} _{l}(v)-v,} saying that a reflection of v {\displaystyle v} across l {\displaystyle l} is equal to 2 times the projection of v {\displaystyle v} on l {\displaystyle l} , minus the vector v {\displaystyle v} . Reflections in a line have the eigenvalues of 1, and −1. == Reflection through a hyperplane in n dimensions == Given a vector v {\displaystyle v} in Euclidean space R n {\displaystyle \mathbb {R} ^{n}} , the formula for the reflection in the hyperplane through the origin, orthogonal to a {\displaystyle a} , is given by Ref a ( v ) = v − 2 v ⋅ a a ⋅ a a , {\displaystyle \operatorname {Ref} _{a}(v)=v-2{\frac {v\cdot a}{a\cdot a}}a,} where v ⋅ a {\displaystyle v\cdot a} denotes the dot product of v {\displaystyle v} with a {\displaystyle a} . Note that the second term in the above equation is just twice the vector projection of v {\displaystyle v} onto a {\displaystyle a} . One can easily check that Refa(v) = −v, if v {\displaystyle v} is parallel to a {\displaystyle a} , and Refa(v) = v, if v {\displaystyle v} is perpendicular to a. Using the geometric product, the formula is Ref a ( v ) = − a v a a 2 . {\displaystyle \operatorname {Ref} _{a}(v)=-{\frac {ava}{a^{2}}}.} Since these reflections are isometries of Euclidean space fixing the origin they may be represented by orthogonal matrices. The orthogonal matrix corresponding to the above reflection is the matrix R = I − 2 a a T a T a , {\displaystyle R=I-2{\frac {aa^{T}}{a^{T}a}},} where I {\displaystyle I} denotes the n × n {\displaystyle n\times n} identity matrix and a T {\displaystyle a^{T}} is the transpose of a. Its entries are R i j = δ i j − 2 a i a j ‖ a ‖ 2 , {\displaystyle R_{ij}=\delta _{ij}-2{\frac {a_{i}a_{j}}{\left\|a\right\|^{2}}},} where δij is the Kronecker delta. The formula for the reflection in the affine hyperplane v ⋅ a = c {\displaystyle v\cdot a=c} not through the origin is Ref a , c ( v ) = v − 2 v ⋅ a − c a ⋅ a a . {\displaystyle \operatorname {Ref} _{a,c}(v)=v-2{\frac {v\cdot a-c}{a\cdot a}}a.} == See also == Additive inverse Coordinate rotations and reflections Householder transformation Inversive geometry Plane of rotation Reflection mapping Reflection group Reflection symmetry == Notes == == References == Coxeter, Harold Scott MacDonald (1969), Introduction to Geometry (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-50458-0, MR 0123930 Popov, V.L. (2001) [1994], "Reflection", Encyclopedia of Mathematics, EMS Press Weisstein, Eric W. "Reflection". MathWorld. == External links == Reflection in Line at cut-the-knot Understanding 2D Reflection and Understanding 3D Reflection by Roger Germundsson, The Wolfram Demonstrations Project.
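The two formulas in this section translate directly into a few lines of numpy; the sketch below (function names are my own) implements the reflection across a line through the origin and the matrix R = I - 2aa^T/(a^T a), then confirms the properties stated earlier: determinant -1 and involution.

```python
# Numeric sketch of the reflection formulas given above.
import numpy as np

def reflect_across_line(v, l):
    """Ref_l(v) = 2 (v.l)/(l.l) l - v: reflection of v across the line spanned by l."""
    return 2 * (v @ l) / (l @ l) * l - v

def reflection_matrix(a):
    """R = I - 2 a a^T / (a^T a): reflection in the hyperplane orthogonal to a."""
    a = np.asarray(a, dtype=float)
    return np.eye(a.size) - 2 * np.outer(a, a) / (a @ a)

print(reflect_across_line(np.array([2.0, 1.0]), np.array([1.0, 1.0])))  # [1. 2.]

R = reflection_matrix([0.0, 0.0, 1.0])   # reflection across the xy-plane
print(np.linalg.det(R))                  # -1.0, matching the Properties section
print(np.allclose(R @ R, np.eye(3)))     # True: applying it twice is the identity
```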
|
https://en.wikipedia.org/wiki/Reflection_(mathematics)
|
In mathematics, the eccentricity of a conic section is a non-negative real number that uniquely characterizes its shape. One can think of the eccentricity as a measure of how much a conic section deviates from being circular. In particular: The eccentricity of a circle is 0. The eccentricity of a non-circular ellipse is between 0 and 1. The eccentricity of a parabola is 1. The eccentricity of a hyperbola is greater than 1. The eccentricity of a pair of lines is ∞ . {\displaystyle \infty .} Two conic sections with the same eccentricity are similar. == Definitions == Any conic section can be defined as the locus of points whose distances to a point (the focus) and a line (the directrix) are in a constant ratio. That ratio is called the eccentricity, commonly denoted as e. The eccentricity can also be defined in terms of the intersection of a plane and a double-napped cone associated with the conic section. If the cone is oriented with its axis vertical, the eccentricity is e = sin β sin α , 0 < α < 90 ∘ , 0 ≤ β ≤ 90 ∘ , {\displaystyle e={\frac {\sin \beta }{\sin \alpha }},\ \ 0<\alpha <90^{\circ },\ 0\leq \beta \leq 90^{\circ }\ ,} where β is the angle between the plane and the horizontal and α is the angle between the cone's slant generator and the horizontal. For β = 0 {\displaystyle \beta =0} the plane section is a circle, for β = α {\displaystyle \beta =\alpha } a parabola. (The plane must not meet the vertex of the cone.) The linear eccentricity of an ellipse or hyperbola, denoted c (or sometimes f or e), is the distance between its center and either of its two foci. The eccentricity can be defined as the ratio of the linear eccentricity to the semimajor axis a: that is, e = c a {\displaystyle e={\frac {c}{a}}} (lacking a center, the linear eccentricity for parabolas is not defined). A parabola can be treated as a limiting case of an ellipse or a hyperbola with one focal point at infinity. == Alternative names == The eccentricity is sometimes called the first eccentricity to distinguish it from the second eccentricity and third eccentricity defined for ellipses (see below). The eccentricity is also sometimes called the numerical eccentricity. In the case of ellipses and hyperbolas the linear eccentricity is sometimes called the half-focal separation. == Notation == Three notational conventions are in common use: e for the eccentricity and c for the linear eccentricity. ε for the eccentricity and e for the linear eccentricity. e or ϵ< for the eccentricity and f for the linear eccentricity (mnemonic for half-focal separation). This article uses the first notation. == Values == === Standard form === Here, for the ellipse and the hyperbola, a is the length of the semi-major axis and b is the length of the semi-minor axis. 
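The relation e = c/a quoted above can be turned into a short computation once the linear eccentricity is known; the Python sketch below assumes the standard expressions c = sqrt(a^2 - b^2) for an ellipse and c = sqrt(a^2 + b^2) for a hyperbola, since the standard-form values themselves are not reproduced in the text.

```python
# Eccentricity e = c/a from the semi-axes a and b, assuming the standard
# linear eccentricities c = sqrt(a^2 - b^2) (ellipse) and sqrt(a^2 + b^2)
# (hyperbola).
import math

def ellipse_eccentricity(a, b):
    return math.sqrt(a**2 - b**2) / a

def hyperbola_eccentricity(a, b):
    return math.sqrt(a**2 + b**2) / a

print(ellipse_eccentricity(5, 4))     # 0.6, strictly between 0 and 1
print(ellipse_eccentricity(5, 5))     # 0.0, the circle case
print(hyperbola_eccentricity(1, 1))   # 1.414..., the rectangular hyperbola
```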
=== General form === When the conic section is given in the general quadratic form A x 2 + B x y + C y 2 + D x + E y + F = 0 , {\displaystyle Ax^{2}+Bxy+Cy^{2}+Dx+Ey+F=0,} the following formula gives the eccentricity e if the conic section is not a parabola (which has eccentricity equal to 1), not a degenerate hyperbola or degenerate ellipse, and not an imaginary ellipse: e = 2 ( A − C ) 2 + B 2 η ( A + C ) + ( A − C ) 2 + B 2 {\displaystyle e={\sqrt {\frac {2{\sqrt {(A-C)^{2}+B^{2}}}}{\eta (A+C)+{\sqrt {(A-C)^{2}+B^{2}}}}}}} where η = 1 {\displaystyle \eta =1} if the determinant of the 3×3 matrix [ A B / 2 D / 2 B / 2 C E / 2 D / 2 E / 2 F ] {\displaystyle {\begin{bmatrix}A&B/2&D/2\\B/2&C&E/2\\D/2&E/2&F\end{bmatrix}}} is negative or η = − 1 {\displaystyle \eta =-1} if that determinant is positive. == Ellipses == The eccentricity of an ellipse is strictly less than 1. When circles (which have eccentricity 0) are counted as ellipses, the eccentricity of an ellipse is greater than or equal to 0; if circles are given a special category and are excluded from the category of ellipses, then the eccentricity of an ellipse is strictly greater than 0. For any ellipse, let a be the length of its semi-major axis and b be the length of its semi-minor axis. In the coordinate system with origin at the ellipse's center and x-axis aligned with the major axis, points on the ellipse satisfy the equation x 2 a 2 + y 2 b 2 = 1 , {\displaystyle {\frac {x^{2}}{a^{2}}}+{\frac {y^{2}}{b^{2}}}=1,} with foci at coordinates ( ± c , 0 ) {\displaystyle (\pm c,0)} for c = a 2 − b 2 . {\textstyle c={\sqrt {a^{2}-b^{2}}}.} We define a number of related additional concepts (only for ellipses): === Other formulae for the eccentricity of an ellipse === The eccentricity of an ellipse is, most simply, the ratio of the linear eccentricity c (distance between the center of the ellipse and each focus) to the length of the semimajor axis a. e = c a . {\displaystyle e={\frac {c}{a}}.} The eccentricity is also the ratio of the semimajor axis a to the distance d from the center to the directrix: e = a d . {\displaystyle e={\frac {a}{d}}.} The eccentricity can be expressed in terms of the flattening f (defined as f = 1 − b / a {\displaystyle f=1-b/a} for semimajor axis a and semiminor axis b): e = 1 − ( 1 − f ) 2 = f ( 2 − f ) . {\displaystyle e={\sqrt {1-(1-f)^{2}}}={\sqrt {f(2-f)}}.} (Flattening may be denoted by g in some subject areas if f is linear eccentricity.) Define the maximum and minimum radii r max {\displaystyle r_{\text{max}}} and r min {\displaystyle r_{\text{min}}} as the maximum and minimum distances from either focus to the ellipse (that is, the distances from either focus to the two ends of the major axis). Then with semimajor axis a, the eccentricity is given by e = r max − r min r max + r min = r max − r min 2 a , {\displaystyle e={\frac {r_{\text{max}}-r_{\text{min}}}{r_{\text{max}}+r_{\text{min}}}}={\frac {r_{\text{max}}-r_{\text{min}}}{2a}},} which is the distance between the foci divided by the length of the major axis. == Hyperbolas == The eccentricity of a hyperbola can be any real number greater than 1, with no upper bound. The eccentricity of a rectangular hyperbola is 2 {\displaystyle {\sqrt {2}}} . == Quadrics == The eccentricity of a three-dimensional quadric is the eccentricity of a designated section of it. 
For example, on a triaxial ellipsoid, the meridional eccentricity is that of the ellipse formed by a section containing both the longest and the shortest axes (one of which will be the polar axis), and the equatorial eccentricity is the eccentricity of the ellipse formed by a section through the centre, perpendicular to the polar axis (i.e. in the equatorial plane). Conic sections may, however, also occur on surfaces of higher order. == Celestial mechanics == In celestial mechanics, for bound orbits in a spherical potential, the definition above is informally generalized. When the apocenter distance is close to the pericenter distance, the orbit is said to have low eccentricity; when they are very different, the orbit is said to be eccentric, or to have an eccentricity near unity. This definition coincides with the mathematical definition of eccentricity for ellipses in Keplerian, i.e., 1 / r {\displaystyle 1/r} , potentials. == Analogous classifications == A number of classifications in mathematics use terminology derived from the classification of conic sections by eccentricity: Classification of elements of SL2(R) as elliptic, parabolic, and hyperbolic – and similarly for the classification of elements of PSL2(R), the real Möbius transformations. Classification of discrete distributions by variance-to-mean ratio; see cumulants of some discrete probability distributions for details. Classification of partial differential equations by analogy with the conic section classification; see elliptic, parabolic and hyperbolic partial differential equations. == See also == Kepler orbits Eccentricity vector Orbital eccentricity Roundness (object) Conic constant == References == == External links == MathWorld: Eccentricity
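The general-form formula quoted in the General form section lends itself to a direct numerical check. The following sketch (Python with NumPy; the function name and the two sample conics are arbitrary choices) evaluates e from the coefficients A, B, C, D, E, F and the sign convention for η based on the 3×3 determinant.

```python
import numpy as np

def conic_eccentricity(A, B, C, D, E, F):
    """Eccentricity of a non-degenerate, non-parabolic conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0."""
    M = np.array([[A, B / 2, D / 2],
                  [B / 2, C, E / 2],
                  [D / 2, E / 2, F]])
    eta = 1.0 if np.linalg.det(M) < 0 else -1.0
    root = np.sqrt((A - C) ** 2 + B ** 2)
    return np.sqrt(2 * root / (eta * (A + C) + root))

# x^2 + 4y^2 - 4 = 0 is the ellipse x^2/4 + y^2 = 1 with semi-axes a = 2, b = 1
print(conic_eccentricity(1, 0, 4, 0, 0, -4))   # ~0.866, equal to sqrt(1 - b^2/a^2)
# x^2 - y^2 - 1 = 0 is a rectangular hyperbola
print(conic_eccentricity(1, 0, -1, 0, 0, -1))  # ~1.414, equal to sqrt(2)
```

The second value agrees with the eccentricity of a rectangular hyperbola stated in the Hyperbolas section.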
|
https://en.wikipedia.org/wiki/Eccentricity_(mathematics)
|
In mathematics, the word constant conveys multiple meanings. As an adjective, it refers to non-variance (i.e. unchanging with respect to some other value); as a noun, it has two different meanings: A fixed and well-defined number or other non-changing mathematical object, or the symbol denoting it. The terms mathematical constant or physical constant are sometimes used to distinguish this meaning. A function whose value remains unchanged (i.e., a constant function). Such a constant is commonly represented by a variable which does not depend on the main variable(s) in question. For example, a general quadratic function is commonly written as: a x 2 + b x + c , {\displaystyle ax^{2}+bx+c\,,} where a, b and c are constants (coefficients or parameters), and x a variable—a placeholder for the argument of the function being studied. A more explicit way to denote this function is x ↦ a x 2 + b x + c , {\displaystyle x\mapsto ax^{2}+bx+c\,,} which makes the function-argument status of x (and by extension the constancy of a, b and c) clear. In this example a, b and c are coefficients of the polynomial. Since c occurs in a term that does not involve x, it is called the constant term of the polynomial and can be thought of as the coefficient of x0. More generally, any polynomial term or expression of degree zero (no variable) is a constant.: 18 == Constant function == A constant may be used to define a constant function that ignores its arguments and always gives the same value. A constant function of a single variable, such as f ( x ) = 5 {\displaystyle f(x)=5} , has a graph of a horizontal line parallel to the x-axis. Such a function always takes the same value (in this case 5), because the variable does not appear in the expression defining the function. == Context-dependence == The context-dependent nature of the concept of "constant" can be seen in this example from elementary calculus: d d x 2 x = lim h → 0 2 x + h − 2 x h = lim h → 0 2 x 2 h − 1 h = 2 x lim h → 0 2 h − 1 h since x is constant (i.e. does not depend on h ) = 2 x ⋅ c o n s t a n t , where c o n s t a n t means not depending on x . {\displaystyle {\begin{aligned}{\frac {d}{dx}}2^{x}&=\lim _{h\to 0}{\frac {2^{x+h}-2^{x}}{h}}=\lim _{h\to 0}2^{x}{\frac {2^{h}-1}{h}}\\[8pt]&=2^{x}\lim _{h\to 0}{\frac {2^{h}-1}{h}}&&{\text{since }}x{\text{ is constant (i.e. does not depend on }}h{\text{)}}\\[8pt]&=2^{x}\cdot \mathbf {constant,} &&{\text{where }}\mathbf {constant} {\text{ means not depending on }}x.\end{aligned}}} "Constant" means not depending on some variable; not changing as that variable changes. In the first case above, it means not depending on h; in the second, it means not depending on x. A constant in a narrower context could be regarded as a variable in a broader context. == Notable mathematical constants == Some values occur frequently in mathematics and are conventionally denoted by a specific symbol. These standard symbols and their values are called mathematical constants. Examples include: 0 (zero). 1 (one), the natural number after zero. π (pi), the constant representing the ratio of a circle's circumference to its diameter, approximately equal to 3.141592653589793238462643. e, approximately equal to 2.718281828459045235360287. i, the imaginary unit such that i2 = −1. 2 {\displaystyle {\sqrt {2}}} (square root of 2), the length of the diagonal of a square with unit sides, approximately equal to 1.414213562373095048801688. 
φ (golden ratio), approximately equal to 1.618033988749894848204586, or algebraically, 1 + 5 2 {\displaystyle 1+{\sqrt {5}} \over 2} . == Constants in calculus == In calculus, constants are treated in several different ways depending on the operation. For example, the derivative (rate of change) of a constant function is zero. This is because constants, by definition, do not change. Their derivative is hence zero. Conversely, when integrating a constant function, the constant is multiplied by the variable of integration. During the evaluation of a limit, a constant remains the same as it was before and after evaluation. Integration of a function of one variable often involves a constant of integration. This arises because the integral is the inverse (opposite) of the derivative meaning that the aim of integration is to recover the original function before differentiation. The derivative of a constant function is zero, as noted above, and the differential operator is a linear operator, so functions that only differ by a constant term have the same derivative. To acknowledge this, a constant of integration is added to an indefinite integral; this ensures that all possible solutions are included. The constant of integration is generally written as 'c', and represents a constant with a fixed but undefined value. === Examples === If f is the constant function such that f ( x ) = 72 {\displaystyle f(x)=72} for every x then f ′ ( x ) = 0 ∫ f ( x ) d x = 72 x + c lim x → 0 f ( x ) = 72 {\displaystyle {\begin{aligned}f'(x)&=0\\\int f(x)\,dx&=72x+c\\\lim _{x\rightarrow 0}f(x)&=72\end{aligned}}} == See also == Constant (disambiguation) Expression Level set List of mathematical constants Physical constant == References == == External links == Media related to Constants at Wikimedia Commons
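The facts in the example above (zero derivative, antiderivative 72x + c, unchanged limit), as well as the constant factor lim_{h→0} (2^h − 1)/h appearing in the earlier context-dependence derivation, can be checked symbolically. A minimal sketch, assuming the third-party SymPy library is available:

```python
import sympy as sp

x, h = sp.symbols('x h')
f = sp.Integer(72)                     # the constant function f(x) = 72

print(sp.diff(f, x))                   # 0: the derivative of a constant is zero
print(sp.integrate(f, x))              # 72*x: an antiderivative (SymPy omits the constant c)
print(sp.limit(f, x, 0))               # 72: a constant is unchanged by taking a limit

print(sp.limit((2**h - 1) / h, h, 0))  # log(2): the constant in d/dx 2^x = 2^x * log(2)
```

The arbitrary constant of integration c must be added by hand when writing the general antiderivative 72x + c.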
|
https://en.wikipedia.org/wiki/Constant_(mathematics)
|
In the mathematical areas of number theory and analysis, an infinite sequence or a function is said to eventually have a certain property, if it does not have the said property across all its ordered instances, but will after some instances have passed. The use of the term "eventually" can be often rephrased as "for sufficiently large numbers", and can be also extended to the class of properties that apply to elements of any ordered set (such as sequences and subsets of R {\displaystyle \mathbb {R} } ). == Notation == The general form where the phrase eventually (or sufficiently large) is found appears as follows: P {\displaystyle P} is eventually true for x {\displaystyle x} ( P {\displaystyle P} is true for sufficiently large x {\displaystyle x} ), where ∀ {\displaystyle \forall } and ∃ {\displaystyle \exists } are the universal and existential quantifiers, which is actually a shorthand for: ∃ a ∈ R {\displaystyle \exists a\in \mathbb {R} } such that P {\displaystyle P} is true ∀ x ≥ a {\displaystyle \forall x\geq a} or somewhat more formally: ∃ a ∈ R : ∀ x ∈ R : x ≥ a ⇒ P ( x ) {\displaystyle \exists a\in \mathbb {R} :\forall x\in \mathbb {R} :x\geq a\Rightarrow P(x)} This does not necessarily mean that any particular value for a {\displaystyle a} is known, but only that such an a {\displaystyle a} exists. The phrase "sufficiently large" should not be confused with the phrases "arbitrarily large" or "infinitely large". For more, see Arbitrarily large#Arbitrarily large vs. sufficiently large vs. infinitely large. == Motivation and definition == For an infinite sequence, one is often more interested in the long-term behaviors of the sequence than the behaviors it exhibits early on. In which case, one way to formally capture this concept is to say that the sequence possesses a certain property eventually, or equivalently, that the property is satisfied by one of its subsequences ( a n ) n ≥ N {\displaystyle (a_{n})_{n\geq N}} , for some N ∈ N {\displaystyle N\in \mathbb {N} } . For example, the definition of a sequence of real numbers ( a n ) {\displaystyle (a_{n})} converging to some limit a {\displaystyle a} is: For each positive number ε {\displaystyle \varepsilon } , there exists a natural number N {\displaystyle N} such that for all n > N {\displaystyle n>N} , | a n − a | < ε {\displaystyle \left\vert a_{n}-a\right\vert <\varepsilon } . When the term "eventually" is used as a shorthand for "there exists a natural number N {\displaystyle N} such that for all n > N {\displaystyle n>N} ", the convergence definition can be restated more simply as: For each positive number ε > 0 {\displaystyle \varepsilon >0} , eventually | a n − a | < ε {\displaystyle \left\vert a_{n}-a\right\vert <\varepsilon } . Here, notice that the set of natural numbers that do not satisfy this property is a finite set; that is, the set is empty or has a maximum element. As a result, the use of "eventually" in this case is synonymous with the expression "for all but a finite number of terms" – a special case of the expression "for almost all terms" (although "almost all" can also be used to allow for infinitely many exceptions as well). At the basic level, a sequence can be thought of as a function with natural numbers as its domain, and the notion of "eventually" applies to functions on more general sets as well—in particular to those that have an ordering with no greatest element. 
More specifically, if S {\displaystyle S} is such a set and there is an element s {\displaystyle s} in S {\displaystyle S} such that the function f {\displaystyle f} is defined for all elements greater than s {\displaystyle s} , then f {\displaystyle f} is said to have some property eventually if there is an element x 0 {\displaystyle x_{0}} such that whenever x > x 0 {\displaystyle x>x_{0}} , f ( x ) {\displaystyle f(x)} has the said property. This notion is used, for example, in the study of Hardy fields, which are fields made up of real functions, each of which has certain properties eventually. == Examples == "All primes greater than 2 are odd" can be written as "Eventually, all primes are odd." Eventually, all primes are congruent to ±1 modulo 6. The square of a prime is eventually congruent to 1 mod 24 (specifically, this is true for all primes greater than 3). The factorial of a natural number eventually ends in the digit 0 (specifically, this is true for all natural numbers greater than 4). == Other uses in mathematics == A 3-manifold is called sufficiently large if it contains a properly embedded 2-sided incompressible surface. This property is the main requirement for a 3-manifold to be called a Haken manifold. Temporal logic introduces an operator that can be used to express statements interpretable as: a certain property will eventually hold in a future moment in time. == See also == Almost all Big O notation Mathematical jargon Number theory == References ==
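The statements in the Examples section concern all sufficiently large primes or natural numbers, so a finite computation cannot prove them, but it can illustrate them. A small sketch in Python; the helper name and the bounds 1000 and 20 are arbitrary choices.

```python
from math import factorial

def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# "Eventually, all primes are odd": holds for every prime greater than 2.
assert all(p % 2 == 1 for p in range(3, 1000) if is_prime(p))

# Eventually, all primes are congruent to +/-1 modulo 6 (holds for primes greater than 3).
assert all(p % 6 in (1, 5) for p in range(5, 1000) if is_prime(p))

# The square of a prime is eventually congruent to 1 mod 24 (holds for primes greater than 3).
assert all(p * p % 24 == 1 for p in range(5, 1000) if is_prime(p))

# The factorial of a natural number eventually ends in the digit 0 (holds for n greater than 4).
assert all(str(factorial(n))[-1] == "0" for n in range(5, 21))
```

Each assertion checks the property only after the finitely many exceptional cases have passed, which is exactly the sense of "eventually" described above.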
|
https://en.wikipedia.org/wiki/Eventually_(mathematics)
|
Foundations of mathematics are the logical and mathematical framework that allows the development of mathematics without generating self-contradictory theories, and to have reliable concepts of theorems, proofs, algorithms, etc. in particular. This may also include the philosophical study of the relation of this framework with reality. The term "foundations of mathematics" was not coined before the end of the 19th century, although foundations were first established by the ancient Greek philosophers under the name of Aristotle's logic and systematically applied in Euclid's Elements. A mathematical assertion is considered as truth only if it is a theorem that is proved from true premises by means of a sequence of syllogisms (inference rules), the premises being either already proved theorems or self-evident assertions called axioms or postulates. These foundations were tacitly assumed to be definitive until the introduction of infinitesimal calculus by Isaac Newton and Gottfried Wilhelm Leibniz in the 17th century. This new area of mathematics involved new methods of reasoning and new basic concepts (continuous functions, derivatives, limits) that were not well founded, but had astonishing consequences, such as the deduction from Newton's law of gravitation that the orbits of the planets are ellipses. During the 19th century, progress was made towards elaborating precise definitions of the basic concepts of infinitesimal calculus, notably the natural and real numbers. This led to a series of seemingly paradoxical mathematical results near the end of the 19th century that challenged the general confidence in the reliability and truth of mathematical results. This has been called the foundational crisis of mathematics. The resolution of this crisis involved the rise of a new mathematical discipline called mathematical logic that includes set theory, model theory, proof theory, computability and computational complexity theory, and more recently, parts of computer science. Subsequent discoveries in the 20th century then stabilized the foundations of mathematics into a coherent framework valid for all mathematics. This framework is based on a systematic use of axiomatic method and on set theory, specifically Zermelo–Fraenkel set theory with the axiom of choice. It results from this that the basic mathematical concepts, such as numbers, points, lines, and geometrical spaces are not defined as abstractions from reality but from basic properties (axioms). Their adequation with their physical origins does not belong to mathematics anymore, although their relation with reality is still used for guiding mathematical intuition: physical reality is still used by mathematicians to choose axioms, find which theorems are interesting to prove, and obtain indications of possible proofs. == Ancient Greece == Most civilisations developed some mathematics, mainly for practical purposes, such as counting (merchants), surveying (delimitation of fields), prosody, astronomy, and astrology. It seems that ancient Greek philosophers were the first to study the nature of mathematics and its relation with the real world. Zeno of Elea (c. 490 – c. 430 BC) produced several paradoxes he used to support his thesis that movement does not exist. These paradoxes involve mathematical infinity, a concept that was outside the mathematical foundations of that time and was not well understood before the end of the 19th century. 
The Pythagorean school of mathematics originally insisted that the only numbers are natural numbers and ratios of natural numbers. The discovery (c. 5th century BC) that the ratio of the diagonal of a square to its side is not the ratio of two natural numbers was a shock to them, which they only reluctantly accepted. A testimony of this is the modern terminology of irrational number for referring to a number that is not the quotient of two integers, since "irrational" originally means "not reasonable" or "not accessible with reason". The fact that length ratios are not represented by rational numbers was resolved by Eudoxus of Cnidus (408–355 BC), a student of Plato, who reduced the comparison of two irrational ratios to comparisons of integer multiples of the magnitudes involved. His method anticipated that of Dedekind cuts in the modern definition of real numbers by Richard Dedekind (1831–1916); see Eudoxus of Cnidus § Eudoxus' proportions. In the Posterior Analytics, Aristotle (384–322 BC) laid down the logic for organizing a field of knowledge by means of primitive concepts, axioms, postulates, definitions, and theorems. Aristotle took a majority of his examples for this from arithmetic and from geometry, and his logic served as the foundation of mathematics for centuries. This method resembles the modern axiomatic method but with a big philosophical difference: axioms and postulates were supposed to be true, being either self-evident or resulting from experiments, while no other truth than the correctness of the proof is involved in the axiomatic method. So, for Aristotle, a proved theorem is true, while in the axiomatic method, the proof says only that the axioms imply the statement of the theorem. Aristotle's logic reached its high point with Euclid's Elements (300 BC), a treatise on mathematics structured with very high standards of rigor: Euclid justifies each proposition by a demonstration in the form of chains of syllogisms (though they do not always conform strictly to Aristotelian templates). Aristotle's syllogistic logic, together with its exemplification by Euclid's Elements, is recognized as a scientific achievement of ancient Greece, and remained the foundation of mathematics for centuries. == Before infinitesimal calculus == During the Middle Ages, Euclid's Elements stood as a perfectly solid foundation for mathematics, and philosophy of mathematics concentrated on the ontological status of mathematical concepts; the question was whether they exist independently of perception (realism) or within the mind only (conceptualism); or even whether they are simply names of collections of individual objects (nominalism). In Elements, the only numbers that are considered are natural numbers and ratios of lengths. This geometrical view of non-integer numbers remained dominant until the end of the Middle Ages, although the rise of algebra led mathematicians to consider them independently of geometry, which implicitly implies that there are foundational primitives of mathematics. For example, the transformations of equations introduced by Al-Khwarizmi and the cubic and quartic formulas discovered in the 16th century result from algebraic manipulations that have no geometric counterpart. Nevertheless, this did not challenge the classical foundations of mathematics, since all properties of numbers that were used could be deduced from their geometrical definition.
In 1637, René Descartes published La Géométrie, in which he showed that geometry can be reduced to algebra by means of coordinates, which are numbers determining the position of a point. This gives the numbers that he called real numbers a more foundational role (before him, numbers were defined as the ratio of two lengths). Descartes' book became famous after 1649 and paved the way to infinitesimal calculus. == Infinitesimal calculus == Isaac Newton (1642–1727) in England and Leibniz (1646–1716) in Germany independently developed the infinitesimal calculus for dealing with mobile points (such as planets in the sky) and variable quantities. This required the introduction of new concepts such as continuous functions, derivatives and limits. For dealing with these concepts in a logical way, they were defined in terms of infinitesimals, hypothetical numbers that are infinitely close to zero. The strong implications of infinitesimal calculus for the foundations of mathematics are illustrated by a pamphlet by the Protestant philosopher George Berkeley (1685–1753), who wrote "[Infinitesimals] are neither finite quantities, nor quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?". Also, a lack of rigor has been frequently invoked, because infinitesimals and the associated concepts were not formally defined (lines and planes were not formally defined either, but people were more accustomed to them). Real numbers, continuous functions and derivatives were not formally defined before the 19th century, nor was Euclidean geometry. It was only in the 20th century that a formal definition of infinitesimals was given, with a proof that the whole of infinitesimal calculus can be deduced from them. Despite its lack of firm logical foundations, infinitesimal calculus was quickly adopted by mathematicians, and validated by its numerous applications; in particular the fact that planetary trajectories can be deduced from Newton's law of gravitation. == 19th century == In the 19th century, mathematics developed quickly in many directions. Several of the problems that were considered led to questions on the foundations of mathematics. Frequently, the proposed solutions led to further questions that were often simultaneously of a philosophical and a mathematical nature. All these questions led, at the end of the 19th century and the beginning of the 20th century, to debates which have been called the foundational crisis of mathematics. The following subsections describe the main such foundational problems revealed during the 19th century. === Real analysis === Cauchy (1789–1857) started the project of giving rigorous bases to infinitesimal calculus. In particular, he rejected the heuristic principle that he called the generality of algebra, which consisted of applying properties of algebraic operations to infinite sequences without proper proofs. In his Cours d'Analyse (1821), he considered very small quantities, which would presently be called "sufficiently small quantities"; that is, a sentence such as "if x is very small then ..." must be understood as "there is a (sufficiently large) natural number n such that |x| < 1/n". In his proofs he used this in a way that predated the modern (ε, δ)-definition of limit. The modern (ε, δ)-definition of limits and continuous functions was first developed by Bolzano in 1817, but it remained relatively unknown, and Cauchy probably did not know of Bolzano's work.
Karl Weierstrass (1815–1897) formalized and popularized the (ε, δ)-definition of limits, and discovered some pathological functions that seemed paradoxical at that time, such as continuous, nowhere-differentiable functions. Indeed, such functions contradict previous conceptions of a function as a rule for computation or a smooth graph. At this point, the program of arithmetization of analysis (reduction of mathematical analysis to arithmetic and algebraic operations) advocated by Weierstrass was essentially completed, except for two points. Firstly, a formal definition of real numbers was still lacking. Indeed, beginning with Richard Dedekind in 1858, several mathematicians worked on the definition of the real numbers, including Hermann Hankel, Charles Méray, and Eduard Heine, but it was only in 1872 that two independent complete definitions of real numbers were published: one by Dedekind, by means of Dedekind cuts; the other one by Georg Cantor, as equivalence classes of Cauchy sequences. Several problems were left open by these definitions, which contributed to the foundational crisis of mathematics. Firstly, both definitions suppose that rational numbers and thus natural numbers are rigorously defined; this was done a few years later with the Peano axioms. Secondly, both definitions involve infinite sets (Dedekind cuts and sets of the elements of a Cauchy sequence), and Cantor's set theory was published several years later. The third problem is more subtle, and is related to the foundations of logic: classical logic is a first-order logic; that is, quantifiers apply to variables representing individual elements, not to variables representing (infinite) sets of elements. The basic property of the completeness of the real numbers that is required for defining and using real numbers involves a quantification over infinite sets. Indeed, this property may be expressed either as "for every infinite sequence of real numbers, if it is a Cauchy sequence, it has a limit that is a real number", or as "every subset of the real numbers that is bounded has a least upper bound that is a real number". This need for quantification over infinite sets is one of the motivations for the development of higher-order logics during the first half of the 20th century. === Non-Euclidean geometries === Before the 19th century, there were many failed attempts to derive the parallel postulate from other axioms of geometry. In an attempt to prove that its negation leads to a contradiction, Johann Heinrich Lambert (1728–1777) started to build hyperbolic geometry, introduced the hyperbolic functions and computed the area of a hyperbolic triangle (where the sum of angles is less than 180°). Continuing the construction of this new geometry, several mathematicians proved independently that if it is inconsistent, then Euclidean geometry is also inconsistent, and thus that the parallel postulate cannot be proved. This was proved by Nikolai Lobachevsky in 1826, János Bolyai (1802–1860) in 1832 and Carl Friedrich Gauss (unpublished). Later in the 19th century, the German mathematician Bernhard Riemann developed elliptic geometry, another non-Euclidean geometry where no parallel can be found and the sum of angles in a triangle is more than 180°. It was proved consistent by defining points as pairs of antipodal points on a sphere (or hypersphere), and lines as great circles on the sphere.
These proofs of unprovability of the parallel postulate led to several philosophical problems, the main one being that before this discovery, the parallel postulate and all its consequences were considered as true. So, the non-Euclidean geometries challenged the concept of mathematical truth. === Synthetic vs. analytic geometry === Since the introduction of analytic geometry by René Descartes in the 17th century, there were two approaches to geometry, the old one called synthetic geometry, and the new one, where everything is specified in terms of real numbers called coordinates. Mathematicians did not worry much about the contradiction between these two approaches until the mid-nineteenth century, when there was "an acrimonious controversy between the proponents of synthetic and analytic methods in projective geometry, the two sides accusing each other of mixing projective and metric concepts". Indeed, there is no concept of distance in a projective space, and the cross-ratio, which is a number, is a basic concept of synthetic projective geometry. Karl von Staudt developed a purely geometric approach to this problem by introducing "throws" that form what is presently called a field, in which the cross ratio can be expressed. Apparently, the problem of the equivalence between the analytic and synthetic approaches was completely solved only with Emil Artin's book Geometric Algebra published in 1957. It was well known that, given a field k, one may define affine and projective spaces over k in terms of k-vector spaces. In these spaces, the Pappus hexagon theorem holds. Conversely, if the Pappus hexagon theorem is included in the axioms of a plane geometry, then one can define a field k such that the geometry is the same as the affine or projective geometry over k. === Natural numbers === The work of making real analysis and the definition of real numbers rigorous consisted of reducing everything to rational numbers and thus to natural numbers, since positive rational numbers are fractions of natural numbers. There was therefore a need for a formal definition of the natural numbers, which implies an axiomatic theory of arithmetic. This was started with Charles Sanders Peirce in 1881 and Richard Dedekind in 1888, who defined a natural number as the cardinality of a finite set. However, this involves set theory, which was not formalized at that time. Giuseppe Peano provided in 1888 a complete axiomatisation based on the ordinal property of the natural numbers. The last Peano axiom is the only one that induces logical difficulties, as it begins with either "if S is a set then" or "if φ {\displaystyle \varphi } is a predicate then". So, Peano's axioms induce a quantification over infinite sets, and this means that Peano arithmetic is formulated in what is presently called second-order logic. This was not well understood at that time, but the fact that infinity occurred in the definition of the natural numbers was a problem for many mathematicians of that time. For example, Henri Poincaré stated that axioms can only be demonstrated in their finite application, and concluded that it is "the power of the mind" which allows conceiving of the indefinite repetition of the same act. This applies in particular to the use of the last Peano axiom for showing that the successor function generates all natural numbers. Also, Leopold Kronecker said "God made the integers, all else is the work of man". This may be interpreted as "the integers cannot be mathematically defined".
=== Infinite sets === Before the second half of the 19th century, infinity was a philosophical concept that did not belong to mathematics. However, with the rise of infinitesimal calculus, mathematicians became accustomed to infinity, mainly through potential infinity, that is, as the result of an endless process, such as the definition of an infinite sequence, an infinite series or a limit. The possibility of an actual infinity was the subject of many philosophical disputes. Sets, and more specifically infinite sets, were not considered as a mathematical concept; in particular, there was no fixed term for them. A dramatic change arose with the work of Georg Cantor, who was the first mathematician to systematically study infinite sets. In particular, he introduced cardinal numbers that measure the size of infinite sets, and ordinal numbers that, roughly speaking, allow one to continue to count after having reached infinity. One of his major results is the discovery that there are strictly more real numbers than natural numbers (the cardinality of the continuum of the real numbers is greater than that of the natural numbers). These results were rejected by many mathematicians and philosophers, and led to debates that are a part of the foundational crisis of mathematics. The crisis was amplified by Russell's paradox, which asserts that the phrase "the set of all sets" is self-contradictory. This contradiction introduced a doubt about the consistency of all mathematics. With the introduction of the Zermelo–Fraenkel set theory (c. 1925) and its adoption by the mathematical community, the doubt about the consistency was essentially removed, although the consistency of set theory cannot be proved, because of Gödel's incompleteness theorem. === Mathematical logic === In 1847, De Morgan published his laws and George Boole devised an algebra, now called Boolean algebra, that allows expressing Aristotle's logic in terms of formulas and algebraic operations. Boolean algebra is the starting point of the mathematization of logic and the basis of propositional calculus. Independently, in the 1870s, Charles Sanders Peirce and Gottlob Frege extended propositional calculus by introducing quantifiers, thus building predicate logic. Frege pointed out three desired properties of a logical theory: consistency (impossibility of proving contradictory statements), completeness (any statement is either provable or refutable; that is, its negation is provable), and decidability (there is a decision procedure to test every statement). Near the turn of the century, Bertrand Russell popularized Frege's work and discovered Russell's paradox, which implies that the phrase "the set of all sets" is self-contradictory. This paradox seemed to make the whole of mathematics inconsistent and is one of the major causes of the foundational crisis of mathematics.
Other philosophical problems were the proof of the existence of mathematical objects that cannot be computed or explicitly described, and the proof of the existence of theorems of arithmetic that cannot be proved with Peano arithmetic. Several schools of philosophy of mathematics were challenged by these problems in the 20th century, and are described below. These problems were also studied by mathematicians, and this led to the establishment of mathematical logic as a new area of mathematics, consisting of providing mathematical definitions to logics (sets of inference rules), mathematical and logical theories, theorems, and proofs, and of using mathematical methods to prove theorems about these concepts. This led to unexpected results, such as Gödel's incompleteness theorems, which, roughly speaking, assert that, if a theory contains the standard arithmetic, it cannot be used to prove that it itself is not self-contradictory; and, if it is not self-contradictory, there are theorems that cannot be proved inside the theory, but are nevertheless true in some technical sense. Zermelo–Fraenkel set theory with the axiom of choice (ZFC) is a logical theory established by Ernst Zermelo and Abraham Fraenkel. It became the standard foundation of modern mathematics, and, unless the contrary is explicitly specified, it is used in all modern mathematical texts, generally implicitly. Simultaneously, the axiomatic method became a de facto standard: the proof of a theorem must result from explicit axioms and previously proved theorems by the application of clearly defined inference rules. The axioms need not correspond to some reality. Nevertheless, it is an open philosophical problem to explain why the axiom systems that lead to rich and useful theories are those resulting from abstraction from physical reality or from other mathematical theories. In summary, the foundational crisis is essentially resolved, and this opens new philosophical problems. In particular, it cannot be proved that the new foundation (ZFC) is not self-contradictory. There is a general consensus that, if a contradiction were discovered, the problem could be solved by a mild modification of ZFC. === Philosophical views === When the foundational crisis arose, there was much debate among mathematicians and logicians about what should be done to restore confidence in mathematics. This involved philosophical questions about mathematical truth, the relationship of mathematics with reality, the reality of mathematical objects, and the nature of mathematics. For the problem of foundations, there were two main options for trying to avoid paradoxes. The first one led to intuitionism and constructivism, and consisted of restricting the logical rules in order to remain closer to intuition, while the second, which has been called formalism, considers that a theorem is true if it can be deduced from axioms by applying inference rules (formal proof), and that no "truth" of the axioms is needed for the validity of a theorem. ==== Formalism ==== It has been claimed that formalists, such as David Hilbert (1862–1943), hold that mathematics is only a language and a series of games. Hilbert insisted that formalism, called "formula game" by him, is a fundamental part of mathematics, but that mathematics must not be reduced to formalism. Indeed, he used the words "formula game" in his 1927 response to L. E. J. Brouwer's criticisms: And to what extent has the formula game thus made possible been successful?
This formula game enables us to express the entire thought-content of the science of mathematics in a uniform manner and develop it in such a way that, at the same time, the interconnections between the individual propositions and facts become clear ... The formula game that Brouwer so deprecates has, besides its mathematical value, an important general philosophical significance. For this formula game is carried out according to certain definite rules, in which the technique of our thinking is expressed. These rules form a closed system that can be discovered and definitively stated. Thus Hilbert is insisting that mathematics is not an arbitrary game with arbitrary rules; rather it must agree with how our thinking, and then our speaking and writing, proceeds. We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise. The foundational philosophy of formalism, as exemplified by David Hilbert, is a response to the paradoxes of set theory, and is based on formal logic. Virtually all mathematical theorems today can be formulated as theorems of set theory. The truth of a mathematical statement, in this view, is represented by the fact that the statement can be derived from the axioms of set theory using the rules of formal logic. Merely the use of formalism alone does not explain several issues: why we should use the axioms we do and not some others, why we should employ the logical rules we do and not some others, why "true" mathematical statements (e.g., the laws of arithmetic) appear to be true, and so on. Hermann Weyl posed these very questions to Hilbert: What "truth" or objectivity can be ascribed to this theoretic construction of the world, which presses far beyond the given, is a profound philosophical problem. It is closely connected with the further question: what impels us to take as a basis precisely the particular axiom system developed by Hilbert? Consistency is indeed a necessary but not a sufficient condition. For the time being we probably cannot answer this question ... In some cases these questions may be sufficiently answered through the study of formal theories, in disciplines such as reverse mathematics and computational complexity theory. As noted by Weyl, formal logical systems also run the risk of inconsistency; in Peano arithmetic, this arguably has already been settled with several proofs of consistency, but there is debate over whether or not they are sufficiently finitary to be meaningful. Gödel's second incompleteness theorem establishes that logical systems of arithmetic can never contain a valid proof of their own consistency. What Hilbert wanted to do was prove a logical system S was consistent, based on principles P that only made up a small part of S. But Gödel proved that the principles P could not even prove P to be consistent, let alone S. ==== Intuitionism ==== Intuitionists, such as L. E. J. Brouwer (1882–1966), hold that mathematics is a creation of the human mind. Numbers, like fairy tale characters, are merely mental entities, which would not exist if there were never any human minds to think about them. 
The foundational philosophy of intuitionism or constructivism, as exemplified in the extreme by Brouwer and Stephen Kleene, requires proofs to be "constructive" in nature – the existence of an object must be demonstrated rather than inferred from a demonstration of the impossibility of its non-existence. For example, as a consequence of this the form of proof known as reductio ad absurdum is suspect. Some modern theories in the philosophy of mathematics deny the existence of foundations in the original sense. Some theories tend to focus on mathematical practice, and aim to describe and analyze the actual working of mathematicians as a social group. Others try to create a cognitive science of mathematics, focusing on human cognition as the origin of the reliability of mathematics when applied to the real world. These theories would propose to find foundations only in human thought, not in any objective outside construct. The matter remains controversial. ==== Logicism ==== Logicism is a school of thought, and research programme, in the philosophy of mathematics, based on the thesis that mathematics is an extension of logic or that some or all mathematics may be derived in a suitable formal system whose axioms and rules of inference are 'logical' in nature. Bertrand Russell and Alfred North Whitehead championed this theory initiated by Gottlob Frege and influenced by Richard Dedekind. ==== Set-theoretic Platonism ==== Many researchers in axiomatic set theory have subscribed to what is known as set-theoretic Platonism, exemplified by Kurt Gödel. Several set theorists followed this approach and actively searched for axioms that may be considered as true for heuristic reasons and that would decide the continuum hypothesis. Many large cardinal axioms were studied, but the hypothesis always remained independent from them and it is now considered unlikely that CH can be resolved by a new large cardinal axiom. Other types of axioms were considered, but none of them has reached consensus on the continuum hypothesis yet. Recent work by Hamkins proposes a more flexible alternative: a set-theoretic multiverse allowing free passage between set-theoretic universes that satisfy the continuum hypothesis and other universes that do not. ==== Indispensability argument for realism ==== This argument by Willard Quine and Hilary Putnam says (in Putnam's shorter words), ... quantification over mathematical entities is indispensable for science ... therefore we should accept such quantification; but this commits us to accepting the existence of the mathematical entities in question. However, Putnam was not a Platonist. ==== Rough-and-ready realism ==== Few mathematicians are typically concerned on a daily, working basis over logicism, formalism or any other philosophical position. Instead, their primary concern is that the mathematical enterprise as a whole always remains productive. Typically, they see this as ensured by remaining open-minded, practical and busy; as potentially threatened by becoming overly-ideological, fanatically reductionistic or lazy. Such a view has also been expressed by some well-known physicists. For example, the Physics Nobel Prize laureate Richard Feynman said People say to me, "Are you looking for the ultimate laws of physics?" No, I'm not ... If it turns out there is a simple ultimate law which explains everything, so be it – that would be very nice to discover. If it turns out it's like an onion with millions of layers ... then that's the way it is. 
But either way there's Nature and she's going to come out the way She is. So therefore when we go to investigate we shouldn't predecide what it is we're looking for only to find out more about it. And Steven Weinberg: The insights of philosophers have occasionally benefited physicists, but generally in a negative fashion – by protecting them from the preconceptions of other philosophers. ... without some guidance from our preconceptions one could do nothing at all. It is just that philosophical principles have not generally provided us with the right preconceptions. Weinberg believed that any undecidability in mathematics, such as the continuum hypothesis, could be potentially resolved despite the incompleteness theorem, by finding suitable further axioms to add to set theory. ==== Philosophical consequences of Gödel's completeness theorem ==== Gödel's completeness theorem establishes an equivalence in first-order logic between the formal provability of a formula and its truth in all possible models. Precisely, for any consistent first-order theory it gives an "explicit construction" of a model described by the theory; this model will be countable if the language of the theory is countable. However this "explicit construction" is not algorithmic. It is based on an iterative process of completion of the theory, where each step of the iteration consists in adding a formula to the axioms if it keeps the theory consistent; but this consistency question is only semi-decidable (an algorithm is available to find any contradiction but if there is none this consistency fact can remain unprovable). === More paradoxes === The following lists some notable results in metamathematics. Zermelo–Fraenkel set theory is the most widely studied axiomatization of set theory. It is abbreviated ZFC when it includes the axiom of choice and ZF when the axiom of choice is excluded. 1920: Thoralf Skolem corrected Leopold Löwenheim's proof of what is now called the downward Löwenheim–Skolem theorem, leading to Skolem's paradox discussed in 1922, namely the existence of countable models of ZF, making infinite cardinalities a relative property. 1922: Proof by Abraham Fraenkel that the axiom of choice cannot be proved from the axioms of Zermelo set theory with urelements. 1931: Publication of Gödel's incompleteness theorems, showing that essential aspects of Hilbert's program could not be attained. It showed how to construct, for any sufficiently powerful and consistent recursively axiomatizable system – such as necessary to axiomatize the elementary theory of arithmetic on the (infinite) set of natural numbers – a statement that formally expresses its own unprovability, which he then proved equivalent to the claim of consistency of the theory; so that (assuming the consistency as true), the system is not powerful enough for proving its own consistency, let alone that a simpler system could do the job. It thus became clear that the notion of mathematical truth cannot be completely determined and reduced to a purely formal system as envisaged in Hilbert's program. This dealt a final blow to the heart of Hilbert's program, the hope that consistency could be established by finitistic means (it was never made clear exactly what axioms were the "finitistic" ones, but whatever axiomatic system was being referred to, it was a 'weaker' system than the system whose consistency it was supposed to prove). 1936: Alfred Tarski proved his truth undefinability theorem. 
1936: Alan Turing proved that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. 1938: Gödel proved the consistency of the axiom of choice and of the generalized continuum hypothesis. 1936–1937: Alonzo Church and Alan Turing, respectively, published independent papers showing that a general solution to the Entscheidungsproblem is impossible: the universal validity of statements in first-order logic is not decidable (it is only semi-decidable as given by the completeness theorem). 1955: Pyotr Novikov showed that there exists a finitely presented group G such that the word problem for G is undecidable. 1963: Paul Cohen showed that the Continuum Hypothesis is unprovable from ZFC. Cohen's proof developed the method of forcing, which is now an important tool for establishing independence results in set theory. 1964: Inspired by the fundamental randomness in physics, Gregory Chaitin starts publishing results on algorithmic information theory (measuring incompleteness and randomness in mathematics). 1966: Paul Cohen showed that the axiom of choice is unprovable in ZF even without urelements. 1970: Hilbert's tenth problem is proven unsolvable: there is no recursive solution to decide whether a Diophantine equation (multivariable polynomial equation) has a solution in integers. 1971: Suslin's problem is proven to be independent from ZFC. == Toward resolution of the crisis == Starting in 1935, the Bourbaki group of French mathematicians started publishing a series of books to formalize many areas of mathematics on the new foundation of set theory. The intuitionistic school did not attract many adherents, and it was not until Bishop's work in 1967 that constructive mathematics was placed on a sounder footing. One may consider that Hilbert's program has been partially completed, so that the crisis is essentially resolved, satisfying ourselves with lower requirements than Hilbert's original ambitions. His ambitions were expressed in a time when nothing was clear: it was not clear whether mathematics could have a rigorous foundation at all. There are many possible variants of set theory, which differ in consistency strength, where stronger versions (postulating higher types of infinities) contain formal proofs of the consistency of weaker versions, but none contains a formal proof of its own consistency. Thus the only thing we do not have is a formal proof of consistency of whatever version of set theory we may prefer, such as ZF. In practice, most mathematicians either do not work from axiomatic systems, or if they do, do not doubt the consistency of ZFC, generally their preferred axiomatic system. In most of mathematics as it is practiced, the incompleteness and paradoxes of the underlying formal theories never played a role anyway, and in those branches in which they do or whose formalization attempts would run the risk of forming inconsistent theories (such as logic and category theory), they may be treated carefully. The development of category theory in the middle of the 20th century showed the usefulness of set theories guaranteeing the existence of larger classes than does ZFC, such as Von Neumann–Bernays–Gödel set theory or Tarski–Grothendieck set theory, albeit that in very many cases the use of large cardinal axioms or Grothendieck universes is formally eliminable. One goal of the reverse mathematics program is to identify whether there are areas of "core mathematics" in which foundational issues may again provoke a crisis. 
== See also == Aristotelian realist philosophy of mathematics Mathematical logic Brouwer–Hilbert controversy Church–Turing thesis Controversy over Cantor's theory Epistemology Euclid's Elements Hilbert's problems Implementation of mathematics in set theory Liar paradox New Foundations Philosophy of mathematics Principia Mathematica Quasi-empiricism in mathematics Mathematical thought of Charles Peirce == Notes == == References == Avigad, Jeremy (2003) Number theory and elementary arithmetic, Philosophia Mathematica Vol. 11, pp. 257–284 Eves, Howard (1990), Foundations and Fundamental Concepts of Mathematics Third Edition, Dover Publications, INC, Mineola NY, ISBN 0-486-69609-X (pbk.) cf §9.5 Philosophies of Mathematics pp. 266–271. Eves lists the three with short descriptions prefaced by a brief introduction. Goodman, N.D. (1979), "Mathematics as an Objective Science", in Tymoczko (ed., 1986). Hart, W.D. (ed., 1996), The Philosophy of Mathematics, Oxford University Press, Oxford, UK. Hersh, R. (1979), "Some Proposals for Reviving the Philosophy of Mathematics", in (Tymoczko 1986). Hilbert, D. (1922), "Neubegründung der Mathematik. Erste Mitteilung", Hamburger Mathematische Seminarabhandlungen 1, 157–177. Translated, "The New Grounding of Mathematics. First Report", in (Mancosu 1998). Katz, Robert (1964), Axiomatic Analysis, D. C. Heath and Company. Kleene, Stephen C. (1991) [1952]. Introduction to Meta-Mathematics (Tenth impression 1991 ed.). Amsterdam NY: North-Holland Pub. Co. ISBN 0-7204-2103-9. In Chapter III A Critique of Mathematic Reasoning, §11. The paradoxes, Kleene discusses Intuitionism and Formalism in depth. Throughout the rest of the book he treats, and compares, both Formalist (classical) and Intuitionist logics with an emphasis on the former. Extraordinary writing by an extraordinary mathematician. Mancosu, P. (ed., 1998), From Hilbert to Brouwer. The Debate on the Foundations of Mathematics in the 1920s, Oxford University Press, Oxford, UK. Putnam, Hilary (1967), "Mathematics Without Foundations", Journal of Philosophy 64/1, 5–22. Reprinted, pp. 168–184 in W.D. Hart (ed., 1996). —, "What is Mathematical Truth?", in Tymoczko (ed., 1986). Sudac, Olivier (Apr 2001). "The prime number theorem is PRA-provable". Theoretical Computer Science. 257 (1–2): 185–239. doi:10.1016/S0304-3975(00)00116-X. Troelstra, A. S. (no date but later than 1990), "A History of Constructivism in the 20th Century", A detailed survey for specialists: §1 Introduction, §2 Finitism & §2.2 Actualism, §3 Predicativism and Semi-Intuitionism, §4 Brouwerian Intuitionism, §5 Intuitionistic Logic and Arithmetic, §6 Intuitionistic Analysis and Stronger Theories, §7 Constructive Recursive Mathematics, §8 Bishop's Constructivism, §9 Concluding Remarks. Approximately 80 references. Tymoczko, T. (1986), "Challenging Foundations", in Tymoczko (ed., 1986). —,(ed., 1986), New Directions in the Philosophy of Mathematics, 1986. Revised edition, 1998. van Dalen D. (2008), "Brouwer, Luitzen Egbertus Jan (1881–1966)", in Biografisch Woordenboek van Nederland. URL:http://www.inghist.nl/Onderzoek/Projecten/BWN/lemmata/bwn2/brouwerle [2008-03-13] Weyl, H. (1921), "Über die neue Grundlagenkrise der Mathematik", Mathematische Zeitschrift 10, 39–79. Translated, "On the New Foundational Crisis of Mathematics", in (Mancosu 1998). Wilder, Raymond L. (1952), Introduction to the Foundations of Mathematics, John Wiley and Sons, New York, NY. 
== External links == Media related to Foundations of mathematics at Wikimedia Commons "Philosophy of mathematics". Internet Encyclopedia of Philosophy. Logic and Mathematics Harvey M. Friedman, Foundations of Mathematics: past, present, and future, May 31, 2000, 8 pages. A Century of Controversy over the Foundations of Mathematics by Gregory Chaitin.
https://en.wikipedia.org/wiki/Foundations_of_mathematics
In mathematics, a group is a set with a binary operation that satisfies the following constraints: the operation is associative, it has an identity element, and every element of the set has an inverse element. Many mathematical structures are groups endowed with other properties. For example, the integers with the addition operation form an infinite group that is generated by a single element called 1 {\displaystyle 1} (these properties fully characterize the integers). The concept of a group was elaborated for handling, in a unified way, many mathematical structures such as numbers, geometric shapes and polynomial roots. Because the concept of groups is ubiquitous in numerous areas both within and outside mathematics, some authors consider it as a central organizing principle of contemporary mathematics. In geometry, groups arise naturally in the study of symmetries and geometric transformations: The symmetries of an object form a group, called the symmetry group of the object, and the transformations of a given type form a general group. Lie groups appear in symmetry groups in geometry, and also in the Standard Model of particle physics. The Poincaré group is a Lie group consisting of the symmetries of spacetime in special relativity. Point groups describe symmetry in molecular chemistry. The concept of a group arose in the study of polynomial equations, starting with Évariste Galois in the 1830s, who introduced the term group (French: groupe) for the symmetry group of the roots of an equation, now called a Galois group. After contributions from other fields such as number theory and geometry, the group notion was generalized and firmly established around 1870. Modern group theory—an active mathematical discipline—studies groups in their own right. To explore groups, mathematicians have devised various notions to break groups into smaller, better-understandable pieces, such as subgroups, quotient groups and simple groups. In addition to their abstract properties, group theorists also study the different ways in which a group can be expressed concretely, both from a point of view of representation theory (that is, through the representations of the group) and of computational group theory. A theory has been developed for finite groups, which culminated with the classification of finite simple groups, completed in 2004. Since the mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become an active area in group theory. == Definition and illustration == === First example: the integers === One of the more familiar groups is the set of integers Z = { … , − 4 , − 3 , − 2 , − 1 , 0 , 1 , 2 , 3 , 4 , … } {\displaystyle \mathbb {Z} =\{\ldots ,-4,-3,-2,-1,0,1,2,3,4,\ldots \}} together with addition. For any two integers a {\displaystyle a} and b {\displaystyle b} , the sum a + b {\displaystyle a+b} is also an integer; this closure property says that + {\displaystyle +} is a binary operation on Z {\displaystyle \mathbb {Z} } . The following properties of integer addition serve as a model for the group axioms in the definition below. For all integers a {\displaystyle a} , b {\displaystyle b} and c {\displaystyle c} , one has ( a + b ) + c = a + ( b + c ) {\displaystyle (a+b)+c=a+(b+c)} . Expressed in words, adding a {\displaystyle a} to b {\displaystyle b} first, and then adding the result to c {\displaystyle c} gives the same final result as adding a {\displaystyle a} to the sum of b {\displaystyle b} and c {\displaystyle c} . 
This property is known as associativity. If a {\displaystyle a} is any integer, then 0 + a = a {\displaystyle 0+a=a} and a + 0 = a {\displaystyle a+0=a} . Zero is called the identity element of addition because adding it to any integer returns the same integer. For every integer a {\displaystyle a} , there is an integer b {\displaystyle b} such that a + b = 0 {\displaystyle a+b=0} and b + a = 0 {\displaystyle b+a=0} . The integer b {\displaystyle b} is called the inverse element of the integer a {\displaystyle a} and is denoted − a {\displaystyle -a} . The integers, together with the operation + {\displaystyle +} , form a mathematical object belonging to a broad class sharing similar structural aspects. To appropriately understand these structures as a collective, the following definition is developed. === Definition === A group is a non-empty set G {\displaystyle G} together with a binary operation on G {\displaystyle G} , here denoted " ⋅ {\displaystyle \cdot } ", that combines any two elements a {\displaystyle a} and b {\displaystyle b} of G {\displaystyle G} to form an element of G {\displaystyle G} , denoted a ⋅ b {\displaystyle a\cdot b} , such that the following three requirements, known as group axioms, are satisfied: Associativity For all a {\displaystyle a} , b {\displaystyle b} , c {\displaystyle c} in G {\displaystyle G} , one has ( a ⋅ b ) ⋅ c = a ⋅ ( b ⋅ c ) {\displaystyle (a\cdot b)\cdot c=a\cdot (b\cdot c)} . Identity element There exists an element e {\displaystyle e} in G {\displaystyle G} such that, for every a {\displaystyle a} in G {\displaystyle G} , one has e ⋅ a = a {\displaystyle e\cdot a=a} and a ⋅ e = a {\displaystyle a\cdot e=a} . Such an element is unique (see below). It is called the identity element (or sometimes neutral element) of the group. Inverse element For each a {\displaystyle a} in G {\displaystyle G} , there exists an element b {\displaystyle b} in G {\displaystyle G} such that a ⋅ b = e {\displaystyle a\cdot b=e} and b ⋅ a = e {\displaystyle b\cdot a=e} , where e {\displaystyle e} is the identity element. For each a {\displaystyle a} , the element b {\displaystyle b} is unique (see below); it is called the inverse of a {\displaystyle a} and is commonly denoted a − 1 {\displaystyle a^{-1}} . === Notation and terminology === Formally, a group is an ordered pair of a set and a binary operation on this set that satisfies the group axioms. The set is called the underlying set of the group, and the operation is called the group operation or the group law. A group and its underlying set are thus two different mathematical objects. To avoid cumbersome notation, it is common to abuse notation by using the same symbol to denote both. This reflects also an informal way of thinking: that the group is the same as the set except that it has been enriched by additional structure provided by the operation. For example, consider the set of real numbers R {\displaystyle \mathbb {R} } , which has the operations of addition a + b {\displaystyle a+b} and multiplication a b {\displaystyle ab} . Formally, R {\displaystyle \mathbb {R} } is a set, ( R , + ) {\displaystyle (\mathbb {R} ,+)} is a group, and ( R , + , ⋅ ) {\displaystyle (\mathbb {R} ,+,\cdot )} is a field. But it is common to write R {\displaystyle \mathbb {R} } to denote any of these three objects. The additive group of the field R {\displaystyle \mathbb {R} } is the group whose underlying set is R {\displaystyle \mathbb {R} } and whose operation is addition. 
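As a concrete computational illustration of this definition, the following Python sketch checks the group axioms by brute force for a small finite set given together with its operation; the function name is_group is an ad hoc choice for the sketch, not part of any standard library, and the approach is only feasible for small examples such as addition modulo 4.

# Sketch: brute-force check of closure, associativity, identity and inverses.
def is_group(elements, op):
    elements = list(elements)
    # Closure: the operation must send any pair of elements back into the set.
    if any(op(a, b) not in elements for a in elements for b in elements):
        return False
    # Associativity: (a op b) op c must equal a op (b op c) for every triple.
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in elements for b in elements for c in elements):
        return False
    # Identity: some e satisfies e op a == a op e == a for every a.
    identities = [e for e in elements
                  if all(op(e, a) == a and op(a, e) == a for a in elements)]
    if not identities:
        return False
    e = identities[0]
    # Inverses: every a has some b with a op b == b op a == e.
    return all(any(op(a, b) == e and op(b, a) == e for b in elements)
               for a in elements)

print(is_group(range(4), lambda a, b: (a + b) % 4))   # True: (Z/4Z, +) is a group
print(is_group(range(4), lambda a, b: (a * b) % 4))   # False: 0 and 2 have no multiplicative inverse

The second call fails because the set {0, 1, 2, 3} under multiplication modulo 4 satisfies closure, associativity and the identity axiom, but not the inverse axiom.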
The multiplicative group of the field R {\displaystyle \mathbb {R} } is the group R × {\displaystyle \mathbb {R} ^{\times }} whose underlying set is the set of nonzero real numbers R ∖ { 0 } {\displaystyle \mathbb {R} \smallsetminus \{0\}} and whose operation is multiplication. More generally, one speaks of an additive group whenever the group operation is notated as addition; in this case, the identity is typically denoted 0 {\displaystyle 0} , and the inverse of an element x {\displaystyle x} is denoted − x {\displaystyle -x} . Similarly, one speaks of a multiplicative group whenever the group operation is notated as multiplication; in this case, the identity is typically denoted 1 {\displaystyle 1} , and the inverse of an element x {\displaystyle x} is denoted x − 1 {\displaystyle x^{-1}} . In a multiplicative group, the operation symbol is usually omitted entirely, so that the operation is denoted by juxtaposition, a b {\displaystyle ab} instead of a ⋅ b {\displaystyle a\cdot b} . The definition of a group does not require that a ⋅ b = b ⋅ a {\displaystyle a\cdot b=b\cdot a} for all elements a {\displaystyle a} and b {\displaystyle b} in G {\displaystyle G} . If this additional condition holds, then the operation is said to be commutative, and the group is called an abelian group. It is a common convention that for an abelian group either additive or multiplicative notation may be used, but for a nonabelian group only multiplicative notation is used. Several other notations are commonly used for groups whose elements are not numbers. For a group whose elements are functions, the operation is often function composition f ∘ g {\displaystyle f\circ g} ; then the identity may be denoted id. In the more specific cases of geometric transformation groups, symmetry groups, permutation groups, and automorphism groups, the symbol ∘ {\displaystyle \circ } is often omitted, as for multiplicative groups. Many other variants of notation may be encountered. === Second example: a symmetry group === Two figures in the plane are congruent if one can be changed into the other using a combination of rotations, reflections, and translations. Any figure is congruent to itself. However, some figures are congruent to themselves in more than one way, and these extra congruences are called symmetries. A square has eight symmetries. These are: the identity operation leaving everything unchanged, denoted id; rotations of the square around its center by 90°, 180°, and 270° clockwise, denoted by r 1 {\displaystyle r_{1}} , r 2 {\displaystyle r_{2}} and r 3 {\displaystyle r_{3}} , respectively; reflections about the horizontal and vertical middle line ( f v {\displaystyle f_{\mathrm {v} }} and f h {\displaystyle f_{\mathrm {h} }} ), or through the two diagonals ( f d {\displaystyle f_{\mathrm {d} }} and f c {\displaystyle f_{\mathrm {c} }} ). These symmetries are functions. Each sends a point in the square to the corresponding point under the symmetry. For example, r 1 {\displaystyle r_{1}} sends a point to its rotation 90° clockwise around the square's center, and f h {\displaystyle f_{\mathrm {h} }} sends a point to its reflection across the square's vertical middle line. Composing two of these symmetries gives another symmetry. These symmetries determine a group called the dihedral group of degree four, denoted D 4 {\displaystyle \mathrm {D} _{4}} . The underlying set of the group is the above set of symmetries, and the group operation is function composition. 
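For readers who wish to experiment, the eight symmetries can be modelled in Python as permutations of the square's corner indices 0, 1, 2, 3; this encoding and the helper names below are choices made for the sketch only, not notation used elsewhere in the article.

# Sketch: each symmetry of the square is a permutation of the corner indices
# 0, 1, 2, 3 (listed clockwise); the group operation is composition.
def compose(b, a):                 # "b after a": apply a first, then b, matching b ∘ a
    return tuple(b[a[i]] for i in range(4))

identity = (0, 1, 2, 3)
r1 = (1, 2, 3, 0)                  # rotation by 90° clockwise
f  = (1, 0, 3, 2)                  # one of the reflections

# Closure computation: compose the generators until nothing new appears.
symmetries = {identity}
frontier = {r1, f}
while frontier:
    new = {compose(a, b) for a in symmetries | frontier for b in symmetries | frontier}
    frontier = new - symmetries
    symmetries |= new

print(len(symmetries))                      # 8, the order of D4
print(compose(r1, f) == compose(f, r1))     # False: composing in the other order gives a different symmetry

The last line already hints at the non-commutativity of this group, which is discussed at the end of this section.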
Two symmetries are combined by composing them as functions, that is, applying the first one to the square, and the second one to the result of the first application. The result of performing first a {\displaystyle a} and then b {\displaystyle b} is written symbolically from right to left as b ∘ a {\displaystyle b\circ a} ("apply the symmetry b {\displaystyle b} after performing the symmetry a {\displaystyle a} "). This is the usual notation for composition of functions. A Cayley table lists the results of all such compositions possible. For example, rotating by 270° clockwise ( r 3 {\displaystyle r_{3}} ) and then reflecting horizontally ( f h {\displaystyle f_{\mathrm {h} }} ) is the same as performing a reflection along the diagonal ( f d {\displaystyle f_{\mathrm {d} }} ). Using the above symbols, highlighted in blue in the Cayley table: f h ∘ r 3 = f d . {\displaystyle f_{\mathrm {h} }\circ r_{3}=f_{\mathrm {d} }.} Given this set of symmetries and the described operation, the group axioms can be understood as follows. Binary operation: Composition is a binary operation. That is, a ∘ b {\displaystyle a\circ b} is a symmetry for any two symmetries a {\displaystyle a} and b {\displaystyle b} . For example, r 3 ∘ f h = f c , {\displaystyle r_{3}\circ f_{\mathrm {h} }=f_{\mathrm {c} },} that is, rotating 270° clockwise after reflecting horizontally equals reflecting along the counter-diagonal ( f c {\displaystyle f_{\mathrm {c} }} ). Indeed, every other combination of two symmetries still gives a symmetry, as can be checked using the Cayley table. Associativity: The associativity axiom deals with composing more than two symmetries: Starting with three elements a {\displaystyle a} , b {\displaystyle b} and c {\displaystyle c} of D 4 {\displaystyle \mathrm {D} _{4}} , there are two possible ways of using these three symmetries in this order to determine a symmetry of the square. One of these ways is to first compose a {\displaystyle a} and b {\displaystyle b} into a single symmetry, then to compose that symmetry with c {\displaystyle c} . The other way is to first compose b {\displaystyle b} and c {\displaystyle c} , then to compose the resulting symmetry with a {\displaystyle a} . These two ways must give always the same result, that is, ( a ∘ b ) ∘ c = a ∘ ( b ∘ c ) , {\displaystyle (a\circ b)\circ c=a\circ (b\circ c),} For example, ( f d ∘ f v ) ∘ r 2 = f d ∘ ( f v ∘ r 2 ) {\displaystyle (f_{\mathrm {d} }\circ f_{\mathrm {v} })\circ r_{2}=f_{\mathrm {d} }\circ (f_{\mathrm {v} }\circ r_{2})} can be checked using the Cayley table: ( f d ∘ f v ) ∘ r 2 = r 3 ∘ r 2 = r 1 f d ∘ ( f v ∘ r 2 ) = f d ∘ f h = r 1 . {\displaystyle {\begin{aligned}(f_{\mathrm {d} }\circ f_{\mathrm {v} })\circ r_{2}&=r_{3}\circ r_{2}=r_{1}\\f_{\mathrm {d} }\circ (f_{\mathrm {v} }\circ r_{2})&=f_{\mathrm {d} }\circ f_{\mathrm {h} }=r_{1}.\end{aligned}}} Identity element: The identity element is i d {\displaystyle \mathrm {id} } , as it does not change any symmetry a {\displaystyle a} when composed with it either on the left or on the right. Inverse element: Each symmetry has an inverse: i d {\displaystyle \mathrm {id} } , the reflections f h {\displaystyle f_{\mathrm {h} }} , f v {\displaystyle f_{\mathrm {v} }} , f d {\displaystyle f_{\mathrm {d} }} , f c {\displaystyle f_{\mathrm {c} }} and the 180° rotation r 2 {\displaystyle r_{2}} are their own inverse, because performing them twice brings the square back to its original orientation. 
The rotations r 3 {\displaystyle r_{3}} and r 1 {\displaystyle r_{1}} are each other's inverses, because rotating 90° and then rotating 270° (or vice versa) yields a rotation through 360°, which leaves the square unchanged. This is easily verified on the table. In contrast to the group of integers above, where the order of the operation is immaterial, it does matter in D 4 {\displaystyle \mathrm {D} _{4}} , as, for example, f h ∘ r 1 = f c {\displaystyle f_{\mathrm {h} }\circ r_{1}=f_{\mathrm {c} }} but r 1 ∘ f h = f d {\displaystyle r_{1}\circ f_{\mathrm {h} }=f_{\mathrm {d} }} . In other words, D 4 {\displaystyle \mathrm {D} _{4}} is not abelian. == History == The modern concept of an abstract group developed out of several fields of mathematics. The original motivation for group theory was the quest for solutions of polynomial equations of degree higher than 4. The 19th-century French mathematician Évariste Galois, extending prior work of Paolo Ruffini and Joseph-Louis Lagrange, gave a criterion for the solvability of a particular polynomial equation in terms of the symmetry group of its roots (solutions). The elements of such a Galois group correspond to certain permutations of the roots. At first, Galois's ideas were rejected by his contemporaries, and published only posthumously. More general permutation groups were investigated in particular by Augustin Louis Cauchy. Arthur Cayley's On the theory of groups, as depending on the symbolic equation θ n = 1 {\displaystyle \theta ^{n}=1} (1854) gives the first abstract definition of a finite group. Geometry was a second field in which groups were used systematically, especially symmetry groups as part of Felix Klein's 1872 Erlangen program. After novel geometries such as hyperbolic and projective geometry had emerged, Klein used group theory to organize them in a more coherent way. Further advancing these ideas, Sophus Lie founded the study of Lie groups in 1884. The third field contributing to group theory was number theory. Certain abelian group structures had been used implicitly in Carl Friedrich Gauss's number-theoretical work Disquisitiones Arithmeticae (1798), and more explicitly by Leopold Kronecker. In 1847, Ernst Kummer made early attempts to prove Fermat's Last Theorem by developing groups describing factorization into prime numbers. The convergence of these various sources into a uniform theory of groups started with Camille Jordan's Traité des substitutions et des équations algébriques (1870). Walther von Dyck (1882) introduced the idea of specifying a group by means of generators and relations, and was also the first to give an axiomatic definition of an "abstract group", in the terminology of the time. In the 20th century, groups gained wide recognition through the pioneering work of Ferdinand Georg Frobenius and William Burnside (who worked on representation theory of finite groups), Richard Brauer's modular representation theory and Issai Schur's papers. The theory of Lie groups, and more generally of locally compact groups, was studied by Hermann Weyl, Élie Cartan and many others. Its algebraic counterpart, the theory of algebraic groups, was first shaped by Claude Chevalley (from the late 1930s) and later by the work of Armand Borel and Jacques Tits. The University of Chicago's 1960–61 Group Theory Year brought together group theorists such as Daniel Gorenstein, John G. 
Thompson and Walter Feit, laying the foundation of a collaboration that, with input from numerous other mathematicians, led to the classification of finite simple groups, with the final step taken by Aschbacher and Smith in 2004. This project exceeded previous mathematical endeavours by its sheer size, in both length of proof and number of researchers. Research concerning this classification proof is ongoing. Group theory remains a highly active mathematical branch, impacting many other fields, as the examples below illustrate. == Elementary consequences of the group axioms == Basic facts about all groups that can be obtained directly from the group axioms are commonly subsumed under elementary group theory. For example, repeated applications of the associativity axiom show that the unambiguity of a ⋅ b ⋅ c = ( a ⋅ b ) ⋅ c = a ⋅ ( b ⋅ c ) {\displaystyle a\cdot b\cdot c=(a\cdot b)\cdot c=a\cdot (b\cdot c)} generalizes to more than three factors. Because this implies that parentheses can be inserted anywhere within such a series of terms, parentheses are usually omitted. === Uniqueness of identity element === The group axioms imply that the identity element is unique; that is, there exists only one identity element: any two identity elements e {\displaystyle e} and f {\displaystyle f} of a group are equal, because the group axioms imply e = e ⋅ f = f {\displaystyle e=e\cdot f=f} . It is thus customary to speak of the identity element of the group. === Uniqueness of inverses === The group axioms also imply that the inverse of each element is unique. Let a group element a {\displaystyle a} have both b {\displaystyle b} and c {\displaystyle c} as inverses. Then b = b ⋅ e ( e is the identity element) = b ⋅ ( a ⋅ c ) ( c and a are inverses of each other) = ( b ⋅ a ) ⋅ c (associativity) = e ⋅ c ( b is an inverse of a ) = c ( e is the identity element and b = c ) {\displaystyle {\begin{aligned}b&=b\cdot e&&{\text{(}}e{\text{ is the identity element)}}\\&=b\cdot (a\cdot c)&&{\text{(}}c{\text{ and }}a{\text{ are inverses of each other)}}\\&=(b\cdot a)\cdot c&&{\text{(associativity)}}\\&=e\cdot c&&{\text{(}}b{\text{ is an inverse of }}a{\text{)}}\\&=c&&{\text{(}}e{\text{ is the identity element and }}b=c{\text{)}}\end{aligned}}} Therefore, it is customary to speak of the inverse of an element. === Division === Given elements a {\displaystyle a} and b {\displaystyle b} of a group G {\displaystyle G} , there is a unique solution x {\displaystyle x} in G {\displaystyle G} to the equation a ⋅ x = b {\displaystyle a\cdot x=b} , namely a − 1 ⋅ b {\displaystyle a^{-1}\cdot b} . It follows that for each a {\displaystyle a} in G {\displaystyle G} , the function G → G {\displaystyle G\to G} that maps each x {\displaystyle x} to a ⋅ x {\displaystyle a\cdot x} is a bijection; it is called left multiplication by a {\displaystyle a} or left translation by a {\displaystyle a} . Similarly, given a {\displaystyle a} and b {\displaystyle b} , the unique solution to x ⋅ a = b {\displaystyle x\cdot a=b} is b ⋅ a − 1 {\displaystyle b\cdot a^{-1}} . For each a {\displaystyle a} , the function G → G {\displaystyle G\to G} that maps each x {\displaystyle x} to x ⋅ a {\displaystyle x\cdot a} is a bijection called right multiplication by a {\displaystyle a} or right translation by a {\displaystyle a} . === Equivalent definition with relaxed axioms === The group axioms for identity and inverses may be "weakened" to assert only the existence of a left identity and left inverses. 
From these one-sided axioms, one can prove that the left identity is also a right identity and a left inverse is also a right inverse for the same element. Since they define exactly the same structures as groups, collectively the axioms are not weaker. In particular, assuming associativity and the existence of a left identity e {\displaystyle e} (that is, e ⋅ f = f {\displaystyle e\cdot f=f} ) and a left inverse f − 1 {\displaystyle f^{-1}} for each element f {\displaystyle f} (that is, f − 1 ⋅ f = e {\displaystyle f^{-1}\cdot f=e} ), it follows that every left inverse is also a right inverse of the same element as follows. Indeed, one has f ⋅ f − 1 = e ⋅ ( f ⋅ f − 1 ) (left identity) = ( ( f − 1 ) − 1 ⋅ f − 1 ) ⋅ ( f ⋅ f − 1 ) (left inverse) = ( f − 1 ) − 1 ⋅ ( ( f − 1 ⋅ f ) ⋅ f − 1 ) (associativity) = ( f − 1 ) − 1 ⋅ ( e ⋅ f − 1 ) (left inverse) = ( f − 1 ) − 1 ⋅ f − 1 (left identity) = e (left inverse) {\displaystyle {\begin{aligned}f\cdot f^{-1}&=e\cdot (f\cdot f^{-1})&&{\text{(left identity)}}\\&=((f^{-1})^{-1}\cdot f^{-1})\cdot (f\cdot f^{-1})&&{\text{(left inverse)}}\\&=(f^{-1})^{-1}\cdot ((f^{-1}\cdot f)\cdot f^{-1})&&{\text{(associativity)}}\\&=(f^{-1})^{-1}\cdot (e\cdot f^{-1})&&{\text{(left inverse)}}\\&=(f^{-1})^{-1}\cdot f^{-1}&&{\text{(left identity)}}\\&=e&&{\text{(left inverse)}}\end{aligned}}} Similarly, the left identity is also a right identity: f ⋅ e = f ⋅ ( f − 1 ⋅ f ) (left inverse) = ( f ⋅ f − 1 ) ⋅ f (associativity) = e ⋅ f (right inverse) = f (left identity) {\displaystyle {\begin{aligned}f\cdot e&=f\cdot (f^{-1}\cdot f)&&{\text{(left inverse)}}\\&=(f\cdot f^{-1})\cdot f&&{\text{(associativity)}}\\&=e\cdot f&&{\text{(right inverse)}}\\&=f&&{\text{(left identity)}}\end{aligned}}} These results do not hold if any of these axioms (associativity, existence of left identity and existence of left inverse) is removed. For a structure with a looser definition (like a semigroup) one may have, for example, that a left identity is not necessarily a right identity. The same result can be obtained by only assuming the existence of a right identity and a right inverse. However, only assuming the existence of a left identity and a right inverse (or vice versa) is not sufficient to define a group. For example, consider the set G = { e , f } {\displaystyle G=\{e,f\}} with the operator ⋅ {\displaystyle \cdot } satisfying e ⋅ e = f ⋅ e = e {\displaystyle e\cdot e=f\cdot e=e} and e ⋅ f = f ⋅ f = f {\displaystyle e\cdot f=f\cdot f=f} . This structure does have a left identity (namely, e {\displaystyle e} ), and each element has a right inverse (which is e {\displaystyle e} for both elements). Furthermore, this operation is associative (since the product of any number of elements is always equal to the rightmost element in that product, regardless of the order in which these operations are applied). However, ( G , ⋅ ) {\displaystyle (G,\cdot )} is not a group, since it lacks a right identity. == Basic concepts == When studying sets, one uses concepts such as subset, function, and quotient by an equivalence relation. When studying groups, one uses instead subgroups, homomorphisms, and quotient groups. These are the analogues that take the group structure into account. === Group homomorphisms === Group homomorphisms are functions that respect group structure; they may be used to relate two groups. 
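Before turning to homomorphisms, the two-element counterexample just described is easy to check mechanically; the short sketch below (purely illustrative, with the operation stored as a lookup table) confirms that a left identity and right inverses exist while no right identity does.

# Sketch: the two-element structure with e·e = f·e = e and e·f = f·f = f.
table = {('e', 'e'): 'e', ('f', 'e'): 'e',
         ('e', 'f'): 'f', ('f', 'f'): 'f'}
op = lambda a, b: table[(a, b)]
elems = ['e', 'f']

left_identities  = [x for x in elems if all(op(x, a) == a for a in elems)]
right_identities = [x for x in elems if all(op(a, x) == a for a in elems)]
print(left_identities)    # ['e', 'f']: e (indeed every element) acts as a left identity
print(right_identities)   # []: no right identity exists, so this is not a group
print(all(any(op(a, b) == 'e' for b in elems) for a in elems))   # True: every element has a right inverse with respect to e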
A homomorphism from a group ( G , ⋅ ) {\displaystyle (G,\cdot )} to a group ( H , ∗ ) {\displaystyle (H,*)} is a function φ : G → H {\displaystyle \varphi :G\to H} such that φ ( a ⋅ b ) = φ ( a ) ∗ φ ( b ) {\displaystyle \varphi (a\cdot b)=\varphi (a)*\varphi (b)} for all elements a {\displaystyle a} and b {\displaystyle b} in G {\displaystyle G} . It would be natural to require also that φ {\displaystyle \varphi } respect identities, φ ( 1 G ) = 1 H {\displaystyle \varphi (1_{G})=1_{H}} , and inverses, φ ( a − 1 ) = φ ( a ) − 1 {\displaystyle \varphi (a^{-1})=\varphi (a)^{-1}} for all a {\displaystyle a} in G {\displaystyle G} . However, these additional requirements need not be included in the definition of homomorphisms, because they are already implied by the requirement of respecting the group operation. The identity homomorphism of a group G {\displaystyle G} is the homomorphism ι G : G → G {\displaystyle \iota _{G}:G\to G} that maps each element of G {\displaystyle G} to itself. An inverse homomorphism of a homomorphism φ : G → H {\displaystyle \varphi :G\to H} is a homomorphism ψ : H → G {\displaystyle \psi :H\to G} such that ψ ∘ φ = ι G {\displaystyle \psi \circ \varphi =\iota _{G}} and φ ∘ ψ = ι H {\displaystyle \varphi \circ \psi =\iota _{H}} , that is, such that ψ ( φ ( g ) ) = g {\displaystyle \psi {\bigl (}\varphi (g){\bigr )}=g} for all g {\displaystyle g} in G {\displaystyle G} and such that φ ( ψ ( h ) ) = h {\displaystyle \varphi {\bigl (}\psi (h){\bigr )}=h} for all h {\displaystyle h} in H {\displaystyle H} . An isomorphism is a homomorphism that has an inverse homomorphism; equivalently, it is a bijective homomorphism. Groups G {\displaystyle G} and H {\displaystyle H} are called isomorphic if there exists an isomorphism φ : G → H {\displaystyle \varphi :G\to H} . In this case, H {\displaystyle H} can be obtained from G {\displaystyle G} simply by renaming its elements according to the function φ {\displaystyle \varphi } ; then any statement true for G {\displaystyle G} is true for H {\displaystyle H} , provided that any specific elements mentioned in the statement are also renamed. The collection of all groups, together with the homomorphisms between them, forms a category, the category of groups. An injective homomorphism ϕ : G ′ → G {\displaystyle \phi :G'\to G} factors canonically as an isomorphism followed by an inclusion, G ′ → ∼ H ↪ G {\displaystyle G'\;{\stackrel {\sim }{\to }}\;H\hookrightarrow G} for some subgroup H {\displaystyle H} of G {\displaystyle G} . Injective homomorphisms are the monomorphisms in the category of groups. === Subgroups === Informally, a subgroup is a group H {\displaystyle H} contained within a bigger one, G {\displaystyle G} : it has a subset of the elements of G {\displaystyle G} , with the same operation. Concretely, this means that the identity element of G {\displaystyle G} must be contained in H {\displaystyle H} , and whenever h 1 {\displaystyle h_{1}} and h 2 {\displaystyle h_{2}} are both in H {\displaystyle H} , then so are h 1 ⋅ h 2 {\displaystyle h_{1}\cdot h_{2}} and h 1 − 1 {\displaystyle h_{1}^{-1}} , so the elements of H {\displaystyle H} , equipped with the group operation on G {\displaystyle G} restricted to H {\displaystyle H} , indeed form a group. In this case, the inclusion map H → G {\displaystyle H\to G} is a homomorphism. 
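As a small check of the homomorphism condition (a sketch only; the modulus 12, the helper name phi and the random sampling are arbitrary choices for illustration), reduction modulo n respects the addition of integers:

# Sketch: φ(x) = x mod 12 is a homomorphism from (Z, +) to (Z/12Z, +),
# i.e. φ(a + b) agrees with φ(a) + φ(b) computed in Z/12Z.
import random

n = 12
phi = lambda x: x % n

for _ in range(1000):
    a = random.randint(-10**6, 10**6)
    b = random.randint(-10**6, 10**6)
    assert phi(a + b) == (phi(a) + phi(b)) % n
print("the sampled pairs all satisfy the homomorphism condition")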
In the example of symmetries of a square, the identity and the rotations constitute a subgroup R = { i d , r 1 , r 2 , r 3 } {\displaystyle R=\{\mathrm {id} ,r_{1},r_{2},r_{3}\}} , highlighted in red in the Cayley table of the example: any two rotations composed are still a rotation, and a rotation can be undone by (i.e., is inverse to) the complementary rotations 270° for 90°, 180° for 180°, and 90° for 270°. The subgroup test provides a necessary and sufficient condition for a nonempty subset H {\displaystyle H} of a group G {\displaystyle G} to be a subgroup: it is sufficient to check that g − 1 ⋅ h ∈ H {\displaystyle g^{-1}\cdot h\in H} for all elements g {\displaystyle g} and h {\displaystyle h} in H {\displaystyle H} . Knowing a group's subgroups is important in understanding the group as a whole. Given any subset S {\displaystyle S} of a group G {\displaystyle G} , the subgroup generated by S {\displaystyle S} consists of all products of elements of S {\displaystyle S} and their inverses. It is the smallest subgroup of G {\displaystyle G} containing S {\displaystyle S} . In the example of symmetries of a square, the subgroup generated by r 2 {\displaystyle r_{2}} and f v {\displaystyle f_{\mathrm {v} }} consists of these two elements, the identity element i d {\displaystyle \mathrm {id} } , and the element f h = f v ⋅ r 2 {\displaystyle f_{\mathrm {h} }=f_{\mathrm {v} }\cdot r_{2}} . Again, this is a subgroup, because combining any two of these four elements or their inverses (which are, in this particular case, these same elements) yields an element of this subgroup. === Cosets === In many situations it is desirable to consider two group elements the same if they differ by an element of a given subgroup. For example, in the symmetry group of a square, once any reflection is performed, rotations alone cannot return the square to its original position, so one can think of the reflected positions of the square as all being equivalent to each other, and as inequivalent to the unreflected positions; the rotation operations are irrelevant to the question whether a reflection has been performed. Cosets are used to formalize this insight: a subgroup H {\displaystyle H} determines left and right cosets, which can be thought of as translations of H {\displaystyle H} by an arbitrary group element g {\displaystyle g} . In symbolic terms, the left and right cosets of H {\displaystyle H} , containing an element g {\displaystyle g} , are g H = { g ⋅ h ∣ h ∈ H } {\displaystyle gH=\{g\cdot h\mid h\in H\}} and H g = { h ⋅ g ∣ h ∈ H } {\displaystyle Hg=\{h\cdot g\mid h\in H\}} , respectively. The left cosets of any subgroup H {\displaystyle H} form a partition of G {\displaystyle G} ; that is, the union of all left cosets is equal to G {\displaystyle G} and two left cosets are either equal or have an empty intersection. The first case g 1 H = g 2 H {\displaystyle g_{1}H=g_{2}H} happens precisely when g 1 − 1 ⋅ g 2 ∈ H {\displaystyle g_{1}^{-1}\cdot g_{2}\in H} , i.e., when the two elements differ by an element of H {\displaystyle H} . Similar considerations apply to the right cosets of H {\displaystyle H} . The left cosets of H {\displaystyle H} may or may not be the same as its right cosets. If they are (that is, if all g {\displaystyle g} in G {\displaystyle G} satisfy g H = H g {\displaystyle gH=Hg} ), then H {\displaystyle H} is said to be a normal subgroup. 
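Cosets are also easy to compute by machine. The following sketch (illustrative only, using the same corner-permutation encoding as in the earlier D4 sketch) lists the left and right cosets of two subgroups of D4 and anticipates the discussion below: the rotation subgroup is normal, while the subgroup generated by a single reflection is not.

# Sketch: left and right cosets in D4, with symmetries encoded as corner permutations.
def compose(b, a):                       # b ∘ a: apply a, then b
    return tuple(b[a[i]] for i in range(4))

identity = (0, 1, 2, 3)
r1 = (1, 2, 3, 0)                        # 90° rotation
f  = (1, 0, 3, 2)                        # one reflection
R  = [identity, r1, compose(r1, r1), compose(r1, compose(r1, r1))]   # the rotations
D4 = R + [compose(f, r) for r in R]      # all eight symmetries

def cosets(G, H, side):
    return {frozenset(compose(g, h) if side == "left" else compose(h, g) for h in H)
            for g in G}

print(cosets(D4, R, "left") == cosets(D4, R, "right"))   # True: the rotation subgroup is normal
F = [identity, f]
print(cosets(D4, F, "left") == cosets(D4, F, "right"))   # False: the subgroup {id, f} is not normal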
In D 4 {\displaystyle \mathrm {D} _{4}} , the group of symmetries of a square, with its subgroup R {\displaystyle R} of rotations, the left cosets g R {\displaystyle gR} are either equal to R {\displaystyle R} , if g {\displaystyle g} is an element of R {\displaystyle R} itself, or otherwise equal to U = f c R = { f c , f d , f v , f h } {\displaystyle U=f_{\mathrm {c} }R=\{f_{\mathrm {c} },f_{\mathrm {d} },f_{\mathrm {v} },f_{\mathrm {h} }\}} (highlighted in green in the Cayley table of D 4 {\displaystyle \mathrm {D} _{4}} ). The subgroup R {\displaystyle R} is normal, because f c R = U = R f c {\displaystyle f_{\mathrm {c} }R=U=Rf_{\mathrm {c} }} and similarly for the other elements of the group. (In fact, in the case of D 4 {\displaystyle \mathrm {D} _{4}} , the cosets generated by reflections are all equal: f h R = f v R = f d R = f c R {\displaystyle f_{\mathrm {h} }R=f_{\mathrm {v} }R=f_{\mathrm {d} }R=f_{\mathrm {c} }R} .) === Quotient groups === Suppose that N {\displaystyle N} is a normal subgroup of a group G {\displaystyle G} , and G / N = { g N ∣ g ∈ G } {\displaystyle G/N=\{gN\mid g\in G\}} denotes its set of cosets. Then there is a unique group law on G / N {\displaystyle G/N} for which the map G → G / N {\displaystyle G\to G/N} sending each element g {\displaystyle g} to g N {\displaystyle gN} is a homomorphism. Explicitly, the product of two cosets g N {\displaystyle gN} and h N {\displaystyle hN} is ( g h ) N {\displaystyle (gh)N} , the coset e N = N {\displaystyle eN=N} serves as the identity of G / N {\displaystyle G/N} , and the inverse of g N {\displaystyle gN} in the quotient group is ( g N ) − 1 = ( g − 1 ) N {\displaystyle (gN)^{-1}=\left(g^{-1}\right)N} . The group G / N {\displaystyle G/N} , read as " G {\displaystyle G} modulo N {\displaystyle N} ", is called a quotient group or factor group. The quotient group can alternatively be characterized by a universal property. The elements of the quotient group D 4 / R {\displaystyle \mathrm {D} _{4}/R} are R {\displaystyle R} and U = f v R {\displaystyle U=f_{\mathrm {v} }R} . The group operation on the quotient is shown in the table. For example, U ⋅ U = f v R ⋅ f v R = ( f v ⋅ f v ) R = R {\displaystyle U\cdot U=f_{\mathrm {v} }R\cdot f_{\mathrm {v} }R=(f_{\mathrm {v} }\cdot f_{\mathrm {v} })R=R} . Both the subgroup R = { i d , r 1 , r 2 , r 3 } {\displaystyle R=\{\mathrm {id} ,r_{1},r_{2},r_{3}\}} and the quotient D 4 / R {\displaystyle \mathrm {D} _{4}/R} are abelian, but D 4 {\displaystyle \mathrm {D} _{4}} is not. Sometimes a group can be reconstructed from a subgroup and quotient (plus some additional data), by the semidirect product construction; D 4 {\displaystyle \mathrm {D} _{4}} is an example. The first isomorphism theorem implies that any surjective homomorphism ϕ : G → H {\displaystyle \phi :G\to H} factors canonically as a quotient homomorphism followed by an isomorphism: G → G / ker ϕ → ∼ H {\displaystyle G\to G/\ker \phi \;{\stackrel {\sim }{\to }}\;H} . Surjective homomorphisms are the epimorphisms in the category of groups. === Presentations === Every group is isomorphic to a quotient of a free group, in many ways. For example, the dihedral group D 4 {\displaystyle \mathrm {D} _{4}} is generated by the right rotation r 1 {\displaystyle r_{1}} and the reflection f v {\displaystyle f_{\mathrm {v} }} in a vertical line (every element of D 4 {\displaystyle \mathrm {D} _{4}} is a finite product of copies of these and their inverses). 
Hence there is a surjective homomorphism ϕ {\displaystyle \phi } from the free group ⟨ r , f ⟩ {\displaystyle \langle r,f\rangle } on two generators to D 4 {\displaystyle \mathrm {D} _{4}} sending r {\displaystyle r} to r 1 {\displaystyle r_{1}} and f {\displaystyle f} to f 1 {\displaystyle f_{1}} . Elements in ker ϕ {\displaystyle \ker \phi } are called relations; examples include r 4 , f 2 , ( r ⋅ f ) 2 {\displaystyle r^{4},f^{2},(r\cdot f)^{2}} . In fact, it turns out that ker ϕ {\displaystyle \ker \phi } is the smallest normal subgroup of ⟨ r , f ⟩ {\displaystyle \langle r,f\rangle } containing these three elements; in other words, all relations are consequences of these three. The quotient of the free group by this normal subgroup is denoted ⟨ r , f ∣ r 4 = f 2 = ( r ⋅ f ) 2 = 1 ⟩ {\displaystyle \langle r,f\mid r^{4}=f^{2}=(r\cdot f)^{2}=1\rangle } . This is called a presentation of D 4 {\displaystyle \mathrm {D} _{4}} by generators and relations, because the first isomorphism theorem for ϕ {\displaystyle \phi } yields an isomorphism ⟨ r , f ∣ r 4 = f 2 = ( r ⋅ f ) 2 = 1 ⟩ → D 4 {\displaystyle \langle r,f\mid r^{4}=f^{2}=(r\cdot f)^{2}=1\rangle \to \mathrm {D} _{4}} . A presentation of a group can be used to construct the Cayley graph, a graphical depiction of a discrete group. == Examples and applications == Examples and applications of groups abound. A starting point is the group Z {\displaystyle \mathbb {Z} } of integers with addition as group operation, introduced above. If instead of addition multiplication is considered, one obtains multiplicative groups. These groups are predecessors of important constructions in abstract algebra. Groups are also applied in many other mathematical areas. Mathematical objects are often examined by associating groups to them and studying the properties of the corresponding groups. For example, Henri Poincaré founded what is now called algebraic topology by introducing the fundamental group. By means of this connection, topological properties such as proximity and continuity translate into properties of groups. Elements of the fundamental group of a topological space are equivalence classes of loops, where loops are considered equivalent if one can be smoothly deformed into another, and the group operation is "concatenation" (tracing one loop then the other). For example, as shown in the figure, if the topological space is the plane with one point removed, then loops which do not wrap around the missing point (blue) can be smoothly contracted to a single point and are the identity element of the fundamental group. A loop which wraps around the missing point k {\displaystyle k} times cannot be deformed into a loop which wraps m {\displaystyle m} times (with m ≠ k {\displaystyle m\neq k} ), because the loop cannot be smoothly deformed across the hole, so each class of loops is characterized by its winding number around the missing point. The resulting group is isomorphic to the integers under addition. In more recent applications, the influence has also been reversed to motivate geometric constructions by a group-theoretical background. In a similar vein, geometric group theory employs geometric concepts, for example in the study of hyperbolic groups. Further branches crucially applying groups include algebraic geometry and number theory. In addition to the above theoretical applications, many practical applications of groups exist. 
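Returning briefly to the fundamental group example, the winding number that labels a loop's class can be computed numerically; the sketch below is an illustration under the simplifying assumption that the loop is sampled as a closed polygon avoiding the removed point, taken here to be the origin, and the helper name winding_number is a hypothetical choice.

# Sketch: winding number of a closed polygonal loop around the origin,
# obtained by summing signed angle increments; the result is an integer,
# reflecting that the fundamental group of the punctured plane is Z.
import cmath, math

def winding_number(points):
    total = 0.0
    for p, q in zip(points, points[1:] + points[:1]):
        # phase(q/p) is the signed angle from p to q as seen from the origin
        total += cmath.phase(complex(*q) / complex(*p))
    return round(total / (2 * math.pi))

circle = [(math.cos(2 * math.pi * k / 100), math.sin(2 * math.pi * k / 100))
          for k in range(100)]
print(winding_number(circle))    # 1: the loop wraps once around the removed point
twice = [(math.cos(4 * math.pi * k / 100), math.sin(4 * math.pi * k / 100))
         for k in range(100)]
print(winding_number(twice))     # 2: concatenating the loop with itself adds the winding numbers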
Cryptography relies on the combination of the abstract group theory approach together with algorithmical knowledge obtained in computational group theory, in particular when implemented for finite groups. Applications of group theory are not restricted to mathematics; sciences such as physics, chemistry and computer science benefit from the concept. === Numbers === Many number systems, such as the integers and the rationals, enjoy a naturally given group structure. In some cases, such as with the rationals, both addition and multiplication operations give rise to group structures. Such number systems are predecessors to more general algebraic structures known as rings and fields. Further abstract algebraic concepts such as modules, vector spaces and algebras also form groups. ==== Integers ==== The group of integers Z {\displaystyle \mathbb {Z} } under addition, denoted ( Z , + ) {\displaystyle \left(\mathbb {Z} ,+\right)} , has been described above. The integers, with the operation of multiplication instead of addition, ( Z , ⋅ ) {\displaystyle \left(\mathbb {Z} ,\cdot \right)} do not form a group. The associativity and identity axioms are satisfied, but inverses do not exist: for example, a = 2 {\displaystyle a=2} is an integer, but the only solution to the equation a ⋅ b = 1 {\displaystyle a\cdot b=1} in this case is b = 1 2 {\displaystyle b={\tfrac {1}{2}}} , which is a rational number, but not an integer. Hence not every element of Z {\displaystyle \mathbb {Z} } has a (multiplicative) inverse. ==== Rationals ==== The desire for the existence of multiplicative inverses suggests considering fractions a b . {\displaystyle {\frac {a}{b}}.} Fractions of integers (with b {\displaystyle b} nonzero) are known as rational numbers. The set of all such irreducible fractions is commonly denoted Q {\displaystyle \mathbb {Q} } . There is still a minor obstacle for ( Q , ⋅ ) {\displaystyle \left(\mathbb {Q} ,\cdot \right)} , the rationals with multiplication, being a group: because zero does not have a multiplicative inverse (i.e., there is no x {\displaystyle x} such that x ⋅ 0 = 1 {\displaystyle x\cdot 0=1} ), ( Q , ⋅ ) {\displaystyle \left(\mathbb {Q} ,\cdot \right)} is still not a group. However, the set of all nonzero rational numbers Q ∖ { 0 } = { q ∈ Q ∣ q ≠ 0 } {\displaystyle \mathbb {Q} \smallsetminus \left\{0\right\}=\left\{q\in \mathbb {Q} \mid q\neq 0\right\}} does form an abelian group under multiplication, also denoted Q × {\displaystyle \mathbb {Q} ^{\times }} . Associativity and identity element axioms follow from the properties of integers. The closure requirement still holds true after removing zero, because the product of two nonzero rationals is never zero. Finally, the inverse of a / b {\displaystyle a/b} is b / a {\displaystyle b/a} , therefore the axiom of the inverse element is satisfied. The rational numbers (including zero) also form a group under addition. Intertwining addition and multiplication operations yields more complicated structures called rings and – if division by other than zero is possible, such as in Q {\displaystyle \mathbb {Q} } – fields, which occupy a central position in abstract algebra. Group theoretic arguments therefore underlie parts of the theory of those entities. === Modular arithmetic === Modular arithmetic for a modulus n {\displaystyle n} defines any two elements a {\displaystyle a} and b {\displaystyle b} that differ by a multiple of n {\displaystyle n} to be equivalent, denoted by a ≡ b ( mod n ) {\displaystyle a\equiv b{\pmod {n}}} . 
Every integer is equivalent to one of the integers from 0 {\displaystyle 0} to n − 1 {\displaystyle n-1} , and the operations of modular arithmetic modify normal arithmetic by replacing the result of any operation by its equivalent representative. Modular addition, defined in this way for the integers from 0 {\displaystyle 0} to n − 1 {\displaystyle n-1} , forms a group, denoted as Z n {\displaystyle \mathrm {Z} _{n}} or ( Z / n Z , + ) {\displaystyle (\mathbb {Z} /n\mathbb {Z} ,+)} , with 0 {\displaystyle 0} as the identity element and n − a {\displaystyle n-a} as the inverse element of a {\displaystyle a} . A familiar example is addition of hours on the face of a clock, where 12 rather than 0 is chosen as the representative of the identity. If the hour hand is on 9 {\displaystyle 9} and is advanced 4 {\displaystyle 4} hours, it ends up on 1 {\displaystyle 1} , as shown in the illustration. This is expressed by saying that 9 + 4 {\displaystyle 9+4} is congruent to 1 {\displaystyle 1} "modulo 12 {\displaystyle 12} " or, in symbols, 9 + 4 ≡ 1 ( mod 12 ) . {\displaystyle 9+4\equiv 1{\pmod {12}}.} For any prime number p {\displaystyle p} , there is also the multiplicative group of integers modulo p {\displaystyle p} . Its elements can be represented by 1 {\displaystyle 1} to p − 1 {\displaystyle p-1} . The group operation, multiplication modulo p {\displaystyle p} , replaces the usual product by its representative, the remainder of division by p {\displaystyle p} . For example, for p = 5 {\displaystyle p=5} , the four group elements can be represented by 1 , 2 , 3 , 4 {\displaystyle 1,2,3,4} . In this group, 4 ⋅ 4 ≡ 1 mod 5 {\displaystyle 4\cdot 4\equiv 1{\bmod {5}}} , because the usual product 16 {\displaystyle 16} is equivalent to 1 {\displaystyle 1} : when divided by 5 {\displaystyle 5} it yields a remainder of 1 {\displaystyle 1} . The primality of p {\displaystyle p} ensures that the usual product of two representatives is not divisible by p {\displaystyle p} , and therefore that the modular product is nonzero. The identity element is represented by 1 {\displaystyle 1} , and associativity follows from the corresponding property of the integers. Finally, the inverse element axiom requires that given an integer a {\displaystyle a} not divisible by p {\displaystyle p} , there exists an integer b {\displaystyle b} such that a ⋅ b ≡ 1 ( mod p ) , {\displaystyle a\cdot b\equiv 1{\pmod {p}},} that is, such that p {\displaystyle p} evenly divides a ⋅ b − 1 {\displaystyle a\cdot b-1} . The inverse b {\displaystyle b} can be found by using Bézout's identity and the fact that the greatest common divisor gcd ( a , p ) {\displaystyle \gcd(a,p)} equals 1 {\displaystyle 1} . In the case p = 5 {\displaystyle p=5} above, the inverse of the element represented by 4 {\displaystyle 4} is that represented by 4 {\displaystyle 4} , and the inverse of the element represented by 3 {\displaystyle 3} is represented by 2 {\displaystyle 2} , as 3 ⋅ 2 = 6 ≡ 1 mod 5 {\displaystyle 3\cdot 2=6\equiv 1{\bmod {5}}} . Hence all group axioms are fulfilled. This example is similar to ( Q ∖ { 0 } , ⋅ ) {\displaystyle \left(\mathbb {Q} \smallsetminus \left\{0\right\},\cdot \right)} above: it consists of exactly those elements in the ring Z / p Z {\displaystyle \mathbb {Z} /p\mathbb {Z} } that have a multiplicative inverse. These groups, denoted F p × {\displaystyle \mathbb {F} _{p}^{\times }} , are crucial to public-key cryptography. 
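These modular groups are easy to experiment with. The following sketch verifies the clock-arithmetic and mod-5 facts quoted above; it uses Python's built-in pow, which since Python 3.8 computes modular inverses when given the exponent -1, and the moduli 12 and 5 are simply the examples from the text.

# Sketch: the additive group Z/12Z and the multiplicative group modulo the prime 5.
n, p = 12, 5

# Additive group modulo 12: identity 0, and n - a is the inverse of a.
print((9 + 4) % n)            # 1, i.e. 9 + 4 ≡ 1 (mod 12)
print((9 + (n - 9)) % n)      # 0: adding the inverse returns the identity

# Multiplicative group modulo 5: elements 1..4, inverses found via pow(a, -1, p).
for a in range(1, p):
    inv = pow(a, -1, p)
    print(a, inv, (a * inv) % p)   # 4 is its own inverse; 2 and 3 invert each other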
=== Cyclic groups === A cyclic group is a group all of whose elements are powers of a particular element a {\displaystyle a} . In multiplicative notation, the elements of the group are … , a − 3 , a − 2 , a − 1 , a 0 , a , a 2 , a 3 , … , {\displaystyle \dots ,a^{-3},a^{-2},a^{-1},a^{0},a,a^{2},a^{3},\dots ,} where a 2 {\displaystyle a^{2}} means a ⋅ a {\displaystyle a\cdot a} , a − 3 {\displaystyle a^{-3}} stands for a − 1 ⋅ a − 1 ⋅ a − 1 = ( a ⋅ a ⋅ a ) − 1 {\displaystyle a^{-1}\cdot a^{-1}\cdot a^{-1}=(a\cdot a\cdot a)^{-1}} , etc. Such an element a {\displaystyle a} is called a generator or a primitive element of the group. In additive notation, the requirement for an element to be primitive is that each element of the group can be written as … , ( − a ) + ( − a ) , − a , 0 , a , a + a , … . {\displaystyle \dots ,(-a)+(-a),-a,0,a,a+a,\dots .} In the groups ( Z / n Z , + ) {\displaystyle (\mathbb {Z} /n\mathbb {Z} ,+)} introduced above, the element 1 {\displaystyle 1} is primitive, so these groups are cyclic. Indeed, each element is expressible as a sum all of whose terms are 1 {\displaystyle 1} . Any cyclic group with n {\displaystyle n} elements is isomorphic to this group. A second example for cyclic groups is the group of n {\displaystyle n} th complex roots of unity, given by complex numbers z {\displaystyle z} satisfying z n = 1 {\displaystyle z^{n}=1} . These numbers can be visualized as the vertices on a regular n {\displaystyle n} -gon, as shown in blue in the image for n = 6 {\displaystyle n=6} . The group operation is multiplication of complex numbers. In the picture, multiplying with z {\displaystyle z} corresponds to a counter-clockwise rotation by 60°. From field theory, the group F p × {\displaystyle \mathbb {F} _{p}^{\times }} is cyclic for prime p {\displaystyle p} : for example, if p = 5 {\displaystyle p=5} , 3 {\displaystyle 3} is a generator since 3 1 = 3 {\displaystyle 3^{1}=3} , 3 2 = 9 ≡ 4 {\displaystyle 3^{2}=9\equiv 4} , 3 3 ≡ 2 {\displaystyle 3^{3}\equiv 2} , and 3 4 ≡ 1 {\displaystyle 3^{4}\equiv 1} . Some cyclic groups have an infinite number of elements. In these groups, for every non-zero element a {\displaystyle a} , all the powers of a {\displaystyle a} are distinct; despite the name "cyclic group", the powers of the elements do not cycle. An infinite cyclic group is isomorphic to ( Z , + ) {\displaystyle (\mathbb {Z} ,+)} , the group of integers under addition introduced above. As these two prototypes are both abelian, so are all cyclic groups. The study of finitely generated abelian groups is quite mature, including the fundamental theorem of finitely generated abelian groups; and reflecting this state of affairs, many group-related notions, such as center and commutator, describe the extent to which a given group is not abelian. === Symmetry groups === Symmetry groups are groups consisting of symmetries of given mathematical objects, principally geometric entities, such as the symmetry group of the square given as an introductory example above, although they also arise in algebra such as the symmetries among the roots of polynomial equations dealt with in Galois theory (see below). Conceptually, group theory can be thought of as the study of symmetry. Symmetries in mathematics greatly simplify the study of geometrical or analytical objects. 
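A quick computation confirms the cyclic-group claims above: the powers of 3 modulo 5 exhaust the nonzero residues, and the 6th roots of unity are six distinct points that cycle under multiplication. The snippet is a sketch; the rounding is used only to compare floating-point complex values.

# Sketch: 3 generates the multiplicative group modulo 5, and the 6th roots of
# unity form a cyclic group under multiplication of complex numbers.
import cmath

p = 5
powers = [pow(3, k, p) for k in range(1, p)]
print(powers)                        # [3, 4, 2, 1]: every nonzero residue mod 5 appears

z = cmath.exp(2j * cmath.pi / 6)     # a primitive 6th root of unity
roots = [z ** k for k in range(6)]
print(len({(round(w.real, 9), round(w.imag, 9)) for w in roots}))   # 6 distinct vertices of a hexagon
print(abs(z ** 6 - 1) < 1e-9)        # True: z^6 = 1, so the powers of z cycle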
A group is said to act on another mathematical object X {\displaystyle X} if every group element can be associated to some operation on X {\displaystyle X} and the composition of these operations follows the group law. For example, an element of the (2,3,7) triangle group acts on a triangular tiling of the hyperbolic plane by permuting the triangles. By a group action, the group pattern is connected to the structure of the object being acted on. In chemistry, point groups describe molecular symmetries, while space groups describe crystal symmetries in crystallography. These symmetries underlie the chemical and physical behavior of these systems, and group theory enables simplification of quantum mechanical analysis of these properties. For example, group theory is used to show that optical transitions between certain quantum levels cannot occur simply because of the symmetry of the states involved. Group theory helps predict the changes in physical properties that occur when a material undergoes a phase transition, for example, from a cubic to a tetrahedral crystalline form. An example is ferroelectric materials, where the change from a paraelectric to a ferroelectric state occurs at the Curie temperature and is related to a change from the high-symmetry paraelectric state to the lower symmetry ferroelectric state, accompanied by a so-called soft phonon mode, a vibrational lattice mode that goes to zero frequency at the transition. Such spontaneous symmetry breaking has found further application in elementary particle physics, where its occurrence is related to the appearance of Goldstone bosons. Finite symmetry groups such as the Mathieu groups are used in coding theory, which is in turn applied in error correction of transmitted data, and in CD players. Another application is differential Galois theory, which characterizes functions having antiderivatives of a prescribed form, giving group-theoretic criteria for when solutions of certain differential equations are well-behaved. Geometric properties that remain stable under group actions are investigated in (geometric) invariant theory. === General linear group and representation theory === Matrix groups consist of matrices together with matrix multiplication. The general linear group G L ( n , R ) {\displaystyle \mathrm {GL} (n,\mathbb {R} )} consists of all invertible n {\displaystyle n} -by- n {\displaystyle n} matrices with real entries. Its subgroups are referred to as matrix groups or linear groups. The dihedral group example mentioned above can be viewed as a (very small) matrix group. Another important matrix group is the special orthogonal group S O ( n ) {\displaystyle \mathrm {SO} (n)} . It describes all possible rotations in n {\displaystyle n} dimensions. Rotation matrices in this group are used in computer graphics. Representation theory is both an application of the group concept and important for a deeper understanding of groups. It studies the group by its group actions on other spaces. A broad class of group representations are linear representations in which the group acts on a vector space, such as the three-dimensional Euclidean space R 3 {\displaystyle \mathbb {R} ^{3}} . A representation of a group G {\displaystyle G} on an n {\displaystyle n} -dimensional real vector space is simply a group homomorphism ρ : G → G L ( n , R ) {\displaystyle \rho :G\to \mathrm {GL} (n,\mathbb {R} )} from the group to the general linear group. 
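As a tiny example of such a representation (a sketch using plain nested lists rather than any particular matrix library; the names rho, R90, I2 and matmul are ad hoc), the cyclic group Z/4Z can be represented in GL(2, R) by sending the class of 1 to the matrix of rotation by 90°:

# Sketch: a representation of Z/4Z on R^2, sending 1 to the 90° rotation matrix;
# matrix multiplication then mirrors addition modulo 4.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

R90 = [[0, -1], [1, 0]]                # rotation by 90°, the image of the class of 1
I2  = [[1, 0], [0, 1]]                 # identity matrix, the image of the class of 0

def rho(k):                            # ρ(k) = R90 raised to the power k
    M = I2
    for _ in range(k % 4):
        M = matmul(M, R90)
    return M

# Homomorphism property: ρ(a + b mod 4) equals ρ(a) · ρ(b) for all a, b.
print(all(rho((a + b) % 4) == matmul(rho(a), rho(b))
          for a in range(4) for b in range(4)))    # True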
This way, the group operation, which may be abstractly given, translates to the multiplication of matrices making it accessible to explicit computations. A group action gives further means to study the object being acted on. On the other hand, it also yields information about the group. Group representations are an organizing principle in the theory of finite groups, Lie groups, algebraic groups and topological groups, especially (locally) compact groups. === Galois groups === Galois groups were developed to help solve polynomial equations by capturing their symmetry features. For example, the solutions of the quadratic equation a x 2 + b x + c = 0 {\displaystyle ax^{2}+bx+c=0} are given by x = − b ± b 2 − 4 a c 2 a . {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}.} Each solution can be obtained by replacing the ± {\displaystyle \pm } sign by + {\displaystyle +} or − {\displaystyle -} ; analogous formulae are known for cubic and quartic equations, but do not exist in general for degree 5 and higher. In the quadratic formula, changing the sign (permuting the resulting two solutions) can be viewed as a (very simple) group operation. Analogous Galois groups act on the solutions of higher-degree polynomial equations and are closely related to the existence of formulas for their solution. Abstract properties of these groups (in particular their solvability) give a criterion for the ability to express the solutions of these polynomials using solely addition, multiplication, and roots similar to the formula above. Modern Galois theory generalizes the above type of Galois groups by shifting to field theory and considering field extensions formed as the splitting field of a polynomial. This theory establishes—via the fundamental theorem of Galois theory—a precise relationship between fields and groups, underlining once again the ubiquity of groups in mathematics. == Finite groups == A group is called finite if it has a finite number of elements. The number of elements is called the order of the group. An important class is the symmetric groups S N {\displaystyle \mathrm {S} _{N}} , the groups of permutations of N {\displaystyle N} objects. For example, the symmetric group on 3 letters S 3 {\displaystyle \mathrm {S} _{3}} is the group of all possible reorderings of the objects. The three letters ABC can be reordered into ABC, ACB, BAC, BCA, CAB, CBA, forming in total 6 (factorial of 3) elements. The group operation is composition of these reorderings, and the identity element is the reordering operation that leaves the order unchanged. This class is fundamental insofar as any finite group can be expressed as a subgroup of a symmetric group S N {\displaystyle \mathrm {S} _{N}} for a suitable integer N {\displaystyle N} , according to Cayley's theorem. Parallel to the group of symmetries of the square above, S 3 {\displaystyle \mathrm {S} _{3}} can also be interpreted as the group of symmetries of an equilateral triangle. The order of an element a {\displaystyle a} in a group G {\displaystyle G} is the least positive integer n {\displaystyle n} such that a n = e {\displaystyle a^{n}=e} , where a n {\displaystyle a^{n}} represents a ⋯ a ⏟ n factors , {\displaystyle \underbrace {a\cdots a} _{n{\text{ factors}}},} that is, application of the operation " ⋅ {\displaystyle \cdot } " to n {\displaystyle n} copies of a {\displaystyle a} . 
(If " ⋅ {\displaystyle \cdot } " represents multiplication, then a n {\displaystyle a^{n}} corresponds to the n {\displaystyle n} th power of a {\displaystyle a} .) In infinite groups, such an n {\displaystyle n} may not exist, in which case the order of a {\displaystyle a} is said to be infinity. The order of an element equals the order of the cyclic subgroup generated by this element. More sophisticated counting techniques, for example, counting cosets, yield more precise statements about finite groups: Lagrange's Theorem states that for a finite group G {\displaystyle G} the order of any finite subgroup H {\displaystyle H} divides the order of G {\displaystyle G} . The Sylow theorems give a partial converse. The dihedral group D 4 {\displaystyle \mathrm {D} _{4}} of symmetries of a square is a finite group of order 8. In this group, the order of r 1 {\displaystyle r_{1}} is 4, as is the order of the subgroup R {\displaystyle R} that this element generates. The order of the reflection elements f v {\displaystyle f_{\mathrm {v} }} etc. is 2. Both orders divide 8, as predicted by Lagrange's theorem. The groups F p × {\displaystyle \mathbb {F} _{p}^{\times }} of multiplication modulo a prime p {\displaystyle p} have order p − 1 {\displaystyle p-1} . === Finite abelian groups === Any finite abelian group is isomorphic to a product of finite cyclic groups; this statement is part of the fundamental theorem of finitely generated abelian groups. Any group of prime order p {\displaystyle p} is isomorphic to the cyclic group Z p {\displaystyle \mathrm {Z} _{p}} (a consequence of Lagrange's theorem). Any group of order p 2 {\displaystyle p^{2}} is abelian, isomorphic to Z p 2 {\displaystyle \mathrm {Z} _{p^{2}}} or Z p × Z p {\displaystyle \mathrm {Z} _{p}\times \mathrm {Z} _{p}} . But there exist nonabelian groups of order p 3 {\displaystyle p^{3}} ; the dihedral group D 4 {\displaystyle \mathrm {D} _{4}} of order 2 3 {\displaystyle 2^{3}} above is an example. === Simple groups === When a group G {\displaystyle G} has a normal subgroup N {\displaystyle N} other than { 1 } {\displaystyle \{1\}} and G {\displaystyle G} itself, questions about G {\displaystyle G} can sometimes be reduced to questions about N {\displaystyle N} and G / N {\displaystyle G/N} . A nontrivial group is called simple if it has no such normal subgroup. Finite simple groups are to finite groups as prime numbers are to positive integers: they serve as building blocks, in a sense made precise by the Jordan–Hölder theorem. === Classification of finite simple groups === Computer algebra systems have been used to list all groups of order up to 2000. But classifying all finite groups is a problem considered too hard to be solved. The classification of all finite simple groups was a major achievement in contemporary group theory. There are several infinite families of such groups, as well as 26 "sporadic groups" that do not belong to any of the families. The largest sporadic group is called the monster group. The monstrous moonshine conjectures, proved by Richard Borcherds, relate the monster group to certain modular functions. The gap between the classification of simple groups and the classification of all groups lies in the extension problem. == Groups with additional structure == An equivalent definition of group consists of replacing the "there exist" part of the group axioms by operations whose result is the element that must exist. 
So, a group is a set G {\displaystyle G} equipped with a binary operation G × G → G {\displaystyle G\times G\rightarrow G} (the group operation), a unary operation G → G {\displaystyle G\rightarrow G} (which provides the inverse) and a nullary operation, which has no operand and results in the identity element. Otherwise, the group axioms are exactly the same. This variant of the definition avoids existential quantifiers and is used in computing with groups and for computer-aided proofs. This way of defining groups lends itself to generalizations such as the notion of group object in a category. Briefly, this is an object with morphisms that mimic the group axioms. === Topological groups === Some topological spaces may be endowed with a group law. In order for the group law and the topology to interweave well, the group operations must be continuous functions; informally, g ⋅ h {\displaystyle g\cdot h} and g − 1 {\displaystyle g^{-1}} must not vary wildly if g {\displaystyle g} and h {\displaystyle h} vary only a little. Such groups are called topological groups, and they are the group objects in the category of topological spaces. The most basic examples are the group of real numbers under addition and the group of nonzero real numbers under multiplication. Similar examples can be formed from any other topological field, such as the field of complex numbers or the field of p-adic numbers. These examples are locally compact, so they have Haar measures and can be studied via harmonic analysis. Other locally compact topological groups include the group of points of an algebraic group over a local field or adele ring; these are basic to number theory. Galois groups of infinite algebraic field extensions are equipped with the Krull topology, which plays a role in infinite Galois theory. A generalization used in algebraic geometry is the étale fundamental group. === Lie groups === A Lie group is a group that also has the structure of a differentiable manifold; informally, this means that it looks locally like a Euclidean space of some fixed dimension. Again, the definition requires the additional structure, here the manifold structure, to be compatible: the multiplication and inverse maps are required to be smooth. A standard example is the general linear group introduced above: it is an open subset of the space of all n {\displaystyle n} -by- n {\displaystyle n} matrices, because it is given by the inequality det ( A ) ≠ 0 , {\displaystyle \det(A)\neq 0,} where A {\displaystyle A} denotes an n {\displaystyle n} -by- n {\displaystyle n} matrix. Lie groups are of fundamental importance in modern physics: Noether's theorem links continuous symmetries to conserved quantities. Rotations, as well as translations in space and time, are basic symmetries of the laws of mechanics. They can, for instance, be used to construct simple models—imposing, say, axial symmetry on a situation will typically lead to significant simplification in the equations one needs to solve to provide a physical description. Another example is the group of Lorentz transformations, which relate measurements of time and velocity of two observers in motion relative to each other. They can be deduced in a purely group-theoretical way, by expressing the transformations as a rotational symmetry of Minkowski space. The latter serves—in the absence of significant gravitation—as a model of spacetime in special relativity. The full symmetry group of Minkowski space, i.e., including translations, is known as the Poincaré group. 
By the above, it plays a pivotal role in special relativity and, by implication, for quantum field theories. Symmetries that vary with location are central to the modern description of physical interactions with the help of gauge theory. An important example of a gauge theory is the Standard Model, which describes three of the four known fundamental forces and classifies all known elementary particles. == Generalizations == More general structures may be defined by relaxing some of the axioms defining a group. The table gives a list of several structures generalizing groups. For example, if the requirement that every element has an inverse is eliminated, the resulting algebraic structure is called a monoid. The natural numbers N {\displaystyle \mathbb {N} } (including zero) under addition form a monoid, as do the nonzero integers under multiplication ( Z ∖ { 0 } , ⋅ ) {\displaystyle (\mathbb {Z} \smallsetminus \{0\},\cdot )} . Adjoining inverses of all elements of the monoid ( Z ∖ { 0 } , ⋅ ) {\displaystyle (\mathbb {Z} \smallsetminus \{0\},\cdot )} produces a group ( Q ∖ { 0 } , ⋅ ) {\displaystyle (\mathbb {Q} \smallsetminus \{0\},\cdot )} , and likewise adjoining inverses to any (abelian) monoid M {\displaystyle M} produces a group known as the Grothendieck group of M {\displaystyle M} . A group can be thought of as a small category with one object x {\displaystyle x} in which every morphism is an isomorphism: given such a category, the set Hom ( x , x ) {\displaystyle \operatorname {Hom} (x,x)} is a group; conversely, given a group G {\displaystyle G} , one can build a small category with one object x {\displaystyle x} in which Hom ( x , x ) ≃ G {\displaystyle \operatorname {Hom} (x,x)\simeq G} . More generally, a groupoid is any small category in which every morphism is an isomorphism. In a groupoid, the set of all morphisms in the category is usually not a group, because the composition is only partially defined: f g {\displaystyle fg} is defined only when the source of f {\displaystyle f} matches the target of g {\displaystyle g} . Groupoids arise in topology (for instance, the fundamental groupoid) and in the theory of stacks. Finally, it is possible to generalize any of these concepts by replacing the binary operation with an n-ary operation (i.e., an operation taking n arguments, for some nonnegative integer n). With the proper generalization of the group axioms, this gives a notion of n-ary group. == See also == List of group theory topics == Notes == == Citations == == References == == External links == Weisstein, Eric W., "Group", MathWorld
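As a concrete companion to the Generalizations section above, the following Python sketch (an added illustration, not part of the article; the function names are ad hoc) carries out the adjunction of inverses to the monoid of natural numbers under addition, i.e. the Grothendieck group construction, by working with pairs (a, b) read as the formal difference a − b.

```python
# Elements of the Grothendieck group of (N, +) are pairs (a, b) of naturals,
# read as "a - b", with (a, b) identified with (c, d) exactly when a + d == b + c.

def normalize(pair):
    """Canonical representative of the class of (a, b): subtract the common part."""
    a, b = pair
    m = min(a, b)
    return (a - m, b - m)

def add(p, q):
    """Group operation, inherited componentwise from addition of naturals."""
    return normalize((p[0] + q[0], p[1] + q[1]))

def inverse(p):
    """The inverse of "a - b" is "b - a" -- this is what the monoid (N, +) lacks."""
    return (p[1], p[0])

three = normalize((3, 0))          # represents the integer 3
minus_two = normalize((0, 2))      # represents -2, which does not exist in (N, +)
print(add(three, minus_two))       # (1, 0), i.e. the integer 1
print(add(three, inverse(three)))  # (0, 0), the identity element
```

Swapping the two components supplies the inverse that the original monoid lacks, which is why the construction turns the additive monoid of natural numbers into the group of integers.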
|
https://en.wikipedia.org/wiki/Group_(mathematics)
|
Structuralism is a theory in the philosophy of mathematics that holds that mathematical theories describe structures of mathematical objects. Mathematical objects are exhaustively defined by their place in such structures. Consequently, structuralism maintains that mathematical objects do not possess any intrinsic properties but are defined by their external relations in a system. For instance, structuralism holds that the number 1 is exhaustively defined by being the successor of 0 in the structure of the theory of natural numbers. By generalization of this example, any natural number is defined by its respective place in that theory. Other examples of mathematical objects might include lines and planes in geometry, or elements and operations in abstract algebra. Structuralism is an epistemologically realistic view in that it holds that mathematical statements have an objective truth value. However, its central claim only relates to what kind of entity a mathematical object is, not to what kind of existence mathematical objects or structures have (not, in other words, to their ontology). The kind of existence that mathematical objects have would be dependent on that of the structures in which they are embedded; different sub-varieties of structuralism make different ontological claims in this regard. Structuralism in the philosophy of mathematics is particularly associated with Paul Benacerraf, Geoffrey Hellman, Michael Resnik, Stewart Shapiro and James Franklin. == Historical motivation == The historical motivation for the development of structuralism derives from a fundamental problem of ontology. Since Medieval times, philosophers have argued as to whether the ontology of mathematics contains abstract objects. In the philosophy of mathematics, an abstract object is traditionally defined as an entity that: (1) exists independent of the mind; (2) exists independent of the empirical world; and (3) has eternal, unchangeable properties. Traditional mathematical Platonism maintains that some set of mathematical elements—natural numbers, real numbers, functions, relations, systems—are such abstract objects. Contrarily, mathematical nominalism denies the existence of any such abstract objects in the ontology of mathematics. In the late 19th and early 20th century, a number of anti-Platonist programs gained in popularity. These included intuitionism, formalism, and predicativism. By the mid-20th century, however, these anti-Platonist theories had a number of their own issues. This subsequently resulted in a resurgence of interest in Platonism. It was in this historic context that the motivations for structuralism developed. In 1965, Paul Benacerraf published an article entitled "What Numbers Could Not Be". Benacerraf concluded, on two principal arguments, that set-theoretic Platonism cannot succeed as a philosophical theory of mathematics. Firstly, Benacerraf argued that Platonic approaches do not pass the ontological test. He developed an argument against the ontology of set-theoretic Platonism, which is now historically referred to as Benacerraf's identification problem. Benacerraf noted that there are elementarily equivalent, set-theoretic ways of relating natural numbers to pure sets. However, if someone asks for the "true" identity statements for relating natural numbers to pure sets, then different set-theoretic methods yield contradictory identity statements when these elementarily equivalent sets are related together. This generates a set-theoretic falsehood. 
Consequently, Benacerraf inferred that this set-theoretic falsehood demonstrates it is impossible for there to be any Platonic method of reducing numbers to sets that reveals any abstract objects. Secondly, Benacerraf argued that Platonic approaches do not pass the epistemological test. Benacerraf contended that there does not exist an empirical or rational method for accessing abstract objects. If mathematical objects are not spatial or temporal, then Benacerraf infers that such objects are not accessible through the causal theory of knowledge. The fundamental epistemological problem thus arises for the Platonist to offer a plausible account of how a mathematician with a limited, empirical mind is capable of accurately accessing mind-independent, world-independent, eternal truths. It was from these considerations, the ontological argument and the epistemological argument, that Benacerraf's anti-Platonic critiques motivated the development of structuralism in the philosophy of mathematics. == Varieties == Stewart Shapiro divides structuralism into three major schools of thought. These schools are referred to as the ante rem, the in re, and the post rem. The ante rem structuralism ("before the thing"), or abstract structuralism or abstractionism (particularly associated with Michael Resnik, Stewart Shapiro, Edward N. Zalta, and Øystein Linnebo) has a similar ontology to Platonism (see also modal neo-logicism). Structures are held to have a real but abstract and immaterial existence. As such, it faces the standard epistemological problem, as noted by Benacerraf, of explaining the interaction between such abstract structures and flesh-and-blood mathematicians. The in re structuralism ("in the thing"), or modal structuralism (particularly associated with Geoffrey Hellman), is the equivalent of Aristotelian realism (realism in truth value, but anti-realism about abstract objects in ontology). Structures are held to exist inasmuch as some concrete system exemplifies them. This incurs the usual issues that some perfectly legitimate structures might accidentally happen not to exist, and that a finite physical world might not be "big" enough to accommodate some otherwise legitimate structures. The Aristotelian realism of James Franklin is also an in re structuralism, arguing that structural properties such as symmetry are instantiated in the physical world and are perceivable. In reply to the problem of uninstantiated structures that are too big to fit into the physical world, Franklin replies that other sciences can also deal with uninstantiated universals; for example, the science of color can deal with a shade of blue that happens not to occur on any real object. The post rem structuralism ("after the thing"), or eliminative structuralism (particularly associated with Paul Benacerraf), is anti-realist about structures in a way that parallels nominalism. Like nominalism, the post rem approach denies the existence of abstract mathematical objects with properties other than their place in a relational structure. According to this view, mathematical systems exist, and have structural features in common. If something is true of a structure, it will be true of all systems exemplifying the structure. However, it is merely instrumental to talk of structures being "held in common" between systems: they in fact have no independent existence. 
== See also == Abstract object theory Foundations of mathematics Univalent foundations Aristotelian realist philosophy of mathematics Precursors Nicolas Bourbaki == References == == Bibliography == == External links == Mathematical Structuralism, Internet Encyclopaedia of Philosophy Abstractionism, Internet Encyclopaedia of Philosophy Foundations of Structuralism research project, University of Bristol, UK
|
https://en.wikipedia.org/wiki/Structuralism_(philosophy_of_mathematics)
|
Experimental mathematics is an approach to mathematics in which computation is used to investigate mathematical objects and identify properties and patterns. It has been defined as "that branch of mathematics that concerns itself ultimately with the codification and transmission of insights within the mathematical community through the use of experimental (in either the Galilean, Baconian, Aristotelian or Kantian sense) exploration of conjectures and more informal beliefs and a careful analysis of the data acquired in this pursuit." As expressed by Paul Halmos: "Mathematics is not a deductive science—that's a cliché. When you try to prove a theorem, you don't just list the hypotheses, and then start to reason. What you do is trial and error, experimentation, guesswork. You want to find out what the facts are, and what you do is in that respect similar to what a laboratory technician does." == History == Mathematicians have always practiced experimental mathematics. Existing records of early mathematics, such as Babylonian mathematics, typically consist of lists of numerical examples illustrating algebraic identities. However, modern mathematics, beginning in the 17th century, developed a tradition of publishing results in a final, formal and abstract presentation. The numerical examples that may have led a mathematician to originally formulate a general theorem were not published, and were generally forgotten. Experimental mathematics as a separate area of study re-emerged in the twentieth century, when the invention of the electronic computer vastly increased the range of feasible calculations, with a speed and precision far greater than anything available to previous generations of mathematicians. A significant milestone and achievement of experimental mathematics was the discovery in 1995 of the Bailey–Borwein–Plouffe formula for the binary digits of π. This formula was discovered not by formal reasoning, but instead by numerical searches on a computer; only afterwards was a rigorous proof found. == Objectives and uses == The objectives of experimental mathematics are "to generate understanding and insight; to generate and confirm or confront conjectures; and generally to make mathematics more tangible, lively and fun for both the professional researcher and the novice". The uses of experimental mathematics have been defined as follows: Gaining insight and intuition. Discovering new patterns and relationships. Using graphical displays to suggest underlying mathematical principles. Testing and especially falsifying conjectures. Exploring a possible result to see if it is worth formal proof. Suggesting approaches for formal proof. Replacing lengthy hand derivations with computer-based derivations. Confirming analytically derived results. == Tools and techniques == Experimental mathematics makes use of numerical methods to calculate approximate values for integrals and infinite series. Arbitrary precision arithmetic is often used to establish these values to a high degree of precision – typically 100 significant figures or more. Integer relation algorithms are then used to search for relations between these values and mathematical constants. Working with high precision values reduces the possibility of mistaking a mathematical coincidence for a true relation. A formal proof of a conjectured relation will then be sought – it is often easier to find a formal proof once the form of a conjectured relation is known. 
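To illustrate the computational workflow described above, here is a short Python sketch (an added illustration, not from the article) that sums the Bailey–Borwein–Plouffe series for π mentioned earlier. Note that it only approximates π by a partial sum using exact rational arithmetic; it does not demonstrate the formula's celebrated digit-extraction property.

```python
from fractions import Fraction

def bbp_pi(terms=12):
    """Partial sum of the Bailey-Borwein-Plouffe series for pi."""
    total = Fraction(0)
    for k in range(terms):
        total += Fraction(1, 16**k) * (
            Fraction(4, 8 * k + 1)
            - Fraction(2, 8 * k + 4)
            - Fraction(1, 8 * k + 5)
            - Fraction(1, 8 * k + 6)
        )
    return total

# The series converges very quickly: a dozen terms already exceed double precision.
print(float(bbp_pi()))   # 3.141592653589793...
```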
If a counterexample is being sought or a large-scale proof by exhaustion is being attempted, distributed computing techniques may be used to divide the calculations between multiple computers. Frequent use is made of general mathematical software or domain-specific software written for attacks on problems that require high efficiency. Experimental mathematics software usually includes error detection and correction mechanisms, integrity checks and redundant calculations designed to minimise the possibility of results being invalidated by a hardware or software error. == Applications and examples == Applications and examples of experimental mathematics include: Searching for a counterexample to a conjecture Roger Frye used experimental mathematics techniques to find the smallest counterexample to Euler's sum of powers conjecture. The ZetaGrid project was set up to search for a counterexample to the Riemann hypothesis. Tomás Oliveira e Silva searched for a counterexample to the Collatz conjecture. Finding new examples of numbers or objects with particular properties The Great Internet Mersenne Prime Search is searching for new Mersenne primes. The Great Periodic Path Hunt is searching for new periodic paths. distributed.net's OGR project searched for optimal Golomb rulers. The PrimeGrid project is searching for the smallest Riesel and Sierpiński numbers. Finding serendipitous numerical patterns Edward Lorenz found the Lorenz attractor, an early example of a chaotic dynamical system, by investigating anomalous behaviours in a numerical weather model. The Ulam spiral was discovered by accident. The pattern in the Ulam numbers was discovered by accident. Mitchell Feigenbaum's discovery of the Feigenbaum constant was based initially on numerical observations, followed by a rigorous proof. Use of computer programs to check a large but finite number of cases to complete a computer-assisted proof by exhaustion Thomas Hales's proof of the Kepler conjecture. Various proofs of the four colour theorem. Clement Lam's proof of the non-existence of a finite projective plane of order 10. Gary McGuire proved a minimum uniquely solvable Sudoku requires 17 clues. Symbolic validation (via computer algebra) of conjectures to motivate the search for an analytical proof Solutions to a special case of the quantum three-body problem known as the hydrogen molecule-ion were found using standard quantum chemistry basis sets before it was realized that they all lead to the same unique analytical solution in terms of a generalization of the Lambert W function. Related to this work is the isolation of a previously unknown link between gravity theory and quantum mechanics in lower dimensions (see quantum gravity and references therein). In the realm of relativistic many-bodied mechanics, namely the time-symmetric Wheeler–Feynman absorber theory: the equivalence between an advanced Liénard–Wiechert potential of particle j acting on particle i and the corresponding potential for particle i acting on particle j was demonstrated exhaustively to order 1 / c 10 {\displaystyle 1/c^{10}} before being proved mathematically. The Wheeler–Feynman theory has regained interest because of quantum nonlocality. In the realm of linear optics, verification of the series expansion of the envelope of the electric field for ultrashort light pulses travelling in non-isotropic media. Previous expansions had been incomplete: the outcome revealed an extra term vindicated by experiment. 
Evaluation of infinite series, infinite products and integrals (also see symbolic integration), typically by carrying out a high precision numerical calculation, and then using an integer relation algorithm (such as the Inverse Symbolic Calculator) to find a linear combination of mathematical constants that matches this value. For example, the following identity was rediscovered by Enrico Au-Yeung, a student of Jonathan Borwein, using a computer search and the PSLQ algorithm in 1993: ∑ k = 1 ∞ 1 k 2 ( 1 + 1 2 + 1 3 + ⋯ + 1 k ) 2 = 17 π 4 360 . {\displaystyle {\begin{aligned}\sum _{k=1}^{\infty }{\frac {1}{k^{2}}}\left(1+{\frac {1}{2}}+{\frac {1}{3}}+\cdots +{\frac {1}{k}}\right)^{2}={\frac {17\pi ^{4}}{360}}.\end{aligned}}} (A numerical check of this identity is sketched below.) Visual investigations In Indra's Pearls, David Mumford and others investigated various properties of Möbius transformations and the Schottky group using computer-generated images of the groups, which furnished convincing evidence for many conjectures and lures to further exploration. == Plausible but false examples == Some plausible relations hold to a high degree of accuracy, but are still not true. One example is: ∫ 0 ∞ cos ( 2 x ) ∏ n = 1 ∞ cos ( x n ) d x = π 8 . {\displaystyle \int _{0}^{\infty }\cos(2x)\prod _{n=1}^{\infty }\cos \left({\frac {x}{n}}\right)\mathrm {d} x={\frac {\pi }{8}}.} The two sides of this expression actually differ after the 42nd decimal place. Another example is that the maximum height (maximum absolute value of coefficients) of all the factors of x^n − 1 appears to be the same as the height of the nth cyclotomic polynomial. This was shown by computer to be true for n < 10000 and was expected to be true for all n. However, a larger computer search showed that this equality fails to hold for n = 14235, when the height of the nth cyclotomic polynomial is 2, but the maximum height of the factors is 3. == Practitioners == The following mathematicians and computer scientists have made significant contributions to the field of experimental mathematics: == See also == Borwein integral Computer-aided proof Proofs and Refutations Experimental Mathematics (journal) Institute for Experimental Mathematics == References == == External links == Experimental Mathematics (Journal) Centre for Experimental and Constructive Mathematics (CECM) at Simon Fraser University Collaborative Group for Research in Mathematics Education at University of Southampton Recognizing Numerical Constants by David H. Bailey and Simon Plouffe Psychology of Experimental Mathematics Experimental Mathematics Website (Links and resources) The Great Periodic Path Hunt Website (Links and resources) An Algorithm for the Ages: PSLQ, A Better Way to Find Integer Relations (Alternative link Archived 2021-02-13 at the Wayback Machine) Experimental Algorithmic Information Theory Sample Problems of Experimental Mathematics by David H. Bailey and Jonathan M. Borwein Ten Problems in Experimental Mathematics Archived 2011-06-10 at the Wayback Machine by David H. Bailey, Jonathan M. Borwein, Vishaal Kapoor, and Eric W. Weisstein Institute for Experimental Mathematics Archived 2015-02-10 at the Wayback Machine at University of Duisburg-Essen
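The following Python sketch (an added illustration, not part of the article) is the numerical check referred to above: it compares a partial sum of the Au-Yeung series with 17π⁴/360. The series converges slowly, so the agreement shown is only to a few decimal places; a serious verification would use many more terms, higher-precision arithmetic, or series acceleration.

```python
import math

def au_yeung_partial_sum(n_terms=200_000):
    """Partial sum of sum_{k>=1} (1/k^2) * (1 + 1/2 + ... + 1/k)^2."""
    total, harmonic = 0.0, 0.0
    for k in range(1, n_terms + 1):
        harmonic += 1.0 / k                    # running harmonic number H_k
        total += harmonic * harmonic / (k * k)
    return total

conjectured_value = 17 * math.pi**4 / 360      # approximately 4.59987
print(au_yeung_partial_sum(), conjectured_value)
```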
|
https://en.wikipedia.org/wiki/Experimental_mathematics
|
In mathematics, an element (or member) of a set is any one of the distinct objects that belong to that set. For example, given a set called A containing the first four positive integers ( A = { 1 , 2 , 3 , 4 } {\displaystyle A=\{1,2,3,4\}} ), one could say that "3 is an element of A", expressed notationally as 3 ∈ A {\displaystyle 3\in A} . == Sets == Writing A = { 1 , 2 , 3 , 4 } {\displaystyle A=\{1,2,3,4\}} means that the elements of the set A are the numbers 1, 2, 3 and 4. Sets of elements of A, for example { 1 , 2 } {\displaystyle \{1,2\}} , are subsets of A. Sets can themselves be elements. For example, consider the set B = { 1 , 2 , { 3 , 4 } } {\displaystyle B=\{1,2,\{3,4\}\}} . The elements of B are not 1, 2, 3, and 4. Rather, there are only three elements of B, namely the numbers 1 and 2, and the set { 3 , 4 } {\displaystyle \{3,4\}} . The elements of a set can be anything. For example, the elements of the set C = { r e d , 12 , B } {\displaystyle C=\{\mathrm {\color {Red}red} ,\mathrm {12} ,B\}} are the color red, the number 12, and the set B. In logical terms, ( x ∈ y ) ↔ ∀ x [ P x = y ] : x ∈ D y {\displaystyle (x\in y)\leftrightarrow \forall x[P_{x}=y]:x\in {\mathfrak {D}}y} . == Notation and terminology == The binary relation "is an element of", also called set membership, is denoted by the symbol "∈". Writing x ∈ A {\displaystyle x\in A} means that "x is an element of A". Equivalent expressions are "x is a member of A", "x belongs to A", "x is in A" and "x lies in A". The expressions "A includes x" and "A contains x" are also used to mean set membership, although some authors use them to mean instead "x is a subset of A". Logician George Boolos strongly urged that "contains" be used for membership only, and "includes" for the subset relation only. For the relation ∈ , the converse relation ∈T may be written A ∋ x {\displaystyle A\ni x} meaning "A contains or includes x". The negation of set membership is denoted by the symbol "∉". Writing x ∉ A {\displaystyle x\notin A} means that "x is not an element of A". The symbol ∈ was first used by Giuseppe Peano, in his 1889 work Arithmetices principia, nova methodo exposita. Here he wrote on page X: Signum ∈ significat est. Ita a ∈ b legitur a est quoddam b; … which means The symbol ∈ means is. So a ∈ b is read as a is a certain b; … The symbol itself is a stylized lowercase Greek letter epsilon ("ϵ"), the first letter of the word ἐστί, which means "is". == Examples == Using the sets defined above, namely A = {1, 2, 3, 4}, B = {1, 2, {3, 4}} and C = {red, 12, B}, the following statements are true: 2 ∈ A 5 ∉ A {3, 4} ∈ B 3 ∉ B 4 ∉ B yellow ∉ C == Cardinality of sets == The number of elements in a particular set is a property known as cardinality; informally, this is the size of a set. In the above examples, the cardinality of the set A is 4, while the cardinalities of set B and set C are both 3. An infinite set is a set with an infinite number of elements, while a finite set is a set with a finite number of elements. The above examples are examples of finite sets. An example of an infinite set is the set of positive integers {1, 2, 3, 4, ...}. == Formal relation == As a relation, set membership must have a domain and a range. Conventionally the domain is called the universe denoted U. The range is the set of subsets of U called the power set of U and denoted P(U). Thus the relation ∈ {\displaystyle \in } is a subset of U × P(U). The converse relation ∋ {\displaystyle \ni } is a subset of P(U) × U. 
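As a small, concrete companion to the article above (an added illustration, not part of it), Python's built-in sets mirror the membership, subset and cardinality notions directly; the inner set {3, 4} has to be written as a frozenset because the elements of a Python set must be hashable.

```python
# The sets A and B from the article, with {3, 4} as a frozenset so it can be an element.
A = {1, 2, 3, 4}
B = {1, 2, frozenset({3, 4})}

print(3 in A)                   # True:  3 ∈ A
print(5 in A)                   # False: 5 ∉ A
print(frozenset({3, 4}) in B)   # True:  {3, 4} ∈ B
print(3 in B)                   # False: 3 ∉ B, since {3, 4} is a single element of B
print({1, 2} <= A)              # True:  {1, 2} ⊆ A (subset, not membership)
print(len(A), len(B))           # 4 3 -- the cardinalities of A and B
```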
== See also == Identity element Singleton (mathematics) == References == == Further reading == Halmos, Paul R. (1974) [1960], Naive Set Theory, Undergraduate Texts in Mathematics (Hardcover ed.), NY: Springer-Verlag, ISBN 0-387-90092-6 - "Naive" means that it is not fully axiomatized, not that it is silly or easy (Halmos's treatment is neither). Jech, Thomas (2002), "Set Theory", Stanford Encyclopedia of Philosophy, Metaphysics Research Lab, Stanford University Suppes, Patrick (1972) [1960], Axiomatic Set Theory, NY: Dover Publications, Inc., ISBN 0-486-61630-4 - Both the notion of set (a collection of members), membership or element-hood, the axiom of extension, the axiom of separation, and the union axiom (Suppes calls it the sum axiom) are needed for a more thorough understanding of "set element".
|
https://en.wikipedia.org/wiki/Element_(mathematics)
|
Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions. In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma. In other words, Further Mathematics refers to additional, more advanced study beyond a standard advanced-level mathematics course. == United Kingdom == === Background === A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, where 4 exists for the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles. The structure of the qualification varies between exam boards. With regard to Mathematics degrees, most universities do not require Further Mathematics, and may incorporate foundation math modules or offer "catch-up" classes covering any additional content. Exceptions are the University of Warwick; the University of Cambridge, which requires Further Mathematics to at least AS level; University College London, which requires or recommends an A2 in Further Maths for its maths courses; and Imperial College, which requires an A in A-level Further Maths. Other universities may recommend it or may promise lower offers in return. Some schools and colleges may not offer Further Mathematics, but online resources are available. About 60% of the cohort obtains an "A" grade, but students choosing the subject are assumed to be more proficient in mathematics, and there is considerable overlap with the topics of the base A-level mathematics courses. Some medicine courses do not count maths and further maths as separate subjects for the purposes of making offers. This is due to the overlap in content, and the potentially narrow education a candidate with maths, further maths and just one other subject may have. === Support === There are numerous sources of support for both teachers and students. The AMSP (formerly FMSP) is a government-funded organisation that offers professional development and enrichment activities, and is a source of additional materials via its website. Registering with AMSP gives access to Integral, another source of both teaching and learning materials hosted by Mathematics Education Innovation (MEI). Underground Mathematics is another resource in active development which reflects the emphasis on problem solving and reasoning in the UK curriculum. A collection of tasks for post-16 mathematics can also be found on the NRICH site. == Australia (Victoria) == In contrast with other Further Mathematics courses, Further Maths as part of the VCE is the easiest of the three VCE mathematics subjects. Any student wishing to undertake tertiary studies in areas such as Science, Engineering, Commerce, Economics and some Information Technology courses must undertake one or both of the other two VCE maths subjects—Mathematical Methods or Specialist Mathematics. 
The Further Mathematics syllabus in VCE consists of a compulsory core, which all students undertake, plus two modules chosen by the student (or usually by the school or teacher) from a list of four. The core covers Univariate Data, Bivariate Data, Time Series, Number Patterns and Business-Related Mathematics. The optional modules are Geometry and Trigonometry, Graphs and Relations, Networks and Decision Mathematics, or Matrices. == Singapore == Further Mathematics is available as a second and higher mathematics course at A Level (now H2), in addition to the Mathematics course at A Level. Students can pursue this subject if they have A2 or better in 'O' Level Mathematics and Additional Mathematics, depending on the school. Some topics covered in this course include mathematical induction, complex numbers, polar curves and conic sections, differential equations, recurrence relations, matrices and linear spaces, numerical methods, random variables, hypothesis testing and confidence intervals. == International Baccalaureate Diploma == Further Mathematics, as studied within the International Baccalaureate Diploma Programme, was a Higher Level (HL) course that could be taken in conjunction with Mathematics HL or on its own. It consisted of studying all four of the options in Mathematics HL, plus two additional topics. Topics studied in Further Mathematics included: Topic 1 - Linear algebra - studies on matrices, vector spaces, linear and geometric transformations Topic 2 - Geometry - a closer look at triangles, circles and conic sections Topic 3 - Statistics and probability - the geometric and negative binomial distributions, unbiased estimators, statistical hypothesis testing and an introduction to bivariate distributions Topic 4 - Sets, relations and groups - algebra of sets, ordered pairs, binary operations and group homomorphism Topic 5 - Calculus - infinite sequences and series, limits, improper integrals and various first-order ordinary differential equations Topic 6 - Discrete mathematics - complete mathematical induction, linear Diophantine equations, Fermat's little theorem, route inspection problem and recurrence relations From 2019, the course has been discontinued and replaced by the following modules: Mathematics: analysis and approaches SL Mathematics: analysis and approaches HL Mathematics: applications and interpretation SL Mathematics: applications and interpretation HL == See also == Additional Mathematics Advanced level mathematics == References == == External links == The Further Mathematics Support Programme Mechanics M1 Material AMSP (Advanced Math Support Program) Integral (High level support for AS/A level Maths & Further Maths) Underground Mathematics (Resources on A level mathematics)
|
https://en.wikipedia.org/wiki/Further_Mathematics
|
A mathematical symbol is a figure or a combination of figures that is used to represent a mathematical object, an action on mathematical objects, a relation between mathematical objects, or for structuring the other symbols that occur in a formula. As formulas are entirely constituted with symbols of various types, many symbols are needed for expressing all mathematics. The most basic symbols are the decimal digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9), and the letters of the Latin alphabet. The decimal digits are used for representing numbers through the Hindu–Arabic numeral system. Historically, upper-case letters were used for representing points in geometry, and lower-case letters were used for variables and constants. Letters are used for representing many other types of mathematical object. As the number of these types has increased, the Greek alphabet and some Hebrew letters have also come to be used. For more symbols, other typefaces are also used, mainly boldface a , A , b , B , … {\displaystyle \mathbf {a,A,b,B} ,\ldots } , script typeface A , B , … {\displaystyle {\mathcal {A,B}},\ldots } (the lower-case script face is rarely used because of the possible confusion with the standard face), German fraktur a , A , b , B , … {\displaystyle {\mathfrak {a,A,b,B}},\ldots } , and blackboard bold N , Z , Q , R , C , H , F q {\displaystyle \mathbb {N,Z,Q,R,C,H,F} _{q}} (the other letters are rarely used in this face, or their use is unconventional). It is commonplace to use alphabets, fonts and typefaces to group symbols by type. The use of specific Latin and Greek letters as symbols for denoting mathematical objects is not described in this article. For such uses, see Variable § Conventional variable names and List of mathematical constants. However, some symbols that are described here have the same shape as the letter from which they are derived, such as ∏ {\displaystyle \textstyle \prod {}} and ∑ {\displaystyle \textstyle \sum {}} . These letters alone are not sufficient for the needs of mathematicians, and many other symbols are used. Some take their origin in punctuation marks and diacritics traditionally used in typography; others by deforming letter forms, as in the cases of ∈ {\displaystyle \in } and ∀ {\displaystyle \forall } . Others, such as + and =, were specially designed for mathematics. == Layout of this article == Normally, entries of a glossary are structured by topics and sorted alphabetically. This is not possible here, as there is no natural order on symbols, and many symbols are used in different parts of mathematics with different meanings, often completely unrelated. Therefore, some arbitrary choices had to be made, which are summarized below. The article is split into sections that are sorted by an increasing level of technicality. That is, the first sections contain the symbols that are encountered in most mathematical texts, and that are supposed to be known even by beginners. On the other hand, the last sections contain symbols that are specific to some area of mathematics and are ignored outside these areas. However, the long section on brackets has been placed near to the end, although most of its entries are elementary: this makes it easier to search for a symbol entry by scrolling. Most symbols have multiple meanings that are generally distinguished either by the area of mathematics where they are used or by their syntax, that is, by their position inside a formula and the nature of the other parts of the formula that are close to them. 
As readers may not be aware of the area of mathematics to which the symbol that they are looking for is related, the different meanings of a symbol are grouped in the section corresponding to their most common meaning. When the meaning depends on the syntax, a symbol may have different entries depending on the syntax. For summarizing the syntax in the entry name, the symbol ◻ {\displaystyle \Box } is used for representing the neighboring parts of a formula that contains the symbol. See § Brackets for examples of use. Most symbols have two printed versions. They can be displayed as Unicode characters, or in LaTeX format. With the Unicode version, using search engines and copy-pasting are easier. On the other hand, the LaTeX rendering is often much better (more aesthetic), and is generally considered a standard in mathematics. Therefore, in this article, the Unicode version of the symbols is used (when possible) for labelling their entry, and the LaTeX version is used in their description. So, for finding how to type a symbol in LaTeX, it suffices to look at the source of the article. For most symbols, the entry name is the corresponding Unicode symbol. So, for searching the entry of a symbol, it suffices to type or copy the Unicode symbol into the search textbox. Similarly, when possible, the entry name of a symbol is also an anchor, which allows linking easily from another Wikipedia article. When an entry name contains special characters such as [,], and |, there is also an anchor, but one has to look at the article source to know it. Finally, when there is an article on the symbol itself (not its mathematical meaning), it is linked to in the entry name. == Arithmetic operators == + (plus sign) 1. Denotes addition and is read as plus; for example, 3 + 2. 2. Denotes that a number is positive and is read as plus. Redundant, but sometimes used for emphasizing that a number is positive, specially when other numbers in the context are or may be negative; for example, +2. 3. Sometimes used instead of ⊔ {\displaystyle \sqcup } for a disjoint union of sets. − (minus sign) 1. Denotes subtraction and is read as minus; for example, 3 − 2. 2. Denotes the additive inverse and is read as minus, the negative of, or the opposite of; for example, −2. 3. Also used in place of \ for denoting the set-theoretic complement; see \ in § Set theory. × (multiplication sign) 1. In elementary arithmetic, denotes multiplication, and is read as times; for example, 3 × 2. 2. In geometry and linear algebra, denotes the cross product. 3. In set theory and category theory, denotes the Cartesian product and the direct product. See also × in § Set theory. · (dot) 1. Denotes multiplication and is read as times; for example, 3 ⋅ 2. 2. In geometry and linear algebra, denotes the dot product. 3. Placeholder used for replacing an indeterminate element. For example, saying "the absolute value is denoted by | · |" is perhaps clearer than saying that it is denoted as | |. ± (plus–minus sign) 1. Denotes either a plus sign or a minus sign. 2. Denotes the range of values that a measured quantity may have; for example, 10 ± 2 denotes an unknown value that lies between 8 and 12. ∓ (minus-plus sign) Used paired with ±, denotes the opposite sign; that is, + if ± is −, and − if ± is +. ÷ (division sign) Widely used for denoting division in Anglophone countries, it is no longer in common use in mathematics and its use is "not recommended". In some countries, it can indicate subtraction. : (colon) 1. Denotes the ratio of two quantities. 2. 
In some countries, may denote division. 3. In set-builder notation, it is used as a separator meaning "such that"; see {□ : □}. / (slash) 1. Denotes division and is read as divided by or over. Often replaced by a horizontal bar. For example, 3 / 2 or 3 2 {\displaystyle {\frac {3}{2}}} . 2. Denotes a quotient structure. For example, quotient set, quotient group, quotient category, etc. 3. In number theory and field theory, F / E {\displaystyle F/E} denotes a field extension, where F is an extension field of the field E. 4. In probability theory, denotes a conditional probability. For example, P ( A / B ) {\displaystyle P(A/B)} denotes the probability of A, given that B occurs. Usually denoted P ( A ∣ B ) {\displaystyle P(A\mid B)} : see "|". √ (square-root symbol) Denotes square root and is read as the square root of. Rarely used in modern mathematics without a horizontal bar delimiting the width of its argument (see the next item). For example, √2. √ (radical symbol) 1. Denotes square root and is read as the square root of. For example, 3 + 2 {\displaystyle {\sqrt {3+2}}} . 2. With an integer greater than 2 as a left superscript, denotes an nth root. For example, 3 7 {\displaystyle {\sqrt[{7}]{3}}} denotes the 7th root of 3. ^ (caret) 1. Exponentiation is normally denoted with a superscript. However, x y {\displaystyle x^{y}} is often denoted x^y when superscripts are not easily available, such as in programming languages (including LaTeX) or plain text emails. 2. Not to be confused with ∧ == Equality, equivalence and similarity == = (equals sign) 1. Denotes equality. 2. Used for naming a mathematical object in a sentence like "let x = E {\displaystyle x=E} ", where E is an expression. See also ≝, ≜ or := {\displaystyle :=} . ≜ = d e f := {\displaystyle \triangleq \quad {\stackrel {\scriptscriptstyle \mathrm {def} }{=}}\quad :=} Any of these is sometimes used for naming a mathematical object. Thus, x ≜ E , {\displaystyle x\triangleq E,} x = d e f E , {\displaystyle x\mathrel {\stackrel {\scriptscriptstyle \mathrm {def} }{=}} E,} x := E {\displaystyle x\mathrel {:=} E} and E =: x {\displaystyle E\mathrel {=:} x} are each an abbreviation of the phrase "let x = E {\displaystyle x=E} ", where E {\displaystyle E} is an expression and x {\displaystyle x} is a variable. This is similar to the concept of assignment in computer science, which is variously denoted (depending on the programming language used) = , := , ← , … {\displaystyle =,:=,\leftarrow ,\ldots } ≠ (not-equal sign) Denotes inequality and means "not equal". ≈ The most common symbol for denoting approximate equality. For example, π ≈ 3.14159. {\displaystyle \pi \approx 3.14159.} ~ (tilde) 1. Between two numbers, either it is used instead of ≈ to mean "approximatively equal", or it means "has the same order of magnitude as". 2. Denotes the asymptotic equivalence of two functions or sequences. 3. Often used for denoting other types of similarity, for example, matrix similarity or similarity of geometric shapes. 4. Standard notation for an equivalence relation. 5. In probability and statistics, may specify the probability distribution of a random variable. For example, X ∼ N ( 0 , 1 ) {\displaystyle X\sim N(0,1)} means that the distribution of the random variable X is standard normal. 6. Notation for proportionality. See also ∝ for a less ambiguous symbol. ≡ (triple bar) 1. Denotes an identity; that is, an equality that is true whichever values are given to the variables occurring in it. 2. 
In number theory, and more specifically in modular arithmetic, denotes the congruence modulo an integer. 3. May denote a logical equivalence. ≅ {\displaystyle \cong } 1. May denote an isomorphism between two mathematical structures, and is read as "is isomorphic to". 2. In geometry, may denote the congruence of two geometric shapes (that is the equality up to a displacement), and is read "is congruent to". == Comparison == < (less-than sign) 1. Strict inequality between two numbers; means and is read as "less than". 2. Commonly used for denoting any strict order. 3. Between two groups, may mean that the first one is a proper subgroup of the second one. > (greater-than sign) 1. Strict inequality between two numbers; means and is read as "greater than". 2. Commonly used for denoting any strict order. 3. Between two groups, may mean that the second one is a proper subgroup of the first one. ≤ 1. Means "less than or equal to". That is, whatever A and B are, A ≤ B is equivalent to A < B or A = B. 2. Between two groups, may mean that the first one is a subgroup of the second one. ≥ 1. Means "greater than or equal to". That is, whatever A and B are, A ≥ B is equivalent to A > B or A = B. 2. Between two groups, may mean that the second one is a subgroup of the first one. ≪ and ≫ {\displaystyle \ll {\text{ and }}\gg } 1. Means "much less than" and "much greater than". Generally, much is not formally defined, but means that the lesser quantity can be neglected with respect to the other. This is generally the case when the lesser quantity is smaller than the other by one or several orders of magnitude. 2. In measure theory, μ ≪ ν {\displaystyle \mu \ll \nu } means that the measure μ {\displaystyle \mu } is absolutely continuous with respect to the measure ν {\displaystyle \nu } . ≦ {\displaystyle \leqq } A rarely used symbol, generally a synonym of ≤. ≺ and ≻ {\displaystyle \prec {\text{ and }}\succ } 1. Often used for denoting an order or, more generally, a preorder, when it would be confusing or not convenient to use < and >. 2. Sequention in asynchronous logic. == Set theory == ∅ Denotes the empty set, and is more often written ∅ {\displaystyle \emptyset } . Using set-builder notation, it may also be denoted { } {\displaystyle \{\}} . # (number sign) 1. Number of elements: # S {\displaystyle \#{}S} may denote the cardinality of the set S. An alternative notation is | S | {\displaystyle |S|} ; see | ◻ | {\displaystyle |\square |} . 2. Primorial: n # {\displaystyle n{}\#} denotes the product of the prime numbers that are not greater than n. 3. In topology, M # N {\displaystyle M\#N} denotes the connected sum of two manifolds or two knots. ∈ Denotes set membership, and is read "is in", "belongs to", or "is a member of". That is, x ∈ S {\displaystyle x\in S} means that x is an element of the set S. ∉ Means "is not in". That is, x ∉ S {\displaystyle x\notin S} means ¬ ( x ∈ S ) {\displaystyle \neg (x\in S)} . ⊂ Denotes set inclusion. However two slightly different definitions are common. 1. A ⊂ B {\displaystyle A\subset B} may mean that A is a subset of B, and is possibly equal to B; that is, every element of A belongs to B; expressed as a formula, ∀ x , x ∈ A ⇒ x ∈ B {\displaystyle \forall {}x,\,x\in A\Rightarrow x\in B} . 2. A ⊂ B {\displaystyle A\subset B} may mean that A is a proper subset of B, that is the two sets are different, and every element of A belongs to B; expressed as a formula, A ≠ B ∧ ∀ x , x ∈ A ⇒ x ∈ B {\displaystyle A\neq B\land \forall {}x,\,x\in A\Rightarrow x\in B} . 
⊆ A ⊆ B {\displaystyle A\subseteq B} means that A is a subset of B. Used for emphasizing that equality is possible, or when A ⊂ B {\displaystyle A\subset B} means that A {\displaystyle A} is a proper subset of B . {\displaystyle B.} ⊊ A ⊊ B {\displaystyle A\subsetneq B} means that A is a proper subset of B. Used for emphasizing that A ≠ B {\displaystyle A\neq B} , or when A ⊂ B {\displaystyle A\subset B} does not imply that A {\displaystyle A} is a proper subset of B . {\displaystyle B.} ⊃, ⊇, ⊋ Denote the converse relation of ⊂ {\displaystyle \subset } , ⊆ {\displaystyle \subseteq } , and ⊊ {\displaystyle \subsetneq } respectively. For example, B ⊃ A {\displaystyle B\supset A} is equivalent to A ⊂ B {\displaystyle A\subset B} . ∪ Denotes set-theoretic union, that is, A ∪ B {\displaystyle A\cup B} is the set formed by the elements of A and B together. That is, A ∪ B = { x ∣ ( x ∈ A ) ∨ ( x ∈ B ) } {\displaystyle A\cup B=\{x\mid (x\in A)\lor (x\in B)\}} . ∩ Denotes set-theoretic intersection, that is, A ∩ B {\displaystyle A\cap B} is the set formed by the elements of both A and B. That is, A ∩ B = { x ∣ ( x ∈ A ) ∧ ( x ∈ B ) } {\displaystyle A\cap B=\{x\mid (x\in A)\land (x\in B)\}} . ∖ (backslash) Set difference; that is, A ∖ B {\displaystyle A\setminus B} is the set formed by the elements of A that are not in B. Sometimes, A − B {\displaystyle A-B} is used instead; see − in § Arithmetic operators. ⊖ or △ {\displaystyle \triangle } Symmetric difference: that is, A ⊖ B {\displaystyle A\ominus B} or A △ B {\displaystyle A\operatorname {\triangle } B} is the set formed by the elements that belong to exactly one of the two sets A and B. ∁ {\displaystyle \complement } 1. With a subscript, denotes a set complement: that is, if B ⊆ A {\displaystyle B\subseteq A} , then ∁ A B = A ∖ B {\displaystyle \complement _{A}B=A\setminus B} . 2. Without a subscript, denotes the absolute complement; that is, ∁ A = ∁ U A {\displaystyle \complement A=\complement _{U}A} , where U is a set implicitly defined by the context, which contains all sets under consideration. This set U is sometimes called the universe of discourse. × (multiplication sign) See also × in § Arithmetic operators. 1. Denotes the Cartesian product of two sets. That is, A × B {\displaystyle A\times B} is the set formed by all pairs of an element of A and an element of B. 2. Denotes the direct product of two mathematical structures of the same type, which is the Cartesian product of the underlying sets, equipped with a structure of the same type. For example, direct product of rings, direct product of topological spaces. 3. In category theory, denotes the direct product (often called simply product) of two objects, which is a generalization of the preceding concepts of product. ⊔ {\displaystyle \sqcup } Denotes the disjoint union. That is, if A and B are sets then A ⊔ B = ( A × { i A } ) ∪ ( B × { i B } ) {\displaystyle A\sqcup B=\left(A\times \{i_{A}\}\right)\cup \left(B\times \{i_{B}\}\right)} is a set of pairs where iA and iB are distinct indices discriminating the members of A and B in A ⊔ B {\displaystyle A\sqcup B} . ⨆ or ∐ {\displaystyle \bigsqcup {\text{ or }}\coprod } 1. Used for the disjoint union of a family of sets, such as in ⨆ i ∈ I A i . {\textstyle \bigsqcup _{i\in I}A_{i}.} 2. Denotes the coproduct of mathematical structures or of objects in a category. == Basic logic == Several logical symbols are widely used in all mathematics, and are listed here. 
For symbols that are used only in mathematical logic, or are rarely used, see List of logic symbols. ¬ (not sign) Denotes logical negation, and is read as "not". If E is a logical predicate, ¬ E {\displaystyle \neg E} is the predicate that evaluates to true if and only if E evaluates to false. For clarity, it is often replaced by the word "not". In programming languages and some mathematical texts, it is sometimes replaced by "~" or "!", which are easier to type on some keyboards. ∨ (descending wedge) 1. Denotes the logical or, and is read as "or". If E and F are logical predicates, E ∨ F {\displaystyle E\lor F} is true if either E, F, or both are true. It is often replaced by the word "or". 2. In lattice theory, denotes the join or least upper bound operation. 3. In topology, denotes the wedge sum of two pointed spaces. ∧ (wedge) 1. Denotes the logical and, and is read as "and". If E and F are logical predicates, E ∧ F {\displaystyle E\land F} is true if E and F are both true. It is often replaced by the word "and" or the symbol "&". 2. In lattice theory, denotes the meet or greatest lower bound operation. 3. In multilinear algebra, geometry, and multivariable calculus, denotes the wedge product or the exterior product. ⊻ Exclusive or: if E and F are two Boolean variables or predicates, E ⊻ F {\displaystyle E\veebar F} denotes the exclusive or. Notations E XOR F and E ⊕ F {\displaystyle E\oplus F} are also commonly used; see ⊕. ∀ (turned A) 1. Denotes universal quantification and is read as "for all". If E is a logical predicate, ∀ x E {\displaystyle \forall x\;E} means that E is true for all possible values of the variable x. 2. Often used in plain text as an abbreviation of "for all" or "for every". ∃ 1. Denotes existential quantification and is read "there exists ... such that". If E is a logical predicate, ∃ x E {\displaystyle \exists x\;E} means that there exists at least one value of x for which E is true. 2. Often used in plain text as an abbreviation of "there exists". ∃! Denotes uniqueness quantification, that is, ∃ ! x P {\displaystyle \exists !x\;P} means "there exists exactly one x such that P (is true)". In other words, ∃ ! x P ( x ) {\displaystyle \exists !x\;P(x)} is an abbreviation of ∃ x ( P ( x ) ∧ ¬ ∃ y ( P ( y ) ∧ y ≠ x ) ) {\displaystyle \exists x\,(P(x)\,\wedge \neg \exists y\,(P(y)\wedge y\neq x))} . ⇒ 1. Denotes material conditional, and is read as "implies". If P and Q are logical predicates, P ⇒ Q {\displaystyle P\Rightarrow Q} means that if P is true, then Q is also true. Thus, P ⇒ Q {\displaystyle P\Rightarrow Q} is logically equivalent with Q ∨ ¬ P {\displaystyle Q\lor \neg P} . 2. Often used in plain text as an abbreviation of "implies". ⇔ 1. Denotes logical equivalence, and is read "is equivalent to" or "if and only if". If P and Q are logical predicates, P ⇔ Q {\displaystyle P\Leftrightarrow Q} is thus an abbreviation of ( P ⇒ Q ) ∧ ( Q ⇒ P ) {\displaystyle (P\Rightarrow Q)\land (Q\Rightarrow P)} , or of ( P ∧ Q ) ∨ ( ¬ P ∧ ¬ Q ) {\displaystyle (P\land Q)\lor (\neg P\land \neg Q)} . 2. Often used in plain text as an abbreviation of "if and only if". ⊤ (tee) 1. ⊤ {\displaystyle \top } denotes the logical predicate always true. 2. Denotes also the truth value true. 3. Sometimes denotes the top element of a bounded lattice (previous meanings are specific examples). 4. For the use as a superscript, see □⊤. ⊥ (up tack) 1. ⊥ {\displaystyle \bot } denotes the logical predicate always false. 2. Denotes also the truth value false. 3. 
Sometimes denotes the bottom element of a bounded lattice (previous meanings are specific examples). 4. In cryptography often denotes an error in place of a regular value. 5. For the use as a superscript, see □⊥. 6. For the similar symbol, see ⊥ {\displaystyle \perp } . == Blackboard bold == The blackboard bold typeface is widely used for denoting the basic number systems. These systems are often also denoted by the corresponding uppercase bold letter. A clear advantage of blackboard bold is that these symbols cannot be confused with anything else. This allows using them in any area of mathematics, without having to recall their definition. For example, if one encounters R {\displaystyle \mathbb {R} } in combinatorics, one should immediately know that this denotes the real numbers, although combinatorics does not study the real numbers (but it uses them for many proofs). N {\displaystyle \mathbb {N} } Denotes the set of natural numbers { 1 , 2 , … } , {\displaystyle \{1,2,\ldots \},} or sometimes { 0 , 1 , 2 , … } . {\displaystyle \{0,1,2,\ldots \}.} When the distinction is important and readers might assume either definition, N 1 {\displaystyle \mathbb {N} _{1}} and N 0 {\displaystyle \mathbb {N} _{0}} are used, respectively, to denote one of them unambiguously. Notation N {\displaystyle \mathbf {N} } is also commonly used. Z {\displaystyle \mathbb {Z} } Denotes the set of integers { … , − 2 , − 1 , 0 , 1 , 2 , … } . {\displaystyle \{\ldots ,-2,-1,0,1,2,\ldots \}.} It is often denoted also by Z . {\displaystyle \mathbf {Z} .} Z p {\displaystyle \mathbb {Z} _{p}} 1. Denotes the set of p-adic integers, where p is a prime number. 2. Sometimes, Z n {\displaystyle \mathbb {Z} _{n}} denotes the integers modulo n, where n is an integer greater than 0. The notation Z / n Z {\displaystyle \mathbb {Z} /n\mathbb {Z} } is also used, and is less ambiguous. Q {\displaystyle \mathbb {Q} } Denotes the set of rational numbers (fractions of two integers). It is often denoted also by Q . {\displaystyle \mathbf {Q} .} Q p {\displaystyle \mathbb {Q} _{p}} Denotes the set of p-adic numbers, where p is a prime number. R {\displaystyle \mathbb {R} } Denotes the set of real numbers. It is often denoted also by R . {\displaystyle \mathbf {R} .} C {\displaystyle \mathbb {C} } Denotes the set of complex numbers. It is often denoted also by C . {\displaystyle \mathbf {C} .} H {\displaystyle \mathbb {H} } Denotes the set of quaternions. It is often denoted also by H . {\displaystyle \mathbf {H} .} F q {\displaystyle \mathbb {F} _{q}} Denotes the finite field with q elements, where q is a prime power (including prime numbers). It is denoted also by GF(q). O {\displaystyle \mathbb {O} } Used on rare occasions to denote the set of octonions. It is often denoted also by O . {\displaystyle \mathbf {O} .} == Calculus == □' Lagrange's notation for the derivative: If f is a function of a single variable, f ′ {\displaystyle f'} , read as "f prime", is the derivative of f with respect to this variable. The second derivative is the derivative of f ′ {\displaystyle f'} , and is denoted f ″ {\displaystyle f''} . ◻ ˙ {\displaystyle {\dot {\Box }}} Newton's notation, most commonly used for the derivative with respect to time. If x is a variable depending on time, then x ˙ , {\displaystyle {\dot {x}},} read as "x dot", is its derivative with respect to time. In particular, if x represents a moving point, then x ˙ {\displaystyle {\dot {x}}} is its velocity. 
◻ ¨ {\displaystyle {\ddot {\Box }}} Newton's notation, for the second derivative: If x is a variable that represents a moving point, then x ¨ {\displaystyle {\ddot {x}}} is its acceleration. d □/d □ Leibniz's notation for the derivative, which is used in several slightly different ways. 1. If y is a variable that depends on x, then d y d x {\displaystyle \textstyle {\frac {\mathrm {d} y}{\mathrm {d} x}}} , read as "d y over d x" (commonly shortened to "d y d x"), is the derivative of y with respect to x. 2. If f is a function of a single variable x, then d f d x {\displaystyle \textstyle {\frac {\mathrm {d} f}{\mathrm {d} x}}} is the derivative of f, and d f d x ( a ) {\displaystyle \textstyle {\frac {\mathrm {d} f}{\mathrm {d} x}}(a)} is the value of the derivative at a. 3. Total derivative: If f ( x 1 , … , x n ) {\displaystyle f(x_{1},\ldots ,x_{n})} is a function of several variables that depend on x, then d f d x {\displaystyle \textstyle {\frac {\mathrm {d} f}{\mathrm {d} x}}} is the derivative of f considered as a function of x. That is, d f d x = ∑ i = 1 n ∂ f ∂ x i d x i d x {\displaystyle \textstyle {\frac {\mathrm {d} f}{dx}}=\sum _{i=1}^{n}{\frac {\partial f}{\partial x_{i}}}\,{\frac {\mathrm {d} x_{i}}{\mathrm {d} x}}} . ∂ □/∂ □ Partial derivative: If f ( x 1 , … , x n ) {\displaystyle f(x_{1},\ldots ,x_{n})} is a function of several variables, ∂ f ∂ x i {\displaystyle \textstyle {\frac {\partial f}{\partial x_{i}}}} is the derivative with respect to the ith variable considered as an independent variable, the other variables being considered as constants. 𝛿 □/𝛿 □ Functional derivative: If f ( y 1 , … , y n ) {\displaystyle f(y_{1},\ldots ,y_{n})} is a functional of several functions, δ f δ y i {\displaystyle \textstyle {\frac {\delta f}{\delta y_{i}}}} is the functional derivative with respect to the nth function considered as an independent variable, the other functions being considered constant. ◻ ¯ {\displaystyle {\overline {\Box }}} 1. Complex conjugate: If z is a complex number, then z ¯ {\displaystyle {\overline {z}}} is its complex conjugate. For example, a + b i ¯ = a − b i {\displaystyle {\overline {a+bi}}=a-bi} . 2. Topological closure: If S is a subset of a topological space T, then S ¯ {\displaystyle {\overline {S}}} is its topological closure, that is, the smallest closed subset of T that contains S. 3. Algebraic closure: If F is a field, then F ¯ {\displaystyle {\overline {F}}} is its algebraic closure, that is, the smallest algebraically closed field that contains F. For example, Q ¯ {\displaystyle {\overline {\mathbb {Q} }}} is the field of all algebraic numbers. 4. Mean value: If x is a variable that takes its values in some sequence of numbers S, then x ¯ {\displaystyle {\overline {x}}} may denote the mean of the elements of S. 5. Negation: Sometimes used to denote negation of the entire expression under the bar, particularly when dealing with Boolean algebra. For example, one of De Morgan's laws says that A ∧ B ¯ = A ¯ ∨ B ¯ {\displaystyle {\overline {A\land B}}={\overline {A}}\lor {\overline {B}}} . → 1. A → B {\displaystyle A\to B} denotes a function with domain A and codomain B. For naming such a function, one writes f : A → B {\displaystyle f:A\to B} , which is read as "f from A to B". 2. More generally, A → B {\displaystyle A\to B} denotes a homomorphism or a morphism from A to B. 3. May denote a logical implication. For the material implication that is widely used in mathematics reasoning, it is nowadays generally replaced by ⇒. 
In mathematical logic, it remains used for denoting implication, but its exact meaning depends on the specific theory that is studied. 4. Over a variable name, means that the variable represents a vector, in a context where ordinary variables represent scalars; for example, v → {\displaystyle {\overrightarrow {v}}} . Boldface ( v {\displaystyle \mathbf {v} } ) or a circumflex ( v ^ {\displaystyle {\hat {v}}} ) are often used for the same purpose. 5. In Euclidean geometry and more generally in affine geometry, P Q → {\displaystyle {\overrightarrow {PQ}}} denotes the vector defined by the two points P and Q, which can be identified with the translation that maps P to Q. The same vector can be denoted also Q − P {\displaystyle Q-P} ; see Affine space. ↦ "Maps to": Used for defining a function without having to name it. For example, x ↦ x 2 {\displaystyle x\mapsto x^{2}} is the square function. ○ 1. Function composition: If f and g are two functions, then g ∘ f {\displaystyle g\circ f} is the function such that ( g ∘ f ) ( x ) = g ( f ( x ) ) {\displaystyle (g\circ f)(x)=g(f(x))} for every value of x. 2. Hadamard product of matrices: If A and B are two matrices of the same size, then A ∘ B {\displaystyle A\circ B} is the matrix such that ( A ∘ B ) i , j = ( A ) i , j ( B ) i , j {\displaystyle (A\circ B)_{i,j}=(A)_{i,j}(B)_{i,j}} . Possibly, ∘ {\displaystyle \circ } is also used instead of ⊙ for the Hadamard product of power series. ∂ 1. Boundary of a topological subspace: If S is a subspace of a topological space, then its boundary, denoted ∂ S {\displaystyle \partial S} , is the set difference between the closure and the interior of S. 2. Partial derivative: see ∂□/∂□. ∫ 1. Without a subscript, denotes an antiderivative. For example, ∫ x 2 d x = x 3 3 + C {\displaystyle \textstyle \int x^{2}dx={\frac {x^{3}}{3}}+C} . 2. With a subscript and a superscript, or expressions placed below and above it, denotes a definite integral. For example, ∫ a b x 2 d x = b 3 − a 3 3 {\displaystyle \textstyle \int _{a}^{b}x^{2}dx={\frac {b^{3}-a^{3}}{3}}} . 3. With a subscript that denotes a curve, denotes a line integral. For example, ∫ C f = ∫ a b f ( r ( t ) ) r ′ ( t ) d t {\displaystyle \textstyle \int _{C}f=\int _{a}^{b}f(r(t))r'(t)\operatorname {d} t} , if r is a parametrization of the curve C, from a to b. ∮ Often used, typically in physics, instead of ∫ {\displaystyle \textstyle \int } for line integrals over a closed curve. ∬, ∯ Similar to ∫ {\displaystyle \textstyle \int } and ∮ {\displaystyle \textstyle \oint } for surface integrals. ∇ {\displaystyle {\boldsymbol {\nabla }}} or ∇ → {\displaystyle {\vec {\nabla }}} Nabla, the gradient, vector derivative operator ( ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) {\displaystyle \textstyle \left({\frac {\partial }{\partial x}},{\frac {\partial }{\partial y}},{\frac {\partial }{\partial z}}\right)} , also called del or grad, or the covariant derivative. ∇2 or ∇⋅∇ Laplace operator or Laplacian: ∂ 2 ∂ x 2 + ∂ 2 ∂ y 2 + ∂ 2 ∂ z 2 {\displaystyle \textstyle {\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}+{\frac {\partial ^{2}}{\partial z^{2}}}} . The forms ∇ 2 {\displaystyle \nabla ^{2}} and ∇ ⋅ ∇ {\displaystyle {\boldsymbol {\nabla }}\cdot {\boldsymbol {\nabla }}} represent the dot product of the gradient ( ∇ {\displaystyle {\boldsymbol {\nabla }}} or ∇ → {\displaystyle {\vec {\nabla }}} ) with itself. Also notated Δ (next item). 
Δ (Capital Greek letter delta—not to be confused with △ {\displaystyle \triangle } , which may denote a geometric triangle or, alternatively, the symmetric difference of two sets.) 1. Another notation for the Laplacian (see above). 2. Operator of finite difference. ∂ {\displaystyle {\boldsymbol {\partial }}} or ∂ μ {\displaystyle \partial _{\mu }} (Note: the notation ◻ {\displaystyle \Box } is not recommended for the four-gradient since both ◻ {\displaystyle \Box } and ◻ 2 {\displaystyle {\Box }^{2}} are used to denote the d'Alembertian; see below.) Quad, the 4-vector gradient operator or four-gradient, ( ∂ ∂ t , ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) {\displaystyle \textstyle \left({\frac {\partial }{\partial t}},{\frac {\partial }{\partial x}},{\frac {\partial }{\partial y}},{\frac {\partial }{\partial z}}\right)} . ◻ {\displaystyle \Box } or ◻ 2 {\displaystyle {\Box }^{2}} (here an actual box, not a placeholder) Denotes the d'Alembertian or squared four-gradient, which is a generalization of the Laplacian to four-dimensional spacetime. In flat spacetime with Euclidean coordinates, this may mean either − ∂ 2 ∂ t 2 + ∂ 2 ∂ x 2 + ∂ 2 ∂ y 2 + ∂ 2 ∂ z 2 {\displaystyle ~\textstyle -{\frac {\partial ^{2}}{\partial t^{2}}}+{\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}+{\frac {\partial ^{2}}{\partial z^{2}}}~\;} or + ∂ 2 ∂ t 2 − ∂ 2 ∂ x 2 − ∂ 2 ∂ y 2 − ∂ 2 ∂ z 2 {\displaystyle \;~\textstyle +{\frac {\partial ^{2}}{\partial t^{2}}}-{\frac {\partial ^{2}}{\partial x^{2}}}-{\frac {\partial ^{2}}{\partial y^{2}}}-{\frac {\partial ^{2}}{\partial z^{2}}}~\;} ; the sign convention must be specified. In curved spacetime (or flat spacetime with non-Euclidean coordinates), the definition is more complicated. Also called box or quabla. == Linear and multilinear algebra == ∑ (capital-sigma notation) 1. Denotes the sum of a finite number of terms, which are determined by subscripts and superscripts (which can also be placed below and above), such as in ∑ i = 1 n i 2 {\displaystyle \textstyle \sum _{i=1}^{n}i^{2}} or ∑ 0 < i < j < n j − i {\displaystyle \textstyle \sum _{0<i<j<n}j-i} . 2. Denotes a series and, if the series is convergent, the sum of the series. For example, ∑ i = 0 ∞ x i i ! = e x {\displaystyle \textstyle \sum _{i=0}^{\infty }{\frac {x^{i}}{i!}}=e^{x}} . ∏ (capital-pi notation) 1. Denotes the product of a finite number of terms, which are determined by subscripts and superscripts (which can also be placed below and above), such as in ∏ i = 1 n i 2 {\displaystyle \textstyle \prod _{i=1}^{n}i^{2}} or ∏ 0 < i < j < n j − i {\displaystyle \textstyle \prod _{0<i<j<n}j-i} . 2. Denotes an infinite product. For example, the Euler product formula for the Riemann zeta function is ζ ( z ) = ∏ n = 1 ∞ 1 1 − p n − z {\displaystyle \textstyle \zeta (z)=\prod _{n=1}^{\infty }{\frac {1}{1-p_{n}^{-z}}}} . 3. Also used for the Cartesian product of any number of sets and the direct product of any number of mathematical structures. ⊕ {\displaystyle \oplus } 1. Internal direct sum: if E and F are abelian subgroups of an abelian group V, notation V = E ⊕ F {\displaystyle V=E\oplus F} means that V is the direct sum of E and F; that is, every element of V can be written in a unique way as the sum of an element of E and an element of F. This applies also when E and F are linear subspaces or submodules of the vector space or module V. 2. 
Direct sum: if E and F are two abelian groups, vector spaces, or modules, then their direct sum, denoted E ⊕ F {\displaystyle E\oplus F} is an abelian group, vector space, or module (respectively) equipped with two monomorphisms f : E → E ⊕ F {\displaystyle f:E\to E\oplus F} and g : F → E ⊕ F {\displaystyle g:F\to E\oplus F} such that E ⊕ F {\displaystyle E\oplus F} is the internal direct sum of f ( E ) {\displaystyle f(E)} and g ( F ) {\displaystyle g(F)} . This definition makes sense because this direct sum is unique up to a unique isomorphism. 3. Exclusive or: if E and F are two Boolean variables or predicates, E ⊕ F {\displaystyle E\oplus F} may denote the exclusive or. Notations E XOR F and E ⊻ F {\displaystyle E\veebar F} are also commonly used; see ⊻. ⊗ {\displaystyle \otimes } 1. Denotes the tensor product of abelian groups, vector spaces, modules, or other mathematical structures, such as in E ⊗ F , {\displaystyle E\otimes F,} or E ⊗ K F . {\displaystyle E\otimes _{K}F.} 2. Denotes the tensor product of elements: if x ∈ E {\displaystyle x\in E} and y ∈ F , {\displaystyle y\in F,} then x ⊗ y ∈ E ⊗ F . {\displaystyle x\otimes y\in E\otimes F.} □⊤ 1. Transpose: if A is a matrix, A ⊤ {\displaystyle A^{\top }} denotes the transpose of A, that is, the matrix obtained by exchanging rows and columns of A. Notation ⊤ A {\displaystyle ^{\top }\!\!A} is also used. The symbol ⊤ {\displaystyle \top } is often replaced by the letter T or t. 2. For inline uses of the symbol, see ⊤. □⊥ 1. Orthogonal complement: If W is a linear subspace of an inner product space V, then W ⊥ {\displaystyle W^{\bot }} denotes its orthogonal complement, that is, the linear space of the elements of V whose inner products with the elements of W are all zero. 2. Orthogonal subspace in the dual space: If W is a linear subspace (or a submodule) of a vector space (or of a module) V, then W ⊥ {\displaystyle W^{\bot }} may denote the orthogonal subspace of W, that is, the set of all linear forms that map W to zero. 3. For inline uses of the symbol, see ⊥. == Advanced group theory == ⊲⊴ Normal subgroup of and normal subgroup of including equality, respectively. If N and G are groups such that N is a normal subgroup of (including equality) G, this is written N ⊴ G {\displaystyle N\trianglelefteq G} . ⋉⋊ 1. Inner semidirect product: if N and H are subgroups of a group G, such that N is a normal subgroup of G, then G = N ⋊ H {\displaystyle G=N\rtimes H} and G = H ⋉ N {\displaystyle G=H\ltimes N} mean that G is the semidirect product of N and H, that is, that every element of G can be uniquely decomposed as the product of an element of N and an element of H. (Unlike for the direct product of groups, the element of H may change if the order of the factors is changed.) 2. Outer semidirect product: if N and H are two groups, and φ {\displaystyle \varphi } is a group homomorphism from N to the automorphism group of H, then N ⋊ φ H = H ⋉ φ N {\displaystyle N\rtimes _{\varphi }H=H\ltimes _{\varphi }N} denotes a group G, unique up to a group isomorphism, which is a semidirect product of N and H, with the commutation of elements of N and H defined by φ {\displaystyle \varphi } . ≀ In group theory, G ≀ H {\displaystyle G\wr H} denotes the wreath product of the groups G and H. It is also denoted as G wr H {\displaystyle G\operatorname {wr} H} or G Wr H {\displaystyle G\operatorname {Wr} H} ; see Wreath product § Notation and conventions for several notation variants. == Infinite numbers == ∞ {\displaystyle \infty } (infinity symbol) 1. 
The symbol is read as infinity. As an upper bound of a summation, an infinite product, an integral, etc., means that the computation is unlimited. Similarly, − ∞ {\displaystyle -\infty } in a lower bound means that the computation is not limited toward negative values. 2. − ∞ {\displaystyle -\infty } and + ∞ {\displaystyle +\infty } are the generalized numbers that are added to the real line to form the extended real line. 3. ∞ {\displaystyle \infty } is the generalized number that is added to the real line to form the projectively extended real line. c {\displaystyle {\mathfrak {c}}} (fraktur 𝔠) c {\displaystyle {\mathfrak {c}}} denotes the cardinality of the continuum, which is the cardinality of the set of real numbers. ℵ {\displaystyle \aleph } (aleph) With an ordinal i as a subscript, denotes the ith aleph number, that is the ith infinite cardinal. For example, ℵ 0 {\displaystyle \aleph _{0}} is the smallest infinite cardinal, that is, the cardinal of the natural numbers. ℶ {\displaystyle \beth } (bet (letter)) With an ordinal i as a subscript, denotes the ith beth number. For example, ℶ 0 {\displaystyle \beth _{0}} is the cardinal of the natural numbers, and ℶ 1 {\displaystyle \beth _{1}} is the cardinal of the continuum. ω {\displaystyle \omega } (omega) 1. Denotes the first limit ordinal. It is also denoted ω 0 {\displaystyle \omega _{0}} and can be identified with the ordered set of the natural numbers. 2. With an ordinal i as a subscript, denotes the ith limit ordinal that has a cardinality greater than that of all preceding ordinals. 3. In computer science, denotes the (unknown) greatest lower bound for the exponent of the computational complexity of matrix multiplication. 4. Written as a function of another function, it is used for comparing the asymptotic growth of two functions. See Big O notation § Related asymptotic notations. 5. In number theory, may denote the prime omega function. That is, ω ( n ) {\displaystyle \omega (n)} is the number of distinct prime factors of the integer n. == Brackets == Many types of bracket are used in mathematics. Their meanings depend not only on their shapes, but also on the nature and the arrangement of what is delimited by them, and sometimes what appears between or before them. For this reason, in the entry titles, the symbol □ is used as a placeholder for schematizing the syntax that underlies the meaning. === Parentheses === (□) Used in an expression for specifying that the sub-expression between the parentheses has to be considered as a single entity; typically used for specifying the order of operations. □(□)□(□, □)□(□, ..., □) 1. Functional notation: if the first ◻ {\displaystyle \Box } is the name (symbol) of a function, denotes the value of the function applied to the expression between the parentheses; for example, f ( x ) {\displaystyle f(x)} , sin ( x + y ) {\displaystyle \sin(x+y)} . In the case of a multivariate function, the parentheses contain several expressions separated by commas, such as f ( x , y ) {\displaystyle f(x,y)} . 2. May also denote a product, such as in a ( b + c ) {\displaystyle a(b+c)} . When the confusion is possible, the context must distinguish which symbols denote functions, and which ones denote variables. (□, □) 1. Denotes an ordered pair of mathematical objects, for example, ( π , 0 ) {\displaystyle (\pi ,0)} . 2. If a and b are real numbers, − ∞ {\displaystyle -\infty } , or + ∞ {\displaystyle +\infty } , and a < b, then ( a , b ) {\displaystyle (a,b)} denotes the open interval delimited by a and b. 
See ]□, □[ for an alternative notation. 3. If a and b are integers, ( a , b ) {\displaystyle (a,b)} may denote the greatest common divisor of a and b. Notation gcd ( a , b ) {\displaystyle \gcd(a,b)} is often used instead. (□, □, □) If x, y, z are vectors in R 3 {\displaystyle \mathbb {R} ^{3}} , then ( x , y , z ) {\displaystyle (x,y,z)} may denote the scalar triple product. See also [□,□,□] in § Square brackets. (□, ..., □) Denotes a tuple. If there are n objects separated by commas, it is an n-tuple. (□, □, ...)(□, ..., □, ...) Denotes an infinite sequence. ( ◻ ⋯ ◻ ⋮ ⋱ ⋮ ◻ ⋯ ◻ ) {\displaystyle {\begin{pmatrix}\Box &\cdots &\Box \\\vdots &\ddots &\vdots \\\Box &\cdots &\Box \end{pmatrix}}} Denotes a matrix. Often denoted with square brackets. ( ◻ ◻ ) {\displaystyle {\binom {\Box }{\Box }}} Denotes a binomial coefficient: Given two nonnegative integers, ( n k ) {\displaystyle {\binom {n}{k}}} is read as "n choose k", and is defined as the integer n ( n − 1 ) ⋯ ( n − k + 1 ) 1 ⋅ 2 ⋯ k = n ! k ! ( n − k ) ! {\displaystyle {\frac {n(n-1)\cdots (n-k+1)}{1\cdot 2\cdots k}}={\frac {n!}{k!\,(n-k)!}}} (if k = 0, its value is conventionally 1). Using the left-hand-side expression, it denotes a polynomial in n, and is thus defined and used for any real or complex value of n. ( ◻ ◻ ) {\displaystyle \left({\frac {\Box }{\Box }}\right)} Legendre symbol: If p is an odd prime number and a is an integer, the value of ( a p ) {\displaystyle \left({\frac {a}{p}}\right)} is 1 if a is a quadratic residue modulo p; it is −1 if a is a quadratic non-residue modulo p; it is 0 if p divides a. The same notation is used for the Jacobi symbol and Kronecker symbol, which are generalizations where p is respectively any odd positive integer, or any integer. === Square brackets === [□] 1. Sometimes used as a synonym of (□) for avoiding nested parentheses. 2. Equivalence class: given an equivalence relation, [ x ] {\displaystyle [x]} often denotes the equivalence class of the element x. 3. Integral part: if x is a real number, [ x ] {\displaystyle [x]} often denotes the integral part or truncation of x, that is, the integer obtained by removing all digits after the decimal mark. This notation has also been used for other variants of floor and ceiling functions. 4. Iverson bracket: if P is a predicate, [ P ] {\displaystyle [P]} may denote the Iverson bracket, that is the function that takes the value 1 for the values of the free variables in P for which P is true, and takes the value 0 otherwise. For example, [ x = y ] {\displaystyle [x=y]} is the Kronecker delta function, which equals one if x = y {\displaystyle x=y} , and zero otherwise. 5. In combinatorics or computer science, sometimes [ n ] {\displaystyle [n]} with n ∈ N {\displaystyle n\in \mathbb {N} } denotes the set { 1 , 2 , 3 , … , n } {\displaystyle \{1,2,3,\ldots ,n\}} of positive integers up to n, with [ 0 ] = ∅ {\displaystyle [0]=\emptyset } . □[□] Image of a subset: if S is a subset of the domain of the function f, then f [ S ] {\displaystyle f[S]} is sometimes used for denoting the image of S. When no confusion is possible, notation f(S) is commonly used. [□, □] 1. Closed interval: if a and b are real numbers such that a ≤ b {\displaystyle a\leq b} , then [ a , b ] {\displaystyle [a,b]} denotes the closed interval defined by them. 2. Commutator (group theory): if a and b belong to a group, then [ a , b ] = a − 1 b − 1 a b {\displaystyle [a,b]=a^{-1}b^{-1}ab} . 3. 
Commutator (ring theory): if a and b belong to a ring, then [ a , b ] = a b − b a {\displaystyle [a,b]=ab-ba} . 4. Denotes the Lie bracket, the operation of a Lie algebra. [□ : □] 1. Degree of a field extension: if F is an extension of a field E, then [ F : E ] {\displaystyle [F:E]} denotes the degree of the field extension F / E {\displaystyle F/E} . For example, [ C : R ] = 2 {\displaystyle [\mathbb {C} :\mathbb {R} ]=2} . 2. Index of a subgroup: if H is a subgroup of a group E, then [ G : H ] {\displaystyle [G:H]} denotes the index of H in G. The notation |G:H| is also used [□, □, □] If x, y, z are vectors in R 3 {\displaystyle \mathbb {R} ^{3}} , then [ x , y , z ] {\displaystyle [x,y,z]} may denote the scalar triple product. See also (□,□,□) in § Parentheses. [ ◻ ⋯ ◻ ⋮ ⋱ ⋮ ◻ ⋯ ◻ ] {\displaystyle {\begin{bmatrix}\Box &\cdots &\Box \\\vdots &\ddots &\vdots \\\Box &\cdots &\Box \end{bmatrix}}} Denotes a matrix. Often denoted with parentheses. === Braces === { } Set-builder notation for the empty set, also denoted ∅ {\displaystyle \emptyset } or ∅. {□} 1. Sometimes used as a synonym of (□) and [□] for avoiding nested parentheses. 2. Set-builder notation for a singleton set: { x } {\displaystyle \{x\}} denotes the set that has x as a single element. {□, ..., □} Set-builder notation: denotes the set whose elements are listed between the braces, separated by commas. {□ : □} {□ | □} Set-builder notation: if P ( x ) {\displaystyle P(x)} is a predicate depending on a variable x, then both { x : P ( x ) } {\displaystyle \{x:P(x)\}} and { x ∣ P ( x ) } {\displaystyle \{x\mid P(x)\}} denote the set formed by the values of x for which P ( x ) {\displaystyle P(x)} is true. Single brace 1. Used for emphasizing that several equations have to be considered as simultaneous equations; for example, { 2 x + y = 1 3 x − y = 1 {\displaystyle \textstyle {\begin{cases}2x+y=1\\3x-y=1\end{cases}}} . 2. Piecewise definition; for example, | x | = { x if x ≥ 0 − x if x < 0 {\displaystyle \textstyle |x|={\begin{cases}x&{\text{if }}x\geq 0\\-x&{\text{if }}x<0\end{cases}}} . 3. Used for grouped annotation of elements in a formula; for example, ( a , b , … , z ) ⏟ 26 {\displaystyle \textstyle \underbrace {(a,b,\ldots ,z)} _{26}} , 1 + 2 + ⋯ + 100 ⏞ = 5050 {\displaystyle \textstyle \overbrace {1+2+\cdots +100} ^{=5050}} , [ A B ] } m + n rows {\displaystyle \textstyle \left.{\begin{bmatrix}A\\B\end{bmatrix}}\right\}m+n{\text{ rows}}} === Other brackets === |□| 1. Absolute value: if x is a real or complex number, | x | {\displaystyle |x|} denotes its absolute value. 2. Number of elements: If S is a set, | S | {\displaystyle |S|} may denote its cardinality, that is, its number of elements. # S {\displaystyle \#S} is also often used, see #. 3. Length of a line segment: If P and Q are two points in a Euclidean space, then | P Q | {\displaystyle |PQ|} often denotes the length of the line segment that they define, which is the distance from P to Q, and is often denoted d ( P , Q ) {\displaystyle d(P,Q)} . 4. For a similar-looking operator, see |. |□:□| Index of a subgroup: if H is a subgroup of a group G, then | G : H | {\displaystyle |G:H|} denotes the index of H in G. 
The notation [G:H] is also used | ◻ ⋯ ◻ ⋮ ⋱ ⋮ ◻ ⋯ ◻ | {\displaystyle \textstyle {\begin{vmatrix}\Box &\cdots &\Box \\\vdots &\ddots &\vdots \\\Box &\cdots &\Box \end{vmatrix}}} | x 1 , 1 ⋯ x 1 , n ⋮ ⋱ ⋮ x n , 1 ⋯ x n , n | {\displaystyle {\begin{vmatrix}x_{1,1}&\cdots &x_{1,n}\\\vdots &\ddots &\vdots \\x_{n,1}&\cdots &x_{n,n}\end{vmatrix}}} denotes the determinant of the square matrix [ x 1 , 1 ⋯ x 1 , n ⋮ ⋱ ⋮ x n , 1 ⋯ x n , n ] {\displaystyle {\begin{bmatrix}x_{1,1}&\cdots &x_{1,n}\\\vdots &\ddots &\vdots \\x_{n,1}&\cdots &x_{n,n}\end{bmatrix}}} . ||□|| 1. Denotes the norm of an element of a normed vector space. 2. For the similar-looking operator named parallel, see ∥. ⌊□⌋ Floor function: if x is a real number, ⌊ x ⌋ {\displaystyle \lfloor x\rfloor } is the greatest integer that is not greater than x. ⌈□⌉ Ceiling function: if x is a real number, ⌈ x ⌉ {\displaystyle \lceil x\rceil } is the lowest integer that is not lesser than x. ⌊□⌉ Nearest integer function: if x is a real number, ⌊ x ⌉ {\displaystyle \lfloor x\rceil } is the integer that is the closest to x. ]□, □[ Open interval: If a and b are real numbers, − ∞ {\displaystyle -\infty } , or + ∞ {\displaystyle +\infty } , and a < b {\displaystyle a<b} , then ] a , b [ {\displaystyle ]a,b[} denotes the open interval delimited by a and b. See (□, □) for an alternative notation. (□, □]]□, □] Both notations are used for a left-open interval. [□, □)[□, □[ Both notations are used for a right-open interval. ⟨□⟩ 1. Generated object: if S is a set of elements in an algebraic structure, ⟨ S ⟩ {\displaystyle \langle S\rangle } denotes often the object generated by S. If S = { s 1 , … , s n } {\displaystyle S=\{s_{1},\ldots ,s_{n}\}} , one writes ⟨ s 1 , … , s n ⟩ {\displaystyle \langle s_{1},\ldots ,s_{n}\rangle } (that is, braces are omitted). In particular, this may denote the linear span in a vector space (also often denoted Span(S)), the generated subgroup in a group, the generated ideal in a ring, the generated submodule in a module. 2. Often used, mainly in physics, for denoting an expected value. In probability theory, E ( X ) {\displaystyle E(X)} is generally used instead of ⟨ S ⟩ {\displaystyle \langle S\rangle } . ⟨□, □⟩⟨□ | □⟩ Both ⟨ x , y ⟩ {\displaystyle \langle x,y\rangle } and ⟨ x ∣ y ⟩ {\displaystyle \langle x\mid y\rangle } are commonly used for denoting the inner product in an inner product space. ⟨ ◻ | and | ◻ ⟩ {\displaystyle \langle \Box |{\text{ and }}|\Box \rangle } Bra–ket notation or Dirac notation: if x and y are elements of an inner product space, | x ⟩ {\displaystyle |x\rangle } is the vector defined by x, and ⟨ y | {\displaystyle \langle y|} is the covector defined by y; their inner product is ⟨ y ∣ x ⟩ {\displaystyle \langle y\mid x\rangle } . == Symbols that do not belong to formulas == In this section, the symbols that are listed are used as some sorts of punctuation marks in mathematical reasoning, or as abbreviations of natural language phrases. They are generally not used inside a formula. Some were used in classical logic for indicating the logical dependence between sentences written in plain language. Except for the first two, they are normally not used in printed mathematical texts since, for readability, it is generally recommended to have at least one word between two formulas. However, they are still used on a black board for indicating relationships between formulas. ■ , □ Used for marking the end of a proof and separating it from the current text. The initialism Q.E.D. 
or QED (Latin: quod erat demonstrandum, "as was to be shown") is often used for the same purpose, either in its upper-case form or in lower case. ☡ Bourbaki dangerous bend symbol: Sometimes used in the margin to forewarn readers against serious errors, where they risk falling, or to mark a passage that is tricky on a first reading because of an especially subtle argument. ∴ Abbreviation of "therefore". Placed between two assertions, it means that the first one implies the second one. For example: "All humans are mortal, and Socrates is a human. ∴ Socrates is mortal." ∵ Abbreviation of "because" or "since". Placed between two assertions, it means that the first one is implied by the second one. For example: "11 is prime ∵ it has no positive integer factors other than itself and one." ∋ 1. Abbreviation of "such that". For example, x ∋ x > 3 {\displaystyle x\ni x>3} is normally printed "x such that x > 3 {\displaystyle x>3} ". 2. Sometimes used for reversing the operands of ∈ {\displaystyle \in } ; that is, S ∋ x {\displaystyle S\ni x} has the same meaning as x ∈ S {\displaystyle x\in S} . See ∈ in § Set theory. ∝ Abbreviation of "is proportional to". == Miscellaneous == ! 1. Factorial: if n is a positive integer, n! is the product of the first n positive integers, and is read as "n factorial". 2. Double factorial: if n is a positive integer, n!! is the product of all positive integers up to n with the same parity as n, and is read as "the double factorial of n". 3. Subfactorial: if n is a positive integer, !n is the number of derangements of a set of n elements, and is read as "the subfactorial of n". * Many different uses in mathematics; see Asterisk § Mathematics. | 1. Divisibility: if m and n are two integers, m ∣ n {\displaystyle m\mid n} means that m divides n evenly. 2. In set-builder notation, it is used as a separator meaning "such that"; see {□ | □}. 3. Restriction of a function: if f is a function, and S is a subset of its domain, then f | S {\displaystyle f|_{S}} is the function with S as a domain that equals f on S. 4. Conditional probability: P ( X ∣ E ) {\displaystyle P(X\mid E)} denotes the probability of X given that the event E occurs. Also denoted P ( X / E ) {\displaystyle P(X/E)} ; see "/". 5. For several uses as brackets (in pairs or with ⟨ and ⟩) see § Other brackets. ∤ Non-divisibility: n ∤ m {\displaystyle n\nmid m} means that n is not a divisor of m. ∥ 1. Denotes parallelism in elementary geometry: if PQ and RS are two lines, P Q ∥ R S {\displaystyle PQ\parallel RS} means that they are parallel. 2. Parallel, an arithmetical operation used in electrical engineering for modeling parallel resistors: x ∥ y = 1 1 x + 1 y {\displaystyle x\parallel y={\frac {1}{{\frac {1}{x}}+{\frac {1}{y}}}}} . 3. Used in pairs as brackets, denotes a norm; see ||□||. 4. Concatenation: Typically used in computer science, x | | y {\displaystyle x\mathbin {\vert \vert } y} is said to represent the value resulting from appending the digits of y to the end of x. 5. D KL ( P ∥ Q ) {\displaystyle {\displaystyle D_{\text{KL}}(P\parallel Q)}} , denotes a statistical distance or measure of how one probability distribution P is different from a second, reference probability distribution Q. ∦ Sometimes used for denoting that two lines are not parallel; for example, P Q ∦ R S {\displaystyle PQ\not \parallel RS} . ⊥ {\displaystyle \perp } 1. Denotes perpendicularity and orthogonality. 
For example, if A, B, C are three points in a Euclidean space, then A B ⊥ A C {\displaystyle AB\perp AC} means that the line segments AB and AC are perpendicular, and form a right angle. 2. For the similar symbol, see ⊥ {\displaystyle \bot } . ⊙ Hadamard product of power series: if S = ∑ i = 0 ∞ s i x i {\displaystyle \textstyle S=\sum _{i=0}^{\infty }s_{i}x^{i}} and T = ∑ i = 0 ∞ t i x i {\displaystyle \textstyle T=\sum _{i=0}^{\infty }t_{i}x^{i}} , then S ⊙ T = ∑ i = 0 ∞ s i t i x i {\displaystyle \textstyle S\odot T=\sum _{i=0}^{\infty }s_{i}t_{i}x^{i}} . Possibly, ⊙ {\displaystyle \odot } is also used instead of ○ for the Hadamard product of matrices. == See also == === Related articles === Language of mathematics Mathematical notation Notation in probability and statistics Physical constants === Related lists === List of logic symbols List of mathematical constants Table of mathematical symbols by introduction date Blackboard bold Greek letters used in mathematics, science, and engineering Latin letters used in mathematics, science, and engineering List of common physics notations List of letters used in mathematics, science, and engineering List of mathematical abbreviations List of typographical symbols and punctuation marks ISO 31-11 (Mathematical signs and symbols for use in physical sciences and technology) List of APL functions === Unicode symbols === Unicode block Mathematical Alphanumeric Symbols (Unicode block) List of Unicode characters Letterlike Symbols Mathematical operators and symbols in Unicode Miscellaneous Mathematical Symbols: A, B, Technical Arrow (symbol) and Miscellaneous Symbols and Arrows Number Forms Geometric Shapes == References == == External links == Jeff Miller: Earliest Uses of Various Mathematical Symbols Numericana: Scientific Symbols and Icons GIF and PNG Images for Math Symbols Mathematical Symbols in Unicode Detexify: LaTeX Handwriting Recognition Tool Some Unicode charts of mathematical operators and symbols: Index of Unicode symbols Range 2100–214F: Unicode Letterlike Symbols Range 2190–21FF: Unicode Arrows Range 2200–22FF: Unicode Mathematical Operators Range 27C0–27EF: Unicode Miscellaneous Mathematical Symbols–A Range 2980–29FF: Unicode Miscellaneous Mathematical Symbols–B Range 2A00–2AFF: Unicode Supplementary Mathematical Operators Some Unicode cross-references: Short list of commonly used LaTeX symbols and Comprehensive LaTeX Symbol List MathML Characters - sorts out Unicode, HTML and MathML/TeX names on one page Unicode values and MathML names Unicode values and Postscript names from the source code for Ghostscript
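As a concrete illustration of the truth-functional definitions given at the start of this glossary, the short Python sketch below (an illustrative addition, not part of the article) checks a few of the stated equivalences exhaustively over the two truth values and imitates the quantifiers ∀ and ∃ over a small finite domain. The helper implies and the sample domain are ad hoc choices; Python's built-in all and any play the roles of the quantifiers.

```python
# Exhaustive check of some identities stated in the glossary entries above:
#   P => Q   is equivalent to   Q or (not P)
#   P xor Q  is true exactly when one of P, Q is true and the other is false
#   P <=> Q  is (P => Q) and (Q => P)
# and, over a finite domain, "for all" / "there exists" behave like all() / any().
from itertools import product

def implies(p, q):
    """Material conditional: if P is true, then Q must be true."""
    return q if p else True

for p, q in product([False, True], repeat=2):
    assert implies(p, q) == ((not p) or q)                 # => as "not P or Q"
    assert (p != q) == ((p or q) and not (p and q))        # exclusive or
    assert (implies(p, q) and implies(q, p)) == (p == q)   # <=> as mutual implication

domain = range(1, 6)
print(all(x > 0 for x in domain))   # "for all x in the domain, x > 0"  -> True
print(any(x > 4 for x in domain))   # "there exists x with x > 4"       -> True
print(any(x > 9 for x in domain))   # -> False
```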
|
https://en.wikipedia.org/wiki/Glossary_of_mathematical_symbols
|
In geometry, a disk (also spelled disc) is the region in a plane bounded by a circle. A disk is said to be closed if it contains the circle that constitutes its boundary, and open if it does not. For a radius r {\displaystyle r} , an open disk is usually denoted as D r {\displaystyle D_{r}} , and a closed disk is D r ¯ {\displaystyle {\overline {D_{r}}}} . However in the field of topology the closed disk is usually denoted as D 2 {\displaystyle D^{2}} , while the open disk is int D 2 {\displaystyle \operatorname {int} D^{2}} . == Formulas == In Cartesian coordinates, the open disk with center ( a , b ) {\displaystyle (a,b)} and radius R is given by the formula D = { ( x , y ) ∈ R 2 : ( x − a ) 2 + ( y − b ) 2 < R 2 } , {\displaystyle D=\{(x,y)\in \mathbb {R} ^{2}:(x-a)^{2}+(y-b)^{2}<R^{2}\},} while the closed disk with the same center and radius is given by D ¯ = { ( x , y ) ∈ R 2 : ( x − a ) 2 + ( y − b ) 2 ≤ R 2 } . {\displaystyle {\overline {D}}=\{(x,y)\in \mathbb {R} ^{2}:(x-a)^{2}+(y-b)^{2}\leq R^{2}\}.} The area of a closed or open disk of radius R is πR2 (see area of a disk). == Properties == The disk has circular symmetry. The open disk and the closed disk are not topologically equivalent (that is, they are not homeomorphic), as they have different topological properties from each other. For instance, every closed disk is compact whereas every open disk is not compact. However from the viewpoint of algebraic topology they share many properties: both of them are contractible and so are homotopy equivalent to a single point. This implies that their fundamental groups are trivial, and all homology groups are trivial except the 0th one, which is isomorphic to Z. The Euler characteristic of a point (and therefore also that of a closed or open disk) is 1. Every continuous map from the closed disk to itself has at least one fixed point (we don't require the map to be bijective or even surjective); this is the case n=2 of the Brouwer fixed-point theorem. The statement is false for the open disk: Consider for example the function f ( x , y ) = ( x + 1 − y 2 2 , y ) {\displaystyle f(x,y)=\left({\frac {x+{\sqrt {1-y^{2}}}}{2}},y\right)} which maps every point of the open unit disk to another point on the open unit disk to the right of the given one. But for the closed unit disk it fixes every point on the half circle x 2 + y 2 = 1 , x > 0. {\displaystyle x^{2}+y^{2}=1,x>0.} == As a statistical distribution == A uniform distribution on a unit circular disk is occasionally encountered in statistics. It most commonly occurs in operations research in the mathematics of urban planning, where it may be used to model a population within a city. Other uses may take advantage of the fact that it is a distribution for which it is easy to compute the probability that a given set of linear inequalities will be satisfied. (Gaussian distributions in the plane require numerical quadrature.) "An ingenious argument via elementary functions" shows the mean Euclidean distance between two points in the disk to be 128/45π ≈ 0.90541, while direct integration in polar coordinates shows the mean squared distance to be 1. If we are given an arbitrary location at a distance q from the center of the disk, it is also of interest to determine the average distance b(q) from points in the distribution to this location and the average square of such distances. The latter value can be computed directly as q2+1/2. 
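The averages quoted above lend themselves to a quick empirical check. The Python sketch below (an illustration under stated assumptions, not part of the cited argument) draws uniform points from the unit disk by rejection sampling and estimates the mean distance between two random points, expected to be near 128/45π ≈ 0.90541, and the mean squared distance to a fixed location at distance q, expected to be near q² + 1/2. The sample size and helper names are arbitrary choices.

```python
# Monte Carlo check of two averages for the uniform distribution on the unit disk.
import math
import random

def random_point_in_unit_disk(rng):
    """Rejection sampling: keep (x, y) drawn from the square until x^2 + y^2 < 1."""
    while True:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y < 1:      # open-disk membership test
            return x, y

rng = random.Random(0)
n = 200_000

# Mean distance between two independent uniform points in the disk.
total = 0.0
for _ in range(n):
    x1, y1 = random_point_in_unit_disk(rng)
    x2, y2 = random_point_in_unit_disk(rng)
    total += math.hypot(x1 - x2, y1 - y2)
print(total / n, 128 / (45 * math.pi))   # both close to 0.9054

# Mean squared distance from the disk to a fixed location at distance q from the center.
q = 0.5
total_sq = 0.0
for _ in range(n):
    x, y = random_point_in_unit_disk(rng)
    total_sq += (x - q) ** 2 + y ** 2
print(total_sq / n, q ** 2 + 0.5)        # both close to 0.75
```

Rejection sampling is used here only because it is short; drawing the radius as the square root of a uniform variable together with an independent uniform angle would avoid the rejection loop.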
=== Average distance to an arbitrary internal point === To find b(q) we need to look separately at the cases in which the location is internal or external, i.e. in which q ≶ 1, and we find that in both cases the result can only be expressed in terms of complete elliptic integrals. If we consider an internal location, our aim (looking at the diagram) is to compute the expected value of r under a distribution whose density is 1/π for 0 ≤ r ≤ s(θ), integrating in polar coordinates centered on the fixed location for which the area of a cell is r dr dθ ; hence b ( q ) = 1 π ∫ 0 2 π d θ ∫ 0 s ( θ ) r 2 d r = 1 3 π ∫ 0 2 π s ( θ ) 3 d θ . {\displaystyle b(q)={\frac {1}{\pi }}\int _{0}^{2\pi }{\textrm {d}}\theta \int _{0}^{s(\theta )}r^{2}{\textrm {d}}r={\frac {1}{3\pi }}\int _{0}^{2\pi }s(\theta )^{3}{\textrm {d}}\theta .} Here s(θ) can be found in terms of q and θ using the Law of cosines. The steps needed to evaluate the integral, together with several references, will be found in the paper by Lew et al.; the result is that b ( q ) = 4 9 π { 4 ( q 2 − 1 ) K ( q 2 ) + ( q 2 + 7 ) E ( q 2 ) } {\displaystyle b(q)={\frac {4}{9\pi }}{\biggl \{}4(q^{2}-1)K(q^{2})+(q^{2}+7)E(q^{2}){\biggr \}}} where K and E are complete elliptic integrals of the first and second kinds. b(0) = 2/3; b(1) = 32/9π ≈ 1.13177. === Average distance to an arbitrary external point === Turning to an external location, we can set up the integral in a similar way, this time obtaining b ( q ) = 2 3 π ∫ 0 sin − 1 1 q { s + ( θ ) 3 − s − ( θ ) 3 } d θ {\displaystyle b(q)={\frac {2}{3\pi }}\int _{0}^{{\textrm {sin}}^{-1}{\tfrac {1}{q}}}{\biggl \{}s_{+}(\theta )^{3}-s_{-}(\theta )^{3}{\biggr \}}{\textrm {d}}\theta } where the law of cosines tells us that s+(θ) and s–(θ) are the roots for s of the equation s 2 − 2 q s cos θ + q 2 − 1 = 0. {\displaystyle s^{2}-2qs\,{\textrm {cos}}\theta +q^{2}\!-\!1=0.} Hence b ( q ) = 4 3 π ∫ 0 sin − 1 1 q { 3 q 2 cos 2 θ 1 − q 2 sin 2 θ + ( 1 − q 2 sin 2 θ ) 3 2 } d θ . {\displaystyle b(q)={\frac {4}{3\pi }}\int _{0}^{{\textrm {sin}}^{-1}{\tfrac {1}{q}}}{\biggl \{}3q^{2}{\textrm {cos}}^{2}\theta {\sqrt {1-q^{2}{\textrm {sin}}^{2}\theta }}+{\Bigl (}1-q^{2}{\textrm {sin}}^{2}\theta {\Bigr )}^{\tfrac {3}{2}}{\biggl \}}{\textrm {d}}\theta .} We may substitute u = q sinθ to get b ( q ) = 4 3 π ∫ 0 1 { 3 q 2 − u 2 1 − u 2 + ( 1 − u 2 ) 3 2 q 2 − u 2 } d u = 4 3 π ∫ 0 1 { 4 q 2 − u 2 1 − u 2 − q 2 − 1 q 1 − u 2 q 2 − u 2 } d u = 4 3 π { 4 q 3 ( ( q 2 + 1 ) E ( 1 q 2 ) − ( q 2 − 1 ) K ( 1 q 2 ) ) − ( q 2 − 1 ) ( q E ( 1 q 2 ) − q 2 − 1 q K ( 1 q 2 ) ) } = 4 9 π { q ( q 2 + 7 ) E ( 1 q 2 ) − q 2 − 1 q ( q 2 + 3 ) K ( 1 q 2 ) } {\displaystyle {\begin{aligned}b(q)&={\frac {4}{3\pi }}\int _{0}^{1}{\biggl \{}3{\sqrt {q^{2}-u^{2}}}{\sqrt {1-u^{2}}}+{\frac {(1-u^{2})^{\tfrac {3}{2}}}{\sqrt {q^{2}-u^{2}}}}{\biggr \}}{\textrm {d}}u\\[0.6ex]&={\frac {4}{3\pi }}\int _{0}^{1}{\biggl \{}4{\sqrt {q^{2}-u^{2}}}{\sqrt {1-u^{2}}}-{\frac {q^{2}-1}{q}}{\frac {\sqrt {1-u^{2}}}{\sqrt {q^{2}-u^{2}}}}{\biggr \}}{\textrm {d}}u\\[0.6ex]&={\frac {4}{3\pi }}{\biggl \{}{\frac {4q}{3}}{\biggl (}(q^{2}+1)E({\tfrac {1}{q^{2}}})-(q^{2}-1)K({\tfrac {1}{q^{2}}}){\biggr )}-(q^{2}-1){\biggl (}qE({\tfrac {1}{q^{2}}})-{\frac {q^{2}-1}{q}}K({\tfrac {1}{q^{2}}}){\biggr )}{\biggr \}}\\[0.6ex]&={\frac {4}{9\pi }}{\biggl \{}q(q^{2}+7)E({\tfrac {1}{q^{2}}})-{\frac {q^{2}-1}{q}}(q^{2}+3)K({\tfrac {1}{q^{2}}}){\biggr \}}\end{aligned}}} using standard integrals. Hence again b(1) = 32/9π, while also lim q → ∞ b ( q ) = q + 1 8 q . 
{\displaystyle \lim _{q\to \infty }b(q)=q+{\tfrac {1}{8q}}.} == See also == Unit disk, a disk with radius one Annulus (mathematics), the region between two concentric circles Ball (mathematics), the usual term for the 3-dimensional analogue of a disk Disk algebra, a space of functions on a disk Circular segment Orthocentroidal disk, containing certain centers of a triangle == References ==
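As a numerical cross-check of the average-distance discussion above, the following sketch (an illustration, not part of the article) evaluates b(q) for an internal location, 0 ≤ q ≤ 1, by direct quadrature of the polar-coordinate integral b(q) = (1/3π)∫ s(θ)³ dθ, with s(θ) taken as the positive root of the law-of-cosines quadratic s² − 2qs cos θ + q² − 1 = 0, and compares the result with a Monte Carlo estimate and with the quoted values b(0) = 2/3 and b(1) = 32/9π. The step count, sample size, and function names are assumptions of the sketch.

```python
# b(q): average distance from a fixed point at distance q (0 <= q <= 1) from the
# center to a uniform random point of the unit disk, computed two independent ways.
import math
import random

def b_quadrature(q, steps=200_000):
    """Midpoint-rule quadrature of (1/(3*pi)) * integral over [0, 2*pi) of s(t)^3,
    where s(t) = q*cos(t) + sqrt(1 - (q*sin(t))**2) is the chord length from the
    fixed point to the boundary (positive root of s^2 - 2*q*s*cos(t) + q^2 - 1 = 0)."""
    total = 0.0
    for k in range(steps):
        t = 2 * math.pi * (k + 0.5) / steps
        s = q * math.cos(t) + math.sqrt(max(0.0, 1 - (q * math.sin(t)) ** 2))
        total += s ** 3
    return total * (2 * math.pi / steps) / (3 * math.pi)

def b_monte_carlo(q, n=200_000, seed=0):
    """Average distance from (q, 0) to uniform samples of the unit disk."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        while True:                            # rejection sampling
            x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
            if x * x + y * y < 1:
                break
        total += math.hypot(x - q, y)
    return total / n

print(b_quadrature(0.0), 2 / 3)                 # ~0.6667
print(b_quadrature(1.0), 32 / (9 * math.pi))    # ~1.1318
print(b_quadrature(0.5), b_monte_carlo(0.5))    # the two estimates agree to a few decimals
```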
|
https://en.wikipedia.org/wiki/Disk_(mathematics)
|
In mathematics, the origin of a Euclidean space is a special point, usually denoted by the letter O, used as a fixed point of reference for the geometry of the surrounding space. In physical problems, the choice of origin is often arbitrary, meaning any choice of origin will ultimately give the same answer. This allows one to pick an origin point that makes the mathematics as simple as possible, often by taking advantage of some kind of geometric symmetry. == Cartesian coordinates == In a Cartesian coordinate system, the origin is the point where the axes of the system intersect. The origin divides each of these axes into two halves, a positive and a negative semiaxis. Points can then be located with reference to the origin by giving their numerical coordinates—that is, the positions of their projections along each axis, either in the positive or negative direction. The coordinates of the origin are always all zero, for example (0,0) in two dimensions and (0,0,0) in three. == Other coordinate systems == In a polar coordinate system, the origin may also be called the pole. It does not itself have well-defined polar coordinates, because the polar coordinates of a point include the angle made by the positive x-axis and the ray from the origin to the point, and this ray is not well-defined for the origin itself. In Euclidean geometry, the origin may be chosen freely as any convenient point of reference. The origin of the complex plane can be referred to as the point where the real axis and the imaginary axis intersect. In other words, it is the complex number zero. == See also == Coordinate frame Distance from a point to a plane Null vector, an analogous point of a vector space Pointed space, a topological space with a distinguished point Radial basis function, a function depending only on the distance from the origin == References ==
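The arbitrariness of the origin can be made concrete with a small sketch. In the hypothetical Python snippet below, moving the origin to a new reference point changes every coordinate, yet the distance between two points is unchanged; the function names and sample points are illustrative assumptions only.

```python
# Shifting the origin: coordinates change, distances between points do not.
import math

def shift_origin(point, new_origin):
    """Coordinates of `point` relative to `new_origin`."""
    return tuple(p - o for p, o in zip(point, new_origin))

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

origin = (0.0, 0.0, 0.0)            # the origin has all-zero coordinates
p = (3.0, 4.0, 12.0)
q = (1.0, 1.0, 1.0)

new_origin = (1.0, -2.0, 5.0)       # any convenient reference point
p_shifted = shift_origin(p, new_origin)
q_shifted = shift_origin(q, new_origin)

print(shift_origin(new_origin, new_origin))            # (0.0, 0.0, 0.0): the new origin's own coordinates
print(distance(p, q), distance(p_shifted, q_shifted))  # equal: distances do not depend on the origin
```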
|
https://en.wikipedia.org/wiki/Origin_(mathematics)
|
Pure mathematics is the study of mathematical concepts independently of any application outside mathematics. These concepts may originate in real-world concerns, and the results obtained may later turn out to be useful for practical applications, but pure mathematicians are not primarily motivated by such applications. Instead, the appeal is attributed to the intellectual challenge and aesthetic beauty of working out the logical consequences of basic principles. While pure mathematics has existed as an activity since at least ancient Greece, the concept was elaborated upon around the year 1900, after the introduction of theories with counter-intuitive properties (such as non-Euclidean geometries and Cantor's theory of infinite sets), and the discovery of apparent paradoxes (such as continuous functions that are nowhere differentiable, and Russell's paradox). This introduced the need to renew the concept of mathematical rigor and rewrite all mathematics accordingly, with a systematic use of axiomatic methods. This led many mathematicians to focus on mathematics for its own sake, that is, pure mathematics. Nevertheless, almost all mathematical theories remained motivated by problems coming from the real world or from less abstract mathematical theories. Also, many mathematical theories, which had seemed to be totally pure mathematics, were eventually used in applied areas, mainly physics and computer science. A famous early example is Isaac Newton's demonstration that his law of universal gravitation implied that planets move in orbits that are conic sections, geometrical curves that had been studied in antiquity by Apollonius. Another example is the problem of factoring large integers, which is the basis of the RSA cryptosystem, widely used to secure internet communications. It follows that, currently, the distinction between pure and applied mathematics is more a philosophical point of view or a mathematician's preference rather than a rigid subdivision of mathematics. == History == === Ancient Greece === Ancient Greek mathematicians were among the earliest to make a distinction between pure and applied mathematics. Plato helped to create the gap between "arithmetic", now called number theory, and "logistic", now called arithmetic. Plato regarded logistic (arithmetic) as appropriate for businessmen and men of war who "must learn the art of numbers or [they] will not know how to array [their] troops" and arithmetic (number theory) as appropriate for philosophers "because [they have] to arise out of the sea of change and lay hold of true being." Euclid of Alexandria, when asked by one of his students of what use was the study of geometry, asked his slave to give the student threepence, "since he must make gain of what he learns." The Greek mathematician Apollonius of Perga was asked about the usefulness of some of his theorems in Book IV of Conics to which he proudly asserted, They are worthy of acceptance for the sake of the demonstrations themselves, in the same way as we accept many other things in mathematics for this and for no other reason. And since many of his results were not applicable to the science or engineering of his day, Apollonius further argued in the preface of the fifth book of Conics that the subject is one of those that "...seem worthy of study for their own sake." === 19th century === The term itself is enshrined in the full title of the Sadleirian Chair, "Sadleirian Professor of Pure Mathematics", founded (as a professorship) in the mid-nineteenth century. 
The idea of a separate discipline of pure mathematics may have emerged at that time. The generation of Gauss made no sweeping distinction of the kind between pure and applied. In the following years, specialisation and professionalisation (particularly in the Weierstrass approach to mathematical analysis) started to make a rift more apparent. === 20th century === At the start of the twentieth century mathematicians took up the axiomatic method, strongly influenced by David Hilbert's example. The logical formulation of pure mathematics suggested by Bertrand Russell in terms of a quantifier structure of propositions seemed more and more plausible, as large parts of mathematics became axiomatised and thus subject to the simple criteria of rigorous proof. Pure mathematics, according to a view that can be ascribed to the Bourbaki group, is what is proved. "Pure mathematician" became a recognized vocation, achievable through training. The case was made that pure mathematics is useful in engineering education: There is a training in habits of thought, points of view, and intellectual comprehension of ordinary engineering problems, which only the study of higher mathematics can give. == Generality and abstraction == One central concept in pure mathematics is the idea of generality; pure mathematics often exhibits a trend towards increased generality. Uses and advantages of generality include the following: Generalizing theorems or mathematical structures can lead to deeper understanding of the original theorems or structures Generality can simplify the presentation of material, resulting in shorter proofs or arguments that are easier to follow. One can use generality to avoid duplication of effort, proving a general result instead of having to prove separate cases independently, or using results from other areas of mathematics. Generality can facilitate connections between different branches of mathematics. Category theory is one area of mathematics dedicated to exploring this commonality of structure as it plays out in some areas of math. Generality's impact on intuition is both dependent on the subject and a matter of personal preference or learning style. Often generality is seen as a hindrance to intuition, although it can certainly function as an aid to it, especially when it provides analogies to material for which one already has good intuition. As a prime example of generality, the Erlangen program involved an expansion of geometry to accommodate non-Euclidean geometries as well as the field of topology, and other forms of geometry, by viewing geometry as the study of a space together with a group of transformations. The study of numbers, called algebra at the beginning undergraduate level, extends to abstract algebra at a more advanced level; and the study of functions, called calculus at the college freshman level becomes mathematical analysis and functional analysis at a more advanced level. Each of these branches of more abstract mathematics have many sub-specialties, and there are in fact many connections between pure mathematics and applied mathematics disciplines. A steep rise in abstraction was seen mid 20th century. In practice, however, these developments led to a sharp divergence from physics, particularly from 1950 to 1983. Later this was criticised, for example by Vladimir Arnold, as too much Hilbert, not enough Poincaré. The point does not yet seem to be settled, in that string theory pulls one way, while discrete mathematics pulls back towards proof as central. == Pure vs. 
applied mathematics == Mathematicians have always had differing opinions regarding the distinction between pure and applied mathematics. One of the most famous (but perhaps misunderstood) modern examples of this debate can be found in G.H. Hardy's 1940 essay A Mathematician's Apology. It is widely believed that Hardy considered applied mathematics to be ugly and dull. Although it is true that Hardy preferred pure mathematics, which he often compared to painting and poetry, Hardy saw the distinction between pure and applied mathematics to be simply that applied mathematics sought to express physical truth in a mathematical framework, whereas pure mathematics expressed truths that were independent of the physical world. Hardy made a separate distinction in mathematics between what he called "real" mathematics, "which has permanent aesthetic value", and "the dull and elementary parts of mathematics" that have practical use. Hardy considered some physicists, such as Einstein and Dirac, to be among the "real" mathematicians, but at the time that he was writing his Apology, he considered general relativity and quantum mechanics to be "useless", which allowed him to hold the opinion that only "dull" mathematics was useful. Moreover, Hardy briefly admitted that—just as the application of matrix theory and group theory to physics had come unexpectedly—the time may come where some kinds of beautiful, "real" mathematics may be useful as well. Another insightful view is offered by American mathematician Andy Magid: I've always thought that a good model here could be drawn from ring theory. In that subject, one has the subareas of commutative ring theory and non-commutative ring theory. An uninformed observer might think that these represent a dichotomy, but in fact the latter subsumes the former: a non-commutative ring is a not-necessarily-commutative ring. If we use similar conventions, then we could refer to applied mathematics and nonapplied mathematics, where by the latter we mean not-necessarily-applied mathematics... [emphasis added] Friedrich Engels argued in his 1878 book Anti-Dühring that "it is not at all true that in pure mathematics the mind deals only with its own creations and imaginations. The concepts of number and figure have not been invented from any source other than the world of reality".: 36 He further argued that "Before one came upon the idea of deducing the form of a cylinder from the rotation of a rectangle about one of its sides, a number of real rectangles and cylinders, however imperfect in form, must have been examined. Like all other sciences, mathematics arose out of the needs of men...But, as in every department of thought, at a certain stage of development the laws, which were abstracted from the real world, become divorced from the real world, and are set up against it as something independent, as laws coming from outside, to which the world has to conform.": 37 == See also == Applied mathematics Logic Metalogic Metamathematics == References == == External links == What is Pure Mathematics? – Department of Pure Mathematics, University of Waterloo The Principles of Mathematics by Bertrand Russell
|
https://en.wikipedia.org/wiki/Pure_mathematics
|
Analysis is the branch of mathematics dealing with continuous functions, limits, and related theories, such as differentiation, integration, measure, infinite sequences, series, and analytic functions. These theories are usually studied in the context of real and complex numbers and functions. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. Analysis may be distinguished from geometry; however, it can be applied to any space of mathematical objects that has a definition of nearness (a topological space) or specific distances between objects (a metric space). == History == === Ancient === Mathematical analysis formally developed in the 17th century during the Scientific Revolution, but many of its ideas can be traced back to earlier mathematicians. Early results in analysis were implicitly present in the early days of ancient Greek mathematics. For instance, an infinite geometric sum is implicit in Zeno's paradox of the dichotomy. (Strictly speaking, the point of the paradox is to deny that the infinite sum exists.) Later, Greek mathematicians such as Eudoxus and Archimedes made more explicit, but informal, use of the concepts of limits and convergence when they used the method of exhaustion to compute the area and volume of regions and solids. The explicit use of infinitesimals appears in Archimedes' The Method of Mechanical Theorems, a work rediscovered in the 20th century. In Asia, the Chinese mathematician Liu Hui used the method of exhaustion in the 3rd century CE to find the area of a circle. From Jain literature, it appears that Hindus were in possession of the formulae for the sum of the arithmetic and geometric series as early as the 4th century BCE. Ācārya Bhadrabāhu uses the sum of a geometric series in his Kalpasūtra in 433 BCE. === Medieval === Zu Chongzhi established a method that would later be called Cavalieri's principle to find the volume of a sphere in the 5th century. In the 12th century, the Indian mathematician Bhāskara II used infinitesimal and used what is now known as Rolle's theorem. In the 14th century, Madhava of Sangamagrama developed infinite series expansions, now called Taylor series, of functions such as sine, cosine, tangent and arctangent. Alongside his development of Taylor series of trigonometric functions, he also estimated the magnitude of the error terms resulting of truncating these series, and gave a rational approximation of some infinite series. His followers at the Kerala School of Astronomy and Mathematics further expanded his works, up to the 16th century. === Modern === ==== Foundations ==== The modern foundations of mathematical analysis were established in 17th century Europe. This began when Fermat and Descartes developed analytic geometry, which is the precursor to modern calculus. Fermat's method of adequality allowed him to determine the maxima and minima of functions and the tangents of curves. Descartes's publication of La Géométrie in 1637, which introduced the Cartesian coordinate system, is considered to be the establishment of mathematical analysis. It would be a few decades later that Newton and Leibniz independently developed infinitesimal calculus, which grew, with the stimulus of applied work that continued through the 18th century, into analysis topics such as the calculus of variations, ordinary and partial differential equations, Fourier analysis, and generating functions. 
During this period, calculus techniques were applied to approximate discrete problems by continuous ones. ==== Modernization ==== In the 18th century, Euler introduced the notion of a mathematical function. Real analysis began to emerge as an independent subject when Bernard Bolzano introduced the modern definition of continuity in 1816, but Bolzano's work did not become widely known until the 1870s. In 1821, Cauchy began to put calculus on a firm logical foundation by rejecting the principle of the generality of algebra widely used in earlier work, particularly by Euler. Instead, Cauchy formulated calculus in terms of geometric ideas and infinitesimals. Thus, his definition of continuity required an infinitesimal change in x to correspond to an infinitesimal change in y. He also introduced the concept of the Cauchy sequence, and started the formal theory of complex analysis. Poisson, Liouville, Fourier and others studied partial differential equations and harmonic analysis. The contributions of these mathematicians and others, such as Weierstrass, developed the (ε, δ)-definition of limit approach, thus founding the modern field of mathematical analysis. Around the same time, Riemann introduced his theory of integration, and made significant advances in complex analysis. Towards the end of the 19th century, mathematicians started worrying that they were assuming the existence of a continuum of real numbers without proof. Dedekind then constructed the real numbers by Dedekind cuts, in which irrational numbers are formally defined, which serve to fill the "gaps" between rational numbers, thereby creating a complete set: the continuum of real numbers, which had already been developed by Simon Stevin in terms of decimal expansions. Around that time, the attempts to refine the theorems of Riemann integration led to the study of the "size" of the set of discontinuities of real functions. Also, various pathological objects, (such as nowhere continuous functions, continuous but nowhere differentiable functions, and space-filling curves), commonly known as "monsters", began to be investigated. In this context, Jordan developed his theory of measure, Cantor developed what is now called naive set theory, and Baire proved the Baire category theorem. In the early 20th century, calculus was formalized using an axiomatic set theory. Lebesgue greatly improved measure theory, and introduced his own theory of integration, now known as Lebesgue integration, which proved to be a big improvement over Riemann's. Hilbert introduced Hilbert spaces to solve integral equations. The idea of normed vector space was in the air, and in the 1920s Banach created functional analysis. == Important concepts == === Metric spaces === In mathematics, a metric space is a set where a notion of distance (called a metric) between elements of the set is defined. Much of analysis happens in some metric space; the most commonly used are the real line, the complex plane, Euclidean space, other vector spaces, and the integers. Examples of analysis without a metric include measure theory (which describes size rather than distance) and functional analysis (which studies topological vector spaces that need not have any sense of distance). 
Formally, a metric space is an ordered pair ( M , d ) {\displaystyle (M,d)} where M {\displaystyle M} is a set and d {\displaystyle d} is a metric on M {\displaystyle M} , i.e., a function d : M × M → R {\displaystyle d\colon M\times M\rightarrow \mathbb {R} } such that for any x , y , z ∈ M {\displaystyle x,y,z\in M} , the following holds: d ( x , y ) ≥ 0 {\displaystyle d(x,y)\geq 0} , with equality if and only if x = y {\displaystyle x=y} (identity of indiscernibles), d ( x , y ) = d ( y , x ) {\displaystyle d(x,y)=d(y,x)} (symmetry), and d ( x , z ) ≤ d ( x , y ) + d ( y , z ) {\displaystyle d(x,z)\leq d(x,y)+d(y,z)} (triangle inequality). By taking the third property and letting z = x {\displaystyle z=x} , it can be shown that d ( x , y ) ≥ 0 {\displaystyle d(x,y)\geq 0} (non-negative). === Sequences and limits === A sequence is an ordered list. Like a set, it contains members (also called elements, or terms). Unlike a set, order matters, and exactly the same elements can appear multiple times at different positions in the sequence. Most precisely, a sequence can be defined as a function whose domain is a countable totally ordered set, such as the natural numbers. One of the most important properties of a sequence is convergence. Informally, a sequence converges if it has a limit. Continuing informally, a (singly-infinite) sequence has a limit if it approaches some point x, called the limit, as n becomes very large. That is, for an abstract sequence (an) (with n running from 1 to infinity understood) the distance between an and x approaches 0 as n → ∞, denoted lim n → ∞ a n = x . {\displaystyle \lim _{n\to \infty }a_{n}=x.} == Main branches == === Calculus === === Real analysis === Real analysis (traditionally, the "theory of functions of a real variable") is a branch of mathematical analysis dealing with the real numbers and real-valued functions of a real variable. In particular, it deals with the analytic properties of real functions and sequences, including convergence and limits of sequences of real numbers, the calculus of the real numbers, and continuity, smoothness and related properties of real-valued functions. === Complex analysis === Complex analysis (traditionally known as the "theory of functions of a complex variable") is the branch of mathematical analysis that investigates functions of complex numbers. It is useful in many branches of mathematics, including algebraic geometry, number theory, applied mathematics; as well as in physics, including hydrodynamics, thermodynamics, mechanical engineering, electrical engineering, and particularly, quantum field theory. Complex analysis is particularly concerned with the analytic functions of complex variables (or, more generally, meromorphic functions). Because the separate real and imaginary parts of any analytic function must satisfy Laplace's equation, complex analysis is widely applicable to two-dimensional problems in physics. === Functional analysis === Functional analysis is a branch of mathematical analysis, the core of which is formed by the study of vector spaces endowed with some kind of limit-related structure (e.g. inner product, norm, topology, etc.) and the linear operators acting upon these spaces and respecting these structures in a suitable sense. The historical roots of functional analysis lie in the study of spaces of functions and the formulation of properties of transformations of functions such as the Fourier transform as transformations defining continuous, unitary etc. 
operators between function spaces. This point of view turned out to be particularly useful for the study of differential and integral equations. === Harmonic analysis === Harmonic analysis is a branch of mathematical analysis concerned with the representation of functions and signals as the superposition of basic waves. This includes the study of the notions of Fourier series and Fourier transforms (Fourier analysis), and of their generalizations. Harmonic analysis has applications in areas as diverse as music theory, number theory, representation theory, signal processing, quantum mechanics, tidal analysis, and neuroscience. === Differential equations === A differential equation is a mathematical equation for an unknown function of one or several variables that relates the values of the function itself and its derivatives of various orders. Differential equations play a prominent role in engineering, physics, economics, biology, and other disciplines. Differential equations arise in many areas of science and technology, specifically whenever a deterministic relation involving some continuously varying quantities (modeled by functions) and their rates of change in space or time (expressed as derivatives) is known or postulated. This is illustrated in classical mechanics, where the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow one (given the position, velocity, acceleration and various forces acting on the body) to express these variables dynamically as a differential equation for the unknown position of the body as a function of time. In some cases, this differential equation (called an equation of motion) may be solved explicitly. === Measure theory === A measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size. In this sense, a measure is a generalization of the concepts of length, area, and volume. A particularly important example is the Lebesgue measure on a Euclidean space, which assigns the conventional length, area, and volume of Euclidean geometry to suitable subsets of the n {\displaystyle n} -dimensional Euclidean space R n {\displaystyle \mathbb {R} ^{n}} . For instance, the Lebesgue measure of the interval [ 0 , 1 ] {\displaystyle \left[0,1\right]} in the real numbers is its length in the everyday sense of the word – specifically, 1. Technically, a measure is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set X {\displaystyle X} . It must assign 0 to the empty set and be (countably) additive: the measure of a 'large' subset that can be decomposed into a finite (or countable) number of 'smaller' disjoint subsets, is the sum of the measures of the "smaller" subsets. In general, if one wants to associate a consistent size to each subset of a given set while satisfying the other axioms of a measure, one only finds trivial examples like the counting measure. This problem was resolved by defining measure only on a sub-collection of all subsets; the so-called measurable subsets, which are required to form a σ {\displaystyle \sigma } -algebra. This means that the empty set, countable unions, countable intersections and complements of measurable subsets are measurable. Non-measurable sets in a Euclidean space, on which the Lebesgue measure cannot be defined consistently, are necessarily complicated in the sense of being badly mixed up with their complement. 
Indeed, their existence is a non-trivial consequence of the axiom of choice. === Numerical analysis === Numerical analysis is the study of algorithms that use numerical approximation (as opposed to general symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). Modern numerical analysis does not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors. Numerical analysis naturally finds applications in all fields of engineering and the physical sciences, but in the 21st century, the life sciences and even the arts have adopted elements of scientific computation. Ordinary differential equations appear in celestial mechanics (planets, stars and galaxies); numerical linear algebra is important for data analysis; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology. === Vector analysis === Vector analysis, also called vector calculus, is a branch of mathematical analysis dealing with vector-valued functions. === Scalar analysis === Scalar analysis is a branch of mathematical analysis dealing with values related to scale as opposed to direction. Values such as temperature are scalar because they describe a magnitude without any associated direction, unlike quantities such as force or displacement. === Tensor analysis === == Other topics == Calculus of variations deals with extremizing functionals, as opposed to ordinary calculus which deals with functions. Harmonic analysis deals with the representation of functions or signals as the superposition of basic waves. Geometric analysis involves the use of geometrical methods in the study of partial differential equations and the application of the theory of partial differential equations to geometry. Clifford analysis, the study of Clifford-valued functions that are annihilated by Dirac or Dirac-like operators, known in general as monogenic or Clifford analytic functions. p-adic analysis, the study of analysis within the context of p-adic numbers, which differs in some interesting and surprising ways from its real and complex counterparts. Non-standard analysis, which investigates the hyperreal numbers and their functions and gives a rigorous treatment of infinitesimals and infinitely large numbers. Computable analysis, the study of which parts of analysis can be carried out in a computable manner. Stochastic calculus – analytical notions developed for stochastic processes. Set-valued analysis – applies ideas from analysis and topology to set-valued functions. Convex analysis, the study of convex sets and functions. Idempotent analysis – analysis in the context of an idempotent semiring, where the lack of an additive inverse is compensated somewhat by the idempotent rule A + A = A. Tropical analysis – analysis of the idempotent semiring called the tropical semiring (or max-plus algebra/min-plus algebra); a short illustrative sketch of max-plus arithmetic follows after this list. Constructive analysis, which is built upon a foundation of constructive, rather than classical, logic and set theory. Intuitionistic analysis, which is developed from constructive logic like constructive analysis but also incorporates choice sequences. Paraconsistent analysis, which is built upon a foundation of paraconsistent, rather than classical, logic and set theory. Smooth infinitesimal analysis, which is developed in a smooth topos.
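As a concrete illustration of the idempotent and tropical analysis entries above, the following minimal Python sketch (added here for illustration, not drawn from the article) implements max-plus arithmetic, in which tropical "addition" is the maximum of two numbers and tropical "multiplication" is ordinary addition; the helper names and sample values are assumptions made only for this example.

```python
NEG_INF = float("-inf")   # additive identity of the max-plus (tropical) semiring

def t_add(a, b):
    # Tropical "addition" is the maximum, so t_add(a, a) == a,
    # which is the idempotent rule A + A = A.
    return max(a, b)

def t_mul(a, b):
    # Tropical "multiplication" is ordinary addition; its identity element is 0.
    return a + b

a, b = 3.0, 5.0
print(t_add(a, a) == a)        # True: idempotence stands in for the missing additive inverse
print(t_add(a, b))             # 5.0
print(t_mul(a, b))             # 8.0
print(t_add(a, NEG_INF) == a)  # True: negative infinity acts as the tropical "zero"
print(t_mul(a, 0.0) == a)      # True: 0 acts as the tropical "one"
```

The min-plus variant mentioned in the list is obtained by replacing max with min and negative infinity with positive infinity.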
== Applications == Techniques from analysis are also found in other areas such as: === Physical sciences === The vast majority of classical mechanics, relativity, and quantum mechanics is based on applied analysis, and differential equations in particular. Examples of important differential equations include Newton's second law, the Schrödinger equation, and the Einstein field equations. Functional analysis is also a major factor in quantum mechanics. === Signal processing === When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate individual components of a compound waveform, concentrating them for easier detection or removal. A large family of signal processing techniques consist of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation. === Other areas of mathematics === Techniques from analysis are used in many areas of mathematics, including: Analytic number theory Analytic combinatorics Continuous probability Differential entropy in information theory Differential games Differential geometry, the application of calculus to specific mathematical spaces known as manifolds that possess a complicated internal structure but behave in a simple manner locally. Differentiable manifolds Differential topology Partial differential equations == Famous Textbooks == Foundation of Analysis: The Arithmetic of Whole Rational, Irrational and Complex Numbers, by Edmund Landau Introductory Real Analysis, by Andrey Kolmogorov, Sergei Fomin Differential and Integral Calculus (3 volumes), by Grigorii Fichtenholz The Fundamentals of Mathematical Analysis (2 volumes), by Grigorii Fichtenholz A Course Of Mathematical Analysis (2 volumes), by Sergey Nikolsky Mathematical Analysis (2 volumes), by Vladimir Zorich A Course of Higher Mathematics (5 volumes, 6 parts), by Vladimir Smirnov Differential And Integral Calculus, by Nikolai Piskunov A Course of Mathematical Analysis, by Aleksandr Khinchin Mathematical Analysis: A Special Course, by Georgiy Shilov Theory of Functions of a Real Variable (2 volumes), by Isidor Natanson Problems in Mathematical Analysis, by Boris Demidovich Problems and Theorems in Analysis (2 volumes), by George Pólya, Gábor Szegő Mathematical Analysis: A Modern Approach to Advanced Calculus, by Tom Apostol Principles of Mathematical Analysis, by Walter Rudin Real Analysis: Measure Theory, Integration, and Hilbert Spaces, by Elias Stein Complex Analysis: An Introduction to the Theory of Analytic Functions of One Complex Variable, by Lars Ahlfors Complex Analysis, by Elias Stein Functional Analysis: Introduction to Further Topics in Analysis, by Elias Stein Analysis (2 volumes), by Terence Tao Analysis (3 volumes), by Herbert Amann, Joachim Escher Real and Functional Analysis, by Vladimir Bogachev, Oleg Smolyanov Real and Functional Analysis, by Serge Lang == See also == Constructive analysis History of calculus Hypercomplex analysis Multiple rule-based problems Multivariable calculus Paraconsistent logic Smooth infinitesimal analysis Timeline of calculus and mathematical analysis == References == == Further reading == Aleksandrov, A. D.; Kolmogorov, A. N.; Lavrent'ev, M. A., eds. (March 1969). Mathematics: Its Content, Methods, and Meaning. Vol. 1–3. Translated by Gould, S. H. (2nd ed.). Cambridge, Massachusetts: The M.I.T. Press / American Mathematical Society. Apostol, Tom M. (1974). Mathematical Analysis (2nd ed.). Addison–Wesley. ISBN 978-0201002881. 
Binmore, Kenneth George (1981). The foundations of analysis: a straightforward introduction. Cambridge University Press. Johnsonbaugh, Richard; Pfaffenberger, William Elmer (1981). Foundations of mathematical analysis. New York: M. Dekker. Nikol'skiĭ [Нико́льский], Sergey Mikhailovich [Серге́й Миха́йлович] (2002). "Mathematical analysis". In Hazewinkel, Michiel (ed.). Encyclopaedia of Mathematics. Springer-Verlag. ISBN 978-1402006098. Fusco, Nicola; Marcellini, Paolo; Sbordone, Carlo (1996). Analisi Matematica Due (in Italian). Liguori Editore. ISBN 978-8820726751. Rombaldi, Jean-Étienne (2004). Éléments d'analyse réelle : CAPES et agrégation interne de mathématiques (in French). EDP Sciences. ISBN 978-2868836816. Rudin, Walter (1976). Principles of Mathematical Analysis (3rd ed.). New York: McGraw-Hill. ISBN 978-0070542358. Rudin, Walter (1987). Real and Complex Analysis (3rd ed.). New York: McGraw-Hill. ISBN 978-0070542341. Whittaker, Edmund Taylor; Watson, George Neville (1927-01-02). A Course Of Modern Analysis: An Introduction to the General Theory of Infinite Processes and of Analytic Functions; with an Account of the Principal Transcendental Functions (4th ed.). Cambridge: at the University Press. ISBN 0521067944. (vi+608 pages) (reprinted: 1935, 1940, 1946, 1950, 1952, 1958, 1962, 1963, 1992) "Real Analysis – Course Notes" (PDF). Archived (PDF) from the original on 2007-04-19. == External links == Earliest Known Uses of Some of the Words of Mathematics: Calculus & Analysis Basic Analysis: Introduction to Real Analysis by Jiri Lebl (Creative Commons BY-NC-SA) Mathematical Analysis – Encyclopædia Britannica Calculus and Analysis
|
https://en.wikipedia.org/wiki/Mathematical_analysis
|
Integrated mathematics is the term used in the United States to describe the style of mathematics education which integrates many topics or strands of mathematics throughout each year of secondary school. Each math course in secondary school covers topics in algebra, geometry, trigonometry and functions. Nearly all countries throughout the world, except the United States, normally follow this type of integrated curriculum. In the United States, topics are usually integrated throughout elementary school up to the seventh or sometimes eighth grade. Beginning with high school level courses, topics are usually separated so that one year a student focuses entirely on algebra (if it was not already taken in the eighth grade), the next year entirely on geometry, then another year of algebra (sometimes with trigonometry), and later an optional fourth year of precalculus or calculus. Precalculus is the exception to the rule, as it usually integrates algebra, trigonometry, and geometry topics. Statistics may be integrated into all the courses or presented as a separate course. New York State began using integrated math curricula in the 1980s, but recently returned to a traditional curriculum. A few other localities in the United States have also tried such integrated curricula, including Georgia, which mandated them in 2008 but subsequently made them optional. More recently, a few other states have mandated that all districts change to integrated curricula, including North Carolina, West Virginia and Utah. Some districts in other states, including California, have either switched or are considering switching to an integrated curriculum. Under the Common Core Standards adopted by most states in 2012, high school mathematics may be taught using either a traditional American approach or an integrated curriculum. The only difference would be the order in which the topics are taught. Supporters of using integrated curricula in the United States believe that students will be able to see the connections between algebra and geometry better in an integrated curriculum. General mathematics is another term for a mathematics course organized around different branches of mathematics, with topics arranged according to the main objective of the course. When applied to primary education, the term general mathematics may encompass mathematical concepts more complex than basic arithmetic, like number notation, addition and multiplication tables, fractions and related operations, measurement units. When used in context of higher education, the term may encompass mathematical terminology and concepts, finding and applying appropriate techniques to solve routine problems, interpreting and representing practical information given in various forms, interpreting and using mathematical models, and constructing mathematical arguments to solve familiar and unfamiliar problems. == References ==
|
https://en.wikipedia.org/wiki/Integrated_mathematics
|
Ancient Greek mathematics refers to the history of mathematical ideas and texts in Ancient Greece during Classical and Late antiquity, mostly from the 5th century BC to the 6th century AD. Greek mathematicians lived in cities spread around the shores of the ancient Mediterranean, from Anatolia to Italy and North Africa, but were united by Greek culture and the Greek language. The development of mathematics as a theoretical discipline and the use of deductive reasoning in proofs is an important difference between Greek mathematics and that of preceding civilizations, such as Ancient Egypt and Babylonia. The early history of Greek mathematics is obscure, and traditional narratives of mathematical theorems found before the fifth century BC are regarded as later inventions. It is now generally accepted that treatises of deductive mathematics written in Greek began circulating around the mid-fifth century BC, but the earliest complete work on the subject is the Elements, written during the Hellenistic period. The works of the renowned mathematicians Archimedes and Apollonius, as well as of the astronomer Hipparchus, also belong to this period. In the Imperial Roman era, Ptolemy used trigonometry to determine the positions of stars in the sky, while Nicomachus and other ancient philosophers revived ancient number theory and harmonics. In Late antiquity, Pappus of Alexandria wrote his Collection, summarizing the work of his predecessors, while Diophantus' Arithmetica dealt with the solution of arithmetic problems by way of pre-modern algebra. Later authors such as Theon of Alexandria, his daughter Hypatia, and Eutocius of Ascalon wrote commentaries on the authors making up the ancient Greek mathematical corpus. The works of ancient Greek mathematicians were copied in the medieval Byzantine period and translated into Arabic and Latin, where they exerted influence on mathematics in the Islamic world and in Medieval Europe. During the Renaissance, the texts of Euclid, Archimedes, Apollonius, and Pappus in particular went on to influence the development of early modern mathematics. Some problems in Ancient Greek mathematics were solved only in the modern era by mathematicians such as Gauss, and attempts to prove or disprove Euclid's parallel postulate spurred the development of non-Euclidean geometry. == Etymology == Greek mathēmatikē (Ancient Greek: μαθηματική) derives from the Ancient Greek: μάθημα, romanized: máthēma, Attic Greek: [má.tʰɛː.ma], Koinē Greek: [ˈma.θi.ma], from the verb manthano, "I learn". Strictly speaking, a máthēma could be any branch of learning, or anything learnt; however, since antiquity certain mathēmata were granted special status: arithmetic, geometry, astronomy, and harmonics. These four mathēmata, which appear listed together around the time of Archytas and Plato, would later become the medieval quadrivium. == Origins == The origins of Greek mathematics are not well documented. The earliest known written treatises on Ancient Greek mathematics, starting with Hippocrates of Chios in the 5th century BC, have been lost, and the early history of mathematics must be reconstructed from information passed down through later authors, beginning in the mid-4th century BC. Much of the knowledge about Ancient Greek mathematics in this period is thanks to references by Plato and Aristotle, and to quotations from Eudemus of Rhodes' histories of geometry and arithmetic by later authors. These references provide near-contemporary accounts for many mathematicians active in the 4th century BC.
Euclid's Elements is also believed to contain many theorems that are attributed to mathematicians in the preceding centuries. === Bronze Age === The earliest advanced civilizations in Greece were the Minoan and later Mycenaean (1500–1200 BC) civilizations, both of which flourished in the second half of the Bronze Age. While these civilizations possessed writing, and many Linear B documents written in Mycenaean Greek have been deciphered, no mathematical writings have yet been discovered. When Greek writing re-emerged in the 7th century BC after the Bronze Age collapse, it was based on an entirely new system derived from the Phoenician alphabet, with papyrus from Ancient Egypt being the preferred writing medium. Unlike later Greek mathematics, the mathematics of the preceding Babylonian and Egyptian Bronze Age civilizations was primarily focused on land mensuration and accounting; although some mathematical problems went beyond purely utilitarian aims, including constructing artificial scenarios involving the solution of quadratic equations, there are no signs of explicit theoretical concerns. Though no direct evidence of transmission is available, it is generally thought that Babylonian and Egyptian mathematics had an influence on the younger Greek culture, possibly through an oral tradition of mathematical problems over the course of centuries. === Archaic period === Later traditions attribute the origin of Greek mathematics to either Thales of Miletus, one of the legendary Seven Sages of Greece, or to Pythagoras of Samos, both of whom are said to have visited Egypt and Babylon and learned mathematics there. However, modern scholarship tends to be skeptical of such claims, as neither Thales nor Pythagoras left any writings behind that were available in the Classical period. Additionally, widespread literacy and the scribal culture that would have supported the transmission of mathematical treatises did not emerge fully until the 5th century; the oral literature of their time was primarily focused on public speeches and recitations of poetry. The standard view among historians is that the discoveries Thales and Pythagoras are credited with, such as Thales' Theorem, the Pythagorean theorem, and the Platonic solids, are the product of attributions by much later authors. === Classical Greece === The earliest traces of Greek mathematical treatises appear in the second half of the fifth century BC. According to Eudemus, Hippocrates of Chios was the first to write a book of Elements in the tradition later continued by Euclid. Fragments from another treatise written by Hippocrates, on lunes, also survive, possibly as an attempt to square the circle. Eudemus states that Hippocrates studied with an astronomer named Oenopides of Chios. Other mathematicians associated with Chios include Andron and Zenodotus, who may be associated with a "school of Oenopides" mentioned by Proclus. Although many stories of the early Pythagoreans are likely apocryphal, including stories about people being drowned or exiled for sharing mathematical discoveries, some fifth-century Pythagoreans may have contributed to mathematics. Beginning with Philolaus of Croton, a contemporary of Socrates, studies in arithmetic, geometry, astronomy, and harmonics became increasingly associated with Pythagoreanism. Fragments of Philolaus' work are preserved in quotations from later authors. Aristotle is one of the earliest authors to associate Pythagoreanism with mathematics, though he never attributed anything specifically to Pythagoras.
Mathematical discussions from other fifth-century philosophers are also extant: Antiphon claimed to be able to construct a rectilinear figure with the same area as a given circle, while Hippias is credited with a method for squaring a circle with a neusis construction. Protagoras and Democritus debated the possibility for a line to intersect a circle at a single point. According to Archimedes, Democritus also asserted, apparently without proof, that the volume of a cone is 1/3 the volume of a cylinder with the same base and height, a result which was later proved by Eudoxus of Cnidus. ==== Mathematics in the time of Plato ==== While Plato was not a mathematician himself, numerous early mathematicians, including Archytas, Theaetetus, and Eudoxus, were associated with Plato or with his Academy, and Plato mentions mathematics in several of his dialogues, including the Meno, the Theaetetus, the Republic, and the Timaeus. Archytas, a Pythagorean philosopher from Tarentum, was a friend of Plato who made several mathematical discoveries. Archytas is often credited with books VII to IX in the Elements, which deal with the Euclidean algorithm, prime numbers, mean ratios, and perfect numbers. Archytas solved the problem of doubling the cube, now known to be impossible with only a compass and a straightedge, with an alternative method, systematized the Pythagorean means, and made contributions to optics and mechanics. Theaetetus figures as a character in the Platonic dialogue named after him, where he works on a problem given to him by Theodorus of Cyrene: to demonstrate that the square roots of several numbers from 3 to 17 are irrational, a construction now known as the Spiral of Theodorus. Theaetetus is traditionally credited with much of the work contained in Book X of Euclid's Elements, concerned with incommensurable magnitudes, and Book XIII, which outlines the construction of the regular polyhedra. Although some of the regular polyhedra were certainly known prior to Theaetetus, he is credited with their systematic construction and with the proof that only five of them exist. Another mathematician associated with Plato's Academy is Eudoxus of Cnidus, who developed the theory of proportion found in Book V of the Elements. Archimedes also credits Eudoxus of Cnidus with two propositions in Book XII of Euclid's Elements, including the proof that the volume of a cone is one-third the volume of a cylinder with the same base and height; these proofs use an early form of calculus known as the method of exhaustion. This method is also used by Archimedes himself in order to find an approximation to π (Measurement of the Circle) and to prove that the area enclosed by a parabola and a straight line is 4/3 times the area of a triangle with equal base and height (Quadrature of the Parabola). Eudoxus also developed an astronomical calendar, now lost, that remains partially preserved by an imitation in poetic form called Phaenomena by Aratus. Eudoxus seems to have founded a school of mathematics in Cyzicus, where one of Eudoxus' students, Menaechmus, went on to develop a theory of conic sections. == Hellenistic and early Roman period == Ancient Greek mathematics reached its acme during the Hellenistic and early Roman periods following Alexander the Great's conquest of the Eastern Mediterranean, Egypt, Mesopotamia, the Iranian plateau, Central Asia, and parts of India, leading to the spread of the Greek language and culture across these regions.
Koine Greek became the lingua franca of scholarship throughout the Hellenistic world, and the mathematics of the Classical period merged with Egyptian and Babylonian mathematics to give rise to Hellenistic mathematics; several centers of learning appeared during the Hellenistic period, of which the most important one was the Musaeum in Alexandria, in Ptolemaic Egypt. Although few in number, Hellenistic mathematicians actively communicated with each other via letters; publication consisted of passing and copying someone's work among colleagues. Working at the Library of Alexandria, Euclid collected many previous mathematical results and theorems in the Elements, a compilation of many of the works of his predecessors that would become a canon of geometry and elementary number theory for many centuries. Archimedes, building on the work in the Elements, used the method of exhaustion to approximate π (Measurement of a Circle), measured the surface area and volume of a sphere (On the Sphere and Cylinder), devised a mechanical method for developing solutions to mathematical problems using the law of the lever (Method of Mechanical Theorems), and developed a method for representing very large numbers in order to show that the number of grains of sand filling the universe was not uncountable (The Sand-Reckoner). Apollonius of Perga, in his extant work Conics, refined and developed the theory of conic sections first outlined by Menaechmus, Euclid, and Conon of Samos. Trigonometry was developed around the time of Hipparchus, an early astronomer, and both trigonometry and astronomy were further developed by Ptolemy in his Almagest. === Construction problems === Much of the extant literature on Hellenistic mathematics deals with three construction problems: doubling the cube, trisecting an angle, and squaring the circle, all of which are now known to be impossible with only a straightedge and a compass; however, many attempts were made using neusis constructions and curves such as the cissoid of Diocles, the quadratrix, and the conchoid of Nicomedes. The constructions of regular polygons and polyhedra were already known by the time of the publication of Euclid's Elements. Archimedes extended this in a now lost work by constructing the semiregular polyhedra, also sometimes known as Archimedean solids. A work transmitted as "Book XIV" of Euclid's Elements, likely written a few centuries later by Hypsicles, provides a historical development after Theaetetus: Aristaeus the Elder's comparison of the five figures and Apollonius of Perga's Comparison of the Dodecahedron and the Icosahedron. Another book, transmitted as "Book XV" of Euclid's Elements, which was compiled in the 6th century CE, provides further developments. Many of the works on the solution of construction problems became part of a standard curriculum of works which were studied during the Hellenistic period: Data and Porisms by Euclid, several works by Apollonius of Perga including Cutting off a ratio, Cutting off an area, Determinate section, Tangencies, and Neusis, and several works dealing with loci, including Plane Loci and Conics by Apollonius, Solid Loci by Aristaeus the Elder, Loci on a Surface by Euclid, and On Means by Eratosthenes of Cyrene. All of these works other than Data, Conics Books I to VII, and Cutting off a ratio are lost.
However, a rough outline of the contents of these lost works can be obtained from Book 7 of the Collection of Pappus of Alexandria, who provides brief epitomes of each of the works, along with lemmas for Cutting off an area, Determinate section, Tangencies, Porisms, Neusis, Plane Loci, and Book VIII of the Conics. The study of optics in Ancient Greece was also considered a part of geometry. An extant work on catoptrics is dubiously attributed to Euclid; Archimedes is known to have written a now lost work on catoptrics; and another work, On Burning Mirrors, by Diocles is extant in an Arabic translation. === Astronomy === The Little Astronomy, a collection of shorter works from the 4th–2nd century BC, mostly with astronomical relevance, has survived because its works were bundled together as an astronomy curriculum beginning in the 2nd century AD and transmitted as a group: Theodosius's Spherics, Autolycus's On the Moving Sphere, Euclid's Optics and Phaenomena, Theodosius's On Habitations and On Days and Nights, Aristarchus's On the Sizes and Distances, Autolycus's On Risings and Settings, and Hypsicles's On Ascensions. These works are all extant in Vaticanus gr. 204, which also contains Apollonius's Conics books I–IV and the commentary by Eutocius, and Euclid's Catoptrics and his Data with an introduction by Marinus of Neapolis. This collection was translated into Arabic with a few additions such as Euclid's Data, Menelaus's Spherics (which only survives in Arabic), and various works by Archimedes as the Middle Books, intermediate between Euclid's Elements and Ptolemy's Almagest. Around the 2nd century BC, the works of Babylonian astronomers became available to Ancient Greek mathematicians. The development of trigonometry as a synthesis of Babylonian and Greek methods in mathematical astronomy is commonly attributed to Hipparchus, who made extensive astronomical observations and wrote several mathematical treatises; however, all of Hipparchus's works have been lost with the exception of his Commentary on the Phaenomena of Eudoxus and Aratus, a critical commentary on a lost treatise by Eudoxus and a popular poem based on it by Aratus about astronomical phenomena, which was preserved bundled among other commentary on Aratus's poem. In the 2nd century AD, Claudius Ptolemy compiled the observations of Hipparchus and other astronomers into a work now called the Almagest, explaining the motions of the stars and planets according to a geocentric model and calculating chord tables to a higher degree of precision than had been done previously, along with an instruction manual, the Handy Tables. === Arithmetic === Building on the works of the earlier Pythagoreans, Nicomachus of Gerasa wrote an Introduction to Arithmetic which would go on to receive later commentary in Neopythagoreanism. The continuing influence of Platonism in mathematics is shown by another extant work, Mathematics Useful For Understanding Plato, by Theon of Smyrna, written around the same time. Diophantus wrote on polygonal numbers and produced a work of pre-modern algebra (Arithmetica). === Applied mathematics === Much of the work represented by authors such as Euclid, Archimedes, Apollonius, Hipparchus, and Diophantus was of a very advanced level and rarely mastered outside a small circle.
Ancient Greek mathematics was not limited to theoretical works but was also used in other activities, such as business transactions and land mensuration, as evidenced by extant texts where computational procedures and practical considerations took more of a central role. Examples of applied mathematics around this time include the construction of analogue computers like the Antikythera mechanism, the accurate measurement of the circumference of the Earth by Eratosthenes, and the mathematical and mechanical works of Heron. == Mathematics in late antiquity == The mathematicians of the later Roman era, from the 4th century onward, generally produced few notable original works; however, they are distinguished for their commentaries and expositions on the works of earlier mathematicians. These commentaries have preserved valuable extracts from works which have perished, or historical allusions which, in the absence of original documents, are precious because of their rarity. === Pappus' Collection === Pappus of Alexandria compiled a canon of results of earlier mathematics in the Collection in eight books, of which part of book II and books III through VII are extant in Greek and book VIII is extant in Arabic. The Collection attempts to sum up the whole of Ancient Greek mathematics up to that time as interpreted by Pappus: Book III is framed as a letter to Pandrosion, a mathematician in Athens, and discusses three construction problems and attempts to solve them: doubling the cube, trisecting an angle, and squaring the circle. Book IV discusses classical geometry, which Pappus divides into plane geometry, line geometry, and solid geometry, and includes a discussion of Archimedes' construction of the arbelos, otherwise only known via a pseudo-Archimedean work, the Book of Lemmas. Book V discusses isoperimetric figures, summarizing otherwise lost works by Zenodorus and Archimedes on isoperimetric plane figures and solid figures, respectively. Book VI deals with astronomy, providing commentary on some of the works of the Little Astronomy corpus. Book VII deals with analysis, providing epitomes and lemmas from otherwise lost works. Book VIII deals with mechanics. The Greek version breaks off in the middle of a sentence discussing Hero of Alexandria, but a complete edition of the book survives in Arabic. === Commentaries === The commentary tradition, which had begun during the Hellenistic period, continued into late antiquity. The first known commentary on the Elements was written by Hero of Alexandria, who likely set the format for future commentaries. Serenus of Antinoöpolis wrote a lost commentary on the Conics of Apollonius, along with two works that survive, Section of a Cylinder and Section of a Cone, expanding on specific subjects in the Conics. Pappus wrote a commentary on Book X of the Elements, dealing with incommensurable magnitudes. Heliodorus of Larissa wrote a summary of the Optics. Many of the late antique commentators were associated with Neoplatonist philosophy; Porphyry of Tyre, a student of Plotinus, the founder of Neoplatonism, wrote a commentary on Ptolemy's Harmonics. Iamblichus, who was himself a student of Porphyry, wrote a commentary on Nicomachus' Introduction to Arithmetic.
In Alexandria in the 4th century, Theon of Alexandria wrote commentaries on the writings of Ptolemy, including a commentary on the Almagest and two commentaries on the Handy Tables, one of which is more of an instruction manual ("Little Commentary"), and the other a much more detailed exposition with derivations ("Great Commentary"). Hypatia, Theon's daughter, also wrote a commentary on Diophantus' Arithmetica and a commentary on the Conics of Apollonius, which have not survived. In the 5th century, in Athens, Proclus wrote a commentary on the first book of Euclid's Elements, which survives. Proclus' contemporary, Domninus of Larissa, wrote a summary of Nicomachus' Introduction to Arithmetic, while Marinus of Neapolis, Proclus' successor, wrote an Introduction to Euclid's Data. Meanwhile in Alexandria, Ammonius Hermiae, John Philoponus and Simplicius of Cilicia wrote commentaries on the works of Aristotle that preserve information on earlier mathematicians and philosophers. Eutocius of Ascalon (c. 480–540 AD), another student of Ammonius, wrote commentaries that are extant on Apollonius' Conics and on some treatises of Archimedes: On the Sphere and Cylinder, Measurement of a Circle, and On Balancing Planes (though the authorship of the last one is disputed). In Rome, Boethius, seeking to preserve ancient Greek philosophical and mathematical learning, translated works on the quadrivium into Latin, deriving much of his work on Arithmetic and Harmonics from Nicomachus. After the closure of the Neoplatonic schools by the emperor Justinian in 529 AD, the institution of mathematics as a formal enterprise entered a decline. However, two mathematicians connected to the Neoplatonic tradition were commissioned to build the Hagia Sophia: Anthemius of Tralles and Isidore of Miletus. Anthemius constructed many advanced mechanisms and wrote a work On Surprising Mechanisms which treats "burning mirrors" and skeptically attempts to explain the function of Archimedes' heat ray. Isidore, who continued the project of the Hagia Sophia after Anthemius' death, also supervised the revision of Eutocius' commentaries on Archimedes. From someone in Isidore's circle we also have a work on polyhedra that is transmitted pseudepigraphically as Book XV of Euclid's Elements. == Reception and legacy == The majority of mathematical treatises written in Ancient Greek, along with the discoveries made within them, have been lost; around 30% of the works known from references to them are extant. Authors whose works survive in Greek manuscripts include: Euclid, Autolycus of Pitane, Archimedes, Aristarchus of Samos, Philo of Byzantium, Biton of Pergamon, Apollonius of Perga, Hipparchus, Theodosius of Bithynia, Hypsicles, Athenaeus Mechanicus, Geminus, Hero of Alexandria, Apollodorus of Damascus, Theon of Smyrna, Cleomedes, Nicomachus, Ptolemy, Cleonides, Gaudentius, Anatolius of Laodicea, Aristides Quintilian, Porphyry, Diophantus, Alypius, Heliodorus of Larissa, Pappus of Alexandria, Serenus of Antinoöpolis, Theon of Alexandria, Proclus, Marinus of Neapolis, Domninus of Larissa, Anthemius of Tralles, and Eutocius. The earliest surviving papyrus to record a Greek mathematical text is P. Hib. i 27, which contains a parapegma of Eudoxus' astronomical calendar, along with several ostraca from the 3rd century BC that deal with propositions XIII.10 and XIII.16 of Euclid's Elements. A papyrus recovered from Herculaneum contains an essay by the Epicurean philosopher Demetrius Lacon on Euclid's Elements.
Most of the oldest extant manuscripts for each text date from the 9th century onward, and are copies of works written during and before the Hellenistic period. The two major sources of manuscripts are Byzantine-era codices, copied some 500 to 1500 years after their originals, and Arabic translations of Greek works; what has survived reflects the preferences of readers in late antiquity along with the interests of mathematicians in the Byzantine empire and the medieval Islamic world who preserved and copied them. Despite the lack of original manuscripts, the dates for some Greek mathematicians are more certain than the dates of surviving Babylonian or Egyptian sources because a number of overlapping chronologies exist, though many dates remain uncertain. === Byzantine mathematics === With the closure of the Neoplatonist schools in the 6th century, Greek mathematics declined in the medieval Byzantine period, although many works were preserved in medieval manuscript transmission and translated first into Syriac and Arabic, and later into Latin. During the transition to minuscule script in the 9th century, many works that were not copied were lost, although a few uncial manuscripts do survive. Many surviving works are derived from only a single manuscript, such as Pappus' Collection and Books I–IV of the Conics. Many of the surviving manuscripts originate from two scholars of this period in the circle of Photios I: Leo the Mathematician and Arethas of Caesarea. Scholia written by Arethas in the margins of Euclid's Elements, copied throughout multiple extant manuscripts, derive from Proclus' commentary along with many commentaries on Euclid which are now lost. The works of Archimedes survived in three different recensions in manuscripts from the 9th and 10th centuries; two of these are now lost after being copied, while the third, the Archimedes Palimpsest, was only rediscovered in 1906. In the later Byzantine period, George Pachymeres wrote a summary of the quadrivium, and Maximus Planudes wrote scholia on the first two books of Diophantus. === Medieval Islamic mathematics === Numerous mathematical treatises were translated into Arabic in the 9th century; many works are extant today only in Arabic translation, and there is evidence for several more that have since been lost. Medieval Islamic scientists such as Alhazen developed the ideas of Ancient Greek geometry into advanced theories in optics and astronomy, and Diophantus' Arithmetica was synthesized with the works of Al-Khwarizmi and works from Indian mathematics to develop a theory of algebra. The following works are extant only in Arabic translations: Apollonius, Conics books V to VII and Cutting Off of a Ratio; Archimedes, Book of Lemmas; Diocles, On Burning Mirrors; Diophantus, Arithmetica books IV to VII; Euclid, On Divisions of Figures and On Weights; Menelaus, Sphaerica; Hero, Catoptrica and Mechanica; Pappus, Commentary on Euclid's Elements book X and Collection Book VIII; and Ptolemy, Planisphaerium. Additionally, the work Optics by Ptolemy only survives in a Latin translation of the Arabic translation of a Greek original. === In Latin Medieval Europe === The works written in late antiquity by Boethius and Martianus Capella, derived from Ancient Greek mathematical writings, formed the basis of the early medieval quadrivium of arithmetic, geometry, astronomy, and music.
In the 12th century the original works of Ancient Greek mathematics were translated into Latin first from Arabic by Gerard of Cremona, and then from the original Greek a century later by William of Moerbeke. === Renaissance === The publication of Greek mathematical works increased their audience; Pappus's collection was published in 1588, Diophantus in 1621. Diophantus would go on to influence Pierre de Fermat's work on number theory; Fermat scribbled his famous note about Fermat's Last theorem in his copy of Arithmetica. Descartes, working through the Problem of Apollonius from his edition of Pappus, proved what is now called Descartes' theorem and laid the foundations for Analytic geometry. === Modern mathematics === Ancient Greek mathematics constitutes an important period in the history of mathematics: fundamental in respect of geometry and for the idea of formal proof. Greek mathematicians also contributed to number theory, mathematical astronomy, combinatorics, mathematical physics, and, at times, approached ideas close to the integral calculus. Richard Dedekind acknowledged Eudoxus's theory of proportion as an inspiration for the Dedekind cut, a method of contructing the real numbers. == See also == Timeline of ancient Greek mathematicians List of Greek mathematicians Music of ancient Greece – Musical traditions of ancient Greece == Notes == === Footnotes === === Citations === == References == Acerbi, Fabio (2018), "Hellenistic Mathematics", in Keyser, Paul T; Scarborough, John (eds.), Oxford Handbook of Science and Medicine in the Classical World, pp. 268–292, doi:10.1093/oxfordhb/9780199734146.013.69, ISBN 978-0-19-973414-6, retrieved 2021-05-26 Boyer, Carl B. (1991), A History of Mathematics (3rd ed.), John Wiley & Sons, Inc., ISBN 978-0-471-54397-8 Cameron, A. (1990), "Isidore of Miletus and Hypatia: On the Editing of Mathematical Texts", Greek, Roman, and Byzantine Studies, 31 (1): 103–127 Fowler, D. H. (1999), The Mathematics of Plato's Academy (2nd ed.), Clarendon Press Høyrup, J. (1990), "Sub-scientific mathematics: Undercurrents and missing links in the mathematical technology of the Hellenistic and Roman world" (PDF) (Unpublished manuscript, written for Aufstieg und Niedergang der römischen Welt) Knorr, Wilbur R. (1986), The Ancient Tradition of Geometric Problems Knorr, Wilbur R. (1996), "The method of indivisibles in Ancient Geometry", Vita Mathematica, MAA Press, pp. 67–86 Mansfeld, J. (2016), Prolegomena Mathematica: From Apollonius of Perga to the Late Neoplatonism. With an Appendix on Pappus and the History of Platonism, Brill, ISBN 978-90-04-32105-2 Netz, Reviel (2022), A New History of Greek Mathematics, Cambridge University Press, ISBN 978-1-108-83384-4 Netz, Reviel (2014), "The problem of Pythagorean mathematics", in Huffman, Carl A. (ed.), A History of Pythagoreanism, Cambridge University Press, pp. 167–184, doi:10.1017/CBO9781139028172.009, ISBN 978-1-107-01439-8 Schofield, Malcolm (2014), "Archytas", in Huffman, Carl A. (ed.), A History of Pythagoreanism, Cambridge University Press, pp. 69–87, doi:10.1017/CBO9781139028172.009, ISBN 978-1-107-01439-8 == Further reading == A. Barker, Porphyry’s Commentary on Ptolemy’s Harmonics A. Barker, Greek Musical Writings, Vol. 2: Harmonic and Acoustic Theory A. Bernard, “Ancient Rhetoric and Greek Mathematics: A Response to a Modern Historiographical Dilemma,” I. Bodnár, Oenopides of Chius: A Survey of the Modern Literature with a Collection of the Ancient Testimonia Burton, David M. 
(1997), The History of Mathematics: An Introduction (3rd ed.), The McGraw-Hill Companies, Inc., ISBN 978-0-07-009465-9 M. F. Burnyeat, “Plato on Why Mathematics Is Good for the Soul,” Proceedings of the British Academy 2000 M. F. Burnyeat, “The Philosophical Sense of Theaetetus’ Mathematics,” 1978 L. Corry, A Brief History of Number S. Cuomo, Pappus of Alexandria and the Mathematics of Late Antiquity Christianidis, Jean, ed. (2004), Classics in the History of Greek Mathematics, Dordrecht: Kluwer, ISBN 978-1-4020-0081-2 Cooke, Roger (1997), The History of Mathematics: A Brief Course, Wiley-Interscience, ISBN 978-0-471-18082-1 Derbyshire, John (2006), Unknown Quantity: A Real And Imaginary History of Algebra, Joseph Henry Press, ISBN 978-0-309-09657-7 E. J. Dijksterhuis, Archimedes M. N. Fried, and S. Unguru, Apollonius of Perga’s Conica: Text, Context, Subtext Heath, Thomas Little (1981) [First published 1921], A History of Greek Mathematics, Dover publications, ISBN 978-0-486-24073-2 Heath, Thomas Little (2003) [First published 1931], A Manual of Greek Mathematics, Dover publications, ISBN 978-0-486-43231-1 Huffman, Archytas Huffman, Philolaus A. Jones, A Portable Cosmos R. W. Knorr, The Evolution of the Euclidean Elements, 1975 H. Mendell, “Reflections on Eudoxus, Callippus and Their Curves: Hippopedes and Callippopedes,” I. Mueller, Philosophy of Mathematics and Deductive Structure in Euclid’s Elements Netz, “Eudemus of Rhodes, Hippocrates of Chios and the Earliest Form of a Greek Mathematical Text,” R. Netz, Ludic Proof: Greek Mathematics and the Alexandrian Aesthetics R. Netz, The Shaping of Deduction in Greek Mathematics O. Pedersen, A Survey of the Almagest: With Annotation and New Commentary by Alexander Jones D. N. Sedley, “Epicurus and the Mathematicians of Cyzicus,” M. Sialaros, J. Christianidis, and A. Megremi (eds.), “On Mathemata: Commenting on Greek and Arabic Mathematical Texts,” Sing, Robert; Berkel, Tazuko Angela van; Osborne, Robin (2022), Numbers and numeracy in the Greek polis, Brill, ISBN 978-90-04-46721-7 Stillwell, John (2004), Mathematics and its History (2nd ed.), Springer Science + Business Media Inc., ISBN 978-0-387-95336-6 Szabó, Árpád; Szabó, Árpád (1978), The Beginnings of Greek Mathematics, Budapest: Akadémiai Kiadó, ISBN 978-963-05-1416-3 S. Unguru, “On the Need to Rewrite the History of Greek Mathematics,” Archive for History of Exact Sciences 15 (1975): 67-114 G. Vlastos, “Elenchus and Mathematics: A Turning-Point in Plato’s Philosophical Development,” I. Yavetz, “On the Homocentric Spheres of Eudoxus,” Archive for History of Exact Sciences == External links == Vatican Exhibit History of Mathematics MacTutor archive of History of Mathematics
|
https://en.wikipedia.org/wiki/Ancient_Greek_mathematics
|
The history of mathematics deals with the origin of discoveries in mathematics and the mathematical methods and notation of the past. Before the modern age and the worldwide spread of knowledge, written examples of new mathematical developments have come to light only in a few locales. From 3000 BC the Mesopotamian states of Sumer, Akkad and Assyria, followed closely by Ancient Egypt and the Levantine state of Ebla began using arithmetic, algebra and geometry for purposes of taxation, commerce, trade and also in the field of astronomy to record time and formulate calendars. The earliest mathematical texts available are from Mesopotamia and Egypt – Plimpton 322 (Babylonian c. 2000 – 1900 BC), the Rhind Mathematical Papyrus (Egyptian c. 1800 BC) and the Moscow Mathematical Papyrus (Egyptian c. 1890 BC). All of these texts mention the so-called Pythagorean triples, so, by inference, the Pythagorean theorem seems to be the most ancient and widespread mathematical development after basic arithmetic and geometry. The study of mathematics as a "demonstrative discipline" began in the 6th century BC with the Pythagoreans, who coined the term "mathematics" from the ancient Greek μάθημα (mathema), meaning "subject of instruction". Greek mathematics greatly refined the methods (especially through the introduction of deductive reasoning and mathematical rigor in proofs) and expanded the subject matter of mathematics. The ancient Romans used applied mathematics in surveying, structural engineering, mechanical engineering, bookkeeping, creation of lunar and solar calendars, and even arts and crafts. Chinese mathematics made early contributions, including a place value system and the first use of negative numbers. The Hindu–Arabic numeral system and the rules for the use of its operations, in use throughout the world today evolved over the course of the first millennium AD in India and were transmitted to the Western world via Islamic mathematics through the work of Muḥammad ibn Mūsā al-Khwārizmī. Islamic mathematics, in turn, developed and expanded the mathematics known to these civilizations. Contemporaneous with but independent of these traditions were the mathematics developed by the Maya civilization of Mexico and Central America, where the concept of zero was given a standard symbol in Maya numerals. Many Greek and Arabic texts on mathematics were translated into Latin from the 12th century onward, leading to further development of mathematics in Medieval Europe. From ancient times through the Middle Ages, periods of mathematical discovery were often followed by centuries of stagnation. Beginning in Renaissance Italy in the 15th century, new mathematical developments, interacting with new scientific discoveries, were made at an increasing pace that continues through the present day. This includes the groundbreaking work of both Isaac Newton and Gottfried Wilhelm Leibniz in the development of infinitesimal calculus during the course of the 17th century and following discoveries of German mathematicians like Carl Friedrich Gauss and David Hilbert. == Prehistoric == The origins of mathematical thought lie in the concepts of number, patterns in nature, magnitude, and form. Modern studies of animal cognition have shown that these concepts are not unique to humans. Such concepts would have been part of everyday life in hunter-gatherer societies. 
The idea of the "number" concept evolving gradually over time is supported by the existence of languages which preserve the distinction between "one", "two", and "many", but not of numbers larger than two. The use of yarn by Neanderthals some 40,000 years ago at a site in Abri du Maras in the south of France suggests they knew basic concepts in mathematics. The Ishango bone, found near the headwaters of the Nile river (northeastern Congo), may be more than 20,000 years old and consists of a series of marks carved in three columns running the length of the bone. Common interpretations are that the Ishango bone shows either a tally constituting the earliest known demonstration of sequences of prime numbers or a six-month lunar calendar. Peter Rudman argues that the development of the concept of prime numbers could only have come about after the concept of division, which he dates to after 10,000 BC, with prime numbers probably not being understood until about 500 BC. He also writes that "no attempt has been made to explain why a tally of something should exhibit multiples of two, prime numbers between 10 and 20, and some numbers that are almost multiples of 10." The Ishango bone, according to scholar Alexander Marshack, may have influenced the later development of mathematics in Egypt as, like some entries on the Ishango bone, Egyptian arithmetic also made use of multiplication by 2; this, however, is disputed. Predynastic Egyptians of the 5th millennium BC pictorially represented geometric designs. It has been claimed that megalithic monuments in England and Scotland, dating from the 3rd millennium BC, incorporate geometric ideas such as circles, ellipses, and Pythagorean triples in their design. All of the above are disputed, however, and the oldest currently undisputed mathematical documents are from Babylonian and dynastic Egyptian sources. == Babylonian == Babylonian mathematics refers to any mathematics of the peoples of Mesopotamia (modern Iraq) from the days of the early Sumerians through the Hellenistic period almost to the dawn of Christianity. The majority of Babylonian mathematical work comes from two widely separated periods: the first few hundred years of the second millennium BC (Old Babylonian period), and the last few centuries of the first millennium BC (Seleucid period). It is named Babylonian mathematics due to the central role of Babylon as a place of study. Later under the Arab Empire, Mesopotamia, especially Baghdad, once again became an important center of study for Islamic mathematics. In contrast to the sparsity of sources in Egyptian mathematics, knowledge of Babylonian mathematics is derived from more than 400 clay tablets unearthed since the 1850s. Written in cuneiform script, the tablets were inscribed while the clay was moist and then baked hard in an oven or by the heat of the sun. Some of these appear to be graded homework. The earliest evidence of written mathematics dates back to the ancient Sumerians, who built the earliest civilization in Mesopotamia. They developed a complex system of metrology from 3000 BC that was chiefly concerned with administrative/financial counting, such as grain allotments, workers, weights of silver, or even liquids, among other things. From around 2500 BC onward, the Sumerians wrote multiplication tables on clay tablets and dealt with geometrical exercises and division problems. The earliest traces of the Babylonian numerals also date back to this period. Babylonian mathematics was written using a sexagesimal (base-60) numeral system, illustrated in the short sketch that follows.
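To make base-60 place value concrete, here is a minimal Python sketch (added for illustration, not part of the original text); the digit reading 1;24,51,10 used below is the commonly cited value for the square root of 2 on the tablet YBC 7289 discussed later in this section, and the function name is an assumption made only for this example.

```python
def from_sexagesimal(digits, frac_places):
    """Convert a list of base-60 digits (most significant first) to a decimal value.

    frac_places says how many trailing digits lie after the "sexagesimal point";
    the Babylonians had no such point and left the scale to be inferred from context.
    """
    value = 0
    for d in digits:
        value = value * 60 + d   # shift one base-60 place and add the next digit
    return value / (60 ** frac_places)

# 1;24,51,10 -- the commonly cited reading of the root-2 approximation on YBC 7289.
approx = from_sexagesimal([1, 24, 51, 10], frac_places=3)
print(approx)     # 1.4142129629..., agreeing with
print(2 ** 0.5)   # 1.4142135623... to about five decimal places
```

The same positional idea, with a scale factor of 60 per place, underlies the minutes and seconds discussed next.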
From this derives the modern-day usage of 60 seconds in a minute, 60 minutes in an hour, and 360 (60 × 6) degrees in a circle, as well as the use of seconds and minutes of arc to denote fractions of a degree. The sexagesimal system is thought to have been adopted initially by Sumerian scribes because 60 can be evenly divided by 2, 3, 4, 5, 6, 10, 12, 15, 20 and 30; for scribes doling out the aforementioned grain allotments or recording weights of silver, so many divisors made calculation by hand considerably easier. However, it is also possible that the choice of base 60 was an ethno-linguistic phenomenon rather than a mathematical or practical decision, a question that may never be settled. Also, unlike the Egyptians, Greeks, and Romans, the Babylonians had a place-value system, where digits written in the left column represented larger values, much as in the decimal system. The power of the Babylonian notational system lay in that it could be used to represent fractions as easily as whole numbers; thus multiplying two numbers that contained fractions was no different from multiplying integers, similar to modern notation. The notational system of the Babylonians was the best of any civilization until the Renaissance, and its power allowed it to achieve remarkable computational accuracy; for example, the Babylonian tablet YBC 7289 gives an approximation of √2 accurate to five decimal places. The Babylonians lacked, however, an equivalent of the decimal point, and so the place value of a symbol often had to be inferred from the context. By the Seleucid period, the Babylonians had developed a zero symbol as a placeholder for empty positions; however, it was only used for intermediate positions. This zero sign does not appear in terminal positions; thus the Babylonians came close to, but did not develop, a true place-value system. Other topics covered by Babylonian mathematics include fractions, algebra, quadratic and cubic equations, and the calculation of regular numbers and their reciprocal pairs. The tablets also include multiplication tables and methods for solving linear, quadratic, and cubic equations, a remarkable achievement for the time. Tablets from the Old Babylonian period also contain the earliest known statement of the Pythagorean theorem. However, as with Egyptian mathematics, Babylonian mathematics shows no awareness of the difference between exact and approximate solutions, or the solvability of a problem, and most importantly, no explicit statement of the need for proofs or logical principles. == Egyptian == Egyptian mathematics refers to mathematics written in the Egyptian language. From the Hellenistic period, Greek replaced Egyptian as the written language of Egyptian scholars. Mathematical study in Egypt later continued under the Arab Empire as part of Islamic mathematics, when Arabic became the written language of Egyptian scholars. Archaeological evidence has suggested that the Ancient Egyptian counting system had origins in Sub-Saharan Africa. Fractal geometry designs, which are widespread among Sub-Saharan African cultures, are also found in Egyptian architecture and cosmological signs. The most extensive Egyptian mathematical text is the Rhind papyrus (sometimes also called the Ahmes Papyrus after its author), dated to c. 1650 BC but likely a copy of an older document from the Middle Kingdom of about 2000–1800 BC. It is an instruction manual for students in arithmetic and geometry.
In addition to giving area formulas and methods for multiplication, division and working with unit fractions, it also contains evidence of other mathematical knowledge, including composite and prime numbers; arithmetic, geometric and harmonic means; and simplistic understandings of both the Sieve of Eratosthenes and perfect number theory (namely, that of the number 6). It also shows how to solve first order linear equations as well as arithmetic and geometric series. Another significant Egyptian mathematical text is the Moscow papyrus, also from the Middle Kingdom period, dated to c. 1890 BC. It consists of what are today called word problems or story problems, which were apparently intended as entertainment. One problem is considered to be of particular importance because it gives a method for finding the volume of a frustum (truncated pyramid). Finally, the Berlin Papyrus 6619 (c. 1800 BC) shows that ancient Egyptians could solve a second-order algebraic equation. == Greek == Greek mathematics refers to the mathematics written in the Greek language from the time of Thales of Miletus (~600 BC) to the closure of the Academy of Athens in 529 AD. Greek mathematicians lived in cities spread over the entire Eastern Mediterranean, from Italy to North Africa, but were united by culture and language. Greek mathematics of the period following Alexander the Great is sometimes called Hellenistic mathematics. Greek mathematics was much more sophisticated than the mathematics that had been developed by earlier cultures. All surviving records of pre-Greek mathematics show the use of inductive reasoning, that is, repeated observations used to establish rules of thumb. Greek mathematicians, by contrast, used deductive reasoning. The Greeks used logic to derive conclusions from definitions and axioms, and used mathematical rigor to prove them. Greek mathematics is thought to have begun with Thales of Miletus (c. 624–c.546 BC) and Pythagoras of Samos (c. 582–c. 507 BC). Although the extent of the influence is disputed, they were probably inspired by Egyptian and Babylonian mathematics. According to legend, Pythagoras traveled to Egypt to learn mathematics, geometry, and astronomy from Egyptian priests. Thales used geometry to solve problems such as calculating the height of pyramids and the distance of ships from the shore. He is credited with the first use of deductive reasoning applied to geometry, by deriving four corollaries to Thales' Theorem. As a result, he has been hailed as the first true mathematician and the first known individual to whom a mathematical discovery has been attributed. Pythagoras established the Pythagorean School, whose doctrine it was that mathematics ruled the universe and whose motto was "All is number". It was the Pythagoreans who coined the term "mathematics", and with whom the study of mathematics for its own sake begins. The Pythagoreans are credited with the first proof of the Pythagorean theorem, though the statement of the theorem has a long history, and with the proof of the existence of irrational numbers. Although he was preceded by the Babylonians, Indians and the Chinese, the Neopythagorean mathematician Nicomachus (60–120 AD) provided one of the earliest Greco-Roman multiplication tables, whereas the oldest extant Greek multiplication table is found on a wax tablet dated to the 1st century AD (now found in the British Museum). 
The association of the Neopythagoreans with the Western invention of the multiplication table is evident in its later Medieval name: the mensa Pythagorica. Plato (428/427 BC – 348/347 BC) is important in the history of mathematics for inspiring and guiding others. His Platonic Academy, in Athens, became the mathematical center of the world in the 4th century BC, and it was from this school that the leading mathematicians of the day, such as Eudoxus of Cnidus (c. 390 - c. 340 BC), came. Plato also discussed the foundations of mathematics, clarified some of the definitions (e.g. that of a line as "breadthless length"). Eudoxus developed the method of exhaustion, a precursor of modern integration and a theory of ratios that avoided the problem of incommensurable magnitudes. The former allowed the calculations of areas and volumes of curvilinear figures, while the latter enabled subsequent geometers to make significant advances in geometry. Though he made no specific technical mathematical discoveries, Aristotle (384–c. 322 BC) contributed significantly to the development of mathematics by laying the foundations of logic. In the 3rd century BC, the premier center of mathematical education and research was the Musaeum of Alexandria. It was there that Euclid (c. 300 BC) taught, and wrote the Elements, widely considered the most successful and influential textbook of all time. The Elements introduced mathematical rigor through the axiomatic method and is the earliest example of the format still used in mathematics today, that of definition, axiom, theorem, and proof. Although most of the contents of the Elements were already known, Euclid arranged them into a single, coherent logical framework. The Elements was known to all educated people in the West up through the middle of the 20th century and its contents are still taught in geometry classes today. In addition to the familiar theorems of Euclidean geometry, the Elements was meant as an introductory textbook to all mathematical subjects of the time, such as number theory, algebra and solid geometry, including proofs that the square root of two is irrational and that there are infinitely many prime numbers. Euclid also wrote extensively on other subjects, such as conic sections, optics, spherical geometry, and mechanics, but only half of his writings survive. Archimedes (c. 287–212 BC) of Syracuse, widely considered the greatest mathematician of antiquity, used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, in a manner not too dissimilar from modern calculus. He also showed one could use the method of exhaustion to calculate the value of π with as much precision as desired, and obtained the most accurate value of π then known, 3+10/71 < π < 3+10/70. He also studied the spiral bearing his name, obtained formulas for the volumes of surfaces of revolution (paraboloid, ellipsoid, hyperboloid), and an ingenious method of exponentiation for expressing very large numbers. While he is also known for his contributions to physics and several advanced mechanical devices, Archimedes himself placed far greater value on the products of his thought and general mathematical principles. He regarded as his greatest achievement his finding of the surface area and volume of a sphere, which he obtained by proving these are 2/3 the surface area and volume of a cylinder circumscribing the sphere. Apollonius of Perga (c. 
262–190 BC) made significant advances to the study of conic sections, showing that one can obtain all three varieties of conic section by varying the angle of the plane that cuts a double-napped cone. He also coined the terminology in use today for conic sections, namely parabola ("place beside" or "comparison"), "ellipse" ("deficiency"), and "hyperbola" ("a throw beyond"). His work Conics is one of the best known and preserved mathematical works from antiquity, and in it he derives many theorems concerning conic sections that would prove invaluable to later mathematicians and astronomers studying planetary motion, such as Isaac Newton. While neither Apollonius nor any other Greek mathematicians made the leap to coordinate geometry, Apollonius' treatment of curves is in some ways similar to the modern treatment, and some of his work seems to anticipate the development of analytical geometry by Descartes some 1800 years later. Around the same time, Eratosthenes of Cyrene (c. 276–194 BC) devised the Sieve of Eratosthenes for finding prime numbers. The 3rd century BC is generally regarded as the "Golden Age" of Greek mathematics, with advances in pure mathematics henceforth in relative decline. Nevertheless, in the centuries that followed significant advances were made in applied mathematics, most notably trigonometry, largely to address the needs of astronomers. Hipparchus of Nicaea (c. 190–120 BC) is considered the founder of trigonometry for compiling the first known trigonometric table, and to him is also due the systematic use of the 360 degree circle. Heron of Alexandria (c. 10–70 AD) is credited with Heron's formula for finding the area of a scalene triangle and with being the first to recognize the possibility of negative numbers possessing square roots. Menelaus of Alexandria (c. 100 AD) pioneered spherical trigonometry through Menelaus' theorem. The most complete and influential trigonometric work of antiquity is the Almagest of Ptolemy (c. AD 90–168), a landmark astronomical treatise whose trigonometric tables would be used by astronomers for the next thousand years. Ptolemy is also credited with Ptolemy's theorem for deriving trigonometric quantities, and the most accurate value of π outside of China until the medieval period, 3.1416. Following a period of stagnation after Ptolemy, the period between 250 and 350 AD is sometimes referred to as the "Silver Age" of Greek mathematics. During this period, Diophantus made significant advances in algebra, particularly indeterminate analysis, which is also known as "Diophantine analysis". The study of Diophantine equations and Diophantine approximations is a significant area of research to this day. His main work was the Arithmetica, a collection of 150 algebraic problems dealing with exact solutions to determinate and indeterminate equations. The Arithmetica had a significant influence on later mathematicians, such as Pierre de Fermat, who arrived at his famous Last Theorem after trying to generalize a problem he had read in the Arithmetica (that of dividing a square into two squares). Diophantus also made significant advances in notation, the Arithmetica being the first instance of algebraic symbolism and syncopation. Among the last great Greek mathematicians is Pappus of Alexandria (4th century AD). He is known for his hexagon theorem and centroid theorem, as well as the Pappus configuration and Pappus graph. His Collection is a major source of knowledge on Greek mathematics as most of it has survived. 
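Stepping back briefly to the Sieve of Eratosthenes mentioned above: the procedure is simple enough to state in a few lines of modern code. The sketch below is only an illustration of the idea (the function name and the bound of 30 are arbitrary choices, not anything historical).

```python
# A minimal modern sketch of the sieve idea: repeatedly cross out the
# multiples of each remaining number; whatever is never crossed out is prime.

def sieve_of_eratosthenes(limit):
    """Return all primes up to and including `limit`."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False  # cross out every multiple of n
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve_of_eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```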
Pappus is considered the last major innovator in Greek mathematics, with subsequent work consisting mostly of commentaries on earlier work. The first woman mathematician recorded by history was Hypatia of Alexandria (AD 350–415). She succeeded her father (Theon of Alexandria) as Librarian at the Great Library and wrote many works on applied mathematics. Because of a political dispute, the Christian community in Alexandria had her stripped publicly and executed. Her death is sometimes taken as the end of the era of the Alexandrian Greek mathematics, although work did continue in Athens for another century with figures such as Proclus, Simplicius and Eutocius. Although Proclus and Simplicius were more philosophers than mathematicians, their commentaries on earlier works are valuable sources on Greek mathematics. The closure of the neo-Platonic Academy of Athens by the emperor Justinian in 529 AD is traditionally held as marking the end of the era of Greek mathematics, although the Greek tradition continued unbroken in the Byzantine empire with mathematicians such as Anthemius of Tralles and Isidore of Miletus, the architects of the Hagia Sophia. Nevertheless, Byzantine mathematics consisted mostly of commentaries, with little in the way of innovation, and the centers of mathematical innovation were to be found elsewhere by this time. == Roman == Although ethnic Greek mathematicians continued under the rule of the late Roman Republic and subsequent Roman Empire, there were no noteworthy native Latin mathematicians in comparison. Ancient Romans such as Cicero (106–43 BC), an influential Roman statesman who studied mathematics in Greece, believed that Roman surveyors and calculators were far more interested in applied mathematics than the theoretical mathematics and geometry that were prized by the Greeks. It is unclear if the Romans first derived their numerical system directly from the Greek precedent or from Etruscan numerals used by the Etruscan civilization centered in what is now Tuscany, central Italy. Using calculation, Romans were adept at both instigating and detecting financial fraud, as well as managing taxes for the treasury. Siculus Flaccus, one of the Roman gromatici (i.e. land surveyor), wrote the Categories of Fields, which aided Roman surveyors in measuring the surface areas of allotted lands and territories. Aside from managing trade and taxes, the Romans also regularly applied mathematics to solve problems in engineering, including the erection of architecture such as bridges, road-building, and preparation for military campaigns. Arts and crafts such as Roman mosaics, inspired by previous Greek designs, created illusionist geometric patterns and rich, detailed scenes that required precise measurements for each tessera tile, the opus tessellatum pieces on average measuring eight millimeters square and the finer opus vermiculatum pieces having an average surface of four millimeters square. The creation of the Roman calendar also necessitated basic mathematics. The first calendar allegedly dates back to 8th century BC during the Roman Kingdom and included 356 days plus a leap year every other year. In contrast, the lunar calendar of the Republican era contained 355 days, roughly ten-and-one-fourth days shorter than the solar year, a discrepancy that was solved by adding an extra month into the calendar after the 23rd of February. 
This calendar was supplanted by the Julian calendar, a solar calendar organized by Julius Caesar (100–44 BC) and devised by Sosigenes of Alexandria to include a leap day every four years in a 365-day cycle. This calendar, which contained an error of 11 minutes and 14 seconds, was later corrected by the Gregorian calendar organized by Pope Gregory XIII (r. 1572–1585), virtually the same solar calendar used in modern times as the international standard calendar. At roughly the same time, the Han Chinese and the Romans both invented the wheeled odometer device for measuring distances traveled, the Roman model first described by the Roman civil engineer and architect Vitruvius (c. 80 BC – c. 15 BC). The device was used at least until the reign of emperor Commodus (r. 177 – 192 AD), but its design seems to have been lost until experiments were made during the 15th century in Western Europe. Perhaps relying on similar gear-work and technology found in the Antikythera mechanism, the odometer of Vitruvius featured chariot wheels measuring 4 feet (1.2 m) in diameter turning four-hundred times in one Roman mile (roughly 4590 ft/1400 m). With each revolution, a pin-and-axle device engaged a 400-tooth cogwheel that turned a second gear responsible for dropping pebbles into a box, each pebble representing one mile traversed. == Chinese == An analysis of early Chinese mathematics has demonstrated its unique development compared to other parts of the world, leading scholars to assume an entirely independent development. The oldest extant mathematical text from China is the Zhoubi Suanjing (周髀算經), variously dated to between 1200 BC and 100 BC, though a date of about 300 BC during the Warring States Period appears reasonable. However, the Tsinghua Bamboo Slips, containing the earliest known decimal multiplication table (although ancient Babylonians had ones with a base of 60), is dated around 305 BC and is perhaps the oldest surviving mathematical text of China. Of particular note is the use in Chinese mathematics of a decimal positional notation system, the so-called "rod numerals" in which distinct ciphers were used for numbers between 1 and 10, and additional ciphers for powers of ten. Thus, the number 123 would be written using the symbol for "1", followed by the symbol for "100", then the symbol for "2" followed by the symbol for "10", followed by the symbol for "3". This was the most advanced number system in the world at the time, apparently in use several centuries before the common era and well before the development of the Indian numeral system. Rod numerals allowed the representation of numbers as large as desired and allowed calculations to be carried out on the suan pan, or Chinese abacus. The date of the invention of the suan pan is not certain, but the earliest written mention dates from AD 190, in Xu Yue's Supplementary Notes on the Art of Figures. The oldest extant work on geometry in China comes from the philosophical Mohist canon c. 330 BC, compiled by the followers of Mozi (470–390 BC). The Mo Jing described various aspects of many fields associated with physical science, and provided a small number of geometrical theorems as well. It also defined the concepts of circumference, diameter, radius, and volume. In 212 BC, the Emperor Qin Shi Huang commanded all books in the Qin Empire other than officially sanctioned ones be burned. This decree was not universally obeyed, but as a consequence of this order little is known about ancient Chinese mathematics before this date. 
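Returning briefly to the rod-numeral system described above: the positional principle can be illustrated with a short modern sketch. The decomposition below pairs each nonzero digit with its power of ten, mirroring the description of how 123 was written; it is only an illustration of place value, not a rendering of the actual rod-numeral glyphs.

```python
# Modern illustration of the positional principle behind rod numerals:
# pair each nonzero digit with its place value (the glyphs themselves are omitted).

def digit_power_pairs(n):
    """Return (digit, place value) pairs from the highest place down, skipping zeros."""
    pairs, place = [], 1
    while n > 0:
        n, digit = divmod(n, 10)
        if digit:
            pairs.append((digit, place))
        place *= 10
    return list(reversed(pairs))

print(digit_power_pairs(123))  # [(1, 100), (2, 10), (3, 1)]
```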
After the book burning of 212 BC, the Han dynasty (202 BC–220 AD) produced works of mathematics which presumably expanded on works that are now lost. The most important of these is The Nine Chapters on the Mathematical Art, the full title of which appeared by AD 179, but existed in part under other titles beforehand. It consists of 246 word problems involving agriculture, business, employment of geometry to figure height spans and dimension ratios for Chinese pagoda towers, engineering, surveying, and includes material on right triangles. It created mathematical proof for the Pythagorean theorem, and a mathematical formula for Gaussian elimination. The treatise also provides values of π, which Chinese mathematicians originally approximated as 3 until Liu Xin (d. 23 AD) provided a figure of 3.1457 and subsequently Zhang Heng (78–139) approximated pi as 3.1724, as well as 3.162 by taking the square root of 10. Liu Hui commented on the Nine Chapters in the 3rd century AD and gave a value of π accurate to 5 decimal places (i.e. 3.14159). Though more of a matter of computational stamina than theoretical insight, in the 5th century AD Zu Chongzhi computed the value of π to seven decimal places (between 3.1415926 and 3.1415927), which remained the most accurate value of π for almost the next 1000 years. He also established a method which would later be called Cavalieri's principle to find the volume of a sphere. The high-water mark of Chinese mathematics occurred in the 13th century during the latter half of the Song dynasty (960–1279), with the development of Chinese algebra. The most important text from that period is the Precious Mirror of the Four Elements by Zhu Shijie (1249–1314), dealing with the solution of simultaneous higher order algebraic equations using a method similar to Horner's method. The Precious Mirror also contains a diagram of Pascal's triangle with coefficients of binomial expansions through the eighth power, though both appear in Chinese works as early as 1100. The Chinese also made use of the complex combinatorial diagram known as the magic square and magic circles, described in ancient times and perfected by Yang Hui (AD 1238–1298). Even after European mathematics began to flourish during the Renaissance, European and Chinese mathematics were separate traditions, with significant Chinese mathematical output in decline from the 13th century onwards. Jesuit missionaries such as Matteo Ricci carried mathematical ideas back and forth between the two cultures from the 16th to 18th centuries, though at this point far more mathematical ideas were entering China than leaving. Japanese mathematics, Korean mathematics, and Vietnamese mathematics are traditionally viewed as stemming from Chinese mathematics and belonging to the Confucian-based East Asian cultural sphere. Korean and Japanese mathematics were heavily influenced by the algebraic works produced during China's Song dynasty, whereas Vietnamese mathematics was heavily indebted to popular works of China's Ming dynasty (1368–1644). For instance, although Vietnamese mathematical treatises were written in either Chinese or the native Vietnamese Chữ Nôm script, all of them followed the Chinese format of presenting a collection of problems with algorithms for solving them, followed by numerical answers. Mathematics in Vietnam and Korea were mostly associated with the professional court bureaucracy of mathematicians and astronomers, whereas in Japan it was more prevalent in the realm of private schools. 
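The inscribed-polygon idea behind the π computations of Liu Hui and Zu Chongzhi described earlier in this section can be sketched in modern terms. The code below is only a modern illustration of the side-doubling principle, under the assumption of a unit circle and a starting hexagon; it is not a reconstruction of the actual Chinese procedures.

```python
# Modern sketch of the inscribed-polygon ("side-doubling") idea used to bound pi.
# Assumes a unit circle and a starting regular hexagon (side length 1); this is
# an illustration of the principle, not Liu Hui's or Zu Chongzhi's actual method.

from math import sqrt

def pi_by_polygon_doubling(doublings):
    """Estimate pi as half the perimeter of an inscribed regular polygon."""
    sides, side_length = 6, 1.0
    for _ in range(doublings):
        side_length = sqrt(2 - sqrt(4 - side_length ** 2))  # side of the 2n-gon
        sides *= 2
    return sides * side_length / 2

print(pi_by_polygon_doubling(5))   # 192-gon:   ~3.14145
print(pi_by_polygon_doubling(11))  # 12288-gon: ~3.1415926
```

Each doubling of the number of sides reduces the error by roughly a factor of four, which is why reaching seven correct decimal places required polygons with thousands of sides.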
== Indian == The earliest civilization on the Indian subcontinent is the Indus Valley civilization (mature second phase: 2600 to 1900 BC) that flourished in the Indus river basin. Their cities were laid out with geometric regularity, but no known mathematical documents survive from this civilization. The oldest extant mathematical records from India are the Sulba Sutras (dated variously between the 8th century BC and the 2nd century AD), appendices to religious texts which give simple rules for constructing altars of various shapes, such as squares, rectangles, parallelograms, and others. As with Egypt, the preoccupation with temple functions points to an origin of mathematics in religious ritual. The Sulba Sutras give methods for constructing a circle with approximately the same area as a given square, which imply several different approximations of the value of π. In addition, they compute the square root of 2 to several decimal places, list Pythagorean triples, and give a statement of the Pythagorean theorem. All of these results are present in Babylonian mathematics, indicating Mesopotamian influence. It is not known to what extent the Sulba Sutras influenced later Indian mathematicians. As in China, there is a lack of continuity in Indian mathematics; significant advances are separated by long periods of inactivity. Pāṇini (c. 5th century BC) formulated the rules for Sanskrit grammar. His notation was similar to modern mathematical notation, and used metarules, transformations, and recursion. Pingala (roughly 3rd–1st centuries BC) in his treatise of prosody uses a device corresponding to a binary numeral system. His discussion of the combinatorics of meters corresponds to an elementary version of the binomial theorem. Pingala's work also contains the basic ideas of Fibonacci numbers (called mātrāmeru). The next significant mathematical documents from India after the Sulba Sutras are the Siddhantas, astronomical treatises from the 4th and 5th centuries AD (Gupta period) showing strong Hellenistic influence. They are significant in that they contain the first instance of trigonometric relations based on the half-chord, as is the case in modern trigonometry, rather than the full chord, as was the case in Ptolemaic trigonometry. Through a series of translation errors, the words "sine" and "cosine" derive from the Sanskrit "jiya" and "kojiya". Around 500 AD, Aryabhata wrote the Aryabhatiya, a slim volume, written in verse, intended to supplement the rules of calculation used in astronomy and mathematical mensuration, though with no feeling for logic or deductive methodology. It is in the Aryabhatiya that the decimal place-value system first appears. Several centuries later, the Muslim mathematician Abu Rayhan Biruni described the Aryabhatiya as a "mix of common pebbles and costly crystals". In the 7th century, Brahmagupta identified the Brahmagupta theorem, Brahmagupta's identity and Brahmagupta's formula, and for the first time, in Brahma-sphuta-siddhanta, he lucidly explained the use of zero as both a placeholder and decimal digit, and explained the Hindu–Arabic numeral system. It was from a translation of this Indian text on mathematics (c. 770) that Islamic mathematicians were introduced to this numeral system, which they adapted as Arabic numerals. Islamic scholars carried knowledge of this number system to Europe by the 12th century, and it has now displaced all older number systems throughout the world. 
Various symbol sets are used to represent numbers in the Hindu–Arabic numeral system, all of which evolved from the Brahmi numerals. Each of the roughly dozen major scripts of India has its own numeral glyphs. In the 10th century, Halayudha's commentary on Pingala's work contains a study of the Fibonacci sequence and Pascal's triangle, and describes the formation of a matrix. In the 12th century, Bhāskara II, who lived in southern India, wrote extensively on all then known branches of mathematics. His work contains mathematical objects equivalent or approximately equivalent to infinitesimals, the mean value theorem, and the derivative of the sine function, although he did not develop the notion of a derivative. In the 14th century, Narayana Pandita completed his Ganita Kaumudi. Also in the 14th century, Madhava of Sangamagrama, the founder of the Kerala School of Mathematics, found the Madhava–Leibniz series and obtained from it a transformed series, whose first 21 terms he used to compute the value of π as 3.14159265359. Madhava also found the Madhava–Gregory series to determine the arctangent, the Madhava–Newton power series to determine sine and cosine, and the Taylor approximation for sine and cosine functions. In the 16th century, Jyesthadeva consolidated many of the Kerala School's developments and theorems in the Yukti-bhāṣā. It has been argued that certain ideas of calculus, such as infinite series and Taylor series of some trigonometric functions, were transmitted to Europe in the 16th century via Jesuit missionaries and traders who were active around the ancient port of Muziris at the time and, as a result, directly influenced later European developments in analysis and calculus. However, other scholars argue that the Kerala School did not formulate a systematic theory of differentiation and integration, and that there is no direct evidence of their results being transmitted outside Kerala. == Islamic empires == The Islamic Empire established across the Middle East, Central Asia, North Africa, Iberia, and in parts of India in the 8th century made significant contributions towards mathematics. Although most Islamic texts on mathematics were written in Arabic, they were not all written by Arabs, since, much like the status of Greek in the Hellenistic world, Arabic was used as the written language of non-Arab scholars throughout the Islamic world at the time. In the 9th century, the Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī wrote an important book on the Hindu–Arabic numerals and one on methods for solving equations. His book On the Calculation with Hindu Numerals, written about 825, along with the work of Al-Kindi, was instrumental in spreading Indian mathematics and Indian numerals to the West. The word algorithm is derived from the Latinization of his name, Algoritmi, and the word algebra from the title of one of his works, Al-Kitāb al-mukhtaṣar fī hīsāb al-ğabr wa’l-muqābala (The Compendious Book on Calculation by Completion and Balancing). He gave an exhaustive explanation for the algebraic solution of quadratic equations with positive roots, and he was the first to teach algebra in an elementary form and for its own sake. He also discussed the fundamental method of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation, that is, the cancellation of like terms on opposite sides of the equation. This is the operation which al-Khwārizmī originally described as al-jabr.
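As a brief illustration of the kind of manipulation just described, consider completing the square on a quadratic with a positive root. The equation used below is the example traditionally associated with al-Khwārizmī's treatise, and the notation is entirely modern; it is included only to make the procedure concrete.

```latex
% Completing the square in modern notation; the equation x^2 + 10x = 39 is the
% example traditionally associated with al-Khwarizmi's treatise.
\[
x^{2} + 10x = 39
\;\Longrightarrow\;
x^{2} + 10x + 25 = 64
\;\Longrightarrow\;
(x + 5)^{2} = 64
\;\Longrightarrow\;
x + 5 = 8
\;\Longrightarrow\;
x = 3.
\]
```

Only the positive root is retained, in keeping with the treatment of quadratics with positive roots described above.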
His algebra was also no longer concerned "with a series of problems to be resolved, but an exposition which starts with primitive terms in which the combinations must give all possible prototypes for equations, which henceforward explicitly constitute the true object of study." He also studied an equation for its own sake and "in a generic manner, insofar as it does not simply emerge in the course of solving a problem, but is specifically called on to define an infinite class of problems." In Egypt, Abu Kamil extended algebra to the set of irrational numbers, accepting square roots and fourth roots as solutions and coefficients to quadratic equations. He also developed techniques used to solve three non-linear simultaneous equations with three unknown variables. One unique feature of his works was trying to find all the possible solutions to some of his problems, including one where he found 2676 solutions. His works formed an important foundation for the development of algebra and influenced later mathematicians, such as al-Karaji and Fibonacci. Further developments in algebra were made by Al-Karaji in his treatise al-Fakhri, where he extends the methodology to incorporate integer powers and integer roots of unknown quantities. Something close to a proof by mathematical induction appears in a book written by Al-Karaji around 1000 AD, who used it to prove the binomial theorem, Pascal's triangle, and the sum of integral cubes. The historian of mathematics, F. Woepcke, praised Al-Karaji for being "the first who introduced the theory of algebraic calculus." Also in the 10th century, Abul Wafa translated the works of Diophantus into Arabic. Ibn al-Haytham was the first mathematician to derive the formula for the sum of the fourth powers, using a method that is readily generalizable for determining the general formula for the sum of any integral powers. He performed an integration in order to find the volume of a paraboloid, and was able to generalize his result for the integrals of polynomials up to the fourth degree. He thus came close to finding a general formula for the integrals of polynomials, but he was not concerned with any polynomials higher than the fourth degree. In the late 11th century, Omar Khayyam wrote Discussions of the Difficulties in Euclid, a book about what he perceived as flaws in Euclid's Elements, especially the parallel postulate. He was also the first to find the general geometric solution to cubic equations. He was also very influential in calendar reform. In the 13th century, Nasir al-Din Tusi (Nasireddin) made advances in spherical trigonometry. He also wrote influential work on Euclid's parallel postulate. In the 15th century, Ghiyath al-Kashi computed the value of π to the 16th decimal place. Kashi also had an algorithm for calculating nth roots, which was a special case of the methods given many centuries later by Ruffini and Horner. Other achievements of Muslim mathematicians during this period include the addition of the decimal point notation to the Arabic numerals, the discovery of all the modern trigonometric functions besides the sine, al-Kindi's introduction of cryptanalysis and frequency analysis, the development of analytic geometry by Ibn al-Haytham, the beginning of algebraic geometry by Omar Khayyam and the development of an algebraic notation by al-Qalasādī. During the time of the Ottoman Empire and Safavid Empire from the 15th century, the development of Islamic mathematics became stagnant. 
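In modern notation, the two summation results mentioned earlier in this section (al-Karaji's sum of integral cubes and Ibn al-Haytham's sum of fourth powers) are the classical identities below; the Σ-notation is of course anachronistic and is used here only for clarity.

```latex
% Modern statements of the summation identities referred to above.
\[
\sum_{k=1}^{n} k^{3} = \left(\frac{n(n+1)}{2}\right)^{2},
\qquad
\sum_{k=1}^{n} k^{4} = \frac{n(n+1)(2n+1)\left(3n^{2}+3n-1\right)}{30}.
\]
```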
== Maya == In the Pre-Columbian Americas, the Maya civilization that flourished in Mexico and Central America during the 1st millennium AD developed a unique tradition of mathematics that, due to its geographic isolation, was entirely independent of existing European, Egyptian, and Asian mathematics. Maya numerals used a base of twenty, the vigesimal system, instead of a base of ten that forms the basis of the decimal system used by most modern cultures. The Maya used mathematics to create the Maya calendar as well as to predict astronomical phenomena in their native Maya astronomy. While the concept of zero had to be inferred in the mathematics of many contemporary cultures, the Maya developed a standard symbol for it. == Medieval European == Medieval European interest in mathematics was driven by concerns quite different from those of modern mathematicians. One driving element was the belief that mathematics provided the key to understanding the created order of nature, frequently justified by Plato's Timaeus and the biblical passage (in the Book of Wisdom) that God had ordered all things in measure, and number, and weight. Boethius provided a place for mathematics in the curriculum in the 6th century when he coined the term quadrivium to describe the study of arithmetic, geometry, astronomy, and music. He wrote De institutione arithmetica, a free translation from the Greek of Nicomachus's Introduction to Arithmetic; De institutione musica, also derived from Greek sources; and a series of excerpts from Euclid's Elements. His works were theoretical, rather than practical, and were the basis of mathematical study until the recovery of Greek and Arabic mathematical works. In the 12th century, European scholars traveled to Spain and Sicily seeking scientific Arabic texts, including al-Khwārizmī's The Compendious Book on Calculation by Completion and Balancing, translated into Latin by Robert of Chester, and the complete text of Euclid's Elements, translated in various versions by Adelard of Bath, Herman of Carinthia, and Gerard of Cremona. These and other new sources sparked a renewal of mathematics. Leonardo of Pisa, now known as Fibonacci, serendipitously learned about the Hindu–Arabic numerals on a trip to what is now Béjaïa, Algeria with his merchant father. (Europe was still using Roman numerals.) There, he observed a system of arithmetic (specifically algorism) which due to the positional notation of Hindu–Arabic numerals was much more efficient and greatly facilitated commerce. Leonardo wrote Liber Abaci in 1202 (updated in 1254) introducing the technique to Europe and beginning a long period of popularizing it. The book also brought to Europe what is now known as the Fibonacci sequence (known to Indian mathematicians for hundreds of years before that) which Fibonacci used as an unremarkable example. The 14th century saw the development of new mathematical concepts to investigate a wide range of problems. One important contribution was development of mathematics of local motion. Thomas Bradwardine proposed that speed (V) increases in arithmetic proportion as the ratio of force (F) to resistance (R) increases in geometric proportion. Bradwardine expressed this by a series of specific examples, but although the logarithm had not yet been conceived, we can express his conclusion anachronistically by writing: V = log (F/R). 
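Spelled out, the anachronistic formula above says that velocity varies with the logarithm of the ratio of force to resistance, so that doubling the velocity corresponds to squaring the ratio. A hedged modern restatement (the proportionality constant k is a modern addition, not part of Bradwardine's text) is:

```latex
% Modern restatement of Bradwardine's rule; k is a modern bookkeeping constant.
\[
V = k \log\!\left(\frac{F}{R}\right)
\quad\Longrightarrow\quad
V_{2} = 2V_{1} \;\iff\; \frac{F_{2}}{R_{2}} = \left(\frac{F_{1}}{R_{1}}\right)^{2}.
\]
```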
Bradwardine's analysis is an example of transferring a mathematical technique used by al-Kindi and Arnald of Villanova to quantify the nature of compound medicines to a different physical problem. One of the 14th-century Oxford Calculators, William Heytesbury, lacking differential calculus and the concept of limits, proposed to measure instantaneous speed "by the path that would be described by [a body] if... it were moved uniformly at the same degree of speed with which it is moved in that given instant". Heytesbury and others mathematically determined the distance covered by a body undergoing uniformly accelerated motion (today solved by integration), stating that "a moving body uniformly acquiring or losing that increment [of speed] will traverse in some given time a [distance] completely equal to that which it would traverse if it were moving continuously through the same time with the mean degree [of speed]". Nicole Oresme at the University of Paris and the Italian Giovanni di Casali independently provided graphical demonstrations of this relationship, asserting that the area under the line depicting the constant acceleration represented the total distance traveled. In a later mathematical commentary on Euclid's Elements, Oresme made a more detailed general analysis in which he demonstrated that a body will acquire in each successive increment of time an increment of any quality that increases as the odd numbers. Since Euclid had demonstrated that the sums of successive odd numbers are the square numbers, the total quality acquired by the body increases as the square of the time. == Renaissance == During the Renaissance, the development of mathematics and of accounting were intertwined. While there is no direct relationship between algebra and accounting, the teaching of the subjects and the books published were often intended for the children of merchants who were sent to reckoning schools (in Flanders and Germany) or abacus schools (known as abbaco in Italy), where they learned the skills useful for trade and commerce. There is probably no need for algebra in performing bookkeeping operations, but for complex bartering operations or the calculation of compound interest, a basic knowledge of arithmetic was mandatory and knowledge of algebra was very useful. Piero della Francesca (c. 1415–1492) wrote books on solid geometry and linear perspective, including De Prospectiva Pingendi (On Perspective for Painting), Trattato d’Abaco (Abacus Treatise), and De quinque corporibus regularibus (On the Five Regular Solids). Luca Pacioli's Summa de Arithmetica, Geometria, Proportioni et Proportionalità (Italian: "Review of Arithmetic, Geometry, Ratio and Proportion") was first printed and published in Venice in 1494. It included a 27-page treatise on bookkeeping, "Particularis de Computis et Scripturis" (Italian: "Details of Calculation and Recording"). It was written primarily for, and sold mainly to, merchants who used the book as a reference text, as a source of pleasure from the mathematical puzzles it contained, and to aid the education of their sons. In Summa Arithmetica, Pacioli introduced symbols for plus and minus for the first time in a printed book, symbols that became standard notation in Italian Renaissance mathematics. Summa Arithmetica was also the first known book printed in Italy to contain algebra. Pacioli obtained many of his ideas from Piero della Francesca, whom he plagiarized.
In Italy, during the first half of the 16th century, Scipione del Ferro and Niccolò Fontana Tartaglia discovered solutions for cubic equations. Gerolamo Cardano published them in his 1545 book Ars Magna, together with a solution for the quartic equations, discovered by his student Lodovico Ferrari. In 1572 Rafael Bombelli published his L'Algebra in which he showed how to deal with the imaginary quantities that could appear in Cardano's formula for solving cubic equations. Simon Stevin's De Thiende ('the art of tenths'), first published in Dutch in 1585, contained the first systematic treatment of decimal notation in Europe, which influenced all later work on the real number system. Driven by the demands of navigation and the growing need for accurate maps of large areas, trigonometry grew to be a major branch of mathematics. Bartholomaeus Pitiscus was the first to use the word, publishing his Trigonometria in 1595. Regiomontanus's table of sines and cosines was published in 1533. During the Renaissance the desire of artists to represent the natural world realistically, together with the rediscovered philosophy of the Greeks, led artists to study mathematics. They were also the engineers and architects of that time, and so had need of mathematics in any case. The art of painting in perspective, and the developments in geometry that were involved, were studied intensely. == Mathematics during the Scientific Revolution == === 17th century === The 17th century saw an unprecedented increase of mathematical and scientific ideas across Europe. Tycho Brahe had gathered a large quantity of mathematical data describing the positions of the planets in the sky. By his position as Brahe's assistant, Johannes Kepler was first exposed to and seriously interacted with the topic of planetary motion. Kepler's calculations were made simpler by the contemporaneous invention of logarithms by John Napier and Jost Bürgi. Kepler succeeded in formulating mathematical laws of planetary motion. The analytic geometry developed by René Descartes (1596–1650) allowed those orbits to be plotted on a graph, in Cartesian coordinates. Building on earlier work by many predecessors, Isaac Newton discovered the laws of physics that explain Kepler's Laws, and brought together the concepts now known as calculus. Independently, Gottfried Wilhelm Leibniz, developed calculus and much of the calculus notation still in use today. He also refined the binary number system, which is the foundation of nearly all digital (electronic, solid-state, discrete logic) computers. Science and mathematics had become an international endeavor, which would soon spread over the entire world. In addition to the application of mathematics to the studies of the heavens, applied mathematics began to expand into new areas, with the correspondence of Pierre de Fermat and Blaise Pascal. Pascal and Fermat set the groundwork for the investigations of probability theory and the corresponding rules of combinatorics in their discussions over a game of gambling. Pascal, with his wager, attempted to use the newly developing probability theory to argue for a life devoted to religion, on the grounds that even if the probability of success was small, the rewards were infinite. In some sense, this foreshadowed the development of utility theory in the 18th and 19th centuries. === 18th century === The most influential mathematician of the 18th century was arguably Leonhard Euler (1707–1783). 
His contributions range from founding the study of graph theory with the Seven Bridges of Königsberg problem to standardizing many modern mathematical terms and notations. For example, he named the square root of minus 1 with the symbol i, and he popularized the use of the Greek letter π {\displaystyle \pi } to stand for the ratio of a circle's circumference to its diameter. He made numerous contributions to the study of topology, graph theory, calculus, combinatorics, and complex analysis, as evidenced by the multitude of theorems and notations named for him. Other important European mathematicians of the 18th century included Joseph Louis Lagrange, who did pioneering work in number theory, algebra, differential calculus, and the calculus of variations, and Pierre-Simon Laplace, who, in the age of Napoleon, did important work on the foundations of celestial mechanics and on statistics. == Modern == === 19th century === Throughout the 19th century mathematics became increasingly abstract. Carl Friedrich Gauss (1777–1855) epitomizes this trend. He did revolutionary work on functions of complex variables, in geometry, and on the convergence of series, leaving aside his many contributions to science. He also gave the first satisfactory proofs of the fundamental theorem of algebra and of the quadratic reciprocity law. This century saw the development of the two forms of non-Euclidean geometry, where the parallel postulate of Euclidean geometry no longer holds. The Russian mathematician Nikolai Ivanovich Lobachevsky and his rival, the Hungarian mathematician János Bolyai, independently defined and studied hyperbolic geometry, where uniqueness of parallels no longer holds. In this geometry the sum of angles in a triangle add up to less than 180°. Elliptic geometry was developed later in the 19th century by the German mathematician Bernhard Riemann; here no parallel can be found and the angles in a triangle add up to more than 180°. Riemann also developed Riemannian geometry, which unifies and vastly generalizes the three types of geometry, and he defined the concept of a manifold, which generalizes the ideas of curves and surfaces, and set the mathematical foundations for the theory of general relativity. The 19th century saw the beginning of a great deal of abstract algebra. Hermann Grassmann in Germany gave a first version of vector spaces, William Rowan Hamilton in Ireland developed noncommutative algebra. The British mathematician George Boole devised an algebra that soon evolved into what is now called Boolean algebra, in which the only numbers were 0 and 1. Boolean algebra is the starting point of mathematical logic and has important applications in electrical engineering and computer science. Augustin-Louis Cauchy, Bernhard Riemann, and Karl Weierstrass reformulated the calculus in a more rigorous fashion. Also, for the first time, the limits of mathematics were explored. Niels Henrik Abel, a Norwegian, and Évariste Galois, a Frenchman, proved that there is no general algebraic method for solving polynomial equations of degree greater than four (Abel–Ruffini theorem). Other 19th-century mathematicians used this in their proofs that straight edge and compass alone are not sufficient to trisect an arbitrary angle, to construct the side of a cube twice the volume of a given cube, nor to construct a square equal in area to a given circle. Mathematicians had vainly attempted to solve all of these problems since the time of the ancient Greeks. 
On the other hand, the limitation of three dimensions in geometry was surpassed in the 19th century through considerations of parameter space and hypercomplex numbers. Abel and Galois's investigations into the solutions of various polynomial equations laid the groundwork for further developments of group theory, and the associated fields of abstract algebra. In the 20th century, physicists and other scientists came to see group theory as the ideal way to study symmetry. In the later 19th century, Georg Cantor established the first foundations of set theory, which enabled the rigorous treatment of the notion of infinity and has become the common language of nearly all mathematics. Cantor's set theory, and the rise of mathematical logic in the hands of Peano, L.E.J. Brouwer, David Hilbert, Bertrand Russell, and A.N. Whitehead, initiated a long-running debate on the foundations of mathematics. The 19th century saw the founding of a number of national mathematical societies: the London Mathematical Society in 1865, the Société Mathématique de France in 1872, the Edinburgh Mathematical Society in 1883, the Circolo Matematico di Palermo in 1884, and the American Mathematical Society in 1888. The first international, special-interest society, the Quaternion Society, was formed in 1899, in the context of a vector controversy. In 1897, Kurt Hensel introduced p-adic numbers. === 20th century === The 20th century saw mathematics become a major profession. By the end of the century, thousands of new Ph.D.s in mathematics were being awarded every year, and jobs were available in both teaching and industry. An effort to catalogue the areas and applications of mathematics was undertaken in Klein's encyclopedia. In a 1900 speech to the International Congress of Mathematicians, David Hilbert set out a list of 23 unsolved problems in mathematics. These problems, spanning many areas of mathematics, formed a central focus for much of 20th-century mathematics. Today, 10 have been solved, 7 are partially solved, and 2 are still open. The remaining 4 are too loosely formulated to be stated as solved or not. Notable historical conjectures were finally proven. In 1976, Wolfgang Haken and Kenneth Appel proved the four color theorem, controversial at the time for the use of a computer to do so. Andrew Wiles, building on the work of others, proved Fermat's Last Theorem in 1995. Paul Cohen and Kurt Gödel proved that the continuum hypothesis is independent of (could neither be proved nor disproved from) the standard axioms of set theory. In 1998, Thomas Callister Hales proved the Kepler conjecture, also using a computer. Mathematical collaborations of unprecedented size and scope took place. An example is the classification of finite simple groups (also called the "enormous theorem"), whose proof between 1955 and 2004 required 500-odd journal articles by about 100 authors, filling tens of thousands of pages. A group of French mathematicians, including Jean Dieudonné and André Weil, publishing under the pseudonym "Nicolas Bourbaki", attempted to exposit all of known mathematics as a coherent, rigorous whole. The resulting several dozen volumes have had a controversial influence on mathematical education. Differential geometry came into its own when Albert Einstein used it in general relativity. Entirely new areas of mathematics such as mathematical logic, topology, and John von Neumann's game theory changed the kinds of questions that could be answered by mathematical methods.
All kinds of structures were abstracted using axioms and given names like metric spaces, topological spaces, etc. As mathematicians are inclined to do, they then abstracted the concept of an abstract structure itself, which led to category theory. Grothendieck and Serre recast algebraic geometry using sheaf theory. Large advances were made in the qualitative study of dynamical systems that Poincaré had begun in the 1890s. Measure theory was developed in the late 19th and early 20th centuries. Applications of measures include the Lebesgue integral, Kolmogorov's axiomatisation of probability theory, and ergodic theory. Knot theory greatly expanded. Quantum mechanics led to the development of functional analysis, a branch of mathematics greatly advanced by Stefan Banach and his collaborators, who formed the Lwów School of Mathematics. Other new areas include Laurent Schwartz's distribution theory, fixed point theory, singularity theory, René Thom's catastrophe theory, model theory, and Mandelbrot's fractals. Lie theory, with its Lie groups and Lie algebras, became one of the major areas of study. Non-standard analysis, introduced by Abraham Robinson, rehabilitated the infinitesimal approach to calculus, which had fallen into disrepute in favour of the theory of limits, by extending the field of real numbers to the hyperreal numbers, which include infinitesimal and infinite quantities. An even larger number system, the surreal numbers, was discovered by John Horton Conway in connection with combinatorial games. The development and continual improvement of computers, at first mechanical analog machines and then digital electronic machines, allowed industry to deal with larger and larger amounts of data to facilitate mass production, distribution, and communication, and new areas of mathematics were developed to deal with this: Alan Turing's computability theory; complexity theory; Derrick Henry Lehmer's use of ENIAC to further number theory and the Lucas–Lehmer primality test; Rózsa Péter's recursive function theory; Claude Shannon's information theory; signal processing; data analysis; optimization and other areas of operations research. In the preceding centuries, much mathematical focus was on calculus and continuous functions, but the rise of computing and communication networks led to an increasing importance of discrete concepts and the expansion of combinatorics, including graph theory. The speed and data-processing abilities of computers also enabled the handling of mathematical problems that were too time-consuming to deal with by pencil-and-paper calculations, leading to areas such as numerical analysis and symbolic computation. Some of the most important methods and algorithms of the 20th century are: the simplex algorithm, the fast Fourier transform, error-correcting codes, the Kalman filter from control theory, and the RSA algorithm of public-key cryptography. At the same time, deep insights were gained into the limitations of mathematics. In 1929 and 1930, it was proved that the truth or falsity of all statements formulated about the natural numbers plus either addition or multiplication (but not both) was decidable, i.e. could be determined by some algorithm. In 1931, Kurt Gödel found that this was not the case for the natural numbers plus both addition and multiplication; this system, known as Peano arithmetic, was in fact incomplete. (Peano arithmetic is adequate for a good deal of number theory, including the notion of prime number.)
A consequence of Gödel's two incompleteness theorems is that in any mathematical system that includes Peano arithmetic (including all of analysis and geometry), truth necessarily outruns proof, i.e. there are true statements that cannot be proved within the system. Hence mathematics cannot be reduced to mathematical logic, and David Hilbert's dream of making all of mathematics complete and consistent needed to be reformulated. One of the more colorful figures in 20th-century mathematics was Srinivasa Aiyangar Ramanujan (1887–1920), an Indian autodidact who conjectured or proved over 3000 theorems, including properties of highly composite numbers, the partition function and its asymptotics, and mock theta functions. He also made major investigations in the areas of gamma functions, modular forms, divergent series, hypergeometric series and prime number theory. Paul Erdős published more papers than any other mathematician in history, working with hundreds of collaborators. Mathematicians have a game equivalent to the Kevin Bacon Game, which leads to the Erdős number of a mathematician. This describes the "collaborative distance" between a person and Erdős, as measured by joint authorship of mathematical papers. Emmy Noether has been described by many as the most important woman in the history of mathematics. She studied the theories of rings, fields, and algebras. As in most areas of study, the explosion of knowledge in the scientific age has led to specialization: by the end of the century, there were hundreds of specialized areas in mathematics, and the Mathematics Subject Classification was dozens of pages long. More and more mathematical journals were published and, by the end of the century, the development of the World Wide Web led to online publishing. === 21st century === In 2000, the Clay Mathematics Institute announced the seven Millennium Prize Problems. In 2003 the Poincaré conjecture was solved by Grigori Perelman (who declined to accept an award, as he was critical of the mathematics establishment). Most mathematical journals now have online versions as well as print versions, and many online-only journals are launched. There is an increasing drive toward open access publishing, first made popular by arXiv. == Future == There are many observable trends in mathematics, the most notable being that the subject is growing ever larger as computers are ever more important and powerful; the volume of data being produced by science and industry, facilitated by computers, continues expanding exponentially. As a result, there is a corresponding growth in the demand for mathematics to help process and understand this big data. Math science careers are also expected to continue to grow, with the US Bureau of Labor Statistics estimating (in 2018) that "employment of mathematical science occupations is projected to grow 27.9 percent from 2016 to 2026." == See also == == Notes == == References == de Crespigny, Rafe (2007), A Biographical Dictionary of Later Han to the Three Kingdoms (23–220 AD), Leiden: Koninklijke Brill, ISBN 978-90-04-15605-0. Berggren, Lennart; Borwein, Jonathan M.; Borwein, Peter B. (2004), Pi: A Source Book, New York: Springer, ISBN 978-0-387-20571-7 Boyer, C.B. (1991) [1989], A History of Mathematics (2nd ed.), New York: Wiley, ISBN 978-0-471-54397-8 Cuomo, Serafina (2001), Ancient Mathematics, London: Routledge, ISBN 978-0-415-16495-5 Goodman, Michael, K.J. 
(2016), An Introduction to the Early Development of Mathematics, Hoboken: Wiley, ISBN 978-1-119-10497-1 Gullberg, Jan (1997), Mathematics: From the Birth of Numbers, New York: W.W. Norton and Company, ISBN 978-0-393-04002-9 Joyce, Hetty (July 1979), "Form, Function and Technique in the Pavements of Delos and Pompeii", American Journal of Archaeology, 83 (3): 253–63, doi:10.2307/505056, JSTOR 505056, S2CID 191394716. Katz, Victor J. (1998), A History of Mathematics: An Introduction (2nd ed.), Addison-Wesley, ISBN 978-0-321-01618-8 Katz, Victor J. (2007), The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook, Princeton, NJ: Princeton University Press, ISBN 978-0-691-11485-9 Needham, Joseph; Wang, Ling (1995) [1959], Science and Civilization in China: Mathematics and the Sciences of the Heavens and the Earth, vol. 3, Cambridge: Cambridge University Press, ISBN 978-0-521-05801-8 Needham, Joseph; Wang, Ling (2000) [1965], Science and Civilization in China: Physics and Physical Technology: Mechanical Engineering, vol. 4 (reprint ed.), Cambridge: Cambridge University Press, ISBN 978-0-521-05803-2 Sleeswyk, Andre (October 1981), "Vitruvius' odometer", Scientific American, 252 (4): 188–200, Bibcode:1981SciAm.245d.188S, doi:10.1038/scientificamerican1081-188. Straffin, Philip D. (1998), "Liu Hui and the First Golden Age of Chinese Mathematics", Mathematics Magazine, 71 (3): 163–81, doi:10.1080/0025570X.1998.11996627 Tang, Birgit (2005), Delos, Carthage, Ampurias: the Housing of Three Mediterranean Trading Centres, Rome: L'Erma di Bretschneider (Accademia di Danimarca), ISBN 978-88-8265-305-7. Volkov, Alexei (2009), "Mathematics and Mathematics Education in Traditional Vietnam", in Robson, Eleanor; Stedall, Jacqueline (eds.), The Oxford Handbook of the History of Mathematics, Oxford: Oxford University Press, pp. 153–76, ISBN 978-0-19-921312-2 == Further reading == === General === Aaboe, Asger (1964). Episodes from the Early History of Mathematics. New York: Random House. Bell, E. T. (1937). Men of Mathematics. Simon and Schuster. Burton, David M. (1997). The History of Mathematics: An Introduction. McGraw Hill. Grattan-Guinness, Ivor (2003). Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences. The Johns Hopkins University Press. ISBN 978-0-8018-7397-3. Kline, Morris. Mathematical Thought from Ancient to Modern Times. Struik, D. J. (1987). A Concise History of Mathematics, fourth revised edition. Dover Publications, New York. === Books on a specific period === Gillings, Richard J. (1972). Mathematics in the Time of the Pharaohs. Cambridge, MA: MIT Press. Heath, Thomas Little (1921). A History of Greek Mathematics. Oxford: Clarendon Press. van der Waerden, B. L. (1983). Geometry and Algebra in Ancient Civilizations, Springer, ISBN 0-387-12159-5. === Books on a specific topic === Corry, Leo (2015), A Brief History of Numbers, Oxford University Press, ISBN 978-0198702597 Hoffman, Paul (1998). The Man Who Loved Only Numbers: The Story of Paul Erdős and the Search for Mathematical Truth. Hyperion. ISBN 0-7868-6362-5. Menninger, Karl W. (1969). Number Words and Number Symbols: A Cultural History of Numbers. MIT Press. ISBN 978-0-262-13040-0. Stigler, Stephen M. (1990). The History of Statistics: The Measurement of Uncertainty before 1900. Belknap Press. ISBN 978-0-674-40341-3. == External links == === Documentaries === BBC (2008). The Story of Maths. 
Renaissance Mathematics, BBC Radio 4 discussion with Robert Kaplan, Jim Bennett & Jackie Stedall (In Our Time, Jun 2, 2005) === Educational material === MacTutor History of Mathematics archive (John J. O'Connor and Edmund F. Robertson; University of St Andrews, Scotland). An award-winning website containing detailed biographies on many historical and contemporary mathematicians, as well as information on notable curves and various topics in the history of mathematics. History of Mathematics Home Page (David E. Joyce; Clark University). Articles on various topics in the history of mathematics with an extensive bibliography. The History of Mathematics (David R. Wilkins; Trinity College, Dublin). Collections of material on the mathematics between the 17th and 19th century. Earliest Known Uses of Some of the Words of Mathematics (Jeff Miller). Contains information on the earliest known uses of terms used in mathematics. Earliest Uses of Various Mathematical Symbols (Jeff Miller). Contains information on the history of mathematical notations. Mathematical Words: Origins and Sources (John Aldrich, University of Southampton). Discusses the origins of the modern mathematical word stock. Biographies of Women Mathematicians (Larry Riddle; Agnes Scott College). Mathematicians of the African Diaspora (Scott W. Williams; University at Buffalo). Notes for MAA minicourse: teaching a course in the history of mathematics. (2009) (V. Frederick Rickey & Victor J. Katz). Ancient Rome: The Odometer of Vitruvius. Pictorial (moving) reconstruction of Vitruvius' Roman odometer. === Bibliographies === A Bibliography of Collected Works and Correspondence of Mathematicians archive dated 2007/3/17 (Steven W. Rockey; Cornell University Library). === Organizations === International Commission for the History of Mathematics === Journals === Historia Mathematica Convergence Archived 2020-09-08 at the Wayback Machine, the Mathematical Association of America's online Math History Magazine History of Mathematics Archived 2006-10-04 at the Wayback Machine Math Archives (University of Tennessee, Knoxville) History/Biography The Math Forum (Drexel University) History of Mathematics (Courtright Memorial Library). History of Mathematics Web Sites Archived 2009-05-25 at the Wayback Machine (David Calvis; Baldwin-Wallace College) Historia de las Matemáticas (Universidad de La Laguna) História da Matemática (Universidade de Coimbra) Using History in Math Class Mathematical Resources: History of Mathematics (Bruno Kevius) History of Mathematics (Roberta Tucci)
|
https://en.wikipedia.org/wiki/History_of_mathematics
|
Recreational mathematics is mathematics carried out for recreation (entertainment) rather than as a strictly research-and-application-based professional activity or as a part of a student's formal education. Although it is not necessarily limited to being an endeavor for amateurs, many topics in this field require no knowledge of advanced mathematics. Recreational mathematics involves mathematical puzzles and games, often appealing to children and untrained adults and inspiring their further study of the subject. The Mathematical Association of America (MAA) includes recreational mathematics as one of its seventeen Special Interest Groups, commenting: Recreational mathematics is not easily defined because it is more than mathematics done as a diversion or playing games that involve mathematics. Recreational mathematics is inspired by deep ideas that are hidden in puzzles, games, and other forms of play. The aim of the SIGMAA on Recreational Mathematics (SIGMAA-Rec) is to bring together enthusiasts and researchers in the myriad of topics that fall under recreational math. We will share results and ideas from our work, show that real, deep mathematics is there awaiting those who look, and welcome those who wish to become involved in this branch of mathematics. Mathematical competitions (such as those sponsored by mathematical associations) are also categorized under recreational mathematics. == Topics == Some of the more well-known topics in recreational mathematics are Rubik's Cubes, magic squares, fractals, logic puzzles and mathematical chess problems, but this area of mathematics includes the aesthetics and culture of mathematics, peculiar or amusing stories and coincidences about mathematics, and the personal lives of mathematicians. === Mathematical games === Mathematical games are multiplayer games whose rules, strategies, and outcomes can be studied and explained using mathematics. The players of the game may not need to use explicit mathematics in order to play mathematical games. For example, Mancala is studied in the mathematical field of combinatorial game theory, but no mathematics is necessary in order to play it. === Mathematical puzzles === Mathematical puzzles require mathematics in order to solve them. They have specific rules, as do multiplayer games, but mathematical puzzles do not usually involve competition between two or more players. Instead, in order to solve such a puzzle, the solver must find a solution that satisfies the given conditions. Logic puzzles and classical ciphers are common examples of mathematical puzzles. Cellular automata and fractals are also considered mathematical puzzles, even though the solver only interacts with them by providing a set of initial conditions. As they often include or require game-like features or thinking, mathematical puzzles are sometimes also called mathematical games. === Mathemagics === Magic tricks based on mathematical principles can produce self-working but surprising effects. For instance, a mathemagician might use the combinatorial properties of a deck of playing cards to guess a volunteer's selected card, or Hamming codes to identify whether a volunteer is lying. === Other activities === Other curiosities and pastimes of non-trivial mathematical interest include: patterns in juggling the sometimes profound algorithmic and geometrical characteristics of origami patterns and process in creating string figures such as Cat's cradles, etc. 
fractal-generating software == Online blogs, podcasts, and YouTube channels == There are many blogs and audio or video series devoted to recreational mathematics. Among the notable are the following: Cut-the-knot by Alexander Bogomolny Futility Closet by Greg Ross Mathologer by Burkard Polster The videos of Vi Hart Stand-Up Maths by Matt Parker Numberphile by Brady Haran == Publications == The journal Eureka, published by the mathematical society of the University of Cambridge, is one of the oldest publications in recreational mathematics. It has been published 60 times since 1939 and authors have included many famous mathematicians and scientists such as Martin Gardner, John Conway, Roger Penrose, Ian Stewart, Timothy Gowers, Stephen Hawking and Paul Dirac. The Journal of Recreational Mathematics was the largest publication on this topic from its founding in 1968 until 2014, when it ceased publication. Mathematical Games (1956 to 1981) was the title of a long-running Scientific American column on recreational mathematics by Martin Gardner. He inspired several generations of mathematicians and scientists through his interest in mathematical recreations. "Mathematical Games" was succeeded by 25 "Metamagical Themas" columns (1981 to 1983), a similarly distinguished, but shorter-running, column by Douglas Hofstadter, then by 78 "Mathematical Recreations" and "Computer Recreations" columns (1984 to 1991) by A. K. Dewdney, then by 96 "Mathematical Recreations" columns (1991 to 2001) by Ian Stewart, and most recently "Puzzling Adventures" by Dennis Shasha. The Recreational Mathematics Magazine, published by the Ludus Association, is electronic and semiannual, and focuses on results that provide amusing, witty but nonetheless original and scientifically profound mathematical nuggets. Issues are published at the exact moments of the equinoxes. == People == Prominent practitioners and advocates of recreational mathematics have included professional and amateur mathematicians: == See also == List of recreational number theory topics Mathematics of paper folding (origami) == References == == Further reading == W. W. Rouse Ball and H.S.M. Coxeter (1987). Mathematical Recreations and Essays, Thirteenth Edition, Dover. ISBN 0-486-25357-0. Henry E. Dudeney (1967). 536 Puzzles and Curious Problems. Charles Scribner's Sons. ISBN 0-684-71755-7. Sam Loyd (1959, 2 vols.). In Martin Gardner: The Mathematical Puzzles of Sam Loyd. Dover. OCLC 5720955. Raymond M. Smullyan (1991). The Lady or the Tiger? And Other Logic Puzzles. Oxford University Press. ISBN 0-19-286136-0. == External links == Recreational Mathematics from MathWorld at Wolfram Research
|
https://en.wikipedia.org/wiki/Recreational_mathematics
|
Applied mathematics is the application of mathematical methods by different fields such as physics, engineering, medicine, biology, finance, business, computer science, and industry. Thus, applied mathematics is a combination of mathematical science and specialized knowledge. The term "applied mathematics" also describes the professional specialty in which mathematicians work on practical problems by formulating and studying mathematical models. In the past, practical applications have motivated the development of mathematical theories, which then became the subject of study in pure mathematics, where abstract concepts are studied for their own sake. The activity of applied mathematics is thus intimately connected with research in pure mathematics. == History == Historically, applied mathematics consisted principally of applied analysis, most notably differential equations; approximation theory (broadly construed, to include representations, asymptotic methods, variational methods, and numerical analysis); and applied probability. These areas of mathematics related directly to the development of Newtonian physics, and in fact, the distinction between mathematicians and physicists was not sharply drawn before the mid-19th century. This history left a pedagogical legacy in the United States: until the early 20th century, subjects such as classical mechanics were often taught in applied mathematics departments at American universities rather than in physics departments, and fluid mechanics may still be taught in applied mathematics departments. Engineering and computer science departments have traditionally made use of applied mathematics. As time passed, applied mathematics grew alongside the advancement of science and technology. In the modern era, the application of mathematics in fields such as science, economics, and technology became deeper and more pervasive. The development of computers and other technologies enabled a more detailed study and application of mathematical concepts in many fields. Today, applied mathematics remains crucial for societal and technological advancement: it guides the development of new technologies, supports economic progress, and addresses challenges across the sciences and industry. The history of applied mathematics continually demonstrates the importance of mathematics in human progress. == Divisions == Today, the term "applied mathematics" is used in a broader sense. It includes the classical areas noted above as well as other areas that have become increasingly important in applications. Even fields such as number theory that are part of pure mathematics are now important in applications (such as cryptography), though they are not generally considered to be part of the field of applied mathematics per se. There is no consensus as to what the various branches of applied mathematics are. Such categorizations are made difficult by the way mathematics and science change over time, and also by the way universities organize departments, courses, and degrees. Many mathematicians distinguish between "applied mathematics", which is concerned with mathematical methods, and the "applications of mathematics" within science and engineering. A biologist using a population model and applying known mathematics would not be doing applied mathematics, but rather using it; however, mathematical biologists have posed problems that have stimulated the growth of pure mathematics. 
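To make the distinction concrete, consider the biologist's population model just mentioned. The sketch below simulates the classic logistic growth model with Euler's method; it is only an illustrative example, and the function name and parameter values are arbitrary choices rather than anything drawn from a particular study.

```python
# Minimal sketch: simulating logistic growth dP/dt = r*P*(1 - P/K)
# with Euler time-stepping. Parameter values are arbitrary illustrations.

def simulate_logistic(p0: float, r: float, k: float, dt: float, steps: int) -> list[float]:
    """Return the population trajectory under Euler's method."""
    population = [p0]
    for _ in range(steps):
        p = population[-1]
        population.append(p + dt * r * p * (1 - p / k))
    return population

trajectory = simulate_logistic(p0=10.0, r=0.3, k=1000.0, dt=0.1, steps=300)
print(round(trajectory[-1], 1))  # approaches the carrying capacity K = 1000
```

In the terminology above, writing and running such a simulation is an application of known mathematics, whereas questions about the convergence and stability of the numerical scheme belong to numerical analysis, one of the classical areas of applied mathematics.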
Mathematicians such as Poincaré and Arnold denied the existence of "applied mathematics" and claimed that there are only "applications of mathematics." Similarly, non-mathematicians blend applied mathematics and applications of mathematics. The use and development of mathematics to solve industrial problems is also called "industrial mathematics". The success of modern numerical mathematical methods and software has led to the emergence of computational mathematics, computational science, and computational engineering, which use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary. === Applicable mathematics === Sometimes, the term applicable mathematics is used to distinguish between the traditional applied mathematics that developed alongside physics and the many areas of mathematics that are applicable to real-world problems today, although there is no consensus as to a precise definition. Mathematicians often distinguish between "applied mathematics" on the one hand, and the "applications of mathematics" or "applicable mathematics" both within and outside of science and engineering, on the other. Some mathematicians emphasize the term applicable mathematics to separate or delineate the traditional applied areas from new applications arising from fields that were previously seen as pure mathematics. For example, from this viewpoint, an ecologist or geographer using population models and applying known mathematics would not be doing applied, but rather applicable, mathematics. Such descriptions can lead to applicable mathematics being seen as a collection of mathematical methods such as real analysis, linear algebra, mathematical modelling, optimisation, combinatorics, probability and statistics, which are useful in areas outside traditional mathematics and not specific to mathematical physics. Other authors prefer describing applicable mathematics as a union of "new" mathematical applications with the traditional fields of applied mathematics. With this outlook, the terms applied mathematics and applicable mathematics are thus interchangeable. == Utility == Historically, mathematics was most important in the natural sciences and engineering. However, since World War II, fields outside the physical sciences have spawned the creation of new areas of mathematics, such as game theory and social choice theory, which grew out of economic considerations. Further, the utilization and development of mathematical methods expanded into other areas, leading to the creation of new fields such as mathematical finance and data science. The advent of the computer has enabled new applications: studying and using the new computer technology itself (computer science), using computers to study problems arising in other areas of science (computational science), and studying the mathematics of computation (for example, theoretical computer science, computer algebra, numerical analysis). Statistics is probably the most widespread mathematical science used in the social sciences. == Status in academic departments == Academic institutions are not consistent in the way they group and label courses, programs, and degrees in applied mathematics. 
At some schools, there is a single mathematics department, whereas others have separate departments for Applied Mathematics and (Pure) Mathematics. It is very common for statistics departments to be separate at schools with graduate programs, but many undergraduate-only institutions include statistics under the mathematics department. Many applied mathematics programs (as opposed to departments) consist primarily of cross-listed courses and jointly appointed faculty in departments representing applications. Some Ph.D. programs in applied mathematics require little or no coursework outside mathematics, while others require substantial coursework in a specific area of application. In some respects this difference reflects the distinction between "application of mathematics" and "applied mathematics". Some universities in the U.K. host departments of Applied Mathematics and Theoretical Physics, but it is now much less common to have separate departments of pure and applied mathematics. A notable exception to this is the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge, housing the Lucasian Professorship of Mathematics, whose past holders include Isaac Newton, Charles Babbage, James Lighthill, Paul Dirac, and Stephen Hawking. Schools with separate applied mathematics departments range from Brown University, which has a large Division of Applied Mathematics that offers degrees through the doctorate, to Santa Clara University, which offers only the M.S. in applied mathematics. Research universities dividing their mathematics department into pure and applied sections include MIT. Students in these programs also learn another skill (computer science, engineering, physics, pure mathematics, etc.) to supplement their applied mathematics skills. == Associated mathematical sciences == Applied mathematics is associated with the following mathematical sciences: === Engineering === Mathematics is used in all branches of engineering and has subsequently developed into distinct specialties within the engineering profession. For example, continuum mechanics is foundational to civil, mechanical and aerospace engineering, with courses in solid mechanics and fluid mechanics being important components of the engineering curriculum. Continuum mechanics is also an important branch of mathematics in its own right. It has served as the inspiration for a vast range of difficult research questions for mathematicians involved in the analysis of partial differential equations, differential geometry and the calculus of variations. Perhaps the most well-known mathematical problem posed by a continuum mechanical system is the question of Navier–Stokes existence and smoothness. Prominent career mathematicians (rather than engineers) who have contributed to the mathematics of continuum mechanics include Clifford Truesdell, Walter Noll, Andrey Kolmogorov and George Batchelor. An essential discipline for many fields in engineering is that of control engineering. The associated mathematical theory of this specialism is control theory, a branch of applied mathematics that builds on the mathematics of dynamical systems. Control theory has played a significant enabling role in modern technology, serving a foundational role in electrical, mechanical and aerospace engineering. 
Like continuum mechanics, control theory has also become a field of mathematical research in its own right, with mathematicians such as Aleksandr Lyapunov, Norbert Wiener, Lev Pontryagin and Fields Medallist Pierre-Louis Lions contributing to its foundations. === Scientific computing === Scientific computing includes applied mathematics (especially numerical analysis), computing science (especially high-performance computing), and mathematical modelling in a scientific discipline. === Computer science === Computer science relies on logic, algebra, discrete mathematics such as graph theory, and combinatorics. === Operations research and management science === Operations research and management science are often taught in faculties of engineering, business, and public policy. === Statistics === Applied mathematics has substantial overlap with the discipline of statistics. Statistical theorists study and improve statistical procedures with mathematics, and statistical research often raises mathematical questions. Statistical theory relies on probability and decision theory, and makes extensive use of scientific computing, analysis, and optimization; for the design of experiments, statisticians use algebra and combinatorial design. Applied mathematicians and statisticians often work in a department of mathematical sciences (particularly at colleges and small universities). === Actuarial science === Actuarial science applies probability, statistics, and economic theory to assess risk in insurance, finance and other industries and professions. === Mathematical economics === Mathematical economics is the application of mathematical methods to represent theories and analyze problems in economics. The applied methods usually refer to nontrivial mathematical techniques or approaches. Mathematical economics is based on statistics, probability, mathematical programming (as well as other computational methods), operations research, game theory, and some methods from mathematical analysis. In this regard, it resembles (but is distinct from) financial mathematics, another part of applied mathematics. According to the Mathematics Subject Classification (MSC), mathematical economics falls into the Applied mathematics/other classification of category 91: Game theory, economics, social and behavioral sciences with MSC2010 classifications for 'Game theory' at codes 91Axx Archived 2015-04-02 at the Wayback Machine and for 'Mathematical economics' at codes 91Bxx Archived 2015-04-02 at the Wayback Machine. === Other disciplines === The line between applied mathematics and specific areas of application is often blurred. Many universities teach mathematical and statistical courses outside the respective departments, in departments and areas including business, engineering, physics, chemistry, psychology, biology, computer science, scientific computation, information theory, and mathematical physics. == Applied Mathematics Societies == The Society for Industrial and Applied Mathematics is an international applied mathematics organization. As of 2024, the society has 14,000 individual members. The American Mathematical Society has its Applied Mathematics Group. 
== See also == Analytics Applied science Engineering mathematics Society for Industrial and Applied Mathematics == References == == Further reading == === Applicable mathematics === The Morehead Journal of Applicable Mathematics hosted by Morehead State University Series on Concrete and Applicable Mathematics by World Scientific Handbook of Applicable Mathematics Series by Walter Ledermann == External links == Media related to Applied mathematics at Wikimedia Commons The Society for Industrial and Applied Mathematics (SIAM) is a professional society dedicated to promoting the interaction between mathematics and other scientific and technical communities. Aside from organizing and sponsoring numerous conferences, SIAM is a major publisher of research journals and books in applied mathematics. The Applicable Mathematics Research Group at Notre Dame University (archived 29 March 2013) Centre for Applicable Mathematics at Liverpool Hope University (archived 1 April 2018) Applicable Mathematics research group at Glasgow Caledonian University (archived 4 March 2016)
|
https://en.wikipedia.org/wiki/Applied_mathematics
|
Mathematical logic is the study of formal logic within mathematics. Major subareas include model theory, proof theory, set theory, and recursion theory (also known as computability theory). Research in mathematical logic commonly addresses the mathematical properties of formal systems of logic such as their expressive or deductive power. However, it can also include uses of logic to characterize correct mathematical reasoning or to establish foundations of mathematics. Since its inception, mathematical logic has both contributed to and been motivated by the study of foundations of mathematics. This study began in the late 19th century with the development of axiomatic frameworks for geometry, arithmetic, and analysis. In the early 20th century it was shaped by David Hilbert's program to prove the consistency of foundational theories. Results of Kurt Gödel, Gerhard Gentzen, and others provided partial resolution to the program, and clarified the issues involved in proving consistency. Work in set theory showed that almost all ordinary mathematics can be formalized in terms of sets, although there are some theorems that cannot be proven in common axiom systems for set theory. Contemporary work in the foundations of mathematics often focuses on establishing which parts of mathematics can be formalized in particular formal systems (as in reverse mathematics) rather than trying to find theories in which all of mathematics can be developed. == Subfields and scope == The Handbook of Mathematical Logic in 1977 makes a rough division of contemporary mathematical logic into four areas: set theory model theory recursion theory, and proof theory and constructive mathematics (considered as parts of a single area). Additionally, sometimes the field of computational complexity theory is also included together with mathematical logic. Each area has a distinct focus, although many techniques and results are shared among multiple areas. The borderlines amongst these fields, and the lines separating mathematical logic and other fields of mathematics, are not always sharp. Gödel's incompleteness theorem marks not only a milestone in recursion theory and proof theory, but has also led to Löb's theorem in modal logic. The method of forcing is employed in set theory, model theory, and recursion theory, as well as in the study of intuitionistic mathematics. The mathematical field of category theory uses many formal axiomatic methods, and includes the study of categorical logic, but category theory is not ordinarily considered a subfield of mathematical logic. Because of its applicability in diverse fields of mathematics, mathematicians including Saunders Mac Lane have proposed category theory as a foundational system for mathematics, independent of set theory. These foundations use toposes, which resemble generalized models of set theory that may employ classical or nonclassical logic. == History == Mathematical logic emerged in the mid-19th century as a subfield of mathematics, reflecting the confluence of two traditions: formal philosophical logic and mathematics. Mathematical logic, also called 'logistic', 'symbolic logic', the 'algebra of logic', and, more recently, simply 'formal logic', is the set of logical theories elaborated in the course of the nineteenth century with the aid of an artificial notation and a rigorously deductive method. Before this emergence, logic was studied with rhetoric, with calculationes, through the syllogism, and with philosophy. 
The first half of the 20th century saw an explosion of fundamental results, accompanied by vigorous debate over the foundations of mathematics. === Early history === Theories of logic were developed in many cultures in history, including in ancient China, India, Greece, the Roman Empire and the Islamic world. Greek methods, particularly Aristotelian logic (or term logic) as found in the Organon, found wide application and acceptance in Western science and mathematics for millennia. The Stoics, especially Chrysippus, began the development of propositional logic. In 18th-century Europe, attempts to treat the operations of formal logic in a symbolic or algebraic way had been made by philosophical mathematicians including Leibniz and Lambert, but their labors remained isolated and little known. === 19th century === In the middle of the nineteenth century, George Boole and then Augustus De Morgan presented systematic mathematical treatments of logic. Their work, building on work by algebraists such as George Peacock, extended the traditional Aristotelian doctrine of logic into a sufficient framework for the study of foundations of mathematics. In 1847, Vatroslav Bertić did substantial work on the algebraization of logic, independently of Boole. Charles Sanders Peirce later built upon the work of Boole to develop a logical system for relations and quantifiers, which he published in several papers from 1870 to 1885. Gottlob Frege presented an independent development of logic with quantifiers in his Begriffsschrift, published in 1879, a work generally considered to mark a turning point in the history of logic. Frege's work remained obscure, however, until Bertrand Russell began to promote it near the turn of the century. The two-dimensional notation Frege developed was never widely adopted and is unused in contemporary texts. From 1890 to 1905, Ernst Schröder published Vorlesungen über die Algebra der Logik in three volumes. This work summarized and extended the work of Boole, De Morgan, and Peirce, and was a comprehensive reference to symbolic logic as it was understood at the end of the 19th century. ==== Foundational theories ==== Concerns that mathematics had not been built on a proper foundation led to the development of axiomatic systems for fundamental areas of mathematics such as arithmetic, analysis, and geometry. In logic, the term arithmetic refers to the theory of the natural numbers. Giuseppe Peano published a set of axioms for arithmetic that came to bear his name (Peano axioms), using a variation of the logical system of Boole and Schröder but adding quantifiers. Peano was unaware of Frege's work at the time. Around the same time Richard Dedekind showed that the natural numbers are uniquely characterized by their induction properties. Dedekind proposed a different characterization, which lacked the formal logical character of Peano's axioms. Dedekind's work, however, proved theorems inaccessible in Peano's system, including the uniqueness of the set of natural numbers (up to isomorphism) and the recursive definitions of addition and multiplication from the successor function and mathematical induction. In the mid-19th century, flaws in Euclid's axioms for geometry became known. In addition to the independence of the parallel postulate, established by Nikolai Lobachevsky in 1826, mathematicians discovered that certain theorems taken for granted by Euclid were not in fact provable from his axioms. 
Among these is the theorem that a line contains at least two points, or that circles of the same radius whose centers are separated by that radius must intersect. Hilbert developed a complete set of axioms for geometry, building on previous work by Pasch. The success in axiomatizing geometry motivated Hilbert to seek complete axiomatizations of other areas of mathematics, such as the natural numbers and the real line. This would prove to be a major area of research in the first half of the 20th century. The 19th century saw great advances in the theory of real analysis, including theories of convergence of functions and Fourier series. Mathematicians such as Karl Weierstrass began to construct functions that stretched intuition, such as nowhere-differentiable continuous functions. Previous conceptions of a function as a rule for computation, or a smooth graph, were no longer adequate. Weierstrass began to advocate the arithmetization of analysis, which sought to axiomatize analysis using properties of the natural numbers. The modern (ε, δ)-definition of limit and continuous functions was already developed by Bolzano in 1817, but remained relatively unknown. Cauchy in 1821 defined continuity in terms of infinitesimals (see Cours d'Analyse, page 34). In 1858, Dedekind proposed a definition of the real numbers in terms of Dedekind cuts of rational numbers, a definition still employed in contemporary texts. Georg Cantor developed the fundamental concepts of infinite set theory. His early results developed the theory of cardinality and proved that the reals and the natural numbers have different cardinalities. Over the next twenty years, Cantor developed a theory of transfinite numbers in a series of publications. In 1891, he published a new proof of the uncountability of the real numbers that introduced the diagonal argument, and used this method to prove Cantor's theorem that no set can have the same cardinality as its powerset. Cantor believed that every set could be well-ordered, but was unable to produce a proof for this result, leaving it as an open problem in 1895. === 20th century === In the early decades of the 20th century, the main areas of study were set theory and formal logic. The discovery of paradoxes in informal set theory caused some to wonder whether mathematics itself is inconsistent, and to look for proofs of consistency. In 1900, Hilbert posed a famous list of 23 problems for the next century. The first two of these were to resolve the continuum hypothesis and prove the consistency of elementary arithmetic, respectively; the tenth was to produce a method that could decide whether a multivariate polynomial equation over the integers has a solution. Subsequent work to resolve these problems shaped the direction of mathematical logic, as did the effort to resolve Hilbert's Entscheidungsproblem, posed in 1928. This problem asked for a procedure that would decide, given a formalized mathematical statement, whether the statement is true or false. ==== Set theory and paradoxes ==== Ernst Zermelo gave a proof that every set could be well-ordered, a result Georg Cantor had been unable to obtain. To achieve the proof, Zermelo introduced the axiom of choice, which drew heated debate and research among mathematicians and the pioneers of set theory. The immediate criticism of the method led Zermelo to publish a second exposition of his result, directly addressing criticisms of his proof. This paper led to the general acceptance of the axiom of choice in the mathematics community. 
Skepticism about the axiom of choice was reinforced by recently discovered paradoxes in naive set theory. Cesare Burali-Forti was the first to state a paradox: the Burali-Forti paradox shows that the collection of all ordinal numbers cannot form a set. Very soon thereafter, Bertrand Russell discovered Russell's paradox in 1901, and Jules Richard discovered Richard's paradox. Zermelo provided the first set of axioms for set theory. These axioms, together with the additional axiom of replacement proposed by Abraham Fraenkel, are now called Zermelo–Fraenkel set theory (ZF). Zermelo's axioms incorporated the principle of limitation of size to avoid Russell's paradox. In 1910, the first volume of Principia Mathematica by Russell and Alfred North Whitehead was published. This seminal work developed the theory of functions and cardinality in a completely formal framework of type theory, which Russell and Whitehead developed in an effort to avoid the paradoxes. Principia Mathematica is considered one of the most influential works of the 20th century, although the framework of type theory did not prove popular as a foundational theory for mathematics. Fraenkel proved that the axiom of choice cannot be proved from the axioms of Zermelo's set theory with urelements. Later work by Paul Cohen showed that the addition of urelements is not needed, and the axiom of choice is unprovable in ZF. Cohen's proof developed the method of forcing, which is now an important tool for establishing independence results in set theory. ==== Symbolic logic ==== Leopold Löwenheim and Thoralf Skolem obtained the Löwenheim–Skolem theorem, which says that first-order logic cannot control the cardinalities of infinite structures. Skolem realized that this theorem would apply to first-order formalizations of set theory, and that it implies any such formalization has a countable model. This counterintuitive fact became known as Skolem's paradox. In his doctoral thesis, Kurt Gödel proved the completeness theorem, which establishes a correspondence between syntax and semantics in first-order logic. Gödel used the completeness theorem to prove the compactness theorem, demonstrating the finitary nature of first-order logical consequence. These results helped establish first-order logic as the dominant logic used by mathematicians. In 1931, Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which proved the incompleteness (in a different meaning of the word) of all sufficiently strong, effective first-order theories. This result, known as Gödel's incompleteness theorem, establishes severe limitations on axiomatic foundations for mathematics, striking a strong blow to Hilbert's program. It showed the impossibility of providing a consistency proof of arithmetic within any formal theory of arithmetic. Hilbert, however, did not acknowledge the importance of the incompleteness theorem for some time. Gödel's theorem shows that a consistency proof of any sufficiently strong, effective axiom system cannot be obtained in the system itself, if the system is consistent, nor in any weaker system. This leaves open the possibility of consistency proofs that cannot be formalized within the system they consider. Gentzen proved the consistency of arithmetic using a finitistic system together with a principle of transfinite induction. Gentzen's result introduced the ideas of cut elimination and proof-theoretic ordinals, which became key tools in proof theory. 
Gödel gave a different consistency proof, which reduces the consistency of classical arithmetic to that of intuitionistic arithmetic in higher types. The first textbook on symbolic logic for the layman was written by Lewis Carroll, author of Alice's Adventures in Wonderland, in 1896. ==== Beginnings of the other branches ==== Alfred Tarski developed the basics of model theory. Beginning in 1935, a group of prominent mathematicians collaborated under the pseudonym Nicolas Bourbaki to publish Éléments de mathématique, a series of encyclopedic mathematics texts. These texts, written in an austere and axiomatic style, emphasized rigorous presentation and set-theoretic foundations. Terminology coined by these texts, such as the words bijection, injection, and surjection, and the set-theoretic foundations the texts employed, were widely adopted throughout mathematics. The study of computability came to be known as recursion theory or computability theory, because early formalizations by Gödel and Kleene relied on recursive definitions of functions. When these definitions were shown equivalent to Turing's formalization involving Turing machines, it became clear that a new concept – the computable function – had been discovered, and that this definition was robust enough to admit numerous independent characterizations. In his work on the incompleteness theorems in 1931, Gödel lacked a rigorous concept of an effective formal system; he immediately realized that the new definitions of computability could be used for this purpose, allowing him to state the incompleteness theorems in generality that could only be implied in the original paper. Numerous results in recursion theory were obtained in the 1940s by Stephen Cole Kleene and Emil Leon Post. Kleene introduced the concepts of relative computability, foreshadowed by Turing, and the arithmetical hierarchy. Kleene later generalized recursion theory to higher-order functionals. Kleene and Georg Kreisel studied formal versions of intuitionistic mathematics, particularly in the context of proof theory. == Formal logical systems == At its core, mathematical logic deals with mathematical concepts expressed using formal logical systems. These systems, though they differ in many details, share the common property of considering only expressions in a fixed formal language. The systems of propositional logic and first-order logic are the most widely studied today, because of their applicability to foundations of mathematics and because of their desirable proof-theoretic properties. Stronger classical logics such as second-order logic or infinitary logic are also studied, along with Non-classical logics such as intuitionistic logic. === First-order logic === First-order logic is a particular formal system of logic. Its syntax involves only finite expressions as well-formed formulas, while its semantics are characterized by the limitation of all quantifiers to a fixed domain of discourse. Early results from formal logic established limitations of first-order logic. The Löwenheim–Skolem theorem (1919) showed that if a set of sentences in a countable first-order language has an infinite model then it has at least one model of each infinite cardinality. This shows that it is impossible for a set of first-order axioms to characterize the natural numbers, the real numbers, or any other infinite structure up to isomorphism. 
As the goal of early foundational studies was to produce axiomatic theories for all parts of mathematics, this limitation was particularly stark. Gödel's completeness theorem established the equivalence between semantic and syntactic definitions of logical consequence in first-order logic. It shows that if a particular sentence is true in every model that satisfies a particular set of axioms, then there must be a finite deduction of the sentence from the axioms. The compactness theorem first appeared as a lemma in Gödel's proof of the completeness theorem, and it took many years before logicians grasped its significance and began to apply it routinely. It says that a set of sentences has a model if and only if every finite subset has a model, or in other words that an inconsistent set of formulas must have a finite inconsistent subset. The completeness and compactness theorems allow for sophisticated analysis of logical consequence in first-order logic and the development of model theory, and they are a key reason for the prominence of first-order logic in mathematics. Gödel's incompleteness theorems establish additional limits on first-order axiomatizations. The first incompleteness theorem states that for any consistent, effectively given (defined below) logical system that is capable of interpreting arithmetic, there exists a statement that is true (in the sense that it holds for the natural numbers) but not provable within that logical system (and which indeed may fail in some non-standard models of arithmetic which may be consistent with the logical system). For example, in every logical system capable of expressing the Peano axioms, the Gödel sentence holds for the natural numbers but cannot be proved. Here a logical system is said to be effectively given if it is possible to decide, given any formula in the language of the system, whether the formula is an axiom, and one which can express the Peano axioms is called "sufficiently strong." When applied to first-order logic, the first incompleteness theorem implies that any sufficiently strong, consistent, effective first-order theory has models that are not elementarily equivalent, a stronger limitation than the one established by the Löwenheim–Skolem theorem. The second incompleteness theorem states that no sufficiently strong, consistent, effective axiom system for arithmetic can prove its own consistency, which has been interpreted to show that Hilbert's program cannot be reached. === Other classical logics === Many logics besides first-order logic are studied. These include infinitary logics, which allow for formulas to provide an infinite amount of information, and higher-order logics, which include a portion of set theory directly in their semantics. The most well studied infinitary logic is L ω 1 , ω {\displaystyle L_{\omega _{1},\omega }} . In this logic, quantifiers may only be nested to finite depths, as in first-order logic, but formulas may have finite or countably infinite conjunctions and disjunctions within them. Thus, for example, it is possible to say that an object is a whole number using a formula of L ω 1 , ω {\displaystyle L_{\omega _{1},\omega }} such as ( x = 0 ) ∨ ( x = 1 ) ∨ ( x = 2 ) ∨ ⋯ . {\displaystyle (x=0)\lor (x=1)\lor (x=2)\lor \cdots .} Higher-order logics allow for quantification not only of elements of the domain of discourse, but subsets of the domain of discourse, sets of such subsets, and other objects of higher type. 
The semantics are defined so that, rather than having a separate domain for each higher-type quantifier to range over, the quantifiers instead range over all objects of the appropriate type. The logics studied before the development of first-order logic, for example Frege's logic, had similar set-theoretic aspects. Although higher-order logics are more expressive, allowing complete axiomatizations of structures such as the natural numbers, they do not satisfy analogues of the completeness and compactness theorems from first-order logic, and are thus less amenable to proof-theoretic analysis. Another family of logics comprises the fixed-point logics, which allow inductive definitions of the kind one writes for primitive recursive functions. One can formally define the notion of an extension of first-order logic, a notion which encompasses all logics in this section because they behave like first-order logic in certain fundamental ways, but which does not encompass all logics in general; for example, it does not encompass intuitionistic, modal or fuzzy logic. Lindström's theorem implies that the only extension of first-order logic satisfying both the compactness theorem and the downward Löwenheim–Skolem theorem is first-order logic. === Nonclassical and modal logic === Modal logics include additional modal operators, such as an operator which states that a particular formula is not only true, but necessarily true. Although modal logic is not often used to axiomatize mathematics, it has been used to study the properties of first-order provability and set-theoretic forcing. Intuitionistic logic was developed by Heyting to study Brouwer's program of intuitionism, in which Brouwer himself avoided formalization. Intuitionistic logic specifically does not include the law of the excluded middle, which states that each sentence is either true or its negation is true. Kleene's work with the proof theory of intuitionistic logic showed that constructive information can be recovered from intuitionistic proofs. For example, any provably total function in intuitionistic arithmetic is computable; this is not true in classical theories of arithmetic such as Peano arithmetic. === Algebraic logic === Algebraic logic uses the methods of abstract algebra to study the semantics of formal logics. A fundamental example is the use of Boolean algebras to represent truth values in classical propositional logic, and the use of Heyting algebras to represent truth values in intuitionistic propositional logic. Stronger logics, such as first-order logic and higher-order logic, are studied using more complicated algebraic structures such as cylindric algebras. == Set theory == Set theory is the study of sets, which are abstract collections of objects. Many of the basic notions, such as ordinal and cardinal numbers, were developed informally by Cantor before formal axiomatizations of set theory were developed. The first such axiomatization, due to Zermelo, was extended slightly to become Zermelo–Fraenkel set theory (ZF), which is now the most widely used foundational theory for mathematics. Other formalizations of set theory have been proposed, including von Neumann–Bernays–Gödel set theory (NBG), Morse–Kelley set theory (MK), and New Foundations (NF). Of these, ZF, NBG, and MK are similar in describing a cumulative hierarchy of sets. New Foundations takes a different approach; it allows objects such as the set of all sets at the cost of restrictions on its set-existence axioms. The system of Kripke–Platek set theory is closely related to generalized recursion theory. 
Two famous statements in set theory are the axiom of choice and the continuum hypothesis. The axiom of choice, first stated by Zermelo, was proved independent of ZF by Fraenkel, but has come to be widely accepted by mathematicians. It states that given a collection of nonempty sets there is a single set C that contains exactly one element from each set in the collection. The set C is said to "choose" one element from each set in the collection. While the ability to make such a choice is considered obvious by some, since each set in the collection is nonempty, the lack of a general, concrete rule by which the choice can be made renders the axiom nonconstructive. Stefan Banach and Alfred Tarski showed that the axiom of choice can be used to decompose a solid ball into a finite number of pieces which can then be rearranged, with no scaling, to make two solid balls of the original size. This theorem, known as the Banach–Tarski paradox, is one of many counterintuitive results of the axiom of choice. The continuum hypothesis, first proposed as a conjecture by Cantor, was listed by David Hilbert as one of his 23 problems in 1900. Gödel showed that the continuum hypothesis cannot be disproven from the axioms of Zermelo–Fraenkel set theory (with or without the axiom of choice), by developing the constructible universe of set theory in which the continuum hypothesis must hold. In 1963, Paul Cohen showed that the continuum hypothesis cannot be proven from the axioms of Zermelo–Fraenkel set theory. This independence result did not completely settle Hilbert's question, however, as it is possible that new axioms for set theory could resolve the hypothesis. Recent work along these lines has been conducted by W. Hugh Woodin, although its importance is not yet clear. Contemporary research in set theory includes the study of large cardinals and determinacy. Large cardinals are cardinal numbers with particular properties so strong that the existence of such cardinals cannot be proved in ZFC. The existence of the smallest large cardinal typically studied, an inaccessible cardinal, already implies the consistency of ZFC. Despite the fact that large cardinals have extremely high cardinality, their existence has many ramifications for the structure of the real line. Determinacy refers to the possible existence of winning strategies for certain two-player games (the games are said to be determined). The existence of these strategies implies structural properties of the real line and other Polish spaces. == Model theory == Model theory studies the models of various formal theories. Here a theory is a set of formulas in a particular formal logic and signature, while a model is a structure that gives a concrete interpretation of the theory. Model theory is closely related to universal algebra and algebraic geometry, although the methods of model theory focus more on logical considerations than those fields. The set of all models of a particular theory is called an elementary class; classical model theory seeks to determine the properties of models in a particular elementary class, or determine whether certain classes of structures form elementary classes. The method of quantifier elimination can be used to show that definable sets in particular theories cannot be too complicated. Tarski established quantifier elimination for real-closed fields, a result which also shows the theory of the field of real numbers is decidable. 
He also noted that his methods were equally applicable to algebraically closed fields of arbitrary characteristic. A modern subfield developing from this is concerned with o-minimal structures. Morley's categoricity theorem, proved by Michael D. Morley, states that if a first-order theory in a countable language is categorical in some uncountable cardinality, i.e. all models of this cardinality are isomorphic, then it is categorical in all uncountable cardinalities. A trivial consequence of the continuum hypothesis is that a complete theory with less than continuum many nonisomorphic countable models can have only countably many. Vaught's conjecture, named after Robert Lawson Vaught, says that this is true even independently of the continuum hypothesis. Many special cases of this conjecture have been established. == Recursion theory == Recursion theory, also called computability theory, studies the properties of computable functions and the Turing degrees, which divide the uncomputable functions into sets that have the same level of uncomputability. Recursion theory also includes the study of generalized computability and definability. Recursion theory grew from the work of Rózsa Péter, Alonzo Church and Alan Turing in the 1930s, which was greatly extended by Kleene and Post in the 1940s. Classical recursion theory focuses on the computability of functions from the natural numbers to the natural numbers. The fundamental results establish a robust, canonical class of computable functions with numerous independent, equivalent characterizations using Turing machines, λ calculus, and other systems. More advanced results concern the structure of the Turing degrees and the lattice of recursively enumerable sets. Generalized recursion theory extends the ideas of recursion theory to computations that are no longer necessarily finite. It includes the study of computability in higher types as well as areas such as hyperarithmetical theory and α-recursion theory. Contemporary research in recursion theory includes the study of applications such as algorithmic randomness, computable model theory, and reverse mathematics, as well as new results in pure recursion theory. === Algorithmically unsolvable problems === An important subfield of recursion theory studies algorithmic unsolvability; a decision problem or function problem is algorithmically unsolvable if there is no possible computable algorithm that returns the correct answer for all legal inputs to the problem. The first results about unsolvability, obtained independently by Church and Turing in 1936, showed that the Entscheidungsproblem is algorithmically unsolvable. Turing proved this by establishing the unsolvability of the halting problem, a result with far-ranging implications in both recursion theory and computer science. There are many known examples of undecidable problems from ordinary mathematics. The word problem for groups was proved algorithmically unsolvable by Pyotr Novikov in 1955 and independently by W. Boone in 1959. The busy beaver problem, developed by Tibor Radó in 1962, is another well-known example. Hilbert's tenth problem asked for an algorithm to determine whether a multivariate polynomial equation with integer coefficients has a solution in the integers. Partial progress was made by Julia Robinson, Martin Davis and Hilary Putnam. The algorithmic unsolvability of the problem was proved by Yuri Matiyasevich in 1970. 
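The undecidability of the halting problem discussed above can be conveyed by rendering Turing's diagonal argument as code. The following Python sketch assumes, for the sake of contradiction, a hypothetical total decider halts(program, arg); the names are illustrative, and nothing here implements an actual decision procedure, since that is exactly what the theorem rules out.

```python
# Sketch of Turing's diagonal argument. 'halts' is a hypothetical total
# function, assumed (for contradiction) to decide whether program(arg) halts.

def halts(program, arg) -> bool:
    """Hypothetical halting decider; assumed to exist only for the argument."""
    raise NotImplementedError("no such total, correct function can exist")

def diagonal(program):
    """Halts exactly when 'program', run on its own source, does not halt."""
    if halts(program, program):
        while True:   # loop forever
            pass
    return            # otherwise halt immediately

# Feeding 'diagonal' to itself yields a contradiction:
# if halts(diagonal, diagonal) were True, diagonal(diagonal) would loop forever;
# if it were False, diagonal(diagonal) would halt. Either way the assumed
# decider answers wrongly, so it cannot exist: the halting problem is undecidable.
```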
== Proof theory and constructive mathematics == Proof theory is the study of formal proofs in various logical deduction systems. These proofs are represented as formal mathematical objects, facilitating their analysis by mathematical techniques. Several deduction systems are commonly considered, including Hilbert-style deduction systems, systems of natural deduction, and the sequent calculus developed by Gentzen. The study of constructive mathematics, in the context of mathematical logic, includes the study of systems in non-classical logic such as intuitionistic logic, as well as the study of predicative systems. An early proponent of predicativism was Hermann Weyl, who showed it is possible to develop a large part of real analysis using only predicative methods. Because proofs are entirely finitary, whereas truth in a structure is not, it is common for work in constructive mathematics to emphasize provability. The relationship between provability in classical (or nonconstructive) systems and provability in intuitionistic (or constructive, respectively) systems is of particular interest. Results such as the Gödel–Gentzen negative translation show that it is possible to embed (or translate) classical logic into intuitionistic logic, allowing some properties about intuitionistic proofs to be transferred back to classical proofs. Recent developments in proof theory include the study of proof mining by Ulrich Kohlenbach and the study of proof-theoretic ordinals by Michael Rathjen. == Applications == "Mathematical logic has been successfully applied not only to mathematics and its foundations (G. Frege, B. Russell, D. Hilbert, P. Bernays, H. Scholz, R. Carnap, S. Lesniewski, T. Skolem), but also to physics (R. Carnap, A. Dittrich, B. Russell, C. E. Shannon, A. N. Whitehead, H. Reichenbach, P. Fevrier), to biology (J. H. Woodger, A. Tarski), to psychology (F. B. Fitch, C. G. Hempel), to law and morals (K. Menger, U. Klug, P. Oppenheim), to economics (J. Neumann, O. Morgenstern), to practical questions (E. C. Berkeley, E. Stamm), and even to metaphysics (J. [Jan] Salamucha, H. Scholz, J. M. Bochenski). Its applications to the history of logic have proven extremely fruitful (J. Lukasiewicz, H. Scholz, B. Mates, A. Becker, E. Moody, J. Salamucha, K. Duerr, Z. Jordan, P. Boehner, J. M. Bochenski, S. [Stanislaw] T. Schayer, D. Ingalls)." "Applications have also been made to theology (F. Drewnowski, J. Salamucha, I. Thomas)." == Connections with computer science == The study of computability theory in computer science is closely related to the study of computability in mathematical logic. There is a difference of emphasis, however. Computer scientists often focus on concrete programming languages and feasible computability, while researchers in mathematical logic often focus on computability as a theoretical concept and on noncomputability. The theory of semantics of programming languages is related to model theory, as is program verification (in particular, model checking). The Curry–Howard correspondence between proofs and programs relates to proof theory, especially intuitionistic logic. Formal calculi such as the lambda calculus and combinatory logic are now studied as idealized programming languages. Computer science also contributes to mathematics by developing techniques for the automatic checking or even finding of proofs, such as automated theorem proving and logic programming. Descriptive complexity theory relates logics to computational complexity. 
The first significant result in this area, Fagin's theorem (1974) established that NP is precisely the set of languages expressible by sentences of existential second-order logic. == Foundations of mathematics == In the 19th century, mathematicians became aware of logical gaps and inconsistencies in their field. It was shown that Euclid's axioms for geometry, which had been taught for centuries as an example of the axiomatic method, were incomplete. The use of infinitesimals, and the very definition of function, came into question in analysis, as pathological examples such as Weierstrass' nowhere-differentiable continuous function were discovered. Cantor's study of arbitrary infinite sets also drew criticism. Leopold Kronecker famously stated "God made the integers; all else is the work of man," endorsing a return to the study of finite, concrete objects in mathematics. Although Kronecker's argument was carried forward by constructivists in the 20th century, the mathematical community as a whole rejected them. David Hilbert argued in favor of the study of the infinite, saying "No one shall expel us from the Paradise that Cantor has created." Mathematicians began to search for axiom systems that could be used to formalize large parts of mathematics. In addition to removing ambiguity from previously naive terms such as function, it was hoped that this axiomatization would allow for consistency proofs. In the 19th century, the main method of proving the consistency of a set of axioms was to provide a model for it. Thus, for example, non-Euclidean geometry can be proved consistent by defining point to mean a point on a fixed sphere and line to mean a great circle on the sphere. The resulting structure, a model of elliptic geometry, satisfies the axioms of plane geometry except the parallel postulate. With the development of formal logic, Hilbert asked whether it would be possible to prove that an axiom system is consistent by analyzing the structure of possible proofs in the system, and showing through this analysis that it is impossible to prove a contradiction. This idea led to the study of proof theory. Moreover, Hilbert proposed that the analysis should be entirely concrete, using the term finitary to refer to the methods he would allow but not precisely defining them. This project, known as Hilbert's program, was seriously affected by Gödel's incompleteness theorems, which show that the consistency of formal theories of arithmetic cannot be established using methods formalizable in those theories. Gentzen showed that it is possible to produce a proof of the consistency of arithmetic in a finitary system augmented with axioms of transfinite induction, and the techniques he developed to do so were seminal in proof theory. A second thread in the history of foundations of mathematics involves nonclassical logics and constructive mathematics. The study of constructive mathematics includes many different programs with various definitions of constructive. At the most accommodating end, proofs in ZF set theory that do not use the axiom of choice are called constructive by many mathematicians. More limited versions of constructivism limit themselves to natural numbers, number-theoretic functions, and sets of natural numbers (which can be used to represent real numbers, facilitating the study of mathematical analysis). A common idea is that a concrete means of computing the values of the function must be known before the function itself can be said to exist. 
In the early 20th century, Luitzen Egbertus Jan Brouwer founded intuitionism as a part of philosophy of mathematics. This philosophy, poorly understood at first, stated that in order for a mathematical statement to be true to a mathematician, that person must be able to intuit the statement, to not only believe its truth but understand the reason for its truth. A consequence of this definition of truth was the rejection of the law of the excluded middle, for there are statements that, according to Brouwer, could not be claimed to be true while their negations also could not be claimed true. Brouwer's philosophy was influential, and the cause of bitter disputes among prominent mathematicians. Kleene and Kreisel would later study formalized versions of intuitionistic logic (Brouwer rejected formalization, and presented his work in unformalized natural language). With the advent of the BHK interpretation and Kripke models, intuitionism became easier to reconcile with classical mathematics. == See also == Argument Informal logic Universal logic Knowledge representation and reasoning List of computability and complexity topics List of first-order theories List of logic symbols List of mathematical logic topics List of set theory topics Mereology == Notes == == References == === Undergraduate texts === Walicki, Michał (2011). Introduction to Mathematical Logic. Singapore: World Scientific Publishing. ISBN 9789814343879. Boolos, George; Burgess, John; Jeffrey, Richard (2002). Computability and Logic (4th ed.). Cambridge University Press. ISBN 9780521007580. Crossley, J.N.; Ash, C.J.; Brickhill, C.J.; Stillwell, J.C.; Williams, N.H. (1972). What is mathematical logic?. London, Oxford, New York City: Oxford University Press. ISBN 9780198880875. Zbl 0251.02001. Enderton, Herbert (2001). A mathematical introduction to logic (2nd ed.). Boston MA: Academic Press. ISBN 978-0-12-238452-3. Fisher, Alec (1982). Formal Number Theory and Computability: A Workbook. (suitable as a first course for independent study) (1st ed.). Oxford University Press. ISBN 978-0-19-853188-3. Hamilton, A.G. (1988). Logic for Mathematicians (2nd ed.). Cambridge University Press. ISBN 978-0-521-36865-0. Ebbinghaus, H.-D.; Flum, J.; Thomas, W. (1994). Mathematical Logic (2nd ed.). New York City: Springer. ISBN 9780387942582. Katz, Robert (1964). Axiomatic Analysis. Boston MA: D. C. Heath and Company. Mendelson, Elliott (1997). Introduction to Mathematical Logic (4th ed.). London: Chapman & Hall. ISBN 978-0-412-80830-2. Rautenberg, Wolfgang (2010). A Concise Introduction to Mathematical Logic (3rd ed.). New York City: Springer. doi:10.1007/978-1-4419-1221-3. ISBN 9781441912206. Schwichtenberg, Helmut (2003–2004). Mathematical Logic (PDF). Munich: Mathematisches Institut der Universität München. Retrieved 2016-02-24. Shawn Hedman, A first course in logic: an introduction to model theory, proof theory, computability, and complexity, Oxford University Press, 2004, ISBN 0-19-852981-3. Covers logics in close relation with computability theory and complexity theory van Dalen, Dirk (2013). Logic and Structure. Universitext. Berlin: Springer. doi:10.1007/978-1-4471-4558-5. ISBN 978-1-4471-4557-8. === Graduate texts === Hinman, Peter G. (2005). Fundamentals of mathematical logic. A K Peters, Ltd. ISBN 1-56881-262-0. Andrews, Peter B. (2002). An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof (2nd ed.). Boston: Kluwer Academic Publishers. ISBN 978-1-4020-0763-7. Barwise, Jon, ed. (1989). 
Handbook of Mathematical Logic. Studies in Logic and the Foundations of Mathematics. Amsterdam: Elsevier. ISBN 9780444863881. Hodges, Wilfrid (1997). A shorter model theory. Cambridge University Press. ISBN 9780521587136. Jech, Thomas (2003). Set Theory: Millennium Edition. Springer Monographs in Mathematics. Berlin, New York: Springer. ISBN 9783540440857. Kleene, Stephen Cole (1952). Introduction to Metamathematics. New York: Van Nostrand. (Ishi Press: 2009 reprint). Kleene, Stephen Cole (1967). Mathematical Logic. John Wiley. Dover reprint, 2002. ISBN 0-486-42533-9. Shoenfield, Joseph R. (2001) [1967]. Mathematical Logic (2nd ed.). A K Peters. ISBN 9781568811352. Troelstra, Anne Sjerp; Schwichtenberg, Helmut (2000). Basic Proof Theory. Cambridge Tracts in Theoretical Computer Science (2nd ed.). Cambridge University Press. ISBN 978-0-521-77911-1. === Research papers, monographs, texts, and surveys === Augusto, Luis M. (2017). Logical consequences. Theory and applications: An introduction. London: College Publications. ISBN 978-1-84890-236-7. Boehner, Philotheus (1950). Medieval Logic. Manchester. Cohen, Paul J. (1966). Set Theory and the Continuum Hypothesis. Menlo Park CA: W. A. Benjamin. Cohen, Paul J. (2008) [1966]. Set theory and the continuum hypothesis. Mineola NY: Dover Publications. ISBN 9780486469218. Sneed, J.D. (1971). The Logical Structure of Mathematical Physics. Dordrecht: Reidel (revised edition 1979). Davis, Martin (1973). "Hilbert's tenth problem is unsolvable". The American Mathematical Monthly. 80 (3): 233–269. doi:10.2307/2318447. JSTOR 2318447. Reprinted as an appendix in Martin Davis (1985). Computability and Unsolvability. Dover. ISBN 9780486614717. Felscher, Walter (2000). "Bolzano, Cauchy, Epsilon, Delta". The American Mathematical Monthly. 107 (9): 844–862. doi:10.2307/2695743. JSTOR 2695743. Ferreirós, José (2001). "The Road to Modern Logic-An Interpretation" (PDF). Bulletin of Symbolic Logic. 7 (4): 441–484. doi:10.2307/2687794. hdl:11441/38373. JSTOR 2687794. S2CID 43258676. Hamkins, Joel David; Löwe, Benedikt (2007). "The modal logic of forcing". Transactions of the American Mathematical Society. 360 (4): 1793–1818. arXiv:math/0509616. doi:10.1090/s0002-9947-07-04297-3. S2CID 14724471. Katz, Victor J. (1998). A History of Mathematics. Addison–Wesley. ISBN 9780321016188. Morley, Michael (1965). "Categoricity in Power". Transactions of the American Mathematical Society. 114 (2): 514–538. doi:10.2307/1994188. JSTOR 1994188. Soare, Robert I. (1996). "Computability and recursion". Bulletin of Symbolic Logic. 2 (3): 284–321. CiteSeerX 10.1.1.35.5803. doi:10.2307/420992. JSTOR 420992. S2CID 5894394. Solovay, Robert M. (1976). "Provability Interpretations of Modal Logic". Israel Journal of Mathematics. 25 (3–4): 287–304. doi:10.1007/BF02757006. S2CID 121226261. Woodin, W. Hugh (2001). "The Continuum Hypothesis, Part I" (PDF). Notices of the American Mathematical Society. 48 (6). === Classical papers, texts, and collections === Banach, Stefan; Tarski, Alfred (1924). "Sur la décomposition des ensembles de points en parties respectivement congruentes" (PDF). Fundamenta Mathematicae (in French). 6: 244–277. doi:10.4064/fm-6-1-244-277. Bochenski, Jozef Maria, ed. (1959). A Precis of Mathematical Logic. Synthese Library, Vol. 1. Translated by Otto Bird. Dordrecht: Springer. doi:10.1007/978-94-017-0592-9. ISBN 9789048183296. Burali-Forti, Cesare (1897). 
A question on transfinite numbers. Reprinted in van Heijenoort 1976, pp. 104–111 Cantor, Georg (1874). "Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen" (PDF). Journal für die Reine und Angewandte Mathematik. 1874 (77): 258–262. doi:10.1515/crll.1874.77.258. S2CID 199545885. Carroll, Lewis (1896). Symbolic Logic. Kessinger Legacy Reprints. ISBN 9781163444955. Dedekind, Richard (1872). Stetigkeit und irrationale Zahlen (in German). English translation as: "Continuity and irrational numbers". Dedekind, Richard (1888). Was sind und was sollen die Zahlen?. Two English translations: 1963 (1901). Essays on the Theory of Numbers. Beman, W. W., ed. and trans. Dover. 1996. In From Kant to Hilbert: A Source Book in the Foundations of Mathematics, 2 vols, Ewald, William B., ed., Oxford University Press: 787–832. Fraenkel, Abraham A. (1922). "Der Begriff 'definit' und die Unabhängigkeit des Auswahlsaxioms". Sitzungsberichte der Preussischen Akademie der Wissenschaften, Physikalisch-mathematische Klasse (in German). pp. 253–257. Reprinted in English translation as "The notion of 'definite' and the independence of the axiom of choice" in van Heijenoort 1976, pp. 284–289. Frege, Gottlob (1879), Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle a. S.: Louis Nebert. Translation: Concept Script, a formal language of pure thought modelled upon that of arithmetic, by S. Bauer-Mengelberg in van Heijenoort 1976. Frege, Gottlob (1884), Die Grundlagen der Arithmetik: eine logisch-mathematische Untersuchung über den Begriff der Zahl. Breslau: W. Koebner. Translation: J. L. Austin, 1974. The Foundations of Arithmetic: A logico-mathematical enquiry into the concept of number, 2nd ed. Blackwell. Gentzen, Gerhard (1936). "Die Widerspruchsfreiheit der reinen Zahlentheorie". Mathematische Annalen. 112: 132–213. doi:10.1007/BF01565428. S2CID 122719892. Reprinted in English translation in Gentzen's Collected works, M. E. Szabo, ed., North-Holland, Amsterdam, 1969. Gödel, Kurt (1929). Über die Vollständigkeit des Logikkalküls [Completeness of the logical calculus]. Doctoral dissertation. University of Vienna. Gödel, Kurt (1930). "Die Vollständigkeit der Axiome des logischen Funktionen-kalküls" [The completeness of the axioms of the calculus of logical functions]. Monatshefte für Mathematik und Physik (in German). 37: 349–360. doi:10.1007/BF01696781. S2CID 123343522. Gödel, Kurt (1931). "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I" [On Formally Undecidable Propositions of Principia Mathematica and Related Systems]. Monatshefte für Mathematik und Physik (in German). 38 (1): 173–198. doi:10.1007/BF01700692. S2CID 197663120. Gödel, Kurt (1958). "Über eine bisher noch nicht benützte Erweiterung des finiten Standpunktes". Dialectica (in German). 12 (3–4): 280–287. doi:10.1111/j.1746-8361.1958.tb01464.x. Reprinted in English translation in Gödel's Collected Works, vol II, Solomon Feferman et al., eds. Oxford University Press, 1993. van Heijenoort, Jean, ed. (1976) [1967]. From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931 (3rd ed.). Cambridge MA: Harvard University Press. ISBN 9780674324497. (pbk.). Hilbert, David (1899). Grundlagen der Geometrie (in German). Leipzig: Teubner. English 1902 edition (The Foundations of Geometry) republished 1980, Open Court, Chicago. Hilbert, David (1929). "Probleme der Grundlegung der Mathematik". Mathematische Annalen. 
102: 1–9. doi:10.1007/BF01782335. S2CID 122870563. Lecture given at the International Congress of Mathematicians, 3 September 1928. Published in English translation as "The Grounding of Elementary Number Theory", in Mancosu 1998, pp. 266–273. Hilbert, David; Bernays, Paul (1934). Grundlagen der Mathematik. I. Die Grundlehren der mathematischen Wissenschaften. Vol. 40. Berlin, New York City: Springer. ISBN 9783540041344. JFM 60.0017.02. MR 0237246. Kleene, Stephen Cole (1943). "Recursive Predicates and Quantifiers". Transactions of the American Mathematical Society. 53 (1): 41–73. doi:10.2307/1990131. JSTOR 1990131. Lobachevsky, Nikolai (1840). Geometrische Untersuchungen zur Theorie der Parallellinien (in German). Reprinted in English translation as Robert Bonola, ed. (1955). "Geometric Investigations on the Theory of Parallel Lines". Non-Euclidean Geometry. Dover. ISBN 0-486-60027-0. Löwenheim, Leopold (1915). "Über Möglichkeiten im Relativkalkül". Mathematische Annalen (in German). 76 (4): 447–470. doi:10.1007/BF01458217. ISSN 0025-5831. S2CID 116581304. Translated as "On possibilities in the calculus of relatives" in Jean van Heijenoort (1967). A Source Book in Mathematical Logic, 1879–1931. Harvard Univ. Press. pp. 228–251. Mancosu, Paolo, ed. (1998). From Brouwer to Hilbert. The Debate on the Foundations of Mathematics in the 1920s. Oxford University Press. Pasch, Moritz (1882). Vorlesungen über neuere Geometrie. Peano, Giuseppe (1889). Arithmetices principia, nova methodo exposita (in Latin). Excerpt reprinted in English translation as "The principles of arithmetic, presented by a new method" in van Heijenoort 1976, pp. 83–97. Richard, Jules (1905). "Les principes des mathématiques et le problème des ensembles". Revue Générale des Sciences Pures et Appliquées (in French). 16: 541. Reprinted in English translation as "The principles of mathematics and the problems of sets" in van Heijenoort 1976, pp. 142–144. Skolem, Thoralf (1920). "Logisch-kombinatorische Untersuchungen über die Erfüllbarkeit oder Beweisbarkeit mathematischer Sätze nebst einem Theoreme über dichte Mengen". Videnskapsselskapet Skrifter, I. Matematisk-naturvidenskabelig Klasse (in German). 6: 1–36. Soare, Robert Irving (22 December 2011). "Computability Theory and Applications: The Art of Classical Computability" (PDF). Department of Mathematics. University of Chicago. Retrieved 23 August 2017. Swineshead, Richard (1498). Calculationes Suiseth Anglici (in Latin). Papie: Per Franciscum Gyrardengum. Tarski, Alfred (1948). A decision method for elementary algebra and geometry. Santa Monica CA: RAND Corporation. Turing, Alan M. (1939). "Systems of Logic Based on Ordinals". Proceedings of the London Mathematical Society. 45 (2): 161–228. doi:10.1112/plms/s2-45.1.161. hdl:21.11116/0000-0001-91CE-3. Weyl, Hermann (1918). Das Kontinuum. Kritische Untersuchungen über die Grundlagen der Analysis (in German). Leipzig. Zermelo, Ernst (1904). "Beweis, daß jede Menge wohlgeordnet werden kann". Mathematische Annalen (in German). 59 (4): 514–516. doi:10.1007/BF01445300. S2CID 124189935. Reprinted in English translation as "Proof that every set can be well-ordered" in van Heijenoort 1976, pp. 139–141. Zermelo, Ernst (1908a). "Neuer Beweis für die Möglichkeit einer Wohlordnung". Mathematische Annalen (in German). 
65: 107–128. doi:10.1007/BF01450054. ISSN 0025-5831. S2CID 119924143. Reprinted in English translation as "A new proof of the possibility of a well-ordering" in van Heijenoort 1976, pp. 183–198. Zermelo, Ernst (1908b). "Untersuchungen über die Grundlagen der Mengenlehre". Mathematische Annalen. 65 (2): 261–281. doi:10.1007/BF01449999. S2CID 120085563. == External links == "Mathematical logic", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Polyvalued logic and Quantity Relation Logic forall x: an introduction to formal logic, a free textbook by P. D. Magnus. A Problem Course in Mathematical Logic, a free textbook by Stefan Bilaniuk. Detlovs, Vilnis, and Podnieks, Karlis (University of Latvia), Introduction to Mathematical Logic. (hyper-textbook). In the Stanford Encyclopedia of Philosophy: Classical Logic by Stewart Shapiro. First-order Model Theory by Wilfrid Hodges. In the London Philosophy Study Guide: Mathematical Logic Set Theory & Further Logic Philosophy of Mathematics School of Mathematics, University of Manchester, Prof. Jeff Paris’s Mathematical Logic (course material and unpublished papers)
|
https://en.wikipedia.org/wiki/Mathematical_logic
|
In mathematics, an invariant is a property of a mathematical object (or a class of mathematical objects) which remains unchanged after operations or transformations of a certain type are applied to the objects. The particular class of objects and type of transformations are usually indicated by the context in which the term is used. For example, the area of a triangle is an invariant with respect to isometries of the Euclidean plane. The phrases "invariant under" and "invariant to" a transformation are both used. More generally, an invariant with respect to an equivalence relation is a property that is constant on each equivalence class. Invariants are used in diverse areas of mathematics such as geometry, topology, algebra and discrete mathematics. Some important classes of transformations are defined by an invariant they leave unchanged. For example, conformal maps are defined as transformations of the plane that preserve angles. The discovery of invariants is an important step in the process of classifying mathematical objects. == Examples == A simple example of invariance is expressed in our ability to count. For a finite set of objects of any kind, there is a number to which we always arrive, regardless of the order in which we count the objects in the set. The quantity—a cardinal number—is associated with the set, and is invariant under the process of counting. An identity is an equation that remains true for all values of its variables. There are also inequalities that remain true when the values of their variables change. The distance between two points on a number line is not changed by adding the same quantity to both numbers. On the other hand, multiplication does not have this same property, as distance is not invariant under multiplication. Angles and ratios of distances are invariant under scalings, rotations, translations and reflections. These transformations produce similar shapes, which is the basis of trigonometry. In contrast, angles and ratios are not invariant under non-uniform scaling (such as stretching). The sum of a triangle's interior angles (180°) is invariant under all the above operations. As another example, all circles are similar: they can be transformed into each other and the ratio of the circumference to the diameter is invariant (denoted by the Greek letter π (pi)). Some more complicated examples: The real part and the absolute value of a complex number are invariant under complex conjugation. The tricolorability of knots. The degree of a polynomial is invariant under a linear change of variables. The dimension and homology groups of a topological object are invariant under homeomorphism. The number of fixed points of a dynamical system is invariant under many mathematical operations. Euclidean distance is invariant under orthogonal transformations. Area is invariant under linear maps which have determinant ±1 (see Equiareal map § Linear transformations). Some invariants of projective transformations include collinearity of three or more points, concurrency of three or more lines, conic sections, and the cross-ratio. The determinant, trace, eigenvectors, and eigenvalues of a linear endomorphism are invariant under a change of basis. In other words, the spectrum of a matrix is invariant under a change of basis. The principal invariants of tensors do not change with rotation of the coordinate system (see Invariants of tensors). The singular values of a matrix are invariant under orthogonal transformations. Lebesgue measure is invariant under translations. 
The variance of a probability distribution is invariant under translations of the real line. Hence the variance of a random variable is unchanged after the addition of a constant. The fixed points of a transformation are the elements in the domain that are invariant under the transformation. They may, depending on the application, be called symmetric with respect to that transformation. For example, objects with translational symmetry are invariant under certain translations. The integral ∫ M K d μ {\textstyle \int _{M}K\,d\mu } of the Gaussian curvature K {\displaystyle K} of a two-dimensional Riemannian manifold ( M , g ) {\displaystyle (M,g)} is invariant under changes of the Riemannian metric g {\displaystyle g} . This is the Gauss–Bonnet theorem. === MU puzzle === The MU puzzle is a good example of a logical problem where determining an invariant is of use for an impossibility proof. The puzzle asks one to start with the word MI and transform it into the word MU, using in each step one of the following transformation rules: If a string ends with an I, a U may be appended (xI → xIU) The string after the M may be completely duplicated (Mx → Mxx) Any three consecutive I's (III) may be replaced with a single U (xIIIy → xUy) Any two consecutive U's may be removed (xUUy → xy) An example derivation (with superscripts indicating the applied rules) is MI →2 MII →2 MIIII →3 MUI →2 MUIUI →1 MUIUIU →2 MUIUIUUIUIU →4 MUIUIIUIU → ... In light of this, one might wonder whether it is possible to convert MI into MU, using only these four transformation rules. One could spend many hours applying these transformation rules to strings. However, it might be quicker to find a property that is invariant to all rules (that is, not changed by any of them), and that demonstrates that getting to MU is impossible. By looking at the puzzle from a logical standpoint, one might realize that the only way to get rid of any I's is to have three consecutive I's in the string. This makes the following invariant interesting to consider: The number of I's in the string is not a multiple of 3. This is an invariant of the problem if for each of the transformation rules the following holds: if the invariant held before applying the rule, it will also hold after applying it. Looking at the net effect of applying the rules on the number of I's, one can see this actually is the case for all rules: rules 1 and 4 leave the number of I's unchanged; rule 2 doubles it, and doubling a number that is not a multiple of three never yields a multiple of three; rule 3 decreases it by three, which does not change its remainder modulo three. The invariant therefore holds for each of the possible transformation rules, which means that whichever rule one picks, at whatever state, if the number of I's was not a multiple of three before applying the rule, then it will not be afterwards either. Given that there is a single I in the starting string MI, and one is not a multiple of three, one can then conclude that it is impossible to go from MI to MU (as the number of I's will never be a multiple of three). == Invariant set == A subset S of the domain U of a mapping T: U → U is an invariant set under the mapping when x ∈ S ⟹ T ( x ) ∈ S . {\displaystyle x\in S\implies T(x)\in S.} The elements of S are not necessarily fixed, even though the set S is fixed in the power set of U. (Some authors use the terminology setwise invariant, vs. pointwise invariant, to distinguish between these cases.) For example, a circle is an invariant subset of the plane under a rotation about the circle's center. Further, a conical surface is invariant as a set under a homothety of space. An invariant set of an operation T is also said to be stable under T. 
For example, the normal subgroups that are so important in group theory are those subgroups that are stable under the inner automorphisms of the ambient group. In linear algebra, if a linear transformation T has an eigenvector v, then the line through 0 and v is an invariant set under T, in which case the eigenvectors span an invariant subspace which is stable under T. When T is a screw displacement, the screw axis is an invariant line, though if the pitch is non-zero, T has no fixed points. In probability theory and ergodic theory, invariant sets are usually defined via the stronger property x ∈ S ⇔ T ( x ) ∈ S . {\displaystyle x\in S\Leftrightarrow T(x)\in S.} When the map T {\displaystyle T} is measurable, invariant sets form a sigma-algebra, the invariant sigma-algebra. == Formal statement == The notion of invariance is formalized in three different ways in mathematics: via group actions, presentations, and deformation. === Unchanged under group action === Firstly, if one has a group G acting on a mathematical object (or set of objects) X, then one may ask which points x are unchanged, "invariant" under the group action, or under an element g of the group. Frequently one will have a group acting on a set X, which leaves one to determine which objects in an associated set F(X) are invariant. For example, rotation in the plane about a point leaves the point about which it rotates invariant, while translation in the plane does not leave any points invariant, but does leave all lines parallel to the direction of translation invariant as lines. Formally, define the set of lines in the plane P as L(P); then a rigid motion of the plane takes lines to lines – the group of rigid motions acts on the set of lines – and one may ask which lines are unchanged by an action. More importantly, one may define a function on a set, such as "radius of a circle in the plane", and then ask if this function is invariant under a group action, such as rigid motions. Dual to the notion of invariants are coinvariants, also known as orbits, which formalizes the notion of congruence: objects which can be taken to each other by a group action. For example, under the group of rigid motions of the plane, the perimeter of a triangle is an invariant, while the set of triangles congruent to a given triangle is a coinvariant. These are connected as follows: invariants are constant on coinvariants (for example, congruent triangles have the same perimeter), while two objects which agree in the value of one invariant may or may not be congruent (for example, two triangles with the same perimeter need not be congruent). In classification problems, one might seek to find a complete set of invariants, such that if two objects have the same values for this set of invariants, then they are congruent. For example, triangles such that all three sides are equal are congruent under rigid motions, via SSS congruence, and thus the lengths of all three sides form a complete set of invariants for triangles. The three angle measures of a triangle are also invariant under rigid motions, but do not form a complete set as incongruent triangles can share the same angle measures. However, if one allows scaling in addition to rigid motions, then the AAA similarity criterion shows that this is a complete set of invariants. 
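As a small illustration of a complete set of invariants, the following C sketch (hypothetical code written for the triangle example above, with illustrative names such as congruent and dist) decides congruence of two planar triangles by comparing their sorted side-length triples, which is the SSS criterion just described.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { double x, y; } Point;

static double dist(Point a, Point b) {
    return hypot(a.x - b.x, a.y - b.y);
}

static int cmp(const void *a, const void *b) {
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

/* The multiset of side lengths is invariant under rigid motions, and by SSS
 * it is a complete invariant: two triangles are congruent exactly when their
 * sorted side-length triples agree (up to floating-point rounding). */
static int congruent(Point t1[3], Point t2[3]) {
    double s1[3], s2[3];
    for (int i = 0; i < 3; i++) {
        s1[i] = dist(t1[i], t1[(i + 1) % 3]);
        s2[i] = dist(t2[i], t2[(i + 1) % 3]);
    }
    qsort(s1, 3, sizeof(double), cmp);
    qsort(s2, 3, sizeof(double), cmp);
    for (int i = 0; i < 3; i++)
        if (fabs(s1[i] - s2[i]) > 1e-9) return 0;
    return 1;
}

int main(void) {
    Point a[3] = {{0, 0}, {3, 0}, {0, 4}};
    Point b[3] = {{1, 1}, {1, 4}, {5, 1}};   /* the same 3-4-5 triangle, moved */
    printf("%s\n", congruent(a, b) ? "congruent" : "not congruent");
    return 0;
}
```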
=== Independent of presentation === Secondly, a function may be defined in terms of some presentation or decomposition of a mathematical object; for instance, the Euler characteristic of a cell complex is defined as the alternating sum of the number of cells in each dimension. One may forget the cell complex structure and look only at the underlying topological space (the manifold) – as different cell complexes give the same underlying manifold, one may ask if the function is independent of choice of presentation, in which case it is an intrinsically defined invariant. This is the case for the Euler characteristic, and a general method for defining and computing invariants is to define them for a given presentation, and then show that they are independent of the choice of presentation. Note that there is no notion of a group action in this sense. The most common examples are: The presentation of a manifold in terms of coordinate charts – invariants must be unchanged under change of coordinates. Various manifold decompositions, as discussed for Euler characteristic. Invariants of a presentation of a group. === Unchanged under perturbation === Thirdly, if one is studying an object which varies in a family, as is common in algebraic geometry and differential geometry, one may ask if the property is unchanged under perturbation (for example, if an object is constant on families or invariant under change of metric). == Invariants in computer science == In computer science, an invariant is a logical assertion that is always held to be true during a certain phase of execution of a computer program. For example, a loop invariant is a condition that is true at the beginning and the end of every iteration of a loop. Invariants are especially useful when reasoning about the correctness of a computer program. The theory of optimizing compilers, the methodology of design by contract, and formal methods for determining program correctness, all rely heavily on invariants. Programmers often use assertions in their code to make invariants explicit. Some object oriented programming languages have a special syntax for specifying class invariants. === Automatic invariant detection in imperative programs === Abstract interpretation tools can compute simple invariants of given imperative computer programs. The kind of properties that can be found depend on the abstract domains used. Typical example properties are single integer variable ranges like 0<=x<1024, relations between several variables like 0<=i-j<2*n-1, and modulus information like y%4==0. Academic research prototypes also consider simple properties of pointer structures. More sophisticated invariants generally have to be provided manually. In particular, when verifying an imperative program using the Hoare calculus, a loop invariant has to be provided manually for each loop in the program, which is one of the reasons that this approach is generally impractical for most programs. In the context of the above MU puzzle example, there is currently no general automated tool that can detect that a derivation from MI to MU is impossible using only the rules 1–4. However, once the abstraction from the string to the number of its "I"s has been made by hand, leading, for example, to the following C program, an abstract interpretation tool will be able to detect that ICount%3 cannot be 0, and hence the "while"-loop will never terminate. 
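The C listing referred to above is not reproduced in this text; the following sketch is a plausible reconstruction of the kind of program meant, tracking only the counts of I's and U's under rules 1 to 4 (the use of rand to model the nondeterministic choice of rule is an assumption of this reconstruction).

```c
#include <stdlib.h>

/* Abstraction of the MU puzzle: only the number of I's and U's in the
 * string is tracked (the starting string MI has one I and no U's).
 * Each loop iteration applies one of the four rules to the counts.
 * Reaching MU would require ICount to become 0, a multiple of 3, so an
 * abstract interpreter can infer the loop invariant ICount % 3 != 0 and
 * conclude that the loop never exits. */
void MUPuzzle(void) {
    int ICount = 1, UCount = 0;
    while (ICount % 3 != 0) {
        switch (rand() % 4 + 1) {                          /* pick a rule */
        case 1:                    UCount += 1;     break; /* xI   -> xIU */
        case 2: ICount *= 2;       UCount *= 2;     break; /* Mx   -> Mxx */
        case 3: if (ICount >= 3) { ICount -= 3; UCount += 1; } break; /* xIII -> xU */
        case 4: if (UCount >= 2) { UCount -= 2; }   break; /* xUU  -> x   */
        }
    }
}

int main(void) {
    MUPuzzle();   /* never returns, as the invariant shows */
    return 0;
}
```

Over such a program, an abstract domain that tracks values modulo 3 can establish ICount % 3 ∈ {1, 2} at the loop head, which is exactly the hand-derived invariant of the MU puzzle.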
== See also == == Notes == == References == == External links == "Applet: Visual Invariants in Sorting Algorithms" Archived 2022-02-24 at the Wayback Machine by William Braynen in 1997
|
https://en.wikipedia.org/wiki/Invariant_(mathematics)
|
In mathematics, an expression is a written arrangement of symbols following the context-dependent, syntactic conventions of mathematical notation. Symbols can denote numbers, variables, operations, and functions. Other symbols include punctuation marks and brackets, used for grouping where there is not a well-defined order of operations. Expressions are commonly distinguished from formulas: expressions are a kind of mathematical object, whereas formulas are statements about mathematical objects. This is analogous to natural language, where a noun phrase refers to an object, and a whole sentence refers to a fact. For example, 8 x − 5 {\displaystyle 8x-5} is an expression, while the inequality 8 x − 5 ≥ 3 {\displaystyle 8x-5\geq 3} is a formula. To evaluate an expression means to find a numerical value equivalent to the expression. Expressions can be evaluated or simplified by replacing operations that appear in them with their result. For example, the expression 8 × 2 − 5 {\displaystyle 8\times 2-5} simplifies to 16 − 5 {\displaystyle 16-5} , and evaluates to 11. {\displaystyle 11.} An expression is often used to define a function, by taking the variables to be arguments, or inputs, of the function, and assigning the output to be the evaluation of the resulting expression. For example, x ↦ x 2 + 1 {\displaystyle x\mapsto x^{2}+1} and f ( x ) = x 2 + 1 {\displaystyle f(x)=x^{2}+1} define the function that associates to each number its square plus one. An expression with no variables would define a constant function. Usually, two expressions are considered equal or equivalent if they define the same function. Such an equality is called a "semantic equality", that is, both expressions "mean the same thing." == History == === Early written mathematics === The earliest written mathematics likely began with tally marks, where each mark represented one unit, carved into wood or stone. An example of early counting is the Ishango bone, found near the Nile and dating back over 20,000 years ago, which is thought to show a six-month lunar calendar. Ancient Egypt developed a symbolic system using hieroglyphics, assigning symbols for powers of ten and using addition and subtraction symbols resembling legs in motion. This system, recorded in texts like the Rhind Mathematical Papyrus (c. 2000–1800 BC), influenced other Mediterranean cultures. In Mesopotamia, a similar system evolved, with numbers written in a base-60 (sexagesimal) format on clay tablets written in Cuneiform, a technique originating with the Sumerians around 3000 BC. This base-60 system persists today in measuring time and angles. === Syncopated stage === The "syncopated" stage of mathematics introduced symbolic abbreviations for commonly used operations and quantities, marking a shift from purely geometric reasoning. Ancient Greek mathematics, largely geometric in nature, drew on Egyptian numerical systems (especially Attic numerals), with little interest in algebraic symbols, until the arrival of Diophantus of Alexandria, who pioneered a form of syncopated algebra in his Arithmetica, which introduced symbolic manipulation of expressions. His notation represented unknowns and powers symbolically, but without modern symbols for relations (such as equality or inequality) or exponents. An unknown number was called ζ {\displaystyle \zeta } . 
The square of ζ {\displaystyle \zeta } was Δ v {\displaystyle \Delta ^{v}} ; the cube was K v {\displaystyle K^{v}} ; the fourth power was Δ v Δ {\displaystyle \Delta ^{v}\Delta } ; the fifth power was Δ K v {\displaystyle \Delta K^{v}} ; and ⋔ {\displaystyle \pitchfork } meant to subtract everything on the right from the left. So for example, what would be written in modern notation as: x 3 − 2 x 2 + 10 x − 1 , {\displaystyle x^{3}-2x^{2}+10x-1,} Would be written in Diophantus's syncopated notation as: K υ α ¯ ζ ι ¯ ⋔ Δ υ β ¯ M α ¯ {\displaystyle \mathrm {K} ^{\upsilon }{\overline {\alpha }}\;\zeta {\overline {\iota }}\;\,\pitchfork \;\,\Delta ^{\upsilon }{\overline {\beta }}\;\mathrm {M} {\overline {\alpha }}\,\;} In the 7th century, Brahmagupta used different colours to represent the unknowns in algebraic equations in the Brāhmasphuṭasiddhānta. Greek and other ancient mathematical advances were often trapped in cycles of bursts of creativity, followed by long periods of stagnation, but this began to change as knowledge spread in the early modern period. === Symbolic stage and early arithmetic === The transition to fully symbolic algebra began with Ibn al-Banna' al-Marrakushi (1256–1321) and Abū al-Ḥasan ibn ʿAlī al-Qalaṣādī (1412–1482), who introduced symbols for operations using Arabic characters. The plus sign (+) appeared around 1351 with Nicole Oresme, likely derived from the Latin et (meaning "and"), while the minus sign (−) was first used in 1489 by Johannes Widmann. Luca Pacioli included these symbols in his works, though much was based on earlier contributions by Piero della Francesca. The radical symbol (√) for square root was introduced by Christoph Rudolff in the 1500s, and parentheses for precedence by Niccolò Tartaglia in 1556. François Viète’s New Algebra (1591) formalized modern symbolic manipulation. The multiplication sign (×) was first used by William Oughtred and the division sign (÷) by Johann Rahn. René Descartes further advanced algebraic symbolism in La Géométrie (1637), where he introduced the use of letters at the end of the alphabet (x, y, z) for variables, along with the Cartesian coordinate system, which bridged algebra and geometry. Isaac Newton and Gottfried Wilhelm Leibniz independently developed calculus in the late 17th century, with Leibniz's notation becoming the standard. == Variables and evaluation == In elementary algebra, a variable in an expression is a letter that represents a number whose value may change. To evaluate an expression with a variable means to find the value of the expression when the variable is assigned a given number. Expressions can be evaluated or simplified by replacing operations that appear in them with their result, or by combining like-terms. For example, take the expression 4 x 2 + 8 {\displaystyle 4x^{2}+8} ; it can be evaluated at x = 3 in the following steps: 4 ( 3 ) 2 + 8 {\textstyle 4(3)^{2}+8} , (replace x with 3) 4 ⋅ ( 3 ⋅ 3 ) + 8 {\displaystyle 4\cdot (3\cdot 3)+8} (use definition of exponent) 4 ⋅ 9 + 8 {\displaystyle 4\cdot 9+8} (simplify) 36 + 8 {\displaystyle 36+8} 44 {\displaystyle 44} A term is a constant or the product of a constant and one or more variables. Some examples include 7 , 5 x , 13 x 2 y , 4 b {\displaystyle 7,\;5x,\;13x^{2}y,\;4b} The constant of the product is called the coefficient. Terms that are either constants or have the same variables raised to the same powers are called like terms. If there are like terms in an expression, one can simplify the expression by combining the like terms. 
One adds the coefficients and keeps the same variable. 4 x + 7 x + 2 x = 15 x {\displaystyle 4x+7x+2x=15x} Any variable can be classified as being either a free variable or a bound variable. For a given combination of values for the free variables, an expression may be evaluated, although for some combinations of values of the free variables, the value of the expression may be undefined. Thus an expression represents an operation over constants and free variables and whose output is the resulting value of the expression. For a non-formalized language, that is, in most mathematical texts outside of mathematical logic, for an individual expression it is not always possible to identify which variables are free and bound. For example, in ∑ i < k a i k {\textstyle \sum _{i<k}a_{ik}} , depending on the context, the variable i {\textstyle i} can be free and k {\textstyle k} bound, or vice-versa, but they cannot both be free. Determining which value is assumed to be free depends on context and semantics. === Equivalence === An expression is often used to define a function, or denote compositions of functions, by taking the variables to be arguments, or inputs, of the function, and assigning the output to be the evaluation of the resulting expression. For example, x ↦ x 2 + 1 {\displaystyle x\mapsto x^{2}+1} and f ( x ) = x 2 + 1 {\displaystyle f(x)=x^{2}+1} define the function that associates to each number its square plus one. An expression with no variables would define a constant function. In this way, two expressions are said to be equivalent if, for each combination of values for the free variables, they have the same output, i.e., they represent the same function. The equivalence between two expressions is called an identity and is sometimes denoted with ≡ . {\displaystyle \equiv .} For example, in the expression ∑ n = 1 3 ( 2 n x ) , {\textstyle \sum _{n=1}^{3}(2nx),} the variable n is bound, and the variable x is free. This expression is equivalent to the simpler expression 12 x; that is ∑ n = 1 3 ( 2 n x ) ≡ 12 x . {\displaystyle \sum _{n=1}^{3}(2nx)\equiv 12x.} The value for x = 3 is 36, which can be denoted ∑ n = 1 3 ( 2 n x ) | x = 3 = 36. {\displaystyle \sum _{n=1}^{3}(2nx){\Big |}_{x=3}=36.} === Polynomial evaluation === A polynomial consists of variables and coefficients, that involve only the operations of addition, subtraction, multiplication and exponentiation to nonnegative integer powers, and has a finite number of terms. The problem of polynomial evaluation arises frequently in practice. In computational geometry, polynomials are used to compute function approximations using Taylor polynomials. In cryptography and hash tables, polynomials are used to compute k-independent hashing. In the former case, polynomials are evaluated using floating-point arithmetic, which is not exact. Thus different schemes for the evaluation will, in general, give slightly different answers. In the latter case, the polynomials are usually evaluated in a finite field, in which case the answers are always exact. For evaluating the univariate polynomial a n x n + a n − 1 x n − 1 + ⋯ + a 0 , {\textstyle a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{0},} the most naive method would use n {\displaystyle n} multiplications to compute a n x n {\displaystyle a_{n}x^{n}} , use n − 1 {\textstyle n-1} multiplications to compute a n − 1 x n − 1 {\displaystyle a_{n-1}x^{n-1}} and so on for a total of n ( n + 1 ) 2 {\textstyle {\frac {n(n+1)}{2}}} multiplications and n {\displaystyle n} additions. 
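A short C sketch (illustrative only; the function names are hypothetical) of the naive scheme just described, side by side with Horner's rule, to which the text turns next:

```c
#include <stdio.h>

/* Evaluate a[n]*x^n + ... + a[1]*x + a[0] in two ways. */

/* Naive scheme: each term a[i]*x^i is built by i successive
 * multiplications, for n(n+1)/2 multiplications and n additions in total. */
double eval_naive(const double a[], int n, double x) {
    double sum = a[0];
    for (int i = 1; i <= n; i++) {
        double term = a[i];
        for (int j = 0; j < i; j++) term *= x;
        sum += term;
    }
    return sum;
}

/* Horner's rule: n multiplications and n additions. */
double eval_horner(const double a[], int n, double x) {
    double r = a[n];
    for (int i = n - 1; i >= 0; i--) r = r * x + a[i];
    return r;
}

int main(void) {
    double a[] = {8.0, 0.0, 4.0};   /* 4x^2 + 8, evaluated at x = 3 as in the earlier example */
    printf("%g %g\n", eval_naive(a, 2, 3.0), eval_horner(a, 2, 3.0));   /* prints 44 44 */
    return 0;
}
```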
Using better methods, such as Horner's rule, this can be reduced to n {\displaystyle n} multiplications and n {\displaystyle n} additions. If some preprocessing is allowed, even more savings are possible. === Computation === A computation is any type of arithmetic or non-arithmetic calculation that is "well-defined". The notion that mathematical statements should be 'well-defined' had been argued by mathematicians since at least the 1600s, but agreement on a suitable definition proved elusive. A candidate definition was proposed independently by several mathematicians in the 1930s. The best-known variant was formalised by the mathematician Alan Turing, who defined a well-defined statement or calculation as any statement that could be expressed in terms of the initialisation parameters of a Turing machine. Turing's definition apportioned "well-definedness" to a very large class of mathematical statements, including all well-formed algebraic statements, and all statements written in modern computer programming languages. Despite the widespread uptake of this definition, there are some mathematical concepts that have no well-defined characterisation under this definition. This includes the halting problem and the busy beaver game. It remains an open question as to whether there exists a more powerful definition of 'well-defined' that is able to capture both computable and 'non-computable' statements. All statements characterised in modern programming languages are well-defined, including C++, Python, and Java. Common examples of computation are basic arithmetic and the execution of computer algorithms. A calculation is a deliberate mathematical process that transforms one or more inputs into one or more outputs or results. For example, multiplying 7 by 6 is a simple algorithmic calculation. Extracting the square root or the cube root of a number using mathematical models is a more complex algorithmic calculation. ==== Rewriting ==== Expressions can be computed by means of an evaluation strategy. To illustrate, executing a function call f(a,b) may first evaluate the arguments a and b, store the results in references or memory locations ref_a and ref_b, then evaluate the function's body with those references passed in. This gives the function the ability to look up the original argument values passed in through dereferencing the parameters (some languages use specific operators to perform this), to modify them via assignment as if they were local variables, and to return values via the references. This is the call-by-reference evaluation strategy. Evaluation strategy is part of the semantics of the programming language definition. Some languages, such as PureScript, have variants with different evaluation strategies. Some declarative languages, such as Datalog, support multiple evaluation strategies. Some languages define a calling convention. In rewriting, a reduction strategy or rewriting strategy is a relation specifying a rewrite for each object or term, compatible with a given reduction relation. A rewriting strategy specifies, out of all the reducible subterms (redexes), which one should be reduced (contracted) within a term. One of the most common systems involves lambda calculus. == Well-defined expressions == The language of mathematics exhibits a kind of grammar (called formal grammar) about how expressions may be written. There are two considerations for well-definedness of mathematical expressions, syntax and semantics. 
Syntax is concerned with the rules used for constructing or transforming the symbols of an expression, without regard to any interpretation or meaning given to them. Expressions that are syntactically correct are called well-formed. Semantics is concerned with the meaning of these well-formed expressions. Expressions that are semantically correct are called well-defined. === Well-formed === The syntax of mathematical expressions can be described somewhat informally as follows: the allowed operators must have the correct number of inputs in the correct places (usually written with infix notation), the sub-expressions that make up these inputs must be well-formed themselves, have a clear order of operations, etc. Strings of symbols that conform to the rules of syntax are called well-formed, and those that are not well-formed are called ill-formed, and do not constitute mathematical expressions. For example, in arithmetic, the expression 1 + 2 × 3 is well-formed, but × 4 ) x + , / y {\displaystyle \times 4)x+,/y} is not. However, being well-formed is not enough to be considered well-defined. For example, in arithmetic, the expression 1 0 {\textstyle {\frac {1}{0}}} is well-formed, but it is not well-defined. (See Division by zero). Such expressions are called undefined. === Well-defined === Semantics is the study of meaning. Formal semantics is about attaching meaning to expressions. An expression that defines a unique value or meaning is said to be well-defined. Otherwise, the expression is said to be ill defined or ambiguous. In general, the meaning of expressions is not limited to designating values; for instance, an expression might designate a condition, or an equation that is to be solved, or it can be viewed as an object in its own right that can be manipulated according to certain rules. Certain expressions that designate a value simultaneously express a condition that is assumed to hold, for instance those involving the operator ⊕ {\displaystyle \oplus } to designate an internal direct sum. In algebra, an expression may be used to designate a value, which might depend on values assigned to variables occurring in the expression. The determination of this value depends on the semantics attached to the symbols of the expression. The choice of semantics depends on the context of the expression. The same syntactic expression 1 + 2 × 3 can have different values (mathematically 7, but also 9), depending on the order of operations implied by the context (See also Operations § Calculators). For real numbers, the product a × b × c {\displaystyle a\times b\times c} is unambiguous because ( a × b ) × c = a × ( b × c ) {\displaystyle (a\times b)\times c=a\times (b\times c)} ; hence the notation is said to be well defined. This property, also known as associativity of multiplication, guarantees the result does not depend on the sequence of multiplications; therefore, a specification of the sequence can be omitted. The subtraction operation is non-associative; despite that, there is a convention that a − b − c {\displaystyle a-b-c} is shorthand for ( a − b ) − c {\displaystyle (a-b)-c} , thus it is considered "well-defined". On the other hand, division is non-associative, and in the case of a / b / c {\displaystyle a/b/c} , parenthesization conventions are not well established; therefore, this expression is often considered ill-defined. Unlike with functions, notational ambiguities can be overcome by means of additional definitions (e.g., rules of precedence, associativity of the operator). 
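These conventions can be checked directly in a programming language; the small C program below (an illustration added here, not part of the original text) prints the two readings of 1 + 2 × 3 discussed above, together with a left-associated subtraction.

```c
#include <stdio.h>

/* Under the usual precedence rules 1 + 2 * 3 evaluates to 7, while a strict
 * left-to-right reading corresponds to (1 + 2) * 3 = 9; subtraction
 * associates to the left, so 10 - 4 - 3 means (10 - 4) - 3. */
int main(void) {
    printf("%d\n", 1 + 2 * 3);     /* 7 */
    printf("%d\n", (1 + 2) * 3);   /* 9 */
    printf("%d\n", 10 - 4 - 3);    /* 3 */
    return 0;
}
```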
For example, in the programming language C, the operator - for subtraction is left-to-right-associative, which means that a-b-c is defined as (a-b)-c, and the operator = for assignment is right-to-left-associative, which means that a=b=c is defined as a=(b=c). In the programming language APL there is only one rule: from right to left – but parentheses first. == Formal definition == The term 'expression' is part of the language of mathematics, that is to say, it is not defined within mathematics, but taken as a primitive part of the language. To attempt to define the term would not be doing mathematics, but rather, one would be engaging in a kind of metamathematics (the metalanguage of mathematics), usually mathematical logic. Within mathematical logic, mathematics is usually described as a kind of formal language, and a well-formed expression can be defined recursively as follows: The alphabet consists of: A set of individual constants: Symbols representing fixed objects in the domain of discourse, such as numerals (1, 2.5, 1/7, ...), sets ( ∅ , { 1 , 2 , 3 } {\displaystyle \varnothing ,\{1,2,3\}} , ...), truth values (T or F), etc. A set of individual variables: A countably infinite number of symbols representing variables, each standing for an unspecified object in the domain. (Usually letters like x or y) A set of operations: Function symbols representing operations that can be performed on elements over the domain, like addition (+), multiplication (×), or set operations like union (∪), or intersection (∩). (Functions can be understood as unary operations) Brackets ( ) With this alphabet, the recursive rules for forming a well-formed expression (WFE) are as follows: Any constant or variable as defined above is an atomic expression, the simplest kind of well-formed expression (WFE). For instance, the constant 2 {\displaystyle 2} and the variable x {\displaystyle x} are syntactically correct expressions. Let F {\displaystyle F} be a metavariable for any n-ary operation over the domain, and let ϕ 1 , ϕ 2 , . . . ϕ n {\displaystyle \phi _{1},\phi _{2},...\phi _{n}} be metavariables for any WFE's. Then F ( ϕ 1 , ϕ 2 , . . . ϕ n ) {\displaystyle F(\phi _{1},\phi _{2},...\phi _{n})} is also well-formed. For the most often used operations, more convenient notations (like infix notation) have been developed over the centuries. For instance, if the domain of discourse is the real numbers, F {\displaystyle F} can denote the binary operation +, and then ϕ 1 + ϕ 2 {\displaystyle \phi _{1}+\phi _{2}} is well-formed. Or F {\displaystyle F} can be the unary operation √ {\displaystyle \surd } so ϕ 1 {\displaystyle {\sqrt {\phi _{1}}}} is well-formed. Brackets are initially around each non-atomic expression, but they can be deleted in cases where there is a defined order of operations, or where order doesn't matter (i.e. where operations are associative). A well-formed expression can be thought of as a syntax tree. The leaf nodes are always atomic expressions. Operations + {\displaystyle +} and ∪ {\displaystyle \cup } have exactly two child nodes, while operations x {\textstyle {\sqrt {x}}} , ln ( x ) {\textstyle {\text{ln}}(x)} and d d x {\textstyle {\frac {d}{dx}}} have exactly one. There are countably infinitely many WFE's; however, each WFE has a finite number of nodes. === Lambda calculus === Formal languages allow formalizing the concept of well-formed expressions. 
In the 1930s, a new type of expression, the lambda expression, was introduced by Alonzo Church and Stephen Kleene for formalizing functions and their evaluation. The lambda operators (lambda abstraction and function application) form the basis for lambda calculus, a formal system used in mathematical logic and programming language theory. The equivalence of two lambda expressions is undecidable (but see unification (computer science)). This is also the case for the expressions representing real numbers, which are built from the integers by using the arithmetical operations, the logarithm and the exponential (Richardson's theorem). == Types of expressions == === Algebraic expression === An algebraic expression is an expression built up from algebraic constants, variables, and the algebraic operations (addition, subtraction, multiplication, division and exponentiation by a rational number). For example, 3x² − 2xy + c is an algebraic expression. Since taking the square root is the same as raising to the power 1/2, the following is also an algebraic expression: 1 − x 2 1 + x 2 {\displaystyle {\sqrt {\frac {1-x^{2}}{1+x^{2}}}}} See also: Algebraic equation and Algebraic closure === Polynomial expression === A polynomial expression is an expression built with scalars (numbers or elements of some field), indeterminates, and the operators of addition, multiplication, and exponentiation to nonnegative integer powers; for example, 3 ( x + 1 ) 2 − x y . {\displaystyle 3(x+1)^{2}-xy.} Using associativity, commutativity and distributivity, every polynomial expression is equivalent to a polynomial, that is, an expression that is a linear combination of products of nonnegative integer powers of the indeterminates. For example, the above polynomial expression is equivalent to (denotes the same polynomial as) 3 x 2 − x y + 6 x + 3. {\displaystyle 3x^{2}-xy+6x+3.} Many authors do not distinguish between polynomials and polynomial expressions. In this case, the expression of a polynomial expression as a linear combination is called the canonical form, normal form, or expanded form of the polynomial. === Computational expression === In computer science, an expression is a syntactic entity in a programming language that may be evaluated to determine its value or fail to terminate, in which case the expression is undefined. It is a combination of one or more constants, variables, functions, and operators that the programming language interprets (according to its particular rules of precedence and of association) and computes to produce ("to return", in a stateful environment) another value. This process, for mathematical expressions, is called evaluation. In simple settings, the resulting value is usually one of various primitive types, such as string, Boolean, or numerical (such as integer, floating-point, or complex). In computer algebra, formulas are viewed as expressions that can be evaluated as a Boolean, depending on the values that are given to the variables occurring in the expressions. For example, 8 x − 5 ≥ 3 {\displaystyle 8x-5\geq 3} takes the value false if x is given a value less than 1, and the value true otherwise. Expressions are often contrasted with statements—syntactic entities that have no value (an instruction). Except for numbers and variables, every mathematical expression may be viewed as the symbol of an operator followed by a sequence of operands. In computer algebra software, the expressions are usually represented in this way.
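As a concrete illustration of the operator-plus-operands view just described, the sketch below represents the formula 8x − 5 ≥ 3 as nested (operator, operands) nodes and evaluates it to a Boolean for given values of x. The tuple encoding is an assumption made for this example and is not a description of any particular computer algebra system.

```python
# The formula 8x - 5 >= 3 written as an operator followed by its operands
# at every level, then evaluated for given values of x.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, ">=": operator.ge}

def evaluate(expr, env):
    """A leaf is a number or a variable name; an internal node is a tuple
    (operator_symbol, operand_1, ..., operand_n)."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]                      # look up the variable's value
    op, *operands = expr
    return OPS[op](*(evaluate(o, env) for o in operands))

formula = (">=", ("-", ("*", 8, "x"), 5), 3)  # 8*x - 5 >= 3
print(evaluate(formula, {"x": 0}))            # False: 8*0 - 5 = -5
print(evaluate(formula, {"x": 2}))            # True:  8*2 - 5 = 11
```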
This representation is very flexible, and many things that seem not to be mathematical expressions at first glance may be represented and manipulated as such. For example, an equation is an expression with "=" as its operator, and a matrix may be represented as an expression with "matrix" as the operator and its rows as operands. See: Computer algebra expression === Logical expression === In mathematical logic, a "logical expression" can refer to either terms or formulas. A term denotes a mathematical object, while a formula denotes a mathematical fact. In particular, terms appear as components of a formula. A first-order term is recursively constructed from constant symbols, variables, and function symbols. An expression formed by applying a predicate symbol to an appropriate number of terms is called an atomic formula, which evaluates to true or false in bivalent logics, given an interpretation. For example, ( x + 1 ) ∗ ( x + 1 ) {\displaystyle (x+1)*(x+1)} is a term built from the constant 1, the variable x, and the binary function symbols + {\displaystyle +} and ∗ {\displaystyle *} ; it is part of the atomic formula ( x + 1 ) ∗ ( x + 1 ) ≥ 0 {\displaystyle (x+1)*(x+1)\geq 0} , which evaluates to true for every real value of x. === Formal expression === A formal expression is a kind of string of symbols, created by the same production rules as standard expressions; however, it is used without regard to the meaning of the expression. In this way, two formal expressions are considered equal only if they are syntactically equal, that is, if they are the exact same expression. For instance, the formal expressions "2" and "1+1" are not equal. == See also == == Notes == == References == == Works Cited == Descartes, René (2006) [1637]. A discourse on the method of correctly conducting one's reason and seeking truth in the sciences. Translated by Ian Maclean. Oxford University Press. ISBN 0-19-282514-3.
|
https://en.wikipedia.org/wiki/Expression_(mathematics)
|
In mathematics, a constraint is a condition of an optimization problem that the solution must satisfy. There are several types of constraints—primarily equality constraints, inequality constraints, and integer constraints. The set of candidate solutions that satisfy all constraints is called the feasible set. == Example == The following is a simple optimization problem: min f ( x ) = x 1 2 + x 2 4 {\displaystyle \min f(\mathbf {x} )=x_{1}^{2}+x_{2}^{4}} subject to x 1 ≥ 1 {\displaystyle x_{1}\geq 1} and x 2 = 1 , {\displaystyle x_{2}=1,} where x {\displaystyle \mathbf {x} } denotes the vector (x1, x2). In this example, the first line defines the function to be minimized (called the objective function, loss function, or cost function). The second and third lines define two constraints, the first of which is an inequality constraint and the second of which is an equality constraint. These two constraints are hard constraints, meaning that it is required that they be satisfied; they define the feasible set of candidate solutions. Without the constraints, the solution would be (0,0), where f ( x ) {\displaystyle f(\mathbf {x} )} has the lowest value. But this solution does not satisfy the constraints. The solution of the constrained optimization problem stated above is x = ( 1 , 1 ) {\displaystyle \mathbf {x} =(1,1)} , which is the point with the smallest value of f ( x ) {\displaystyle f(\mathbf {x} )} that satisfies the two constraints. == Terminology == If an inequality constraint holds with equality at the optimal point, the constraint is said to be binding, as the point cannot be varied in the direction of the constraint even though doing so would improve the value of the objective function. If an inequality constraint holds as a strict inequality at the optimal point (that is, does not hold with equality), the constraint is said to be non-binding, as the point could be varied in the direction of the constraint, although it would not be optimal to do so. Under certain conditions, as for example in convex optimization, if a constraint is non-binding, the optimization problem would have the same solution even in the absence of that constraint. If a constraint is not satisfied at a given point, the point is said to be infeasible. == Hard and soft constraints == If the problem mandates that the constraints be satisfied, as in the above discussion, the constraints are sometimes referred to as hard constraints. However, in some problems, called flexible constraint satisfaction problems, it is preferred but not required that certain constraints be satisfied; such non-mandatory constraints are known as soft constraints. Soft constraints arise in, for example, preference-based planning. In a MAX-CSP problem, a number of constraints are allowed to be violated, and the quality of a solution is measured by the number of satisfied constraints. == Global constraints == Global constraints are constraints representing a specific relation on a number of variables, taken altogether. Some of them, such as the alldifferent constraint, can be rewritten as a conjunction of atomic constraints in a simpler language: the alldifferent constraint holds on n variables x 1 . . . x n {\displaystyle x_{1}...x_{n}} , and is satisfied if the variables take values which are pairwise different. It is semantically equivalent to the conjunction of inequalities x 1 ≠ x 2 , x 1 ≠ x 3 . . . , x 2 ≠ x 3 , x 2 ≠ x 4 . . . 
x n − 1 ≠ x n {\displaystyle x_{1}\neq x_{2},x_{1}\neq x_{3}...,x_{2}\neq x_{3},x_{2}\neq x_{4}...x_{n-1}\neq x_{n}} . Other global constraints extend the expressivity of the constraint framework. In this case, they usually capture a typical structure of combinatorial problems. For instance, the regular constraint expresses that a sequence of variables is accepted by a deterministic finite automaton. Global constraints are used to simplify the modeling of constraint satisfaction problems, to extend the expressivity of constraint languages, and also to improve the constraint resolution: indeed, by considering the variables altogether, infeasible situations can be seen earlier in the solving process. Many of the global constraints are referenced in an online catalog. == See also == == References == == Further reading == Beveridge, Gordon S. G.; Schechter, Robert S. (1970). "Essential Features in Optimization". Optimization: Theory and Practice. New York: McGraw-Hill. pp. 5–8. ISBN 0-07-005128-3. == External links == Nonlinear programming FAQ Archived 2019-10-30 at the Wayback Machine Mathematical Programming Glossary Archived 2010-03-28 at the Wayback Machine
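As a worked complement to the example problem given at the beginning of this entry, the following Python sketch encodes the objective x1² + x2⁴ together with the two hard constraints and recovers the constrained minimizer (1, 1). It is not part of the article; it assumes NumPy and SciPy are installed and uses SciPy's SLSQP solver purely for illustration.

```python
# Minimize x1^2 + x2^4 subject to x1 >= 1 and x2 = 1.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0] ** 2 + x[1] ** 4

constraints = [
    {"type": "ineq", "fun": lambda x: x[0] - 1.0},  # inequality constraint: x1 - 1 >= 0
    {"type": "eq",   "fun": lambda x: x[1] - 1.0},  # equality constraint:   x2 - 1 = 0
]

# Start at the unconstrained minimizer (0, 0), which is infeasible here.
result = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP", constraints=constraints)
print(result.x)    # approximately [1. 1.], the constrained solution
print(result.fun)  # approximately 2.0, since f(1, 1) = 1 + 1
```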
|
https://en.wikipedia.org/wiki/Constraint_(mathematics)
|
Reverse mathematics is a program in mathematical logic that seeks to determine which axioms are required to prove theorems of mathematics. Its defining method can briefly be described as "going backwards from the theorems to the axioms", in contrast to the ordinary mathematical practice of deriving theorems from axioms. It can be conceptualized as sculpting out necessary conditions from sufficient ones. The reverse mathematics program was foreshadowed by results in set theory such as the classical theorem that the axiom of choice and Zorn's lemma are equivalent over ZF set theory. The goal of reverse mathematics, however, is to study possible axioms of ordinary theorems of mathematics rather than possible axioms for set theory. Reverse mathematics is usually carried out using subsystems of second-order arithmetic, where many of its definitions and methods are inspired by previous work in constructive analysis and proof theory. The use of second-order arithmetic also allows many techniques from recursion theory to be employed; many results in reverse mathematics have corresponding results in computable analysis. In higher-order reverse mathematics, the focus is on subsystems of higher-order arithmetic, and the associated richer language. The program was founded by Harvey Friedman and brought forward by Steve Simpson. == General principles == In reverse mathematics, one starts with a framework language and a base theory—a core axiom system—that is too weak to prove most of the theorems one might be interested in, but still powerful enough to develop the definitions necessary to state these theorems. For example, to study the theorem “Every bounded sequence of real numbers has a supremum” it is necessary to use a base system that can speak of real numbers and sequences of real numbers. For each theorem that can be stated in the base system but is not provable in the base system, the goal is to determine the particular axiom system (stronger than the base system) that is necessary to prove that theorem. To show that a system S is required to prove a theorem T, two proofs are required. The first proof shows T is provable from S; this is an ordinary mathematical proof along with a justification that it can be carried out in the system S. The second proof, known as a reversal, shows that T itself implies S; this proof is carried out in the base system. The reversal establishes that no axiom system S′ that extends the base system can be weaker than S while still proving T. === Use of second-order arithmetic === Most reverse mathematics research focuses on subsystems of second-order arithmetic. The body of research in reverse mathematics has established that weak subsystems of second-order arithmetic suffice to formalize almost all undergraduate-level mathematics. In second-order arithmetic, all objects can be represented as either natural numbers or sets of natural numbers. For example, in order to prove theorems about real numbers, the real numbers can be represented as Cauchy sequences of rational numbers, each of which sequence can be represented as a set of natural numbers. The axiom systems most often considered in reverse mathematics are defined using axiom schemes called comprehension schemes. Such a scheme states that any set of natural numbers definable by a formula of a given complexity exists. In this context, the complexity of formulas is measured using the arithmetical hierarchy and analytical hierarchy. 
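Concretely, a comprehension scheme of the kind just described is commonly written as the following axiom schema, with one instance for every formula φ of the allowed complexity class (possibly containing number and set parameters). This is a standard textbook formulation given here for illustration, not a quotation from the article.

```latex
% One instance of the comprehension schema for each admissible formula \varphi(n),
% where the set variable X does not occur freely in \varphi:
\[
  \exists X \,\forall n \,\bigl( n \in X \;\leftrightarrow\; \varphi(n) \bigr)
\]
```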
The reason that reverse mathematics is not carried out using set theory as a base system is that the language of set theory is too expressive. Extremely complex sets of natural numbers can be defined by simple formulas in the language of set theory (which can quantify over arbitrary sets). In the context of second-order arithmetic, results such as Post's theorem establish a close link between the complexity of a formula and the (non)computability of the set it defines. Another effect of using second-order arithmetic is the need to restrict general mathematical theorems to forms that can be expressed within arithmetic. For example, second-order arithmetic can express the principle "Every countable vector space has a basis" but it cannot express the principle "Every vector space has a basis". In practical terms, this means that theorems of algebra and combinatorics are restricted to countable structures, while theorems of analysis and topology are restricted to separable spaces. Many principles that imply the axiom of choice in their general form (such as "Every vector space has a basis") become provable in weak subsystems of second-order arithmetic when they are restricted. For example, "every field has an algebraic closure" is not provable in ZF set theory, but the restricted form "every countable field has an algebraic closure" is provable in RCA0, the weakest system typically employed in reverse mathematics. === Use of higher-order arithmetic === A recent strand of higher-order reverse mathematics research, initiated by Ulrich Kohlenbach in 2005, focuses on subsystems of higher-order arithmetic. Due to the richer language of higher-order arithmetic, the use of representations (aka 'codes') common in second-order arithmetic, is greatly reduced. For example, a continuous function on the Cantor space is just a function that maps binary sequences to binary sequences, and that also satisfies the usual 'epsilon-delta'-definition of continuity. Higher-order reverse mathematics includes higher-order versions of (second-order) comprehension schemes. Such a higher-order axiom states the existence of a functional that decides the truth or falsity of formulas of a given complexity. In this context, the complexity of formulas is also measured using the arithmetical hierarchy and analytical hierarchy. The higher-order counterparts of the major subsystems of second-order arithmetic generally prove the same second-order sentences (or a large subset) as the original second-order systems. For instance, the base theory of higher-order reverse mathematics, called RCAω0, proves the same sentences as RCA0, up to language. As noted in the previous paragraph, second-order comprehension axioms easily generalize to the higher-order framework. However, theorems expressing the compactness of basic spaces behave quite differently in second- and higher-order arithmetic: on one hand, when restricted to countable covers/the language of second-order arithmetic, the compactness of the unit interval is provable in WKL0 from the next section. On the other hand, given uncountable covers/the language of higher-order arithmetic, the compactness of the unit interval is only provable from (full) second-order arithmetic. Other covering lemmas (e.g. due to Lindelöf, Vitali, Besicovitch, etc.) exhibit the same behavior, and many basic properties of the gauge integral are equivalent to the compactness of the underlying space. 
== The big five subsystems of second-order arithmetic == Second-order arithmetic is a formal theory of the natural numbers and sets of natural numbers. Many mathematical objects, such as countable rings, groups, and fields, as well as points in effective Polish spaces, can be represented as sets of natural numbers, and modulo this representation can be studied in second-order arithmetic. Reverse mathematics makes use of several subsystems of second-order arithmetic. A typical reverse mathematics theorem shows that a particular mathematical theorem T is equivalent to a particular subsystem S of second-order arithmetic over a weaker subsystem B. This weaker system B is known as the base system for the result; in order for the reverse mathematics result to have meaning, this system must not itself be able to prove the mathematical theorem T. Steve Simpson describes five particular subsystems of second-order arithmetic, which he calls the Big Five, that occur frequently in reverse mathematics. In order of increasing strength, these systems are named by the initialisms RCA0, WKL0, ACA0, ATR0, and Π11-CA0. Each of the "big five" systems has a counterpart system in higher-order arithmetic; these counterparts generally prove the same second-order sentences (or a large subset) as the original second-order systems. The subscript 0 in these names means that the induction scheme has been restricted from the full second-order induction scheme. For example, ACA0 includes the induction axiom (0 ∈ X ∧ {\displaystyle \wedge } ∀n(n ∈ X → n + 1 ∈ X)) → ∀n n ∈ X. This together with the full comprehension axiom of second-order arithmetic implies the full second-order induction scheme given by the universal closure of (φ(0) ∧ {\displaystyle \wedge } ∀n(φ(n) → φ(n+1))) → ∀n φ(n) for any second-order formula φ. However, ACA0 does not have the full comprehension axiom, and the subscript 0 is a reminder that it does not have the full second-order induction scheme either. This restriction is important: systems with restricted induction have significantly lower proof-theoretic ordinals than systems with the full second-order induction scheme. === The base system RCA0 === RCA0 is the fragment of second-order arithmetic whose axioms are the axioms of Robinson arithmetic, induction for Σ01 formulas, and comprehension for Δ01 formulas. The subsystem RCA0 is the one most commonly used as a base system for reverse mathematics. The initials "RCA" stand for "recursive comprehension axiom", where "recursive" means "computable", as in recursive function. This name is used because RCA0 corresponds informally to "computable mathematics". In particular, any set of natural numbers that can be proven to exist in RCA0 is computable, and thus any theorem that implies that noncomputable sets exist is not provable in RCA0. To this extent, RCA0 is a constructive system, although it does not meet the requirements of the program of constructivism because it is a theory in classical logic including the law of excluded middle. Despite its seeming weakness (of not proving any non-computable sets exist), RCA0 is sufficient to prove a number of classical theorems which, therefore, require only minimal logical strength. These theorems are, in a sense, below the reach of the reverse mathematics enterprise because they are already provable in the base system.
The classical theorems provable in RCA0 include: Basic properties of the natural numbers, integers, and rational numbers (for example, that the latter form an ordered field). Basic properties of the real numbers (the real numbers are an Archimedean ordered field; any nested sequence of closed intervals whose lengths tend to zero has a single point in its intersection; the real numbers are not countable).Section II.4 The Baire category theorem for a complete separable metric space (the separability condition is necessary to even state the theorem in the language of second-order arithmetic).theorem II.5.8 The intermediate value theorem on continuous real functions.theorem II.6.6 The Banach–Steinhaus theorem for a sequence of continuous linear operators on separable Banach spaces.theorem II.10.8 A weak version of Gödel's completeness theorem (for a set of sentences, in a countable language, that is already closed under consequence). The existence of an algebraic closure for a countable field (but not its uniqueness).II.9.4--II.9.8 The existence and uniqueness of the real closure of a countable ordered field.II.9.5, II.9.7 The first-order part of RCA0 (the theorems of the system that do not involve any set variables) is the set of theorems of first-order Peano arithmetic with induction limited to Σ01 formulas. It is provably consistent, as is RCA0, in full first-order Peano arithmetic. === Weak Kőnig's lemma WKL0 === The subsystem WKL0 consists of RCA0 plus a weak form of Kőnig's lemma, namely the statement that every infinite subtree of the full binary tree (the tree of all finite sequences of 0's and 1's) has an infinite path. This proposition, which is known as weak Kőnig's lemma, is easy to state in the language of second-order arithmetic. WKL0 can also be defined as the principle of Σ01 separation (given two Σ01 formulas of a free variable n that are exclusive, there is a set containing all n satisfying the one and no n satisfying the other). When this axiom is added to RCA0, the resulting subsystem is called WKL0. A similar distinction between particular axioms on the one hand, and subsystems including the basic axioms and induction on the other hand, is made for the stronger subsystems described below. In a sense, weak Kőnig's lemma is a form of the axiom of choice (although, as stated, it can be proven in classical Zermelo–Fraenkel set theory without the axiom of choice). It is not constructively valid in some senses of the word "constructive". To show that WKL0 is actually stronger than (not provable in) RCA0, it is sufficient to exhibit a theorem of WKL0 that implies that noncomputable sets exist. This is not difficult; WKL0 implies the existence of separating sets for effectively inseparable recursively enumerable sets. It turns out that RCA0 and WKL0 have the same first-order part, meaning that they prove the same first-order sentences. WKL0 can prove a good number of classical mathematical results that do not follow from RCA0, however. These results are not expressible as first-order statements but can be expressed as second-order statements. The following results are equivalent to weak Kőnig's lemma and thus to WKL0 over RCA0: The Heine–Borel theorem for the closed unit real interval, in the following sense: every covering by a sequence of open intervals has a finite subcovering. The Heine–Borel theorem for complete totally bounded separable metric spaces (where covering is by a sequence of open balls). 
A continuous real function on the closed unit interval (or on any compact separable metric space, as above) is bounded (or: bounded and reaches its bounds). A continuous real function on the closed unit interval can be uniformly approximated by polynomials (with rational coefficients). A continuous real function on the closed unit interval is uniformly continuous. A continuous real function on the closed unit interval is Riemann integrable. The Brouwer fixed point theorem (for continuous functions on an n {\displaystyle n} -simplex).Theorem IV.7.7 The separable Hahn–Banach theorem in the form: a bounded linear form on a subspace of a separable Banach space extends to a bounded linear form on the whole space. The Jordan curve theorem. Gödel's completeness theorem (for a countable language). Determinacy for open (or even clopen) games on {0,1} of length ω. Every countable commutative ring has a prime ideal. Every countable formally real field is orderable. Uniqueness of algebraic closure (for a countable field). The De Bruijn–Erdős theorem for countable graphs: every countable graph whose finite subgraphs are k {\displaystyle k} -colorable is k {\displaystyle k} -colorable. === Arithmetical comprehension ACA0 === ACA0 is RCA0 plus the comprehension scheme for arithmetical formulas (which is sometimes called the "arithmetical comprehension axiom"). That is, ACA0 allows us to form the set of natural numbers satisfying an arbitrary arithmetical formula (one with no bound set variables, although possibly containing set parameters).pp. 6--7 Actually, it suffices to add to RCA0 the comprehension scheme for Σ1 formulas (also including second-order free variables) in order to obtain full arithmetical comprehension.Lemma III.1.3 The first-order part of ACA0 is exactly first-order Peano arithmetic; ACA0 is a conservative extension of first-order Peano arithmetic.Corollary IX.1.6 The two systems are provably (in a weak system) equiconsistent. ACA0 can be thought of as a framework of predicative mathematics, although there are predicatively provable theorems that are not provable in ACA0. Most of the fundamental results about the natural numbers, and many other mathematical theorems, can be proven in this system. One way of seeing that ACA0 is stronger than WKL0 is to exhibit a model of WKL0 that does not contain all arithmetical sets. In fact, it is possible to build a model of WKL0 consisting entirely of low sets using the low basis theorem, since low sets relative to low sets are low. The following assertions are equivalent to ACA0 over RCA0: The sequential completeness of the real numbers (every bounded increasing sequence of real numbers has a limit).theorem III.2.2 The Bolzano–Weierstrass theorem.theorem III.2.2 Ascoli's theorem: every bounded equicontinuous sequence of real functions on the unit interval has a uniformly convergent subsequence. 
Every countable field embeds isomorphically into its algebraic closure.theorem III.3.2 Every countable commutative ring has a maximal ideal.theorem III.5.5 Every countable vector space over the rationals (or over any countable field) has a basis.theorem III.4.3 For any countable fields K ⊆ L {\displaystyle K\subseteq L} , there is a transcendence basis for L {\displaystyle L} over K {\displaystyle K} .theorem III.4.6 Kőnig's lemma (for arbitrary finitely branching trees, as opposed to the weak version described above).theorem III.7.2 For any countable group G {\displaystyle G} and any subgroups H , I {\displaystyle H,I} of G {\displaystyle G} , the subgroup generated by H ∪ I {\displaystyle H\cup I} exists.p.40 Any partial function can be extended to a total function. Various theorems in combinatorics, such as certain forms of Ramsey's theorem.Theorem III.7.2 === Arithmetical transfinite recursion ATR0 === The system ATR0 adds to ACA0 an axiom that states, informally, that any arithmetical functional (meaning any arithmetical formula with a free number variable n and a free set variable X, seen as the operator taking X to the set of n satisfying the formula) can be iterated transfinitely along any countable well ordering starting with any set. ATR0 is equivalent over ACA0 to the principle of Σ11 separation. ATR0 is impredicative, and has the proof-theoretic ordinal Γ 0 {\displaystyle \Gamma _{0}} , the supremum of that of predicative systems. ATR0 proves the consistency of ACA0, and thus by Gödel's theorem it is strictly stronger. The following assertions are equivalent to ATR0 over RCA0: Any two countable well orderings are comparable. That is, they are isomorphic or one is isomorphic to a proper initial segment of the other.theorem V.6.8 Ulm's theorem for countable reduced Abelian groups. The perfect set theorem, which states that every uncountable closed subset of a complete separable metric space contains a perfect closed set. Lusin's separation theorem (essentially Σ11 separation).Theorem V.5.1 Determinacy for open sets in the Baire space. === Π11 comprehension Π11-CA0 === Π11-CA0 is stronger than arithmetical transfinite recursion and is fully impredicative. It consists of RCA0 plus the comprehension scheme for Π11 formulas. In a sense, Π11-CA0 comprehension is to arithmetical transfinite recursion (Σ11 separation) as ACA0 is to weak Kőnig's lemma (Σ01 separation). It is equivalent to several statements of descriptive set theory whose proofs make use of strongly impredicative arguments; this equivalence shows that these impredicative arguments cannot be removed. The following theorems are equivalent to Π11-CA0 over RCA0: The Cantor–Bendixson theorem (every closed set of reals is the union of a perfect set and a countable set).Exercise VI.1.7 Silver's dichotomy (every coanalytic equivalence relation has either countably many equivalence classes or a perfect set of incomparables)Theorem VI.3.6 Every countable abelian group is the direct sum of a divisible group and a reduced group.Theorem VI.4.1 Determinacy for Σ 1 0 ∧ Π 1 0 {\displaystyle \Sigma _{1}^{0}\land \Pi _{1}^{0}} games.Theorem VI.5.4 == Additional systems == Weaker systems than recursive comprehension can be defined. The weak system RCA*0 consists of elementary function arithmetic EFA (the basic axioms plus Δ00 induction in the enriched language with an exponential operation) plus Δ01 comprehension. 
Over RCA*0, recursive comprehension as defined earlier (that is, with Σ01 induction) is equivalent to the statement that a polynomial (over a countable field) has only finitely many roots and to the classification theorem for finitely generated Abelian groups. The system RCA*0 has the same proof theoretic ordinal ω3 as EFA and is conservative over EFA for Π02 sentences. Weak Weak Kőnig's Lemma is the statement that a subtree of the infinite binary tree having no infinite paths has an asymptotically vanishing proportion of the leaves at length n (with a uniform estimate as to how many leaves of length n exist). An equivalent formulation is that any subset of Cantor space that has positive measure is nonempty (this is not provable in RCA0). WWKL0 is obtained by adjoining this axiom to RCA0. It is equivalent to the statement that if the unit real interval is covered by a sequence of intervals then the sum of their lengths is at least one. The model theory of WWKL0 is closely connected to the theory of algorithmically random sequences. In particular, an ω-model of RCA0 satisfies weak weak Kőnig's lemma if and only if for every set X there is a set Y that is 1-random relative to X. DNR (short for "diagonally non-recursive") adds to RCA0 an axiom asserting the existence of a diagonally non-recursive function relative to every set. That is, DNR states that, for any set A, there exists a total function f such that for all e the eth partial recursive function with oracle A is not equal to f. DNR is strictly weaker than WWKL (Lempp et al., 2004). Δ11-comprehension is in certain ways analogous to arithmetical transfinite recursion as recursive comprehension is to weak Kőnig's lemma. It has the hyperarithmetical sets as minimal ω-model. Arithmetical transfinite recursion proves Δ11-comprehension but not the other way around. Σ11-choice is the statement that if η(n,X) is a Σ11 formula such that for each n there exists an X satisfying η then there is a sequence of sets Xn such that η(n,Xn) holds for each n. Σ11-choice also has the hyperarithmetical sets as minimal ω-model. Arithmetical transfinite recursion proves Σ11-choice but not the other way around. HBU (short for "uncountable Heine-Borel") expresses the (open-cover) compactness of the unit interval, involving uncountable covers. The latter aspect of HBU makes it only expressible in the language of third-order arithmetic. Cousin's theorem (1895) implies HBU, and these theorems use the same notion of cover due to Cousin and Lindelöf. HBU is hard to prove: in terms of the usual hierarchy of comprehension axioms, a proof of HBU requires full second-order arithmetic. Ramsey's theorem for infinite graphs does not fall into one of the big five subsystems, and there are many other weaker variants with varying proof strengths. === Stronger systems === Over RCA0, Π11 transfinite recursion, ∆02 determinacy, and the ∆11 Ramsey theorem are all equivalent to each other. Over RCA0, Σ11 monotonic induction, Σ02 determinacy, and the Σ11 Ramsey theorem are all equivalent to each other. The following are equivalent: (schema) Π13 consequences of Π12-CA0 RCA0 + (schema over finite n) determinacy in the nth level of the difference hierarchy of Σ02 sets RCA0 + {τ: τ is a true S2S sentence} The set of Π13 consequences of second-order arithmetic Z2 has the same theory as RCA0 + (schema over finite n) determinacy in the nth level of the difference hierarchy of Σ03 sets. 
For a poset P {\displaystyle P} , let MF ( P ) {\displaystyle {\textrm {MF}}(P)} denote the topological space consisting of the filters on P {\displaystyle P} whose open sets are the sets of the form { F ∈ MF ( P ) ∣ p ∈ F } {\displaystyle \{F\in {\textrm {MF}}(P)\mid p\in F\}} for some p ∈ P {\displaystyle p\in P} . The following statement is equivalent to Π 2 1 − C A 0 {\displaystyle \Pi _{2}^{1}{\mathsf {-CA}}_{0}} over Π 1 1 − C A 0 {\displaystyle \Pi _{1}^{1}{\mathsf {-CA}}_{0}} : for any countable poset P {\displaystyle P} , the topological space MF ( P ) {\displaystyle {\textrm {MF}}(P)} is completely metrizable iff it is regular. == ω-models and β-models == The ω in ω-model stands for the set of non-negative integers (or finite ordinals). An ω-model is a model for a fragment of second-order arithmetic whose first-order part is the standard model of Peano arithmetic, but whose second-order part may be non-standard. More precisely, an ω-model is given by a choice S ⊆ P ( ω ) {\displaystyle S\subseteq {\mathcal {P}}(\omega )} of subsets of ω {\displaystyle \omega } . The first-order variables are interpreted in the usual way as elements of ω {\displaystyle \omega } , and + {\displaystyle +} , × {\displaystyle \times } have their usual meanings, while second-order variables are interpreted as elements of S {\displaystyle S} . There is a standard ω-model where one just takes S {\displaystyle S} to consist of all subsets of the integers. However, there are also other ω-models; for example, RCA0 has a minimal ω-model where S {\displaystyle S} consists of the recursive subsets of ω {\displaystyle \omega } . A β-model is an ω model that agrees with the standard ω-model on truth of Π 1 1 {\displaystyle \Pi _{1}^{1}} and Σ 1 1 {\displaystyle \Sigma _{1}^{1}} sentences (with parameters). Non-ω models are also useful, especially in the proofs of conservation theorems. == See also == Closed-form expression § Conversion from numerical forms Induction, bounding and least number principles Ordinal analysis == References == == References/Further Reading == Ambos-Spies, K.; Kjos-Hanssen, B.; Lempp, S.; Slaman, T.A. (2004), "Comparing DNR and WWKL", Journal of Symbolic Logic, 69 (4): 1089, arXiv:1408.2281, doi:10.2178/jsl/1102022212, S2CID 17582399. Friedman, Harvey (1975), "Some systems of second-order arithmetic and their use", Proceedings of the International Congress of Mathematicians (Vancouver, B. C., 1974), Vol. 1, Montreal: Canad. Math. Congress, pp. 235–242, MR 0429508 Friedman, Harvey (1976), Baldwin, John; Martin, D. A.; Soare, R. I.; Tait, W. W. (eds.), "Systems of second-order arithmetic with restricted induction, I, II", Meeting of the Association for Symbolic Logic, The Journal of Symbolic Logic, 41 (2): 557–559, doi:10.2307/2272259, JSTOR 2272259 Hirschfeldt, Denis R. (2014), Slicing the Truth, Lecture Notes Series of the Institute for Mathematical Sciences, National University of Singapore, vol. 28, World Scientific Hunter, James (2008), Reverse Topology (PDF) (PhD thesis), University of Wisconsin–Madison Kohlenbach, Ulrich (2005), "Higher order reverse mathematics", in Simpson, Stephen G (ed.), Higher Order Reverse Mathematics, Reverse Mathematics 2001 (PDF), Lecture notes in Logic, Cambridge University Press, pp. 
281–295, CiteSeerX 10.1.1.643.551, doi:10.1017/9781316755846.018, ISBN 9781316755846 Normann, Dag; Sanders, Sam (2018), "On the mathematical and foundational significance of the uncountable", Journal of Mathematical Logic, 19: 1950001, arXiv:1711.08939, doi:10.1142/S0219061319500016, S2CID 119120366 Simpson, Stephen G. (2009), Subsystems of second-order arithmetic, Perspectives in Logic (2nd ed.), Cambridge University Press, doi:10.1017/CBO9780511581007, ISBN 978-0-521-88439-6, MR 2517689 Stillwell, John (2018), Reverse Mathematics, proofs from the inside out, Princeton University Press, ISBN 978-0-691-17717-5 Solomon, Reed (1999), "Ordered groups: a case study in reverse mathematics", The Bulletin of Symbolic Logic, 5 (1): 45–58, CiteSeerX 10.1.1.364.9553, doi:10.2307/421140, ISSN 1079-8986, JSTOR 421140, MR 1681895, S2CID 508431 Dzhafarov, Damir D.; Mummert, Carl (2022), Reverse Mathematics: Problems, Reductions, and Proofs, Theory and Applications of Computability (1st ed.), Springer Cham, pp. XIX, 488, doi:10.1007/978-3-031-11367-3, ISBN 978-3-031-11367-3 == External links == Stephen G. Simpson's home page Reverse Mathematics Zoo
|
https://en.wikipedia.org/wiki/Reverse_mathematics
|
We show that a determinant of Stirling cycle numbers counts unlabeled acyclic single-source automata. The proof involves a bijection from these automata to certain marked lattice paths and a sign-reversing involution to evaluate the determinant.
|
arxiv:0704.0004
|
In this paper we present an algorithm for computing Hecke eigensystems of Hilbert-Siegel cusp forms over real quadratic fields of narrow class number one. We give some illustrative examples using the quadratic field $\Q(\sqrt{5})$. In those examples, we identify Hilbert-Siegel eigenforms that are possible lifts from Hilbert eigenforms.
|
arxiv:0704.0011
|
The formation of quasi-2D spin-wave waveforms in longitudinally magnetized stripes of ferrimagnetic film was observed by using the time- and space-resolved Brillouin light scattering technique. In the linear regime it was found that the confinement decreases the amplitude of dynamic magnetization near the lateral stripe edges. Thus, the so-called effective dipolar pinning of dynamic magnetization takes place at the edges. In the nonlinear regime, a new stable spin-wave packet propagating along a waveguide structure was observed, for which both transversal instability and interaction with the side walls of the waveguide are important. The experiments and a numerical simulation of the pulse evolution show that the shape of the formed waveforms and their behavior are strongly influenced by the confinement.
|
arxiv:0704.0024
|
We present a critical review of the study of linear perturbations of matched spacetimes, including gauge problems. We analyse the freedom introduced in the perturbed matching by the presence of background symmetries and revisit the particular case of spherical symmetry in n dimensions. This analysis includes settings with boundary layers such as brane world models and shell cosmologies.
|
arxiv:0704.0078
|
We show that the globular cluster mass function (GCMF) in the Milky Way depends on cluster half-mass density (rho_h) in the sense that the turnover mass M_TO increases with rho_h while the width of the GCMF decreases. We argue that this is the expected signature of the slow erosion of a mass function that initially rose towards low masses, predominantly through cluster evaporation driven by internal two-body relaxation. We find excellent agreement between the observed GCMF -- including its dependence on internal density rho_h, central concentration c, and Galactocentric distance r_gc -- and a simple model in which the relaxation-driven mass-loss rates of clusters are approximated by -dM/dt = mu_ev ~ rho_h^{1/2}. In particular, we recover the well-known insensitivity of M_TO to r_gc. This feature does not derive from a literal ``universality'' of the GCMF turnover mass, but rather from a significant variation of M_TO with rho_h -- the expected outcome of relaxation-driven cluster disruption -- plus significant scatter in rho_h as a function of r_gc. Our conclusions are the same if the evaporation rates are assumed to depend instead on the mean volume or surface densities of clusters inside their tidal radii, as mu_ev ~ rho_t^{1/2} or mu_ev ~ Sigma_t^{3/4} -- alternative prescriptions that are physically motivated but involve cluster properties (rho_t and Sigma_t) that are not as well defined or as readily observable as rho_h. In all cases, the normalization of mu_ev required to fit the GCMF implies cluster lifetimes that are within the range of standard values (although falling towards the low end of this range). Our analysis does not depend on any assumptions or information about velocity anisotropy in the globular cluster system.
|
arxiv:0704.0080
|
We get asymptotics for the volume of large balls in an arbitrary locally compact group G with polynomial growth. This is done via a study of the geometry of G and a generalization of P. Pansu's thesis. In particular, we show that any such G is weakly commensurable to some simply connected solvable Lie group S, the Lie shadow of G. We also show that large balls in G have an asymptotic shape, i.e. after a suitable renormalization, they converge to a limiting compact set which can be interpreted geometrically. We then discuss the speed of convergence, treat some examples and give an application to ergodic theory. We also answer a question of Burago about left invariant metrics and recover some results of Stoll on the irrationality of growth series of nilpotent groups.
|
arxiv:0704.0095
|
Over algebraically closed fields of characteristic p>2, prolongations of the simple finite dimensional Lie algebras and Lie superalgebras with Cartan matrix are studied for certain simplest gradings of these algebras. Several new simple Lie superalgebras are discovered, serial and exceptional, including superBrown and superMelikyan superalgebras. Simple Lie superalgebras with Cartan matrix of rank 2 are classified.
|
arxiv:0704.0130
|
By means of the diffusion entropy approach, we detect the scale-invariance characteristics embedded in the 4737 human promoter sequences. The exponent for the scale invariance lies in a wide range of $[ {0.3,0.9} ]$, centered at $\delta_c = 0.66$. The distribution of the exponent can be separated into left and right branches with respect to the maximum. The left and right branches are asymmetric and can each be fitted exactly with a Gaussian form, with different widths.
|
arxiv:0704.0158
|
Redundancy of experimental data is the basic statistic from which the complexity of a natural phenomenon and the proper number of experiments needed for its exploration can be estimated. The redundancy is expressed by the entropy of information pertaining to the probability density function of experimental variables. Since the calculation of entropy is inconvenient due to integration over a range of variables, an approximate expression for redundancy is derived that includes only a sum over the set of experimental data about these variables. The approximation makes feasible an efficient estimation of the redundancy of data along with the related experimental information and information cost function. From the experimental information the complexity of the phenomenon can be simply estimated, while the proper number of experiments needed for its exploration can be determined from the minimum of the cost function. The performance of the approximate estimation of these statistics is demonstrated on two-dimensional normally distributed random data.
|
arxiv:0704.0162
|
A number of recently discovered protein structures incorporate a rather unexpected structural feature: a knot in the polypeptide backbone. These knots are extremely rare, but their occurrence is likely connected to protein function in as yet unexplored fashion. Our analysis of the complete Protein Data Bank reveals several new knots which, along with previously discovered ones, can shed light on such connections. In particular, we identify the most complex knot discovered to date in human ubiquitin hydrolase, and suggest that its entangled topology protects it against unfolding and degradation by the proteasome. Knots in proteins are typically preserved across species and sometimes even across kingdoms. However, we also identify a knot which only appears in some transcarbamylases while being absent in homologous proteins of similar structure. The emergence of the knot is accompanied by a shift in the enzymatic function of the protein. We suggest that the simple insertion of a short DNA fragment into the gene may suffice to turn an unknotted into a knotted structure in this protein.
|
arxiv:0704.0191
|
We theoretically investigate the possibility of observing resonant activation in the hopping dynamics of two-mode semiconductor lasers. We present a series of simulations of a rate-equations model under random and periodic modulation of the bias current. In both cases, for an optimal choice of the modulation time-scale, the hopping times between the stable lasing modes attain a minimum. The simulation data are understood by means of an effective one-dimensional Langevin equation with multiplicative fluctuations. Our conclusions apply to both Edge Emitting and Vertical Cavity Lasers, thus opening the way to several experimental tests in such optical systems.
|
arxiv:0704.0206
|
We have been monitoring Supernova (SN) 1987A with {\it Chandra X-Ray Observatory} since 1999. We present a review of previous results from our {\it Chandra} observations, and some preliminary results from new {\it Chandra} data obtained in 2006 and 2007. High resolution imaging and spectroscopic studies of SN 1987A with {\it Chandra} reveal that X-ray emission of SN 1987A originates from the hot gas heated by interaction of the blast wave with the ring-like dense circumstellar medium (CSM) that was produced by the massive progenitor's equatorial stellar winds before the SN explosion. The blast wave is now sweeping through dense CSM all around the inner ring, and thus SN 1987A is rapidly brightening in soft X-rays. At the age of 20 yr (as of 2007 January), X-ray luminosity of SN 1987A is $L_{\rm X}$ $\sim$ 2.4 $\times$ 10$^{36}$ ergs s$^{-1}$ in the 0.5$-$10 keV band. X-ray emission is described by two-component plane shock model with electron temperatures of $kT$ $\sim$ 0.3 and 2 keV. As the shock front interacts with dense CSM all around the inner ring, the X-ray remnant is now expanding at a much slower rate of $v$ $\sim$ 1400 km s$^{-1}$ than it was until 2004 ($v$ $\sim$ 6000 km s$^{-1}$).
|
arxiv:0704.0209
|
Starting with a field theoretic approach in Minkowski space, the gravitational energy momentum tensor is derived from the Einstein equations in a straightforward manner. This allows them to be presented as {\it acceleration tensor} = const. $\times$ {\it total energy momentum tensor}. For flat space cosmology the gravitational energy is negative and cancels the material energy. In the relativistic theory of gravitation a bimetric coupling between the Riemann and Minkowski metrics breaks general coordinate invariance. The case of a positive cosmological constant is considered. A singularity free version of the Schwarzschild black hole is solved analytically. In the interior the components of the metric tensor quickly die out, but do not change sign, leaving the role of time as usual. For cosmology the $\Lambda$CDM model is covered, while there appears a form of inflation at early times. Here both the total energy and the zero point energy vanish.
|
arxiv:0704.0228
|
The density of states and the energy spectrum of the gluon radiation are calculated for the color current of an expanding hydrodynamic skyrmion in the quark-gluon plasma with a semiclassical method. The results are compared with those in the literature.
|
arxiv:0704.0264
|
A critical overview of the current state of research on turbulence in astrophysical disks.
|
arxiv:0704.0281
|
In line with the local philicity concept proposed by Chattaraj et al. (Chattaraj, P. K.; Maiti, B.; Sarkar, U. J. Phys. Chem. A. 2003, 107, 4973) and a dual descriptor derived by Toro-Labbe and coworkers (Morell, C.; Grand, A.; Toro-Labbe, A. J. Phys. Chem. A. 2005, 109, 205), we propose a multiphilic descriptor. It is defined as the difference between nucleophilic (Wk+) and electrophilic (Wk-) condensed philicity functions. This descriptor is capable of simultaneously explaining the nucleophilicity and electrophilicity of the given atomic sites in the molecule. Variation of these quantities along the path of a soft reaction is also analyzed. The predictive ability of this descriptor has been successfully tested on the selected systems and reactions. Corresponding force profiles are also analyzed in some representative cases. Also, to study the intra- and intermolecular reactivities, another related descriptor, namely the nucleophilicity excess (DelW-+) of a nucleophile over the electrophilicity in it, has been defined and tested on all-metal aromatic compounds.
|
arxiv:0704.0334
|
Multifrequency VLBA observations of the final group of ten objects in a sample of FIRST-based compact steep spectrum (CSS) sources are presented. The sample was selected to investigate whether objects of this kind could be relics of radio-loud AGNs switched off at very early stages of their evolution or possibly to indicate intermittent activity. Initial observations were made using MERLIN at 5 GHz. The sources have now been observed with the VLBA at 1.7, 5 and 8.4 GHz in a snapshot mode with phase-referencing. The resulting maps are presented along with unpublished 8.4-GHz VLA images of five sources. Some of the sources discussed here show a complex radio morphology and therefore a complicated past that, in some cases, might indicate intermittent activity. One of the sources studied - 1045+352 - is known as a powerful radio and infrared-luminous broad absorption line (BAL) quasar. It is a young CSS object whose asymmetric two-sided morphology on a scale of several hundred parsecs, extending in two different directions, may suggest intermittent activity. The young age and compact structure of 1045+352 is consistent with the evolution scenario of BAL quasars. It has also been confirmed that the submillimetre flux of 1045+352 can be seriously contaminated by synchrotron emission.
|
arxiv:0704.0351
|
We discuss the phenomenological impact of a particularly interesting corner of the MSSM: the large tan(beta) regime. The capabilities of leptonic and hadronic Flavor Violating processes in shedding light on physics beyond the Standard Model are reviewed. Moreover, we show that tests of Lepton Universality in charged current processes can represent an interesting handle to obtain relevant information on New Physics scenarios.
|
arxiv:0704.0358
|
We study the notion of Fagnano orbits for dual polygonal billiards. We use them to characterize regular polygons, and we study the iteration of the developing map.
|
arxiv:0704.0390
|
We construct a simple thermodynamic model to describe the melting of a supported metal nanoparticle with a spherically curved free surface both with and without surface melting. We use the model to investigate the results of recent molecular dynamics simulations, which suggest the melting temperature of a supported metal particle is the same as that of a free spherical particle with the same surface curvature. Our model shows that this is only the case when the contact angles of the supported solid and liquid particles are similar. This is also the case for the temperature at which surface melting begins.
|
arxiv:0704.0393
|
We study finite-temperature phase transitions in a two-dimensional boson Hubbard model with zero-point quantum fluctuations via Monte Carlo simulations of a quantum rotor model, and construct the corresponding phase diagram. The compressibility shows a thermally activated gapped behavior in the insulating regime. Finite-size scaling of the superfluid stiffness clearly shows the nature of the Kosterlitz-Thouless transition. The transition temperature, $T_c$, confirms a scaling relation $T_c \propto \rho_0^x$ with $x=1.0$. Some evidence of anomalous quantum behavior at low temperatures is presented.
|
arxiv:0704.0396
|
Simple examples are constructed that show the entanglement of two qubits being both increased and decreased by interactions on just one of them. One of the two qubits interacts with a third qubit, a control, that is never entangled or correlated with either of the two entangled qubits and is never entangled, but becomes correlated, with the system of those two qubits. The two entangled qubits do not interact, but their state can change from maximally entangled to separable or from separable to maximally entangled. Similar changes for the two qubits are made with a swap operation between one of the qubits and a control; then there are compensating changes of entanglement that involve the control. When the entanglement increases, the map that describes the change of the state of the two entangled qubits is not completely positive. Combination of two independent interactions that individually give exponential decay of the entanglement can cause the entanglement to not decay exponentially but, instead, go to zero at a finite time.
|
arxiv:0704.0461
|
The Sun was recently predicted to be an extended source of gamma-ray emission, produced by inverse-Compton scattering of cosmic-ray electrons with the solar radiation. The emission was predicted to contribute to the diffuse extragalactic background even at large angular distances from the Sun. While this emission is expected to be readily detectable in future by GLAST, the situation for available EGRET data is more challenging. We present a detailed study of the EGRET database, using a time dependent analysis, accounting for the effect of the emission from 3C 279, the moon, and other sources, which interfere with the solar signal. The technique has been tested on the moon signal, with results consistent with previous work. We find clear evidence for emission from the Sun and its vicinity. The observations are compared with our model for the extended emission.
|
arxiv:0704.0462
|