https://en.wikipedia.org/wiki/Net
A net is a mesh of strings or ropes or a device made from one, such as those used for fishing. Net or net may also refer to:

Mathematics and physics
- Net (mathematics), a filter-like topological generalization of a sequence
- Net, a linear system of divisors of dimension 2
- Net (polyhedron), an arrangement of polygons that can be folded up to form a polyhedron
- Net, an incidence structure consisting of points and parallel classes of lines
- Operator algebras in local quantum field theory
- ε-net (computational geometry), a mathematical concept whereby a general set is approximated by a collection of simpler subsets

Others
- In computing, the Internet
- Net (textile), a textile in which the warp and weft yarns are looped or knotted at their intersections
- Net sports, sports that use a net
- Net (economics) (nett), the sum or difference of two or more economic variables
- Net income (nett), an entity's income minus cost of goods sold, expenses and taxes for an accounting period
- In electronic design, a connection in a netlist
- In golf, the net score is the number of strokes taken minus any handicap allowance
- Net (command), an operating system command
- Net (film), a 2021 Indian thriller drama film

See also
- NET (disambiguation)
- Nett (disambiguation)
- .net (disambiguation)
- Network (disambiguation)
https://en.wikipedia.org/wiki/Ball%20%28mathematics%29
In mathematics, a ball is the solid figure bounded by a sphere; it is also called a solid sphere. It may be a closed ball (including the boundary points that constitute the sphere) or an open ball (excluding them). These concepts are defined not only in three-dimensional Euclidean space but also for lower and higher dimensions, and for metric spaces in general. A ball in n dimensions is called a hyperball or n-ball and is bounded by a hypersphere or (n−1)-sphere. Thus, for example, a ball in the Euclidean plane is the same thing as a disk, the area bounded by a circle. In Euclidean 3-space, a ball is taken to be the volume bounded by a 2-dimensional sphere. In a one-dimensional space, a ball is a line segment. In other contexts, such as in Euclidean geometry and informal use, sphere is sometimes used to mean ball. In the field of topology the closed n-dimensional ball is often denoted as B^n or D^n, while the open n-dimensional ball is int B^n or int D^n. In Euclidean space In Euclidean n-space, an (open) n-ball of radius r and center x is the set of all points of distance less than r from x. A closed n-ball of radius r is the set of all points of distance less than or equal to r away from x. In Euclidean n-space, every ball is bounded by a hypersphere. The ball is a bounded interval when n = 1, is a disk bounded by a circle when n = 2, and is bounded by a sphere when n = 3. Volume The n-dimensional volume of a Euclidean ball of radius r in n-dimensional Euclidean space is V_n(r) = π^(n/2) r^n / Γ(n/2 + 1), where Γ is Leonhard Euler's gamma function (which can be thought of as an extension of the factorial function to fractional arguments). Using explicit formulas for particular values of the gamma function at the integers and half integers gives formulas for the volume of a Euclidean ball that do not require an evaluation of the gamma function. These are V_{2k}(r) = π^k r^{2k} / k! in even dimensions and V_{2k+1}(r) = 2(2π)^k r^{2k+1} / (2k+1)!! in odd dimensions. In the formula for odd-dimensional volumes, the double factorial (2k+1)!! is defined for odd integers as (2k+1)!! = 1 · 3 · 5 ⋯ (2k−1) · (2k+1). In general metric spaces Let (M, d) be a metric space, namely a set M with a metric (distance function) d. 
The open (metric) ball of radius r centered at a point p in M, usually denoted by B_r(p) or B(p; r), is defined by B_r(p) = {x ∈ M : d(x, p) < r}. The closed (metric) ball, which may be denoted by B_r[p] or B[p; r], is defined by B_r[p] = {x ∈ M : d(x, p) ≤ r}. Note in particular that a ball (open or closed) always includes p itself, since the definition requires r > 0. A unit ball (open or closed) is a ball of radius 1. A subset of a metric space is bounded if it is contained in some ball. A set is totally bounded if, given any positive radius, it is covered by finitely many balls of that radius. The open balls of a metric space can serve as a base, giving this space a topology, the open sets of which are all possible unions of open balls. This topology on a metric space is called the topology induced by the metric d. Let B̄_r(p) denote the closure of the open ball B_r(p) in this topology. While it is always the case that B_r(p) ⊆ B̄_r(p) ⊆ B_r[p], it is not always the case that B̄_r(p) = B_r[p]. For example, in a metric space X with the discrete metric, one has B̄_1(p) = {p} and B_1[p] = X, for any p ∈ X. In normed vector spaces Any normed vector space
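The Euclidean volume formula above is easy to evaluate with the standard-library gamma function; a minimal sketch in Python (the helper name `ball_volume` is ours, not a library API):

```python
from math import gamma, pi

def ball_volume(n: int, r: float = 1.0) -> float:
    """Volume of a Euclidean n-ball of radius r: pi^(n/2) / Gamma(n/2 + 1) * r^n."""
    return pi ** (n / 2) / gamma(n / 2 + 1) * r ** n

# n = 2 recovers the area of a disk, n = 3 the volume of a solid sphere.
print(ball_volume(2))  # 3.141592653589793, i.e. pi * r^2 with r = 1
print(ball_volume(3))  # ~4.18879, i.e. 4*pi/3
print(ball_volume(1, 2.0))  # 4.0 — a 1-ball of radius 2 is an interval of length 4
```

The even- and odd-dimensional special cases quoted in the text follow from Γ(k + 1) = k! and Γ(k + 3/2) = (2k+1)!! √π / 2^(k+1).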
https://en.wikipedia.org/wiki/Probable%20prime
In number theory, a probable prime (PRP) is an integer that satisfies a specific condition that is satisfied by all prime numbers, but which is not satisfied by most composite numbers. Different types of probable primes have different specific conditions. While there may be probable primes that are composite (called pseudoprimes), the condition is generally chosen in order to make such exceptions rare. Fermat's test for compositeness, which is based on Fermat's little theorem, works as follows: given an integer n, choose some integer a that is not a multiple of n (typically, we choose a in the range 1 < a < n − 1). Calculate a^(n−1) mod n. If the result is not 1, then n is composite. If the result is 1, then n is likely to be prime; n is then called a probable prime to base a. A weak probable prime to base a is an integer that is a probable prime to base a, but which is not a strong probable prime to base a (see below). For a fixed base a, it is unusual for a composite number to be a probable prime (that is, a pseudoprime) to that base. For example, up to 25 × 10^9, there are 11,408,012,595 odd composite numbers, but only 21,853 pseudoprimes base 2. The number of odd primes in the same interval is 1,091,987,404. Properties Probable primality is a basis for efficient primality testing algorithms, which find application in cryptography. These algorithms are usually probabilistic in nature. The idea is that while there are composite probable primes to base a for any fixed a, we may hope there exists some fixed P < 1 such that for any given composite n, if we choose a at random, then the probability that n is pseudoprime to base a is at most P. If we repeat this test k times, choosing a new a each time, the probability of n being pseudoprime to all the bases a tested is hence at most P^k, and as this decreases exponentially, only moderate k is required to make this probability negligibly small (compared to, for example, the probability of computer hardware error). 
This is unfortunately false for weak probable primes, because there exist Carmichael numbers; but it is true for more refined notions of probable primality, such as strong probable primes (P = 1/4, Miller–Rabin algorithm), or Euler probable primes (P = 1/2, Solovay–Strassen algorithm). Even when a deterministic primality proof is required, a useful first step is to test for probable primality. This can quickly eliminate (with certainty) most composites. A PRP test is sometimes combined with a table of small pseudoprimes to quickly establish the primality of a given number smaller than some threshold. Variations An Euler probable prime to base a is an integer that is indicated prime by the somewhat stronger theorem that for any odd prime p, a^((p−1)/2) ≡ (a/p) (mod p), where (a/p) is the Jacobi symbol. An Euler probable prime which is composite is called an Euler–Jacobi pseudoprime to base a. The smallest Euler–Jacobi pseudoprime to base 2 is 561. There are 11347 Euler–Jacobi pseudoprimes base 2 that are less than 25·10^9. This tes
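The Fermat test described above reduces to one modular exponentiation per base; a minimal sketch in Python (the function name `is_probable_prime` is illustrative, and this is only the weak Fermat test, not the strong Miller–Rabin variant):

```python
import random

def is_probable_prime(n: int, k: int = 10) -> bool:
    """Fermat test with k random bases: False means certainly composite,
    True means probable prime (Carmichael numbers can still slip through)."""
    if n < 4:
        return n in (2, 3)
    for _ in range(k):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # a is a Fermat witness: n is composite
    return True

# 341 = 11 * 31 is the smallest base-2 pseudoprime, but base 3 exposes it:
print(pow(2, 340, 341) == 1)  # True
print(pow(3, 340, 341) == 1)  # False
```

Python's built-in three-argument `pow` performs the modular exponentiation efficiently, which is what makes the test practical for cryptographic-size n.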
https://en.wikipedia.org/wiki/Defect
Defect or defects may refer to:

Related to failure
- Angular defect, in geometry
- Birth defect, an abnormal condition present at birth
- Crystallographic defect, in the crystal lattice of solid materials
- Latent defect, in the law of the sale of property
- Product defect, a characteristic of a product which hinders its usability
- Software bug, an error in computer software

Other uses
- Defection, abandoning allegiance to one country for another
- The Defects, a Northern Irish punk rock band

See also
- Defective (disambiguation)
- Defected Records, a music label
- Fault (disambiguation)
- Flaw (disambiguation)
https://en.wikipedia.org/wiki/Outlier
In statistics, an outlier is a data point that differs significantly from other observations. An outlier may be due to variability in the measurement, an indication of novel data, or it may be the result of experimental error; the latter are sometimes excluded from the data set. An outlier can be an indication of an exciting possibility, but can also cause serious problems in statistical analyses. Outliers can occur by chance in any distribution, but they can indicate novel behaviour or structures in the data-set, measurement error, or that the population has a heavy-tailed distribution. In the case of measurement error, one wishes to discard them or use statistics that are robust to outliers, while in the case of heavy-tailed distributions, they indicate that the distribution has high skewness and that one should be very cautious in using tools or intuitions that assume a normal distribution. A frequent cause of outliers is a mixture of two distributions, which may be two distinct sub-populations, or may indicate 'correct trial' versus 'measurement error'; this is modeled by a mixture model. In most larger samplings of data, some data points will be further away from the sample mean than what is deemed reasonable. This can be due to incidental systematic error or flaws in the theory that generated an assumed family of probability distributions, or it may be that some observations are far from the center of the data. Outlier points can therefore indicate faulty data, erroneous procedures, or areas where a certain theory might not be valid. However, in large samples, a small number of outliers is to be expected (and not due to any anomalous condition). Outliers, being the most extreme observations, may include the sample maximum or sample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum are not always outliers because they may not be unusually far from other observations. 
Naive interpretation of statistics derived from data sets that include outliers may be misleading. For example, if one is calculating the average temperature of 10 objects in a room, and nine of them are between 20 and 25 degrees Celsius, but an oven is at 175 °C, the median of the data will be between 20 and 25 °C but the mean temperature will be between 35.5 and 40 °C. In this case, the median better reflects the temperature of a randomly sampled object (but not the temperature in the room) than the mean; naively interpreting the mean as "a typical sample", equivalent to the median, is incorrect. As illustrated in this case, outliers may indicate data points that belong to a different population than the rest of the sample set. Estimators capable of coping with outliers are said to be robust: the median is a robust statistic of central tendency, while the mean is not. However, the mean is generally a more precise estimator. Occurrence and causes In the case of normally distributed data, the three sigma rule m
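The oven example above is easy to reproduce with the standard library; a minimal sketch (the nine room-temperature readings are illustrative values within the stated 20–25 °C range):

```python
from statistics import mean, median

# Nine objects at room temperature plus one oven at 175 degrees Celsius.
temps = [20, 21, 22, 22, 23, 23, 24, 24, 25, 175]

print(median(temps))  # 23.0 — robust: the oven barely affects it
print(mean(temps))    # 37.9 — pulled above every non-oven reading
```

The mean lands in the 35.5–40 °C band derived in the text, while the median stays with the bulk of the data, which is exactly the robustness contrast being made.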
https://en.wikipedia.org/wiki/Box%20plot
In descriptive statistics, a box plot or boxplot is a method for graphically demonstrating the locality, spread and skewness of groups of numerical data through their quartiles. In addition to the box on a box plot, there can be lines (which are called whiskers) extending from the box indicating variability outside the upper and lower quartiles; thus, the plot is also called the box-and-whisker plot or the box-and-whisker diagram. Outliers that differ significantly from the rest of the dataset may be plotted as individual points beyond the whiskers on the box-plot. Box plots are non-parametric: they display variation in samples of a statistical population without making any assumptions of the underlying statistical distribution (though Tukey's boxplot assumes symmetry for the whiskers and normality for their length). The spacings in each subsection of the box-plot indicate the degree of dispersion (spread) and skewness of the data, which are usually described using the five-number summary. In addition, the box-plot allows one to visually estimate various L-estimators, notably the interquartile range, midhinge, range, mid-range, and trimean. Box plots can be drawn either horizontally or vertically. History The range-bar method was first introduced by Mary Eleanor Spear in her book "Charting Statistics" in 1952 and again in her book "Practical Charting Techniques" in 1969. The box-and-whisker plot was first introduced in 1970 by John Tukey, who later published on the subject in his book "Exploratory Data Analysis" in 1977. Elements A boxplot is a standardized way of displaying the dataset based on the five-number summary: the minimum, the maximum, the sample median, and the first and third quartiles. 
- Minimum (Q0 or 0th percentile): the lowest data point in the data set excluding any outliers
- Maximum (Q4 or 100th percentile): the highest data point in the data set excluding any outliers
- Median (Q2 or 50th percentile): the middle value in the data set
- First quartile (Q1 or 25th percentile): also known as the lower quartile qn(0.25), it is the median of the lower half of the dataset.
- Third quartile (Q3 or 75th percentile): also known as the upper quartile qn(0.75), it is the median of the upper half of the dataset.

In addition to the minimum and maximum values used to construct a box-plot, another important element that can also be employed to obtain a box-plot is the interquartile range (IQR), as denoted below:

- Interquartile range (IQR): the distance between the upper and lower quartiles, IQR = Q3 − Q1

Whiskers A box-plot usually includes two parts, a box and a set of whiskers as shown in Figure 2. The box is drawn from Q1 to Q3 with a horizontal line drawn in the middle to denote the median. The whiskers must end at an observed data point, but can be defined in various ways. In the most straightforward method, the boundary of the lower whisker is the minimum value of the data set, and the boundary of the upper whisker is the maximum value of the dat
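The box-plot ingredients above can be computed directly; a minimal sketch using the median-of-halves rule described in the text (one of several quartile conventions in use — the helper name `quartiles` is ours):

```python
from statistics import median

def quartiles(data):
    """Q1, Q2, Q3 via the median-of-halves rule: Q1 and Q3 are the medians of
    the lower and upper halves, excluding the middle point when n is odd."""
    s = sorted(data)
    n = len(s)
    half = n // 2
    return median(s[:half]), median(s), median(s[half + n % 2:])

data = [1, 3, 5, 7, 9, 11, 13, 15]
q1, q2, q3 = quartiles(data)
print(q1, q2, q3)  # 4.0 8.0 12.0
print(q3 - q1)     # interquartile range: 8.0
```

The whisker boundaries in the simplest method are then just `min(data)` and `max(data)`.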
https://en.wikipedia.org/wiki/Five-number%20summary
The five-number summary is a set of descriptive statistics that provides information about a dataset. It consists of the five most important sample percentiles: the sample minimum (smallest observation) the lower quartile or first quartile the median (the middle value) the upper quartile or third quartile the sample maximum (largest observation) In addition to the median of a single set of data there are two related statistics called the upper and lower quartiles. If data are placed in order, then the lower quartile is central to the lower half of the data and the upper quartile is central to the upper half of the data. These quartiles are used to calculate the interquartile range, which helps to describe the spread of the data, and determine whether or not any data points are outliers. In order for these statistics to exist the observations must be from a univariate variable that can be measured on an ordinal, interval or ratio scale. Use and representation The five-number summary provides a concise summary of the distribution of the observations. Reporting five numbers avoids the need to decide on the most appropriate summary statistic. The five-number summary gives information about the location (from the median), spread (from the quartiles) and range (from the sample minimum and maximum) of the observations. Since it reports order statistics (rather than, say, the mean) the five-number summary is appropriate for ordinal measurements, as well as interval and ratio measurements. It is possible to quickly compare several sets of observations by comparing their five-number summaries, which can be represented graphically using a boxplot. In addition to the points themselves, many L-estimators can be computed from the five-number summary, including interquartile range, midhinge, range, mid-range, and trimean. 
The five-number summary is sometimes represented as in the following table: Example This example calculates the five-number summary for the following set of observations: 0, 0, 1, 2, 63, 61, 27, 13. These are the number of moons of each planet in the Solar System. It helps to put the observations in ascending order: 0, 0, 1, 2, 13, 27, 61, 63. There are eight observations, so the median is the mean of the two middle numbers, (2 + 13)/2 = 7.5. Splitting the observations either side of the median gives two groups of four observations. The median of the first group is the lower or first quartile, and is equal to (0 + 1)/2 = 0.5. The median of the second group is the upper or third quartile, and is equal to (27 + 61)/2 = 44. The smallest and largest observations are 0 and 63. So the five-number summary would be 0, 0.5, 7.5, 44, 63. Example in R It is possible to calculate the five-number summary in the R programming language using the fivenum function. The summary function, when applied to a vector, displays the five-number summary together with the mean (which is not itself a part of the five-number summary). The fivenum uses a dif
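The worked example above translates directly into Python; a minimal sketch mirroring the median-of-halves computation (R's `fivenum` uses a slightly different hinge rule, but the two agree on this data):

```python
from statistics import median

obs = [0, 0, 1, 2, 63, 61, 27, 13]   # moons per planet, as in the example
s = sorted(obs)                       # [0, 0, 1, 2, 13, 27, 61, 63]

lower, upper = s[:4], s[4:]           # even n: split either side of the median
summary = [s[0], median(lower), median(s), median(upper), s[-1]]
print(summary)  # [0, 0.5, 7.5, 44.0, 63]
```

Each entry reproduces the hand computation in the text: minimum 0, lower quartile 0.5, median 7.5, upper quartile 44, maximum 63.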
https://en.wikipedia.org/wiki/Order%20statistic
In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value. Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference. Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles. When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution. Notation and examples For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. If the sample values are 6, 9, 3, 8, the order statistics would be denoted x_(1) = 3, x_(2) = 6, x_(3) = 8, x_(4) = 9, where the subscript (i) enclosed in parentheses indicates the ith order statistic of the sample. The first order statistic (or smallest order statistic) is always the minimum of the sample, that is, X_(1) = min{X_1, ..., X_n}, where, following a common convention, we use upper-case letters to refer to random variables, and lower-case letters (as above) to refer to their actual observed values. Similarly, for a sample of size n, the nth order statistic (or largest order statistic) is the maximum, that is, X_(n) = max{X_1, ..., X_n}. The sample range is the difference between the maximum and minimum. It is a function of the order statistics: Range = X_(n) − X_(1). A similar important statistic in exploratory data analysis that is simply related to the order statistics is the sample interquartile range. The sample median may or may not be an order statistic, since there is a single middle value only when the number n of observations is odd. More precisely, if n = 2m + 1 for some integer m, then the sample median is X_(m+1) and so is an order statistic. On the other hand, when n is even, n = 2m and there are two middle values, X_(m) and X_(m+1), and the sample median is some function of the two (usually the average) and hence not an order statistic. 
Similar remarks apply to all sample quantiles. Probabilistic analysis Given any random variables X1, X2, ..., Xn, the order statistics X(1), X(2), ..., X(n) are also random variables, defined by sorting the values (realizations) of X1, ..., Xn in increasing order. When the random variables X1, X2, ..., Xn form a sample they are independent and identically distributed. This is the case treated below. In general, the random variables X1, ..., Xn can arise by sampling from more than one population. Then they are independent, but not necessarily identically distributed, and their joint probability distribution is given by the Bapat–Beg theorem. From now on, we will assume that the random variables under consideration are continuous and, where convenient, we will also assume that they have a probability density function (PDF), that is, they are absolutely continuous. The peculiarities of the analysis of distributions assigning mass to points (in particular, discrete distributions)
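Computationally, the order statistics of a sample are simply its sorted values; a minimal sketch for the 6, 9, 3, 8 example from the text:

```python
sample = [6, 9, 3, 8]
order_stats = sorted(sample)          # x_(1) <= x_(2) <= ... <= x_(n)

x1, xn = order_stats[0], order_stats[-1]
print(order_stats)                    # [3, 6, 8, 9]
print(xn - x1)                        # sample range x_(n) - x_(1): 6

# n = 4 is even, so the sample median averages the two middle order statistics
# and is therefore not itself an order statistic:
print((order_stats[1] + order_stats[2]) / 2)   # 7.0
```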
https://en.wikipedia.org/wiki/Infinitesimal
In mathematics, an infinitesimal number is a quantity that is closer to 0 than any standard real number, but that is not 0. The word infinitesimal comes from a 17th-century Modern Latin coinage infinitesimus, which originally referred to the "infinity-th" item in a sequence. Infinitesimals do not exist in the standard real number system, but they do exist in other number systems, such as the surreal number system and the hyperreal number system, which can be thought of as the real numbers augmented with both infinitesimal and infinite quantities; the augmentations are the reciprocals of one another. Infinitesimal numbers were introduced in the development of calculus, in which the derivative was first conceived as a ratio of two infinitesimal quantities. This definition was not rigorously formalized. As calculus developed further, infinitesimals were replaced by limits, which can be calculated using the standard real numbers. Infinitesimals regained popularity in the 20th century with Abraham Robinson's development of nonstandard analysis and the hyperreal numbers, which, after centuries of controversy, showed that a formal treatment of infinitesimal calculus was possible. Following this, mathematicians developed the surreal numbers, a related formalization of infinite and infinitesimal numbers that includes both the hyperreal numbers and the ordinal numbers, and which is the largest ordered field. Vladimir Arnold wrote in 1990: The crucial insight for making infinitesimals feasible mathematical entities was that they could still retain certain properties such as angle or slope, even if these entities were infinitely small. Infinitesimals are a basic ingredient in calculus as developed by Leibniz, including the law of continuity and the transcendental law of homogeneity. In common speech, an infinitesimal object is an object that is smaller than any feasible measurement, but not zero in size, or so small that it cannot be distinguished from zero by any available means. 
Hence, when used as an adjective in mathematics, infinitesimal means infinitely small, smaller than any standard real number. Infinitesimals are often compared to other infinitesimals of similar size, as in examining the derivative of a function. An infinite number of infinitesimals are summed to calculate an integral. The concept of infinitesimals was originally introduced around 1670 by either Nicolaus Mercator or Gottfried Wilhelm Leibniz. Archimedes used what eventually came to be known as the method of indivisibles in his work The Method of Mechanical Theorems to find areas of regions and volumes of solids. In his formal published treatises, Archimedes solved the same problem using the method of exhaustion. The 15th century saw the work of Nicholas of Cusa, further developed in the 17th century by Johannes Kepler, in particular, the calculation of the area of a circle by representing the latter as an infinite-sided polygon. Simon Stevin's work on the decimal representation of all numbe
https://en.wikipedia.org/wiki/Generating%20function
In mathematics, a generating function is a way of encoding an infinite sequence of numbers (a_n) by treating them as the coefficients of a formal power series. This series is called the generating function of the sequence. Unlike an ordinary series, the formal power series is not required to converge: in fact, the generating function is not actually regarded as a function, and the "variable" x remains an indeterminate. Generating functions were first introduced by Abraham de Moivre in 1730, in order to solve the general linear recurrence problem. One can generalize to formal power series in more than one indeterminate, to encode information about infinite multi-dimensional arrays of numbers. There are various types of generating functions, including ordinary generating functions, exponential generating functions, Lambert series, Bell series, and Dirichlet series; definitions and examples are given below. Every sequence in principle has a generating function of each type (except that Lambert and Dirichlet series require indices to start at 1 rather than 0), but the ease with which they can be handled may differ considerably. The particular generating function, if any, that is most useful in a given context will depend upon the nature of the sequence and the details of the problem being addressed. Generating functions are often expressed in closed form (rather than as a series), by some expression involving operations defined for formal series. These expressions in terms of the indeterminate x may involve arithmetic operations, differentiation with respect to x and composition with (i.e., substitution into) other generating functions; since these operations are also defined for functions, the result looks like a function of x. Indeed, the closed form expression can often be interpreted as a function that can be evaluated at (sufficiently small) concrete values of x, and which has the formal series as its series expansion; this explains the designation "generating functions". 
However such interpretation is not required to be possible, because formal series are not required to give a convergent series when a nonzero numeric value is substituted for x. Also, not all expressions that are meaningful as functions of x are meaningful as expressions designating formal series; for example, negative and fractional powers of x are examples of functions that do not have a corresponding formal power series. Generating functions are not functions in the formal sense of a mapping from a domain to a codomain. Generating functions are sometimes called generating series, in that a series of terms can be said to be the generator of its sequence of term coefficients. Definitions Ordinary generating function (OGF) The ordinary generating function of a sequence a_n is G(a_n; x) = a_0 + a_1 x + a_2 x^2 + ⋯ = Σ_{n≥0} a_n x^n. When the term generating function is used without qualification, it is usually taken to mean an ordinary generating function. If a_n is the probability mass function of a discrete random variable, then its ordinary
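Formal power series arithmetic needs nothing more than coefficient lists and convolution; a minimal sketch showing the classic closed form 1/(1 − x) for the all-ones sequence (the helper `mul` is ours, not a library API — everything is truncated to finitely many terms):

```python
def mul(a, b, terms):
    """Product of two formal power series given as coefficient lists, truncated."""
    c = [0] * terms
    for i, ai in enumerate(a[:terms]):
        for j, bj in enumerate(b[:terms - i]):
            c[i + j] += ai * bj
    return c

ones = [1] * 8                     # OGF of 1, 1, 1, ..., i.e. 1/(1 - x)
one_minus_x = [1, -1]              # the series 1 - x

print(mul(ones, one_minus_x, 8))   # [1, 0, 0, 0, 0, 0, 0, 0]: (1 - x) * 1/(1 - x) = 1
print(mul(ones, ones, 8))          # [1, 2, 3, 4, 5, 6, 7, 8]: 1/(1 - x)^2 generates n + 1
```

No convergence question arises: the computation manipulates coefficients only, which is exactly the formal-series viewpoint described above.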
https://en.wikipedia.org/wiki/Unary%20operation
In mathematics, a unary operation is an operation with only one operand, i.e. a single input. This is in contrast to binary operations, which use two operands. An example is any function f : A → A, where A is a set. The function f is a unary operation on A. Common notations are prefix notation (e.g. ¬, −), postfix notation (e.g. the factorial n!), functional notation (e.g. f(x) or sin x), and superscripts (e.g. the transpose A^T). Other notations exist as well, for example, in the case of the square root, a horizontal bar extending the square root sign over the argument can indicate the extent of the argument. Examples Absolute value Obtaining the absolute value of a number is a unary operation. This function is defined as |x| = x if x ≥ 0 and |x| = −x if x < 0, where |x| is the absolute value of x. Negation This is used to find the negative value of a single number. This is technically not a unary operation, as −x is just short form of 0 − x. Here are some examples: −(3) = −3 and −(−3) = 3. Unary negative and positive As unary operations have only one operand they are evaluated before other operations containing them. Here is an example using negation: 3 − −2. Here, the first '−' represents the binary subtraction operation, while the second '−' represents the unary negation of the 2 (or '−2' could be taken to mean the integer −2). Therefore, the expression is equal to: 3 − (−2) = 3 + 2 = 5. Technically, there is also a unary + operation but it is not needed since we assume an unsigned value to be positive: +2 = 2. The unary + operation does not change the sign of a negative operation: +(−2) = −2. In this case, a unary negation is needed to change the sign: −(−2) = 2. Trigonometry In trigonometry, the trigonometric functions, such as sin, cos, and tan, can be seen as unary operations. This is because it is possible to provide only one term as input for these functions and retrieve a result. By contrast, binary operations, such as addition, require two different terms to compute a result. 
Examples from programming languages

JavaScript In JavaScript, these operators are unary:
- Increment: ++x, x++
- Decrement: --x, x--
- Positive: +x
- Negative: -x
- Ones' complement: ~x
- Logical negation: !x

C family of languages In the C family of languages, the following operators are unary:
- Increment: ++x, x++
- Decrement: --x, x--
- Address: &x
- Indirection: *x
- Positive: +x
- Negative: -x
- Ones' complement: ~x
- Logical negation: !x
- Sizeof: sizeof x, sizeof(type-name)
- Cast: (type-name) cast-expression

Unix shell (Bash) In the Unix/Linux shell (bash/sh), '$' is a unary operator when used for parameter expansion, replacing the name of a variable by its (sometimes modified) value. For example:
- Simple expansion: $x
- Complex expansion: ${#x}

PowerShell In PowerShell, the following operators are unary:
- Increment: ++$x, $x++
- Decrement: --$x, $x--
- Positive: +$x
- Negative: -$x
- Logical negation: !$x
- Invoke in current scope: .$x
- Invoke in new scope: &$x
- Cast: [type-name] cast-expression
- Cast: +$x
- Array: ,$array

See also Binary operation, Iterated binary operation, Ternary operation, Arity, Operation (mathematics), Operator (programming)
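Python (not listed above) offers the same unary prefix operators, minus increment and decrement; a minimal sketch, including the subtraction-versus-negation precedence point made earlier:

```python
x = 5
print(-x)        # unary negation: -5
print(+x)        # unary plus: 5 (no sign change)
print(~x)        # bitwise (ones') complement: -6, i.e. -x - 1
print(not True)  # logical negation: False

# In 3 - -2 the first '-' is binary subtraction and the second is
# unary negation, so the unary operator is applied first:
print(3 - -2)    # 5
```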
https://en.wikipedia.org/wiki/Altitude%20%28triangle%29
In geometry, an altitude of a triangle is a line segment through a vertex and perpendicular to a line containing the side opposite the vertex. This line containing the opposite side is called the extended base of the altitude. The intersection of the extended base and the altitude is called the foot of the altitude. The length of the altitude, often simply called "the altitude", is the distance between the extended base and the vertex. The process of drawing the altitude from the vertex to the foot is known as dropping the altitude at that vertex. It is a special case of orthogonal projection. Altitudes can be used in the computation of the area of a triangle: one-half of the product of an altitude's length and its base's length equals the triangle's area. Thus, the longest altitude is perpendicular to the shortest side of the triangle. The altitudes are also related to the sides of the triangle through the trigonometric functions. In an isosceles triangle (a triangle with two congruent sides), the altitude having the incongruent side as its base will have the midpoint of that side as its foot. Also the altitude having the incongruent side as its base will be the angle bisector of the vertex angle. It is common to mark the altitude with the letter h (as in height), often subscripted with the name of the side the altitude is drawn to. In a right triangle, the altitude drawn to the hypotenuse divides the hypotenuse into two segments of lengths p and q. If we denote the length of the altitude by h, we then have the relation h = √(pq)   (geometric mean theorem). For acute triangles, the feet of the altitudes all fall on the triangle's sides (not extended). In an obtuse triangle (one with an obtuse angle), the foot of the altitude to the obtuse-angled vertex falls in the interior of the opposite side, but the feet of the altitudes to the acute-angled vertices fall on the opposite extended side, exterior to the triangle. 
This is illustrated in the adjacent diagram: in this obtuse triangle, an altitude dropped perpendicularly from the top vertex, which has an acute angle, intersects the extended horizontal side outside the triangle. Orthocenter The three (possibly extended) altitudes intersect in a single point, called the orthocenter of the triangle, usually denoted by H. The orthocenter lies inside the triangle if and only if the triangle is acute. If one angle is a right angle, the orthocenter coincides with the vertex at the right angle. Let A, B, C denote the vertices and also the angles of the triangle, and let a, b, c be the side lengths. The orthocenter has trilinear coordinates sec A : sec B : sec C and barycentric coordinates tan A : tan B : tan C. Since barycentric coordinates are all positive for a point in a triangle's interior but at least one is negative for a point in the exterior, and two of the barycentric coordinates are zero for a vertex point, the barycentric coordinates given for the orthocenter show that the orthocenter is in an acute triangle's interior, on the right-angled vertex of a rig
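The geometric mean relation h = √(pq) from the right-triangle passage above is easy to verify numerically; a minimal sketch on a 3-4-5 right triangle:

```python
from math import sqrt, isclose

a, b = 3.0, 4.0                 # the two legs of a right triangle
c = sqrt(a * a + b * b)         # hypotenuse: 5.0

h = a * b / c                   # altitude to the hypotenuse (from area: a*b/2 = c*h/2)
p, q = a * a / c, b * b / c     # the two hypotenuse segments cut off by the foot

print(h)                        # 2.4
print(isclose(h, sqrt(p * q)))  # True — the geometric mean theorem
print(isclose(p + q, c))        # True — the segments partition the hypotenuse
```

The segment formulas p = a²/c and q = b²/c follow from the similarity of the two sub-triangles to the original, the same similarity that proves the theorem.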
https://en.wikipedia.org/wiki/Nine-point%20circle
In geometry, the nine-point circle is a circle that can be constructed for any given triangle. It is so named because it passes through nine significant concyclic points defined from the triangle. These nine points are: The midpoint of each side of the triangle The foot of each altitude The midpoint of the line segment from each vertex of the triangle to the orthocenter (where the three altitudes meet; these line segments lie on their respective altitudes). The nine-point circle is also known as Feuerbach's circle (after Karl Wilhelm Feuerbach), Euler's circle (after Leonhard Euler), Terquem's circle (after Olry Terquem), the six-points circle, the twelve-points circle, the n-point circle, the medioscribed circle, the mid circle or the circum-midcircle. Its center is the nine-point center of the triangle. Nine significant points The diagram above shows the nine significant points of the nine-point circle. Points are the midpoints of the three sides of the triangle. Points are the feet of the altitudes of the triangle. Points are the midpoints of the line segments between each altitude's vertex intersection (points ) and the triangle's orthocenter (point ). For an acute triangle, six of the points (the midpoints and altitude feet) lie on the triangle itself; for an obtuse triangle two of the altitudes have feet outside the triangle, but these feet still belong to the nine-point circle. Discovery Although he is credited with its discovery, Karl Wilhelm Feuerbach did not entirely discover the nine-point circle, but rather the six-point circle, recognizing the significance of the midpoints of the three sides of the triangle and the feet of the altitudes of that triangle. (See Fig. 1, points .) (At a slightly earlier date, Charles Brianchon and Jean-Victor Poncelet had stated and proven the same theorem.) But soon after Feuerbach, the mathematician Olry Terquem proved the existence of the circle. 
He was the first to recognize the added significance of the three midpoints between the triangle's vertices and the orthocenter. (See Fig. 1, points .) Thus, Terquem was the first to use the name nine-point circle. Tangent circles In 1822 Karl Feuerbach discovered that any triangle's nine-point circle is externally tangent to that triangle's three excircles and internally tangent to its incircle; this result is known as Feuerbach's theorem. He proved that:... the circle which passes through the feet of the altitudes of a triangle is tangent to all four circles which in turn are tangent to the three sides of the triangle… The triangle center at which the incircle and the nine-point circle touch is called the Feuerbach point. Other properties of the nine-point circle The radius of a triangle's circumcircle is twice the radius of that triangle's nine-point circle. Figure 3 A nine-point circle bisects a line segment going from the corresponding triangle's orthocenter to any point on its circumcircle. Figure 4 The center of the nine-point circle lies at the midpoint of the segment joining the triangle's orthocenter and its circumcenter. 
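Two of these properties can be verified numerically: the nine-point center is the midpoint of the segment from the circumcenter O to the orthocenter H, and the nine-point radius is half the circumradius. A Python sketch (arbitrary example triangle; the identity H = A + B + C - 2O, with O the circumcenter, is a standard vector fact used as a shortcut):

```python
import math

def circumcenter(A, B, C):
    """Intersection of the perpendicular bisectors (standard determinant formula)."""
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay)
          + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx)
          + (cx*cx + cy*cy)*(bx - ax)) / d
    return (ux, uy)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
O = circumcenter(A, B, C)
H = (A[0] + B[0] + C[0] - 2*O[0], A[1] + B[1] + C[1] - 2*O[1])  # orthocenter
N = ((O[0] + H[0]) / 2, (O[1] + H[1]) / 2)  # nine-point center: midpoint of OH

R = math.dist(O, A)                          # circumradius
mid_AB = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)  # one of the nine points
print(math.isclose(math.dist(N, mid_AB), R / 2))  # True: nine-point radius = R/2
```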
https://en.wikipedia.org/wiki/Incircle%20and%20excircles
In geometry, the incircle or inscribed circle of a triangle is the largest circle that can be contained in the triangle; it touches (is tangent to) the three sides. The center of the incircle is a triangle center called the triangle's incenter. An excircle or escribed circle of the triangle is a circle lying outside the triangle, tangent to one of its sides and tangent to the extensions of the other two. Every triangle has three distinct excircles, each tangent to one of the triangle's sides. The center of the incircle, called the incenter, can be found as the intersection of the three internal angle bisectors. The center of an excircle is the intersection of the internal bisector of one angle (at vertex A, for example) and the external bisectors of the other two. The center of this excircle is called the excenter relative to the vertex A, or the excenter of A. Because the internal bisector of an angle is perpendicular to its external bisector, it follows that the center of the incircle together with the three excircle centers form an orthocentric system. Every triangle has an incircle, but not all polygons do; those that do are tangential polygons. See also tangent lines to circles. Incircle and incenter Suppose △ABC has an incircle with radius r and center I. Let a be the length of BC, b the length of CA, and c the length of AB. Also let TA, TB, and TC be the touchpoints where the incircle touches BC, CA, and AB. Incenter The incenter is the point where the internal angle bisectors of the angles at A, B, and C meet. The distance from vertex A to the incenter is |AI| = r / sin(A/2). Trilinear coordinates The trilinear coordinates for a point in the triangle are the ratios of its distances to the triangle sides. Because the incenter is the same distance from all sides of the triangle, the trilinear coordinates for the incenter are 1 : 1 : 1. Barycentric coordinates The barycentric coordinates for a point in a triangle give weights such that the point is the weighted average of the triangle vertex positions. 
Barycentric coordinates for the incenter are given by a : b : c, where a, b, and c are the lengths of the sides of the triangle, or equivalently (using the law of sines) by sin A : sin B : sin C, where A, B, and C are the angles at the three vertices. Cartesian coordinates The Cartesian coordinates of the incenter are a weighted average of the coordinates of the three vertices using the side lengths of the triangle relative to the perimeter (that is, using the barycentric coordinates given above, normalized to sum to unity) as weights. The weights are positive so the incenter lies inside the triangle as stated above. If the three vertices are located at (xa, ya), (xb, yb), and (xc, yc), and the sides opposite these vertices have corresponding lengths a, b, and c, then the incenter is at ((a xa + b xb + c xc) / (a + b + c), (a ya + b yb + c yc) / (a + b + c)). Radius The inradius r of the incircle in a triangle with sides of length a, b, c is given by r = √((s − a)(s − b)(s − c) / s), where s = (a + b + c)/2 is the semiperimeter. The tangency points of the incircle divide the sides into segments: the two tangent segments from vertex A each have length s − a, those from B have length s − b, and those from C have length s − c. See Heron's formula. Distances to the vertices Denoting the incenter of △ABC as I, the distances from the incenter to the vertices are |AI| = r / sin(A/2), and similarly for B and C. 
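These formulas can be checked directly. A Python sketch for the 3-4-5 right triangle (the function name `incircle` is ours):

```python
import math

def incircle(A, B, C):
    """Incenter and inradius from vertex coordinates.

    Incenter = side-length-weighted average of the vertices (barycentric a : b : c);
    inradius r = sqrt((s - a)(s - b)(s - c) / s), from Heron's formula.
    """
    a = math.dist(B, C)  # side opposite A
    b = math.dist(C, A)
    c = math.dist(A, B)
    p = a + b + c
    I = ((a*A[0] + b*B[0] + c*C[0]) / p, (a*A[1] + b*B[1] + c*C[1]) / p)
    s = p / 2
    r = math.sqrt((s - a) * (s - b) * (s - c) / s)
    return I, r

I, r = incircle((0.0, 0.0), (3.0, 0.0), (0.0, 4.0))
print(I)  # (1.0, 1.0), for the 3-4-5 right triangle
print(r)  # 1.0
```

For a right triangle with legs 3 and 4 this agrees with the well-known shortcut r = (leg1 + leg2 - hypotenuse) / 2 = (3 + 4 - 5) / 2 = 1.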
https://en.wikipedia.org/wiki/Circumscribed%20circle
In geometry, a circumscribed circle for a set of points is a circle passing through each of them. Such a circle is said to circumscribe the points or a polygon formed from them; such a polygon is said to be inscribed in the circle. Circumcircle, the circumscribed circle of a triangle, which always exists for a given triangle. Cyclic polygon, a general polygon that can be circumscribed by a circle. The vertices of this polygon are concyclic points. All triangles are cyclic polygons. Cyclic quadrilateral, a special case of a cyclic polygon. See also Smallest-circle problem, the related problem of finding the circle with minimal radius containing an arbitrary set of points, not necessarily passing through them.
https://en.wikipedia.org/wiki/Orthocentric%20system
In geometry, an orthocentric system is a set of four points on a plane, one of which is the orthocenter of the triangle formed by the other three. Equivalently, the lines passing through disjoint pairs among the points are perpendicular, and the four circles passing through any three of the four points have the same radius. If four points form an orthocentric system, then each of the four points is the orthocenter of the other three. These four possible triangles will all have the same nine-point circle. Consequently these four possible triangles must all have circumcircles with the same circumradius. The common nine-point circle The center of this common nine-point circle lies at the centroid of the four orthocentric points. The radius of the common nine-point circle is the distance from the nine-point center to the midpoint of any of the six connectors that join any pair of orthocentric points through which the common nine-point circle passes. The nine-point circle also passes through the three orthogonal intersections at the feet of the altitudes of the four possible triangles. This common nine-point center lies at the midpoint of the connector that joins any orthocentric point to the circumcenter of the triangle formed from the other three orthocentric points. The common nine-point circle is tangent to all 16 incircles and excircles of the four triangles whose vertices form the orthocentric system. The common orthic triangle, its incenter, and its excenters If the six connectors that join any pair of orthocentric points are extended to six lines that intersect each other, they generate seven intersection points. Four of these points are the original orthocentric points and the additional three points are the orthogonal intersections at the feet of the altitudes. 
The joining of these three orthogonal points into a triangle generates an orthic triangle that is common to all the four possible triangles formed from the four orthocentric points taken three at a time. The incenter of this common orthic triangle must be one of the original four orthocentric points. Furthermore, the three remaining points become the excenters of this common orthic triangle. The orthocentric point that becomes the incenter of the orthic triangle is that orthocentric point closest to the common nine-point center. This relationship between the orthic triangle and the original four orthocentric points leads directly to the fact that the incenter and excenters of a reference triangle form an orthocentric system. It is normal to distinguish one of the orthocentric points from the others, specifically the one that is the incenter of the orthic triangle; this one is denoted as the orthocenter of the outer three orthocentric points that are chosen as a reference triangle . In this normalized configuration, the point will always lie within the triangle , and all the angles of triangle will be acute. The four possible triangles referred above are then triangles . T
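The defining property of an orthocentric system (each of the four points is the orthocenter of the triangle formed by the other three) can be checked numerically. A Python sketch with an arbitrary starting triangle; it uses the standard identity H = A + B + C - 2O, where O is the circumcenter:

```python
import math

def orthocenter(A, B, C):
    """Orthocenter via H = A + B + C - 2*O, with O the circumcenter."""
    ax, ay = A; bx, by = B; cx, cy = C
    d = 2 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay)
          + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx)
          + (cx*cx + cy*cy)*(bx - ax)) / d
    return (ax + bx + cx - 2*ux, ay + by + cy - 2*uy)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
H = orthocenter(A, B, C)  # fourth point of the orthocentric system

# Each point is the orthocenter of the triangle formed by the other three:
for P, rest in [(A, (B, C, H)), (B, (A, C, H)), (C, (A, B, H))]:
    Q = orthocenter(*rest)
    assert math.isclose(Q[0], P[0], abs_tol=1e-9)
    assert math.isclose(Q[1], P[1], abs_tol=1e-9)
print("each point is the orthocenter of the other three")
```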
https://en.wikipedia.org/wiki/List%20of%20geometers
A geometer is a mathematician whose area of study is geometry. Some notable geometers and their main fields of work, chronologically listed, are: 1000 BCE to 1 BCE Baudhayana (fl. c. 800 BC) – Euclidean geometry Manava (c. 750 BC–690 BC) – Euclidean geometry Thales of Miletus (c. 624 BC – c. 546 BC) – Euclidean geometry Pythagoras (c. 570 BC – c. 495 BC) – Euclidean geometry, Pythagorean theorem Zeno of Elea (c. 490 BC – c. 430 BC) – Euclidean geometry Hippocrates of Chios (born c. 470 – 410 BC) – first systematically organized Stoicheia – Elements (geometry textbook) Mozi (c. 468 BC – c. 391 BC) Plato (427–347 BC) Theaetetus (c. 417 BC – 369 BC) Autolycus of Pitane (360–c. 290 BC) – astronomy, spherical geometry Euclid (fl. 300 BC) – Elements, Euclidean geometry (sometimes called the "father of geometry") Apollonius of Perga (c. 262 BC – c. 190 BC) – Euclidean geometry, conic sections Archimedes (c. 287 BC – c. 212 BC) – Euclidean geometry Eratosthenes (c. 276 BC – c. 195/194 BC) – Euclidean geometry Katyayana (c. 3rd century BC) – Euclidean geometry 1–1300 AD Hero of Alexandria (c. AD 10–70) – Euclidean geometry Pappus of Alexandria (c. AD 290–c. 350) – Euclidean geometry, projective geometry Hypatia of Alexandria (c. AD 370–c. 415) – Euclidean geometry Brahmagupta (597–668) – Euclidean geometry, cyclic quadrilaterals Vergilius of Salzburg (c.700–784) – Irish bishop of Aghaboe, Ossory and later Salzburg, Austria; antipodes, and astronomy Al-Abbās ibn Said al-Jawharī (c. 800–c. 860) Thabit ibn Qurra (826–901) – analytic geometry, non-Euclidean geometry, conic sections Abu'l-Wáfa (940–998) – spherical geometry, spherical triangles Alhazen (965–c. 1040) Omar Khayyam (1048–1131) – algebraic geometry, conic sections Ibn Maḍāʾ (1116–1196) 1301–1800 AD Piero della Francesca (1415–1492) Leonardo da Vinci (1452–1519) – Euclidean geometry Jyesthadeva (c. 1500 – c. 
1610) – Euclidean geometry, cyclic quadrilaterals Marin Getaldić (1568–1626) Jacques-François Le Poivre (1652–1710) – projective geometry Johannes Kepler (1571–1630) – (used geometric ideas in astronomical work) Edmund Gunter (1581–1626) Girard Desargues (1591–1661) – projective geometry; Desargues' theorem René Descartes (1596–1650) – invented the methodology of analytic geometry, also called Cartesian geometry after him Pierre de Fermat (1607–1665) – analytic geometry Blaise Pascal (1623–1662) – projective geometry Christiaan Huygens (1629–1695) – evolute Giordano Vitale (1633–1711) Philippe de La Hire (1640–1718) – projective geometry Isaac Newton (1642–1727) – 3rd-degree algebraic curve Giovanni Ceva (1647–1734) – Euclidean geometry Johann Jacob Heber (1666–1727) – surveyor and geometer Giovanni Gerolamo Saccheri (1667–1733) – non-Euclidean geometry Leonhard Euler (1707–1783) Tobias Mayer (1723–1762) Johann Heinrich Lambert (1728–1777) – non-Euclidean geometry Gaspard Monge (1746–1818) – descriptive geometry John Playfair (1748–1819)
https://en.wikipedia.org/wiki/Olry%20Terquem
Olry Terquem (16 June 1782 – 6 May 1862) was a French mathematician. He is known for his works in geometry and for founding two scientific journals, one of which was the first journal about the history of mathematics. He was also the pseudonymous author (as Tsarphati) of a sequence of letters advocating radical reform in Judaism. He was French Jewish. Education and career Terquem grew up speaking Yiddish, and studying only the Hebrew language and the Talmud. However, after the French revolution his family came into contact with a wider society, and his studies broadened. Despite his poor French he was admitted to study mathematics at the École Polytechnique in Paris, beginning in 1801, as only the second Jew to study there. He became an assistant there in 1803, and earned his doctorate in 1804. After finishing his studies he moved to Mainz (at that time known as Mayence and part of imperial France), where he taught at the Imperial Lycée. In 1811 he moved to the artillery school in the same city, in 1814 he moved again to the artillery school in Grenoble, and in 1815 he became the librarian of the Dépôt Central de l'Artillerie in Paris, where he remained for the rest of his life. He became an officer of the Legion of Honor in 1852. After he died, his funeral was officiated by Lazare Isidor, the Chief Rabbi of Paris and later of France, and attended by over 12 generals headed by Edmond Le Bœuf. Mathematics Terquem translated works concerning artillery, was the author of several textbooks, and became an expert on the history of mathematics. Terquem and Camille-Christophe Gerono were the founding editors of the Nouvelles Annales de Mathématiques in 1842. Terquem also founded another journal in 1855, the Bulletin de Bibliographie, d'Histoire et de Biographie de Mathématiques, which was published as a supplement to the Nouvelles Annales, and he continued editing it until 1861. This was the first journal dedicated to the history of mathematics. 
In geometry, Terquem is known for naming the nine-point circle and fully proving its properties. This is a circle that passes through nine special points of any given triangle. Karl Wilhelm Feuerbach had previously observed that the three feet of the altitudes of a triangle and the three midpoints of its sides all lie on a single circle, but Terquem was the first to prove that this circle also contains the midpoints of the line segments connecting each vertex to the orthocenter of the triangle. He also gave a new proof of Feuerbach's theorem that the nine-point circle is tangent to the incircle and excircles of a triangle. Terquem's other contributions to mathematics include naming the pedal curve of another curve, and counting the number of perpendicular lines from a point to an algebraic curve as a function of the degree of the curve. He was also the first to observe that the minimum or maximum value of a symmetric function is often obtained by setting all variables equal to each other. Jewish activism Te
https://en.wikipedia.org/wiki/PlanetMath
PlanetMath is a free, collaborative online mathematics encyclopedia. The emphasis is on rigour, openness, pedagogy, real-time content, interlinked content, and a community of about 24,000 people with various mathematical interests. Intended to be comprehensive, the project is currently hosted by the University of Waterloo. The site is owned by a US-based nonprofit corporation, "PlanetMath.org, Ltd". PlanetMath was started when the popular free online mathematics encyclopedia MathWorld was temporarily taken offline for 12 months by a court injunction as a result of the CRC Press lawsuit against the Wolfram Research company and its employee (and MathWorld's author) Eric Weisstein. Materials The main PlanetMath focus is on encyclopedic entries. It formerly operated a self-hosted forum, but now encourages discussion via Gitter. At last count, the encyclopedia hosted about 9,289 entries and over 16,258 concepts (a concept may be, for example, a specific notion defined within a more general entry). An overview of the current PlanetMath contents is also available. About 300 Wikipedia entries incorporate text from PlanetMath articles; they are listed in :Category:Wikipedia articles incorporating text from PlanetMath. An all-inclusive PlanetMath Free Encyclopedia book of 2,300 pages is available for the encyclopedia contents up to 2006 as a free PDF download. Content development model PlanetMath implements a specific content creation system called the authority model. An author who starts a new article becomes its owner, that is, the only person authorized to edit that article. Other users may add corrections and discuss improvements, but the resulting modifications of the article, if any, are always made by the owner. However, if there are long-lasting unresolved corrections, the ownership can be removed. 
More precisely, after 2 weeks the system starts to remind the owner by mail; at 6 weeks any user can "adopt" the article; at 8 weeks the ownership of the entry is completely removed (and such an entry is called "orphaned"). To make development smoother, the owner may also choose to grant editing rights to other individuals or groups. The user can explicitly create links to other articles, and the system also automatically turns certain words into links to the defining articles. The topic area of every article is classified by the Mathematics Subject Classification (MSC) of the American Mathematical Society (AMS). The site is supervised by the Content Committee. Its basic mission is to maintain the integrity and quality of the mathematical content and organization of PlanetMath. As defined in its Charter, the tasks of the Committee include: Developing/maintaining the standards for PlanetMath content Improving individual PlanetMath entries (in its Encyclopedia, Book, Paper, and Exposition sections) Developing topic areas Developing/improving site and user documentation Managing the PlanetMath Request list and Unproved Theorems list Improving categorization
https://en.wikipedia.org/wiki/Truth%20value
In logic and mathematics, a truth value, sometimes called a logical value, is a value indicating the relation of a proposition to truth, which in classical logic has only two possible values (true or false). Computing In some programming languages, any expression can be evaluated in a context that expects a Boolean data type. Typically (though this varies by programming language) expressions like the number zero, the empty string, empty lists, and null evaluate to false, while strings with content (like "abc"), other numbers, and objects evaluate to true. Sometimes these classes of expressions are called "truthy" and "falsy". Classical logic In classical logic, with its intended semantics, the truth values are true (denoted by 1 or the verum ⊤) and untrue or false (denoted by 0 or the falsum ⊥); that is, classical logic is a two-valued logic. This set of two values is also called the Boolean domain. The corresponding semantics of logical connectives are truth functions, whose values are expressed in the form of truth tables. The logical biconditional becomes the equality binary relation, and negation becomes a bijection which permutes true and false. Conjunction and disjunction are dual with respect to negation, which is expressed by De Morgan's laws: ¬(p ∧ q) = ¬p ∨ ¬q and ¬(p ∨ q) = ¬p ∧ ¬q. Propositional variables become variables in the Boolean domain. Assigning values to propositional variables is referred to as valuation. Intuitionistic and constructive logic In intuitionistic logic, and more generally in constructive mathematics, statements are assigned a truth value only if they can be given a constructive proof. It starts with a set of axioms, and a statement is true if one can build a proof of the statement from those axioms. A statement is false if one can deduce a contradiction from it. This leaves open the possibility of statements that have not yet been assigned a truth value. 
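The "truthy"/"falsy" convention can be demonstrated concretely in Python, whose `bool()` coercion follows exactly the pattern described above:

```python
# Values conventionally treated as "falsy" in Python's boolean contexts:
falsy = [0, 0.0, "", [], {}, set(), None, False]
print(all(not bool(v) for v in falsy))  # True

# Nonzero numbers and non-empty strings/containers are "truthy".
# Note that the *string* "False" is non-empty, hence truthy:
truthy = [1, -1, 3.14, "abc", "False", [0], {"k": 0}]
print(all(bool(v) for v in truthy))  # True
```

Other languages draw the line differently (JavaScript treats `NaN` as falsy; Ruby treats `0` and `""` as truthy), which is why the article hedges with "though this varies by programming language".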
Unproven statements in intuitionistic logic are not given an intermediate truth value (as is sometimes mistakenly asserted). Indeed, one can prove that they have no third truth value, a result dating back to Glivenko in 1928. Instead, statements simply remain of unknown truth value until they are either proven or disproven. There are various ways of interpreting intuitionistic logic, including the Brouwer–Heyting–Kolmogorov interpretation. Multi-valued logic Multi-valued logics (such as fuzzy logic and relevance logic) allow for more than two truth values, possibly containing some internal structure. For example, on the unit interval such structure is a total order; this may be expressed as the existence of various degrees of truth. Algebraic semantics Not all logical systems are truth-valuational in the sense that logical connectives may be interpreted as truth functions. For example, intuitionistic logic lacks a complete set of truth values because its semantics, the Brouwer–Heyting–Kolmogorov interpretation, is specified in terms of provability conditions, and not directly in terms of truth.
https://en.wikipedia.org/wiki/The%20Doctrine%20of%20Chances
The Doctrine of Chances was the first textbook on probability theory, written by 18th-century French mathematician Abraham de Moivre and first published in 1718. De Moivre wrote in English because he resided in England at the time, having fled France to escape the persecution of Huguenots. The book's title came to be synonymous with probability theory, and accordingly the phrase was used in Thomas Bayes' famous posthumous paper An Essay towards solving a Problem in the Doctrine of Chances, wherein a version of Bayes' theorem was first introduced. Editions The full title of the first edition was The doctrine of chances: or, a method for calculating the probabilities of events in play; it was published in 1718, by W. Pearson, and ran for 175 pages. Published in 1738 by Woodfall and running for 258 pages, the second edition of de Moivre's book introduced the concept of normal distributions as approximations to binomial distributions. In effect de Moivre proved a special case of the central limit theorem. Sometimes his result is called the theorem of de Moivre–Laplace. A third edition was published posthumously in 1756 by A. Millar, and ran for 348 pages; additional material in this edition included an application of probability theory to actuarial science in the calculation of annuities.
https://en.wikipedia.org/wiki/Scalar%20field
In mathematics and physics, a scalar field is a function associating a single number with every point in a space – possibly physical space. The scalar may either be a pure mathematical number (dimensionless) or a scalar physical quantity (with units). In a physical context, scalar fields are required to be independent of the choice of reference frame. That is, any two observers using the same units will agree on the value of the scalar field at the same absolute point in space (or spacetime) regardless of their respective points of origin. Examples used in physics include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields, such as the Higgs field. These fields are the subject of scalar field theory. Definition Mathematically, a scalar field on a region U is a real or complex-valued function or distribution on U. The region U may be a set in some Euclidean space, Minkowski space, or more generally a subset of a manifold, and it is typical in mathematics to impose further conditions on the field, such as that it be continuous or continuously differentiable to some order. A scalar field is a tensor field of order zero, and the term "scalar field" may be used to distinguish a function of this kind from a more general tensor field, density, or differential form. Physically, a scalar field is additionally distinguished by having units of measurement associated with it. In this context, a scalar field should also be independent of the coordinate system used to describe the physical system; that is, any two observers using the same units must agree on the numerical value of a scalar field at any given point of physical space. Scalar fields are contrasted with other physical quantities such as vector fields, which associate a vector to every point of a region, as well as tensor fields and spinor fields. More subtly, scalar fields are often contrasted with pseudoscalar fields. 
Uses in physics In physics, scalar fields often describe the potential energy associated with a particular force. The force is a vector field, which can be obtained as the negative gradient of the potential energy scalar field. Examples include: Potential fields, such as the Newtonian gravitational potential, or the electric potential in electrostatics, are scalar fields which describe the more familiar forces. A temperature, humidity, or pressure field, such as those used in meteorology. Examples in quantum theory and relativity In quantum field theory, a scalar field is associated with spin-0 particles. The scalar field may be real or complex valued. Complex scalar fields represent charged particles. These include the Higgs field of the Standard Model, as well as the charged pions mediating the strong nuclear interaction. In the Standard Model of elementary particles, a scalar Higgs field is used to give the leptons and massive vector bosons their mass, via a combination of the Yukawa interaction and the spontaneous symmetry breaking.
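The "force as negative gradient of a scalar potential" relationship can be illustrated numerically. A minimal Python sketch (the harmonic-oscillator potential and the helper names are illustrative choices, not from the article):

```python
def potential(x, y):
    """A scalar field: potential energy of a 2-D harmonic oscillator, V = (x^2 + y^2) / 2."""
    return 0.5 * (x * x + y * y)

def force(x, y, h=1e-6):
    """F = -grad V, approximated by central finite differences."""
    fx = -(potential(x + h, y) - potential(x - h, y)) / (2 * h)
    fy = -(potential(x, y + h) - potential(x, y - h)) / (2 * h)
    return fx, fy

fx, fy = force(2.0, -1.0)
# Analytically F = (-x, -y), so at (2, -1) we expect roughly (-2.0, 1.0):
print(round(fx, 6), round(fy, 6))  # -2.0 1.0
```

Note the restoring force points back toward the origin, the minimum of the scalar potential, as expected.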
https://en.wikipedia.org/wiki/Derangement
In combinatorial mathematics, a derangement is a permutation of the elements of a set in which no element appears in its original position. In other words, a derangement is a permutation that has no fixed points. The number of derangements of a set of size n is known as the subfactorial of n or the n-th derangement number or n-th de Montmort number (after Pierre Remond de Montmort). Notations for subfactorials in common use include !n, Dn, dn, or n¡. For n > 0, the subfactorial !n equals the nearest integer to n!/e, where n! denotes the factorial of n and e is Euler's number. The problem of counting derangements was first considered by Pierre Raymond de Montmort in his Essay d'analyse sur les jeux de hazard in 1708; he solved it in 1713, as did Nicholas Bernoulli at about the same time. Example Suppose that a professor gave a test to 4 students – A, B, C, and D – and wants to let them grade each other's tests. Of course, no student should grade their own test. How many ways could the professor hand the tests back to the students for grading, such that no student received their own test back? Out of the 24 possible permutations (4!) for handing back the tests,

ABCD, ABDC, ACBD, ACDB, ADBC, ADCB,
BACD, BADC, BCAD, BCDA, BDAC, BDCA,
CABD, CADB, CBAD, CBDA, CDAB, CDBA,
DABC, DACB, DBAC, DBCA, DCAB, DCBA,

only the 9 orders BADC, BCDA, BDAC, CADB, CDAB, CDBA, DABC, DCAB, and DCBA are derangements. In every other permutation of this 4-member set, at least one student gets their own test back. Another version of the problem arises when we ask for the number of ways n letters, each addressed to a different person, can be placed in n pre-addressed envelopes so that no letter appears in the correctly addressed envelope. 
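The example can be checked by brute force, enumerating all 24 handback orders and keeping those with no fixed point:

```python
from itertools import permutations

# Count the handback orders in which no student gets their own test back:
students = "ABCD"
derangements = [p for p in permutations(students)
                if all(x != y for x, y in zip(p, students))]
print(len(derangements))          # 9
print("".join(derangements[0]))   # BADC (first derangement in lexicographic order)
```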
Counting derangements Counting derangements of a set amounts to the hat-check problem, in which one considers the number of ways in which n hats (call them h1 through hn) can be returned to n people (P1 through Pn) such that no hat makes it back to its owner. Each person may receive any of the n − 1 hats that is not their own. Call the hat which the person P1 receives hi and consider hi's owner: Pi receives either P1's hat, h1, or some other. Accordingly, the problem splits into two possible cases: Pi receives a hat other than h1. This case is equivalent to solving the problem with n − 1 people and n − 1 hats because for each of the n − 1 people besides P1 there is exactly one hat from among the remaining n − 1 hats that they may not receive (for any Pj besides Pi, the unreceivable hat is hj, while for Pi it is h1). Another way to see this is to rename h1 to hi, where the derangement is more explicit: for any j from 2 to n, Pj cannot receive hj. Pi receives h1. In this case the problem reduces to n − 2 people and n − 2 hats, because P1 received hi's hat and Pi received h1, effectively putting both out of further consideration. For each of the n − 1 possible choices of hi, these two cases together contribute Dn−1 + Dn−2 arrangements, which yields the recurrence Dn = (n − 1)(Dn−1 + Dn−2).
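The case analysis above gives the standard recurrence D(n) = (n - 1) * (D(n-1) + D(n-2)), which is easy to implement; a short Python sketch (the function name `subfactorial` is ours):

```python
from math import e, factorial

def subfactorial(n):
    """Derangement numbers via the recurrence D(n) = (n - 1) * (D(n-1) + D(n-2))."""
    if n == 0:
        return 1
    if n == 1:
        return 0
    d_prev2, d_prev = 1, 0  # D(0), D(1)
    for k in range(2, n + 1):
        d_prev2, d_prev = d_prev, (k - 1) * (d_prev + d_prev2)
    return d_prev

print([subfactorial(n) for n in range(7)])  # [1, 0, 1, 2, 9, 44, 265]

# For n > 0, !n is the integer nearest n!/e:
print(all(subfactorial(n) == round(factorial(n) / e) for n in range(1, 12)))  # True
```

The value 9 at n = 4 matches the test-grading example. (The nearest-integer check uses floating point, so it is reliable only for moderate n; an exact implementation would use integer arithmetic throughout.)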
https://en.wikipedia.org/wiki/Chris%20Freiling
Christopher Francis Freiling is a mathematician responsible for Freiling's axiom of symmetry in set theory. He has also made significant contributions to coding theory, in the process establishing connections between that field and matroid theory. Freiling obtained his Ph.D. in 1981 from the University of California, Los Angeles under the supervision of Donald A. Martin. He is a member of the faculty of the Department of Mathematics at California State University, San Bernardino. Selected publications . . References External links Home page 20th-century American mathematicians 21st-century American mathematicians Set theorists Coding theorists University of California, Los Angeles alumni California State University, San Bernardino faculty Living people Year of birth missing (living people)
https://en.wikipedia.org/wiki/Freiling%27s%20axiom%20of%20symmetry
Freiling's axiom of symmetry (AX) is a set-theoretic axiom proposed by Chris Freiling. It is based on an intuition of Stuart Davidson, but the mathematics behind it goes back to Wacław Sierpiński. Let A denote the set of all functions from [0, 1] to countable subsets of [0, 1]. The axiom states: For every f ∈ A, there exist x and y such that x ∉ f(y) and y ∉ f(x). A theorem of Sierpiński says that under the assumptions of ZFC set theory, AX is equivalent to the negation of the continuum hypothesis (CH). Sierpiński's theorem answered a question of Hugo Steinhaus and was proved long before the independence of CH had been established by Kurt Gödel and Paul Cohen. Freiling claims that probabilistic intuition strongly supports this proposition, while others disagree. There are several versions of the axiom, some of which are discussed below. Freiling's argument Fix a function f in A. We will consider a thought experiment that involves throwing two darts at the unit interval. We are not able to physically determine with infinite accuracy the actual values of the numbers x and y that are hit. Likewise, the question of whether "y is in f(x)" cannot actually be physically computed. Nevertheless, if f really is a function, then this question is a meaningful one and will have a definite "yes" or "no" answer. Now wait until after the first dart, x, is thrown and then assess the chances that the second dart y will be in f(x). Since x is now fixed, f(x) is a fixed countable set and has Lebesgue measure zero. Therefore, this event, with x fixed, has probability zero. Freiling now makes two generalizations: Since we can predict with virtual certainty that "y is not in f(x)" after the first dart is thrown, and since this prediction is valid no matter what the first dart does, we should be able to make this prediction before the first dart is thrown. This is not to say that we still have a measurable event, rather it is an intuition about the nature of being predictable. 
Since "y is not in f(x)" is predictably true, by the symmetry of the order in which the darts were thrown (hence the name "axiom of symmetry") we should also be able to predict with virtual certainty that "x is not in f(y)". The axiom is now justified based on the principle that what will predictably happen every time this experiment is performed, should at the very least be possible. Hence there should exist two real numbers x, y such that x is not in f(y) and y is not in f(x). Relation to the (Generalised) Continuum Hypothesis Fix an infinite cardinal $\kappa$ (e.g. $\aleph_0$). Let $A_\kappa$ be the statement: there is no map $f \colon \mathbb{R} \to \mathcal{P}(\mathbb{R})$ from sets to sets of size at most $\kappa$ for which either $x \in f(y)$ or $y \in f(x)$ for every pair $x, y$. Claim: $\mathrm{ZFC} \vdash A_\kappa \iff 2^{\aleph_0} > \kappa^{+}$. Proof: Part I ($\Rightarrow$): Suppose $2^{\aleph_0} \leq \kappa^{+}$. Then there exists an injection $\sigma \colon \mathbb{R} \to \kappa^{+}$. Setting $f$ defined via $f(x) = \{y : \sigma(y) \leq \sigma(x)\}$, so that each value of $f$ has size at most $\kappa$, it is easy to see that this demonstrates the failure of Freiling's axiom. Part II ($\Leftarrow$): Suppose that Freiling's axiom fails. Then fix some $f$ to verify this fact. Define an order relation on $\mathbb{R}$ by $x \preceq y$ iff $x \in f(y)$. This relation is total and every point has at most $\kappa$ many predecessors. Define now a strictl
https://en.wikipedia.org/wiki/Nimber
In mathematics, the nimbers, also called Grundy numbers, are introduced in combinatorial game theory, where they are defined as the values of heaps in the game Nim. The nimbers are the ordinal numbers endowed with nimber addition and nimber multiplication, which are distinct from ordinal addition and ordinal multiplication. Because of the Sprague–Grundy theorem which states that every impartial game is equivalent to a Nim heap of a certain size, nimbers arise in a much larger class of impartial games. They may also occur in partisan games like Domineering. The nimber addition and multiplication operations are associative and commutative. Each nimber is its own additive inverse. In particular for some pairs of ordinals, their nimber sum is smaller than either addend. The minimum excludant operation is applied to sets of nimbers. Uses Nim Nim is a game in which two players take turns removing objects from distinct heaps. As moves depend only on the position and not on which of the two players is currently moving, and where the payoffs are symmetric, Nim is an impartial game. On each turn, a player must remove at least one object, and may remove any number of objects provided they all come from the same heap. The goal of the game is to be the player who removes the last object. The nimber of a heap is simply the number of objects in that heap. Using nim addition, one can calculate the nimber of the game as a whole. The winning strategy is to force the nimber of the game to 0 for the opponent's turn. Cram Cram is a game often played on a rectangular board in which players take turns placing dominoes either horizontally or vertically until no more dominoes can be placed. The first player that cannot make a move loses. As the possible moves for both players are the same, it is an impartial game and can have a nimber value. For example, any board that is an even size by an even size will have a nimber of 0. Any board that is even by odd will have a non-zero nimber. 
Any $2 \times n$ board will have a nimber of 0 for all even $n$ and a nimber of 1 for all odd $n$. Northcott's game In Northcott's game, pegs for each player are placed along a column with a finite number of spaces. On each turn, a player must move their piece up or down the column, but may not move past the other player's piece. Several columns are stacked together to add complexity. The player that can no longer make any moves loses. Unlike many other nimber-related games, the numbers of spaces between the two pegs in each row are the sizes of the Nim heaps. If your opponent increases the number of spaces between two pegs, just decrease it on your next move. Otherwise, play the game of Nim and make the Nim-sum of the numbers of spaces between the pegs in each row equal to 0. Hackenbush Hackenbush is a game invented by mathematician John Horton Conway. It may be played on any configuration of colored line segments connected to one another by their endpoints and to a "ground" line. Players take turns
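The minimum excludant and nim-addition described above can be sketched in code. A minimal Python sketch (function names are illustrative): the Grundy value of a Nim position, computed recursively with mex over all available moves, agrees with the XOR shortcut, as the Sprague–Grundy theorem guarantees for Nim.

```python
from functools import reduce

def mex(s):
    """Minimum excludant: smallest non-negative integer not in s."""
    n = 0
    while n in s:
        n += 1
    return n

def nim_sum(heaps):
    """Nimber of a Nim position: the XOR (nim-sum) of the heap sizes."""
    return reduce(lambda a, b: a ^ b, heaps, 0)

def grundy_nim(heaps):
    """Grundy value computed directly from the game tree, to check it
    against the XOR shortcut."""
    heaps = tuple(sorted(heaps))
    options = set()
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):           # remove `take` objects from heap i
            child = heaps[:i] + (h - take,) + heaps[i + 1:]
            options.add(grundy_nim(child))
    return mex(options)

# The Sprague-Grundy value agrees with the nim-sum; a nonzero value
# means the player to move can win:
assert grundy_nim((3, 5, 7)) == nim_sum((3, 5, 7))
```

The winning strategy mentioned above is to move so that the opponent always faces a position whose nim-sum is 0.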
https://en.wikipedia.org/wiki/Karl%20Pearson
Karl Pearson (; born Carl Pearson; 27 March 1857 – 27 April 1936) was an English mathematician and biostatistician. He has been credited with establishing the discipline of mathematical statistics. He founded the world's first university statistics department at University College London in 1911, and contributed significantly to the field of biometrics and meteorology. Pearson was also a proponent of Social Darwinism and eugenics, and his thought is an example of what is today described as scientific racism. Pearson was a protégé and biographer of Sir Francis Galton. He edited and completed both William Kingdon Clifford's Common Sense of the Exact Sciences (1885) and Isaac Todhunter's History of the Theory of Elasticity, Vol. 1 (1886–1893) and Vol. 2 (1893), following their deaths. Early life and education Pearson was born in Islington, London, into a Quaker family. His father was William Pearson QC of the Inner Temple, and his mother Fanny (née Smith), and he had two siblings, Arthur and Amy. Pearson attended University College School, followed by King's College, Cambridge, in 1876 to study mathematics, graduating in 1879 as Third Wrangler in the Mathematical Tripos. He then travelled to Germany to study physics at the University of Heidelberg under G. H. Quincke and metaphysics under Kuno Fischer. He next visited the University of Berlin, where he attended the lectures of the physiologist Emil du Bois-Reymond on Darwinism (Emil was a brother of Paul du Bois-Reymond, the mathematician). Pearson also studied Roman Law, taught by Bruns and Mommsen, medieval and 16th century German Literature, and Socialism. He became an accomplished historian and Germanist and spent much of the 1880s in Berlin, Heidelberg, Vienna, Saig bei Lenzkirch, and Brixlegg. He wrote on Passion plays, religion, Goethe, Werther, as well as sex-related themes, and was a founder of the Men and Women's Club. Pearson was offered a Germanics post at King's College, Cambridge. 
Comparing Cambridge students to those he knew from Germany, Karl found German students unathletic and weak. He wrote to his mother, "I used to think athletics and sport was overestimated at Cambridge, but now I think it cannot be too highly valued." On returning to England in 1880, Pearson first went to Cambridge: In his first book, The New Werther, Pearson gives a clear indication of why he studied so many diverse subjects: Pearson then returned to London to study law, emulating his father. Quoting Pearson's own account: Career His next career move was to the Inner Temple, where he read law until 1881 (although he never practised). After this, he returned to mathematics, deputising for the mathematics professor at King's College, London in 1881 and for the professor at University College London in 1883. In 1884, he was appointed to the Goldsmid Chair of Applied Mathematics and Mechanics at University College London. Pearson became the editor of Common Sense of the Exact Sciences (1885) when William Kingd
https://en.wikipedia.org/wiki/Timeline%20of%20the%20Israeli%E2%80%93Palestinian%20conflict%20in%202003
Note: The death toll quoted here is just the sum of the listings. There may be many omissions from the list. The human rights organisation B'Tselem has compiled statistics of about 600 deaths during 2003 in the occupied territories alone. Note: This compilation includes only those attacks that resulted in Israeli casualties. This list does not include all the deaths of Palestinians. The numerous other attacks which failed to kill, maim, or wound are not included. January (death toll) 1 January: Tareq Ziad Duas 15-year-old, killed on 1 January 2003. Sami Zidan 22-year-old, killed on 1 January 2003, Muhammad 'Atiyyah Duas 15-year-old killed on 1 January 2003, Jihad Jum'ah 'Abd 5-year-old, killed on 1 January 2003 2 January: The body of a 72-year-old Israeli was found in the northern Jordan Valley in his burned-out car. The Fatah Al-Aqsa Martyrs' Brigades claimed responsibility for the murder. Tammer Khader 21-year-old, killed on 2 January 2003 5 January: 23 people, including eight foreigners, were killed in two nearly simultaneous suicide bombings in central Tel Aviv. More than 100 others were reported seriously injured. Islamic Jihad and Yasser Arafat's Al Aqsa Martyrs Brigades claimed responsibility. Another arm of Yasser Arafat's movement denied responsibility. 6 January: Israeli forces raided the Maghazi refugee camp in Gaza and killed three Palestinians and wounded a dozen more. Baker Muhammad Hadura 24-year-old, killed; Nassim Hassan Abu Maliah 25-year-old, killed; Iyad Muhammad Abu Za'id 26-year-old, killed 8 January: Ahmad 'Ajaj 8 January 2003, Aiman Muhammad Haneideq 30-year-old killed 10 January: Tareq Mahmoud 'Abd al-Quader Jadu 20-year-old, killed 11 January: Basman Shnir 20-year-old killed; 'Abd a-Latif Wadi 30-year-old, killed 12 January: A 48-year-old man was killed and four people wounded when terrorists infiltrated Moshav Gadish and opened fire. The Palestinian Islamic Jihad claimed responsibility for the attack.
12 January: Three Palestinians (Muhammad Quar'a, a 14-year-old resident of Khan Yunis; 'Ali Thaher Nassar, 45 years old; and Hamadeh 'Abd a-Rahman a-Najar, 13 years old) were killed and 12 wounded as 50 Israeli army vehicles accompanied by bulldozers and helicopters entered the town of Khan Yunis in the southern Gaza Strip during the night of 11–12 January. Seven civilian facilities were blown up. 13 January: Jamal Mahmoud Abu al-Qumbuz 20-year-old killed 17 January: A 34-year-old Israeli was killed when terrorists entered his home near Kiryat Arba, and opened fire. His 5-year-old daughter and two others were wounded. Hamas claimed responsibility for the attack. 26 January: During the night of 25–26 January, Israeli forces invaded the al-Zaytoun neighbourhood of Gaza City. Twelve Palestinians were killed, three were hit with shrapnel from an artillery shell and nine were shot dead. 17 workshops were blown up and 15 more were severely damaged. 31 January: 13 Palestinians were killed and over 50 wounded
https://en.wikipedia.org/wiki/Frank%20Yates
Frank Yates FRS (12 May 1902 – 17 June 1994) was one of the pioneers of 20th-century statistics. Biography Yates was born in Manchester, England, the eldest of five children (and only son) of seed merchant and botanist Percy Yates and his wife Edith. He attended Wadham House, a private school, before gaining a scholarship to Clifton College in 1916. In 1920, he obtained a scholarship at St John's College, Cambridge, and four years later graduated with a First Class Honours degree. He spent two years teaching mathematics to secondary school pupils at Malvern College before heading to Africa, where he was mathematical advisor on the Gold Coast Survey. He returned to England, due to ill health, and met and married a chemist, Margaret Forsythe Marsden, the daughter of a civil servant. This marriage was dissolved in 1933, and he later married Prascovie (Pauline) Tchitchkine, previously the partner of Alexis Tchitchkine. After her death in 1976, he married Ruth Hunt, his long-time secretary. In 1931, Yates was appointed assistant statistician at Rothamsted Experimental Station by R.A. Fisher. In 1933, he became head of statistics when Fisher went to University College London. At Rothamsted he worked on the design of experiments, including contributions to the theory of analysis of variance, as well as developing Yates's algorithm and the balanced incomplete block design. During World War II he worked on what would later be called operations research. After WWII, he worked on sample survey design and analysis. He became an enthusiast of electronic computers, in 1954 obtaining an Elliott 401 for Rothamsted and contributing to the initial development of statistical computing. During 1960–61, he was President of the British Computer Society, succeeding the founding president and computer pioneer, Maurice Wilkes. In 1960, he was awarded the Guy Medal in Gold of the Royal Statistical Society and, in 1966, he was awarded the Royal Medal of the Royal Society. 
He retired from Rothamsted to become a senior research fellow at Imperial College London. He died in 1994, aged 92, in Harpenden. Selected publications The design and analysis of factorial experiments, Technical Communication no. 35 of the Commonwealth Bureau of Soils (1937) (alternatively attributed to the Imperial Bureau of Soil Science). Statistical tables for biological, agricultural and medical research (1938, coauthor R.A. Fisher): sixth edition, 1963 Sampling methods for censuses and surveys (1949) Computer programs GENFAC, RGSP, Fitquan. See also Fisher–Yates shuffle Yates analysis Yates's correction for continuity References 1902 births 1994 deaths People educated at Clifton College Alumni of St John's College, Cambridge English statisticians Survey methodologists 20th-century English mathematicians Academics of Imperial College London British operations researchers Scientists from Manchester Fellows of the Royal Society Presidents of the Royal Statistical Society Presidents of the
https://en.wikipedia.org/wiki/Cramer%27s%20rule
In linear algebra, Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the column vector of right-hand sides of the equations. It is named after Gabriel Cramer (1704–1752), who published the rule for an arbitrary number of unknowns in 1750, although Colin Maclaurin also published special cases of the rule in 1748 (and possibly knew of it as early as 1729). Cramer's rule implemented in a naive way is computationally inefficient for systems of more than two or three equations. In the case of $n$ equations in $n$ unknowns, it requires computation of $n + 1$ determinants, while Gaussian elimination produces the result with the same computational complexity as the computation of a single determinant. Cramer's rule can also be numerically unstable even for 2×2 systems. However, it has recently been shown that Cramer's rule can be implemented with the same complexity as Gaussian elimination (it consistently requires twice as many arithmetic operations and has the same numerical stability when the same permutation matrices are applied). General case Consider a system of $n$ linear equations for $n$ unknowns, represented in matrix multiplication form as follows: $A\mathbf{x} = \mathbf{b}$, where the $n \times n$ matrix $A$ has a nonzero determinant, and the vector $\mathbf{x} = (x_1, \ldots, x_n)^{\mathsf{T}}$ is the column vector of the variables. Then the theorem states that in this case the system has a unique solution, whose individual values for the unknowns are given by: $x_i = \frac{\det(A_i)}{\det(A)}, \quad i = 1, \ldots, n,$ where $A_i$ is the matrix formed by replacing the $i$-th column of $A$ by the column vector $\mathbf{b}$. A more general version of Cramer's rule considers the matrix equation $AX = B$, where the $n \times n$ matrix $A$ has a nonzero determinant, and $X$, $B$ are $n \times m$ matrices. Given sequences $1 \leq i_1 < \cdots < i_k \leq n$ and $1 \leq j_1 < \cdots < j_k \leq m$, let $X_{I,J}$ be the $k \times k$ submatrix of $X$ with rows in $I := (i_1, \ldots, i_k)$ and columns in $J := (j_1, \ldots, j_k)$.
Let $A_B(I,J)$ be the $n \times n$ matrix formed by replacing the $i_s$-th column of $A$ by the $j_s$-th column of $B$, for all $s = 1, \ldots, k$. Then $\det X_{I,J} = \frac{\det(A_B(I,J))}{\det(A)}.$ In the case $k = 1$, this reduces to the normal Cramer's rule. The rule holds for systems of equations with coefficients and unknowns in any field, not just in the real numbers. Proof The proof for Cramer's rule uses the following properties of the determinants: linearity with respect to any given column and the fact that the determinant is zero whenever two columns are equal, which is implied by the property that the sign of the determinant flips if you switch two columns. Fix the index $j$ of a column, and consider that the entries of the other columns have fixed values. This makes the determinant a function of the entries of the $j$-th column. Linearity with respect to this column means that this function has the form $D_j(a_{1j}, \ldots, a_{nj}) = C_1 a_{1j} + \cdots + C_n a_{nj},$ where the $C_i$ are coefficients that depend on the entries of $A$ that are not in column $j$. So, one has $\det(A) = D_j(a_{1j}, \ldots, a_{nj}) = C_1 a_{1j} + \cdots + C_n a_{nj}$ (Laplace expansion provides a formula for computing the $C_i$, but their expression is not important here.) If the function $D_j$ is applied t
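As a concrete illustration of Cramer's rule as stated above, here is a minimal Python sketch (function names are illustrative). The determinant is computed by naive Laplace expansion, so this is only practical for the small systems for which Cramer's rule itself is practical.

```python
def det(m):
    """Determinant by Laplace expansion along the first row
    (exponential time; fine for 2x2 or 3x3 systems)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cramer(a, b):
    """Solve a x = b via Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with its i-th column replaced by b."""
    d = det(a)
    if d == 0:
        raise ValueError("matrix is singular; no unique solution")
    n = len(a)
    x = []
    for i in range(n):
        ai = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(a)]
        x.append(det(ai) / d)
    return x

# 2x + y = 5, x - y = 1  has the unique solution x = 2, y = 1:
print(cramer([[2, 1], [1, -1]], [5, 1]))  # [2.0, 1.0]
```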
https://en.wikipedia.org/wiki/Square%20matrix
In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order $n$. Any two square matrices of the same order can be added and multiplied. Square matrices are often used to represent simple linear transformations, such as shearing or rotation. For example, if $R$ is a square matrix representing a rotation (rotation matrix) and $\mathbf{v}$ is a column vector describing the position of a point in space, the product $R\mathbf{v}$ yields another column vector describing the position of that point after that rotation. If $\mathbf{v}$ is a row vector, the same transformation can be obtained using $\mathbf{v}R^{\mathsf{T}}$, where $R^{\mathsf{T}}$ is the transpose of $R$. Main diagonal The entries $a_{ii}$ ($i = 1, \ldots, n$) form the main diagonal of a square matrix. They lie on the imaginary line which runs from the top left corner to the bottom right corner of the matrix. For instance, the main diagonal of a 4×4 matrix contains the elements $a_{11}$, $a_{22}$, $a_{33}$, $a_{44}$. The diagonal of a square matrix from the top right to the bottom left corner is called antidiagonal or counterdiagonal. Special kinds Diagonal or triangular matrix If all entries outside the main diagonal are zero, $A$ is called a diagonal matrix. If only all entries above (or below) the main diagonal are zero, $A$ is called an upper (or lower) triangular matrix. Identity matrix The identity matrix $I_n$ of size $n$ is the $n \times n$ matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, e.g. $I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$. It is a square matrix of order $n$ and also a special kind of diagonal matrix. It is called identity matrix because multiplication with it leaves a matrix unchanged: $AI_n = I_n A = A$ for any $n \times n$ matrix $A$. Invertible matrix and its inverse A square matrix $A$ is called invertible or non-singular if there exists a matrix $B$ such that $AB = BA = I_n$. If $B$ exists, it is unique and is called the inverse matrix of $A$, denoted $A^{-1}$. Symmetric or skew-symmetric matrix A square matrix $A$ that is equal to its transpose, i.e., $A^{\mathsf{T}} = A$, is a symmetric matrix. If instead $A^{\mathsf{T}} = -A$, then $A$ is called a skew-symmetric matrix.
For a complex square matrix $A$, often the appropriate analogue of the transpose is the conjugate transpose $A^{*}$, defined as the transpose of the complex conjugate of $A$. A complex square matrix satisfying $A^{*} = A$ is called a Hermitian matrix. If instead $A^{*} = -A$, then $A$ is called a skew-Hermitian matrix. By the spectral theorem, real symmetric (or complex Hermitian) matrices have an orthogonal (or unitary) eigenbasis; i.e., every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real. Definite matrix A symmetric $n \times n$-matrix $A$ is called positive-definite (respectively negative-definite; indefinite), if for all nonzero vectors $x \in \mathbb{R}^n$ the associated quadratic form given by $Q(x) = x^{\mathsf{T}} A x$ takes only positive values (respectively only negative values; both some negative and some positive values). If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence
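Several of the definitions above (transpose, symmetry, skew-symmetry, the identity) can be checked directly in code. A small Python sketch using plain nested lists; the helper names are illustrative, not a standard API.

```python
def transpose(a):
    """Rows become columns and vice versa."""
    return [list(col) for col in zip(*a)]

def mat_mul(a, b):
    """Product of two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def identity(n):
    """The n-by-n identity matrix."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def is_symmetric(a):
    return a == transpose(a)

def is_skew_symmetric(a):
    return transpose(a) == [[-x for x in row] for row in a]

a = [[2, 1], [1, 3]]
assert is_symmetric(a)                      # a equals its transpose
assert not is_skew_symmetric(a)
assert mat_mul(a, identity(2)) == a         # A * I = A
```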
https://en.wikipedia.org/wiki/Transcendence
Transcendence, transcendent, or transcendental may refer to: Mathematics Transcendental number, a number that is not the root of any polynomial with rational coefficients Algebraic element or transcendental element, an element of a field extension that is not the root of any polynomial with coefficients from the base field Transcendental function, a function which does not satisfy a polynomial equation whose coefficients are themselves polynomials Transcendental number theory, the branch of mathematics dealing with transcendental numbers and algebraic independence Music Transcendence (Adil Omar album), a 2018 hip hop album Transcendence (Alice Coltrane album), a 1977 jazz album Transcendence (Crimson Glory album), a 1988 heavy metal album Transcendence (Devin Townsend Project album), a 2016 heavy metal album "Transcendence" (Lindsey Stirling instrumental), a 2012 instrumental piece "Transcendence (Segue)", a 2000 progressive metal instrumental piece by Symphony X Transcendental (album), a 2006 progressive metal album by To-Mera Literature Transcendence (Rosenthal book), a 2011 book by Norman E. Rosenthal Transcendence (Salvatore novel), a 2002 fantasy novel by R. A. Salvatore Transcendence (Sheffield novel), a 1992 science-fiction novel by Charles Sheffield Transcendence: How Humans Evolved Through Fire, Language, Beauty, and Time, a 2019 book by Gaia Vince Transcendence: My Spiritual Experiences with Pramukh Swamiji, a 2015 book by A. P. J. 
Abdul Kalam and Arun Tiwari Transcendent (novel), a 2005 science-fiction novel by Stephen Baxter Philosophy Transcendence (philosophy), climbing or going beyond some philosophical concept or limit Transcendentalism, a 19th-century American religious and philosophical movement that advocates that there is an ideal spiritual state that transcends the physical and empirical Transcendent theosophy, a school of Islamic philosophy founded by the 17th-century Persian philosopher Mulla Sadra Transcendental idealism, a doctrine founded by 18th-century German philosopher Immanuel Kant Transcendental realism, a concept put forward by Roy Bhaskar Transcendental arguments, a style of philosophical argumentation Transcendental phenomenology, a field of phenomenological inquiry developed by Edmund Husserl Transcendentals, religious and philosophical properties of being Religion Transcendence (religion), the aspect of a god wholly independent of the material universe Transcendental Meditation, a meditation technique introduced by Maharishi Mahesh Yogi Other Transcendence (2012 film), a Chinese film Transcendence (2014 film), an American film starring Johnny Depp and Morgan Freeman Transcendence (band), an American alternative rock band Transcendence (Jellum), an outdoor sculpture by Keith Jellum, in Portland, Oregon, US Transcendent (TV series), a 2016 American reality television series Transcendence (video game), a 1995
https://en.wikipedia.org/wiki/Sides%20of%20an%20equation
In mathematics, LHS is informal shorthand for the left-hand side of an equation. Similarly, RHS is the right-hand side. The two sides have the same value, expressed differently, since equality is symmetric. More generally, these terms may apply to an inequation or inequality; the right-hand side is everything on the right side of a test operator in an expression, with LHS defined similarly. Example The expression on the right side of the "=" sign is the right side of the equation and the expression on the left of the "=" is the left side of the equation. For example, in $x + 5 = y + 8$, $x + 5$ is the left-hand side (LHS) and $y + 8$ is the right-hand side (RHS). Homogeneous and inhomogeneous equations In solving mathematical equations, particularly linear simultaneous equations, differential equations and integral equations, the terminology homogeneous is often used for equations with some linear operator L on the LHS and 0 on the RHS. In contrast, an equation with a non-zero RHS is called inhomogeneous or non-homogeneous, as exemplified by Lf = g, with g a fixed function, an equation that is to be solved for f. Any solution of the inhomogeneous equation may then have a solution of the homogeneous equation added to it, and still remain a solution. For example in mathematical physics, the homogeneous equation may correspond to a physical theory formulated in empty space, while the inhomogeneous equation asks for more 'realistic' solutions with some matter, or charged particles. Syntax More abstractly, when using infix notation T * U the term T stands as the left-hand side and U as the right-hand side of the operator *. This usage is less common, though. See also Equals sign References Mathematical terminology
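The superposition property just described can be checked numerically. A small Python sketch, with a hypothetical singular matrix standing in for the linear operator L (any matrix with a nontrivial null space works):

```python
# For a linear operator L, adding any solution h of the homogeneous
# equation L h = 0 to a solution f of the inhomogeneous equation
# L f = g yields another solution of L f = g.
def apply(L, v):
    """Apply the matrix L (list of rows) to the vector v."""
    return [sum(L[i][j] * v[j] for j in range(len(v))) for i in range(len(L))]

L = [[1.0, 2.0], [2.0, 4.0]]   # singular, so the null space is nontrivial
g = [3.0, 6.0]                 # fixed right-hand side
f = [1.0, 1.0]                 # a solution of L f = g
h = [2.0, -1.0]                # a solution of L h = 0

assert apply(L, f) == g
assert apply(L, h) == [0.0, 0.0]
assert apply(L, [f[i] + h[i] for i in range(2)]) == g   # still a solution
```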
https://en.wikipedia.org/wiki/Pseudovector
In physics and mathematics, a pseudovector (or axial vector) is a quantity that behaves like a vector in many situations, but whose direction gains an extra sign flip when the object is transformed by an orientation-reversing (improper) transformation such as a reflection, or when the orientation of the space is changed. For example, the angular momentum is a pseudovector because it is often described as a vector, but by merely reflecting the frame of reference (which reverses the position vector), angular momentum can reverse direction, which is not supposed to happen with true (polar) vectors. In three dimensions, the curl of a polar vector field at a point and the cross product of two polar vectors are pseudovectors. One example of a pseudovector is the normal to an oriented plane. An oriented plane can be defined by two non-parallel vectors, a and b, that span the plane. The vector $\mathbf{a} \times \mathbf{b}$ is a normal to the plane (there are two normals, one on each side – the right-hand rule will determine which), and is a pseudovector. This has consequences in computer graphics, where it has to be considered when transforming surface normals. A number of quantities in physics behave as pseudovectors rather than polar vectors, including magnetic field and angular velocity. In mathematics, in three dimensions, pseudovectors are equivalent to bivectors, from which the transformation rules of pseudovectors can be derived. More generally, in n-dimensional geometric algebra, pseudovectors are the elements of the algebra of grade $n - 1$, written $\bigwedge^{n-1}\mathbb{R}^n$. The label "pseudo-" can be further generalized to pseudoscalars and pseudotensors, both of which gain an extra sign-flip under improper rotations compared to a true scalar or tensor. Physical examples Physical examples of pseudovectors include torque, angular velocity, angular momentum, magnetic field, and magnetic dipole moment. Consider the pseudovector angular momentum $\mathbf{L} = \mathbf{r} \times \mathbf{p}$.
Driving in a car, and looking forward, each of the wheels has an angular momentum vector pointing to the left. If the world is reflected in a mirror which switches the left and right side of the car, the "reflection" of this angular momentum "vector" (viewed as an ordinary vector) points to the right, but the actual angular momentum vector of the wheel (which is still turning forward in the reflection) still points to the left, corresponding to the extra sign flip in the reflection of a pseudovector. The distinction between polar vectors and pseudovectors becomes important in understanding the effect of symmetry on the solution to physical systems. Consider an electric current loop in the $z = 0$ plane, which generates a magnetic field inside the loop oriented in the z direction. This system is symmetric (invariant) under mirror reflections through this plane, with the magnetic field unchanged by the reflection. But reflecting the magnetic field as a vector through that plane would be expected to reverse it; this expectation is corrected by realizing that the magnetic field
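The extra sign flip can be verified numerically for the cross product of two polar vectors. A small pure-Python sketch (function names are illustrative): for an orthogonal transformation M, the identity (Ma) × (Mb) = det(M) M(a × b) holds, and det(M) = −1 for a reflection.

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def reflect_z(v):
    """Improper orthogonal map with det = -1: reflection through
    the x-y plane (flip the z component)."""
    return [v[0], v[1], -v[2]]

a = [1.0, 2.0, 3.0]
b = [-2.0, 0.5, 1.0]

# A polar vector transforms as v -> M v; the cross product of two polar
# vectors instead picks up the extra factor det(M) = -1:
lhs = cross(reflect_z(a), reflect_z(b))
rhs = [-x for x in reflect_z(cross(a, b))]   # det(M) * M (a x b)
assert lhs == rhs
```

This is exactly the behavior described above: transforming the inputs and then taking the cross product does not equal transforming the cross product as if it were an ordinary vector.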
https://en.wikipedia.org/wiki/Heat%20equation
In mathematics and physics, the heat equation is a certain partial differential equation. Solutions of the heat equation are sometimes known as caloric functions. The theory of the heat equation was first developed by Joseph Fourier in 1822 for the purpose of modeling how a quantity such as heat diffuses through a given region. As the prototypical parabolic partial differential equation, the heat equation is among the most widely studied topics in pure mathematics, and its analysis is regarded as fundamental to the broader field of partial differential equations. The heat equation can also be considered on Riemannian manifolds, leading to many geometric applications. Following work of Subbaramiah Minakshisundaram and Åke Pleijel, the heat equation is closely related with spectral geometry. A seminal nonlinear variant of the heat equation was introduced to differential geometry by James Eells and Joseph Sampson in 1964, inspiring the introduction of the Ricci flow by Richard Hamilton in 1982 and culminating in the proof of the Poincaré conjecture by Grigori Perelman in 2003. Certain solutions of the heat equation known as heat kernels provide subtle information about the region on which they are defined, as exemplified through their application to the Atiyah–Singer index theorem. The heat equation, along with variants thereof, is also important in many fields of science and applied mathematics. In probability theory, the heat equation is connected with the study of random walks and Brownian motion via the Fokker–Planck equation. The Black–Scholes equation of financial mathematics is a small variant of the heat equation, and the Schrödinger equation of quantum mechanics can be regarded as a heat equation in imaginary time. In image analysis, the heat equation is sometimes used to resolve pixelation and to identify edges. 
Following Robert Richtmyer and John von Neumann's introduction of "artificial viscosity" methods, solutions of heat equations have been useful in the mathematical formulation of hydrodynamical shocks. Solutions of the heat equation have also been given much attention in the numerical analysis literature, beginning in the 1950s with work of Jim Douglas, D.W. Peaceman, and Henry Rachford Jr. Statement of the equation In mathematics, if given an open subset $U$ of $\mathbb{R}^n$ and a subinterval $I$ of $\mathbb{R}$, one says that a function $u \colon U \times I \to \mathbb{R}$ is a solution of the heat equation if $\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x_1^2} + \cdots + \frac{\partial^2 u}{\partial x_n^2},$ where $(x_1, \ldots, x_n, t)$ denotes a general point of the domain. It is typical to refer to $t$ as "time" and $x_1, \ldots, x_n$ as "spatial variables," even in abstract contexts where these phrases fail to have their intuitive meaning. The collection of spatial variables is often referred to simply as $x$. For any given value of $t$, the right-hand side of the equation is the Laplacian of the function $u(\cdot, t) \colon U \to \mathbb{R}$. As such, the heat equation is often written more compactly as $\partial_t u = \Delta u.$ In physics and engineering contexts, especially in the context of diffusion through a medium, it is more common to fix a Cartesian coordinate system and then to consider
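A minimal numerical sketch of the one-dimensional equation $u_t = u_{xx}$ on $[0, 1]$ with zero boundary values: an explicit finite-difference scheme in Python, with illustrative (not canonical) parameter choices. The initial condition $\sin(\pi x)$ is chosen because its exact evolution, $\sin(\pi x)\, e^{-\pi^2 t}$, is known, so the computation can be checked.

```python
import math

n = 49                      # interior grid points, spacing dx = 1/50
dx = 1.0 / (n + 1)
dt = 0.4 * dx * dx          # respects the stability bound dt <= dx^2 / 2
u = [math.sin(math.pi * (i + 1) * dx) for i in range(n)]  # initial data

for _ in range(200):
    # u_i <- u_i + (dt/dx^2) (u_{i+1} - 2 u_i + u_{i-1}), with u = 0
    # at both boundaries:
    u = [u[i] + dt / dx ** 2 * ((u[i + 1] if i + 1 < n else 0.0)
                                - 2 * u[i]
                                + (u[i - 1] if i > 0 else 0.0))
         for i in range(n)]

# Compare against the exact solution sin(pi x) exp(-pi^2 t) at x = 0.5:
t = 200 * dt
exact = math.exp(-math.pi ** 2 * t) * math.sin(math.pi * 0.5)
assert abs(u[24] - exact) < 1e-2   # i = 24 corresponds to x = 25/50 = 0.5
```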
https://en.wikipedia.org/wiki/Lyapunov%20exponent
In mathematics, the Lyapunov exponent or Lyapunov characteristic exponent of a dynamical system is a quantity that characterizes the rate of separation of infinitesimally close trajectories. Quantitatively, two trajectories in phase space with initial separation vector $\delta\mathbf{Z}_0$ diverge (provided that the divergence can be treated within the linearized approximation) at a rate given by $|\delta\mathbf{Z}(t)| \approx e^{\lambda t}\,|\delta\mathbf{Z}_0|,$ where $\lambda$ is the Lyapunov exponent. The rate of separation can be different for different orientations of the initial separation vector. Thus, there is a spectrum of Lyapunov exponents—equal in number to the dimensionality of the phase space. It is common to refer to the largest one as the maximal Lyapunov exponent (MLE), because it determines a notion of predictability for a dynamical system. A positive MLE is usually taken as an indication that the system is chaotic (provided some other conditions are met, e.g., phase space compactness). Note that an arbitrary initial separation vector will typically contain some component in the direction associated with the MLE, and because of the exponential growth rate, the effect of the other exponents will be obliterated over time. The exponent is named after Aleksandr Lyapunov. Definition of the maximal Lyapunov exponent The maximal Lyapunov exponent can be defined as follows: $\lambda = \lim_{t \to \infty} \lim_{|\delta\mathbf{Z}_0| \to 0} \frac{1}{t} \ln \frac{|\delta\mathbf{Z}(t)|}{|\delta\mathbf{Z}_0|}.$ The limit $|\delta\mathbf{Z}_0| \to 0$ ensures the validity of the linear approximation at any time. For a discrete-time system (maps or fixed point iterations) $x_{n+1} = f(x_n)$, for an orbit starting with $x_0$ this translates into: $\lambda(x_0) = \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} \ln |f'(x_i)|.$ Definition of the Lyapunov spectrum For a dynamical system with evolution equation $\dot{x}_i = f_i(x)$ in an n–dimensional phase space, the spectrum of Lyapunov exponents $\{\lambda_1, \lambda_2, \ldots, \lambda_n\}$ in general, depends on the starting point $x_0$. However, we will usually be interested in the attractor (or attractors) of a dynamical system, and there will normally be one set of exponents associated with each attractor. The choice of starting point may determine which attractor the system ends up on, if there is more than one.
(For Hamiltonian systems, which do not have attractors, this is not a concern.) The Lyapunov exponents describe the behavior of vectors in the tangent space of the phase space and are defined from the Jacobian matrix J(t) with entries J_ij(t) = ∂f_i(x(t))/∂x_j; this Jacobian defines the evolution of the tangent vectors, given by the matrix Y(t), via the equation Ẏ = J Y with the initial condition Y(0) = I_n. The matrix Y describes how a small change δx(0) at the point x(0) propagates to the final point x(t). The limit Λ = lim_{t→∞} (Y(t) Y(t)ᵀ)^{1/(2t)} defines a matrix Λ (the conditions for the existence of the limit are given by the Oseledets theorem). The Lyapunov exponents λ_i are defined as the logarithms of the eigenvalues of Λ. The set of Lyapunov exponents will be the same for almost all starting points of an ergodic component of the dynamical system. Lyapunov exponent for time-varying linearization To introduce the Lyapunov exponent, consider a fundamental matrix X(t) (e.g., for linearization along a stationary solution in a continuous system, the fundamental matrix consists of the linearly-independent solutions of the
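The discrete-time definition above lends itself to direct numerical estimation. As an illustrative sketch (the logistic map, the parameter r = 4, and the function name are my choices, not from the article), averaging ln|f′(x_i)| along an orbit of x_{n+1} = 4x_n(1 − x_n) recovers the known maximal exponent ln 2:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.2, n_transient=1000, n=100_000):
    """Estimate the maximal Lyapunov exponent of the logistic map
    x_{n+1} = r * x * (1 - x) by averaging ln|f'(x_i)| along an orbit."""
    x = x0
    for _ in range(n_transient):        # discard transient so the orbit settles
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))   # ln|f'(x)| for f(x) = r x (1 - x)
        x = r * x * (1 - x)
    return total / n

lam = lyapunov_logistic()
print(lam)  # ≈ ln 2 ≈ 0.693: positive, indicating chaos at r = 4
```

A positive estimate here plays the role of the MLE criterion discussed above; for r below the chaotic regime the same average comes out negative.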
https://en.wikipedia.org/wiki/On%20Numbers%20and%20Games
On Numbers and Games is a mathematics book by John Horton Conway first published in 1976. The book is written by a pre-eminent mathematician, and is directed at other mathematicians. The material is, however, developed in a playful and unpretentious manner and many chapters are accessible to non-mathematicians. Martin Gardner discussed the book at length, particularly Conway's construction of surreal numbers, in his Mathematical Games column in Scientific American in September 1976. The book is roughly divided into two sections: the first half (or Zeroth Part), on numbers, the second half (or First Part), on games. In the Zeroth Part, Conway provides axioms for arithmetic: addition, subtraction, multiplication, division and inequality. This allows an axiomatic construction of numbers and ordinal arithmetic, namely, the integers, reals, the countable infinity, and entire towers of infinite ordinals. The object to which these axioms apply takes the form {L|R}, which can be interpreted as a specialized kind of set; a kind of two-sided set. By insisting that L<R, this two-sided set resembles the Dedekind cut. The resulting construction yields a field, now called the surreal numbers. The ordinals are embedded in this field. The construction is rooted in axiomatic set theory, and is closely related to the Zermelo–Fraenkel axioms. In the original book, Conway simply refers to this field as "the numbers". The term "surreal numbers" is adopted later, at the suggestion of Donald Knuth. In the First Part, Conway notes that, by dropping the constraint that L<R, the axioms still apply and the construction goes through, but the resulting objects can no longer be interpreted as numbers. They can be interpreted as the class of all two-player games. The axioms for greater than and less than are seen to be a natural ordering on games, corresponding to which of the two players may win. 
The remainder of the book is devoted to exploring a number of different (non-traditional, mathematically inspired) two-player games, such as nim, hackenbush, and the map-coloring games col and snort. The development includes their scoring, a review of the Sprague–Grundy theorem, and the inter-relationships to numbers, including their relationship to infinitesimals. The book was first published by Academic Press Inc in 1976 and re-released by AK Peters in 2000. Zeroth Part ... On Numbers In the Zeroth Part, Chapter 0, Conway introduces a specialized form of set notation, having the form {L|R}, where L and R are again of this form, built recursively, terminating in {|}, which is to be read as an analog of the empty set. Given this object, axiomatic definitions for addition, subtraction, multiplication, division and inequality may be given. As long as one insists that L<R (with this holding vacuously true when L or R are the empty set), then the resulting class of objects can be interpreted as numbers, the surreal numbers. The {L|R} notation then resembles the Dedekind cut
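The recursive {L|R} construction and its ordering can be sketched directly in code. The following is a minimal illustration (the class and function names are mine, and the representation is a simplification, not Conway's notation): x ≤ y holds iff no left option of x is ≥ y and no right option of y is ≤ x.

```python
class Game:
    """A two-sided set {L|R}; with the L < R constraint these are surreal numbers."""
    def __init__(self, left=(), right=()):
        self.left, self.right = tuple(left), tuple(right)

def leq(x, y):
    """x <= y  iff  no left option of x is >= y, and no right option of y is <= x."""
    return (all(not leq(y, xl) for xl in x.left) and
            all(not leq(yr, x) for yr in y.right))

zero = Game()                           # { | }   born on day 0
one = Game(left=[zero])                 # { 0 | }
neg_one = Game(right=[zero])            # { | 0 }
half = Game(left=[zero], right=[one])   # { 0 | 1 }

print(leq(zero, one), leq(one, zero))   # True False, i.e. 0 < 1
```

The recursion terminates because options are always "earlier-born" games; dropping the L < R constraint, as in the First Part, leaves the same comparison meaningful as a statement about who wins.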
https://en.wikipedia.org/wiki/Phi%20%28disambiguation%29
Phi (uppercase Φ, lowercase φ, or maths symbol ϕ) is the 21st letter of the Greek alphabet. Phi or PHI may also refer to: Science and technology Mathematics Golden ratio (φ) Phi coefficient, a measure of association for two binary variables introduced by Karl Pearson Euler's totient function or phi function Integrated information theory (IIT), whose symbol is φ, a mathematical theory of consciousness developed under the lead of the neuroscientist Giulio Tononi Standard normal distribution, whose cumulative distribution function is denoted Φ and whose probability density function is denoted φ Physics, chemistry and biology Phi meson, in particle physics Magnetic flux (Φ) Peptide PHI (Peptide histidine isoleucine) 6-phospho-3-hexuloisomerase, an enzyme Phenyl group (Φ), a functional group in organic chemistry Pre-harvest interval pH(I), the isoelectric point Computing Xeon Phi, an Intel MIC microprocessor Φ (Phi) function, in static single-assignment form compiler design Medicine Permanent health insurance, against becoming disabled Protected health information, in US law Other science Phi phenomenon, in visual perception Krumbein phi scale, for the size of a particle or sediment Arts and entertainment Phi (KinKi Kids album) (2007) Phi (Truckfighters album) (2007) Sailor Phi, a villain in the Sailor Moon manga Phi, a character in the visual novels Zero Escape: Virtue's Last Reward and Zero Escape: Zero Time Dilemma. Phi: A Voyage from the Brain to the Soul, a book by Giulio Tononi (2012) Phi, a character from Beyblade Burst Turbo, a TV show written by Hiro Morita Organizations Packard Humanities Institute Paraprofessional Healthcare Institute, a nonprofit organization based in New York City, US Pepco Holdings Inc. 
Petroleum Helicopters International, an American commercial helicopter operator Philadelphia’s major professional sports teams Philadelphia Eagles of the National Football League Philadelphia 76ers of the National Basketball Association Philadelphia Phillies of Major League Baseball Philadelphia Flyers of the National Hockey League Post-Polio Health International Phi, a collegiate secret society at Princeton University Other uses Voiceless bilabial fricative (IPA symbol: ɸ) Phi, a village in Sesant, Cambodia Phi, ghosts in Thai culture Philippines, IOC country code See also Փ, a letter of the Armenian alphabet PHI-base (Pathogen–Host Interaction database) Kamen Rider 555 or Masked Rider Φ's
https://en.wikipedia.org/wiki/Fokker%E2%80%93Planck%20equation
In statistical mechanics and information theory, the Fokker–Planck equation is a partial differential equation that describes the time evolution of the probability density function of the velocity of a particle under the influence of drag forces and random forces, as in Brownian motion. The equation can be generalized to other observables as well. The Fokker–Planck equation has multiple applications in information theory, graph theory, data science, finance, economics, etc. It is named after Adriaan Fokker and Max Planck, who described it in 1914 and 1917. It is also known as the Kolmogorov forward equation, after Andrey Kolmogorov, who independently discovered it in 1931. When applied to particle position distributions, it is better known as the Smoluchowski equation (after Marian Smoluchowski), and in this context it is equivalent to the convection–diffusion equation. When applied to particle position and momentum distributions, it is known as the Klein–Kramers equation. The case with zero diffusion is the continuity equation. The Fokker–Planck equation is obtained from the master equation through the Kramers–Moyal expansion. The first consistent microscopic derivation of the Fokker–Planck equation in the single scheme of classical and quantum mechanics was performed by Nikolay Bogoliubov and Nikolay Krylov. One dimension In one spatial dimension x, for an Itô process driven by the standard Wiener process W_t and described by the stochastic differential equation (SDE) dX_t = μ(X_t, t) dt + σ(X_t, t) dW_t, with drift μ(X_t, t) and diffusion coefficient D(X_t, t) = σ²(X_t, t)/2, the Fokker–Planck equation for the probability density p(x, t) of the random variable X_t is ∂p/∂t = −∂/∂x [μ(x, t) p(x, t)] + ∂²/∂x² [D(x, t) p(x, t)]. In the following, use σ = √(2D). Define the infinitesimal generator L (the following can be found in standard references): L f(x) = lim_{Δt→0} (E[f(X_{t+Δt}) | X_t = x] − f(x)) / Δt. The transition probability P(x, t | x′, t′), the probability of going from the state x′ at time t′ to the state x at time t, is introduced here; the expectation can be written as E[f(X_{t+Δt}) | X_t = x] = ∫ f(y) P(y, t+Δt | x, t) dy. Now we replace this in the definition of L, multiply by P(x, t | x₀, t₀) and integrate over x. The limit is taken on ∫ f(y) ∫ P(y, t+Δt | x, t) P(x, t | x₀, t₀) dx dy − ∫ f(x) P(x, t | x₀, t₀) dx. Note now that ∫ P(y, t+Δt | x, t) P(x, t | x₀, t₀) dx = P(y, t+Δt | x₀, t₀), which is the Chapman–Kolmogorov theorem. 
Changing the dummy variable y to x, one gets an expression of the form ∂/∂t ∫ f(x) P(x, t | x₀, t₀) dx, which is a time derivative. Finally we arrive at ∫ L f(x) P(x, t | x₀, t₀) dx = ∂/∂t ∫ f(x) P(x, t | x₀, t₀) dx. From here, the Kolmogorov backward equation can be deduced. If we instead use the adjoint operator of L, L†, defined such that ∫ L f(x) P(x, t | x₀, t₀) dx = ∫ f(x) L† P(x, t | x₀, t₀) dx, then we arrive at the Kolmogorov forward equation, or Fokker–Planck equation, which, simplifying the notation p(x, t) = P(x, t | x₀, t₀), in its differential form reads ∂p/∂t = L† p. There remains the issue of defining L explicitly. This can be done by taking the expectation of the integral form of Itô's lemma: E[f(X_t)] = E[f(X₀)] + E[∫₀ᵗ (μ ∂f/∂x + (σ²/2) ∂²f/∂x²) ds]. The part that depends on dW_t vanishes because of the martingale property. Then, for a particle subject to an Itô equation, using dX_t = μ dt + σ dW_t, one obtains L f = μ ∂f/∂x + (σ²/2) ∂²f/∂x², and it can be easily calculated, using integration by parts, that L† p = −∂/∂x [μ p] + ∂²/∂x² [(σ²/2) p], which brings us to the Fokker–Planck equation: ∂p/∂t = −∂/∂x [μ(x, t) p(x, t)] + ∂²/∂x² [(σ²(x, t)/2) p(x, t)]. While the Fokker–Planck equation is used with problems where the initial distribution is known, if the problem is to know the distribution at previous times, the Feynman–Kac formula can be used, which is a consequence of the Kolmogorov backward equation. The stochastic process define
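The drift/diffusion picture can be checked numerically. As an illustrative sketch (the Ornstein–Uhlenbeck process and all parameters and names here are my own choices, not from the article), the stationary density predicted by the Fokker–Planck equation for dX = −θX dt + σ dW is Gaussian with variance σ²/(2θ), which a simple Euler–Maruyama simulation of the SDE reproduces:

```python
import math
import random

def ou_terminal_variance(theta=1.0, sigma=1.0, dt=0.01, t_final=5.0,
                         n_paths=4000, seed=1):
    """Simulate dX = -theta*X dt + sigma dW by Euler-Maruyama and return the
    sample variance at t_final; the stationary value is sigma^2 / (2*theta)."""
    rng = random.Random(seed)
    n_steps = int(t_final / dt)
    finals = []
    for _ in range(n_paths):
        x = 0.0
        for _ in range(n_steps):
            x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        finals.append(x)
    mean = sum(finals) / n_paths
    return sum((v - mean) ** 2 for v in finals) / n_paths

var = ou_terminal_variance()
print(var)  # close to sigma^2 / (2*theta) = 0.5
```

Agreement here reflects the fact that the simulated density solves the forward (Fokker–Planck) equation, whose stationary solution for this drift and diffusion is the Gaussian above.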
https://en.wikipedia.org/wiki/Incidence%20algebra
In order theory, a field of mathematics, an incidence algebra is an associative algebra, defined for every locally finite partially ordered set and commutative ring with unity. Subalgebras called reduced incidence algebras give a natural construction of various types of generating functions used in combinatorics and number theory. Definition A locally finite poset is one in which every closed interval [a, b] = {x : a ≤ x ≤ b} is finite. The members of the incidence algebra are the functions f assigning to each nonempty interval [a, b] a scalar f(a, b), which is taken from the ring of scalars, a commutative ring with unity. On this underlying set one defines addition and scalar multiplication pointwise, and "multiplication" in the incidence algebra is a convolution defined by An incidence algebra is finite-dimensional if and only if the underlying poset is finite. Related concepts An incidence algebra is analogous to a group algebra; indeed, both the group algebra and the incidence algebra are special cases of a category algebra, defined analogously; groups and posets being special kinds of categories. Upper-Triangular Matrices Consider the case of a partial order ≤ over any -element set . We enumerate as , and in such a way that the enumeration is compatible with the order ≤ on , that is, implies , which is always possible. Then, functions as above, from intervals to scalars, can be thought of as matrices , where whenever , and otherwise. Since we arranged in a way consistent with the usual order on the indices of the matrices, they will appear as upper-triangular matrices with a prescribed zero-pattern determined by the incomparable elements in under ≤. The incidence algebra of ≤ is then isomorphic to the algebra of upper-triangular matrices with this prescribed zero-pattern and arbitrary (including possibly zero) scalar entries everywhere else, with the operations being ordinary matrix addition, scaling and multiplication. 
Special elements The multiplicative identity element of the incidence algebra is the delta function, defined by δ(a, b) = 1 if a = b and δ(a, b) = 0 if a < b. The zeta function of an incidence algebra is the constant function ζ(a, b) = 1 for every nonempty interval [a, b]. Multiplying by ζ is analogous to integration. One can show that ζ is invertible in the incidence algebra (with respect to the convolution defined above). (Generally, a member h of the incidence algebra is invertible if and only if h(x, x) is invertible for every x.) The multiplicative inverse of the zeta function is the Möbius function μ(a, b); every value of μ(a, b) is an integral multiple of 1 in the base ring. The Möbius function can also be defined inductively by the following relation: μ(a, a) = 1 for all a, and μ(a, b) = −Σ_{a ≤ x < b} μ(a, x) for a < b. Multiplying by μ is analogous to differentiation, and is called Möbius inversion. The square of the zeta function gives the number of elements in an interval: ζ²(a, b) = Σ_{a ≤ x ≤ b} 1 = #[a, b]. Examples Positive integers ordered by divisibility The convolution associated to the incidence algebra for intervals [1, n] becomes the
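The inductive relation for μ can be implemented directly. Here is a minimal sketch on the divisibility poset (the function names and memoization are my own choices): μ(a, a) = 1, and μ(a, b) = −Σ μ(a, x) over all x with a ≤ x < b in the order.

```python
from functools import lru_cache

def divides(a, b):
    return b % a == 0

@lru_cache(maxsize=None)
def mobius(a, b):
    """Moebius function of the divisibility poset, via the inductive relation
    mu(a, a) = 1 and mu(a, b) = -sum of mu(a, x) over a <= x < b."""
    if a == b:
        return 1
    # elements x with a <= x < b in the divisibility order: a | x, x | b, x != b
    interval = [x for x in range(a, b) if divides(a, x) and divides(x, b)]
    return -sum(mobius(a, x) for x in interval)

print(mobius(1, 1), mobius(1, 2), mobius(1, 6), mobius(1, 12))
# 1 -1 1 0 -- matching the number-theoretic Moebius function of b/a
```

On this poset μ(a, b) depends only on the quotient b/a, which is exactly the reduction that a reduced incidence algebra exploits to produce Dirichlet series.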
https://en.wikipedia.org/wiki/Concrete%20category
In mathematics, a concrete category is a category that is equipped with a faithful functor to the category of sets (or sometimes to another category, see Relative concreteness below). This functor makes it possible to think of the objects of the category as sets with additional structure, and of its morphisms as structure-preserving functions. Many important categories have obvious interpretations as concrete categories, for example the category of topological spaces and the category of groups, and trivially also the category of sets itself. On the other hand, the homotopy category of topological spaces is not concretizable, i.e. it does not admit a faithful functor to the category of sets. A concrete category, when defined without reference to the notion of a category, consists of a class of objects, each equipped with an underlying set; and for any two objects A and B a set of functions, called morphisms, from the underlying set of A to the underlying set of B. Furthermore, for every object A, the identity function on the underlying set of A must be a morphism from A to A, and the composition of a morphism from A to B followed by a morphism from B to C must be a morphism from A to C. Definition A concrete category is a pair (C,U) such that C is a category, and U : C → Set (the category of sets and functions) is a faithful functor. The functor U is to be thought of as a forgetful functor, which assigns to every object of C its "underlying set", and to every morphism in C its "underlying function". A category C is concretizable if there exists a concrete category (C,U); i.e., if there exists a faithful functor U: C → Set. All small categories are concretizable: define U so that its object part maps each object b of C to the set of all morphisms of C whose codomain is b (i.e. 
all morphisms of the form f: a → b for any object a of C), and its morphism part maps each morphism g: b → c of C to the function U(g): U(b) → U(c) which maps each member f: a → b of U(b) to the composition gf: a → c, a member of U(c). (Item 6 under Further examples expresses the same U in less elementary language via presheaves.) The Counter-examples section exhibits two large categories that are not concretizable. Remarks It is important to note that, contrary to intuition, concreteness is not a property which a category may or may not satisfy, but rather a structure with which a category may or may not be equipped. In particular, a category C may admit several faithful functors into Set. Hence there may be several concrete categories (C, U) all corresponding to the same category C. In practice, however, the choice of faithful functor is often clear and in this case we simply speak of the "concrete category C". For example, "the concrete category Set" means the pair (Set, I) where I denotes the identity functor Set → Set. The requirement that U be faithful means that it maps different morphisms between the same objects to different functions. However, U
https://en.wikipedia.org/wiki/Incidence%20%28epidemiology%29
In epidemiology, incidence is a measure of the probability of occurrence of a given medical condition in a population within a specified period of time. Although sometimes loosely expressed simply as the number of new cases during some time period, it is better expressed as a proportion or a rate with a denominator. Incidence proportion Incidence proportion (IP), also known as cumulative incidence, is defined as the probability that a particular event, such as occurrence of a particular disease, has occurred before a given time. It is calculated by dividing the number of new cases during a given period by the number of subjects at risk in the population at the beginning of the study. Where the period of time considered is an entire lifetime, the incidence proportion is called lifetime risk. For example, if a population contains 1,000 persons and 28 develop a condition from the time the disease first occurred until two years later, the cumulative incidence proportion is 28 cases per 1,000 persons, i.e. 2.8%. IP is related to incidence rate (IR) and duration of exposure (D) as follows: IP = 1 − e^(−IR·D), so that for small values IP ≈ IR·D. Incidence rate The incidence rate is a measure of the frequency with which a disease or other incident occurs over a specified time period. It is also known as the incidence density rate or person-time incidence rate, when the denominator is the combined person-time of the population at risk (the sum of the time duration of exposure across all persons exposed). In the same example as above, the incidence rate is 14 cases per 1000 person-years, because the incidence proportion (28 per 1,000) is divided by the number of years (two). Using person-time rather than just time handles situations where the amount of observation time differs between people, or when the population at risk varies with time. 
Use of this measure implies the assumption that the incidence rate is constant over different periods of time, such that for an incidence rate of 14 per 1000 person-years, 14 cases would be expected for 1000 persons observed for 1 year or 50 persons observed for 20 years. When this assumption is substantially violated, such as in describing survival after diagnosis of metastatic cancer, it may be more useful to present incidence data in a plot of cumulative incidence over time, taking into account loss to follow-up, using a Kaplan–Meier plot. Incidence vs. prevalence Incidence should not be confused with prevalence, which is the proportion of cases in the population at a given time rather than rate of occurrence of new cases. Thus, incidence conveys information about the risk of contracting the disease, whereas prevalence indicates how widespread the disease is. Prevalence is the proportion of the total number of cases to the total population and is more a measure of the burden of the disease on society with no regard to time at risk or when subjects may have been exposed to a possible risk factor. Prevalence can also be measured with respect to a s
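The example figures above can be reproduced in a few lines (a sketch; the variable names are mine, and everyone is treated as observed for the full two years, as in the article's simplified example):

```python
new_cases = 28
population_at_risk = 1000
years = 2

# Incidence proportion (cumulative incidence): new cases / persons initially at risk
incidence_proportion = new_cases / population_at_risk    # 28 per 1,000, i.e. 2.8%

# Person-time incidence rate: new cases / total person-time at risk
person_years = population_at_risk * years                # 2000 person-years
incidence_rate = new_cases / person_years                # 14 per 1000 person-years

print(incidence_proportion, incidence_rate)  # 0.028 0.014
```

In a real study the person-time denominator would subtract observation time after a subject becomes a case or is lost to follow-up; the simplification here mirrors the article's round numbers.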
https://en.wikipedia.org/wiki/Unit%20vector
In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat", as in v̂ (pronounced "v-hat"). The term direction vector, commonly denoted as d, is used to describe a unit vector being used to represent spatial direction and relative direction. 2D spatial directions are numerically equivalent to points on the unit circle and spatial directions in 3D are equivalent to points on the unit sphere. The normalized vector û of a non-zero vector u is the unit vector in the direction of u, i.e., û = u/‖u‖, where ‖u‖ is the norm (or length) of u. The term normalized vector is sometimes used as a synonym for unit vector. Unit vectors are often chosen to form the basis of a vector space, and every vector in the space may be written as a linear combination of unit vectors. Orthogonal coordinates Cartesian coordinates Unit vectors may be used to represent the axes of a Cartesian coordinate system. For instance, the standard unit vectors in the direction of the x, y, and z axes of a three-dimensional Cartesian coordinate system are (1, 0, 0), (0, 1, 0), and (0, 0, 1). They form a set of mutually orthogonal unit vectors, typically referred to as a standard basis in linear algebra. They are often denoted using common vector notation (e.g., i or ) rather than standard unit vector notation (e.g., ). In most contexts it can be assumed that i, j, and k, (or and ) are versors of a 3-D Cartesian coordinate system. The notations , , , or , with or without hat, are also used, particularly in contexts where i, j, k might lead to confusion with another quantity (for instance with index symbols such as i, j, k, which are used to identify an element of a set or array or sequence of variables). When a unit vector in space is expressed in Cartesian notation as a linear combination of i, j, k, its three scalar components can be referred to as direction cosines. 
The value of each component is equal to the cosine of the angle formed by the unit vector with the respective basis vector. This is one of the methods used to describe the orientation (angular position) of a straight line, segment of straight line, oriented axis, or segment of oriented axis (vector). Cylindrical coordinates The three orthogonal unit vectors appropriate to cylindrical symmetry are: (also designated or ), representing the direction along which the distance of the point from the axis of symmetry is measured; , representing the direction of the motion that would be observed if the point were rotating counterclockwise about the symmetry axis; , representing the direction of the symmetry axis; They are related to the Cartesian basis , , by: The vectors and are functions of and are not constant in direction. When differentiating or integrating in cylindrical coordinates, these unit vectors themselves must also be operated on. The derivatives with respect to are: Spherical coordinates The unit vectors appropr
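The normalization û = u/‖u‖ and the cylindrical basis relations can be illustrated as follows (a sketch in plain Python; the function names and the angle value in the example are my own choices):

```python
import math

def normalize(u):
    """Return the unit vector u / ||u|| of a non-zero vector u."""
    norm = math.sqrt(sum(c * c for c in u))
    if norm == 0:
        raise ValueError("the zero vector has no direction")
    return tuple(c / norm for c in u)

def cylindrical_basis(phi):
    """Cylindrical unit vectors expressed in the Cartesian basis; the first
    two depend on the azimuthal angle phi and so are not constant in direction."""
    e_rho = (math.cos(phi), math.sin(phi), 0.0)    # radial direction
    e_phi = (-math.sin(phi), math.cos(phi), 0.0)   # counterclockwise tangential direction
    e_z = (0.0, 0.0, 1.0)                          # symmetry axis
    return e_rho, e_phi, e_z

u_hat = normalize((3.0, 4.0, 0.0))
print(u_hat)  # (0.6, 0.8, 0.0)
```

The direction cosines of u_hat are simply its components, and the three cylindrical vectors are mutually orthogonal unit vectors for every value of phi.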
https://en.wikipedia.org/wiki/Svend%20%C3%85ge%20Madsen
Svend Åge Madsen (born 2 November 1939) is a Danish novelist. He studied mathematics before he began writing fiction. His novels are generally philosophical and humorous. Several of his works have been made into films in Denmark. His writings are extensive and have been translated into many languages. Madsen's writing style and philosophy have placed him amongst the most distinguished and widely read authors in Denmark today. His novels reflect the grave problems faced by modern civilisation, and a number of them have achieved cult status in Denmark. The interplay between quasi-realism and complete fantasy in Svend Åge Madsen's novels leads to contemplation of the indefinable nature of human existence. Work Madsen's work may be divided into three phases. The first phase comprises abstract modernist works influenced by writers such as Franz Kafka, Samuel Beckett, Alain Robbe-Grillet and James Joyce. These works examine the capacity of language to depict reality; they include the experimental novels The Visit (Besøget, 1963) and Additions (Tilføjelser, 1967), the "unnovel" Pictures of Lust (Lystbilleder, 1964), and the collection of short stories Eight Times Orphan (Otte gange orphan, 1965). Madsen would later define these novels as "anti-art". The change to the next phase of his work was, according to him, a shift from "anti-art" to "anti-anti-art", which accepted the result of the first phase: "that reality can not be described", but that one may attempt to build a meaningful literature from a relativistic stance. The project was now to show how "lower" genres (such as crime fiction, romantic fiction and science fiction) could be a mosaic of equal truths that make up reality. This change is also a change from modernist literature to postmodern literature. The third phase of Madsen's work comprises some novels that are less abstract and more realistic than his earlier works, but are still highly imaginative. 
At the same time, Madsen started working on a "macro"-text in which characters are used repeatedly in different novels, main characters becoming minor characters and vice versa. All of these novels take place in the city of Aarhus in Denmark. Through a complex net of bizarre stories, Madsen creates an alternative Aarhus in which everything is possible and extreme philosophical positions are explored. Madsen's late literature is highly distinctive but can perhaps best be likened to the magical realism of Latin America. A recurring trait in his books is that the characters face some sort of extreme situation which enables a philosophical theme to emerge. Perhaps Madsen's most famous work is Vice and Virtue in the Middle Time (Tugt og utugt i mellemtiden, 1976) which has been translated into English. In this novel, a man from a very distant future takes on the experiment of writing a novel of the age called Middle Time, which is the western world in the 1970s. This creates an amusing philosophical position, in which everything we take for gran
https://en.wikipedia.org/wiki/Posterior
Posterior may refer to: Posterior (anatomy), the end of an organism opposite to its head Buttocks, as a euphemism Posterior horn (disambiguation) Posterior probability, the conditional probability that is assigned when the relevant evidence is taken into account Posterior tense, a relative future tense
https://en.wikipedia.org/wiki/Richard%20Threlkeld%20Cox
Richard Threlkeld Cox (August 5, 1898 – May 2, 1991) was a professor of physics at Johns Hopkins University, known for Cox's theorem relating to the foundations of probability. Biography He was born in Portland, Oregon, the son of attorney Lewis Cox and Elinor Cox. After Lewis Cox died, Elinor Cox married John Latané, who became a professor at Johns Hopkins University in 1913. In 1915 Richard enrolled at Johns Hopkins University to study physics, but his studies were cut short when he was drafted for World War I. He stayed in the US after being drafted and returned to Johns Hopkins University after the war, completing his BA in 1920. He earned his PhD in 1924; his dissertation was A Study of Pfund's Pressure Gauge. He taught at New York University (NYU) from 1924 to 1943, before returning to Johns Hopkins to teach. He studied probability theory, the scattering of electrons, and the discharges of electric eels. Richard Cox's most important work was Cox's theorem. His wife, Shelby Shackleford (1899 Halifax, Virginia – 1987), whom he married in 1926, was an accomplished artist and illustrated Electric Eel Calling, a book on electric eels. He died on May 2, 1991. His doctoral students include Carl T. Chase and Clifford Shull. Cox and parity violation According to T. D. Lee and C. N. Yang, parity violation implies that electrons produced by β decay should be longitudinally polarized. In 1959, Lee Grodzins indicated how a 1928 experiment by R. T. Cox, C. G. McIlwraith, and B. Kurrelmeyer on double scattering of β rays from radium confirms the polarization effect predicted by Lee and Yang. Carl T. Chase in 1929 and 1930 performed experiments confirming the 1928 experiment by Cox, McIlwraith, and Kurrelmeyer. Louis Witten interview Witten: ... I wanted to tell you something about Richard Cox. You mentioned Richard Cox. He did a lot of things, but he also did some experiments in condensed matter physics. He discovered an anomaly which wasn't consistent with physics. 
It couldn't be explained. It wasn't at all consistent, and he was told his experiment was wrong, and he knew that his experiment was right. So he published it, and it was an anomaly in the literature. Some years later, it was discovered that parity wasn't conserved, and his anomaly was non-parity. It's well known now by many people that his experiment was the first experiment that would have shown parity wasn't conserved if they had interpreted it correctly. Rickles: But he didn't give that interpretation; he just thought there was something strange. Witten: That's right. But he knew that his experiment was right and that people were trying to tell him that his experiment was wrong. Selected works Cox, R. T., "Of Inference and Inquiry - An Essay in Inductive Logic", In The Maximum Entropy Formalism, Ed. Levine and Tribus, M.I.T. Press, 1979. The Algebra of Probable Inference, Johns Hopkins University Press, Baltimore, MD, (1961).
https://en.wikipedia.org/wiki/John%20C.%20Baez
John Carlos Baez (; born June 12, 1961) is an American mathematical physicist and a professor of mathematics at the University of California, Riverside (UCR) in Riverside, California. He has worked on spin foams in loop quantum gravity, applications of higher categories to physics, and applied category theory. Additionally, Baez is known on the World Wide Web as the author of the crackpot index. Education John C. Baez attended Princeton University where he graduated with an A.B. in mathematics in 1982; his senior thesis was titled "Recursivity in quantum mechanics", under the supervision of John P. Burgess. He earned his doctorate in 1986 from the Massachusetts Institute of Technology under the direction of Irving Segal. Career Baez was a post-doctoral researcher at Yale University. Since 1989, he has been a faculty member at UC Riverside. From 2010 to 2012, he took a leave of absence to work at the Centre for Quantum Technologies in Singapore and has since worked there in the summers. Research His research includes work on spin foams in loop quantum gravity. He also worked on applications of higher categories to physics, such as the cobordism hypothesis. He has also dedicated many efforts towards applied category theory, including network theory. Recognition Baez won the 2013 Levi L. Conant Prize for his expository paper with John Huerta, "The algebra of grand unified theories". He was named a Fellow of the American Mathematical Society, in the 2022 class of fellows, "for contributions to higher category theory and mathematical physics, and for popularization of these subjects". Forums Baez is the author of This Week's Finds in Mathematical Physics, an irregular column on the internet featuring mathematical exposition and criticism. He started This Week's Finds in 1993 for the Usenet community, and it now has a following in its new form, the blog Azimuth. This Week's Finds anticipated the concept of a personal weblog. 
Azimuth also covers other topics that include combating climate change and various other environmental issues. He is also co-founder of the n-Category Café (or n-Café), a group blog concerning higher category theory and its applications, as well as its philosophical repercussions. The founders of the blog are Baez, David Corfield and Urs Schreiber, and the list of blog authors has extended since. The n-Café community is associated with the nLab wiki and nForum forum, which now run independently of n-Café. It is hosted on The University of Texas at Austin's official website. Family Baez's uncle Albert Baez was a physicist and a co-inventor of the X-ray microscope; Albert interested him in physics as a child. Through Albert, he is cousins with singers Joan Baez and Mimi Fariña. John Baez is married to Lisa Raphals, who is a professor of Chinese and comparative literature at UCR.
https://en.wikipedia.org/wiki/Well-ordering%20principle
In mathematics, the well-ordering principle states that every non-empty set of positive integers contains a least element. In other words, the set of positive integers is well-ordered by its "natural" or "magnitude" order, in which x precedes y if and only if y is either x or the sum of x and some positive integer (other orderings of the integers are also possible). The phrase "well-ordering principle" is sometimes taken to be synonymous with the "well-ordering theorem". On other occasions it is understood to be the proposition that the set of integers contains a well-ordered subset, called the natural numbers, in which every nonempty subset contains a least element. Properties Depending on the framework in which the natural numbers are introduced, this (second-order) property of the set of natural numbers is either an axiom or a provable theorem. For example: In Peano arithmetic, second-order arithmetic and related systems, and indeed in most (not necessarily formal) mathematical treatments of the well-ordering principle, the principle is derived from the principle of mathematical induction, which is itself taken as basic. Considering the natural numbers as a subset of the real numbers, and assuming that we know already that the real numbers are complete (again, either as an axiom or a theorem about the real number system), i.e., every bounded (from below) set has an infimum, then also every non-empty set A of natural numbers has an infimum, say a*. We can now find an element n of A that lies in the half-open interval [a*, a* + 1), and can then show that we must have n = a*, and that a* is the least element of A. In axiomatic set theory, the natural numbers are defined as the smallest inductive set (i.e., set containing 0 and closed under the successor operation). One can (even without invoking the regularity axiom) show that the set of all natural numbers n such that "{0, …, n} is well-ordered" is inductive, and must therefore contain all natural numbers; from this property one can conclude that the set of all natural numbers is also well-ordered. 
In the second sense, this phrase is used when that proposition is relied on for the purpose of justifying proofs that take the following form: to prove that every natural number belongs to a specified set , assume the contrary, which implies that the set of counterexamples is non-empty and thus contains a smallest counterexample. Then show that for any counterexample there is a still smaller counterexample, producing a contradiction. This mode of argument is the contrapositive of proof by complete induction. It is known light-heartedly as the "minimal criminal" method and is similar in its nature to Fermat's method of "infinite descent". Garrett Birkhoff and Saunders Mac Lane wrote in A Survey of Modern Algebra that this property, like the least upper bound axiom for real numbers, is non-algebraic; i.e., it cannot be deduced from the algebraic properties of the integers (which form an ordered integral domain). Example applications The well-ordering principle can be used
https://en.wikipedia.org/wiki/Corollary
In mathematics and logic, a corollary is a theorem of less importance which can be readily deduced from a previous, more notable statement. A corollary could, for instance, be a proposition which is incidentally proved while proving another proposition; the term might also be used more casually to refer to something which naturally or incidentally accompanies something else (e.g., violence as a corollary of revolutionary social changes). Overview In mathematics, a corollary is a theorem connected by a short proof to an existing theorem. The use of the term corollary, rather than proposition or theorem, is intrinsically subjective. More formally, proposition B is a corollary of proposition A if B can be readily deduced from A or is self-evident from A's proof. In many cases, a corollary corresponds to a special case of a larger theorem, which makes the theorem easier to use and apply, even though its importance is generally considered to be secondary to that of the theorem. In particular, B is unlikely to be termed a corollary if its mathematical consequences are as significant as those of A. A corollary might have a proof that explains its derivation, even though such a derivation might be considered rather self-evident on some occasions (e.g., the Pythagorean theorem as a corollary of the law of cosines). Peirce's theory of deductive reasoning Charles Sanders Peirce held that the most important division of kinds of deductive reasoning is that between corollarial and theorematic.
He argued that while all deduction ultimately depends in one way or another on mental experimentation on schemata or diagrams, in corollarial deduction: "it is only necessary to imagine any case in which the premises are true in order to perceive immediately that the conclusion holds in that case" while in theorematic deduction: "It is necessary to experiment in the imagination upon the image of the premise in order from the result of such experiment to make corollarial deductions to the truth of the conclusion." Peirce also held that corollarial deduction matches Aristotle's conception of direct demonstration, which Aristotle regarded as the only thoroughly satisfactory demonstration, while theorematic deduction is: The kind more prized by mathematicians Peculiar to mathematics Involves in its course the introduction of a lemma or at least a definition uncontemplated in the thesis (the proposition that is to be proved), in remarkable cases that definition is of an abstraction that "ought to be supported by a proper postulate." See also Lemma (mathematics) Porism Proposition Lodge Corollary to the Monroe Doctrine Roosevelt Corollary to the Monroe Doctrine References Further reading Cut the knot: Sample corollaries of the Pythagorean theorem Geeks for geeks: Corollaries of binomial theorem Leo Tutorials: C language Mathematical terminology Theorems Statements
https://en.wikipedia.org/wiki/Centaurus%20%28journal%29
Centaurus. Journal of the European Society for the History of Science is a quarterly peer-reviewed academic journal covering research on the history of mathematics, science, and technology. It is the official journal of the European Society for the History of Science. The journal was established in 1950. In January 2022, Centaurus was relaunched in open-access format by the ESHS and Brepols as Centaurus. Journal of the European Society for the History of Science. The editor-in-chief is Koen Vermeir (Centre national de la recherche scientifique and Paris Diderot University). Abstracting and indexing The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2020 impact factor of 0.200. References External links History of science journals Brepols academic journals Academic journals established in 1950 English-language journals Quarterly journals
https://en.wikipedia.org/wiki/Jacques%20Charles
Jacques Alexandre César Charles (12 November 1746 – 7 April 1823) was a French inventor, scientist, mathematician, and balloonist. Charles wrote almost nothing about mathematics, and most of what has been credited to him was due to confusing him with another Jacques Charles, also a member of the Paris Academy of Sciences, who entered it on 12 May 1785. He was sometimes called Charles the Geometer. Charles and the Robert brothers launched the world's first hydrogen-filled gas balloon in August 1783; then in December 1783, Charles and his co-pilot Nicolas-Louis Robert ascended to a height of about 1,800 feet (550 m) in a piloted gas balloon. Their pioneering use of hydrogen for lift led to this type of gas balloon being named a Charlière (as opposed to the hot-air Montgolfière). Charles's law, describing how gases tend to expand when heated, was formulated by Joseph Louis Gay-Lussac in 1802, but he credited it to unpublished work by Charles. Charles was elected to the Académie des Sciences in 1795 and subsequently became professor of physics at the Académie de Sciences. Biography Charles was born in Beaugency-sur-Loire in 1746. He married Julie Françoise Bouchaud des Hérettes (1784–1817), a creole woman 37 years younger than himself. Reportedly the poet Alphonse de Lamartine also fell in love with her, and she was the inspiration for Elvire in his 1820 autobiographical Poetic Meditation "Le Lac" ("The Lake"), which describes in retrospect the fervent love shared by a couple from the point of view of the bereaved man. Charles outlived her and died in Paris on 7 April 1823. Hydrogen balloon flights First hydrogen balloon Charles conceived the idea that hydrogen would be a suitable lifting agent for balloons, having studied Boyle's law, which Robert Boyle had published over a century earlier, in 1662, and the work of his contemporaries Henry Cavendish, Joseph Black and Tiberius Cavallo.
He designed the craft and then worked in conjunction with the Robert brothers, Anne-Jean and Nicolas-Louis, to build it in their workshop at the Place des Victoires in Paris. The brothers invented the methodology for the lightweight, airtight gas bag: they dissolved rubber in a solution of turpentine and varnished the sheets of silk that were stitched together to make the main envelope. They used alternate strips of red and white silk, but the discolouration of the varnishing/rubberising process left a red and yellow result. Charles and the Robert brothers launched the world's first hydrogen-filled balloon on 27 August 1783 from the Champ de Mars (now the site of the Eiffel Tower), where Benjamin Franklin was among the crowd of onlookers. The balloon was comparatively small, a 35 cubic metre sphere of rubberised silk, and only capable of lifting about 9 kg (20 lb). It was filled with hydrogen that had been made by pouring nearly a quarter of a tonne of sulphuric acid onto half a tonne of scrap iron. The hydrogen gas was fed into the balloon via lead pipes; but as it was
https://en.wikipedia.org/wiki/Dragan%20Maru%C5%A1i%C4%8D
Dragan Marušič (born 1953, Koper, Slovenia) is a Slovene mathematician. Marušič obtained his BSc in technical mathematics from the University of Ljubljana in 1976, and his PhD from the University of Reading in 1981 under the supervision of Crispin Nash-Williams. Marušič has published extensively, and has supervised seven PhD students (as of 2013). He served as the third rector of the University of Primorska from 2011 to 2019, a university he lobbied to have established in his home town of Koper. His research focuses on topics in algebraic graph theory, particularly the symmetry of graphs and the action of finite groups on combinatorial objects. He is regarded as the founder of the Slovenian school of research in algebraic graph theory and permutation groups. Education and career From 1968 to 1972 Marušič attended gymnasium in Koper. He studied undergraduate mathematics at the University of Ljubljana, graduating in 1976. He completed his PhD in 1981 in England, at the University of Reading, under the supervision of Crispin Nash-Williams. After completing a post-doctoral fellowship at the University of Reading in 1983, Marušič spent a year teaching high school mathematics in Koper. He worked for one year at the University of Minnesota Duluth as an assistant professor, and then spent three years, from 1985 to 1988, at the University of California, Santa Cruz. In 1988, he returned to Slovenia to work at the University of Ljubljana, where he rose quickly through the ranks, becoming a full professor in 1994. He also held the post of vice-rector for student affairs there from 1989 to 1991. In 1991–92 he spent a year as a Fulbright scholar at the University of California, Santa Cruz. Marušič maintains his post at the University of Ljubljana, although he has also held an appointment at the University of Primorska since 2004, shortly after its founding.
He has increasingly devoted his time to the newer university, where he established the Faculty of Mathematics, Natural Sciences, and Information Technologies (UP FAMNIT). He served as the dean of that faculty from 2007 to 2011. He was elected in 2011 as the third rector of the University of Primorska, a position which he held until 2019. Marušič has supervised seven PhD students, and has supervised or co-supervised six post-doctoral fellows, in addition to numerous master's and honours students. He is one of the two founding editors and editors-in-chief (with Tomaž Pisanski) of the journal Ars Mathematica Contemporanea. Achievements and honours Marušič is regarded as the founder of the Slovenian school of research in algebraic graph theory and permutation groups. In 2002 he received the Zois Award, the highest scientific award in Slovenia, for his achievements in the field of graph theory and algebra. Since 2010, he has been a member of the committee that selects the Zois Award recipients, as well as the recipients of other scientific honours from the government of Slovenia. Research In his research, Marušič
https://en.wikipedia.org/wiki/Necessity%20and%20sufficiency
In logic and mathematics, necessity and sufficiency are terms used to describe a conditional or implicational relationship between two statements. For example, in the conditional statement "If S, then N", N is necessary for S, because the truth of N is guaranteed by the truth of S. (Equivalently, it is impossible to have S without N, or the falsity of N ensures the falsity of S.) Similarly, S is sufficient for N, because S being true always implies that N is true, but S not being true does not always imply that N is not true. In general, a necessary condition is one (possibly one of several conditions) that must be present in order for another condition to occur, while a sufficient condition is one that, when satisfied, produces the said condition. The assertion that a statement is a "necessary and sufficient" condition of another means that the former statement is true if and only if the latter is true. That is, the two statements must be either simultaneously true, or simultaneously false. In ordinary English (also natural language), "necessary" and "sufficient" indicate relations between conditions or states of affairs, not statements. For example, being a male is a necessary condition for being a brother, but it is not sufficient, while being a male sibling is a necessary and sufficient condition for being a brother. Any conditional statement consists of at least one sufficient condition and at least one necessary condition. In data analytics, necessity and sufficiency can refer to different causal logics, where Necessary Condition Analysis and Qualitative Comparative Analysis can be used as analytical techniques for examining the necessity and sufficiency of conditions for a particular outcome of interest. Definitions In the conditional statement "if S, then N", the expression represented by S is called the antecedent, and the expression represented by N is called the consequent.
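The asymmetry between the two notions can be made concrete by enumerating the material conditional "if S, then N" over all truth-value assignments; a small Python sketch:

```python
from itertools import product

def implies(s, n):
    """Material conditional: "if S, then N" is false only when S holds and N fails."""
    return (not s) or n

# All truth-value assignments under which the conditional holds:
rows = [(s, n) for s, n in product([True, False], repeat=2) if implies(s, n)]

# S is sufficient for N: among the admitted rows, S true forces N true.
assert all(n for s, n in rows if s)
# N is necessary for S: among the admitted rows, N false forces S false.
assert all(not s for s, n in rows if not n)
# But N true does not force S true: the assignment (S=False, N=True) is admitted.
assert (False, True) in rows
```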
This conditional statement may be written in several equivalent ways, such as "N if S", "S only if N", "S implies N", "N is implied by S", "S → N", and "N whenever S". In the above situation of "N whenever S", N is said to be a necessary condition for S. In common language, this is equivalent to saying that if the conditional statement is a true statement, then the consequent N must be true if S is to be true (see the third column of the truth table immediately below). In other words, the antecedent S cannot be true without N being true. For example, in order for someone to be called Socrates, it is necessary for that someone to be named Socrates. Similarly, in order for human beings to live, it is necessary that they have air. One can also say that S is a sufficient condition for N (refer again to the third column of the truth table immediately below). If the conditional statement is true, then if S is true, N must be true; whereas if the conditional statement is true and N is true, then S may be true or false. In common terms, "the truth of S guarantees the truth of N".
https://en.wikipedia.org/wiki/Logical%20equivalence
In logic and mathematics, statements p and q are said to be logically equivalent if they have the same truth value in every model. The logical equivalence of p and q is sometimes expressed as p ≡ q or p ⇔ q, among other notations, depending on the notation being used. However, these symbols are also used for material equivalence, so proper interpretation would depend on the context. Logical equivalence is different from material equivalence, although the two concepts are intrinsically related. Logical equivalences In logic, many common logical equivalences exist and are often listed as laws or properties. The following tables illustrate some of these. General logical equivalences Logical equivalences involving conditional statements Logical equivalences involving biconditionals Examples In logic The following statements are logically equivalent: If Lisa is in Denmark, then she is in Europe (a statement of the form p → q). If Lisa is not in Europe, then she is not in Denmark (a statement of the form ¬q → ¬p). Syntactically, (1) and (2) are derivable from each other via the rules of contraposition and double negation. Semantically, (1) and (2) are true in exactly the same models (interpretations, valuations); namely, those in which either "Lisa is in Denmark" is false or "Lisa is in Europe" is true. (Note that in this example, classical logic is assumed. Some non-classical logics do not deem (1) and (2) to be logically equivalent.) Relation to material equivalence Logical equivalence is different from material equivalence. Formulas p and q are logically equivalent if and only if the statement of their material equivalence (p ↔ q) is a tautology. The material equivalence of p and q (often written as p ↔ q) is itself another statement in the same object language as p and q. This statement expresses the idea "p if and only if q". In particular, the truth value of p ↔ q can change from one model to another.
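The tautology criterion can be checked mechanically by brute-force enumeration of valuations; a sketch, abstracting the Denmark/Europe example to propositional variables p and q:

```python
from itertools import product

def tautology(f):
    """True if the two-variable formula f(p, q) holds under every valuation."""
    return all(f(p, q) for p, q in product([True, False], repeat=2))

f1 = lambda p, q: (not p) or q   # p -> q: "if Lisa is in Denmark, she is in Europe"
f2 = lambda p, q: q or (not p)   # contrapositive (not q) -> (not p)
f3 = lambda p, q: (not q) or p   # converse q -> p: NOT equivalent to f1

# == on booleans plays the role of the material biconditional here.
print(tautology(lambda p, q: f1(p, q) == f2(p, q)))  # True: equivalent
print(tautology(lambda p, q: f1(p, q) == f3(p, q)))  # False: fails at p=False, q=True
```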
On the other hand, the claim that two formulas are logically equivalent is a statement in the metalanguage, which expresses a relationship between two statements p and q. The statements are logically equivalent if, in every model, they have the same truth value. See also Entailment Equisatisfiability If and only if Logical biconditional Logical equality ≡ the iff symbol (U+2261 IDENTICAL TO) ∷ the a is to b as c is to d symbol (U+2237 PROPORTION) ⇔ the double struck biconditional (U+21D4 LEFT RIGHT DOUBLE ARROW) ↔ the bidirectional arrow (U+2194 LEFT RIGHT ARROW) References Mathematical logic Metalogic Logical consequence Equivalence (mathematics)
https://en.wikipedia.org/wiki/Foundations%20of%20mathematics
Foundations of mathematics is the study of the philosophical and logical and/or algorithmic basis of mathematics, or, in a broader sense, the mathematical investigation of what underlies the philosophical theories concerning the nature of mathematics. In this latter sense, the distinction between foundations of mathematics and philosophy of mathematics turns out to be vague. Foundations of mathematics can be conceived as the study of the basic mathematical concepts (set, function, geometrical figure, number, etc.) and how they form hierarchies of more complex structures and concepts, especially the fundamentally important structures that form the language of mathematics (formulas, theories and their models giving a meaning to formulas, definitions, proofs, algorithms, etc.) also called metamathematical concepts, with an eye to the philosophical aspects and the unity of mathematics. The search for foundations of mathematics is a central question of the philosophy of mathematics; the abstract nature of mathematical objects presents special philosophical challenges. The foundations of mathematics as a whole does not aim to contain the foundations of every mathematical topic. Generally, the foundations of a field of study refers to a more-or-less systematic analysis of its most basic or fundamental concepts, its conceptual unity and its natural ordering or hierarchy of concepts, which may help to connect it with the rest of human knowledge. The development, emergence, and clarification of the foundations can come late in the history of a field, and might not be viewed by everyone as its most interesting part. Mathematics plays a special role in scientific thought, serving since ancient times as a model of truth and rigor for rational inquiry, and giving tools or even a foundation for other sciences (especially Physics). 
Mathematics' many developments towards higher abstractions in the 19th century brought new challenges and paradoxes, prompting a deeper and more systematic examination of the nature and criteria of mathematical truth, as well as a unification of the diverse branches of mathematics into a coherent whole. The systematic search for the foundations of mathematics started at the end of the 19th century and formed a new mathematical discipline called mathematical logic, which later had strong links to theoretical computer science. It went through a series of crises with paradoxical results, until the discoveries stabilized during the 20th century as a large and coherent body of mathematical knowledge with several aspects or components (set theory, model theory, proof theory, etc.), whose detailed properties and possible variants are still an active research field. Its high level of technical sophistication inspired many philosophers to conjecture that it can serve as a model or pattern for the foundations of other sciences. Historical context Ancient Greek mathematics While the practice of mathematics had previously developed in oth
https://en.wikipedia.org/wiki/Membrane%20topology
Topology of a transmembrane protein refers to the locations of the N- and C-termini of a membrane-spanning polypeptide chain with respect to the inner or outer sides of the biological membrane occupied by the protein. Several databases provide experimentally determined topologies of membrane proteins. They include Uniprot, TOPDB, OPM, and ExTopoDB. There is also a database of domains located conservatively on a certain side of membranes, TOPDOM. Several computational methods were developed, with limited success, for predicting transmembrane alpha-helices and their topology. Pioneering methods utilized the fact that membrane-spanning regions contain more hydrophobic residues than other parts of the protein; however, applying different hydrophobicity scales altered the prediction results. Later, several statistical methods were developed to improve the topography prediction, and a special alignment method was introduced. According to the positive-inside rule, cytosolic loops near the lipid bilayer contain more positively-charged amino acids. Applying this rule resulted in the first topology prediction methods. There is also a negative-outside rule in transmembrane alpha-helices from single-pass proteins, although negatively charged residues are rarer than positively charged residues in transmembrane segments of proteins. As more structures were determined, machine learning algorithms appeared. Supervised learning methods are trained on a set of experimentally determined structures; however, these methods depend strongly on the training set. Unsupervised learning methods are based on the principle that topology depends on the maximum divergence of the amino acid distributions in different structural parts. It was also shown that locking a segment location based on prior knowledge about the structure improves the prediction accuracy. This feature has been added to some of the existing prediction methods. The most recent methods use consensus prediction (i.e.
they use several algorithms to determine the final topology) and automatically incorporate previously determined experimental information. The HTP database provides a collection of topologies that are computationally predicted for human transmembrane proteins. Discrimination of signal peptides and transmembrane segments is an additional problem in topology prediction, treated with limited success by different methods. Both signal peptides and transmembrane segments contain hydrophobic regions which form α-helices. This causes cross-prediction between them, which is a weakness of many transmembrane topology predictors. By predicting signal peptides and transmembrane helices simultaneously (Phobius), the errors caused by cross-prediction are reduced and the performance is substantially increased. Another feature used to increase the accuracy of the prediction is homology (PolyPhobius). It is also possible to predict the topology of beta-barrel membrane proteins. See also Endomembrane system Integral membran
https://en.wikipedia.org/wiki/Sierpi%C5%84ski%20number
In number theory, a Sierpiński number is an odd natural number k such that k · 2^n + 1 is composite for all natural numbers n. In 1960, Wacław Sierpiński proved that there are infinitely many odd integers k which have this property. In other words, when k is a Sierpiński number, all members of the following set are composite: {k · 2^n + 1 : n = 1, 2, 3, …}. If the form is instead k · 2^n − 1, then k is a Riesel number. Known Sierpiński numbers The sequence of currently known Sierpiński numbers begins with: 78557, 271129, 271577, 322523, 327739, 482719, 575041, 603713, 903983, 934909, 965431, 1259779, 1290677, 1518781, 1624097, 1639459, 1777613, 2131043, 2131099, 2191531, 2510177, 2541601, 2576089, 2931767, 2931991, ... . The number 78557 was proved to be a Sierpiński number by John Selfridge in 1962, who showed that all numbers of the form 78557 · 2^n + 1 have a factor in the covering set {3, 5, 7, 13, 19, 37, 73}. For another known Sierpiński number, 271129, the covering set is {3, 5, 7, 13, 17, 241}. Most currently known Sierpiński numbers possess similar covering sets. However, in 1995 A. S. Izotov showed that some fourth powers could be proved to be Sierpiński numbers without establishing a covering set for all values of n. His proof depends on an aurifeuillean factorization, which establishes that one class of exponents n gives rise to a composite, so that only the remaining exponents need be eliminated using a covering set. Sierpiński problem The Sierpiński problem asks for the value of the smallest Sierpiński number. In private correspondence with Paul Erdős, Selfridge conjectured that 78,557 was the smallest Sierpiński number. No smaller Sierpiński numbers have been discovered, and it is now believed that 78,557 is the smallest number. To show that 78,557 really is the smallest Sierpiński number, one must show that all the odd numbers smaller than 78,557 are not Sierpiński numbers. That is, for every odd k below 78,557, there needs to exist a positive integer n such that k · 2^n + 1 is prime.
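Selfridge's covering-set argument lends itself to direct verification: for every exponent n, at least one prime in {3, 5, 7, 13, 19, 37, 73} divides 78557 · 2^n + 1, and since 2^n mod p is periodic, checking a finite range of n is conclusive. A quick Python sketch:

```python
COVER = (3, 5, 7, 13, 19, 37, 73)   # Selfridge's covering set for k = 78557

def covered(k, n):
    """True if some prime in COVER divides k * 2**n + 1."""
    return any((k * pow(2, n, p) + 1) % p == 0 for p in COVER)

# The multiplicative orders of 2 modulo these primes are 2, 4, 3, 12, 18, 36, 9,
# whose lcm is 36, so checking n = 1..36 already proves the claim for all n;
# a longer range costs almost nothing.
assert all(covered(78557, n) for n in range(1, 1000))
print("every 78557 * 2**n + 1 has a factor in the covering set")
```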
At present, there are only five candidates which have not been eliminated as possible Sierpiński numbers: k = 21181, 22699, 24737, 55459, and 67607. The distributed volunteer computing project PrimeGrid is attempting to eliminate all the remaining values of k. So far, no prime has been found for these values of k. The most recently eliminated candidate was k = 10223, when the prime 10223 · 2^31172165 + 1 was discovered by PrimeGrid in October 2016. This number is 9,383,761 digits long. Prime Sierpiński problem In 1976, Nathan Mendelsohn determined that the second provable Sierpiński number is the prime k = 271129. The prime Sierpiński problem asks for the value of the smallest prime Sierpiński number, and there is an ongoing "Prime Sierpiński search" which tries to prove that 271129 is the first Sierpiński number which is also a prime. The nine prime values of k less than 271129 for which a prime of the form k · 2^n + 1 is not known are: k = 22699, 67607, 79309, 79817, 152267, 156511, 222113, 225931, and 237019. So far, no prime has been found for these values of k. The first two, being less than 78557, a
https://en.wikipedia.org/wiki/List%20of%20continuity-related%20mathematical%20topics
In mathematics, the terms continuity, continuous, and continuum are used in a variety of related ways. Continuity of functions and measures Continuous function Absolutely continuous function Absolute continuity of a measure with respect to another measure Continuous probability distribution: Sometimes this term is used to mean a probability distribution whose cumulative distribution function (c.d.f.) is (simply) continuous. Sometimes it has a less inclusive meaning: a distribution whose c.d.f. is absolutely continuous with respect to Lebesgue measure. This less inclusive sense is equivalent to the condition that every set whose Lebesgue measure is 0 has probability 0. Geometric continuity Parametric continuity Continuum Continuum (set theory), the real line or the corresponding cardinal number Linear continuum, any ordered set that shares certain properties of the real line Continuum (topology), a nonempty compact connected metric space (sometimes a Hausdorff space) Continuum hypothesis, a conjecture of Georg Cantor that there is no cardinal number between that of countably infinite sets and the cardinality of the set of all real numbers. The latter cardinality is equal to the cardinality of the set of all subsets of a countably infinite set. Cardinality of the continuum, a cardinal number that represents the size of the set of real numbers See also Continuous variable Mathematical analysis Mathematics-related lists
https://en.wikipedia.org/wiki/Stratification
Stratification may refer to: Mathematics Stratification (mathematics), any consistent assignment of numbers to predicate symbols Data stratification in statistics Earth sciences Stable and unstable stratification Stratification, or stratum, the layering of rocks Stratification (archeology), the formation of layers (strata) in which objects are found Stratification (water), the formation of water layers based on temperature (and salinity, in oceans) Ocean stratification Lake stratification Atmospheric stratification, the dividing of the Earth's atmosphere into strata Inversion (meteorology) Social sciences Social stratification, the dividing of a society into levels based on power or socioeconomic status Biology Stratification (seeds), where seeds are treated to simulate winter conditions so that germination may occur Stratification (clinical trials), partitioning of subjects by factors other than the intervention Stratification (vegetation), the vertical layering of vegetation e.g. within a forest Population stratification, the stratification of a genetic population based on allele frequencies Linguistics Stratification (linguistics), the idea that language is organized in hierarchically ordered strata (such as phonology, morphology, syntax, and semantics). See also Destratification (disambiguation) Fuel stratified injection Layer (disambiguation) Partition (disambiguation) Strata (disambiguation) Stratified epithelial lining (disambiguation) Stratified sampling Stratigraphy Stratum (disambiguation)
https://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann%20statistics
In statistical mechanics, Maxwell–Boltzmann statistics describes the distribution of classical material particles over various energy states in thermal equilibrium. It is applicable when the temperature is high enough or the particle density is low enough to render quantum effects negligible. The expected number of particles with energy ε_i for Maxwell–Boltzmann statistics is ⟨N_i⟩ = g_i exp((μ − ε_i)/(kT)) = (N/Z) g_i exp(−ε_i/(kT)), where: ε_i is the energy of the i-th energy level, ⟨N_i⟩ is the average number of particles in the set of states with energy ε_i, g_i is the degeneracy of energy level i, that is, the number of states with energy ε_i which may nevertheless be distinguished from each other by some other means, μ is the chemical potential, k is the Boltzmann constant, T is absolute temperature, N is the total number of particles, N = Σ_i ⟨N_i⟩, Z is the partition function, Z = Σ_i g_i exp(−ε_i/(kT)), and e is Euler's number. Equivalently, the number of particles is sometimes expressed as ⟨N_i⟩ = (N/Z) exp(−ε_i/(kT)), where the index i now specifies a particular state rather than the set of all states with energy ε_i, and Z = Σ_i exp(−ε_i/(kT)). History Maxwell–Boltzmann statistics grew out of the Maxwell–Boltzmann distribution, most likely as a distillation of the underlying technique. The distribution was first derived by Maxwell in 1860 on heuristic grounds. Boltzmann later, in the 1870s, carried out significant investigations into the physical origins of this distribution. The distribution can be derived on the ground that it maximizes the entropy of the system. Applicability Maxwell–Boltzmann statistics is used to derive the Maxwell–Boltzmann distribution of an ideal gas. However, it can also be used to extend that distribution to particles with a different energy–momentum relation, such as relativistic particles (resulting in the Maxwell–Jüttner distribution), and to spaces of dimension other than three. Maxwell–Boltzmann statistics is often described as the statistics of "distinguishable" classical particles.
In other words, the configuration of particle A in state 1 and particle B in state 2 is different from the case in which particle B is in state 1 and particle A is in state 2. This assumption leads to the proper (Boltzmann) statistics of particles in the energy states, but yields non-physical results for the entropy, as embodied in the Gibbs paradox. At the same time, no real particles have the characteristics required by Maxwell–Boltzmann statistics. Indeed, the Gibbs paradox is resolved if we treat all particles of a certain type (e.g., electrons, protons, etc.) as indistinguishable in principle. Once this assumption is made, the particle statistics change. The change in entropy in the entropy-of-mixing example may be viewed as an example of a non-extensive entropy resulting from the assumed distinguishability of the two types of particles being mixed. Quantum particles are either bosons (following instead Bose–Einstein statistics) or fermions (subject to the Pauli exclusion principle, following instead Fermi–Dirac statistics). Both of these quantum statistics approach the Maxwell–Boltzm
https://en.wikipedia.org/wiki/Gauss%E2%80%93Markov%20theorem
In statistics, the Gauss–Markov theorem (or simply Gauss theorem for some authors) states that the ordinary least squares (OLS) estimator has the lowest sampling variance within the class of linear unbiased estimators, if the errors in the linear regression model are uncorrelated, have equal variances and expectation value of zero. The errors do not need to be normal, nor do they need to be independent and identically distributed (only uncorrelated with mean zero and homoscedastic with finite variance). The requirement that the estimator be unbiased cannot be dropped, since biased estimators exist with lower variance. See, for example, the James–Stein estimator (which also drops linearity), ridge regression, or simply any degenerate estimator. The theorem was named after Carl Friedrich Gauss and Andrey Markov, although Gauss' work significantly predates Markov's. But while Gauss derived the result under the assumption of independence and normality, Markov reduced the assumptions to the form stated above. A further generalization to non-spherical errors was given by Alexander Aitken. Statement Suppose we have, in matrix notation, the linear relationship y = Xβ + ε, expanding to y_i = Σ_{j=1}^{K} X_{ij} β_j + ε_i for i = 1, …, n, where the β_j are non-random but unobservable parameters, the X_{ij} are non-random and observable (called the "explanatory variables"), the ε_i are random, and so the y_i are random. The random variables ε_i are called the "disturbance", "noise" or simply "error" (this will be contrasted with "residual" later in the article; see errors and residuals in statistics). Note that to include a constant in the model above, one can choose to introduce the constant as a variable with a newly introduced last column of X being unity, i.e., X_{i,K+1} = 1 for all i.
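The theorem's content can be illustrated by simulation: compare the sampling variance of the OLS slope with that of another linear unbiased estimator, here the two-point slope (y_n − y_1)/(x_n − x_1). The design points, noise law (uniform, hence mean-zero, homoscedastic, and uncorrelated, but not normal), and trial count below are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)      # fixed, observable design points
beta0, beta1 = 1.0, 2.0            # "true" but unobservable parameters

ols_slopes, twopoint_slopes = [], []
for _ in range(5000):
    eps = rng.uniform(-1.0, 1.0, size=x.size)   # Gauss-Markov errors, not normal
    y = beta0 + beta1 * x + eps
    ols_slopes.append(np.cov(x, y, bias=True)[0, 1] / np.var(x))  # OLS slope
    twopoint_slopes.append((y[-1] - y[0]) / (x[-1] - x[0]))       # rival linear unbiased

# Both estimators average near beta1 = 2 (unbiased), but OLS, being BLUE,
# exhibits the smaller sampling variance.
print(f"OLS slope variance:       {np.var(ols_slopes):.4f}")
print(f"two-point slope variance: {np.var(twopoint_slopes):.4f}")
```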
Note that though \(y_i\), as sample responses, are observable, the following statements and arguments, including assumptions and proofs, assume knowledge of \(x_{ij}\) but not of \(\beta_j\) or \(\varepsilon_i\). The Gauss–Markov assumptions concern the set of error random variables, \(\varepsilon_i\): They have mean zero: \(\operatorname{E}[\varepsilon_i] = 0.\) They are homoscedastic, that is, all have the same finite variance: \(\operatorname{Var}(\varepsilon_i) = \sigma^2 < \infty\) for all \(i\); and Distinct error terms are uncorrelated: \(\operatorname{Cov}(\varepsilon_i, \varepsilon_j) = 0\) for \(i \neq j\). A linear estimator of \(\beta_j\) is a linear combination \(\widehat{\beta}_j = c_{1j} y_1 + \cdots + c_{nj} y_n\) in which the coefficients \(c_{ij}\) are not allowed to depend on the underlying coefficients \(\beta_j\), since those are not observable, but are allowed to depend on the values \(x_{ij}\), since these data are observable. (The dependence of the coefficients on each \(x_{ij}\) is typically nonlinear; the estimator is linear in each \(y_i\) and hence in each random \(\varepsilon_i\), which is why this is "linear" regression.) The estimator is said to be unbiased if and only if \(\operatorname{E}[\widehat{\beta}_j] = \beta_j\) regardless of the values of \(x_{ij}\). Now, let \(\sum_{j=1}^{K} \lambda_j \beta_j\) be some linear combination of the coefficients. Then the mean squared error of the corresponding estimation is \(\operatorname{E}\!\left[\Bigl(\sum_{j=1}^{K} \lambda_j (\widehat{\beta}_j - \beta_j)\Bigr)^{2}\right];\) in other words, it is the expectation of the square of the weighted sum (across parameters) of the differences between the estimators and the corresponding parameters to be estimated. (Since we are considering the case in which all the parameter estimates are unbiased,
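The theorem's content can be checked numerically. The following sketch is ours: the parameter values, the simulation setup, and the competing two-point estimator are illustrative choices, not part of the theorem's statement. It compares the OLS slope estimate against another linear unbiased estimator of the same slope:

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1 = 2.0, 0.5          # true parameters (assumed for the demo)
x = np.linspace(0.0, 10.0, 50)   # fixed, observable regressors
X = np.column_stack([np.ones_like(x), x])

ols_slopes, alt_slopes = [], []
for _ in range(2000):
    y = beta0 + beta1 * x + rng.normal(0.0, 1.0, size=x.size)
    # OLS estimate via the normal equations (X'X) b = X'y.
    b = np.linalg.solve(X.T @ X, X.T @ y)
    ols_slopes.append(b[1])
    # A competing *linear unbiased* estimator of the slope:
    # the two-point estimator (y_n - y_1)/(x_n - x_1).
    alt_slopes.append((y[-1] - y[0]) / (x[-1] - x[0]))

# Both estimators are unbiased (their averages are close to 0.5), but
# OLS has the smaller sampling variance, as the theorem guarantees.
print(np.mean(ols_slopes), np.mean(alt_slopes))
print(np.var(ols_slopes) < np.var(alt_slopes))  # True
```

Any other linear unbiased estimator could be substituted for the two-point one; the Gauss–Markov theorem says OLS will never lose this comparison under the stated assumptions.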
https://en.wikipedia.org/wiki/Normal%20matrix
In mathematics, a complex square matrix \(A\) is normal if it commutes with its conjugate transpose \(A^*\): \(A^* A = A A^*.\) The concept of normal matrices can be extended to normal operators on infinite dimensional normed spaces and to normal elements in C*-algebras. As in the matrix case, normality means commutativity is preserved, to the extent possible, in the noncommutative setting. This makes normal operators, and normal elements of C*-algebras, more amenable to analysis. The spectral theorem states that a matrix is normal if and only if it is unitarily similar to a diagonal matrix, and therefore any matrix satisfying the equation \(A^* A = A A^*\) is diagonalizable. The converse does not hold because diagonalizable matrices may have non-orthogonal eigenspaces. The left and right singular vectors in the singular value decomposition of a normal matrix differ only in complex phase from each other and from the corresponding eigenvectors, since the phase must be factored out of the eigenvalues to form singular values. Special cases Among complex matrices, all unitary, Hermitian, and skew-Hermitian matrices are normal, with all eigenvalues being unit modulus, real, and imaginary, respectively. Likewise, among real matrices, all orthogonal, symmetric, and skew-symmetric matrices are normal, with all eigenvalues being complex conjugate pairs on the unit circle, real, and imaginary, respectively. However, it is not the case that all normal matrices are either unitary or (skew-)Hermitian, as their eigenvalues can be any complex number, in general. For example, \(A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}\) is neither unitary, Hermitian, nor skew-Hermitian, because its eigenvalues are \(2\) and \(\tfrac{1}{2}\bigl(1 \pm i\sqrt{3}\bigr)\); yet it is normal because \(A A^* = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix} = A^* A.\) Consequences The concept of normality is important because normal matrices are precisely those to which the spectral theorem applies: \(A = U \Lambda U^*\) with \(\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)\) diagonal and \(U\) unitary. The diagonal entries of \(\Lambda\) are the eigenvalues of \(A\), and the columns of \(U\) are the eigenvectors of \(A\). The matching eigenvalues in \(\Lambda\) come in the same order as the eigenvectors are ordered as columns of \(U\).
Another way of stating the spectral theorem is to say that normal matrices are precisely those matrices that can be represented by a diagonal matrix with respect to a properly chosen orthonormal basis of \(\mathbb{C}^n\). Phrased differently: a matrix is normal if and only if its eigenspaces span \(\mathbb{C}^n\) and are pairwise orthogonal with respect to the standard inner product of \(\mathbb{C}^n\). The spectral theorem for normal matrices is a special case of the more general Schur decomposition, which holds for all square matrices. Let \(A\) be a square matrix. Then by Schur decomposition it is unitarily similar to an upper-triangular matrix, say, \(B\). If \(A\) is normal, so is \(B\). But then \(B\) must be diagonal, for, as noted above, a normal upper-triangular matrix is diagonal. The spectral theorem permits the classification of normal matrices in terms of their spectra, for example: a normal matrix is unitary if and only if its spectrum lies on the unit circle, Hermitian if and only if its spectrum is real, and skew-Hermitian if and only if its spectrum is purely imaginary. In general, the sum or product of two normal matrices need not be normal. However, the following holds: if \(A\) and \(B\) are normal with \(AB = BA\), then both \(AB\) and \(A + B\) are also normal, and there exists a unitary matrix \(U\) that diagonalizes both. In this special case, the columns of \(U\) are eigenvectors of both an
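The defining identity and the spectral theorem can both be verified numerically. The following sketch uses a real circulant matrix as the normal-but-not-symmetric example (the choice of matrix is ours; any circulant would do):

```python
import numpy as np

# A real circulant matrix: circulants commute with their transposes,
# so A is normal, though it is neither symmetric, skew-symmetric,
# nor orthogonal.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# Normality: A A* == A* A  (conjugate transpose is just transpose here).
print(np.allclose(A @ A.T, A.T @ A))                 # True

# Spectral theorem: A = U diag(w) U* with U unitary.
w, U = np.linalg.eig(A)
print(np.allclose(U @ np.diag(w) @ U.conj().T, A))   # True
print(np.allclose(U.conj().T @ U, np.eye(3)))        # True: orthonormal eigenvectors
```

The last check succeeds because the eigenvalues here are distinct, so the eigenvectors returned by `eig` are automatically pairwise orthogonal for a normal matrix; for repeated eigenvalues an orthonormal basis of each eigenspace would have to be chosen explicitly.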
https://en.wikipedia.org/wiki/Gauss%E2%80%93Markov
The phrase Gauss–Markov is used in two different ways: Gauss–Markov processes in probability theory The Gauss–Markov theorem in mathematical statistics (in this theorem, one does not assume the probability distributions are Gaussian.)
https://en.wikipedia.org/wiki/Dieppe%2C%20New%20Brunswick
Dieppe is a city in the Canadian maritime province of New Brunswick. Statistics Canada counted the population at 28,114 in 2021, making it the fourth-largest city in the province. On 1 January 2023, Dieppe annexed parts of two neighbouring local service districts; revised census figures have not been released. Dieppe's history and identity go back to the eighteenth century. Formerly known as Leger's Corner, it was incorporated as a town in 1952 under the Dieppe name, and designated as a city in 2003. The Dieppe name was adopted by the citizens of the area in 1946 to commemorate the Second World War's Operation Jubilee, the Dieppe Raid of 1942. It is officially a francophone city: French is the mother tongue of 63.8% of the population, English of 24%, both French and English of 3%, and other languages of 8%. A majority of the population reports being bilingual, speaking both French and English. Residents generally speak French with a regional accent (colloquially called "Chiac") which is unique to southeastern New Brunswick. A large majority of Dieppe's population were in favour of the by-law regulating the use of external commercial signs in both official languages, which is a first for the province of New Brunswick. Dieppe is the largest predominantly francophone city in Canada outside Québec; while there are other municipalities with greater total numbers of francophones, they constitute a minority of the population in those cities. Dieppe was one of the co-hosts of the first Congrès Mondial Acadien (Acadian World Congress) which was held in the Moncton region in 1994, and again in 2019. Dieppe is part of the census metropolitan area of Moncton, which is New Brunswick's most populous city, with a metropolitan population of 144,810 according to Statistics Canada in 2016.
Name In 1910, the area known as French Village became known as Leger's Corner which, in turn, became the Village of Dieppe in 1946 to commemorate the Canadian soldiers killed during the landing of Allied troops on Normandy beaches in Dieppe, France, on August 19, 1942. On January 1, 1952, the Village of Dieppe became the Town of Dieppe. On January 1, 2003, the municipality was designated as the City of Dieppe. Government Provincial electoral districts Members of the 58th New Brunswick Legislative Assembly (2014), the governing house of the province of New Brunswick. Dieppe - Vacant Shediac Bay-Dieppe - Robert Gauvin Federal electoral districts Members of the 42nd Parliament of Canada (2015). A section of southeast Dieppe is in the Beauséjour riding. Moncton—Riverview—Dieppe - Ginette Petitpas Taylor Beauséjour - Dominic LeBlanc Geography Dieppe is located on the Petitcodiac River. It forms the southeastern part of the Greater Moncton Area, which, in addition to the city of Moncton, includes the town of Riverview, Moncton Parish, Memramcook, Coverdale, and Salisbury. Climate Demographics In the 2021 Census of Population conducted by Statistics Canada, Dieppe had a population of living
https://en.wikipedia.org/wiki/Step%20function
In mathematics, a function on the real numbers is called a step function if it can be written as a finite linear combination of indicator functions of intervals. Informally speaking, a step function is a piecewise constant function having only finitely many pieces. Definition and first consequences A function \(f \colon \mathbb{R} \to \mathbb{R}\) is called a step function if it can be written as \(f(x) = \sum_{i=0}^{n} \alpha_i \chi_{A_i}(x)\) for all real numbers \(x\), where \(n \geq 0\), \(\alpha_i\) are real numbers, \(A_i\) are intervals, and \(\chi_A\) is the indicator function of \(A\): \(\chi_A(x) = 1\) if \(x \in A\), and \(\chi_A(x) = 0\) otherwise. In this definition, the intervals \(A_i\) can be assumed to have the following two properties: The intervals are pairwise disjoint: \(A_i \cap A_j = \emptyset\) for \(i \neq j\). The union of the intervals is the entire real line: \(A_0 \cup A_1 \cup \dots \cup A_n = \mathbb{R}\). Indeed, if that is not the case to start with, a different set of intervals can be picked for which these assumptions hold. For example, the step function \(f = 4\chi_{[-5,1)} + 3\chi_{(0,6)}\) can be written as \(f = 0\chi_{(-\infty,-5)} + 4\chi_{[-5,0]} + 7\chi_{(0,1)} + 3\chi_{[1,6)} + 0\chi_{[6,\infty)}.\) Variations in the definition Sometimes, the intervals are required to be right-open or allowed to be singleton. The condition that the collection of intervals must be finite is often dropped, especially in school mathematics, though it must still be locally finite, resulting in the definition of piecewise constant functions. Examples A constant function is a trivial example of a step function. Then there is only one interval, \(A_0 = \mathbb{R}\). The sign function \(\sgn(x)\), which is −1 for negative numbers and +1 for positive numbers, is the simplest non-constant step function. The Heaviside function \(H(x)\), which is 0 for negative numbers and 1 for positive numbers, is equivalent to the sign function, up to a shift and scale of range (\(H = (\sgn + 1)/2\)). It is the mathematical concept behind some test signals, such as those used to determine the step response of a dynamical system. The rectangular function, the normalized boxcar function, is used to model a unit pulse. Non-examples The integer part function is not a step function according to the definition of this article, since it has an infinite number of intervals. However, some authors also define step functions with an infinite number of intervals.
Properties The sum and the product of two step functions are again step functions. The product of a step function with a number is also a step function. As such, the step functions form an algebra over the real numbers. A step function takes only a finite number of values. If the intervals \(A_i\), for \(i = 0, 1, \dots, n\), in the above definition of the step function are disjoint and their union is the real line, then \(f(x) = \alpha_i\) for all \(x \in A_i\). The definite integral of a step function is a piecewise linear function. The Lebesgue integral of a step function \(f = \sum_{i=0}^{n} \alpha_i \chi_{A_i}\) is \(\int f\,dx = \sum_{i=0}^{n} \alpha_i \ell(A_i),\) where \(\ell(A)\) is the length of the interval \(A\), and it is assumed here that all intervals \(A_i\) have finite length. In fact, this equality (viewed as a definition) can be the first step in constructing the Lebesgue integral. A discrete random variable is sometimes defined as a random variable whose cumulative distribution function is piecewise constant. In this case, it is locally a step function (globally, it may have an infinite number of steps). Usually however,
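The definition and the integral formula above can be sketched directly in code. The representation below (a list of `(left, right, value)` pieces on disjoint right-open intervals) and all function names are our illustrative choices:

```python
# A step function as a finite list of (left, right, value) pieces on
# pairwise disjoint, right-open intervals; x outside every piece maps to 0.
def make_step(pieces):
    def f(x):
        for left, right, value in pieces:
            if left <= x < right:
                return value
        return 0.0
    return f

def lebesgue_integral(pieces):
    # For a step function, the Lebesgue integral is sum(value * length).
    return sum(value * (right - left) for left, right, value in pieces)

# f = 4*chi([-5,0)) + 7*chi([0,1)) + 3*chi([1,6))
pieces = [(-5, 0, 4.0), (0, 1, 7.0), (1, 6, 3.0)]
f = make_step(pieces)
print(f(-2), f(0.5), f(3), f(10))   # 4.0 7.0 3.0 0.0
print(lebesgue_integral(pieces))    # 4*5 + 7*1 + 3*5 = 42.0
```

Note that the "values" of the function live entirely in the finite list of pieces, which is what makes the integral a finite sum.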
https://en.wikipedia.org/wiki/The%20Unreasonable%20Effectiveness%20of%20Mathematics%20in%20the%20Natural%20Sciences
"The Unreasonable Effectiveness of Mathematics in the Natural Sciences" is a 1960 article by the physicist Eugene Wigner. In this paper, Wigner observes that a physical theory's mathematical structure often points the way to further advances in that theory and even to empirical predictions. Original paper and Wigner's observations Wigner begins his paper with the belief, common among those familiar with mathematics, that mathematical concepts have applicability far beyond the context in which they were originally developed. Based on his experience, he writes, "it is important to point out that the mathematical formulation of the physicist's often crude experience leads in an uncanny number of cases to an amazingly accurate description of a large class of phenomena". He then invokes the fundamental law of gravitation as an example. Originally used to model freely falling bodies on the surface of the earth, this law was extended on the basis of what Wigner terms "very scanty observations" to describe the motion of the planets, where it "has proved accurate beyond all reasonable expectations". Another oft-cited example is Maxwell's equations, derived to model the elementary electrical and magnetic phenomena known as of the mid-19th century. The equations also describe radio waves, discovered by David Edward Hughes in 1879, around the time of James Clerk Maxwell's death. Wigner sums up his argument by saying that "the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and that there is no rational explanation for it". He concludes his paper with the same question with which he began: The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve.
We should be grateful for it and hope that it will remain valid in future research and that it will extend, for better or for worse, to our pleasure, even though perhaps also to our bafflement, to wide branches of learning. Wigner's work provided a fresh insight into both physics and the philosophy of mathematics, and has been fairly often cited in the academic literature on the philosophy of physics and of mathematics. Wigner speculated on the relationship between the philosophy of science and the foundations of mathematics as follows: It is difficult to avoid the impression that a miracle confronts us here, quite comparable in its striking nature to the miracle that the human mind can string a thousand arguments together without getting into contradictions, or to the two miracles of the existence of laws of nature and of the human mind's capacity to divine them. Later, Hilary Putnam (1975) explained the aforementioned two "miracles" as necessary consequences of a realist (but not Platonist) view of the philosophy of mathematics. But in a passage discussing a cognitive bias that Wigner cautiously called "not reliable", he went further: The writer is convinced that it is
https://en.wikipedia.org/wiki/Engineering%20drawing
An engineering drawing is a type of technical drawing that is used to convey information about an object. A common use is to specify the geometry necessary for the construction of a component and is called a detail drawing. Usually, a number of drawings are necessary to completely specify even a simple component. The drawings are linked together by a master drawing or assembly drawing which gives the drawing numbers of the subsequent detailed components, quantities required, construction materials and possibly 3D images that can be used to locate individual items. Although mostly consisting of pictographic representations, abbreviations and symbols are used for brevity and additional textual explanations may also be provided to convey the necessary information. The process of producing engineering drawings is often referred to as technical drawing or drafting (draughting). Drawings typically contain multiple views of a component, although additional scratch views may be added of details for further explanation. Only the information that is a requirement is typically specified. Key information such as dimensions is usually only specified in one place on a drawing, avoiding redundancy and the possibility of inconsistency. Suitable tolerances are given for critical dimensions to allow the component to be manufactured and function. More detailed production drawings may be produced based on the information given in an engineering drawing. Drawings have an information box or title block containing who drew the drawing, who approved it, units of dimensions, meaning of views, the title of the drawing and the drawing number. History Technical drawing has existed since ancient times. Complex technical drawings were made in renaissance times, such as the drawings of Leonardo da Vinci. Modern engineering drawing, with its precise conventions of orthographic projection and scale, arose in France at a time when the Industrial Revolution was in its infancy. L. T. C. 
Rolt's biography of Isambard Kingdom Brunel says of his father, Marc Isambard Brunel, that "It seems fairly certain that Marc's drawings of his block-making machinery (in 1799) made a contribution to British engineering technique much greater than the machines they represented. For it is safe to assume that he had mastered the art of presenting three-dimensional objects in a two-dimensional plane which we now call mechanical drawing. It had been evolved by Gaspard Monge of Mezieres in 1765 but had remained a military secret until 1794 and was therefore unknown in England." Standardization and disambiguation Engineering drawings specify the requirements of a component or assembly which can be complicated. Standards provide rules for their specification and interpretation. Standardization also aids internationalization, because people from different countries who speak different languages can read the same engineering drawing, and interpret it the same way. One major set of engineering drawing s
https://en.wikipedia.org/wiki/Collision%20detection
Collision detection is the computational problem of detecting the intersection of two or more objects. Collision detection is a classic issue of computational geometry and has applications in various computing fields, primarily in computer graphics, computer games, computer simulations, robotics and computational physics. Collision detection algorithms can be divided into those operating on 2D objects and those operating on 3D objects. Overview In physical simulation, experiments such as playing billiards are conducted. The physics of bouncing billiard balls is well understood, under the umbrella of rigid body motion and elastic collisions. An initial description of the situation would be given, with a very precise physical description of the billiard table and balls, as well as initial positions of all the balls. Given a force applied to the cue ball (probably resulting from a player hitting the ball with their cue stick), we want to calculate the trajectories, precise motion and eventual resting places of all the balls with a computer program. A program to simulate this game would consist of several portions, one of which would be responsible for calculating the precise impacts between the billiard balls. This particular example also turns out to be ill-conditioned: a small error in any calculation will cause drastic changes in the final position of the billiard balls. Video games have similar requirements, with some crucial differences. While computer simulation needs to simulate real-world physics as precisely as possible, computer games need to simulate real-world physics in an acceptable way, in real time and robustly. Compromises are allowed, so long as the resulting simulation is satisfying to the game players. Collision detection in computer simulation Physical simulators differ in the way they react to a collision. Some use the softness of the material to calculate a force, which will resolve the collision in the following time steps like it is in reality.
This is very CPU intensive for low softness materials. Some simulators estimate the time of collision by linear interpolation, roll back the simulation, and calculate the collision by the more abstract methods of conservation laws. Some iterate the linear interpolation (Newton's method) to calculate the time of collision with a much higher precision than the rest of the simulation. Collision detection utilizes time coherence to allow even finer time steps without much increasing CPU demand, such as in air traffic control. After an inelastic collision, special states of sliding and resting can occur and, for example, the Open Dynamics Engine uses constraints to simulate them. Constraints avoid inertia and thus instability. Implementation of rest by means of a scene graph avoids drift. In other words, physical simulators usually function one of two ways: where the collision is detected a posteriori (after the collision occurs) or a priori (before the collision occurs). In addition to the a posteriori and a pr
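The a posteriori / a priori distinction can be made concrete with two moving discs. This sketch (function names and the disc scenario are ours) shows both styles: an after-the-fact overlap test, and a before-the-fact time-of-impact computation obtained by solving a quadratic in the relative motion:

```python
import math

# A posteriori test: have the two discs already interpenetrated?
def overlapping(p1, p2, r1, r2):
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.hypot(dx, dy) <= r1 + r2

# A priori test: earliest t >= 0 at which two discs with constant
# velocities first touch, from |dp + t*dv|^2 = (r1 + r2)^2.
def time_of_impact(p1, v1, p2, v2, r1, r2):
    dpx, dpy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    a = dvx * dvx + dvy * dvy
    b = 2.0 * (dpx * dvx + dpy * dvy)
    c = dpx * dpx + dpy * dpy - (r1 + r2) ** 2
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return None                       # no relative motion, or no contact
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None

# Two unit discs approaching head-on along the x-axis.
print(overlapping((0, 0), (10, 0), 1.0, 1.0))                  # False
print(time_of_impact((0, 0), (1, 0), (10, 0), (-1, 0), 1, 1))  # 4.0
```

An a posteriori simulator would call `overlapping` every time step and then correct the state; an a priori one would use `time_of_impact` to advance the simulation exactly to the moment of contact.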
https://en.wikipedia.org/wiki/Z-transform
In mathematics and signal processing, the Z-transform converts a discrete-time signal, which is a sequence of real or complex numbers, into a complex frequency-domain (the z-domain or z-plane) representation. It can be considered a discrete-time equivalent of the Laplace transform (the s-domain or s-plane). This similarity is explored in the theory of time-scale calculus. While the continuous-time Fourier transform is evaluated on the s-domain's vertical axis (the imaginary axis), the discrete-time Fourier transform is evaluated along the z-domain's unit circle. The s-domain's left half-plane maps to the area inside the z-domain's unit circle, while the s-domain's right half-plane maps to the area outside of the z-domain's unit circle. One of the means of designing digital filters is to take analog designs, subject them to a bilinear transform which maps them from the s-domain to the z-domain, and then produce the digital filter by inspection, manipulation, or numerical approximation. Such methods tend not to be accurate except in the vicinity of the complex unity, i.e. at low frequencies. History The basic idea now known as the Z-transform was known to Laplace, and it was re-introduced in 1947 by W. Hurewicz and others as a way to treat sampled-data control systems used with radar. It gives a tractable way to solve linear, constant-coefficient difference equations. It was later dubbed "the z-transform" by Ragazzini and Zadeh in the sampled-data control group at Columbia University in 1952. The modified or advanced Z-transform was later developed and popularized by E. I. Jury. The idea contained within the Z-transform is also known in mathematical literature as the method of generating functions which can be traced back as early as 1730 when it was introduced by de Moivre in conjunction with probability theory. 
From a mathematical view the Z-transform can also be viewed as a Laurent series where one views the sequence of numbers under consideration as the (Laurent) expansion of an analytic function. Definition The Z-transform can be defined as either a one-sided or two-sided transform. (Just like we have the one-sided Laplace transform and the two-sided Laplace transform.) Bilateral Z-transform The bilateral or two-sided Z-transform of a discrete-time signal \(x[n]\) is the formal power series \(X(z)\) defined as: \(X(z) = \sum_{n=-\infty}^{\infty} x[n] z^{-n},\) where \(n\) is an integer and \(z\) is, in general, a complex number. In polar form, \(z\) may be written as: \(z = A e^{j\phi},\) where \(A\) is the magnitude of \(z\), \(j\) is the imaginary unit, and \(\phi\) is the complex argument (also referred to as angle or phase) in radians. Unilateral Z-transform Alternatively, in cases where \(x[n]\) is defined only for \(n \geq 0\), the single-sided or unilateral Z-transform is defined as: \(X(z) = \sum_{n=0}^{\infty} x[n] z^{-n}.\) In signal processing, this definition can be used to evaluate the Z-transform of the unit impulse response of a discrete-time causal system. An important example of the unilateral Z-transform is the probability-generating function, where the component \(x[n]\) is the probability that a discr
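The unilateral definition can be checked numerically against a closed form. The signal, evaluation point, and function names below are our illustrative choices; we use the standard pair \(x[n] = a^n\) (for \(n \geq 0\)) with transform \(X(z) = 1/(1 - a z^{-1})\), valid for \(|z| > |a|\):

```python
# Numerically evaluate the unilateral Z-transform of x[n] = a^n (n >= 0)
# by truncating the power series, then compare with the closed form.
def z_transform(x, z, terms=200):
    return sum(x(n) * z ** (-n) for n in range(terms))

a = 0.5
x = lambda n: a ** n
z = 2.0 + 1.0j                    # a point outside the circle |z| = 0.5

numeric = z_transform(x, z)
closed_form = 1.0 / (1.0 - a / z)  # 1 / (1 - a z^-1)
print(abs(numeric - closed_form) < 1e-12)   # True
```

Since \(|a/z| \approx 0.22\) here, the truncated series converges geometrically and 200 terms are far more than enough.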
https://en.wikipedia.org/wiki/Brachistochrone%20curve
In physics and mathematics, a brachistochrone curve, or curve of fastest descent, is the one lying on the plane between a point A and a lower point B, where B is not directly below A, on which a bead slides frictionlessly under the influence of a uniform gravitational field to a given end point in the shortest time. The problem was posed by Johann Bernoulli in 1696. The brachistochrone curve is the same shape as the tautochrone curve; both are cycloids. However, the portion of the cycloid used for each of the two varies. More specifically, the brachistochrone can use up to a complete rotation of the cycloid (at the limit when A and B are at the same level), but always starts at a cusp. In contrast, the tautochrone problem can use only up to the first half rotation, and always ends at the horizontal. The problem can be solved using tools from the calculus of variations and optimal control. The curve is independent of both the mass of the test body and the local strength of gravity. Only one parameter is chosen so that the curve fits the starting point A and the ending point B. If the body is given an initial velocity at A, or if friction is taken into account, then the curve that minimizes time differs from the tautochrone curve. History Johann Bernoulli posed the problem of the brachistochrone to the readers of Acta Eruditorum in June, 1696. He wrote the problem statement as: "Given two points A and B in a vertical plane, what is the curve traced out by a point acted on only by gravity, which starts at A and reaches B in the shortest time?" Johann and his brother Jakob Bernoulli derived the same solution, but Johann's derivation was incorrect, and he tried to pass off Jakob's solution as his own. Johann published the solution in the journal in May of the following year, and noted that the solution is the same curve as Huygens's tautochrone curve.
After deriving the differential equation for the curve by the method given below, he went on to show that it does yield a cycloid. However, his proof is marred by his use of a single constant instead of the three constants \(v_m\), \(2g\) and \(D\) below. Bernoulli allowed six months for the solutions but none were received during this period. At the request of Leibniz, the time was publicly extended for a year and a half. At 4 p.m. on 29 January 1697 when he arrived home from the Royal Mint, Isaac Newton found the challenge in a letter from Johann Bernoulli. Newton stayed up all night to solve it and mailed the solution anonymously by the next post. Upon reading the solution, Bernoulli immediately recognized its author, exclaiming that he "recognizes a lion from his claw mark". This story gives some idea of Newton's power, since Johann Bernoulli took two weeks to solve it. (D. T. Whiteside, Newton the Mathematician, in Bechler, Contemporary Newtonian Research, p. 122.) Newton also wrote, "I do not love to be dunned [pestered] and teased by foreigners about mathematical things...",
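The "fastest descent" claim can be checked numerically. The sketch below (our own construction, not from the historical texts) discretizes two candidate paths between the same endpoints and computes the travel time from energy conservation, \(v = \sqrt{2gy}\) with \(y\) measured downward:

```python
import math

g = 9.81  # m/s^2

def descent_time(points):
    """Travel time along a polyline from rest at (0, 0), using the
    average of the endpoint speeds v = sqrt(2*g*y) on each segment."""
    t = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        v0, v1 = math.sqrt(2 * g * y0), math.sqrt(2 * g * y1)
        t += 2 * seg / (v0 + v1)
    return t

N, R = 20000, 1.0
# Cycloid from the cusp (0, 0) to its lowest point (pi*R, 2R).
cycloid = [(R * (th - math.sin(th)), R * (1 - math.cos(th)))
           for th in (math.pi * k / N for k in range(N + 1))]
# Straight line between the same two endpoints.
line = [(math.pi * R * k / N, 2 * R * k / N) for k in range(N + 1)]

print(descent_time(cycloid))   # close to pi*sqrt(R/g), about 1.003 s
print(descent_time(line) > descent_time(cycloid))  # True
```

For this geometry the exact cycloid time is \(\pi\sqrt{R/g}\), while the straight line takes \(\sqrt{\pi^2 + 4}\,\sqrt{R/g}\), noticeably longer, which the discretized comparison reproduces.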
https://en.wikipedia.org/wiki/Calculus%20of%20variations
The calculus of variations (or variational calculus) is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals: mappings from a set of functions to the real numbers. Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–Lagrange equation of the calculus of variations. A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as geodesics. A related problem is posed by Fermat's principle: light follows the path of shortest optical length connecting two points, which depends upon the material of the medium. One corresponding concept in mechanics is the principle of least/stationary action. Many important problems involve functions of several variables. Solutions of boundary value problems for the Laplace equation satisfy the Dirichlet's principle. Plateau's problem requires finding a surface of minimal area that spans a given contour in space: a solution can often be found by dipping a frame in soapy water. Although such experiments are relatively easy to perform, their mathematical formulation is far from simple: there may be more than one locally minimizing surface, and they may have non-trivial topology. History The calculus of variations may be said to begin with Newton's minimal resistance problem in 1687, followed by the brachistochrone curve problem raised by Johann Bernoulli (1696). It immediately occupied the attention of Jakob Bernoulli and the Marquis de l'Hôpital, but Leonhard Euler first elaborated the subject, beginning in 1733. 
Lagrange was influenced by Euler's work to contribute significantly to the theory. After Euler saw the 1755 work of the 19-year-old Lagrange, Euler dropped his own partly geometric approach in favor of Lagrange's purely analytic approach and renamed the subject the calculus of variations in his 1756 lecture Elementa Calculi Variationum. Legendre (1786) laid down a method, not entirely satisfactory, for the discrimination of maxima and minima. Isaac Newton and Gottfried Leibniz also gave some early attention to the subject. To this discrimination Vincenzo Brunacci (1810), Carl Friedrich Gauss (1829), Siméon Poisson (1831), Mikhail Ostrogradsky (1834), and Carl Jacobi (1837) have been among the contributors. An important general work is that of Sarrus (1842) which was condensed and improved by Cauchy (1844). Other valuable treatises and memoirs have been written by Strauch (1849), Jellett (1850), Otto Hesse (1857), Alfred Clebsch (1858), and Lewis Buffett Carll (1885), but perhaps the most important work of the c
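As a minimal worked example of the machinery described above (the notation \(J\) and \(L\) is ours, not the article's), the shortest-curve problem from the introduction yields to the Euler–Lagrange equation directly:

```latex
J[y] = \int_{x_1}^{x_2} \sqrt{1 + y'(x)^2}\, dx ,
\qquad L(x, y, y') = \sqrt{1 + y'^2} .

\frac{\partial L}{\partial y} - \frac{d}{dx}\frac{\partial L}{\partial y'} = 0
\quad\Longrightarrow\quad
\frac{d}{dx}\,\frac{y'}{\sqrt{1 + y'^2}} = 0
\quad\Longrightarrow\quad
y' = \text{constant},
```

since \(\partial L/\partial y = 0\); the extremals are therefore straight lines, as claimed in the introduction.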
https://en.wikipedia.org/wiki/Langlands%20program
In representation theory and algebraic number theory, the Langlands program is a web of far-reaching and consequential conjectures about connections between number theory and geometry. Proposed by Robert Langlands (1967, 1970), it seeks to relate Galois groups in algebraic number theory to automorphic forms and representation theory of algebraic groups over local fields and adeles. Widely seen as the single biggest project in modern mathematical research, the Langlands program has been described by Edward Frenkel as "a kind of grand unified theory of mathematics." The Langlands program consists of some very complicated theoretical abstractions, which can be difficult even for specialist mathematicians to grasp. To oversimplify, the fundamental lemma of the project posits a direct connection between the generalized fundamental representation of a finite field with its group extension to the automorphic forms under which it is invariant. This is accomplished through abstraction to higher dimensional integration, by an equivalence to a certain analytical group as an absolute extension of its algebra. Consequently, this allows an analytical functional construction of powerful invariance transformations for a number field to its own algebraic structure. The meaning of such a construction is nuanced, but its specific solutions and generalizations are very powerful. The consequence for proof of existence to such theoretical objects implies an analytical method in constructing the categoric mapping of fundamental structures for virtually any number field. As an analogue to the possible exact distribution of primes, the Langlands program allows a potential general tool for the resolution of invariance at the level of generalized algebraic structures. This in turn permits a somewhat unified analysis of arithmetic objects through their automorphic functions. Simply put, the Langlands philosophy allows a general analysis of structuring the abstractions of numbers.
Naturally, this description is at once a reduction and over-generalization of the program's proper theorems, but these mathematical analogues provide the basis of its conceptualization. Background In a very broad context, the program built on existing ideas: the philosophy of cusp forms formulated a few years earlier by Harish-Chandra and Gelfand (1963), the work and approach of Harish-Chandra on semisimple Lie groups, and in technical terms the trace formula of Selberg and others. What initially was very new in Langlands' work, besides technical depth, was the proposed direct connection to number theory, together with the rich organisational structure hypothesised (so-called functoriality). For example, in the work of Harish-Chandra one finds the principle that what can be done for one semisimple (or reductive) Lie group, should be done for all. Therefore, once the role of some low-dimensional Lie groups such as GL(2) in the theory of modular forms had been recognised, and with hindsight GL(1) in class field theory, the way was open a
https://en.wikipedia.org/wiki/Root%20of%20unity
In mathematics, a root of unity, occasionally called a de Moivre number, is any complex number that yields 1 when raised to some positive integer power n. Roots of unity are used in many branches of mathematics, and are especially important in number theory, the theory of group characters, and the discrete Fourier transform. Roots of unity can be defined in any field. If the characteristic of the field is zero, the roots are complex numbers that are also algebraic integers. For fields with a positive characteristic, the roots belong to a finite field, and, conversely, every nonzero element of a finite field is a root of unity. Any algebraically closed field contains exactly n nth roots of unity, except when n is a multiple of the (positive) characteristic of the field. General definition An nth root of unity, where n is a positive integer, is a number z satisfying the equation z^n = 1. Unless otherwise specified, the roots of unity may be taken to be complex numbers (including the number 1, and the number −1 if n is even, which are complex with a zero imaginary part), and in this case, the nth roots of unity are e^(2kπi/n) = cos(2kπ/n) + i sin(2kπ/n), for k = 0, 1, ..., n − 1. However, the defining equation of roots of unity is meaningful over any field (and even over any ring) F, and this allows considering roots of unity in F. Whichever is the field F, the roots of unity in F are either complex numbers, if the characteristic of F is 0, or, otherwise, belong to a finite field. Conversely, every nonzero element in a finite field is a root of unity in that field. See Root of unity modulo n and Finite field for further details. An nth root of unity is said to be primitive if it is not an mth root of unity for some smaller m, that is if z^n = 1 and z^m ≠ 1 for m = 1, 2, ..., n − 1. If n is a prime number, then all nth roots of unity, except 1, are primitive. In the above formula in terms of exponential and trigonometric functions, the primitive nth roots of unity are those for which k and n are coprime integers. Subsequent sections of this article will deal with complex roots of unity. 
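As a concrete illustration of the formulas above, a short Python sketch (function names are our own) enumerates the complex nth roots of unity and picks out the primitive ones via the coprimality test:

```python
import cmath
from math import gcd

def roots_of_unity(n):
    # all n complex nth roots of unity: e^(2*pi*i*k/n), k = 0..n-1
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

def primitive_roots_of_unity(n):
    # primitive nth roots: e^(2*pi*i*k/n) with gcd(k, n) = 1
    return [cmath.exp(2j * cmath.pi * k / n)
            for k in range(n) if gcd(k, n) == 1]
```

For example, there are six 6th roots of unity, but only two of them (those with k = 1, 5) are primitive, matching φ(6) = 2.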
For the case of roots of unity in fields of nonzero characteristic, see . For the case of roots of unity in rings of modular integers, see Root of unity modulo n. Elementary properties Every nth root of unity z is a primitive dth root of unity for some d ≤ n, which is the smallest positive integer such that z^d = 1. Any integer power of an nth root of unity is also an nth root of unity, as (z^k)^n = z^(kn) = (z^n)^k = 1^k = 1. This is also true for negative exponents. In particular, the reciprocal of an nth root of unity is its complex conjugate, and is also an nth root of unity: 1/z = z^(−1) = z^(n−1) = z̄. If z is an nth root of unity and a ≡ b (mod n), then z^a = z^b. Indeed, by the definition of congruence modulo n, a = b + kn for some integer k, and hence z^a = z^(b+kn) = z^b (z^n)^k = z^b. Therefore, given a power z^a of z, one has z^a = z^r, where 0 ≤ r < n is the remainder of the Euclidean division of a by n. Let z be a primitive nth root of unity. Then the powers z, z², ..., z^(n−1), z^n = z⁰ = 1 are nth roots of unity and are all distinct. (If z^a = z^b where 1 ≤ a < b ≤ n, then z^(b−a) = 1, which would imply that z would not be primitive.) This implies that z, z², ..., z^(n−1), z^n = 1 are all of the nth roots of unity, since an nth-degree po
https://en.wikipedia.org/wiki/Cyclotomic%20polynomial
In mathematics, the nth cyclotomic polynomial, for any positive integer n, is the unique irreducible polynomial with integer coefficients that is a divisor of x^n − 1 and is not a divisor of x^k − 1 for any k < n. Its roots are all nth primitive roots of unity e^(2iπk/n), where k runs over the positive integers not greater than n and coprime to n (and i is the imaginary unit). In other words, the nth cyclotomic polynomial is equal to Φ_n(x) = ∏ (x − e^(2iπk/n)), the product taken over 1 ≤ k ≤ n with gcd(k, n) = 1. It may also be defined as the monic polynomial with integer coefficients that is the minimal polynomial over the field of the rational numbers of any primitive nth root of unity (e^(2iπ/n) is an example of such a root). An important relation linking cyclotomic polynomials and primitive roots of unity is x^n − 1 = ∏ over d dividing n of Φ_d(x), showing that x is a root of x^n − 1 if and only if it is a dth primitive root of unity for some d that divides n. Examples If n is a prime number, then Φ_n(x) = 1 + x + x² + ⋯ + x^(n−1). If n = 2p where p is an odd prime number, then Φ_2p(x) = 1 − x + x² − ⋯ + x^(p−1). For n up to 30, the cyclotomic polynomials are: The case of the 105th cyclotomic polynomial is interesting because 105 is the least positive integer that is the product of three distinct odd prime numbers (3·5·7) and this polynomial is the first one that has a coefficient other than 1, 0, or −1: two of its coefficients are equal to −2. Properties Fundamental tools The cyclotomic polynomials are monic polynomials with integer coefficients that are irreducible over the field of the rational numbers. Except for n equal to 1 or 2, they are palindromes of even degree. The degree of Φ_n, or in other words the number of nth primitive roots of unity, is φ(n), where φ is Euler's totient function. The fact that Φ_n is an irreducible polynomial of degree φ(n) in the ring Z[x] is a nontrivial result due to Gauss. Depending on the chosen definition, it is either the value of the degree or the irreducibility which is a nontrivial result. The case of prime n is easier to prove than the general case, thanks to Eisenstein's criterion. 
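The defining divisibility relation can be turned into a small computation. The sketch below, with illustrative names, builds Φ_n by exact polynomial division of x^n − 1 by the cyclotomic polynomials of the proper divisors of n; it reproduces the famous −2 coefficients of Φ₁₀₅:

```python
from functools import lru_cache

def exact_div(num, den):
    # Exact division of integer polynomials; coefficient lists are
    # ordered lowest degree first, e.g. x^2 - 1 -> [-1, 0, 1].
    num = num[:]
    quot = [0] * (len(num) - len(den) + 1)
    for i in reversed(range(len(quot))):
        c = num[i + len(den) - 1] // den[-1]
        quot[i] = c
        for j, d in enumerate(den):
            num[i + j] -= c * d
    return quot

@lru_cache(maxsize=None)
def cyclotomic(n):
    # Phi_n as a coefficient list: divide x^n - 1 by Phi_d for every
    # proper divisor d of n (Phi_1 = x - 1 starts the recursion).
    poly = [-1] + [0] * (n - 1) + [1]          # x^n - 1
    for d in range(1, n):
        if n % d == 0:
            poly = exact_div(poly, cyclotomic(d))
    return poly
```

For instance, `cyclotomic(6)` yields `[1, -1, 1]`, i.e. x² − x + 1, and `cyclotomic(105)` has 49 coefficients (degree φ(105) = 48), the smallest of which is −2.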
A fundamental relation involving cyclotomic polynomials is x^n − 1 = ∏ over d dividing n of Φ_d(x), which means that each nth root of unity is a primitive dth root of unity for a unique d dividing n. The Möbius inversion formula allows the expression of Φ_n(x) as an explicit rational fraction: Φ_n(x) = ∏ over d dividing n of (x^d − 1)^μ(n/d), where μ is the Möbius function. The cyclotomic polynomial Φ_n(x) may be computed by (exactly) dividing x^n − 1 by the cyclotomic polynomials of the proper divisors of n previously computed recursively by the same method: Φ_n(x) = (x^n − 1) / ∏ over d dividing n, d < n of Φ_d(x). (Recall that Φ_1(x) = x − 1.) This formula defines an algorithm for computing Φ_n(x) for any n, provided integer factorization and division of polynomials are available. Many computer algebra systems, such as SageMath, Maple, Mathematica, and PARI/GP, have a built-in function to compute the cyclotomic polynomials. Easy cases for computation As noted above, if n is a prime number, then Φ_n(x) = 1 + x + x² + ⋯ + x^(n−1). If n is an odd integer greater than one, then Φ_2n(x) = Φ_n(−x). In particular, if n = 2p is twice an odd prime, then (as noted above) Φ_2p(x) = 1 − x + x² − ⋯ + x^(p−1). If n = p^m is a prime power (where p is prime), then Φ_(p^m)(x) = Φ_p(x^(p^(m−1))). More generally, if n = p^m·r with r relatively prime to p, then Φ_n(x) = Φ_(p·r)(x^(p^(m−1))). These formulas may be applied repeatedly to get a simple expression
https://en.wikipedia.org/wiki/Partially%20ordered%20group
In abstract algebra, a partially ordered group is a group (G, +) equipped with a partial order "≤" that is translation-invariant; in other words, "≤" has the property that, for all a, b, and g in G, if a ≤ b then a + g ≤ b + g and g + a ≤ g + b. An element x of G is called positive if 0 ≤ x. The set of elements 0 ≤ x is often denoted with G+, and is called the positive cone of G. By translation invariance, we have a ≤ b if and only if 0 ≤ -a + b. So we can reduce the partial order to a monadic property: a ≤ b if and only if -a + b ∈ G+. For the general group G, the existence of a positive cone specifies an order on G. A group G is a partially orderable group if and only if there exists a subset H (which is G+) of G such that: 0 ∈ H if a ∈ H and b ∈ H then a + b ∈ H if a ∈ H then -x + a + x ∈ H for each x of G if a ∈ H and -a ∈ H then a = 0 A partially ordered group G with positive cone G+ is said to be unperforated if n · g ∈ G+ for some positive integer n implies g ∈ G+. Being unperforated means there is no "gap" in the positive cone G+. If the order on the group is a linear order, then it is said to be a linearly ordered group. If the order on the group is a lattice order, i.e. any two elements have a least upper bound, then it is a lattice-ordered group (an l-group for short, though usually typeset with a script l: ℓ-group). A Riesz group is an unperforated partially ordered group with a property slightly weaker than being a lattice-ordered group. Namely, a Riesz group satisfies the Riesz interpolation property: if x1, x2, y1, y2 are elements of G and xi ≤ yj, then there exists z ∈ G such that xi ≤ z ≤ yj. If G and H are two partially ordered groups, a map from G to H is a morphism of partially ordered groups if it is both a group homomorphism and a monotonic function. The partially ordered groups, together with this notion of morphism, form a category. Partially ordered groups are used in the definition of valuations of fields. 
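The reduction of the partial order to a positive cone can be illustrated on Zⁿ with the componentwise order; the helper names below are our own:

```python
def in_positive_cone(g):
    # positive cone of Z^n under the componentwise order:
    # a tuple is "positive" exactly when every coordinate is >= 0
    return all(x >= 0 for x in g)

def leq(a, b):
    # translation invariance reduces the order to the cone:
    # a <= b  iff  -a + b lies in the positive cone
    return in_positive_cone(tuple(y - x for x, y in zip(a, b)))
```

Here `leq((0, 1), (2, 3))` holds because (2, 3) − (0, 1) = (2, 2) is in the cone, while (1, 0) and (0, 1) are incomparable in either direction.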
Examples The integers with their usual order An ordered vector space is a partially ordered group A Riesz space is a lattice-ordered group A typical example of a partially ordered group is Zn, where the group operation is componentwise addition, and we write (a1,...,an) ≤ (b1,...,bn) if and only if ai ≤ bi (in the usual order of integers) for all i = 1,..., n. More generally, if G is a partially ordered group and X is some set, then the set of all functions from X to G is again a partially ordered group: all operations are performed componentwise. Furthermore, every subgroup of G is a partially ordered group: it inherits the order from G. If A is an approximately finite-dimensional C*-algebra, or more generally, if A is a stably finite unital C*-algebra, then K0(A) is a partially ordered abelian group. (Elliott, 1976) Properties Archimedean The Archimedean property of the real numbers can be generalized to partially ordered groups. Property: A partially ordered group is called Archimedean when for any , if and for a
https://en.wikipedia.org/wiki/Logit
In statistics, the logit function is the quantile function associated with the standard logistic distribution. It has many uses in data analysis and machine learning, especially in data transformations. Mathematically, the logit is the inverse of the standard logistic function σ(x) = 1/(1 + e^(−x)), so the logit is defined as logit(p) = σ^(−1)(p) = ln(p/(1 − p)) for p in the open interval (0, 1). Because of this, the logit is also called the log-odds, since it is equal to the logarithm of the odds p/(1 − p), where p is a probability. Thus, the logit is a type of function that maps probability values from (0, 1) to real numbers in (−∞, +∞), akin to the probit function. Definition If p is a probability, then p/(1 − p) is the corresponding odds; the logit of the probability is the logarithm of the odds, i.e.: logit(p) = ln(p/(1 − p)) = ln(p) − ln(1 − p). The base of the logarithm function used is of little importance in the present article, as long as it is greater than 1, but the natural logarithm with base e is the one most often used. The choice of base corresponds to the choice of logarithmic unit for the value: base 2 corresponds to a shannon, base e to a "nat", and base 10 to a hartley; these units are particularly used in information-theoretic interpretations. For each choice of base, the logit function takes values between negative and positive infinity. The "logistic" function of any number x is given by the inverse-logit: logit^(−1)(x) = σ(x) = 1/(1 + e^(−x)) = e^x/(1 + e^x). The difference between the logits of two probabilities is the logarithm of the odds ratio R, thus providing a shorthand for writing the correct combination of odds ratios only by adding and subtracting: ln(R) = logit(p₁) − logit(p₂) = ln(p₁/(1 − p₁)) − ln(p₂/(1 − p₂)). History There have been several efforts to adapt linear regression methods to a domain where the output is a probability value in (0, 1), instead of any real number. In many cases, such efforts have focused on modeling this problem by mapping the range (0, 1) to (−∞, +∞) and then running the linear regression on these transformed values. In 1934 Chester Ittner Bliss used the cumulative normal distribution function to perform this mapping and called his model probit, an abbreviation for "probability unit". However, this is computationally more expensive. 
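A minimal numerical sketch of the logit/logistic pair (hypothetical function names, natural-log base):

```python
import math

def logit(p):
    # log-odds of a probability p in (0, 1)
    return math.log(p / (1 - p))

def logistic(x):
    # standard logistic function, the inverse of logit
    return 1 / (1 + math.exp(-x))
```

Here logit(0.5) = 0, the two functions invert one another, and logit(0.75) − logit(0.5) = ln 3, the log of the odds ratio (3:1 versus 1:1).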
In 1944, Joseph Berkson used log of odds and called this function logit, an abbreviation for "logistic unit", following the analogy for probit. Log odds was used extensively by Charles Sanders Peirce (late 19th century). G. A. Barnard in 1949 coined the commonly used term log-odds; the log-odds of an event is the logit of the probability of the event. Barnard also coined the term lods as an abstract form of "log-odds", but suggested that "in practice the term 'odds' should normally be used, since this is more familiar in everyday life". Uses and properties The logit in logistic regression is a special case of a link function in a generalized linear model: it is the canonical link function for the Bernoulli distribution. The logit function is the negative of the derivative of the binary entropy function. The logit is also central to the probabilistic Rasch model for measurement, which has applications in psychological and educational assessment, among other areas. The i
https://en.wikipedia.org/wiki/Odds
In probability theory, odds provide a measure of the likelihood of a particular outcome. They are calculated as the ratio of the number of events that produce that outcome to the number that do not. Odds are commonly used in gambling and statistics. Odds also have a simple relation with probability: the odds of an outcome are the ratio of the probability that the outcome occurs to the probability that the outcome does not occur. In mathematical terms, where p is the probability of the outcome: odds = p/(1 − p), where 1 − p is the probability that the outcome does not occur. Odds can be demonstrated by examining rolling a six-sided die. The odds of rolling a 6 is 1 to 5 (abbreviated 1:5). This is because there is 1 event (rolling a 6) that produces the specified outcome of "rolling a 6", and 5 events that do not (rolling a 1, 2, 3, 4 or 5). The odds of rolling either a 5 or 6 is 2:4. This is because there are 2 events (rolling a 5 or 6) that produce the specified outcome of "rolling either a 5 or 6", and 4 events that do not (rolling a 1, 2, 3 or 4). The odds of not rolling a 5 or 6 is the inverse 4:2. This is because there are 4 events that produce the specified outcome of "not rolling a 5 or 6" (rolling a 1, 2, 3 or 4) and two that do not (rolling a 5 or 6). The probability of an event is different, but related, and can be calculated from the odds, and vice versa. The probability of rolling a 5 or 6 is the fraction of the number of events over total events or 2/(2+4), which is 1/3, 0.33 or 33%. When gambling, odds are often the ratio of winnings to the stake and you also get your wager returned. So wagering 1 at 1:5 pays out 6 (5 + 1). If you make 6 wagers of 1, and win once and lose 5 times, you will be paid 6 and finish square. Wagering 1 at 1:1 (Evens) pays out 2 (1 + 1) and wagering 1 at 1:2 pays out 3 (1 + 2). These examples may be displayed in different forms, explained later: Fractional odds with a slash: 5 (5/1 against), 1/1 (Evens), 1/2 (on) (short priced horse). 
Fractional odds can also be written with a colon or a hyphen or dash. Tote boards use decimal or Continental odds (the ratio of total paid out to stake), e.g. 6.0, 2.0, 1.5 In the US Moneyline a positive number lists winnings per $100 wager; a negative number the amount to wager in order to win $100 on a short-priced horse: 500, 100/–100, –200. History The language of odds, such as the use of phrases like "ten to one" for intuitively estimated risks, is found in the sixteenth century, well before the development of probability theory. Shakespeare wrote: The sixteenth-century polymath Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes. Implied by this definition is the fact that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes. Statistical usage In statistics, odds are an expression of relative probabilities, generally quoted as the odds in favor. The odds (in favor) of
https://en.wikipedia.org/wiki/Faltings%27s%20theorem
Faltings's theorem is a result in arithmetic geometry, according to which a curve of genus greater than 1 over the field Q of rational numbers has only finitely many rational points. This was conjectured in 1922 by Louis Mordell, and known as the Mordell conjecture until its 1983 proof by Gerd Faltings. The conjecture was later generalized by replacing Q by any number field. Background Let C be a non-singular algebraic curve of genus g over Q. Then the set of rational points on C may be determined as follows: When g = 0, there are either no points or infinitely many. In such cases, C may be handled as a conic section. When g = 1, if there are any points, then C is an elliptic curve and its rational points form a finitely generated abelian group. (This is Mordell's Theorem, later generalized to the Mordell–Weil theorem.) Moreover, Mazur's torsion theorem restricts the structure of the torsion subgroup. When g > 1, according to Faltings's theorem, C has only a finite number of rational points. Proofs Igor Shafarevich conjectured that there are only finitely many isomorphism classes of abelian varieties of fixed dimension and fixed polarization degree over a fixed number field with good reduction outside a fixed finite set of places. Aleksei Parshin showed that Shafarevich's finiteness conjecture would imply the Mordell conjecture, using what is now called Parshin's trick. Gerd Faltings proved Shafarevich's finiteness conjecture using a known reduction to a case of the Tate conjecture, together with tools from algebraic geometry, including the theory of Néron models. The main idea of Faltings's proof is the comparison of Faltings heights and naive heights via Siegel modular varieties. Later proofs Paul Vojta gave a proof based on diophantine approximation. Enrico Bombieri found a more elementary variant of Vojta's proof. Brian Lawrence and Akshay Venkatesh gave a proof based on p-adic Hodge theory, borrowing also some of the easier ingredients of Faltings's original proof. 
Consequences Faltings's 1983 paper had as consequences a number of statements which had previously been conjectured: The Mordell conjecture that a curve of genus greater than 1 over a number field has only finitely many rational points; The Isogeny theorem that abelian varieties with isomorphic Tate modules (as Q_ℓ-modules with Galois action) are isogenous. A sample application of Faltings's theorem is to a weak form of Fermat's Last Theorem: for any fixed n ≥ 4 there are at most finitely many primitive integer solutions (pairwise coprime solutions) to a^n + b^n = c^n, since for such n the Fermat curve x^n + y^n = 1 has genus (n − 1)(n − 2)/2, which is greater than 1. Generalizations Because of the Mordell–Weil theorem, Faltings's theorem can be reformulated as a statement about the intersection of a curve C with a finitely generated subgroup Γ of an abelian variety A. Generalizing by replacing A by a semiabelian variety, C by an arbitrary subvariety of A, and Γ by an arbitrary finite-rank subgroup of A leads to the Mordell–Lang conjecture, which was proved in
https://en.wikipedia.org/wiki/Chinese%20postman%20problem
In graph theory, a branch of mathematics and computer science, Guan's route problem, the Chinese postman problem, postman tour or route inspection problem is to find a shortest closed path or circuit that visits every edge of a connected undirected graph at least once. When the graph has an Eulerian circuit (a closed walk that covers every edge once), that circuit is an optimal solution. Otherwise, the optimization problem is to find the smallest number of graph edges to duplicate (or the subset of edges with the minimum possible total weight) so that the resulting multigraph does have an Eulerian circuit. It can be solved in polynomial time. The problem was originally studied by the Chinese mathematician Kwan Mei-Ko in 1960, whose Chinese paper was translated into English in 1962. The original name "Chinese postman problem" was coined in his honor; different sources credit the coinage either to Alan J. Goldman or Jack Edmonds, both of whom were at the U.S. National Bureau of Standards at the time. A generalization is to choose any set T containing an even number of vertices that are to be joined by an edge set in the graph whose odd-degree vertices are precisely those of T. Such a set is called a T-join. This problem, the T-join problem, is also solvable in polynomial time by the same approach that solves the postman problem. Undirected solution and T-joins The undirected route inspection problem can be solved in polynomial time by an algorithm based on the concept of a T-join. Let T be a set of vertices in a graph. An edge set J is called a T-join if the collection of vertices that have an odd number of incident edges in J is exactly the set T. A T-join exists whenever every connected component of the graph contains an even number of vertices in T. The T-join problem is to find a T-join with the minimum possible number of edges or the minimum possible total weight. 
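The T-join recipe can be sketched for small graphs: sum all edge weights, then add the cost of a minimum-weight pairing of the odd-degree vertices under shortest-path distances. The brute-force matching below is exponential in the number of odd vertices, so it is only a toy illustration (names are our own):

```python
import heapq
from collections import defaultdict

def shortest_paths(adj, src):
    # Dijkstra from src over an adjacency list {u: [(v, w), ...]}
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def postman_cost(edges):
    # edges: (u, v, weight) triples of a connected undirected graph.
    # Returns the length of a shortest closed walk using every edge.
    adj, degree, total = defaultdict(list), defaultdict(int), 0
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
        degree[u] += 1
        degree[v] += 1
        total += w
    odd = sorted(v for v in degree if degree[v] % 2 == 1)
    dist = {v: shortest_paths(adj, v) for v in odd}

    def match(verts):
        # brute-force minimum-weight perfect matching on odd vertices
        if not verts:
            return 0
        u, rest = verts[0], verts[1:]
        return min(dist[u][v] + match([x for x in rest if x != v])
                   for v in rest)

    return total + match(odd)
```

On a triangle of unit edges (all degrees even) the tour costs 3; on a two-edge path a–b–c the two odd endpoints are paired at distance 2, giving tour cost 2 + 2 = 4. A polynomial-time version would replace `match` with Edmonds's blossom matching algorithm.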
For any T, a smallest T-join (when it exists) necessarily consists of paths that join the vertices of T in pairs. The paths will be such that the total length or total weight of all of them is as small as possible. In an optimal solution, no two of these paths will share any edge, but they may have shared vertices. A minimum T-join can be obtained by constructing a complete graph on the vertices of T, with edges that represent shortest paths in the given input graph, and then finding a minimum weight perfect matching in this complete graph. The edges of this matching represent paths in the original graph, whose union forms the desired T-join. Both constructing the complete graph, and then finding a matching in it, can be done in O(n³) computational steps. For the route inspection problem, T should be chosen as the set of all odd-degree vertices. By the assumptions of the problem, the whole graph is connected (otherwise no tour exists), and by the handshaking lemma it has an even number of odd vertices, so a T-join always exists. Doubling the edges of a T-join causes the given graph t
https://en.wikipedia.org/wiki/Riemann%20surface
In mathematics, particularly in complex analysis, a Riemann surface is a one-dimensional complex manifold. Loosely speaking, this means that any Riemann surface is formed by gluing together open subsets of the complex plane C using holomorphic gluing maps. Examples of Riemann surfaces include graphs of multivalued functions like √z or log(z), e.g. the subset of pairs (z,w) ∈ C2 with w = log(z). Every Riemann surface is a surface: a two-dimensional real manifold, but it contains more structure (specifically a complex structure). Conversely, a two-dimensional real manifold can be turned into a Riemann surface (usually in several inequivalent ways) if and only if it is orientable and metrizable. So the sphere and torus admit complex structures, but the Möbius strip, Klein bottle and real projective plane do not. Every compact Riemann surface is a complex algebraic curve by Chow's theorem and the Riemann–Roch theorem. Riemann surfaces were first studied by, and are named after, Bernhard Riemann. Definitions There are several equivalent definitions of a Riemann surface. A Riemann surface X is a complex manifold of complex dimension one. This means that X is a connected Hausdorff space that is endowed with an atlas of charts to the open unit disk of the complex plane: for every point x ∈ X there is a neighbourhood of x that is homeomorphic to the open unit disk of the complex plane, and the transition maps between two overlapping charts are required to be holomorphic. A Riemann surface is an oriented manifold of (real) dimension two – a two-sided surface – together with a conformal structure. Again, manifold means that locally at any point x of X, the space is homeomorphic to a subset of the real plane. The supplement "Riemann" signifies that X is endowed with an additional structure which allows angle measurement on the manifold, namely an equivalence class of so-called Riemannian metrics. Two such metrics are considered equivalent if the angles they measure are the same. 
Choosing an equivalence class of metrics on X is the additional datum of the conformal structure. A complex structure gives rise to a conformal structure by choosing the standard Euclidean metric given on the complex plane and transporting it to X by means of the charts. Showing that a conformal structure determines a complex structure is more difficult. Examples Algebraic curves Further definitions and properties As with any map between complex manifolds, a function f: M → N between two Riemann surfaces M and N is called holomorphic if for every chart g in the atlas of M and every chart h in the atlas of N, the map h ∘ f ∘ g−1 is holomorphic (as a function from C to C) wherever it is defined. The composition of two holomorphic maps is holomorphic. The two Riemann surfaces M and N are called biholomorphic (or conformally equivalent to emphasize the conformal point of view) if there exists a bijective holomorphic function from M to N whose inverse is also holomorphi
https://en.wikipedia.org/wiki/Normal%20%28geometry%29
In geometry, a normal is an object (e.g. a line, ray, or vector) that is perpendicular to a given object. For example, the normal line to a plane curve at a given point is the (infinite) line perpendicular to the tangent line to the curve at the point. A normal vector may have length one (in which case it is a unit normal vector) or its length may represent the curvature of the object. Multiplying a normal vector by −1 results in the opposite vector, which may be used for indicating sides (e.g., interior or exterior). In three-dimensional space, a surface normal, or simply normal, to a surface at a point P is a vector perpendicular to the tangent plane of the surface at P. The word normal is also used as an adjective: a line normal to a plane, the normal component of a force, the normal vector, etc. The concept of normality generalizes to orthogonality (right angles). The concept has been generalized to differentiable manifolds of arbitrary dimension embedded in a Euclidean space. The normal vector space or normal space of a manifold at a point P is the set of vectors which are orthogonal to the tangent space at P. Normal vectors are of special interest in the case of smooth curves and smooth surfaces. The normal is often used in 3D computer graphics (notice the singular, as only one normal will be defined) to determine a surface's orientation toward a light source for flat shading, or the orientation of each of the surface's corners (vertices) to mimic a curved surface with Phong shading. The foot of a normal at a point of interest Q (analogous to the foot of a perpendicular) can be defined at the point P on the surface where the normal vector contains Q. The normal distance of a point Q to a curve or to a surface is the Euclidean distance between Q and its foot P. Normal to surfaces in 3D space Calculating a surface normal For a convex polygon (such as a triangle), a surface normal can be calculated as the vector cross product of two (non-parallel) edges of the polygon. 
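The cross-product recipe for a polygon's normal can be sketched in a few lines of Python (illustrative names):

```python
def cross(u, v):
    # cross product of two 3D vectors
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def triangle_normal(p0, p1, p2):
    # unit normal of the plane through three points, taken as the
    # cross product of two (non-parallel) edge vectors
    e1 = tuple(b - a for a, b in zip(p0, p1))
    e2 = tuple(b - a for a, b in zip(p0, p2))
    n = cross(e1, e2)
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)
```

For a triangle lying in the xy-plane with counterclockwise vertex order, this yields the upward unit normal (0, 0, 1); swapping two vertices flips its sign.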
For a plane given by the equation ax + by + cz + d = 0, the vector n = (a, b, c) is a normal. For a plane whose equation is given in parametric form r(s, t) = p₀ + s·u + t·v, where p₀ is a point on the plane and u, v are non-parallel vectors pointing along the plane, a normal to the plane is a vector normal to both u and v, which can be found as the cross product n = u × v. If a (possibly non-flat) surface S in 3D space is parameterized by a system of curvilinear coordinates r(s, t) = (x(s, t), y(s, t), z(s, t)), with s and t real variables, then a normal to S is by definition a normal to a tangent plane, given by the cross product of the partial derivatives n = ∂r/∂s × ∂r/∂t. If a surface S is given implicitly as the set of points (x, y, z) satisfying F(x, y, z) = 0, then a normal at a point (x, y, z) on the surface is given by the gradient n = ∇F(x, y, z), since the gradient at any point is perpendicular to the level set S. For a surface S in R³ given as the graph of a function z = f(x, y), an upward-pointing normal can be found either from the parametrization r(x, y) = (x, y, f(x, y)), giving n = ∂r/∂x × ∂r/∂y = (−∂f/∂x, −∂f/∂y, 1), or more simply from its implicit form F(x, y, z) = z − f(x, y), giving n = ∇F = (−∂f/∂x, −∂f/∂y, 1). Since a surface does not have a tangent plane at a
https://en.wikipedia.org/wiki/Equilateral%20triangle
In geometry, an equilateral triangle is a triangle in which all three sides have the same length. In the familiar Euclidean geometry, an equilateral triangle is also equiangular; that is, all three internal angles are also congruent to each other and are each 60°. It is also a regular polygon, so it is also referred to as a regular triangle. Principal properties Denoting the common length of the sides of the equilateral triangle as a, we can determine using the Pythagorean theorem that: The area is A = (√3/4)a². The perimeter is p = 3a. The radius of the circumscribed circle is R = a/√3. The radius of the inscribed circle is r = a/(2√3) or r = (√3/6)a. The geometric center of the triangle is the center of the circumscribed and inscribed circles. The altitude (height) from any side is h = (√3/2)a. Denoting the radius of the circumscribed circle as R, we can determine using trigonometry that: The area of the triangle is A = (3√3/4)R². Many of these quantities have simple relationships to the altitude h of each vertex from the opposite side: The area is A = h²/√3. The height of the center from each side, or apothem, is h/3. The radius of the circle circumscribing the three vertices is R = 2h/3. The radius of the inscribed circle is r = h/3. In an equilateral triangle, the altitudes, the angle bisectors, the perpendicular bisectors, and the medians to each side coincide. Characterizations A triangle that has the sides a, b, c, semiperimeter s, area A, exradii rₐ, r_b, r_c (tangent to a, b, c respectively), and where R and r are the radii of the circumcircle and incircle respectively, is equilateral if and only if any one of the statements in the following nine categories is true. Thus these are properties that are unique to equilateral triangles, and knowing that any one of them is true directly implies that we have an equilateral triangle. Sides Semiperimeter (Blundon) Angles Area (Weitzenböck) Circumradius, inradius, and exradii (Chapple-Euler) Equal cevians Three kinds of cevians coincide, and are equal, for (and only for) equilateral triangles: The three altitudes have equal lengths. 
The three medians have equal lengths. The three angle bisectors have equal lengths. Coincident triangle centers Every triangle center of an equilateral triangle coincides with its centroid, which implies that the equilateral triangle is the only triangle with no Euler line connecting some of the centers. For some pairs of triangle centers, the fact that they coincide is enough to ensure that the triangle is equilateral. In particular: A triangle is equilateral if any two of the circumcenter, incenter, centroid, or orthocenter coincide. It is also equilateral if its circumcenter coincides with the Nagel point, or if its incenter coincides with its nine-point center. Six triangles formed by partitioning by the medians For any triangle, the three medians partition the triangle into six smaller triangles. A triangle is equilateral if and only if any three of the smaller triangles have either the same perimeter or the same inradius. A triangle is equilateral if and only if t
https://en.wikipedia.org/wiki/Mathematical%20physics
Mathematical physics refers to the development of mathematical methods for application to problems in physics. The Journal of Mathematical Physics defines the field as "the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of physical theories". An alternative definition would also include those mathematics that are inspired by physics (also known as physical mathematics). Scope There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods. Classical mechanics The rigorous, abstract and advanced reformulation of Newtonian mechanics adopting Lagrangian mechanics and Hamiltonian mechanics, even in the presence of constraints. Both formulations are embodied in analytical mechanics and lead to an understanding of the deep interplay of the notions of symmetry and conserved quantities during the dynamical evolution, as embodied within the most elementary formulation of Noether's theorem. These approaches and ideas have been extended to other areas of physics, such as statistical mechanics, continuum mechanics, classical field theory and quantum field theory. Moreover, they have provided several examples and ideas in differential geometry (e.g. several notions in symplectic geometry and vector bundles). Partial differential equations Within mathematics, the theory of partial differential equations, variational calculus, Fourier analysis, potential theory, and vector analysis are perhaps most closely associated with mathematical physics. These were developed intensively from the second half of the 18th century (by, for example, D'Alembert, Euler, and Lagrange) until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, continuum mechanics, elasticity theory, acoustics, thermodynamics, electricity, magnetism, and aerodynamics. 
Quantum theory The theory of atomic spectra (and, later, quantum mechanics) developed almost concurrently with some parts of the mathematical fields of linear algebra, the spectral theory of operators, operator algebras and more broadly, functional analysis. Nonrelativistic quantum mechanics includes Schrödinger operators, and it has connections to atomic and molecular physics. Quantum information theory is another subspecialty. Relativity and quantum relativistic theories The special and general theories of relativity require a rather different type of mathematics. This was group theory, which played an important role in both quantum field theory and differential geometry. This was, however, gradually supplemented by topology and functional analysis in the mathematical description of cosmological as well as quantum field theory phenomena. In the mathematical description of these physical areas, some concepts in homological algebra and category theory are also important. Statistical mechanics Statistical me
https://en.wikipedia.org/wiki/Probabilistic%20method
In mathematics, the probabilistic method is a nonconstructive method, primarily used in combinatorics and pioneered by Paul Erdős, for proving the existence of a prescribed kind of mathematical object. It works by showing that if one randomly chooses objects from a specified class, the probability that the result is of the prescribed kind is strictly greater than zero. Although the proof uses probability, the final conclusion is determined for certain, without any possible error. This method has now been applied to other areas of mathematics such as number theory, linear algebra, and real analysis, as well as in computer science (e.g. randomized rounding), and information theory. Introduction If every object in a collection of objects fails to have a certain property, then the probability that a random object chosen from the collection has that property is zero. Similarly, showing that the probability is (strictly) less than 1 can be used to prove the existence of an object that does not satisfy the prescribed properties. Another way to use the probabilistic method is by calculating the expected value of some random variable. If it can be shown that the random variable can take on a value less than the expected value, this proves that the random variable can also take on some value greater than the expected value. Alternatively, the probabilistic method can also be used to guarantee the existence of a desired element in a sample space with a value that is greater than or equal to the calculated expected value, since the non-existence of such element would imply every element in the sample space is less than the expected value, a contradiction. Common tools used in the probabilistic method include Markov's inequality, the Chernoff bound, and the Lovász local lemma. 
Two examples due to Erdős Although others before him proved theorems via the probabilistic method (for example, Szele's 1943 result that there exist tournaments containing a large number of Hamiltonian cycles), many of the most well known proofs using this method are due to Erdős. The first example below describes one such result from 1947 that gives a proof of a lower bound for the Ramsey number . First example Suppose we have a complete graph on vertices. We wish to show (for small enough values of ) that it is possible to color the edges of the graph in two colors (say red and blue) so that there is no complete subgraph on vertices which is monochromatic (every edge colored the same color). To do so, we color the graph randomly. Color each edge independently with probability of being red and of being blue. We calculate the expected number of monochromatic subgraphs on vertices as follows: For any set of vertices from our graph, define the variable to be if every edge amongst the vertices is the same color, and otherwise. Note that the number of monochromatic -subgraphs is the sum of over all possible subsets . For any individual set , the expected value of is s
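The expectation computation sketched above is easy to reproduce numerically. The following Python sketch (the function name is my own, not from any source) computes the expected number of monochromatic k-vertex subgraphs under a uniformly random 2-coloring of the complete graph on n vertices; whenever the value is below 1, some coloring has no monochromatic k-clique, which yields Erdős's lower bound on the Ramsey number.

```python
from math import comb

def expected_monochromatic(n: int, k: int) -> float:
    # Each of the C(n, k) vertex subsets spans C(k, 2) edges; all of those
    # edges receive one color with probability 2 * (1/2)**C(k, 2).  By
    # linearity of expectation, the expected number of monochromatic
    # k-subgraphs is:
    return comb(n, k) * 2.0 ** (1 - comb(k, 2))

# For n = 16, k = 8 the expectation is far below 1, so some 2-coloring of
# K_16 has no monochromatic K_8, i.e. R(8, 8) > 16.
print(expected_monochromatic(16, 8))
```

Note the argument is nonconstructive, exactly as the text says: the computation proves a good coloring exists without exhibiting one.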
https://en.wikipedia.org/wiki/Cayley%E2%80%93Hamilton%20theorem
In linear algebra, the Cayley–Hamilton theorem (named after the mathematicians Arthur Cayley and William Rowan Hamilton) states that every square matrix over a commutative ring (such as the real or complex numbers or the integers) satisfies its own characteristic equation. If is a given matrix and is the identity matrix, then the characteristic polynomial of is defined as , where is the determinant operation and is a variable for a scalar element of the base ring. Since the entries of the matrix are (linear or constant) polynomials in , the determinant is also a degree- monic polynomial in , One can create an analogous polynomial in the matrix instead of the scalar variable , defined as The Cayley–Hamilton theorem states that this polynomial expression is equal to the zero matrix, which is to say that . The theorem allows to be expressed as a linear combination of the lower matrix powers of . When the ring is a field, the Cayley–Hamilton theorem is equivalent to the statement that the minimal polynomial of a square matrix divides its characteristic polynomial. A special case of the theorem was first proved by Hamilton in 1853 in terms of inverses of linear functions of quaternions. This corresponds to the special case of certain real or complex matrices. Cayley in 1858 stated the result for and smaller matrices, but only published a proof for the case. As for matrices, Cayley stated “..., I have not thought it necessary to undertake the labor of a formal proof of the theorem in the general case of a matrix of any degree”. The general case was first proved by Ferdinand Frobenius in 1878. Examples matrices For a matrix , the characteristic polynomial is given by , and so is trivial. 
matrices As a concrete example, let Its characteristic polynomial is given by The Cayley–Hamilton theorem claims that, if we define then We can verify by computation that indeed, For a generic matrix, the characteristic polynomial is given by , so the Cayley–Hamilton theorem states that which is indeed always the case, evident by working out the entries of . Applications Determinant and inverse matrix For a general invertible matrix , i.e., one with nonzero determinant, −1 can thus be written as an order polynomial expression in : As indicated, the Cayley–Hamilton theorem amounts to the identity The coefficients are given by the elementary symmetric polynomials of the eigenvalues of . Using Newton identities, the elementary symmetric polynomials can in turn be expressed in terms of power sum symmetric polynomials of the eigenvalues: where is the trace of the matrix . Thus, we can express in terms of the trace of powers of . In general, the formula for the coefficients is given in terms of complete exponential Bell polynomials as In particular, the determinant of equals . Thus, the determinant can be written as the trace identity: Likewise, the characteristic polynomial can be written as and, by multiplying both si
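The 2 × 2 case described above can be verified directly in a few lines of Python (plain nested lists, no libraries; the function name is illustrative). For a 2 × 2 matrix the characteristic polynomial is p(t) = t² − tr(A)·t + det(A), and the theorem asserts that substituting A for t gives the zero matrix.

```python
def cayley_hamilton_2x2(A):
    # Form p(A) = A^2 - tr(A)*A + det(A)*I and return the result,
    # which the Cayley-Hamilton theorem says is the zero matrix.
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    A2 = [[a * a + b * c, a * b + b * d],
          [c * a + d * c, c * b + d * d]]
    I = [[1, 0], [0, 1]]
    return [[A2[i][j] - tr * A[i][j] + det * I[i][j] for j in range(2)]
            for i in range(2)]

print(cayley_hamilton_2x2([[1, 2], [3, 4]]))  # [[0, 0], [0, 0]]
```

Here tr = 5 and det = −2, and the three terms cancel entry by entry, as the worked example in the text shows.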
https://en.wikipedia.org/wiki/Disc
Disk or disc may refer to: Disk (mathematics), a geometric shape Disk storage Optical disc Music Disc (band), an American experimental music band Disk (album), a 1995 EP by Moby Other uses Disk (functional analysis), a subset of a vector space Disc (galaxy), a disc-shaped group of stars Disc (magazine), a British music magazine Disc harrow, a farm implement DISC assessment, a group of psychometric tests Death-inducing signaling complex Defence Intelligence and Security Centre or Joint Intelligence Training Group, the headquarters of the Defence College of Intelligence and the British Army Intelligence Corps Delaware Independent School Conference, a high-school sports conference , a Turkish trade union centre Domestic international sales corporation, a provision in U.S. tax law Dundee International Sports Centre, a sports centre in Scotland International Symposium on Distributed Computing, an academic conference Intervertebral disc, a cartilage between vertebrae Disk, a part of plant morphology See also Cylinder (disambiguation) Discus (disambiguation) Spelling of disc
https://en.wikipedia.org/wiki/Transpose
In linear algebra, the transpose of a matrix is an operator which flips a matrix over its diagonal; that is, it switches the row and column indices of the matrix by producing another matrix, often denoted by (among other notations). The transpose of a matrix was introduced in 1858 by the British mathematician Arthur Cayley. In the case of a logical matrix representing a binary relation R, the transpose corresponds to the converse relation RT. Transpose of a matrix Definition The transpose of a matrix , denoted by , , , , , , or , may be constructed by any one of the following methods: Reflect over its main diagonal (which runs from top-left to bottom-right) to obtain Write the rows of as the columns of Write the columns of as the rows of Formally, the -th row, -th column element of is the -th row, -th column element of : If is an matrix, then is an matrix. In the case of square matrices, may also denote the th power of the matrix . To avoid possible confusion, many authors use left superscripts, that is, they denote the transpose as . An advantage of this notation is that no parentheses are needed when exponents are involved: as , notation is not ambiguous. In this article this confusion is avoided by never using the symbol as a variable name. 
Matrix definitions involving transposition A square matrix whose transpose is equal to itself is called a symmetric matrix; that is, is symmetric if A square matrix whose transpose is equal to its negative is called a skew-symmetric matrix; that is, is skew-symmetric if A square complex matrix whose transpose is equal to the matrix with every entry replaced by its complex conjugate (denoted here with an overline) is called a Hermitian matrix (equivalent to the matrix being equal to its conjugate transpose); that is, is Hermitian if A square complex matrix whose transpose is equal to the negation of its complex conjugate is called a skew-Hermitian matrix; that is, is skew-Hermitian if A square matrix whose transpose is equal to its inverse is called an orthogonal matrix; that is, is orthogonal if A square complex matrix whose transpose is equal to its conjugate inverse is called a unitary matrix; that is, is unitary if Examples Properties Let and be matrices and be a scalar. Products If is an matrix and is its transpose, then the result of matrix multiplication with these two matrices gives two square matrices: is and is . Furthermore, these products are symmetric matrices. Indeed, the matrix product has entries that are the inner product of a row of with a column of . But the columns of are the rows of , so the entry corresponds to the inner product of two rows of . If is the entry of the product, it is obtained from rows and in . The entry is also obtained from these rows, thus , and the product matrix () is symmetric. Similarly, the product is a symmetric matrix. A quick proof of the symmetry of results from the fact that it is it
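The symmetry of the two products described above is easy to confirm with a small example. A minimal Python sketch (helper names are mine), using a 2 × 3 matrix represented as nested lists:

```python
def transpose(A):
    # Rows of A become columns of the result.
    return [list(row) for row in zip(*A)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3], [4, 5, 6]]          # a 2x3 matrix
G = mat_mul(transpose(A), A)        # the 3x3 product A^T A

print(G == transpose(G))            # True: A^T A is symmetric
print(transpose(transpose(A)) == A) # True: transposition is an involution
```

The first check illustrates the product property from the text; the second, that applying the transpose twice recovers the original matrix.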
https://en.wikipedia.org/wiki/Cyril%20Burt
Sir Cyril Lodowic Burt, FBA (3 March 1883 – 10 October 1971) was an English educational psychologist and geneticist who also made contributions to statistics. He is known for his studies on the heritability of IQ. Shortly after he died, his studies of inheritance of intelligence were discredited after evidence emerged indicating he had falsified research data, inventing correlations in separated twins which did not exist, alongside other fabrications. Childhood and education Burt was born on 3 March 1883, the first child of Cyril Cecil Barrow Burt (b. 1857), a medical practitioner, and his wife, Martha Decina Evans. He was born in London (some sources give his place of birth as Stratford-upon-Avon, probably because his entry in Who's Who gave his father's address as Snitterfield, Stratford; in fact the Burt family moved to Snitterfield when he was ten). Burt's father initially kept a chemist shop to support his family while he studied medicine. On qualifying, he became the assistant house surgeon and obstetrical assistant at Westminster Hospital, London. The younger Cyril Burt's education began in London at a Board school near St James's Park. In 1890, the family briefly moved to Jersey then to Snitterfield, Warwickshire, in 1893, where Burt's father opened a rural practice. Early in Burt's life he showed a precocious nature, so much so that his father often took the young Burt with him on his medical rounds. One of the elder Burt's more famous patients was Darwin Galton, brother of Francis Galton. The visits the Burts made to the Galton estate not only allowed the young Burt to learn about the work of Francis Galton, but also allowed Burt to meet him on multiple occasions and to be strongly drawn to his ideas; especially his studies in statistics and individual differences, two defining characters of the London School of Psychology whose membership includes both Galton and Burt. 
He attended King's (now known as Warwick) School, in the county town, from 1892 to 1895, and later won a scholarship to Christ's Hospital, then located in London, where he developed his interest in psychology. From 1902, he attended Jesus College, Oxford, where he studied Classics and took an interest in philosophy and psychology, the latter under William McDougall. McDougall, knowing Burt's interest in Galton's work, taught him the elements of psychometrics, thus helping Burt with his first steps in the development and structure of mental tests, an interest that would last the rest of his life. Burt was one of a group of students who worked with McDougall, which included William Brown, John Flügel, and May Smith, who all went on to have distinguished careers in psychology. Burt graduated with second-class honours in Literae Humaniores (Classics) in 1906, taking a special paper in Psychology in his Final Examinations. He subsequently supplemented his BA with a teaching diploma. In 1907, McDougall invited Burt to help with a nationwide survey of physical and ment
https://en.wikipedia.org/wiki/Complex%20conjugate
In mathematics, the complex conjugate of a complex number is the number with an equal real part and an imaginary part equal in magnitude but opposite in sign. That is, if and are real numbers then the complex conjugate of is The complex conjugate of is often denoted as or . In polar form, if and are real numbers then the conjugate of is This can be shown using Euler's formula. The product of a complex number and its conjugate is a real number:  (or  in polar coordinates). If a root of a univariate polynomial with real coefficients is complex, then its complex conjugate is also a root. Notation The complex conjugate of a complex number is written as or The first notation, a vinculum, avoids confusion with the notation for the conjugate transpose of a matrix, which can be thought of as a generalization of the complex conjugate. The second is preferred in physics, where dagger (†) is used for the conjugate transpose, as well as electrical engineering and computer engineering, where bar notation can be confused for the logical negation ("NOT") Boolean algebra symbol, while the bar notation is more common in pure mathematics. If a complex number is represented as a matrix, the notations are identical, and the complex conjugate corresponds to a flip along the diagonal. Properties The following properties apply for all complex numbers and unless stated otherwise, and can be proved by writing and in the form For any two complex numbers, conjugation is distributive over addition, subtraction, multiplication and division: A complex number is equal to its complex conjugate if its imaginary part is zero, that is, if the number is real. In other words, real numbers are the only fixed points of conjugation. 
Conjugation does not change the modulus of a complex number: Conjugation is an involution, that is, the conjugate of the conjugate of a complex number is In symbols, The product of a complex number with its conjugate is equal to the square of the number's modulus: This allows easy computation of the multiplicative inverse of a complex number given in rectangular coordinates: Conjugation is commutative under composition with exponentiation to integer powers, with the exponential function, and with the natural logarithm for nonzero arguments: If is a polynomial with real coefficients and then as well. Thus, non-real roots of real polynomials occur in complex conjugate pairs (see Complex conjugate root theorem). In general, if is a holomorphic function whose restriction to the real numbers is real-valued, and and are defined, then The map from to is a homeomorphism (where the topology on is taken to be the standard topology) and antilinear, if one considers as a complex vector space over itself. Even though it appears to be a well-behaved function, it is not holomorphic; it reverses orientation whereas holomorphic functions locally preserve orientation. It is bijective and compatible with the arith
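Several of the properties listed above can be checked directly with Python's built-in complex type, whose `conjugate` method implements exactly this operation:

```python
z, w = 3 + 4j, 1 - 2j

# Conjugation distributes over addition and multiplication:
print((z + w).conjugate() == z.conjugate() + w.conjugate())  # True
print((z * w).conjugate() == z.conjugate() * w.conjugate())  # True

# Conjugation is an involution:
print(z.conjugate().conjugate() == z)                        # True

# The product of z with its conjugate is the square of its modulus:
print(z * z.conjugate() == abs(z) ** 2)                      # True: both are 25
```

For z = 3 + 4i the last line computes (3 + 4i)(3 − 4i) = 9 + 16 = 25, matching |z|² = 5².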
https://en.wikipedia.org/wiki/Homogeneity%20%28disambiguation%29
Homogeneity is a sameness of constituent structure. Homogeneity, homogeneous, or homogenization may also refer to: In mathematics Transcendental law of homogeneity of Leibniz Homogeneous space for a Lie group G, or more general transformation group Homogeneous function Homogeneous polynomial Homogeneous equation (linear algebra): systems of linear equations with zero constant term Homogeneous differential equation Homogeneous distribution Homogeneous linear transformation Homogeneous relation: binary relation on a set Asymptotic homogenization, a method to study partial differential equations with highly oscillatory coefficients Homogenization of a polynomial, a mathematical operation Homogeneous (large cardinal property) Homogeneous coordinates, used in projective spaces Homogeneous element and homogeneous ideal in a graded ring Homogeneous model in model theory In statistics Homogeneity (statistics), logically consistent data matrices Homogeneity of variance In chemistry Homogeneous catalysis, a sequence of chemical reactions that involve a catalyst in the same phase as the reactants Homogeneous (chemistry), a property of a mixture showing no variation in properties Homogenization (chemistry), intensive mixing of mutually insoluble substance or groups of substance to obtain a soluble suspension or constant Other uses Homogeneity (physics), translational invariance or compatibility of units in equations Homogenization (climate), the process of removing non-climatic changes from climate data Homogenization (biology), a process that involves breaking apart cells — releasing organelles and cytoplasm Homogeneity (ecology), all of the same or similar kind or nature Milk#Creaming and homogenization, to prevent separation of the cream Heterogeneity (disambiguation), links relating to objects or systems consisting of multiple items having a large number of structural variations Monoculturalism, ethnic homogeneity or the advocacy of it Zygosity See also
https://en.wikipedia.org/wiki/Orthogonal%20group
In mathematics, the orthogonal group in dimension , denoted , is the group of distance-preserving transformations of a Euclidean space of dimension that preserve a fixed point, where the group operation is given by composing transformations. The orthogonal group is sometimes called the general orthogonal group, by analogy with the general linear group. Equivalently, it is the group of orthogonal matrices, where the group operation is given by matrix multiplication (an orthogonal matrix is a real matrix whose inverse equals its transpose). The orthogonal group is an algebraic group and a Lie group. It is compact. The orthogonal group in dimension has two connected components. The one that contains the identity element is a normal subgroup, called the special orthogonal group, and denoted . It consists of all orthogonal matrices of determinant 1. This group is also called the rotation group, generalizing the fact that in dimensions 2 and 3, its elements are the usual rotations around a point (in dimension 2) or a line (in dimension 3). In low dimension, these groups have been widely studied, see , and . The other component consists of all orthogonal matrices of determinant . This component does not form a group, as the product of any two of its elements is of determinant 1, and therefore not an element of the component. By extension, for any field , an matrix with entries in such that its inverse equals its transpose is called an orthogonal matrix over . The orthogonal matrices form a subgroup, denoted , of the general linear group ; that is More generally, given a non-degenerate symmetric bilinear form or quadratic form on a vector space over a field, the orthogonal group of the form is the group of invertible linear maps that preserve the form. The preceding orthogonal groups are the special case where, on some basis, the bilinear form is the dot product, or, equivalently, the quadratic form is the sum of the square of the coordinates. 
All orthogonal groups are algebraic groups, since the condition of preserving a form can be expressed as an equality of matrices. Name The name of "orthogonal group" originates from the following characterization of its elements. Given a Euclidean vector space of dimension , the elements of the orthogonal group are, up to a uniform scaling (homothecy), the linear maps from to that map orthogonal vectors to orthogonal vectors. In Euclidean geometry The orthogonal group is the subgroup of the general linear group , consisting of all endomorphisms that preserve the Euclidean norm; that is, endomorphisms such that Let be the group of the Euclidean isometries of a Euclidean space of dimension . This group does not depend on the choice of a particular space, since all Euclidean spaces of the same dimension are isomorphic. The stabilizer subgroup of a point is the subgroup of the elements such that . This stabilizer is (or, more exactly, is isomorphic to) , since the choice of a point as an o
https://en.wikipedia.org/wiki/3D%20rotation%20group
In mechanics and geometry, the 3D rotation group, often denoted SO(3), is the group of all rotations about the origin of three-dimensional Euclidean space under the operation of composition. By definition, a rotation about the origin is a transformation that preserves the origin, Euclidean distance (so it is an isometry), and orientation (i.e., handedness of space). Composing two rotations results in another rotation, every rotation has a unique inverse rotation, and the identity map satisfies the definition of a rotation. Owing to the above properties (along composite rotations' associative property), the set of all rotations is a group under composition. Every non-trivial rotation is determined by its axis of rotation (a line through the origin) and its angle of rotation. Rotations are not commutative (for example, rotating R 90° in the x-y plane followed by S 90° in the y-z plane is not the same as S followed by R), making the 3D rotation group a nonabelian group. Moreover, the rotation group has a natural structure as a manifold for which the group operations are smoothly differentiable, so it is in fact a Lie group. It is compact and has dimension 3. Rotations are linear transformations of and can therefore be represented by matrices once a basis of has been chosen. Specifically, if we choose an orthonormal basis of , every rotation is described by an orthogonal 3 × 3 matrix (i.e., a 3 × 3 matrix with real entries which, when multiplied by its transpose, results in the identity matrix) with determinant 1. The group SO(3) can therefore be identified with the group of these matrices under matrix multiplication. These matrices are known as "special orthogonal matrices", explaining the notation SO(3). The group SO(3) is used to describe the possible rotational symmetries of an object, as well as the possible orientations of an object in space. Its representations are important in physics, where they give rise to the elementary particles of integer spin. 
Length and angle Besides just preserving length, rotations also preserve the angles between vectors. This follows from the fact that the standard dot product between two vectors u and v can be written purely in terms of length: It follows that every length-preserving linear transformation in preserves the dot product, and thus the angle between vectors. Rotations are often defined as linear transformations that preserve the inner product on , which is equivalent to requiring them to preserve length. See classical group for a treatment of this more general approach, where appears as a special case. Orthogonal and rotation matrices Every rotation maps an orthonormal basis of to another orthonormal basis. Like any linear transformation of finite-dimensional vector spaces, a rotation can always be represented by a matrix. Let be a given rotation. With respect to the standard basis of the columns of are given by . Since the standard basis is orthonormal, and since preserves angles a
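The defining properties of a special orthogonal matrix — transpose times matrix equals the identity, determinant equal to 1 — can be verified numerically for a concrete rotation. A short Python sketch (function names are mine), using a rotation about the z-axis:

```python
from math import cos, sin, isclose, pi

def rot_z(theta):
    # Rotation by theta about the z-axis, as a 3x3 matrix.
    return [[cos(theta), -sin(theta), 0.0],
            [sin(theta),  cos(theta), 0.0],
            [0.0,         0.0,        1.0]]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

R = rot_z(pi / 3)
Rt = [list(row) for row in zip(*R)]
RtR = mat_mul(Rt, R)

# R^T R = I (orthogonality) and det R = 1, so R lies in SO(3).
orthogonal = all(isclose(RtR[i][j], float(i == j), abs_tol=1e-12)
                 for i in range(3) for j in range(3))
print(orthogonal, isclose(det3(R), 1.0))
```

The tolerance absorbs floating-point rounding; exactly the same checks apply to rotations about any axis.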
https://en.wikipedia.org/wiki/Symplectic%20group
In mathematics, the name symplectic group can refer to two different, but closely related, collections of mathematical groups, denoted and for positive integer n and field F (usually C or R). The latter is called the compact symplectic group and is also denoted by . Many authors prefer slightly different notations, usually differing by factors of . The notation used here is consistent with the size of the most common matrices which represent the groups. In Cartan's classification of the simple Lie algebras, the Lie algebra of the complex group is denoted , and is the compact real form of . Note that when we refer to the (compact) symplectic group it is implied that we are talking about the collection of (compact) symplectic groups, indexed by their dimension . The name "symplectic group" was coined by Hermann Weyl as a replacement for the previous confusing names (line) complex group and Abelian linear group, and is the Greek analog of "complex". The metaplectic group is a double cover of the symplectic group over R; it has analogues over other local fields, finite fields, and adele rings. The symplectic group is a classical group defined as the set of linear transformations of a -dimensional vector space over the field which preserve a non-degenerate skew-symmetric bilinear form. Such a vector space is called a symplectic vector space, and the symplectic group of an abstract symplectic vector space is denoted . Upon fixing a basis for , the symplectic group becomes the group of symplectic matrices, with entries in , under the operation of matrix multiplication. This group is denoted either or . If the bilinear form is represented by the nonsingular skew-symmetric matrix Ω, then where MT is the transpose of M. Often Ω is defined to be where In is the identity matrix. 
In this case, can be expressed as those block matrices , where , satisfying the three equations: Since all symplectic matrices have determinant , the symplectic group is a subgroup of the special linear group . When , the symplectic condition on a matrix is satisfied if and only if the determinant is one, so that . For , there are additional conditions, i.e. is then a proper subgroup of . Typically, the field is the field of real numbers or complex numbers . In these cases is a real or complex Lie group of real or complex dimension , respectively. These groups are connected but non-compact. The center of consists of the matrices and as long as the characteristic of the field is not . Since the center of is discrete and its quotient modulo the center is a simple group, is considered a simple Lie group. The real rank of the corresponding Lie algebra, and hence of the Lie group , is . The Lie algebra of is the set equipped with the commutator as its Lie bracket. For the standard skew-symmetric bilinear form , this Lie algebra is the set of all block matrices subject to the conditions The symplectic group over the field of complex numbers is a no
https://en.wikipedia.org/wiki/Uncorrelatedness%20%28probability%20theory%29
In probability theory and statistics, two real-valued random variables, , , are said to be uncorrelated if their covariance, , is zero. If two variables are uncorrelated, there is no linear relationship between them. Uncorrelated random variables have a Pearson correlation coefficient, when it exists, of zero, except in the trivial case when either variable has zero variance (is a constant). In this case the correlation is undefined. In general, uncorrelatedness is not the same as orthogonality, except in the special case where at least one of the two random variables has an expected value of 0. In this case, the covariance is the expectation of the product, and and are uncorrelated if and only if . If and are independent, with finite second moments, then they are uncorrelated. However, not all uncorrelated variables are independent. Definition Definition for two real random variables Two random variables are called uncorrelated if their covariance is zero. Formally: Definition for two complex random variables Two complex random variables are called uncorrelated if their covariance and their pseudo-covariance is zero, i.e. Definition for more than two random variables A set of two or more random variables is called uncorrelated if each pair of them is uncorrelated. This is equivalent to the requirement that the non-diagonal elements of the autocovariance matrix of the random vector are all zero. The autocovariance matrix is defined as: Examples of dependence without correlation Example 1 Let be a random variable that takes the value 0 with probability 1/2, and takes the value 1 with probability 1/2. Let be a random variable, independent of , that takes the value −1 with probability 1/2, and takes the value 1 with probability 1/2. Let be a random variable constructed as . The claim is that and have zero covariance (and thus are uncorrelated), but are not independent. 
Proof: Taking into account that where the second equality holds because and are independent, one gets Therefore, and are uncorrelated. Independence of and means that for all and , . This is not true, in particular, for and . Thus so and are not independent. Q.E.D. Example 2 If is a continuous random variable uniformly distributed on and , then and are uncorrelated even though determines and a particular value of can be produced by only one or two values of : on the other hand, is 0 on the triangle defined by although is not null on this domain. Therefore and the variables are not independent. Therefore the variables are uncorrelated. When uncorrelatedness implies independence There are cases in which uncorrelatedness does imply independence. One of these cases is the one in which both random variables are two-valued (so each can be linearly transformed to have a Bernoulli distribution). Further, two jointly normally distributed random variables are independent if they are uncorrelated, although this does not hold f
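Example 1 above can be verified by exact enumeration rather than simulation, since the joint distribution has only four equally likely outcomes. A Python sketch using exact rational arithmetic (names are mine):

```python
from itertools import product
from fractions import Fraction

# The four equally likely outcomes of (X, Z), with Y = X * Z.
outcomes = [(x, x * z) for x, z in product((0, 1), (-1, 1))]
p = Fraction(1, len(outcomes))

def expect(f):
    return sum(p * f(x, y) for x, y in outcomes)

cov = (expect(lambda x, y: x * y)
       - expect(lambda x, y: x) * expect(lambda x, y: y))
print(cov)  # 0: X and Y are uncorrelated

# ...yet they are not independent: P(X=1, Y=1) != P(X=1) * P(Y=1).
p_x1 = sum(p for x, y in outcomes if x == 1)
p_y1 = sum(p for x, y in outcomes if y == 1)
p_both = sum(p for x, y in outcomes if x == 1 and y == 1)
print(p_both, p_x1 * p_y1)  # 1/4 vs 1/8
```

The covariance vanishes exactly, while the joint probability P(X = 1, Y = 1) = 1/4 differs from the product of the marginals, 1/2 · 1/4 = 1/8, reproducing the proof in the text.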
https://en.wikipedia.org/wiki/Symplectic%20matrix
In mathematics, a symplectic matrix is a matrix with real entries that satisfies the condition where denotes the transpose of and is a fixed nonsingular, skew-symmetric matrix. This definition can be extended to matrices with entries in other fields, such as the complex numbers, finite fields, p-adic numbers, and function fields. Typically is chosen to be the block matrix where is the identity matrix. The matrix has determinant and its inverse is . Properties Generators for symplectic matrices Every symplectic matrix has determinant , and the symplectic matrices with real entries form a subgroup of the general linear group under matrix multiplication since being symplectic is a property stable under matrix multiplication. Topologically, this symplectic group is a connected noncompact real Lie group of real dimension , and is denoted . The symplectic group can be defined as the set of linear transformations that preserve the symplectic form of a real symplectic vector space. This symplectic group has a distinguished set of generators, which can be used to find all possible symplectic matrices. This includes the following sets where is the set of symmetric matrices. Then, is generated by the set of matrices above. In other words, any symplectic matrix can be constructed by multiplying matrices in and together, along with some power of . Inverse matrix Every symplectic matrix is invertible with the inverse matrix given by Furthermore, the product of two symplectic matrices is, again, a symplectic matrix. This gives the set of all symplectic matrices the structure of a group. There exists a natural manifold structure on this group which makes it into a (real or complex) Lie group called the symplectic group. Determinantal properties It follows easily from the definition that the determinant of any symplectic matrix is ±1. Actually, it turns out that the determinant is always +1 for any field. 
One way to see this is through the use of the Pfaffian and the identity Pf(Mᵀ Ω M) = det(M) Pf(Ω). Since Mᵀ Ω M = Ω and Pf(Ω) ≠ 0, we have that det(M) = 1. When the underlying field is real or complex, one can also show this by factoring the inequality det(Mᵀ M + I) > 0. Block form of symplectic matrices Suppose Ω is given in the standard form and let M = [[A, B], [C, D]] be a 2n×2n block matrix, where A, B, C, D are n×n matrices. The condition for M to be symplectic is equivalent to the two following equivalent conditions: AᵀC and BᵀD symmetric, and AᵀD − CᵀB = I; or ABᵀ and CDᵀ symmetric, and ADᵀ − BCᵀ = I. When n = 1 these conditions reduce to the single condition det(M) = 1. Thus a 2×2 matrix is symplectic iff it has unit determinant. Inverse matrix of block matrix With Ω in standard form, the inverse of M is given by M⁻¹ = Ω⁻¹ Mᵀ Ω = [[Dᵀ, −Bᵀ], [−Cᵀ, Aᵀ]]. The group has dimension n(2n + 1). This can be seen by noting that Mᵀ Ω M − Ω is anti-symmetric. Since the space of anti-symmetric matrices has dimension n(2n − 1), the identity Mᵀ Ω M = Ω imposes n(2n − 1) constraints on the 4n² coefficients of M and leaves M with n(2n + 1) independent coefficients. Symplectic transformations In the abstract formulation of linear algebra, matrices are replaced with linear transformations of finite-dimensional vector spaces.
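The defining condition, the determinant property, and the inverse formula can be checked numerically. This is a sketch with numpy (the particular generator-type matrix M = [[A, 0], [0, (Aᵀ)⁻¹]] is an arbitrary test choice; it satisfies the block conditions above whenever A is invertible):

```python
import numpy as np

n = 3
I = np.eye(n)
Z = np.zeros((n, n))
Omega = np.block([[Z, I], [-I, Z]])  # the standard skew-symmetric Omega

rng = np.random.default_rng(1)
A = rng.standard_normal((n, n)) + n * I          # shifted to keep A invertible
M = np.block([[A, Z], [Z, np.linalg.inv(A.T)]])  # block-diagonal symplectic matrix

symplectic = np.allclose(M.T @ Omega @ M, Omega)                       # M^T Omega M = Omega
det_one = np.isclose(np.linalg.det(M), 1.0)                            # det M = +1
inverse_ok = np.allclose(np.linalg.inv(M),
                         np.linalg.inv(Omega) @ M.T @ Omega)           # M^-1 = Omega^-1 M^T Omega
print(symplectic, det_one, inverse_ok)
```

Here det M = det(A) · det((Aᵀ)⁻¹) = 1 exactly, matching the determinantal property.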
https://en.wikipedia.org/wiki/Unitary%20group
In mathematics, the unitary group of degree n, denoted U(n), is the group of n×n unitary matrices, with the group operation of matrix multiplication. The unitary group is a subgroup of the general linear group GL(n, C). Hyperorthogonal group is an archaic name for the unitary group, especially over finite fields. For the group of unitary matrices with determinant 1, see Special unitary group. In the simple case n = 1, the group U(1) corresponds to the circle group, consisting of all complex numbers with absolute value 1, under multiplication. All the unitary groups contain copies of this group. The unitary group U(n) is a real Lie group of dimension n². The Lie algebra of U(n) consists of n×n skew-Hermitian matrices, with the Lie bracket given by the commutator. The general unitary group (also called the group of unitary similitudes) consists of all matrices A such that A*A is a nonzero multiple of the identity matrix, and is just the product of the unitary group with the group of all positive multiples of the identity matrix. Properties Since the determinant of a unitary matrix is a complex number with norm 1, the determinant gives a group homomorphism det: U(n) → U(1). The kernel of this homomorphism is the set of unitary matrices with determinant 1. This subgroup is called the special unitary group, denoted SU(n). We then have a short exact sequence of Lie groups: 1 → SU(n) → U(n) → U(1) → 1. The above map U(n) → U(1) has a section: we can view U(1) as the subgroup of U(n) consisting of matrices that are diagonal with e^{iθ} in the upper left corner and 1 on the rest of the diagonal. Therefore U(n) is a semidirect product of U(1) with SU(n). The unitary group U(n) is not abelian for n > 1. The center of U(n) is the set of scalar matrices λI with λ ∈ U(1); this follows from Schur's lemma. The center is then isomorphic to U(1). Since the center of U(n) is a 1-dimensional abelian normal subgroup of U(n), the unitary group is not semisimple, but it is reductive.
Topology The unitary group U(n) is endowed with the relative topology as a subset of M(n, C), the set of all n×n complex matrices, which is itself homeomorphic to a 2n²-dimensional Euclidean space. As a topological space, U(n) is both compact and connected. To show that U(n) is connected, recall that any unitary matrix A can be diagonalized by another unitary matrix S. Any diagonal unitary matrix must have complex numbers of absolute value 1 on the main diagonal. We can therefore write A = S diag(e^{iθ₁}, …, e^{iθₙ}) S⁻¹. A path in U(n) from the identity to A is then given by t ↦ S diag(e^{itθ₁}, …, e^{itθₙ}) S⁻¹. The unitary group is not simply connected; the fundamental group of U(n) is infinite cyclic for all n: π₁(U(n)) ≅ Z. To see this, note that the above splitting of U(n) as a semidirect product of SU(n) and U(1) induces a topological product structure on U(n), so that U(n) ≅ SU(n) × U(1) as topological spaces. Now the first unitary group U(1) is topologically a circle, which is well known to have a fundamental group isomorphic to Z, whereas SU(n) is simply connected. The determinant map det: U(n) → U(1) induces an isomorphism of fundamental groups, with the splitting U(1) → U(n) inducing the inverse. The Weyl group of U(n) is the symmetric group Sₙ, acting on the diagonal torus by permuting the entries of each diagonal matrix.
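The determinant homomorphism described above is easy to probe numerically. A sketch (generating a unitary matrix from the QR factorization of a random complex matrix is a standard construction, not something prescribed by the article):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_unitary(n):
    """Q from the QR factorization of a random complex matrix is unitary."""
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, _ = np.linalg.qr(z)
    return q

u, v = random_unitary(4), random_unitary(4)

# Membership in U(n): U* U = I, and the determinant lies on the unit circle U(1).
unitary_ok = np.allclose(u.conj().T @ u, np.eye(4))
circle_ok = np.isclose(abs(np.linalg.det(u)), 1.0)

# det is a group homomorphism: det(UV) = det(U) det(V).
hom_ok = np.isclose(np.linalg.det(u @ v),
                    np.linalg.det(u) * np.linalg.det(v))
print(unitary_ok, circle_ok, hom_ok)
```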
https://en.wikipedia.org/wiki/Special%20unitary%20group
In mathematics, the special unitary group of degree n, denoted SU(n), is the Lie group of n×n unitary matrices with determinant 1. The matrices of the more general unitary group may have complex determinants with absolute value 1, rather than real 1 in the special case. The group operation is matrix multiplication. The special unitary group is a normal subgroup of the unitary group U(n), consisting of all n×n unitary matrices. As a compact classical group, U(n) is the group that preserves the standard inner product on Cⁿ. It is itself a subgroup of the general linear group, SU(n) ⊂ U(n) ⊂ GL(n, C). The SU(n) groups find wide application in the Standard Model of particle physics, especially SU(2) in the electroweak interaction and SU(3) in quantum chromodynamics. The simplest case, SU(1), is the trivial group, having only a single element. The group SU(2) is isomorphic to the group of quaternions of norm 1, and is thus diffeomorphic to the 3-sphere. Since unit quaternions can be used to represent rotations in 3-dimensional space (up to sign), there is a surjective homomorphism from SU(2) to the rotation group SO(3) whose kernel is {+I, −I}. SU(2) is also identical to one of the symmetry groups of spinors, Spin(3), that enables a spinor presentation of rotations. Properties The special unitary group SU(n) is a strictly real Lie group (vs. a more general complex Lie group). Its dimension as a real manifold is n² − 1. Topologically, it is compact and simply connected. Algebraically, it is a simple Lie group (meaning its Lie algebra is simple; see below). The center of SU(n) is isomorphic to the cyclic group Z/nZ, and is composed of the diagonal matrices ζI for ζ an nth root of unity and I the n×n identity matrix. Its outer automorphism group for n ≥ 3 is Z/2Z, while the outer automorphism group of SU(2) is the trivial group. A maximal torus of rank n − 1 is given by the set of diagonal matrices with determinant 1. The Weyl group of SU(n) is the symmetric group Sₙ, which is represented by signed permutation matrices (the signs being necessary to ensure that the determinant is 1).
The Lie algebra of SU(n), denoted by su(n), can be identified with the set of traceless anti-Hermitian n×n complex matrices, with the regular commutator as a Lie bracket. Particle physicists often use a different, equivalent representation: the set of traceless Hermitian n×n complex matrices with Lie bracket given by −i times the commutator. Lie algebra The Lie algebra su(n) of SU(n) consists of n×n skew-Hermitian matrices with trace zero. This (real) Lie algebra has dimension n² − 1. More information about the structure of this Lie algebra can be found below. Fundamental representation In the physics literature, it is common to identify the Lie algebra with the space of trace-zero Hermitian (rather than the skew-Hermitian) matrices. That is to say, the physicists' Lie algebra differs by a factor of i from the mathematicians'. With this convention, one can then choose generators Tₐ that are traceless Hermitian complex n×n matrices, where: Tₐ T_b = (1/2n) δ_{ab} Iₙ + (1/2) Σ_c (d_{abc} + i f_{abc}) T_c, where the f_{abc} are the structure constants and are antisymmetric in all indices, while the d_{abc}-coefficients are symmetric in all indices.
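The relationship between the Lie algebra and the group can be made concrete for n = 2. This is a sketch (the Pauli-matrix basis and the closed-form exponential exp(iH) = cos(t) I + i sin(t) (H/t), for H = t·(n·σ) with unit n, are standard facts; the helper name is an illustrative choice): exponentiating i times a traceless Hermitian 2×2 matrix lands in SU(2).

```python
import numpy as np

# Pauli matrices: a basis (over R) for the traceless Hermitian 2x2 matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def su2_exponential(a, b, c):
    """exp(i(a*sx + b*sy + c*sz)) via cos(t) I + i sin(t) (H/t), t = |(a,b,c)|."""
    t = np.sqrt(a * a + b * b + c * c)
    if t == 0:
        return np.eye(2, dtype=complex)
    H = a * sx + b * sy + c * sz
    return np.cos(t) * np.eye(2) + 1j * np.sin(t) * (H / t)

U = su2_exponential(0.3, -0.7, 1.1)
print(np.allclose(U.conj().T @ U, np.eye(2)))  # unitary
print(np.isclose(np.linalg.det(U), 1.0))       # determinant 1
```

The determinant is 1 because det(exp(iH)) = exp(i tr H) and H is traceless, matching the identification of su(2) with i times the traceless Hermitian matrices.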
https://en.wikipedia.org/wiki/Spherical%20geometry
Spherical geometry or spherics is the geometry of the two-dimensional surface of a sphere or the n-dimensional surface of higher-dimensional spheres. Long studied for its practical applications to astronomy, navigation, and geodesy, spherical geometry and the metrical tools of spherical trigonometry are in many respects analogous to Euclidean plane geometry and trigonometry, but also have some important differences. The sphere can be studied either extrinsically as a surface embedded in 3-dimensional Euclidean space (part of the study of solid geometry), or intrinsically using methods that only involve the surface itself without reference to any surrounding space. Principles In plane (Euclidean) geometry, the basic concepts are points and (straight) lines. In spherical geometry, the basic concepts are point and great circle. However, two great circles on a sphere intersect in two antipodal points, unlike coplanar lines in elliptic geometry. In the extrinsic 3-dimensional picture, a great circle is the intersection of the sphere with any plane through the center. In the intrinsic approach, a great circle is a geodesic: a shortest path between any two of its points, provided they are close enough. Or, in the (also intrinsic) axiomatic approach analogous to Euclid's axioms of plane geometry, "great circle" is simply an undefined term, together with postulates stipulating the basic relationships between great circles and the also-undefined "points". This is the same as Euclid's method of treating point and line as undefined primitive notions and axiomatizing their relationships. Great circles in many ways play the same logical role in spherical geometry as lines in Euclidean geometry, e.g., as the sides of (spherical) triangles. This is more than an analogy; spherical and plane geometry and others can all be unified under the umbrella of geometry built from distance measurement, where "lines" are defined to mean shortest paths (geodesics).
Many statements about the geometry of points and such "lines" are equally true in all those geometries provided lines are defined that way, and the theory can be readily extended to higher dimensions. Nevertheless, because its applications and pedagogy are tied to solid geometry, and because the generalization loses some important properties of lines in the plane, spherical geometry ordinarily does not use the term "line" at all to refer to anything on the sphere itself. If developed as a part of solid geometry, use is made of points, straight lines and planes (in the Euclidean sense) in the surrounding space. In spherical geometry, angles are defined between great circles, resulting in a spherical trigonometry that differs from ordinary trigonometry in many respects; for example, the sum of the interior angles of a spherical triangle exceeds 180 degrees. Relation to similar geometries Because a sphere and a plane differ geometrically, (intrinsic) spherical geometry has some features of a non-Euclidean geometry.
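The claim that the interior angles of a spherical triangle sum to more than 180 degrees can be verified directly. A sketch (the octant triangle and the helper name are illustrative choices): the triangle whose vertices sit on the three coordinate axes of the unit sphere has three right angles, so its angle sum is 270 degrees.

```python
import numpy as np

def vertex_angle(a, b, c):
    """Angle at vertex a of the spherical triangle abc (unit position vectors),
    measured between the tangent directions of the great-circle arcs a->b, a->c."""
    tb = b - np.dot(a, b) * a   # tangent at a toward b
    tc = c - np.dot(a, c) * a   # tangent at a toward c
    cos_angle = np.dot(tb, tc) / (np.linalg.norm(tb) * np.linalg.norm(tc))
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

A, B, C = np.eye(3)  # one vertex on each coordinate axis
total = (vertex_angle(A, B, C)
         + vertex_angle(B, C, A)
         + vertex_angle(C, A, B))
print(np.degrees(total))  # ~270 degrees, well above 180
```

The excess over 180 degrees (here 90 degrees) is proportional to the triangle's area, a fact known as Girard's theorem.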
https://en.wikipedia.org/wiki/Skew-symmetric%20matrix
In mathematics, particularly in linear algebra, a skew-symmetric (or antisymmetric or antimetric) matrix is a square matrix whose transpose equals its negative. That is, it satisfies the condition Aᵀ = −A. In terms of the entries of the matrix, if a_{ij} denotes the entry in the i-th row and j-th column, then the skew-symmetric condition is equivalent to a_{ji} = −a_{ij} for all i and j. Example The matrix A = [[0, 2, −1], [−2, 0, −4], [1, 4, 0]] is skew-symmetric because Aᵀ = −A. Properties Throughout, we assume that all matrix entries belong to a field whose characteristic is not equal to 2. That is, we assume that 1 + 1 ≠ 0, where 1 denotes the multiplicative identity and 0 the additive identity of the given field. If the characteristic of the field is 2, then a skew-symmetric matrix is the same thing as a symmetric matrix. The sum of two skew-symmetric matrices is skew-symmetric. A scalar multiple of a skew-symmetric matrix is skew-symmetric. The elements on the diagonal of a skew-symmetric matrix are zero, and therefore its trace equals zero. If A is a real skew-symmetric matrix and λ is a real eigenvalue, then λ = 0, i.e. the nonzero eigenvalues of a skew-symmetric matrix are non-real. If A is a real skew-symmetric matrix, then I + A is invertible, where I is the identity matrix. If A is a real skew-symmetric matrix then A² is a symmetric negative semi-definite matrix. Vector space structure As a result of the first two properties above, the set of all skew-symmetric matrices of a fixed size forms a vector space. The space of n×n skew-symmetric matrices has dimension n(n − 1)/2. Let Mat_n denote the space of n×n matrices. A skew-symmetric matrix is determined by n(n − 1)/2 scalars (the number of entries above the main diagonal); a symmetric matrix is determined by n(n + 1)/2 scalars (the number of entries on or above the main diagonal). Let Skew_n denote the space of n×n skew-symmetric matrices and Sym_n denote the space of n×n symmetric matrices. Every square matrix can be written as the sum A = (1/2)(A − Aᵀ) + (1/2)(A + Aᵀ), with (1/2)(A − Aᵀ) ∈ Skew_n and (1/2)(A + Aᵀ) ∈ Sym_n. This is true for every square matrix with entries from any field whose characteristic is different from 2. Then, since Skew_n ∩ Sym_n = {0}, Mat_n = Skew_n ⊕ Sym_n, where ⊕ denotes the direct sum.
Denote by ⟨·, ·⟩ the standard inner product on Rⁿ. The real n×n matrix A is skew-symmetric if and only if ⟨Ax, y⟩ = −⟨x, Ay⟩ for all x, y ∈ Rⁿ. This is also equivalent to ⟨x, Ax⟩ = 0 for all x ∈ Rⁿ (one implication being obvious, the other a plain consequence of ⟨(A + Aᵀ)x, y⟩ = 0 for all x and y). Since this definition is independent of the choice of basis, skew-symmetry is a property that depends only on the linear operator A and a choice of inner product. 3×3 skew-symmetric matrices can be used to represent cross products as matrix multiplications. Furthermore, if A is a skew-symmetric matrix, then xᵀ A x = 0 for all x (for a skew-Hermitian matrix, the quadratic form x* A x is purely imaginary). Determinant Let A be an n×n skew-symmetric matrix. The determinant of A satisfies det(A) = det(Aᵀ) = det(−A) = (−1)ⁿ det(A). In particular, if n is odd, and since the underlying field is not of characteristic 2, the determinant vanishes. Hence, all odd-dimension skew-symmetric matrices are singular as their determinants are always zero. This result is called Jacobi's theorem, after Carl Gustav Jacobi (Eves, 1980). The even-dimensional case is more interesting. It turns out that the determinant of A for even n can be written as the square of a polynomial in the entries of A, the Pfaffian.
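The cross-product representation can be sketched as follows (the helper name `hat` is an arbitrary choice for the map a ↦ [a]ₓ): the 3×3 skew-symmetric matrix [a]ₓ turns the cross product a × b into the matrix product [a]ₓ b, and the identities xᵀ A x = 0 and det A = 0 (odd dimension) hold as stated above.

```python
import numpy as np

def hat(a):
    """The 3x3 skew-symmetric matrix [a]_x with [a]_x @ b == np.cross(a, b)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-4.0, 0.5, 2.0])
A = hat(a)

print(np.allclose(A, -A.T))                 # skew-symmetric
print(np.allclose(A @ b, np.cross(a, b)))   # represents the cross product
print(np.isclose(b @ A @ b, 0.0))           # x^T A x = 0 for every x
print(np.isclose(np.linalg.det(A), 0.0))    # odd-dimensional, hence singular
```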
https://en.wikipedia.org/wiki/Diagonal%20matrix
In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is [[3, 0], [0, 2]], while an example of a 3×3 diagonal matrix is [[6, 0, 0], [0, 5, 0], [0, 0, 4]]. An identity matrix of any size, or any multiple of it (a scalar matrix), is a diagonal matrix. A diagonal matrix is sometimes called a scaling matrix, since matrix multiplication with it results in changing scale (size). Its determinant is the product of its diagonal values. Definition As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix D = (d_{i,j}) with n columns and n rows is diagonal if d_{i,j} = 0 whenever i ≠ j. However, the main diagonal entries are unrestricted. The term diagonal matrix may sometimes refer to a rectangular diagonal matrix, which is an m-by-n matrix with all the entries not of the form d_{i,i} being zero. For example: [[1, 0, 0], [0, 4, 0]] or [[1, 0], [0, 4], [0, 0]]. More often, however, diagonal matrix refers to square matrices, which can be specified explicitly as a square diagonal matrix. A square diagonal matrix is a symmetric matrix, so this can also be called a symmetric diagonal matrix. The following matrix is a square diagonal matrix: [[1, 0, 0], [0, 4, 0], [0, 0, −2]]. If the entries are real numbers or complex numbers, then it is a normal matrix as well. In the remainder of this article we will consider only square diagonal matrices, and refer to them simply as "diagonal matrices". Vector-to-matrix diag operator A diagonal matrix D can be constructed from a vector a = (a₁, …, aₙ) using the diag operator: D = diag(a₁, …, aₙ). This may be written more compactly as D = diag(a). The same operator is also used to represent block diagonal matrices as A = diag(A₁, …, Aₙ), where each argument Aᵢ is a matrix. The diag operator may be written as: diag(a) = (a 1ᵀ) ∘ I, where ∘ represents the Hadamard product and 1 is a constant vector with elements 1. Matrix-to-vector diag operator The inverse matrix-to-vector diag operator is sometimes denoted by the identically named diag(D) = (a₁, …, aₙ), where the argument is now a matrix and the result is a vector of its diagonal entries.
The following property holds: diag(AB) = (A ∘ Bᵀ) 1, relating the diagonal of a matrix product to a Hadamard product and a row sum. Scalar matrix A diagonal matrix with equal diagonal entries is a scalar matrix; that is, a scalar multiple λI of the identity matrix I. Its effect on a vector is scalar multiplication by λ. For example, a 3×3 scalar matrix has the form [[λ, 0, 0], [0, λ, 0], [0, 0, λ]]. The scalar matrices are the center of the algebra of matrices: that is, they are precisely the matrices that commute with all other square matrices of the same size. By contrast, over a field (like the real numbers), a diagonal matrix with all diagonal elements distinct only commutes with diagonal matrices (its centralizer is the set of diagonal matrices). That is because if a diagonal matrix D = diag(d₁, …, dₙ) has dᵢ ≠ dⱼ, then given a matrix M with mᵢⱼ ≠ 0, the (i, j) terms of the products DM and MD are dᵢ mᵢⱼ and mᵢⱼ dⱼ, and dⱼ mᵢⱼ ≠ mᵢⱼ dᵢ (since one can divide by mᵢⱼ), so they do not commute unless the off-diagonal terms are zero. Diagonal matrices where the diagonal entries are not all equal or all distinct have centralizers intermediate between the whole space and only diagonal matrices. For an abstract vector space, the analogue of a diagonal matrix is a linear operator that acts diagonally with respect to some chosen basis.
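Both directions of the diag operator, the Hadamard identity, and the commutation facts above can be illustrated with numpy (a sketch; `np.diag` plays both the vector-to-matrix and matrix-to-vector roles depending on its argument's shape):

```python
import numpy as np

a = np.array([2.0, 5.0, 7.0])
D = np.diag(a)            # vector -> square diagonal matrix
print(np.diag(D))         # matrix -> vector of diagonal entries: [2. 5. 7.]

# The identity diag(a) = (a 1^T) o I (Hadamard product with the identity).
ones = np.ones(3)
print(np.allclose(np.outer(a, ones) * np.eye(3), D))

# A scalar matrix commutes with every matrix of the same size ...
M = np.arange(9.0).reshape(3, 3)
S = 4.0 * np.eye(3)
print(np.allclose(S @ M, M @ S))   # True
# ... but a diagonal matrix with distinct entries does not commute with
# a matrix that has nonzero off-diagonal terms.
print(np.allclose(D @ M, M @ D))   # False
```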