https://en.wikipedia.org/wiki/1829%20in%20science
The year 1829 in science and technology involved some significant events, listed below. Chemistry Isaac Holden produces a form of friction match. Mathematics Peter Gustav Lejeune Dirichlet publishes a memoir giving the Dirichlet conditions, showing for which functions the convergence of the Fourier series holds; introducing Dirichlet's test for the convergence of series; the Dirichlet function as an example that not every function is integrable; and, in the proof of the theorem for the Fourier series, the Dirichlet kernel and Dirichlet integral. He also introduces a general modern concept for a function. Nikolai Ivanovich Lobachevsky publishes his work on hyperbolic non-Euclidean geometry. S. D. Poisson publishes Sur l'attraction des sphéroides. Medicine Dr Benjamin Guy Babington makes the first known use of a laryngoscope. Palaeontology Jules Desnoyers names the Quaternary period. Engis 2, part of the skull of a young child and other bones, recognised in 1936 as the first known Neanderthal fossil, is found in the Awirs cave near Engis in the United Kingdom of the Netherlands (modern-day Belgium) by Philippe-Charles Schmerling. Technology May – Cyrill Demian patents a version of the accordion in Vienna. June 30 – Henry Robinson Palmer files a British patent application for corrugated iron for use in buildings. July 23 – In the United States, William Burt obtains the first patent for a form of typewriter, the typographer. October 6–14 – The Rainhill Trials, a steam locomotive competition, are run in England and won by Stephenson's Rocket. December 19 – Charles Wheatstone patents the concertina in Britain. Louis Braille publishes the first description of his method of embossed printing that allows the visually impaired to read. Higher Education Chalmers University of Technology founded in Gothenburg, Sweden. Technical University of Denmark (originally named 'College of Advanced Technology') founded in Copenhagen, Denmark. University of Stuttgart founded in Stuttgart, Germany. Ecole Centrale Paris (originally named 'École Centrale des Arts et Manufactures') founded in Paris, France. Awards Copley Medal: not awarded Births February 2 Alfred Brehm (died 1884), German zoologist. William Stanley (died 1909), English inventor. March 23 – N. R. Pogson (died 1891), English-born astronomer. April 28 – Charles Bourseul (died 1912), Belgian-born telegraph engineer. April 30 – Ferdinand von Hochstetter (died 1884), German-born geologist. August 13 (O.S. August 1) – Ivan Sechenov (died 1905), "the father of Russian physiology". August 23 – Moritz Cantor (died 1920), German historian of mathematics. August 24 – Emanuella Carlbeck (died 1901), Swedish pioneer in the education of students with intellectual disability. September 7 – August Kekulé (died 1896), German chemist. September 30 Franz Reuleaux (died 1905), German mechanical engineer. Joseph Wolstenholme (died 1891), English mathematician. October 15 – Asaph
https://en.wikipedia.org/wiki/Fa%C3%A0%20di%20Bruno%27s%20formula
Faà di Bruno's formula is an identity in mathematics generalizing the chain rule to higher derivatives. It is named after Francesco Faà di Bruno, although he was not the first to state or prove the formula. In 1800, more than 50 years before Faà di Bruno, the French mathematician Louis François Antoine Arbogast had stated the formula in a calculus textbook, which is considered to be the first published reference on the subject. Perhaps the most well-known form of Faà di Bruno's formula says that

$$\frac{d^n}{dx^n} f(g(x)) = \sum \frac{n!}{m_1!\,1!^{m_1}\,m_2!\,2!^{m_2}\,\cdots\,m_n!\,n!^{m_n}} \cdot f^{(m_1+\cdots+m_n)}(g(x)) \cdot \prod_{j=1}^{n}\left(g^{(j)}(x)\right)^{m_j},$$

where the sum is over all n-tuples of nonnegative integers (m_1, …, m_n) satisfying the constraint

$$1\cdot m_1 + 2\cdot m_2 + 3\cdot m_3 + \cdots + n\cdot m_n = n.$$

Sometimes, to give it a memorable pattern, it is written in a way in which the coefficients that have the combinatorial interpretation discussed below are less explicit:

$$\frac{d^n}{dx^n} f(g(x)) = \sum \frac{n!}{m_1!\,m_2!\,\cdots\,m_n!} \cdot f^{(m_1+\cdots+m_n)}(g(x)) \cdot \prod_{j=1}^{n}\left(\frac{g^{(j)}(x)}{j!}\right)^{m_j}.$$

Combining the terms with the same value of m_1 + m_2 + \cdots + m_n = k and noticing that m_j has to be zero for j > n − k + 1 leads to a somewhat simpler formula expressed in terms of Bell polynomials B_{n,k}(x_1, …, x_{n−k+1}):

$$\frac{d^n}{dx^n} f(g(x)) = \sum_{k=0}^{n} f^{(k)}(g(x))\, B_{n,k}\!\left(g'(x), g''(x), \dots, g^{(n-k+1)}(x)\right).$$

Combinatorial form The formula has a "combinatorial" form:

$$\frac{d^n}{dx^n} f(g(x)) = (f\circ g)^{(n)}(x) = \sum_{\pi\in\Pi} f^{(|\pi|)}(g(x)) \cdot \prod_{B\in\pi} g^{(|B|)}(x),$$

where π runs through the set Π of all partitions of the set {1, …, n}, "B ∈ π" means the variable B runs through the list of all of the "blocks" of the partition π, and |A| denotes the cardinality of the set A (so that |π| is the number of blocks in the partition π and |B| is the size of the block B). Example The following is a concrete explanation of the combinatorial form for the n = 4 case. The pattern is:

$$\begin{aligned}(f\circ g)''''(x) = {}& f''''(g(x))\,g'(x)^4 \\ &+ 6\,f'''(g(x))\,g''(x)\,g'(x)^2 \\ &+ 3\,f''(g(x))\,g''(x)^2 + 4\,f''(g(x))\,g'''(x)\,g'(x) \\ &+ f'(g(x))\,g''''(x).\end{aligned}$$

The factor g''(x)\,g'(x)^2 corresponds to the partition 2 + 1 + 1 of the integer 4, in the obvious way. The factor f'''(g(x)) that goes with it corresponds to the fact that there are three summands in that partition. The coefficient 6 that goes with those factors corresponds to the fact that there are exactly six partitions of a set of four members that break it into one part of size 2 and two parts of size 1. Similarly, the factor g''(x)^2 in the third line corresponds to the partition 2 + 2 of the integer 4, (4, because we are finding the fourth derivative), while f''(g(x)) corresponds to the fact that there are two summands (2 + 2) in that partition. The coefficient 3 corresponds to the fact that there are \tfrac{1}{2}\tbinom{4}{2} = 3 ways of partitioning 4 objects into groups of 2. The same concept applies to the others. A memorizable scheme is as follows:

$$\begin{aligned}(f\circ g)' &= f'\,g' \\ (f\circ g)'' &= f''\,(g')^2 + f'\,g'' \\ (f\circ g)''' &= f'''\,(g')^3 + 3\,f''\,g'\,g'' + f'\,g''' \\ (f\circ g)'''' &= f''''\,(g')^4 + 6\,f'''\,(g')^2 g'' + f''\,\bigl(3\,(g'')^2 + 4\,g'\,g'''\bigr) + f'\,g''''\end{aligned}$$

Combinatorics of the Faà di Bruno coefficients These partition-counting Faà di Bruno coefficients have a "closed-form" expression. The number of partitions of a set of size n corresponding to the integer partition

$$n = \underbrace{1+\cdots+1}_{m_1} + \underbrace{2+\cdots+2}_{m_2} + \underbrace{3+\cdots+3}_{m_3} + \cdots$$

of the integer n is equal to

$$\frac{n!}{m_1!\,m_2!\,m_3!\,\cdots\,1!^{m_1}\,2!^{m_2}\,3!^{m_3}\cdots}.$$

These coefficients also arise in the Bell polynomials, which are relevant to the study of cumulants. Variations Multivariate version Let y = g(x_1, …, x_n). Then the following identity holds regardless of whether the n variables are all distinct, or all identical, or partitioned into several distinguishable classes of indistinguishable variables (if it seems opaque, see the very concrete example below):

$$\frac{\partial^n}{\partial x_1\cdots\partial x_n} f(y) = \sum_{\pi\in\Pi} f^{(|\pi|)}(y) \cdot \prod_{B\in\pi} \frac{\partial^{|B|} y}{\prod_{j\in B}\partial x_j},$$

where (as above) π runs through the set Π of all partitions of the set {1, …, n}, "B ∈ π" means the variable B runs through the list of all of the "blocks" of the partition π, and |A| denotes the cardinality of the set A (so that |π| is the number of blocks in the partition π and |B| is the size of the block B).
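To make the coefficient formula concrete, here is a short Python sketch (not from the article; the helper names are mine) that enumerates the multiplicity tuples (m_1, …, m_n) and prints the coefficient n!/(m_1!·1!^{m_1} ⋯ m_n!·n!^{m_n}) for each. For n = 4 it reproduces the coefficients 1, 6, 3, 4, 1 of the expansion above.

```python
from math import factorial

def multiplicity_tuples(n):
    """Yield all (m_1, ..., m_n) with 1*m_1 + 2*m_2 + ... + n*m_n = n."""
    def rec(j, remaining, acc):
        if j > n:
            if remaining == 0:
                yield tuple(acc)
            return
        for m in range(remaining // j + 1):
            yield from rec(j + 1, remaining - j * m, acc + [m])
    yield from rec(1, n, [])

def faa_di_bruno_coefficient(ms):
    """n! / (m_1! 1!^m_1 * m_2! 2!^m_2 * ... * m_n! n!^m_n)."""
    n = sum(j * m for j, m in enumerate(ms, start=1))
    denom = 1
    for j, m in enumerate(ms, start=1):
        denom *= factorial(m) * factorial(j) ** m
    return factorial(n) // denom

for ms in multiplicity_tuples(4):
    print(ms, faa_di_bruno_coefficient(ms))
# The coefficients sum to 15, the Bell number B_4, i.e. the number of
# partitions of a 4-element set -- consistent with the combinatorial form.
```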
https://en.wikipedia.org/wiki/North%20Carolina%20School%20of%20Science%20and%20Mathematics
The North Carolina School of Science and Mathematics (NCSSM) is a two-year, public residential high school with two physical campuses located in Durham, North Carolina and Morganton, North Carolina that focuses on the intensive study of science, mathematics and technology. It accepts rising juniors from across North Carolina and enrolls them through senior year. Although NCSSM is a public school, enrollment is extremely selective, and applicants undergo a competitive review process for admission. NCSSM is a founding member of the National Consortium of Secondary STEM Schools (NCSSS) and a constituent institution of the University of North Carolina system. While not officially branded as such, many residents of North Carolina consider NCSSM to be a counterpart to the University of North Carolina School of the Arts due to their shared status as specialty residential high schools, with NCSSM focusing on science and math and the School of the Arts offering extended study in the arts. History Since its inception in 1980, NCSSM has been fully funded by the state, allowing students to attend without paying any tuition, room, board, or other student fees. This funding is supplemented by the NCSSM Foundation's private funding, which supports NCSSM's academic, residential, and outreach programs as well as providing funds for some capital improvements. Within the past 25 years, the Foundation has raised over $25 million in private support from corporations, foundations, alumni, parents and friends of NCSSM. A tuition fee was considered for the 2002–03 school year in the midst of a state budgetary crisis, but it was never implemented. In 2003, the NC Legislature approved a bill covering tuition costs at any university in the University of North Carolina System for all graduates of NCSSM, starting with the class of 2004, as an incentive to encourage NCSSM's talented students to stay in North Carolina. That bill was amended in 2005 to allow students to use additional tuition monies awarded to cover "costs of attendance." However, the tuition waiver was phased out in the Appropriations Act of 2009 in the North Carolina Senate in order to balance the budget. The bill states that "No new recipients shall be funded after June 30, 2009." The tuition grant was renewed retroactively for 2021 graduates in November 2021 in the Current Appropriations Act of 2021. NCSSM served as a model for 18 similar schools, many of which are now members of the National Consortium of Secondary STEM Schools (NCSSS). NCSSM opened a second campus in Morganton, North Carolina, for the 2022–23 school year. This campus houses approximately 300 students. Academics NCSSM students are not given a class rank and are encouraged to strive for their best rather than competing against other students. Although students previously were not given grade point averages (GPAs), the school has since changed its policy to provide GPAs on transcripts and simplify the college application proc
https://en.wikipedia.org/wiki/Colliers%2C%20Newfoundland%20and%20Labrador
Colliers is a town on the Avalon Peninsula in Newfoundland and Labrador, Canada. It is in Division 1 on Conception Bay. According to the 2016 Statistics Canada Census: the area had a population of 654, with 424 dwellings. Colliers was considered by John Guy and his associates as a preferred place for their first settlement in North America. The area that became known as Cupers Cove (Cupids) was chosen instead, to great tragedy. Had Colliers been chosen, this tragedy might not have occurred, and Colliers, rather than Cupids, would have been the site of the first permanent settlement. Famous residents of Colliers include former Lt. Gov. James A. McGrath, actor John Ryan and musician Terry McDonald. Demographics In the 2021 Census of Population conducted by Statistics Canada, Colliers had a population of living in of its total private dwellings, a change of from its 2016 population of . With a land area of , it had a population density of in 2021. See also List of cities and towns in Newfoundland and Labrador References External links Colliers Volunteer Fire Department Colliers - Encyclopedia of Newfoundland and Labrador, vol. 1, p. 480-481. Populated coastal places in Canada Towns in Newfoundland and Labrador
https://en.wikipedia.org/wiki/Division%20No.%201%2C%20Subdivision%20Y%2C%20Newfoundland%20and%20Labrador
Division No. 1, Subdivision Y is an unorganized subdivision on the Avalon Peninsula in Newfoundland and Labrador, Canada. It is in Division 1 on Trinity Bay. According to the 2016 Statistics Canada Census: Population: 1,118 % Change (2011 to 2016): -4.9 Dwellings: 857 Area: 190.87 km2 Density: 5.9 people/km2 Newfoundland and Labrador subdivisions
https://en.wikipedia.org/wiki/HC
HC, hc or H/C may refer to: Science, technology, and mathematics Medicine Health Canada Hemicrania continua Hyperelastosis cutis or hereditary equine regional dermal asthenia Chemistry Hemocyanin, a metalloprotein abbreviated Hc HC smoke, a US military designation for Hexachloroethane Homocapsaicin, a capsaicinoid Hydrocarbon, a category of substances consisting only of hydrogen and carbon Other uses in science, technology, and mathematics 74HC-series integrated circuits, a logic family of integrated circuits Felix HC, a series of Romanian personal microcomputers produced by ICE Felix Bucharest and which were ZX Spectrum clones Hemianthus callitrichoides, a freshwater aquatic plant native to Cuba + h.c., a notation used in mathematics and quantum physics Sports Head Coach Hors catégorie (French), used in cycle races to designate a climb that is "beyond categorization" UCI .HC road cycling races (1.HC and 2.HC), the second tier of events in the sport, after the UCI World Tour Other uses Heritage Corridor, a Metra commuter rail line running from Chicago to Joliet, Illinois Highway contract route, an outsourced United States Postal Service delivery method, formerly known as Star routes Honorary degree, or honoris causa Hors de commerce, prints similar to Artist Proofs except they are only available through the artist directly Hospitality Club, an internet-based hospitality service Houston Chronicle, newspaper of record of Houston, Texas Aero-Tropics Air Services (IATA airline designator HC) Disability, acronym for Handicap/Handicapped High-cube container, a type of intermodal shipping container
https://en.wikipedia.org/wiki/Division%20No.%207%2C%20Subdivision%20M%2C%20Newfoundland%20and%20Labrador
Division No. 7, Subdivision M is an unorganised subdivision in eastern Newfoundland, Newfoundland and Labrador, Canada. It is in Division No. 7 on Trinity Bay. According to the 2016 Statistics Canada Census: Population: 1,966 % Change (2011-2016): -4.3 Dwellings: 1,183 Area (km2.): 454.42 Density (persons per km2.): 4.3 Geography of Newfoundland and Labrador Newfoundland and Labrador subdivisions
https://en.wikipedia.org/wiki/Division%20No.%207%2C%20Subdivision%20K%2C%20Newfoundland%20and%20Labrador
Division No. 7, Subd. K is an unorganized subdivision on the Bonavista Peninsula in Newfoundland and Labrador, Canada. It is in Division No. 7 on Trinity Bay. According to the 2001 Statistics Canada Census: Population: 1,152 % Change (1996-2001): -9.9 Dwellings: 695 Area (km2.): 486.16 Density (persons per km2.): 2.4 Newfoundland and Labrador subdivisions
https://en.wikipedia.org/wiki/Division%20No.%207%2C%20Subdivision%20E%2C%20Newfoundland%20and%20Labrador
Division No. 7, Subd. E is an unorganized subdivision in eastern Newfoundland, Newfoundland and Labrador, Canada. It is in Division No. 7 on Bonavista Bay. According to the 2016 Statistics Canada Census: Population: 2,644 % Change (2011-2016): -2.6 Dwellings: 1,682 Area (km2.): 1,664.58 Density (persons per km2.): 1.6 Newfoundland and Labrador subdivisions
https://en.wikipedia.org/wiki/Division%20No.%207%2C%20Subdivision%20D%2C%20Newfoundland%20and%20Labrador
Division No. 7, Subd. D is an unorganized subdivision in eastern Newfoundland, Newfoundland and Labrador, Canada. It is in Division No. 7 on Bonavista Bay. According to the 2016 Statistics Canada Census: Population: 230 % Change (2011-2016): -0.9 Dwellings: 734 Area (km2.): 2,483.46 Density (persons per km2.): 0.1 Newfoundland and Labrador subdivisions
https://en.wikipedia.org/wiki/Division%20No.%207%2C%20Subdivision%20N%2C%20Newfoundland%20and%20Labrador
Division No. 7, Subd. N is an unorganized subdivision in eastern Newfoundland, Newfoundland and Labrador, Canada. It is in Division No. 7 on Freshwater Bay. According to the 2016 Statistics Canada Census: Population: 49 % Change (2011-2016): -15.5 Dwellings: 166 Area (km2.): 1,407.16 Density (persons per km2.): 0 Newfoundland and Labrador subdivisions
https://en.wikipedia.org/wiki/Division%20No.%206%2C%20Subdivision%20E%2C%20Newfoundland%20and%20Labrador
Division No. 6, Subd. E is an unorganized subdivision in northeastern Newfoundland, Newfoundland and Labrador, Canada. It is in Division No. 6. According to the 2016 Statistics Canada Census: Population: 194 % Change (2011-2016): -10.2 Dwellings: 631 Area (km2): 2,309.6 Density (persons per km2): 0.1 Newfoundland and Labrador subdivisions
https://en.wikipedia.org/wiki/Mutual%20information
In probability theory and information theory, the mutual information (MI) of two random variables is a measure of the mutual dependence between the two variables. More specifically, it quantifies the "amount of information" (in units such as shannons (bits), nats or hartleys) obtained about one random variable by observing the other random variable. The concept of mutual information is intimately linked to that of entropy of a random variable, a fundamental notion in information theory that quantifies the expected "amount of information" held in a random variable. Not limited to real-valued random variables and linear dependence like the correlation coefficient, MI is more general and determines how different the joint distribution of the pair (X, Y) is from the product of the marginal distributions of X and Y. MI is the expected value of the pointwise mutual information (PMI). The quantity was defined and analyzed by Claude Shannon in his landmark paper "A Mathematical Theory of Communication", although he did not call it "mutual information". This term was coined later by Robert Fano. Mutual information is also known as information gain. Definition Let (X, Y) be a pair of random variables with values over the space 𝒳 × 𝒴. If their joint distribution is P_{(X,Y)} and the marginal distributions are P_X and P_Y, the mutual information is defined as

$$I(X;Y) = D_{\mathrm{KL}}\bigl(P_{(X,Y)} \,\|\, P_X \otimes P_Y\bigr),$$

where D_{\mathrm{KL}} is the Kullback–Leibler divergence, and P_X ⊗ P_Y is the outer product distribution which assigns probability P_X(x)·P_Y(y) to each (x, y). Notice, as per property of the Kullback–Leibler divergence, that I(X;Y) is equal to zero precisely when the joint distribution coincides with the product of the marginals, i.e. when X and Y are independent (and hence observing Y tells you nothing about X). I(X;Y) is non-negative; it is a measure of the price for encoding (X, Y) as a pair of independent random variables when in reality they are not. If the natural logarithm is used, the unit of mutual information is the nat. If the log base 2 is used, the unit of mutual information is the shannon, also known as the bit. If the log base 10 is used, the unit of mutual information is the hartley, also known as the ban or the dit. In terms of PMFs for discrete distributions The mutual information of two jointly discrete random variables X and Y is calculated as a double sum:

$$I(X;Y) = \sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}} p_{(X,Y)}(x,y)\,\log\frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)},$$

where p_{(X,Y)} is the joint probability mass function of X and Y, and p_X and p_Y are the marginal probability mass functions of X and Y respectively. In terms of PDFs for continuous distributions In the case of jointly continuous random variables, the double sum is replaced by a double integral:

$$I(X;Y) = \int_{\mathcal{Y}}\int_{\mathcal{X}} p_{(X,Y)}(x,y)\,\log\frac{p_{(X,Y)}(x,y)}{p_X(x)\,p_Y(y)}\,dx\,dy,$$

where p_{(X,Y)} is now the joint probability density function of X and Y, and p_X and p_Y are the marginal probability density functions of X and Y respectively. Motivation Intuitively, mutual information measures the information that X and Y share: It measures how much knowing one of these variables reduces uncertainty about the other. For example, if X and Y are independent, then knowing X does not give any information about Y and vice versa, so their mutual
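The discrete double-sum formula is a few lines of numpy; the following sketch (the joint PMF is an illustrative choice of mine, not from the article) uses log base 2, so the answer comes out in shannons (bits).

```python
import numpy as np

# Joint PMF p(x, y) on a 2x2 space (illustrative values).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])

p_x = p_xy.sum(axis=1, keepdims=True)  # marginal of X, shape (2, 1)
p_y = p_xy.sum(axis=0, keepdims=True)  # marginal of Y, shape (1, 2)

# I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) ), skipping zero cells.
mask = p_xy > 0
mi = (p_xy[mask] * np.log2(p_xy[mask] / (p_x * p_y)[mask])).sum()
print(f"I(X;Y) = {mi:.4f} bits")  # positive, since X and Y are dependent
```

Replacing the joint PMF with the product of its marginals drives the result to exactly zero, matching the independence property stated above.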
https://en.wikipedia.org/wiki/ARS-based%20programming
ARS-based programming is built on three principles: abstraction, reference and synthesis. These principles can be seen as a generalized form of the basic operations of the Lambda calculus. All essential features of a programming language can be derived from ARS, even the three major programming paradigms: functional programming, object-oriented programming and imperative programming. The programming language A++ is a demonstration that, based on ARS, programming patterns can be developed that are very powerful, providing a solid base for solving common programming problems. ARS-based programming as covered in the book Programmierung pur (Undiluted Programming or Barebones Programming), published in German (the English rights are now available), is facilitated by three tools: A++, ARS++, and ARSAPI. A++, a minimal programming language with interpreter for basic training, enforcing rigorous confrontation with the essentials of programming; ARS++, a full-blown programming language including a virtual machine and compiler, extending A++ into a language that is fully ARS-compatible, with functionality going beyond that of Scheme and the power to cope with the challenges of real-world programming; ARSAPI, a bridge between ARS and popular programming languages like Java, C and C++, consisting of definitions and patterns recommended to express ARS in the target language. See also Educational programming language External links ARS Based Programming: Fundamental And Without Limits, further information on ARS. Programming paradigms
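The article gives no code, but since ARS is described as a generalization of the lambda calculus's basic operations, the three principles can be illustrated by analogy in Python (my own analogy, not A++ syntax): abstraction builds a function, reference names it, synthesis applies it.

```python
# Abstraction: capture behaviour in a function (a lambda abstraction).
square = lambda x: x * x

# Reference: refer to an existing abstraction by name.
f = square

# Synthesis: apply (combine) abstractions and arguments.
print(f(7))                                # 49

# The operations compose into higher-order patterns:
compose = lambda g, h: lambda x: g(h(x))   # abstraction over abstractions
print(compose(square, square)(2))          # 16
```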
https://en.wikipedia.org/wiki/List%20of%20transforms
This is a list of transforms in mathematics. Integral transforms Abel transform Bateman transform Fourier transform Short-time Fourier transform Gabor transform Hankel transform Hartley transform Hermite transform Hilbert transform Hilbert–Schmidt integral operator Jacobi transform Laguerre transform Laplace transform Inverse Laplace transform Two-sided Laplace transform Inverse two-sided Laplace transform Laplace–Carson transform Laplace–Stieltjes transform Legendre transform Linear canonical transform Mellin transform Inverse Mellin transform Poisson–Mellin–Newton cycle N-transform Radon transform Stieltjes transformation Sumudu transform Wavelet transform (integral) Weierstrass transform Discrete transforms Binomial transform Discrete Fourier transform, DFT Fast Fourier transform, a popular implementation of the DFT Discrete cosine transform Modified discrete cosine transform Discrete Hartley transform Discrete sine transform Discrete wavelet transform Hadamard transform (or, Walsh–Hadamard transform) Fast wavelet transform Hankel transform, the determinant of the Hankel matrix Discrete Chebyshev transform Equivalent, up to a diagonal scaling, to a discrete cosine transform Finite Legendre transform Spherical Harmonic transform Irrational base discrete weighted transform Number-theoretic transform Stirling transform Discrete-time transforms These transforms have a continuous frequency domain: Discrete-time Fourier transform Z-transform Data-dependent transforms Karhunen–Loève transform Other transforms Affine transformation (computer graphics) Bäcklund transform Bilinear transform Box–Muller transform Burrows–Wheeler transform (data compression) Chirplet transform Distance transform Fractal transform Gelfand transform Hadamard transform Hough transform (digital image processing) Inverse scattering transform Legendre transformation Möbius transformation Perspective transform (computer graphics) Sequence transform Watershed transform (digital image processing) Wavelet transform (orthonormal) Y-Δ transform (electrical circuits) See also List of Fourier-related transforms Transform coding External links Tables of Integral Transforms at EqWorld: The World of Mathematical Equations. Mathematics-related lists Image processing
https://en.wikipedia.org/wiki/Progression
Progression may refer to: In mathematics: Arithmetic progression, sequence of numbers such that the difference of any two successive members of the sequence is a constant Geometric progression, sequence of numbers such that the quotient of any two successive members of the sequence is a constant Harmonic progression (mathematics), sequence of numbers such that their reciprocals form an arithmetic progression In music: Chord progression, series of chords played in order Backdoor progression, the cadential chord progression from iv7 to I, or flat-VII7 to I in jazz music theory Omnibus progression, sequence of chords which effectively divides the octave into 4 equal parts Ragtime progression, chord progression typical of ragtime music and parlour music genres Progression, music software for guitarists Progression, Markus Schulz's second artist album, released in 2007 In other fields: Age progression, the process of modifying a photograph of a person to represent the effect of aging on their appearance Cisternal progression, theory of protein transport through the Golgi apparatus inside a cell Color progression, ranges of color whose values transition smoothly through a hue, saturation, luminance, or any combination of the three Horizontal progression, the gradual movement from left to right while writing a line of text in Western handwriting Progressive tax, a tax whose rate increases as the taxable amount increases Semantic progression, evolution of word usage Educational progression, an individual's movement through stages of education and/or training Progress tracking in video games Astrological progression, used in horoscopic astrology to forecast future trends and developments See also Progress (disambiguation)
https://en.wikipedia.org/wiki/Set-theoretic%20definition%20of%20natural%20numbers
In set theory, several ways have been proposed to construct the natural numbers. These include the representation via von Neumann ordinals, commonly employed in axiomatic set theory, and a system based on equinumerosity that was proposed by Gottlob Frege and by Bertrand Russell. Definition as von Neumann ordinals In Zermelo–Fraenkel (ZF) set theory, the natural numbers are defined recursively by letting 0 = {} be the empty set and n + 1 = n ∪ {n} for each n. In this way n = {0, 1, …, n − 1} for each natural number n. This definition has the property that n is a set with n elements. The first few numbers defined this way are: 0 = {}, 1 = {0} = {{}}, 2 = {0, 1} = {{}, {{}}}, 3 = {0, 1, 2} = {{}, {{}}, {{}, {{}}}}. The set N of natural numbers is defined in this system as the smallest set containing 0 and closed under the successor function S defined by S(n) = n ∪ {n}. The structure ⟨N, 0, S⟩ is a model of the Peano axioms. The existence of the set N is equivalent to the axiom of infinity in ZF set theory. The set N and its elements, when constructed this way, are an initial part of the von Neumann ordinals. Ravven and Quine refer to these sets as "counter sets". Frege and Russell Gottlob Frege and Bertrand Russell each proposed defining a natural number n as the collection of all sets with n elements. More formally, a natural number is an equivalence class of finite sets under the equivalence relation of equinumerosity. This definition may appear circular, but it is not, because equinumerosity can be defined in alternate ways, for instance by saying that two sets are equinumerous if they can be put into one-to-one correspondence—this is sometimes known as Hume's principle. This definition works in type theory, and in set theories that grew out of type theory, such as New Foundations and related systems. However, it does not work in the axiomatic set theory ZFC nor in certain related systems, because in such systems the equivalence classes under equinumerosity are proper classes rather than sets. To enable the natural numbers to form a set, equinumerous classes are replaced by special sets, named cardinals. The simplest way to introduce cardinals is to add a primitive notion, Card(), and an axiom of cardinality to ZF set theory (without the axiom of choice). Axiom of cardinality: The sets A and B are equinumerous if and only if Card(A) = Card(B). Definition: the sum of cardinals K and L such that K = Card(A) and L = Card(B), where the sets A and B are disjoint, is Card(A ∪ B). The definition of a finite set is given independently of natural numbers: Definition: A set is finite if and only if any non-empty family of its subsets has a minimal element for the inclusion ordering. Definition: a cardinal n is a natural number if and only if there exists a finite set of which the cardinal is n. 0 = Card(∅) 1 = Card({A}) = Card({∅}) Definition: the successor of a cardinal K is the cardinal K + 1. Theorem: the natural numbers satisfy Peano's axioms. Hatcher William S. Hatcher (1982) derives Peano's axioms from several foundational systems, including ZFC and category theory, and from the sys
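The von Neumann construction is easy to mimic with Python's hashable frozensets; a small illustrative sketch (names mine) that builds the first few numerals via S(n) = n ∪ {n} and checks the stated property that the numeral for n has exactly n elements:

```python
def successor(n: frozenset) -> frozenset:
    """S(n) = n ∪ {n}: the successor of a von Neumann numeral."""
    return n | frozenset([n])

zero = frozenset()            # 0 = {}
numerals = [zero]
for _ in range(4):
    numerals.append(successor(numerals[-1]))

for k, n in enumerate(numerals):
    assert len(n) == k        # the numeral for k is a set with k elements
print("built 0..4; each von Neumann numeral n has exactly n elements")
```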
https://en.wikipedia.org/wiki/Illustration%20of%20the%20central%20limit%20theorem
In probability theory, the central limit theorem (CLT) states that, in many situations, when independent and identically distributed random variables are added, their properly normalized sum tends toward a normal distribution. This article gives two illustrations of this theorem. Both involve the sum of independent and identically-distributed random variables and show how the probability distribution of the sum approaches the normal distribution as the number of terms in the sum increases. The first illustration involves a continuous probability distribution, for which the random variables have a probability density function. The second illustration, for which most of the computation can be done by hand, involves a discrete probability distribution, which is characterized by a probability mass function. Illustration of the continuous case The density of the sum of two independent real-valued random variables equals the convolution of the density functions of the original variables. Thus, the density of the sum of m+n terms of a sequence of independent identically distributed variables equals the convolution of the densities of the sums of m terms and of n terms. In particular, the density of the sum of n+1 terms equals the convolution of the density of the sum of n terms with the original density (the "sum" of 1 term). A probability density function is shown in the first figure below. Then the densities of the sums of two, three, and four independent identically distributed variables, each having the original density, are shown in the following figures. If the original density is a piecewise polynomial, as it is in the example, then so are the sum densities, of increasingly higher degree. Although the original density is far from normal, the density of the sum of just a few variables with that density is much smoother and has some of the qualitative features of the normal density. The convolutions were computed via the discrete Fourier transform. A list of values y = f(x0 + k Δx) was constructed, where f is the original density function, and Δx is approximately equal to 0.002, and k is equal to 0 through 1000. The discrete Fourier transform Y of y was computed. Then the convolution of f with itself is proportional to the inverse discrete Fourier transform of the pointwise product of Y with itself. Original probability density function We start with a probability density function. This function, although discontinuous, is far from the most pathological example that could be created. It is a piecewise polynomial, with pieces of degrees 0 and 1. The mean of this distribution is 0 and its standard deviation is 1. Probability density function of the sum of two terms Next we compute the density of the sum of two independent variables, each having the above density. The density of the sum is the convolution of the above density with itself. The sum of two variables has mean 0. The density shown in the figure at right has been rescaled by √2, so
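The convolution-via-DFT step described above is straightforward to reproduce numerically. A hedged numpy sketch (the grid constants and the uniform test density are my own illustrative choices, not the article's exact piecewise-polynomial density or Δx): zero-pad so the FFT's circular convolution equals the linear one, then take the pointwise product of the transform with itself.

```python
import numpy as np

# Example density on [-sqrt(3), sqrt(3)]: uniform, with mean 0 and standard
# deviation 1 (any sampled density with unit mass works the same way).
dx = 0.002
x = np.arange(-2.0, 2.0, dx)
f = np.where(np.abs(x) <= np.sqrt(3), 1.0 / (2.0 * np.sqrt(3)), 0.0)

# Zero-pad to twice the length so circular convolution == linear convolution,
# then the density of the sum of two terms is ifft(F * F), rescaled by dx.
m = 2 * len(f)
F = np.fft.fft(f, n=m)
conv = np.fft.ifft(F * F).real * dx

print("mass of f:     ", f.sum() * dx)      # ~1
print("mass of f * f: ", conv.sum() * dx)   # ~1, as a density should have
```

Iterating the same pointwise product gives the densities of three- and four-term sums, which visibly approach the normal bell shape.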
https://en.wikipedia.org/wiki/Hausdorff%20distance
In mathematics, the Hausdorff distance, or Hausdorff metric, also called Pompeiu–Hausdorff distance, measures how far two subsets of a metric space are from each other. It turns the set of non-empty compact subsets of a metric space into a metric space in its own right. It is named after Felix Hausdorff and Dimitrie Pompeiu. Informally, two sets are close in the Hausdorff distance if every point of either set is close to some point of the other set. The Hausdorff distance is the longest distance you can be forced to travel by an adversary who chooses a point in one of the two sets, from where you then must travel to the other set. In other words, it is the greatest of all the distances from a point in one set to the closest point in the other set. This distance was first introduced by Hausdorff in his book Grundzüge der Mengenlehre, first published in 1914, although a very close relative appeared in the doctoral thesis of Maurice Fréchet in 1906, in his study of the space of all continuous curves. Definition Let (M, d) be a metric space. For each pair of non-empty subsets X ⊂ M and Y ⊂ M, the Hausdorff distance between X and Y is defined as

$$d_{\mathrm H}(X,Y) = \max\left\{\,\sup_{x\in X} d(x,Y),\ \sup_{y\in Y} d(y,X)\,\right\},$$

where sup represents the supremum operator, inf the infimum operator, and where

$$d(a,B) = \inf_{b\in B} d(a,b)$$

quantifies the distance from a point a ∈ M to the subset B ⊆ M. An equivalent definition is as follows. For each set X ⊆ M let

$$X_\varepsilon = \bigcup_{x\in X}\{z\in M : d(z,x)\le\varepsilon\},$$

which is the set of all points within ε of the set X (sometimes called the ε-fattening of X or a generalized ball of radius ε around X). Then, the Hausdorff distance between X and Y is defined as

$$d_{\mathrm H}(X,Y) = \inf\{\varepsilon\ge 0 : X\subseteq Y_\varepsilon\ \text{and}\ Y\subseteq X_\varepsilon\}.$$

Equivalently,

$$d_{\mathrm H}(X,Y) = \sup_{w\in M}\left|d(w,X) - d(w,Y)\right|,$$

where d(w, X) is the smallest distance from the point w to the set X. Remark It is not true for arbitrary subsets X, Y ⊂ M that d_H(X, Y) = ε implies X ⊆ Y_ε. For instance, consider the metric space of the real numbers with the usual metric induced by the absolute value, d(x, y) = |x − y|. Take X = (0, 1] and Y = [−1, 0). Then d_H(X, Y) = 1. However X ⊄ Y_1 because Y_1 = [−2, 1), but 1 ∈ X. But it is true that X ⊆ \overline{Y_ε} and Y ⊆ \overline{X_ε}; in particular it is true if X, Y are closed. Properties In general, d_H(X, Y) may be infinite. If both X and Y are bounded, then d_H(X, Y) is guaranteed to be finite. d_H(X, Y) = 0 if and only if X and Y have the same closure. For every point x of M and any non-empty sets Y, Z of M: d(x,Y) ≤ d(x,Z) + d_H(Y,Z), where d(x,Y) is the distance between the point x and the closest point in the set Y. |diameter(Y) − diameter(X)| ≤ 2 d_H(X,Y). If the intersection X ∩ Y has a non-empty interior, then there exists a constant r > 0, such that every set X′ whose Hausdorff distance from X is less than r also intersects Y. On the set of all subsets of M, d_H yields an extended pseudometric. On the set F(M) of all non-empty compact subsets of M, d_H is a metric. If M is complete, then so is F(M). If M is compact, then so is F(M). The topology of F(M) depends only on the topology of M, not on the metric d. Motivation The definition of the Hausdorff distance can be derived by a series of natural extensions of the distance function in the underlying metric space M, as follows: Define a distance function between any point x of M and any non-empty set Y of M by:

$$d(x,Y) = \inf_{y\in Y} d(x,y).$$

For example, d(1, {3
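For finite point sets the max–sup–inf definition can be evaluated directly; a minimal Python sketch (function names mine) that computes d_H on the real line and shows why the two directed distances are both needed:

```python
def hausdorff(X, Y):
    """d_H(X, Y) = max( sup_{x in X} d(x, Y), sup_{y in Y} d(y, X) )
    for finite sets of reals, with d(a, B) = min_{b in B} |a - b|."""
    d = lambda a, B: min(abs(a - b) for b in B)
    return max(max(d(x, Y) for x in X),
               max(d(y, X) for y in Y))

X = {0.0}
Y = {0.0, 10.0}
# Every point of X lies in Y (directed distance 0), yet 10.0 is far from X,
# so the symmetric Hausdorff distance is large:
print(hausdorff(X, Y))   # 10.0
```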
https://en.wikipedia.org/wiki/Probability%20amplitude
In quantum mechanics, a probability amplitude is a complex number used for describing the behaviour of systems. The modulus squared of this quantity represents a probability density. Probability amplitudes provide a relationship between the quantum state vector of a system and the results of observations of that system, a link first proposed by Max Born in 1926. Interpretation of values of a wave function as the probability amplitude is a pillar of the Copenhagen interpretation of quantum mechanics. In fact, the properties of the space of wave functions were being used to make physical predictions (such as emissions from atoms being at certain discrete energies) before any physical interpretation of a particular function was offered. Born was awarded half of the 1954 Nobel Prize in Physics for this understanding, and the probability thus calculated is sometimes called the "Born probability". These probabilistic concepts, namely the probability density and quantum measurements, were vigorously contested at the time by the original physicists working on the theory, such as Schrödinger and Einstein. It is the source of the mysterious consequences and philosophical difficulties in the interpretations of quantum mechanics—topics that continue to be debated even today. Overview Physical Neglecting some technical complexities, the problem of quantum measurement is the behaviour of a quantum state, for which the value of the observable to be measured is uncertain. Such a state is thought to be a coherent superposition of the observable's eigenstates, states on which the value of the observable is uniquely defined, for different possible values of the observable. When a measurement of the observable is made, the system (under the Copenhagen interpretation) jumps to one of the eigenstates, returning the eigenvalue belonging to that eigenstate. The system may always be described by a linear combination or superposition of these eigenstates with unequal "weights". Intuitively it is clear that eigenstates with heavier "weights" are more "likely" to be produced. Indeed, which of the above eigenstates the system jumps to is given by a probabilistic law: the probability of the system jumping to the state is proportional to the absolute value of the corresponding numerical weight squared. These numerical weights are called probability amplitudes, and this relationship used to calculate probabilities from given pure quantum states (such as wave functions) is called the Born rule. Clearly, the sum of the probabilities, which equals the sum of the absolute squares of the probability amplitudes, must equal 1. This is the normalization (see below) requirement. If the system is known to be in some eigenstate of the observable (e.g. after an observation of the corresponding eigenvalue) the probability of observing that eigenvalue becomes equal to 1 (certain) for all subsequent measurements (so long as no other important forces act between the measurements). In other words the p
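As a sanity check on the Born rule and the normalization requirement described above, a tiny numpy sketch (the state vector is an illustrative choice of mine): probabilities are the squared moduli of the amplitudes and sum to 1 after normalization.

```python
import numpy as np

# An (unnormalized) state expanded in three eigenstates of some observable.
amplitudes = np.array([1 + 1j, 2 - 1j, 0.5j])

# Normalize so the squared moduli sum to 1 (the normalization requirement).
state = amplitudes / np.linalg.norm(amplitudes)

# Born rule: the probability of outcome k is the squared modulus of the
# corresponding amplitude.
probs = np.abs(state) ** 2
print(probs, probs.sum())   # per-eigenvalue probabilities; total 1.0
```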
https://en.wikipedia.org/wiki/Empirical%20orthogonal%20functions
In statistics and signal processing, the method of empirical orthogonal function (EOF) analysis is a decomposition of a signal or data set in terms of orthogonal basis functions which are determined from the data. The term is also interchangeable with the geographically weighted principal components analysis in geophysics. The i-th basis function is chosen to be orthogonal to the basis functions from the first through i − 1, and to minimize the residual variance. That is, the basis functions are chosen to be different from each other, and to account for as much variance as possible. The method of EOF analysis is similar in spirit to harmonic analysis, but harmonic analysis typically uses predetermined orthogonal functions, for example, sine and cosine functions at fixed frequencies. In some cases the two methods may yield essentially the same results. The basis functions are typically found by computing the eigenvectors of the covariance matrix of the data set. A more advanced technique is to form a kernel out of the data, using a fixed kernel. The basis functions from the eigenvectors of the kernel matrix are thus non-linear in the location of the data (see Mercer's theorem and the kernel trick for more information). See also Blind signal separation Multilinear PCA Multilinear subspace learning Nonlinear dimensionality reduction Orthogonal matrix Signal separation Singular spectrum analysis Transform coding Varimax rotation References and notes Further reading Bjornsson Halldor and Silvia A. Venegas "A manual for EOF and SVD analyses of climate data", McGill University, CCGCR Report No. 97-1, Montréal, Québec, 52pp., 1997. David B. Stephenson and Rasmus E. Benestad. "Environmental statistics for climate researchers". (See: "Empirical Orthogonal Function analysis") Christopher K. Wikle and Noel Cressie. "A dimension reduced approach to space-time Kalman filtering", Biometrika 86:815-829, 1999. Donald W. Denbo and John S. Allen. "Rotary Empirical Orthogonal Function Analysis of Currents near the Oregon Coast", "J. Phys. Oceanogr.", 14, 35-46, 1984. David M. Kaplan "Notes on EOF Analysis" Spatial analysis Statistical signal processing
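A minimal numpy sketch of the covariance-eigenvector route described above (synthetic data and variable names are mine): the EOFs are the orthonormal spatial patterns, obtained here as right singular vectors of the de-meaned data matrix, which are exactly the eigenvectors of its covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
T, S = 200, 10                         # time samples x spatial points
data = rng.normal(size=(T, S))
data[:, 0] += 3 * rng.normal(size=T)   # give one location extra variance

anom = data - data.mean(axis=0)        # remove the time mean at each location

# Rows of Vt are the EOFs: eigenvectors of the covariance matrix
# anom.T @ anom / (T - 1), computed stably via the SVD.
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
eofs = Vt                              # EOF i is eofs[i]
variance_fraction = s**2 / np.sum(s**2)
pcs = U * s                            # principal-component time series

print("leading EOF:        ", np.round(eofs[0], 2))
print("variance explained: ", np.round(variance_fraction[:3], 3))
```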
https://en.wikipedia.org/wiki/Earley
Earley is a town and civil parish in the Borough of Wokingham, Berkshire, England. Along with the neighbouring town of Woodley, the Office for National Statistics places Earley within the Reading/Wokingham Urban Area; for the purposes of local government it falls within the Borough of Wokingham, outside the area of Reading Borough Council. Its name is sometimes spelt Erleigh or Erlegh. The town consists of a number of smaller areas, including Maiden Erlegh and Lower Earley, and lies south and east of the centre of Reading, and northwest of Wokingham. It had a population of 32,036 at the 2011 Census. In 2014, the RG6 postcode area (which is nearly coterminous with the area of the civil parish) was rated one of the most desirable postcode areas to live in England. The main campus of the University of Reading, Whiteknights Park, lies partly in Earley and partly in the borough of Reading. History Evidence of prehistoric man has been found in locations around Earley. For example, a hand axe was found in the railway cutting; flint implements in a garden in Elm Lane; and hand axes in the gardens in Fowler Close and Silverdale Road. Most of these finds are thought to date from the late Paleolithic period, around 35,000 years ago. Traces of flimsy shelters from the Mesolithic were discovered at the site of the old power station at Thames Valley Park in north Earley. Tools from that time have also been found, including a flint blade found in a garden in Silverdale Road. Archaeological evidence for continued human presence during the Bronze Age and Iron Age was also discovered on the site of the Thames Valley Park, and Roman remains were found on a building site off Meadow Road. Earley is mentioned in the Domesday Book as "Herlei", with two main manors: Erleigh St Bartholomew, later known as Erleigh Court; and Erleigh St Nicolas, later Erleigh White Knights. In Domesday Herlei is said to be "held by Osbern Giffard from the King, previously Dunn held it in alod of King Edward. [It was] then [assessed] at 5 hides; now at 2 hides. There is land for 7 ploughs. In demesne are 1 ½ ploughs; and 4 Villeins and 7 bordars with 2 ½ ploughs. There is 1 slave, and 2 fisheries rendering 68d, 20 acres of meadow [and] woodland for 30 pigs. The value was 100 shillings, later 60 shillings, now £4" The Erleghs, a family of knightly rank who took their name from the manors, held the manors of St Bartholemew and St Nicolas in the latter part of the 12th century through the 13th century and part of the 14th century. John de Erlegh (or John of Earley) was known as the White Knight, hence the renaming of the manor of Erleigh St Nicolas to Whiteknights. The Whiteknights estate was later owned by the Englefields, from 1606 to 1798, and then by the Marquis of Blandford, later the 5th Duke of Marlborough. The manor of Maiden Erleigh was formed out of the manor of Erlegh, as a gift of land by John de Erlegh to Robert de Erlegh in 1368. Later it was transferred to Charles
https://en.wikipedia.org/wiki/Cyclotomic%20identity
In mathematics, the cyclotomic identity states that

$$\frac{1}{1-\alpha z} = \prod_{j=1}^{\infty}\left(\frac{1}{1-z^j}\right)^{M(\alpha,j)},$$

where M is Moreau's necklace-counting function,

$$M(\alpha,n) = \frac{1}{n}\sum_{d\,\mid\,n}\mu\!\left(\frac{n}{d}\right)\alpha^{d},$$

and μ is the classic Möbius function of number theory. The name comes from the denominator, 1 − z^j, which is the product of cyclotomic polynomials. The left hand side of the cyclotomic identity is the generating function for the free associative algebra on α generators, and the right hand side is the generating function for the universal enveloping algebra of the free Lie algebra on α generators. The cyclotomic identity witnesses the fact that these two algebras are isomorphic. There is also a symmetric generalization of the cyclotomic identity found by Strehl:

$$\prod_{j=1}^{\infty}\left(\frac{1}{1-\alpha z^j}\right)^{M(\beta,j)} = \prod_{j=1}^{\infty}\left(\frac{1}{1-\beta z^j}\right)^{M(\alpha,j)}.$$

References Mathematical identities Infinite products
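The identity can be checked order by order with a short Python sketch (all helper names are mine): expand the right-hand product as a power series in z and compare with the geometric series 1/(1 − αz) = Σ αⁿ zⁿ.

```python
from math import comb

def mobius(n):
    """Classic Möbius function, by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def necklace_count(alpha, n):
    """Moreau's M(alpha, n) = (1/n) sum_{d | n} mu(n/d) alpha^d."""
    return sum(mobius(n // d) * alpha**d
               for d in range(1, n + 1) if n % d == 0) // n

def product_series(alpha, N):
    """Coefficients up to z^N of prod_{j=1}^{N} (1 - z^j)^(-M(alpha, j))."""
    coeffs = [1] + [0] * N
    for j in range(1, N + 1):
        m = necklace_count(alpha, j)
        # (1 - z^j)^(-m) = sum_k C(m + k - 1, k) z^(j k)
        factor = [comb(m + k - 1, k) if m > 0 else (1 if k == 0 else 0)
                  for k in range(N // j + 1)]
        new = [0] * (N + 1)
        for k, c in enumerate(factor):
            for i in range(N + 1 - j * k):
                new[i + j * k] += c * coeffs[i]
        coeffs = new
    return coeffs

alpha, N = 2, 8
assert product_series(alpha, N) == [alpha**n for n in range(N + 1)]
print("cyclotomic identity verified up to z^8 for alpha =", alpha)
```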
https://en.wikipedia.org/wiki/Woodbury%20matrix%20identity
In mathematics (specifically linear algebra), the Woodbury matrix identity, named after Max A. Woodbury, says that the inverse of a rank-k correction of some matrix can be computed by doing a rank-k correction to the inverse of the original matrix. Alternative names for this formula are the matrix inversion lemma, Sherman–Morrison–Woodbury formula or just Woodbury formula. However, the identity appeared in several papers before the Woodbury report. The Woodbury matrix identity is

$$\left(A + UCV\right)^{-1} = A^{-1} - A^{-1}U\left(C^{-1} + VA^{-1}U\right)^{-1}VA^{-1},$$

where A, U, C and V are conformable matrices: A is n×n, C is k×k, U is n×k, and V is k×n. This can be derived using blockwise matrix inversion. While the identity is primarily used on matrices, it holds in a general ring or in an Ab-category. The Woodbury matrix identity allows cheap computation of inverses and solutions to linear equations. However, little is known about the numerical stability of the formula. There are no published results concerning its error bounds. Anecdotal evidence suggests that it may diverge even for seemingly benign examples (when both the original and modified matrices are well-conditioned). Discussion To prove this result, we will start by proving a simpler one. Replacing A and C with the identity matrix I, we obtain another identity which is a bit simpler:

$$\left(I + UV\right)^{-1} = I - U\left(I + VU\right)^{-1}V.$$

To recover the original equation from this reduced identity, replace U by A^{-1}U and V by CV. This identity itself can be viewed as the combination of two simpler identities. We obtain the first identity from

$$I = (I + P)^{-1}(I + P) = (I + P)^{-1} + (I + P)^{-1}P,$$

thus,

$$(I + P)^{-1} = I - (I + P)^{-1}P,$$

and similarly

$$(I + P)^{-1} = I - P(I + P)^{-1}.$$

The second identity is the so-called push-through identity

$$(I + UV)^{-1}U = U(I + VU)^{-1}$$

that we obtain from

$$U(I + VU) = (I + UV)U$$

after multiplying by (I + VU)^{-1} on the right and by (I + UV)^{-1} on the left. Putting all together,

$$(I + UV)^{-1} = I - (I + UV)^{-1}UV = I - U(I + VU)^{-1}V,$$

where the first and second equality come from the first and second identity, respectively. Special cases When U and V are vectors, the identity reduces to the Sherman–Morrison formula. In the scalar case, the reduced version is simply

$$\frac{1}{1 + uv} = 1 - \frac{uv}{1 + uv}.$$

Inverse of a sum If n = k and U = V = I_n is the identity matrix, then

$$(A + B)^{-1} = A^{-1} - A^{-1}\left(B^{-1} + A^{-1}\right)^{-1}A^{-1}.$$

Continuing with the merging of the terms of the far right-hand side of the above equation results in Hua's identity

$$(A + B)^{-1} = A^{-1} - \left(A + AB^{-1}A\right)^{-1}.$$

Another useful form of the same identity is

$$(A - B)^{-1} = A^{-1} + A^{-1}B(A - B)^{-1},$$

which, unlike those above, is valid even if B is singular, and has a recursive structure that yields

$$(A - B)^{-1} = \sum_{k=0}^{\infty}\left(A^{-1}B\right)^{k}A^{-1}$$

if the spectral radius of A^{-1}B is less than one. That is, if the above sum converges then it is equal to (A − B)^{-1}. This form can be used in perturbative expansions where B is a perturbation of A. Variations Binomial inverse theorem If A, B, U, V are matrices of sizes n×n, k×k, n×k, k×n, respectively, then

$$\left(A + UBV\right)^{-1} = A^{-1} - A^{-1}UB\left(B + BVA^{-1}UB\right)^{-1}BVA^{-1}$$

provided A and B + BVA^{-1}UB are nonsingular. Nonsingularity of the latter requires that B^{-1} exist since it equals B(I + VA^{-1}UB) and the rank of the latter cannot exceed the rank of B. Since B is invertible, the two B terms flanking the parenthetical quantity inverse in the right-hand side can be replaced with (B^{-1})^{-1}, which results in the original Woodbury identity. A variation for when B is singular and possibly even non-square:

$$\left(A + UBV\right)^{-1} = A^{-1} - A^{-1}U\left(I + BVA^{-1}U\right)^{-1}BVA^{-1}.$$

Formulas also exist for certain cases in which A
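A quick numerical check of the identity in numpy (random matrices; the diagonal shifts are my own device to keep A and C comfortably invertible):

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 5, 2
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned n x n
C = rng.normal(size=(k, k)) + k * np.eye(k)   # well-conditioned k x k
U = rng.normal(size=(n, k))
V = rng.normal(size=(k, n))

Ainv = np.linalg.inv(A)
# Woodbury: (A + U C V)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}
woodbury = Ainv - Ainv @ U @ np.linalg.inv(np.linalg.inv(C) + V @ Ainv @ U) @ V @ Ainv
direct = np.linalg.inv(A + U @ C @ V)

print(np.allclose(direct, woodbury))   # True, up to floating-point error
```

The practical payoff is that when A⁻¹ is already known, the update costs only a k×k inversion instead of a fresh n×n one.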
https://en.wikipedia.org/wiki/Truncated%20mean
A truncated mean or trimmed mean is a statistical measure of central tendency, much like the mean and median. It involves the calculation of the mean after discarding given parts of a probability distribution or sample at the high and low end, and typically discarding an equal amount of both. This number of points to be discarded is usually given as a percentage of the total number of points, but may also be given as a fixed number of points. For most statistical applications, 5 to 25 percent of the ends are discarded. For example, given a set of 8 points, trimming by 12.5% discards the smallest and largest values in the sample and computes the mean of the remaining 6 points. The 25% trimmed mean (when the lowest 25% and the highest 25% are discarded) is known as the interquartile mean. The median can be regarded as a fully truncated mean and is most robust. As with other trimmed estimators, the main advantage of the trimmed mean is robustness and higher efficiency for mixed distributions and heavy-tailed distributions (like the Cauchy distribution), at the cost of lower efficiency for some other less heavily tailed distributions (such as the normal distribution). For intermediate distributions the differences between the efficiency of the mean and the median are not very big, e.g. for the Student's t distribution with 2 degrees of freedom the variances for mean and median are nearly equal. Terminology In some regions of Central Europe it is also known as a Windsor mean, but this name should not be confused with the Winsorized mean: in the latter, the observations that the trimmed mean would discard are instead replaced by the largest/smallest of the remaining values. Discarding only the maximum and minimum is known as the modified mean, particularly in management statistics. This is also known as the olympic average (for example in US agriculture, like the Average Crop Revenue Election), due to its use in Olympic events, such as the ISU Judging System in figure skating, to make the score robust to a single outlier judge. Interpolation When the percentage of points to discard does not yield a whole number, the trimmed mean may be defined by interpolation, generally linear interpolation, between the nearest whole numbers. For example, if you need to calculate the 15% trimmed mean of a sample containing 10 entries, strictly this would mean discarding 1 point from each end (equivalent to the 10% trimmed mean). If interpolating, one would instead compute the 10% trimmed mean (discarding 1 point from each end) and the 20% trimmed mean (discarding 2 points from each end), and then interpolating, in this case averaging these two values. Similarly, if interpolating the 12% trimmed mean, one would take the weighted average: weight the 10% trimmed mean by 0.8 and the 20% trimmed mean by 0.2. Advantages The truncated mean is a useful estimator because it is less sensitive to outliers than the mean but will still give a reasonable estimate
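The 8-point example and the interpolation rule above are easy to reproduce with scipy (the data values are illustrative choices of mine):

```python
import numpy as np
from scipy import stats

data = np.array([3, 1, 4, 1, 5, 9, 2, 6])   # 8 illustrative points

# 12.5% trimmed mean: drop 1 point from each end (the min and the max),
# then average the remaining 6.
print(stats.trim_mean(data, 0.125))

# Interpolated 15% trimmed mean of a 10-point sample: average the 10% and
# 20% trimmed means, per the linear-interpolation rule described above.
sample = np.arange(10, dtype=float)
t10 = stats.trim_mean(sample, 0.10)          # 1 point cut from each end
t20 = stats.trim_mean(sample, 0.20)          # 2 points cut from each end
print(0.5 * (t10 + t20))
```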
https://en.wikipedia.org/wiki/Motive%20%28algebraic%20geometry%29
In algebraic geometry, motives (or sometimes motifs, following French usage) is a theory proposed by Alexander Grothendieck in the 1960s to unify the vast array of similarly behaved cohomology theories such as singular cohomology, de Rham cohomology, étale cohomology, and crystalline cohomology. Philosophically, a "motif" is the "cohomology essence" of a variety. In the formulation of Grothendieck for smooth projective varieties, a motive is a triple (X, p, m), where X is a smooth projective variety, p : X ⊢ X is an idempotent correspondence, and m an integer; however, such a triple contains almost no information outside the context of Grothendieck's category of pure motives, where a morphism from (X, p, m) to (Y, q, n) is given by a correspondence of degree n − m. A more object-focused approach is taken by Pierre Deligne in Le Groupe Fondamental de la Droite Projective Moins Trois Points. In that article, a motive is a "system of realisations" – that is, a tuple (M_B, M_dR, M_{𝔸_f}, M_{cris,p}) consisting of modules over the rings ℚ, ℚ, 𝔸_f, ℚ_p respectively, various comparison isomorphisms between the obvious base changes of these modules, filtrations W and F, a Gal(ℚ̄/ℚ)-action on M_{𝔸_f} and a "Frobenius" automorphism of M_{cris,p}. This data is modeled on the cohomologies of a smooth projective ℚ-variety and the structures and compatibilities they admit, and gives an idea about what kind of information is contained in a motive. Introduction The theory of motives was originally conjectured as an attempt to unify a rapidly multiplying array of cohomology theories, including Betti cohomology, de Rham cohomology, l-adic cohomology, and crystalline cohomology. The general hope is that equations like [projective line] = [line] + [point] [projective plane] = [plane] + [line] + [point] can be put on increasingly solid mathematical footing with a deep meaning. Of course, the above equations are already known to be true in many senses, such as in the sense of CW-complex where "+" corresponds to attaching cells, and in the sense of various cohomology theories, where "+" corresponds to the direct sum. From another viewpoint, motives continue the sequence of generalizations from rational functions on varieties to divisors on varieties to Chow groups of varieties. The generalization happens in more than one direction, since motives can be considered with respect to more types of equivalence than rational equivalence. The admissible equivalences are given by the definition of an adequate equivalence relation. Definition of pure motives The construction of the category of pure motives often proceeds in three steps. Below we describe the case of Chow motives Chow(k), where k is any field. First step: category of (degree 0) correspondences, Corr(k) The objects of Corr(k) are simply smooth projective varieties over k. The morphisms are correspondences. They generalize morphisms of varieties X → Y, which can be associated with their graphs in X × Y, to fixed dimensional Chow cycles on X × Y. It will be useful to describe correspondences of arbitrary degree, although morphisms in Corr(k) are correspondences of degree 0.
https://en.wikipedia.org/wiki/Similarity%20transformation
Similarity transformation may refer to: Similarity (geometry), for shape-preserving transformations Matrix similarity, for matrix transformations of the form A ↦ P⁻¹AP See also Similarity (disambiguation) Transformation (disambiguation) Affine transformation
https://en.wikipedia.org/wiki/Necklace%20polynomial
In combinatorial mathematics, the necklace polynomial, or Moreau's necklace-counting function, introduced by C. Moreau (1872), counts the number of distinct necklaces of n colored beads chosen out of α available colors. The necklaces are assumed to be aperiodic (not consisting of repeated subsequences), and counted up to rotation (rotating the beads around the necklace counts as the same necklace), but without flipping over (reversing the order of the beads counts as a different necklace). This counting function also describes, among other things, the dimensions in a free Lie algebra and the number of irreducible polynomials over a finite field. Definition The necklace polynomials are a family of polynomials M(α, n) in the variable α such that

$$\alpha^{n} = \sum_{d\,\mid\,n} d\,M(\alpha, d).$$

By Möbius inversion they are given by

$$M(\alpha, n) = \frac{1}{n}\sum_{d\,\mid\,n}\mu\!\left(\frac{n}{d}\right)\alpha^{d},$$

where μ is the classic Möbius function. A closely related family, called the general necklace polynomial or general necklace-counting function, is:

$$N(\alpha, n) = \sum_{d\,\mid\,n} M(\alpha, d) = \frac{1}{n}\sum_{d\,\mid\,n}\varphi\!\left(\frac{n}{d}\right)\alpha^{d},$$

where φ is Euler's totient function. Applications The necklace polynomials M(α, n) appear as: The number of aperiodic necklaces (or equivalently Lyndon words) which can be made by arranging n colored beads having α available colors. Two such necklaces are considered equal if they are related by a rotation (but not a reflection). Aperiodic refers to necklaces without rotational symmetry, having n distinct rotations. The polynomials N(α, n) give the number of necklaces including the periodic ones: this is easily computed using Pólya theory. The dimension of the degree n piece of the free Lie algebra on α generators ("Witt's formula"). Here N(α, n) should be the dimension of the degree n piece of the corresponding free Jordan algebra. The number of distinct words of length n in a Hall set. Note that the Hall set provides an explicit basis for a free Lie algebra; thus, this is the generalized setting for the above. The number of monic irreducible polynomials of degree n over a finite field with α elements (when α is a prime power). Here N(α, n) is the number of polynomials which are primary (a power of an irreducible). The exponent in the cyclotomic identity. Although these various types of objects are all counted by the same polynomial, their precise relationships remain mysterious or unknown. For example, there is no canonical bijection between the irreducible polynomials and the Lyndon words. However, there is a non-canonical bijection that can be constructed as follows. For any degree n monic irreducible polynomial over a field F with α elements, its roots lie in a Galois extension field L with αⁿ elements. One may choose an element x ∈ L such that {x, σx, σ²x, …, σⁿ⁻¹x} is an F-basis for L (a normal basis), where σ is the Frobenius automorphism σ(y) = y^α. Then the bijection can be defined by taking a necklace, viewed as an equivalence class of functions f : ℤ/nℤ → F, to the irreducible polynomial

$$\phi(T) = \prod_{i\in\mathbb{Z}/n\mathbb{Z}}\bigl(T - \sigma^{i}(y)\bigr), \qquad y = \sum_{j\in\mathbb{Z}/n\mathbb{Z}} f(j)\,\sigma^{j}(x).$$

Different cyclic rearrangements of f, i.e. different representatives of the same equivalence class, yield cyclic rearrangements of the factors of φ(T), so this correspondence is well-defined. Relations between
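The Möbius-inversion and totient formulas are short to implement; this Python sketch (function names mine) tabulates both counts for binary necklaces, so the aperiodic and total counts can be compared directly:

```python
from math import gcd

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def totient(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def aperiodic_necklaces(alpha, n):
    """M(alpha, n) = (1/n) sum_{d|n} mu(n/d) alpha^d."""
    return sum(mobius(n // d) * alpha**d
               for d in range(1, n + 1) if n % d == 0) // n

def all_necklaces(alpha, n):
    """N(alpha, n) = (1/n) sum_{d|n} phi(n/d) alpha^d, periodic ones included."""
    return sum(totient(n // d) * alpha**d
               for d in range(1, n + 1) if n % d == 0) // n

for n in range(1, 7):
    print(n, aperiodic_necklaces(2, n), all_necklaces(2, n))
# For binary beads and n = 3: 2 aperiodic necklaces (001 and 011)
# out of 4 total (000, 001, 011, 111).
```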
https://en.wikipedia.org/wiki/Integral%20transform
In mathematics, an integral transform is a type of transform that maps a function from its original function space into another function space via integration, where some of the properties of the original function might be more easily characterized and manipulated than in the original function space. The transformed function can generally be mapped back to the original function space using the inverse transform. General form An integral transform is any transform T of the following form:

$$(Tf)(u) = \int_{t_1}^{t_2} f(t)\,K(t, u)\,dt.$$

The input of this transform is a function f, and the output is another function Tf. An integral transform is a particular kind of mathematical operator. There are numerous useful integral transforms. Each is specified by a choice of the function K of two variables, the kernel function, integral kernel or nucleus of the transform. Some kernels have an associated inverse kernel K⁻¹(u, t) which (roughly speaking) yields an inverse transform:

$$f(t) = \int_{u_1}^{u_2} (Tf)(u)\,K^{-1}(u, t)\,du.$$

A symmetric kernel is one that is unchanged when the two variables are permuted; it is a kernel function K such that K(t, u) = K(u, t). In the theory of integral equations, symmetric kernels correspond to self-adjoint operators. Motivation There are many classes of problems that are difficult to solve—or at least quite unwieldy algebraically—in their original representations. An integral transform "maps" an equation from its original "domain" into another domain, in which manipulating and solving the equation may be much easier than in the original domain. The solution can then be mapped back to the original domain with the inverse of the integral transform. There are many applications of probability that rely on integral transforms, such as "pricing kernel" or stochastic discount factor, or the smoothing of data recovered from robust statistics; see kernel (statistics). History The precursor of the transforms were the Fourier series to express functions in finite intervals. Later the Fourier transform was developed to remove the requirement of finite intervals. Using the Fourier series, just about any practical function of time (the voltage across the terminals of an electronic device for example) can be represented as a sum of sines and cosines, each suitably scaled (multiplied by a constant factor), shifted (advanced or retarded in time) and "squeezed" or "stretched" (increasing or decreasing the frequency). The sines and cosines in the Fourier series are an example of an orthonormal basis. Usage example As an example of an application of integral transforms, consider the Laplace transform. This is a technique that maps differential or integro-differential equations in the "time" domain into polynomial equations in what is termed the "complex frequency" domain. (Complex frequency is similar to actual, physical frequency but rather more general. Specifically, the imaginary component ω of the complex frequency s = −σ + iω corresponds to the usual concept of frequency, viz., the rate at which a sinusoid cycles, whereas the real component σ of the
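As a concrete instance of the kernel form above, here is the Laplace transform (kernel K(t, s) = e^{−st} on [0, ∞)) of a damped exponential, computed symbolically with sympy (a sketch; sympy's laplace_transform also reports the half-plane of convergence):

```python
import sympy as sym

t, s, a = sym.symbols("t s a", positive=True)

# (Tf)(s) = integral_0^oo f(t) exp(-s t) dt, here with f(t) = exp(-a t).
F, half_plane, cond = sym.laplace_transform(sym.exp(-a * t), t, s)
print(F)   # 1/(a + s), the familiar table entry
```

This is exactly the payoff described in the text: differentiation in t becomes multiplication by s, turning differential equations into algebra in the transformed domain.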
https://en.wikipedia.org/wiki/Singular%20homology
In algebraic topology, singular homology refers to the study of a certain set of algebraic invariants of a topological space X, the so-called homology groups H_n(X). Intuitively, singular homology counts, for each dimension n, the n-dimensional holes of a space. Singular homology is a particular example of a homology theory, which has now grown to be a rather broad collection of theories. Of the various theories, it is perhaps one of the simpler ones to understand, being built on fairly concrete constructions (see also the related theory simplicial homology). In brief, singular homology is constructed by taking maps of the standard n-simplex to a topological space, and composing them into formal sums, called singular chains. The boundary operation – mapping each n-dimensional simplex to its (n−1)-dimensional boundary – induces the singular chain complex. The singular homology is then the homology of the chain complex. The resulting homology groups are the same for all homotopy equivalent spaces, which is the reason for their study. These constructions can be applied to all topological spaces, and so singular homology is expressible as a functor from the category of topological spaces to the category of graded abelian groups. Singular simplices A singular n-simplex in a topological space X is a continuous function (also called a map) from the standard n-simplex to X, written σ: Δⁿ → X. This map need not be injective, and there can be non-equivalent singular simplices with the same image in X. The boundary of σ, denoted ∂σ, is defined to be the formal sum of the singular (n − 1)-simplices represented by the restriction of σ to the faces of the standard n-simplex, with an alternating sign to take orientation into account. (A formal sum is an element of the free abelian group on the simplices. The basis for the group is the infinite set of all possible singular simplices. The group operation is "addition" and the sum of simplex a with simplex b is usually simply designated a + b, but a + a = 2a and so on. Every simplex a has a negative −a.) Thus, if we designate σ by its vertices, corresponding to the vertices of the standard n-simplex (which of course does not fully specify the singular simplex produced by σ), then ∂σ is a formal sum of the faces of the simplex image designated in a specific way. (That is, a particular face has to be the restriction of σ to a face of the standard n-simplex, which depends on the order that its vertices are listed.) Thus, for example, the boundary of a 1-simplex σ going from a point p₀ to a point p₁ is the formal sum (or "formal difference") [p₁] − [p₀]. Singular chain complex The usual construction of singular homology proceeds by defining formal sums of simplices, which may be understood to be elements of a free abelian group, and then showing that we can define a certain group, the homology group of the topological space, involving the boundary operator. Consider first the set of all possible singular n-simplices on a topological space X. This set may be used as the
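Singular chains on a general space cannot be enumerated by machine, but the boundary matrices of a concrete simplicial model (our own toy example of a solid triangle) illustrate the key identity that the boundary of a boundary is zero, and how homology ranks fall out of the matrices:

```python
import numpy as np

# Solid triangle on vertices 0, 1, 2: one 2-simplex [0,1,2]; edges [0,1], [0,2], [1,2].
# Boundary of [0,1,2] = [1,2] - [0,2] + [0,1] (alternating signs over omitted vertices).
d2 = np.array([[ 1],    # coefficient of edge [0,1]
               [-1],    # coefficient of edge [0,2]
               [ 1]])   # coefficient of edge [1,2]
# Boundary of an edge [a,b] = [b] - [a]; rows are vertices 0, 1, 2.
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])

assert not (d1 @ d2).any()  # the composite of two boundary maps vanishes

# Betti numbers from kernel/image dimensions of the boundary maps:
b0 = 3 - np.linalg.matrix_rank(d1)                                 # components: 1
b1 = (3 - np.linalg.matrix_rank(d1)) - np.linalg.matrix_rank(d2)   # independent loops: 0
print(b0, b1)
```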
https://en.wikipedia.org/wiki/220%20%28number%29
220 (two hundred [and] twenty) is the natural number following 219 and preceding 221. In mathematics It is a composite number, with its proper divisors being 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110, making it an amicable number with 284. Every number up to 220 may be expressed as a sum of distinct divisors of 220, making 220 a practical number. It is the sum of four consecutive primes (47 + 53 + 59 + 61). It is the smallest even number with the property that when represented as a sum of two prime numbers (per Goldbach's conjecture) both of the primes must be greater than or equal to 23. There are exactly 220 different ways of partitioning 64 = 8² into a sum of square numbers. It is a tetrahedral number, the sum of the first ten triangular numbers, and a dodecahedral number. If all of the diagonals of a regular decagon are drawn, the resulting figure will have exactly 220 regions. It is the sum of the sums of the divisors of the first 16 positive integers. Notes References Wells, D. (1987). The Penguin Dictionary of Curious and Interesting Numbers (pp. 145–147). London: Penguin Group. Integers
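A short Python check of two of these claims, the amicable pair with 284 and the practical-number property (the subset-sum sweep is our own brute-force verification):

```python
def aliquot(n: int) -> int:
    """Sum of the proper divisors of n."""
    return sum(d for d in range(1, n) if n % d == 0)

print(aliquot(220), aliquot(284))  # 284 220 -> an amicable pair

# Practical number: every m <= 220 is a sum of distinct divisors of 220.
divs = [d for d in range(1, 221) if 220 % d == 0]
reachable = {0}
for d in divs:
    reachable |= {r + d for r in reachable}
print(all(m in reachable for m in range(1, 221)))  # True
```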
https://en.wikipedia.org/wiki/Covariant%20derivative
In mathematics, the covariant derivative is a way of specifying a derivative along tangent vectors of a manifold. Alternatively, the covariant derivative is a way of introducing and working with a connection on a manifold by means of a differential operator, to be contrasted with the approach given by a principal connection on the frame bundle – see affine connection. In the special case of a manifold isometrically embedded into a higher-dimensional Euclidean space, the covariant derivative can be viewed as the orthogonal projection of the Euclidean directional derivative onto the manifold's tangent space. In this case the Euclidean derivative is broken into two parts, the extrinsic normal component (dependent on the embedding) and the intrinsic covariant derivative component. The name is motivated by the importance of changes of coordinate in physics: the covariant derivative transforms covariantly under a general coordinate transformation, that is, linearly via the Jacobian matrix of the transformation. This article presents an introduction to the covariant derivative of a vector field with respect to a vector field, both in a coordinate-free language and using a local coordinate system and the traditional index notation. The covariant derivative of a tensor field is presented as an extension of the same concept. The covariant derivative generalizes straightforwardly to a notion of differentiation associated to a connection on a vector bundle, also known as a Koszul connection. History Historically, at the turn of the 20th century, the covariant derivative was introduced by Gregorio Ricci-Curbastro and Tullio Levi-Civita in the theory of Riemannian and pseudo-Riemannian geometry. Ricci and Levi-Civita (following ideas of Elwin Bruno Christoffel) observed that the Christoffel symbols used to define the curvature could also provide a notion of differentiation which generalized the classical directional derivative of vector fields on a manifold. This new derivative – the Levi-Civita connection – was covariant in the sense that it satisfied Riemann's requirement that objects in geometry should be independent of their description in a particular coordinate system. It was soon noted by other mathematicians, prominent among these being Hermann Weyl, Jan Arnoldus Schouten, and Élie Cartan, that a covariant derivative could be defined abstractly without the presence of a metric. The crucial feature was not a particular dependence on the metric, but that the Christoffel symbols satisfied a certain precise second-order transformation law. This transformation law could serve as a starting point for defining the derivative in a covariant manner. Thus the theory of covariant differentiation forked off from the strictly Riemannian context to include a wider range of possible geometries. In the 1940s, practitioners of differential geometry began introducing other notions of covariant differentiation in general vector bundles which were, in contrast to the
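In local coordinates the covariant derivative is governed by the Christoffel symbols Γ^k_{ij} of the connection. As a rough illustration (a sketch of the standard Levi-Civita formula, using sympy and the round metric on the unit 2-sphere as our example), one can compute them symbolically:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
# Round metric on the unit 2-sphere: ds^2 = dtheta^2 + sin(theta)^2 dphi^2
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
ginv = g.inv()

def christoffel(k, i, j):
    """Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij})."""
    return sp.simplify(sum(ginv[k, l] * (sp.diff(g[j, l], x[i])
                                         + sp.diff(g[i, l], x[j])
                                         - sp.diff(g[i, j], x[l]))
                           for l in range(2)) / 2)

for k in range(2):
    for i in range(2):
        for j in range(2):
            val = christoffel(k, i, j)
            if val != 0:
                print(f"Gamma^{k}_{{{i}{j}}} =", val)
# Nonzero symbols: Gamma^0_{11} = -sin(theta)*cos(theta), Gamma^1_{01} = Gamma^1_{10} = cos(theta)/sin(theta)
```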
https://en.wikipedia.org/wiki/Deviance%20%28statistics%29
In statistics, deviance is a goodness-of-fit statistic for a statistical model; it is often used for statistical hypothesis testing. It is a generalization of the idea of using the sum of squares of residuals (SSR) in ordinary least squares to cases where model-fitting is achieved by maximum likelihood. It plays an important role in exponential dispersion models and generalized linear models. Deviance can be related to Kullback–Leibler divergence. Definition The unit deviance d(y, μ) is a bivariate function that satisfies the following conditions: d(y, y) = 0 for all y, and d(y, μ) > 0 for all y ≠ μ. The total deviance D(y, μ̂) of a model with predictions μ̂ of the observations y is the sum of its unit deviances: D(y, μ̂) = Σᵢ d(yᵢ, μ̂ᵢ). The (total) deviance for a model M0 with estimates θ̂₀, based on a dataset y, may be constructed by its likelihood as: D(y, θ̂₀) = 2(log p(y | θ̂ₛ) − log p(y | θ̂₀)). Here θ̂₀ denotes the fitted values of the parameters in the model M0, while θ̂ₛ denotes the fitted parameters for the saturated model: both sets of fitted values are implicitly functions of the observations y. Here, the saturated model is a model with a parameter for every observation so that the data are fitted exactly. This expression is simply 2 times the log-likelihood ratio of the full model compared to the reduced model. The deviance is used to compare two models – in particular in the case of generalized linear models (GLM) where it has a similar role to residual sum of squares from ANOVA in linear models (RSS). Suppose in the framework of the GLM, we have two nested models, M1 and M2. In particular, suppose that M1 contains the parameters in M2, and k additional parameters. Then, under the null hypothesis that M2 is the true model, the difference between the deviances for the two models follows, based on Wilks' theorem, an approximate chi-squared distribution with k degrees of freedom. This can be used for hypothesis testing on the deviance. Some usage of the term "deviance" can be confusing. According to Collett: "the quantity is sometimes referred to as a deviance. This is [...] inappropriate, since unlike the deviance used in the context of generalized linear modelling, does not measure deviation from a model that is a perfect fit to the data." However, since the principal use is in the form of the difference of the deviances of two models, this confusion in definition is unimportant. Examples The unit deviance for the Poisson distribution is d(y, μ) = 2(y log(y/μ) − y + μ), and the unit deviance for the normal distribution is given by d(y, μ) = (y − μ)². See also Akaike information criterion Deviance information criterion Hosmer–Lemeshow test, a quality of fit statistic that can be used for binary data Pearson's chi-squared test, an alternative quality of fit statistic for generalized linear models for count data Peirce's criterion Notes References External links Generalized Linear Models - Edward F. Connor Lecture notes on Deviance Statistical hypothesis testing Statistical deviation and dispersion
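The Poisson unit deviance sums directly into a total deviance against the saturated model (which fits each observation exactly and has deviance 0). A minimal sketch, with an intercept-only model as the illustrative fit:

```python
import numpy as np
from scipy.special import xlogy  # xlogy(0, .) = 0, handling y = 0 gracefully

def poisson_deviance(y, mu):
    """Total Poisson deviance: D = 2 * sum(y*log(y/mu) - (y - mu))."""
    y, mu = np.asarray(y, float), np.asarray(mu, float)
    return 2.0 * np.sum(xlogy(y, y / mu) - (y - mu))

y = np.array([2, 0, 5, 3, 1])
mu_model = np.full(len(y), y.mean())   # intercept-only fit: every mu_i = ybar
print(poisson_deviance(y, mu_model))   # deviance relative to the saturated model
```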
https://en.wikipedia.org/wiki/List%20of%20English%20districts%20by%20population
This is a list of the 314 districts of England ordered by population, according to estimated figures from the Office for National Statistics. The list consists of 188 non-metropolitan districts, 32 London boroughs, 36 metropolitan boroughs, 66 unitary authorities, and three sui generis authorities (the City of London, the Isle of Wight and the Isles of Scilly). North Northamptonshire, West Northamptonshire, Somerset, Cumberland, Westmorland and Furness, and North Yorkshire are new unitary authorities that have not been updated yet. See also List of two-tier counties of England by population List of ceremonial counties of England by population List of English districts by area List of English districts and their ethnic composition List of English districts by population density List of districts in south east England by population List of towns and cities in England by population References Demographics of England Districts of England Districts by population Local government in England English districts
https://en.wikipedia.org/wiki/Mann%E2%80%93Whitney%20U%20test
In statistics, the Mann–Whitney U test (also called the Mann–Whitney–Wilcoxon (MWW/MWU), Wilcoxon rank-sum test, or Wilcoxon–Mann–Whitney test) is a nonparametric test of the null hypothesis that, for randomly selected values X and Y from two populations, the probability of X being greater than Y is equal to the probability of Y being greater than X. Nonparametric tests used on two dependent samples are the sign test and the Wilcoxon signed-rank test. Assumptions and formal statement of hypotheses Although Mann and Whitney developed the Mann–Whitney U test under the assumption of continuous responses with the alternative hypothesis being that one distribution is stochastically greater than the other, there are many other ways to formulate the null and alternative hypotheses such that the Mann–Whitney U test will give a valid test. A very general formulation is to assume that: All the observations from both groups are independent of each other, The responses are at least ordinal (i.e., one can at least say, of any two observations, which is the greater), Under the null hypothesis H0, the distributions of both populations are identical. The alternative hypothesis H1 is that the distributions are not identical. Under the general formulation, the test is only consistent when the following occurs under H1: The probability of an observation from population X exceeding an observation from population Y is different (larger, or smaller) than the probability of an observation from Y exceeding an observation from X; i.e., or . Under more strict assumptions than the general formulation above, e.g., if the responses are assumed to be continuous and the alternative is restricted to a shift in location, i.e., , we can interpret a significant Mann–Whitney U test as showing a difference in medians. Under this location shift assumption, we can also interpret the Mann–Whitney U test as assessing whether the Hodges–Lehmann estimate of the difference in central tendency between the two populations differs from zero. The Hodges–Lehmann estimate for this two-sample problem is the median of all possible differences between an observation in the first sample and an observation in the second sample. Otherwise, if both the dispersions and shapes of the distribution of both samples differ, the Mann–Whitney U test fails a test of medians. It is possible to show examples where medians are numerically equal while the test rejects the null hypothesis with a small p-value. The Mann–Whitney U test / Wilcoxon rank-sum test is not the same as the Wilcoxon signed-rank test, although both are nonparametric and involve summation of ranks. The Mann–Whitney U test is applied to independent samples. The Wilcoxon signed-rank test is applied to matched or dependent samples. U statistic Let be an i.i.d. sample from , and an i.i.d. sample from , and both samples independent of each other. The corresponding Mann–Whitney U statistic is defined as the smaller of: with
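The U statistic has a simple pair-counting form: U1 counts pairs (x_i, y_j) with x_i > y_j, counting ties as 1/2. A short sketch with made-up data, checked against SciPy (whose `mannwhitneyu` reports the U of the first sample):

```python
from scipy.stats import mannwhitneyu

x = [19, 22, 16, 29, 24]
y = [20, 11, 17, 12]

# Manual pair count: U1 = #{(i, j): x_i > y_j}, ties counted 1/2.
U1 = sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0 for xi in x for yj in y)
U2 = len(x) * len(y) - U1
print(min(U1, U2))                                   # the smaller of the two U values

print(mannwhitneyu(x, y, alternative='two-sided'))   # statistic agrees with U1 here
```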
https://en.wikipedia.org/wiki/Derived%20functor
In mathematics, certain functors may be derived to obtain other functors closely related to the original ones. This operation, while fairly abstract, unifies a number of constructions throughout mathematics. Motivation It was noted in various quite different settings that a short exact sequence often gives rise to a "long exact sequence". The concept of derived functors explains and clarifies many of these observations. Suppose we are given a covariant left exact functor F : A → B between two abelian categories A and B. If 0 → A → B → C → 0 is a short exact sequence in A, then applying F yields the exact sequence 0 → F(A) → F(B) → F(C) and one could ask how to continue this sequence to the right to form a long exact sequence. Strictly speaking, this question is ill-posed, since there are always numerous different ways to continue a given exact sequence to the right. But it turns out that (if A is "nice" enough) there is one canonical way of doing so, given by the right derived functors of F. For every i ≥ 1, there is a functor R^iF: A → B, and the above sequence continues like so: 0 → F(A) → F(B) → F(C) → R^1F(A) → R^1F(B) → R^1F(C) → R^2F(A) → R^2F(B) → ... . From this we see that F is an exact functor if and only if R^1F = 0; so in a sense the right derived functors of F measure "how far" F is from being exact. If the object A in the above short exact sequence is injective, then the sequence splits. Applying any additive functor to a split sequence results in a split sequence, so in particular R^1F(A) = 0. Right derived functors (for i > 0) are zero on injectives: this is the motivation for the construction given below. Construction and first properties The crucial assumption we need to make about our abelian category A is that it has enough injectives, meaning that for every object A in A there exists a monomorphism A → I where I is an injective object in A. The right derived functors of the covariant left-exact functor F : A → B are then defined as follows. Start with an object X of A. Because there are enough injectives, we can construct a long exact sequence of the form where the I^i are all injective (this is known as an injective resolution of X). Applying the functor F to this sequence, and chopping off the first term, we obtain the chain complex Note: this is in general not an exact sequence anymore. But we can compute its cohomology at the i-th spot (the kernel of the map from F(I^i) modulo the image of the map to F(I^i)); we call the result R^iF(X). Of course, various things have to be checked: the result does not depend on the given injective resolution of X, and any morphism X → Y naturally yields a morphism R^iF(X) → R^iF(Y), so that we indeed obtain a functor. Note that left exactness means that 0 → F(X) → F(I^0) → F(I^1) is exact, so R^0F(X) = F(X), so we only get something interesting for i > 0. (Technically, to produce well-defined derivatives of F, we would have to fix an injective resolution for every object of A. This choice of inject
https://en.wikipedia.org/wiki/Catastrophe%20theory
In mathematics, catastrophe theory is a branch of bifurcation theory in the study of dynamical systems; it is also a particular special case of more general singularity theory in geometry. Bifurcation theory studies and classifies phenomena characterized by sudden shifts in behavior arising from small changes in circumstances, analysing how the qualitative nature of equation solutions depends on the parameters that appear in the equation. This may lead to sudden and dramatic changes, for example the unpredictable timing and magnitude of a landslide. Catastrophe theory originated with the work of the French mathematician René Thom in the 1960s, and became very popular due to the efforts of Christopher Zeeman in the 1970s. It considers the special case where the long-run stable equilibrium can be identified as the minimum of a smooth, well-defined potential function (Lyapunov function). Small changes in certain parameters of a nonlinear system can cause equilibria to appear or disappear, or to change from attracting to repelling and vice versa, leading to large and sudden changes of the behaviour of the system. However, examined in a larger parameter space, catastrophe theory reveals that such bifurcation points tend to occur as part of well-defined qualitative geometrical structures. In the late 1970s, applications of catastrophe theory to areas outside its scope began to be criticized, especially in biology and social sciences. Zahler and Sussmann, in a 1977 article in Nature, referred to such applications as being "characterised by incorrect reasoning, far-fetched assumptions, erroneous consequences, and exaggerated claims". As a result, catastrophe theory has become less popular in applications. Elementary catastrophes Catastrophe theory analyzes degenerate critical points of the potential function — points where not just the first derivative, but one or more higher derivatives of the potential function are also zero. These are called the germs of the catastrophe geometries. The degeneracy of these critical points can be unfolded by expanding the potential function as a Taylor series in small perturbations of the parameters. When the degenerate points are not merely accidental, but are structurally stable, the degenerate points exist as organising centres for particular geometric structures of lower degeneracy, with critical features in the parameter space around them. If the potential function depends on two or fewer active variables, and four or fewer active parameters, then there are only seven generic structures for these bifurcation geometries, with corresponding standard forms into which the Taylor series around the catastrophe germs can be transformed by diffeomorphism (a smooth transformation whose inverse is also smooth). These seven fundamental types are now presented, with the names that Thom gave them. Potential functions of one active variable Catastrophe theory studies dynamical systems that describe the evolution of a
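The sudden appearance and disappearance of equilibria can be seen numerically in the simplest two-parameter germ, the cusp potential V(x) = x⁴ + a x² + b x (our choice of normal form for this sketch): counting the real roots of V′ while sweeping a control parameter shows the jump in the number of equilibria across the fold lines.

```python
import numpy as np

def n_equilibria(a, b):
    """Number of real critical points of V(x) = x^4 + a*x^2 + b*x,
    i.e. real roots of V'(x) = 4x^3 + 2a*x + b."""
    roots = np.roots([4.0, 0.0, 2.0 * a, b])
    return int(np.sum(np.abs(roots.imag) < 1e-9))

# Sweep the control parameter b at fixed a < 0: the count jumps 1 -> 3 -> 1,
# the signature of equilibria being created and destroyed at fold bifurcations.
for b in (-3.0, -1.0, 0.0, 1.0, 3.0):
    print(b, n_equilibria(-2.0, b))
```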
https://en.wikipedia.org/wiki/DN
DN, dN, or dn may refer to: Science, technology, and mathematics Computing and telecommunications Digital number, the discrete representation of an analog value sampled by an analog-to-digital converter Directory number in a phone system Distinguished Name, an identifier type in the LDAP protocol Domain name, an identification string used within the Internet Domain Nameserver DOS Navigator, a DOS file manager Mathematics dn (elliptic function), one of Jacobi's elliptic functions Dn, a Coxeter–Dynkin diagram Dn, a dihedral group Other uses in science and technology Decinewton (symbol dN), an SI unit of force Diametre Nominal, the European equivalent of Nominal Pipe Size Diameter of a rolling element bearing in mm multiplied by its speed in rpm Deductive-nomological model, a philosophical model for scientific explanation Double negative T cells, also called CD4−CD8− Diabetic nephropathy Diabetic neuropathy DN Factor, a value used to calculate the correct lubricant for bearings Entertainment Double nil, a bid in the game of Spades (card game) Descriptive chess notation Duke Nukem, a video game character and a game franchise Journalism The Ball State Daily News, the student newspaper of Ball State University in Muncie, Indiana Dagens Næringsliv, a Norwegian newspaper Dagens Nyheter, a Swedish newspaper Democracy Now!, the flagship program for the Pacifica Radio network Diário de Notícias, a Portuguese newspaper Places Denmark (WMO country code DN) DN postcode area for Doncaster and surrounding areas, UK Dunedin, New Zealand (commonly abbreviated DN) Dadra and Nagar Haveli, a former union territory of India Other uses DN, IATA code of Dan Air Down (disambiguation) Diebold Nixdorf, American financial technology company Digha Nikaya, a part of the Buddhist Tripitaka International DN, a kind of iceboat Dreadnought, a class of warships Debit note, a commercial document DN, the former IATA code for Norwegian Air Argentina DN, the former IATA code for Senegal Airlines
https://en.wikipedia.org/wiki/Cyclic%20order
In mathematics, a cyclic order is a way to arrange a set of objects in a circle. Unlike most structures in order theory, a cyclic order is not modeled as a binary relation, such as "a < b". One does not say that east is "more clockwise" than west. Instead, a cyclic order is defined as a ternary relation [a, b, c], meaning "after a, one reaches b before c". For example, [June, October, February], but not [June, February, October]. A ternary relation is called a cyclic order if it is cyclic, asymmetric, transitive, and connected. Dropping the "connected" requirement results in a partial cyclic order. A set with a cyclic order is called a cyclically ordered set or simply a cycle. Some familiar cycles are discrete, having only a finite number of elements: there are seven days of the week, four cardinal directions, twelve notes in the chromatic scale, and three plays in rock-paper-scissors. In a finite cycle, each element has a "next element" and a "previous element". There are also cyclic orders with infinitely many elements, such as the oriented unit circle in the plane. Cyclic orders are closely related to the more familiar linear orders, which arrange objects in a line. Any linear order can be bent into a circle, and any cyclic order can be cut at a point, resulting in a line. These operations, along with the related constructions of intervals and covering maps, mean that questions about cyclic orders can often be transformed into questions about linear orders. Cycles have more symmetries than linear orders, and they often naturally occur as residues of linear structures, as in the finite cyclic groups or the real projective line. Finite cycles A cyclic order on a set with n elements is like an arrangement of its elements on a clock face, for an n-hour clock. Each element has a "next element" and a "previous element", and taking either successors or predecessors cycles exactly once through all the elements. There are a few equivalent ways to state this definition. A cyclic order on the set is the same as a permutation that makes all of its elements into a single cycle. A cycle with n elements is also a Z/nZ-torsor: a set with a free transitive action by a finite cyclic group. Another formulation is to make the set into the standard directed cycle graph on n vertices, by some matching of elements to vertices. It can be instinctive to use cyclic orders for symmetric functions, for example as in xy + yz + zx, where writing the final monomial as xz would distract from the pattern. A substantial use of cyclic orders is in the determination of the conjugacy classes of free groups. Two elements g and h of the free group F on a set Y are conjugate if and only if, when they are written as products of elements y and y⁻¹ with y in Y, and then those products are put in cyclic order, the cyclic orders are equivalent under the rewriting rules that allow one to remove or add adjacent y and y⁻¹. A cyclic order on a set can be determined by a linear order on that set, but not in a unique way. Choosing a linear order is equivalent to c
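The ternary relation has a one-line realization for points placed on a circle by real numbers (hours on a clock, in this sketch; the month encoding is ours):

```python
def cyclic(a, b, c):
    """Ternary cyclic order: [a, b, c] holds when, going around the circle
    from a, one reaches b before c."""
    return (a < b < c) or (b < c < a) or (c < a < b)

JUNE, OCTOBER, FEBRUARY = 6, 10, 2
print(cyclic(JUNE, OCTOBER, FEBRUARY))   # True:  after June, October comes before February
print(cyclic(JUNE, FEBRUARY, OCTOBER))   # False
# Cyclicity: the relation is invariant under rotating its arguments.
assert cyclic(JUNE, OCTOBER, FEBRUARY) == cyclic(OCTOBER, FEBRUARY, JUNE)
```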
https://en.wikipedia.org/wiki/Finite%20Fourier%20transform
In mathematics the finite Fourier transform may refer to either another name for discrete-time Fourier transform (DTFT) of a finite-length series. E.g., F. J. Harris (pp. 52–53) describes the finite Fourier transform as a "continuous periodic function" and the discrete Fourier transform (DFT) as "a set of samples of the finite Fourier transform". In actual implementation, that is not two separate steps; the DFT replaces the DTFT. So J. Cooley (pp. 77–78) describes the implementation as discrete finite Fourier transform. or another name for the Fourier series coefficients. or another name for one snapshot of a short-time Fourier transform. See also Fourier transform Notes References Further reading Rabiner, Lawrence R.; Gold, Bernard (1975). Theory and application of digital signal processing. Englewood Cliffs, N.J.: Prentice-Hall. pp. 65–67. Transforms Fourier analysis Fourier series
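Harris's reading, the DFT as samples of the continuous finite Fourier transform, can be checked directly: evaluate the DTFT of a finite sequence at the frequencies 2πk/N and compare with the FFT output.

```python
import numpy as np

x = np.array([1.0, 2.0, 0.0, -1.0])   # a finite-length sequence
N = len(x)

def dtft(omega):
    """DTFT of the finite sequence: X(omega) = sum_n x[n] e^{-i omega n},
    a continuous 2*pi-periodic function."""
    n = np.arange(N)
    return np.sum(x * np.exp(-1j * omega * n))

# The DFT is this continuous transform sampled at omega_k = 2*pi*k/N:
samples = np.array([dtft(2 * np.pi * k / N) for k in range(N)])
print(np.allclose(samples, np.fft.fft(x)))  # True
```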
https://en.wikipedia.org/wiki/Category%20of%20topological%20spaces
In mathematics, the category of topological spaces, often denoted Top, is the category whose objects are topological spaces and whose morphisms are continuous maps. This is a category because the composition of two continuous maps is again continuous, and the identity function is continuous. The study of Top and of properties of topological spaces using the techniques of category theory is known as categorical topology. N.B. Some authors use the name Top for the category of topological manifolds, for the category of compactly generated spaces (with continuous maps as morphisms), or for the category of compactly generated weak Hausdorff spaces. As a concrete category Like many categories, the category Top is a concrete category, meaning its objects are sets with additional structure (i.e. topologies) and its morphisms are functions preserving this structure. There is a natural forgetful functor U : Top → Set to the category of sets which assigns to each topological space the underlying set and to each continuous map the underlying function. The forgetful functor U has both a left adjoint D : Set → Top which equips a given set with the discrete topology, and a right adjoint I : Set → Top which equips a given set with the indiscrete topology. Both of these functors are, in fact, right inverses to U (meaning that UD and UI are equal to the identity functor on Set). Moreover, since any function between discrete or between indiscrete spaces is continuous, both of these functors give full embeddings of Set into Top. Top is also fiber-complete meaning that the category of all topologies on a given set X (called the fiber of U above X) forms a complete lattice when ordered by inclusion. The greatest element in this fiber is the discrete topology on X, while the least element is the indiscrete topology. Top is the model of what is called a topological category. These categories are characterized by the fact that every structured source has a unique initial lift. In Top the initial lift is obtained by placing the initial topology on the source. Topological categories have many properties in common with Top (such as fiber-completeness, discrete and indiscrete functors, and unique lifting of limits). Limits and colimits The category Top is both complete and cocomplete, which means that all small limits and colimits exist in Top. In fact, the forgetful functor U : Top → Set uniquely lifts both limits and colimits and preserves them as well. Therefore, (co)limits in Top are given by placing topologies on the corresponding (co)limits in Set. Specifically, if F is a diagram in Top and (L, φ : L → F) is a limit of UF in Set, the corresponding limit of F in Top is obtained by placing the initial topology on (L, φ : L → F). Dually, colimits in Top are obtained by placing the final topology on the corresponding colimits in Set. Unlike many algebraic categories, the forgetful functor U : Top → Set does not create or reflect limits since there will typicall
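For finite spaces the concrete-category viewpoint can be made literal. A rough sketch (our own encoding of a topology as a set of frozensets, not a standard library API), checking the morphism condition and the fact behind the right adjoint I, that every map into an indiscrete space is continuous:

```python
from itertools import product

# A finite space is (points, topology); a topology is a set of open sets.
X = ({0, 1}, {frozenset(), frozenset({0}), frozenset({0, 1})})  # Sierpinski space
Y = ({'a', 'b'}, {frozenset(), frozenset({'a', 'b'})})          # indiscrete space

def is_continuous(f, X, Y):
    """f (a dict) is a morphism in Top iff preimages of open sets are open."""
    points, opens = X
    _, opens_Y = Y
    return all(frozenset(p for p in points if f[p] in V) in opens for V in opens_Y)

# Every function into the indiscrete space is continuous:
for fa, fb in product(Y[0], repeat=2):
    assert is_continuous({0: fa, 1: fb}, X, Y)
print("all maps into the indiscrete space are continuous")
```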
https://en.wikipedia.org/wiki/Urn%20problem
In probability and statistics, an urn problem is an idealized mental exercise in which some objects of real interest (such as atoms, people, cars, etc.) are represented as colored balls in an urn or other container. One pretends to remove one or more balls from the urn; the goal is to determine the probability of drawing one color or another, or some other properties. A number of important variations are described below. An urn model is either a set of probabilities that describe events within an urn problem, or it is a probability distribution, or a family of such distributions, of random variables associated with urn problems. History In Ars Conjectandi (1713), Jacob Bernoulli considered the problem of determining, given a number of pebbles drawn from an urn, the proportions of different colored pebbles within the urn. This problem was known as the inverse probability problem, and was a topic of research in the eighteenth century, attracting the attention of Abraham de Moivre and Thomas Bayes. Bernoulli used the Latin word urna, which primarily means a clay vessel, but is also the term used in ancient Rome for a vessel of any kind for collecting ballots or lots; the present-day Italian word for ballot box is still urna. Bernoulli's inspiration may have been lotteries, elections, or games of chance which involved drawing balls from a container, and it has been asserted that elections in medieval and renaissance Venice, including that of the doge, often included the choice of electors by lot, using balls of different colors drawn from an urn. Basic urn model In this basic urn model in probability theory, the urn contains x white and y black balls, well-mixed together. One ball is drawn randomly from the urn and its color observed; it is then placed back in the urn (or not), and the selection process is repeated. Possible questions that can be answered in this model are: Can I infer the proportion of white and black balls from n observations? With what degree of confidence? Knowing x and y, what is the probability of drawing a specific sequence (e.g. one white followed by one black)? If I only observe n balls, how sure can I be that there are no black balls? (A variation both on the first and the second question) Examples of urn problems beta-binomial distribution: as above, except that every time a ball is observed, an additional ball of the same color is added to the urn. Hence, the number of total balls in the urn grows. See Pólya urn model. binomial distribution: the distribution of the number of successful draws (trials), i.e. extraction of white balls, given n draws with replacement in an urn with black and white balls. Hoppe urn: a Pólya urn with an additional ball called the mutator. When the mutator is drawn it is replaced along with an additional ball of an entirely new colour. hypergeometric distribution: the balls are not returned to the urn once extracted. Hence, the number of total marbles in the urn decreases. Thi
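The second basic question (the probability of a specific sequence, given x and y) is easy to cross-check by simulation, in both the with-replacement and without-replacement variants. A minimal sketch with illustrative counts of 5 white and 3 black balls:

```python
import random

def draw_probability(white, black, trials=200_000, replace=False):
    """Monte Carlo estimate of P(first draw white, second draw black)."""
    hits = 0
    for _ in range(trials):
        urn = ['W'] * white + ['B'] * black
        first = urn.pop(random.randrange(len(urn)))
        if replace:
            urn.append(first)
        second = urn.pop(random.randrange(len(urn)))
        hits += (first == 'W' and second == 'B')
    return hits / trials

x, y = 5, 3
print(draw_probability(x, y, replace=True),  (x / (x + y)) * (y / (x + y)))      # ~15/64
print(draw_probability(x, y, replace=False), (x / (x + y)) * (y / (x + y - 1)))  # ~15/56
```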
https://en.wikipedia.org/wiki/Conditional%20expectation
In probability theory, the conditional expectation, conditional expected value, or conditional mean of a random variable is its expected value – the value it would take "on average" over an arbitrarily large number of occurrences – given that a certain set of "conditions" is known to occur. If the random variable can take on only a finite number of values, the "conditions" are that the variable can only take on a subset of those values. More formally, in the case when the random variable is defined over a discrete probability space, the "conditions" are a partition of this probability space. Depending on the context, the conditional expectation can be either a random variable or a function. The random variable is denoted E(X | Y), analogously to conditional probability. The function form is either denoted E(X | Y = y) or a separate function symbol such as f(y) is introduced with the meaning E(X | Y) = f(Y). Examples Example 1: Dice rolling Consider the roll of a fair die and let A = 1 if the number is even (i.e., 2, 4, or 6) and A = 0 otherwise. Furthermore, let B = 1 if the number is prime (i.e., 2, 3, or 5) and B = 0 otherwise. The unconditional expectation of A is 1/2, but the expectation of A conditional on B = 1 (i.e., conditional on the die roll being 2, 3, or 5) is 1/3, and the expectation of A conditional on B = 0 (i.e., conditional on the die roll being 1, 4, or 6) is 2/3. Likewise, the expectation of B conditional on A = 1 is 1/3, and the expectation of B conditional on A = 0 is 2/3. Example 2: Rainfall data Suppose we have daily rainfall data (mm of rain each day) collected by a weather station on every day of the ten-year (3652-day) period from January 1, 1990, to December 31, 1999. The unconditional expectation of rainfall for an unspecified day is the average of the rainfall amounts for those 3652 days. The conditional expectation of rainfall for an otherwise unspecified day known to be (conditional on being) in the month of March, is the average of daily rainfall over all 310 days of the ten-year period that fall in March. And the conditional expectation of rainfall conditional on days dated March 2 is the average of the rainfall amounts that occurred on the ten days with that specific date. History The related concept of conditional probability dates back at least to Laplace, who calculated conditional distributions. It was Andrey Kolmogorov who, in 1933, formalized it using the Radon–Nikodym theorem. In works of Paul Halmos and Joseph L. Doob from 1953, conditional expectation was generalized to its modern definition using sub-σ-algebras. Definitions Conditioning on an event If A is an event with nonzero probability, and X is a discrete random variable, the conditional expectation of X given A is E(X | A) = Σₓ x P(X = x | A), where the sum is taken over all possible outcomes x of X. If P(A) = 0, the conditional expectation is undefined due to the division by zero. Discrete random variables If X and Y are discrete random variables, the conditional expectation of X given Y = y is E(X | Y = y) = Σₓ x P(X = x, Y = y) / P(Y = y), where P(X = x, Y = y) is the joint probability mass
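The dice example is small enough to enumerate exactly; a quick sketch using exact rational arithmetic reproduces the conditional expectations above:

```python
from fractions import Fraction

outcomes = range(1, 7)                          # a fair die
A = {n: int(n % 2 == 0) for n in outcomes}      # indicator of "even"
B = {n: int(n in (2, 3, 5)) for n in outcomes}  # indicator of "prime"

def cond_exp(f, event):
    """E[f | event] over a uniform finite sample space: average of f on the event."""
    pts = [n for n in outcomes if event(n)]
    return Fraction(sum(f[n] for n in pts), len(pts))

print(cond_exp(A, lambda n: True))       # E[A]         = 1/2
print(cond_exp(A, lambda n: B[n] == 1))  # E[A | B = 1] = 1/3
print(cond_exp(A, lambda n: B[n] == 0))  # E[A | B = 0] = 2/3
print(cond_exp(B, lambda n: A[n] == 1))  # E[B | A = 1] = 1/3
```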
https://en.wikipedia.org/wiki/Birch%20and%20Swinnerton-Dyer%20conjecture
In mathematics, the Birch and Swinnerton-Dyer conjecture (often called the Birch–Swinnerton-Dyer conjecture) describes the set of rational solutions to equations defining an elliptic curve. It is an open problem in the field of number theory and is widely recognized as one of the most challenging mathematical problems. It is named after mathematicians Bryan John Birch and Peter Swinnerton-Dyer, who developed the conjecture during the first half of the 1960s with the help of machine computation. To date, only special cases of the conjecture have been proven. The modern formulation of the conjecture relates arithmetic data associated with an elliptic curve E over a number field K to the behaviour of the Hasse–Weil L-function L(E, s) of E at s = 1. More specifically, it is conjectured that the rank of the abelian group E(K) of points of E is the order of the zero of L(E, s) at s = 1, and the first non-zero coefficient in the Taylor expansion of L(E, s) at s = 1 is given by more refined arithmetic data attached to E over K. The conjecture was chosen as one of the seven Millennium Prize Problems listed by the Clay Mathematics Institute, which has offered a $1,000,000 prize for the first correct proof. Background Mordell proved the theorem now bearing his name: the group of rational points on an elliptic curve has a finite basis. This means that for any elliptic curve there is a finite subset of the rational points on the curve, from which all further rational points may be generated. If the number of rational points on a curve is infinite then some point in a finite basis must have infinite order. The number of independent basis points with infinite order is called the rank of the curve, and is an important invariant property of an elliptic curve. If the rank of an elliptic curve is 0, then the curve has only a finite number of rational points. On the other hand, if the rank of the curve is greater than 0, then the curve has an infinite number of rational points. Although Mordell's theorem shows that the rank of an elliptic curve is always finite, it does not give an effective method for calculating the rank of every curve. The rank of certain elliptic curves can be calculated using numerical methods but (in the current state of knowledge) it is unknown if these methods handle all curves. An L-function L(E, s) can be defined for an elliptic curve E by constructing an Euler product from the number of points on the curve modulo each prime p. This L-function is analogous to the Riemann zeta function and the Dirichlet L-series that is defined for a binary quadratic form. It is a special case of a Hasse–Weil L-function. The natural definition of L(E, s) only converges for values of s in the complex plane with Re(s) > 3/2. Helmut Hasse conjectured that L(E, s) could be extended by analytic continuation to the whole complex plane. This conjecture was first proved by Max Deuring for elliptic curves with complex multiplication. It was subsequently shown to be true for all elliptic curves o
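The local data that enters the Euler product is concrete: for each prime p one counts points on the curve mod p and forms a_p = p + 1 − #E(F_p). A naive sketch (the curve y² = x³ − x is our illustrative choice):

```python
def count_points(a, b, p):
    """Naive count of points on E: y^2 = x^3 + a*x + b over F_p, plus the point at infinity."""
    squares = {}
    for y in range(p):
        squares[y * y % p] = squares.get(y * y % p, 0) + 1
    n = 1  # the point at infinity
    for x in range(p):
        n += squares.get((x**3 + a * x + b) % p, 0)
    return n

a, b = -1, 0  # E: y^2 = x^3 - x
for p in (5, 7, 11, 13):
    Np = count_points(a, b, p)
    print(p, Np, p + 1 - Np)   # prime, point count, a_p
```

This brute-force count is quadratic in p and only meant to make the definition tangible; real computations use much faster point-counting algorithms.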
https://en.wikipedia.org/wiki/Khinchin%27s%20constant
In number theory, Aleksandr Yakovlevich Khinchin proved that for almost all real numbers x, coefficients ai of the continued fraction expansion of x have a finite geometric mean that is independent of the value of x and is known as Khinchin's constant. That is, for x = a₀ + 1/(a₁ + 1/(a₂ + 1/(a₃ + ···))) it is almost always true that lim_{n→∞} (a₁ a₂ ⋯ aₙ)^{1/n} = K₀, where K₀ ≈ 2.6854520010 is Khinchin's constant (with the product ranging over all sequence terms a₁, ..., aₙ). Although almost all numbers satisfy this property, it has not been proven for any real number not specifically constructed for the purpose. Among the numbers whose continued fraction expansions apparently do have this property (based on numerical evidence) are π, the Euler–Mascheroni constant γ, Apéry's constant ζ(3), and Khinchin's constant itself. However, this is unproven. Among the numbers x whose continued fraction expansions are known not to have this property are rational numbers, roots of quadratic equations (including the golden ratio Φ and the square roots of integers), and the base of the natural logarithm e. Khinchin is sometimes spelled Khintchine (the French transliteration of Russian Хинчин) in older mathematical literature. Sketch of proof The proof presented here was arranged by Czesław Ryll-Nardzewski and is much simpler than Khinchin's original proof which did not use ergodic theory. Since the first coefficient a0 of the continued fraction of x plays no role in Khinchin's theorem and since the rational numbers have Lebesgue measure zero, we are reduced to the study of irrational numbers in the unit interval, i.e., those in the set I of irrationals in the unit interval. These numbers are in bijection with infinite continued fractions of the form [0; a1, a2, ...], which we simply write [a1, a2, ...], where a1, a2, ... are positive integers. Define a transformation T:I → I by T(x) = 1/x − ⌊1/x⌋. The transformation T is the Gauss map, whose transfer operator is the Gauss–Kuzmin–Wirsing operator. For every Borel subset E of I, we also define the Gauss–Kuzmin measure of E, μ(E) = (1/log 2) ∫_E dx/(1 + x). Then μ is a probability measure on the σ-algebra of Borel subsets of I. The measure μ is equivalent to the Lebesgue measure on I, but it has the additional property that the transformation T preserves the measure μ. Moreover, it can be proved that T is an ergodic transformation of the measurable space I endowed with the probability measure μ (this is the hard part of the proof). The ergodic theorem then says that for any μ-integrable function f on I, the average value of f(T^k(x)) is the same for almost all x: lim_{n→∞} (1/n) Σ_{k=0}^{n−1} f(T^k(x)) = ∫_I f dμ. Applying this to the function defined by f([a1, a2, ...]) = log(a1), we obtain that (1/n) Σ_{k=1}^{n} log(a_k) → Σ_{r=1}^{∞} log(r) log₂(1 + 1/(r(r+2))) for almost all [a1, a2, ...] in I as n → ∞. Taking the exponential on both sides, we obtain to the left the geometric mean of the first n coefficients of the continued fraction, and to the right Khinchin's constant. Series expressions Khinchin's constant may be expressed as a rational zeta series in the form or, by peeling off terms in the series, where N is an integer, held fixed, and ζ(s, n) is the complex Hurwitz zeta function. Both series are strongly convergent, as ζ(n) − 1 approaches
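One can watch the geometric mean drift toward K₀ for π numerically. A minimal sketch, under the assumption that 50 correct decimal digits of π (hard-coded below) suffice for roughly the first 40 continued-fraction coefficients:

```python
from fractions import Fraction
from math import log, exp

# pi to 50 decimal places, as an exact rational.
PI = Fraction(314159265358979323846264338327950288419716939937510, 10**50)

def cf_coefficients(x, n):
    """First n continued-fraction coefficients a_1, a_2, ... of x (after a_0)."""
    coeffs = []
    x = x - int(x)
    for _ in range(n):
        x = 1 / x          # exact Fraction arithmetic, no rounding
        a = int(x)
        coeffs.append(a)
        x = x - a
    return coeffs

a = cf_coefficients(PI, 40)       # starts 7, 15, 1, 292, 1, 1, ...
geom_mean = exp(sum(log(ai) for ai in a) / len(a))
print(geom_mean)                  # wanders around K0 = 2.685452...
```

Convergence is slow and non-monotone (the coefficient 292 alone visibly perturbs the running mean), which is consistent with the purely almost-everywhere nature of the theorem.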
https://en.wikipedia.org/wiki/Common%20cause%20and%20special%20cause%20%28statistics%29
Common and special causes are the two distinct origins of variation in a process, as defined in the statistical thinking and methods of Walter A. Shewhart and W. Edwards Deming. Briefly, "common causes", also called natural patterns, are the usual, historical, quantifiable variation in a system, while "special causes" are unusual, not previously observed, non-quantifiable variation. The distinction is fundamental in philosophy of statistics and philosophy of probability, with different treatment of these issues being a classic issue of probability interpretations, being recognised and discussed as early as 1703 by Gottfried Leibniz; various alternative names have been used over the years. The distinction has been particularly important in the thinking of economists Frank Knight, John Maynard Keynes and G. L. S. Shackle. Origins and concepts In 1703, Jacob Bernoulli wrote to Gottfried Leibniz to discuss their shared interest in applying mathematics and probability to games of chance. Bernoulli speculated whether it would be possible to gather mortality data from gravestones and thereby calculate, by their existing practice, the probability of a man currently aged 20 years outliving a man aged 60 years. Leibniz replied that he doubted this was possible. This captures the central idea that some variation is predictable, at least approximately in frequency. This common-cause variation is evident from the experience base. However, new, unanticipated, emergent or previously neglected phenomena (e.g. "new diseases") result in variation outside the historical experience base. Shewhart and Deming argued that such special-cause variation is fundamentally unpredictable in frequency of occurrence or in severity. John Maynard Keynes emphasised the importance of special-cause variation in his writings on irreducible uncertainty. Definitions Common-cause variations Common-cause variation is characterised by: Phenomena constantly active within the system; Variation predictable probabilistically; Irregular variation within a historical experience base; and Lack of significance in individual high or low values. The outcomes of a perfectly balanced roulette wheel are a good example of common-cause variation. Common-cause variation is the noise within the system. Walter A. Shewhart originally used the term chance cause. The term common cause was coined by Harry Alpert in 1947. The Western Electric Company used the term natural pattern. Shewhart described a process that features only common-cause variation as being in statistical control. This term is deprecated by some modern statisticians who prefer the phrase stable and predictable. Special-cause variation Special-cause variation is characterised by: New, unanticipated, emergent or previously neglected phenomena within the system; Variation inherently unpredictable, even probabilistically; Variation outside the historical experience base; and Evidence of some inherent change in the system or our knowledge of it. Special-cause variation
https://en.wikipedia.org/wiki/Daniel%20Quillen
Daniel Gray "Dan" Quillen (June 22, 1940 – April 30, 2011) was an American mathematician. He is known for being the "prime architect" of higher algebraic K-theory, for which he was awarded the Cole Prize in 1975 and the Fields Medal in 1978. From 1984 to 2006, he was the Waynflete Professor of Pure Mathematics at Magdalen College, Oxford. Education and career Quillen was born in Orange, New Jersey, and attended Newark Academy. He entered Harvard University, where he earned both his AB, in 1961, and his PhD in 1964; the latter completed under the supervision of Raoul Bott, with a thesis in partial differential equations. He was a Putnam Fellow in 1959. Quillen obtained a position at the Massachusetts Institute of Technology after completing his doctorate. He also spent a number of years at several other universities. He visited France twice: first as a Sloan Fellow in Paris, during the academic year 1968–69, where he was greatly influenced by Grothendieck, and then, during 1973–74, as a Guggenheim Fellow. In 1969–70, he was a visiting member of the Institute for Advanced Study in Princeton, where he came under the influence of Michael Atiyah. In 1978, Quillen received a Fields Medal at the International Congress of Mathematicians held in Helsinki. From 1984 to 2006, he was the Waynflete Professor of Pure Mathematics at Magdalen College, Oxford. Quillen retired at the end of 2006. He died from complications of Alzheimer's disease on April 30, 2011, aged 70, in Florida. Mathematical contributions Quillen's best known contribution (mentioned specifically in his Fields medal citation) was his formulation of higher algebraic K-theory in 1972. This new tool, formulated in terms of homotopy theory, proved to be successful in formulating and solving problems in algebra, particularly in ring theory and module theory. More generally, Quillen developed tools (especially his theory of model categories) that allowed algebro-topological tools to be applied in other contexts. Before his work in defining higher algebraic K-theory, Quillen worked on the Adams conjecture, formulated by Frank Adams, in homotopy theory. His proof of the conjecture used techniques from the modular representation theory of groups, which he later applied to work on cohomology of groups and algebraic K-theory. He also worked on complex cobordism, showing that its formal group law is essentially the universal one. In related work, he also supplied a proof of Serre's conjecture about the triviality of algebraic vector bundles on affine space, which led to the Bass–Quillen conjecture. He was also an architect (along with Dennis Sullivan) of rational homotopy theory. He introduced the Quillen determinant line bundle and the Mathai–Quillen formalism. See also Friedhelm Waldhausen Selected publications (Quillen's Q-construction) References External links Archive of Daniel Quillen’s notebooks for the years 1970 through 2003 at the Clay Mathematics Institute
https://en.wikipedia.org/wiki/Mertens%20function
In number theory, the Mertens function is defined for all positive integers n as M(n) = Σ_{k=1}^{n} μ(k), where μ(k) is the Möbius function. The function is named in honour of Franz Mertens. This definition can be extended to positive real numbers as follows: M(x) = M(⌊x⌋). Less formally, M(x) is the count of square-free integers up to x that have an even number of prime factors, minus the count of those that have an odd number. The first M(n) values are 1, 0, −1, −1, −2, −1, −2, −2, −2, −1, −2, −2, ... . The Mertens function slowly grows in positive and negative directions both on average and in peak value, oscillating in an apparently chaotic manner passing through zero when n has the values 2, 39, 40, 58, 65, 93, 101, 145, 149, 150, 159, 160, 163, 164, 166, 214, 231, 232, 235, 236, 238, 254, 329, 331, 332, 333, 353, 355, 356, 358, 362, 363, 364, 366, 393, 401, 403, 404, 405, 407, 408, 413, 414, 419, 420, 422, 423, 424, 425, 427, 428, ... . Because the Möbius function only takes the values −1, 0, and +1, the Mertens function moves slowly, and there is no x such that |M(x)| > x. H. Davenport demonstrated that, for any fixed h, Σ_{n≤x} μ(n) e^{2πinθ} = O(x / log^h x) uniformly in θ. This implies, for θ = 0, that M(x) = O(x / log^h x). The Mertens conjecture went further, stating that there would be no x where the absolute value of the Mertens function exceeds the square root of x. The Mertens conjecture was proven false in 1985 by Andrew Odlyzko and Herman te Riele. However, the Riemann hypothesis is equivalent to a weaker conjecture on the growth of M(x), namely M(x) = O(x^{1/2 + ε}). Since high values for M(x) grow at least as fast as the square root of x, this puts a rather tight bound on its rate of growth. Here, O refers to big O notation. The true rate of growth of M(x) is not known. An unpublished conjecture of Steve Gonek gives a precise bound on this rate of growth. Probabilistic evidence towards this conjecture is given by Nathan Ng. In particular, Ng gives a conditional proof that the function e^{−y/2} M(e^y) has a limiting distribution ν on the real line. That is, for all bounded Lipschitz continuous functions f on the reals we have that lim_{Y→∞} (1/Y) ∫_0^Y f(e^{−y/2} M(e^y)) dy = ∫ f dν, if one assumes various conjectures about the Riemann zeta function. Representations As an integral Using the Euler product, one finds that 1/ζ(s) = Π_p (1 − p^{−s}) = Σ_{n=1}^{∞} μ(n) n^{−s}, where ζ(s) is the Riemann zeta function, and the product is taken over primes. Then, using this Dirichlet series with Perron's formula, one obtains M(x) = (1/2πi) ∫_{c−i∞}^{c+i∞} x^s / (s ζ(s)) ds, where c > 1. Conversely, one has the Mellin transform 1/ζ(s) = s ∫_1^{∞} M(x) x^{−s−1} dx, which holds for Re(s) > 1. A curious relation involving the second Chebyshev function was given by Mertens himself. Assuming that the Riemann zeta function has no multiple non-trivial zeros, one has an "exact formula" for M(x) by the residue theorem, as a sum over the non-trivial zeros of the zeta function. Weyl conjectured that the Mertens function satisfies an approximate functional-differential equation involving the Heaviside step function H(x) and the Bernoulli numbers B, with all derivatives with respect to t evaluated at t = 0. There is also a trace formula involving a sum over the Möbius function and zeros of the Riemann zeta function, in which the first sum on the right-hand side is taken over the non-trivial ze
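A minimal sketch computing M(n) with a linear Möbius sieve reproduces the first values and the early zeros listed above, and shows how small |M(x)|/√x stays in this range:

```python
import numpy as np

def mobius_sieve(n):
    """mu(1..n) via a linear sieve on smallest prime factors."""
    mu = np.ones(n + 1, dtype=np.int64)
    is_comp = np.zeros(n + 1, dtype=bool)
    primes = []
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0      # p^2 divides i*p
                break
            mu[i * p] = -mu[i]
    return mu

N = 1000
M = np.cumsum(mobius_sieve(N)[1:])                 # M(1), ..., M(N)
print(M[:10].tolist())                             # [1, 0, -1, -1, -2, -1, -2, -2, -2, -1]
print([n for n in range(2, N + 1) if M[n - 1] == 0][:6])   # zeros: 2, 39, 40, 58, 65, 93
print(max(np.abs(M) / np.sqrt(np.arange(1, N + 1))))       # well below 1 in this range
```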
https://en.wikipedia.org/wiki/Vysochanskij%E2%80%93Petunin%20inequality
In probability theory, the Vysochanskij–Petunin inequality gives a lower bound for the probability that a random variable with finite variance lies within a certain number of standard deviations of the variable's mean, or equivalently an upper bound for the probability that it lies further away. The sole restrictions on the distribution are that it be unimodal and have finite variance. (This implies that it is a continuous probability distribution except at the mode, which may have a non-zero probability.) Theorem Let X be a random variable with unimodal distribution, and let α be a real number. If we define ρ² = E[(X − α)²], then for any λ > √(8/3), P(|X − α| ≥ λρ) ≤ 4/(9λ²). Relation to Gauss's inequality Taking α equal to a mode of X yields the first case of Gauss's inequality. Tightness of Bound Without loss of generality, assume α = 0 and ρ = 1. If λ ≤ 1, the left-hand side can equal one, so the bound is useless. For larger λ, the bound is tight when X takes the value 0 with an appropriate probability and is otherwise distributed uniformly in a suitable interval around 0. Specialization to mean and variance If X has mean μ and finite, non-zero variance σ², then taking α = μ and ρ = σ gives that for any λ > √(8/3), P(|X − μ| ≥ λσ) ≤ 4/(9λ²). Proof Sketch For a relatively elementary proof see. The rough idea behind the proof is that there are two cases: one where the mode of X is close to α compared to ρ, in which case we can show one bound, and one where the mode of X is far from α compared to ρ, in which case we can show another. Combining these two cases gives the inequality; at the threshold, the two cases give the same value. Properties The theorem refines Chebyshev's inequality by including the factor of 4/9, made possible by the condition that the distribution be unimodal. It is common, in the construction of control charts and other statistical heuristics, to set λ = 3, corresponding to an upper probability bound of 4/81 = 0.04938..., and to construct 3-sigma limits to bound nearly all (i.e. 95%) of the values of a process output. Without unimodality Chebyshev's inequality would give a looser bound of 1/9 = 0.11111... . One-sided version An improved version of the Vysochanskij–Petunin inequality for one-sided tail bounds exists. For a unimodal random variable X with mean μ and variance σ², and r > 0, the one-sided Vysochanskij–Petunin inequality holds as follows: P(X ≥ μ + r) ≤ 4σ²/(9(r² + σ²)) for r² ≥ (5/3)σ², and P(X ≥ μ + r) ≤ 4σ²/(3(r² + σ²)) − 1/3 otherwise. The one-sided Vysochanskij–Petunin inequality, as well as the related Cantelli inequality, can for instance be relevant in the financial area, in the sense of "how bad can losses get." Proof The proof is very similar to that of Cantelli's inequality. For any u > 0, P(X − μ ≥ r) ≤ P(|X − μ + u| ≥ r + u). Then we can apply the Vysochanskij–Petunin inequality. With ρ² = E[(X − μ + u)²] = σ² + u², we have: P(|X − μ + u| ≥ r + u) ≤ 4(σ² + u²)/(9(r + u)²). As in the proof of Cantelli's inequality, it can be shown that the minimum of the right-hand side over all u ≥ 0 is achieved at u = σ²/r. Plugging in this value of u and simplifying yields the desired inequality. Generalisation Dharmadhikari and Joag-Dev generalised the VP inequality to deviations from an arbitrary point and moments of order k other than k = 2. The standard form of the inequality can be recovered by sett
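A quick Monte Carlo check of the two-sided bound for a unimodal distribution (an exponential, in this sketch), against both the VP bound 4/(9λ²) and the plain Chebyshev bound 1/λ²:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=1.0, size=1_000_000)  # unimodal; mean 1, sd 1
mu, sigma = X.mean(), X.std()

for lam in (1.7, 2.0, 3.0):                     # all above sqrt(8/3) ~ 1.633
    empirical = np.mean(np.abs(X - mu) >= lam * sigma)
    print(lam, empirical, 4 / (9 * lam**2), 1 / lam**2)  # tail prob, VP bound, Chebyshev bound
```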
https://en.wikipedia.org/wiki/WZ
WZ may refer to: WZ sex-determination system, also known as the ZW sex-determination system WZ theory, a technique for simplifying certain combinatorial summations in mathematics Eswatini (FIPS 10-4 country code WZ) Westdeutsche Zeitung, a German newspaper Wetzlar, Germany WinZip, file compression software Wizet, a Korean online gaming developer, which uses the file extension .wz W and Z bosons in particle physics
https://en.wikipedia.org/wiki/Lee%20Harwood
Lee Harwood (6 June 1939 – 26 July 2015) was an English poet associated with the British Poetry Revival. Life Travers Rafe Lee Harwood was born in Leicester to maths teacher Wilfred Travers Lee-Harwood and Grace Ladkin Harwood, who were then living in Chertsey, Surrey. His father was an army reservist and called up as war started; after the evacuation from Dunkirk he was posted to Africa until 1947 and saw little of his son. Between 1958 and 1961 Harwood studied English at Queen Mary College, University of London and continued living in London until 1967. During that time he worked as a monumental mason's mate, a librarian and a bookshop assistant. He was also a member of the Beat scene and in 1963 was involved in editing the one-issue magazines Night Scene and Night Train featuring their work, as did Soho and Horde the following year. Tzarad, which he began editing on his own in 1965, ran for two more issues (1966, 1969) and signalled his growing interest in and involvement with the New York School of poets. It was during this time that he began to engage with French poetry and started on his translations of Tristan Tzara. In 1967 he moved to Brighton where, with the exception of some time in Greece and the United States, he lived for the rest of his life. There he worked as a bookshop manager, a bus conductor, and a Post Office counter clerk. He also became a union official and involved with the Labour Party in its radical years, even standing (unsuccessfully) in a local election. At the Poetry Society Harwood was identified with the radicals but did not join in their block resignation in 1977, arguing that 'as a trade unionist I've never believed in resignation as a useful political weapon – it always seems best to work from inside an organisation'. At that time, there was an identifiable political element to Harwood's poetry, discernible in the volume "All The Wrong Notes" (1981). In 1961 he married his first wife, Jenny Goodgame, with whom he had a son, Blake. After the breakdown of this marriage, he met the photographer Judith Walker while a writer in residence at the Aegean School of Fine Arts in Paros, Greece. Harwood married her in 1974 and they had two children, Rafe and Rowan. Photographs by Walker are used in his collections Boston-Brighton and All the wrong notes. Lee Harwood died on Sunday, 26 July 2015 in Hove, East Sussex, and was interred in Clayton Burial Ground near Hassocks, East Sussex. There is a tree (Mountain Ash) and memorial stone in the Literary Walk, in Central Park, New York City. There is also a memorial bench on the north path of Brunswick Square, Hove, UK. Poetry Harwood's early writing is similar to the poetry of the New York School, especially that of John Ashbery, whom he met in Paris in 1965. What he was aiming for, he said in a 1972 interview, was an unfinished quality containing a mosaic of information. Robert Sheppard has described Harwood's style as at once 'distanced and intimate'. Later, after discussion
https://en.wikipedia.org/wiki/K.%20G.%20Ramanathan
Kollagunta Gopalaiyer Ramanathan (13 November 1920 – 10 May 1992) was an Indian mathematician known for his work in number theory. He also contributed to the general development of mathematical research and teaching in India. K. G. Ramanathan's early life and his family K. G. Ramanathan was born in Hyderabad in South India. He completed his B.A. and M.A. in mathematics at Osmania University and the University of Madras respectively before going to Princeton to earn his Ph.D.; his advisor was Emil Artin. At Princeton, Ramanathan also worked with Hermann Weyl and Carl Siegel. Thereafter he returned to India to team up with K. S. Chandrasekharan at the Tata Institute of Fundamental Research (TIFR) at Colaba in 1951. At Princeton, for about two years, Ramanathan's neighbour was Albert Einstein, the legendary physicist. He used to sing Carnatic songs of Tyagaraja to Einstein for entertainment. Ramanathan was married to Jayalakshmi Ramanathan. He had two sons. His father's name was Kollagunta Gopal Iyer, and his mother's name was Ananthalaxmi. His mother died at an early age. He had two sisters and one brother. Career At TIFR, he built up the number theory group of young mathematicians from India. For several years, he took an interest in studying Ramanujan's published and unpublished work. He was an editorial board member of Acta Arithmetica for over 30 years. He retired from TIFR in 1985. Awards Ramanathan received numerous honours during his more than 30 years of service at TIFR. Padma Bhushan, 1983 Shanti Swarup Bhatnagar Award, 1965 Fellow of the Indian Academy of Sciences Fellow of the Indian National Science Academy Honorary fellow of TIFR. Selected publications On Ramanujan's continued fraction, KG Ramanathan - Acta Arith, 1984 Some applications of Kronecker's limit formula, KG Ramanathan - J. Indian Math. Soc, 1987 References External links Obituary, reproduced from Acta Arithmetica, Author: S. Raghavan K. G. R's Photo This is reproduced from Acta Arithmetica 64 (1993) 1-6 K. G. Ramanathan's Biography 1920 births 1992 deaths Recipients of the Padma Bhushan in literature & education Indian number theorists Presidents of the Indian Mathematical Society University of Madras alumni Osmania University alumni Scientists from Hyderabad, India 20th-century Indian mathematicians Recipients of the Shanti Swarup Bhatnagar Award in Mathematical Science
https://en.wikipedia.org/wiki/Infinitesimal%20generator
In mathematics, the term infinitesimal generator may refer to: an element of the Lie algebra associated to a Lie group Infinitesimal generator (stochastic processes), of a stochastic process infinitesimal generator matrix, of a continuous-time Markov chain, a class of stochastic processes Infinitesimal generator of a strongly continuous semigroup
https://en.wikipedia.org/wiki/Effect%20size
In statistics, an effect size is a value measuring the strength of the relationship between two variables in a population, or a sample-based estimate of that quantity. It can refer to the value of a statistic calculated from a sample of data, the value of a parameter for a hypothetical population, or to the equation that operationalizes how statistics or parameters lead to the effect size value. Examples of effect sizes include the correlation between two variables, the regression coefficient in a regression, the mean difference, or the risk of a particular event (such as a heart attack) happening. Effect sizes complement statistical hypothesis testing, and play an important role in power analyses, sample size planning, and in meta-analyses. The cluster of data-analysis methods concerning effect sizes is referred to as estimation statistics. Effect size is an essential component when evaluating the strength of a statistical claim, and it is the first item (magnitude) in the MAGIC criteria. The standard deviation of the effect size is of critical importance, since it indicates how much uncertainty is included in the measurement. A standard deviation that is too large will make the measurement nearly meaningless. In meta-analysis, where the purpose is to combine multiple effect sizes, the uncertainty in the effect size is used to weigh effect sizes, so that large studies are considered more important than small studies. The uncertainty in the effect size is calculated differently for each type of effect size, but generally only requires knowing the study's sample size (N), or the number of observations (n) in each group. Reporting effect sizes or estimates thereof (effect estimate [EE], estimate of effect) is considered good practice when presenting empirical research findings in many fields. The reporting of effect sizes facilitates the interpretation of the importance of a research result, in contrast to its statistical significance. Effect sizes are particularly prominent in social science and in medical research (where size of treatment effect is important). Effect sizes may be measured in relative or absolute terms. In relative effect sizes, two groups are directly compared with each other, as in odds ratios and relative risks. For absolute effect sizes, a larger absolute value always indicates a stronger effect. Many types of measurements can be expressed as either absolute or relative, and these can be used together because they convey different information. A prominent task force in the psychology research community made the following recommendation: Overview Population and sample effect sizes As in statistical estimation, the true effect size is distinguished from the observed effect size, e.g. to measure the risk of disease in a population (the population effect size) one can measure the risk within a sample of that population (the sample effect size). Conventions for describing true and observed effect sizes follow standard statis
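As a concrete illustration of a sample effect size, the following sketch computes Cohen's d, one common standardized mean difference (the data here are synthetic, and the formula is the usual pooled-standard-deviation version; this is one of many effect-size measures the article mentions, not the only one):

```python
# Sketch: Cohen's d for two independent groups (synthetic data).
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
treatment = rng.normal(10.5, 2.0, size=200)  # hypothetical outcomes
control = rng.normal(10.0, 2.0, size=200)
print(f"Cohen's d = {cohens_d(treatment, control):.3f}")  # population value: 0.25
```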
https://en.wikipedia.org/wiki/Matrix%20normal%20distribution
In statistics, the matrix normal distribution or matrix Gaussian distribution is a probability distribution that is a generalization of the multivariate normal distribution to matrix-valued random variables. Definition The probability density function for the random matrix X (n × p) that follows the matrix normal distribution MN(M, U, V) has the form: p(X | M, U, V) = exp(−(1/2) tr[V⁻¹(X − M)ᵀU⁻¹(X − M)]) / ((2π)^{np/2} |V|^{n/2} |U|^{p/2}), where tr denotes trace and M is n × p, U is n × n and V is p × p, and the density is understood as the probability density function with respect to the standard Lebesgue measure in R^{n×p}, i.e.: the measure corresponding to integration with respect to dx₁₁ dx₂₁ … dxₙₚ. The matrix normal is related to the multivariate normal distribution in the following way: X ~ MN(M, U, V) if and only if vec(X) ~ N(vec(M), V ⊗ U), where ⊗ denotes the Kronecker product and vec(M) denotes the vectorization of M. Proof The equivalence between the above matrix normal and multivariate normal density functions can be shown using several properties of the trace and Kronecker product, as follows. We start with the argument of the exponent of the matrix normal PDF: −(1/2) tr[V⁻¹(X − M)ᵀU⁻¹(X − M)] = −(1/2) vec(X − M)ᵀ(V ⊗ U)⁻¹ vec(X − M), which is the argument of the exponent of the multivariate normal PDF with respect to Lebesgue measure in R^{np}. The proof is completed by using the determinant property: |V ⊗ U| = |V|ⁿ|U|ᵖ. Properties If X ~ MN(M, U, V), then we have the following properties: Expected values The mean, or expected value is: E[X] = M, and we have the following second-order expectations: E[(X − M)(X − M)ᵀ] = U tr(V), E[(X − M)ᵀ(X − M)] = V tr(U), where tr denotes trace. More generally, for appropriately dimensioned matrices A, B, C: E[XAXᵀ] = U tr(AᵀV) + MAMᵀ, E[XᵀBX] = V tr(UBᵀ) + MᵀBM, E[XCX] = UCᵀV + MCM. Transformation Transpose transform: Xᵀ ~ MN(Mᵀ, V, U). Linear transform: let D (r-by-n) be of full rank r ≤ n and C (p-by-s) be of full rank s ≤ p, then: DXC ~ MN(DMC, DUDᵀ, CᵀVC). Example Let's imagine a sample of n independent p-dimensional random variables identically distributed according to a multivariate normal distribution: Y₁, …, Yₙ ~ N(μ, Σ). When defining the n × p matrix X for which the ith row is Yᵢᵀ, we obtain: X ~ MN(M, I, Σ), where each row of M is equal to μᵀ, that is M = 1ₙμᵀ, U = I is the n × n identity matrix, that is the rows are independent, and V = Σ. Maximum likelihood parameter estimation Given k matrices, each of size n × p, denoted X₁, X₂, …, Xₖ, which we assume have been sampled i.i.d. from a matrix normal distribution, the maximum likelihood estimate of the parameters can be obtained by maximizing: Πᵢ MN(Xᵢ | M, U, V). The solution for the mean has a closed form, namely M̂ = (1/k) Σᵢ Xᵢ, but the covariance parameters do not. However, these parameters can be iteratively maximized by zero-ing their gradients at: U = (1/(kp)) Σᵢ (Xᵢ − M̂)V⁻¹(Xᵢ − M̂)ᵀ and V = (1/(kn)) Σᵢ (Xᵢ − M̂)ᵀU⁻¹(Xᵢ − M̂). See for example the references therein. The covariance parameters are non-identifiable in the sense that for any scale factor, s > 0, we have: MN(X | M, U, V) = MN(X | M, sU, (1/s)V). Drawing values from the distribution Sampling from the matrix normal distribution is a special case of the sampling procedure for the multivariate normal distribution. Let Z be an n by p matrix of np independent samples from the standard normal distribution, so that Z ~ MN(0, I, I). Then let X = M + AZB, so that X ~ MN(M, AAᵀ, BᵀB), where A and B can be chosen by Cholesky decomposition or a similar matrix square root operation. Relation to other distributions Dawid (1981) provides a discussion of the relation of the matrix-valued normal distribution to other distributions.
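The sampling recipe above translates directly into code; this sketch (numpy only, with illustrative parameter values) draws X = M + AZB from Cholesky factors and spot-checks one covariance entry against the Kronecker structure Cov(X_ij, X_kl) = U_ik V_jl:

```python
# Sketch: drawing X ~ MN(M, U, V) via X = M + A Z B with A A^T = U, B^T B = V.
import numpy as np

def sample_matrix_normal(M, U, V, rng):
    n, p = M.shape
    A = np.linalg.cholesky(U)        # lower triangular, A @ A.T == U
    B = np.linalg.cholesky(V).T      # upper triangular, B.T @ B == V
    Z = rng.standard_normal((n, p))  # n*p iid standard normal entries
    return M + A @ Z @ B

rng = np.random.default_rng(0)
M = np.zeros((3, 2))
U = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
V = np.array([[1.0, 0.3],
              [0.3, 1.0]])
draws = np.stack([sample_matrix_normal(M, U, V, rng) for _ in range(50_000)])
# Spot-check Cov(X_00, X_10) = U_01 * V_00 = 0.5:
print(np.mean(draws[:, 0, 0] * draws[:, 1, 0]), "~", U[0, 1] * V[0, 0])
```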
https://en.wikipedia.org/wiki/Midpoint
In geometry, the midpoint is the middle point of a line segment. It is equidistant from both endpoints, and it is the centroid both of the segment and of the endpoints. It bisects the segment. Formula The midpoint of a segment in n-dimensional space whose endpoints are A = (a1, a2, ..., an) and B = (b1, b2, ..., bn) is given by (A + B)/2. That is, the ith coordinate of the midpoint (i = 1, 2, ..., n) is (ai + bi)/2. Construction Given two points of interest, finding the midpoint of the line segment they determine can be accomplished by a compass and straightedge construction. The midpoint of a line segment, embedded in a plane, can be located by first constructing a lens using circular arcs of equal (and large enough) radii centered at the two endpoints, then connecting the cusps of the lens (the two points where the arcs intersect). The point where the line connecting the cusps intersects the segment is then the midpoint of the segment. It is more challenging to locate the midpoint using only a compass, but it is still possible according to the Mohr–Mascheroni theorem. Geometric properties involving midpoints Circle The midpoint of any diameter of a circle is the center of the circle. Any line perpendicular to any chord of a circle and passing through its midpoint also passes through the circle's center. The butterfly theorem states that, if M is the midpoint of a chord PQ of a circle, through which two other chords AB and CD are drawn, then AD and BC intersect chord PQ at X and Y respectively, such that M is the midpoint of XY. Ellipse The midpoint of any segment which is an area bisector or perimeter bisector of an ellipse is the ellipse's center. The ellipse's center is also the midpoint of a segment connecting the two foci of the ellipse. Hyperbola The midpoint of a segment connecting a hyperbola's vertices is the center of the hyperbola. Triangle The perpendicular bisector of a side of a triangle is the line that is perpendicular to that side and passes through its midpoint. The three perpendicular bisectors of a triangle's three sides intersect at the circumcenter (the center of the circle through the three vertices). The median of a triangle's side passes through both the side's midpoint and the triangle's opposite vertex. The three medians of a triangle intersect at the triangle's centroid (the point on which the triangle would balance if it were made of a thin sheet of uniform-density metal). The nine-point center of a triangle lies at the midpoint between the circumcenter and the orthocenter. These points are all on the Euler line. A midsegment (or midline) of a triangle is a line segment that joins the midpoints of two sides of the triangle. It is parallel to the third side and has a length equal to one half of that third side. The medial triangle of a given triangle has vertices at the midpoints of the given triangle's sides, therefore its sides are the three midsegments of the given triangle. It shares the same centroid and medians with the given triangle. The perimeter of the medial triangle e
https://en.wikipedia.org/wiki/Statistics%20Sweden
Statistics Sweden (Statistiska centralbyrån; SCB) is the Swedish government agency operating under the Ministry of Finance and responsible for producing official statistics for decision-making, debate and research. The agency's responsibilities include: developing, producing and disseminating statistics; active participation in international statistical cooperation; coordination and support of the Swedish system for official statistics, which includes 26 authorities responsible for official statistics in their areas of expertise. National statistics in Sweden date back to 1686 when the parishes of the Church of Sweden were ordered to start keeping records on the population. SCB's predecessor, the Tabellverket ("office for tabulation"), was set up in 1749, and the current name was adopted in 1858. Subjects Statistics Sweden produces statistics in several different subject areas. The agency has approximately 1,350 employees. The offices of the agency are located in Stockholm and Örebro. Statistics Sweden publishes the Journal of Official Statistics. See also Demographics of Sweden Eurostat Government agencies in Sweden List of national and international statistical services References External links Demographics of Sweden Government agencies of Sweden National statistical services 1858 establishments in Sweden Government agencies established in 1858
https://en.wikipedia.org/wiki/Moving%20frame
In mathematics, a moving frame is a flexible generalization of the notion of an ordered basis of a vector space, often used to study the extrinsic differential geometry of smooth manifolds embedded in a homogeneous space. Introduction In lay terms, a frame of reference is a system of measuring rods used by an observer to measure the surrounding space by providing coordinates. A moving frame is then a frame of reference which moves with the observer along a trajectory (a curve). The method of the moving frame, in this simple example, seeks to produce a "preferred" moving frame out of the kinematic properties of the observer. In a geometrical setting, this problem was solved in the mid-19th century by Jean Frédéric Frenet and Joseph Alfred Serret. The Frenet–Serret frame is a moving frame defined on a curve which can be constructed purely from the velocity and acceleration of the curve. The Frenet–Serret frame plays a key role in the differential geometry of curves, ultimately leading to a more or less complete classification of smooth curves in Euclidean space up to congruence. The Frenet–Serret formulas show that there is a pair of functions defined on the curve, the torsion and curvature, which are obtained by differentiating the frame, and which describe completely how the frame evolves in time along the curve. A key feature of the general method is that a preferred moving frame, provided it can be found, gives a complete kinematic description of the curve. In the late 19th century, Gaston Darboux studied the problem of constructing a preferred moving frame on a surface in Euclidean space instead of a curve, the Darboux frame (or the trièdre mobile as it was then called). It turned out to be impossible in general to construct such a frame, and that there were integrability conditions which needed to be satisfied first. Later, moving frames were developed extensively by Élie Cartan and others in the study of submanifolds of more general homogeneous spaces (such as projective space). In this setting, a frame carries the geometric idea of a basis of a vector space over to other sorts of geometrical spaces (Klein geometries). Some examples of frames are: A linear frame is an ordered basis of a vector space. An orthonormal frame of a vector space is an ordered basis consisting of orthogonal unit vectors (an orthonormal basis). An affine frame of an affine space consists of a choice of origin along with an ordered basis of vectors in the associated difference space. A Euclidean frame of an affine space is a choice of origin along with an orthonormal basis of the difference space. A projective frame on n-dimensional projective space is an ordered collection of n+1 linearly independent points in the space. Frame fields in general relativity are four-dimensional frames, known in German as vierbeins. In each of these examples, the collection of all frames is homogeneous in a certain sense. In the case of linear frames, for instance, any t
https://en.wikipedia.org/wiki/Glenwood%2C%20Newfoundland%20and%20Labrador
Glenwood is a town in northeastern Newfoundland, Newfoundland and Labrador, Canada. It is in Division No. 6 on Gander Lake. Demographics In the 2021 Census of Population conducted by Statistics Canada, Glenwood had a population of living in of its total private dwellings, a change of from its 2016 population of . With a land area of , it had a population density of in 2021. See also List of cities and towns in Newfoundland and Labrador References Towns in Newfoundland and Labrador
https://en.wikipedia.org/wiki/Division%20No.%206%2C%20Subdivision%20D%2C%20Newfoundland%20and%20Labrador
Division No. 6, Subd. D is an unorganized subdivision in northeastern Newfoundland, Newfoundland and Labrador, Canada. It is in Division No. 6 on the Bay of Exploits. According to the 2016 Statistics Canada Census: Population: 682 % Change (2011-2016): 131.2 Dwellings: 769 Area (km2): 4,228.2 Density (persons per km2): 0.2 However, according to City-Data, there are only 285 residents. Newfoundland and Labrador subdivisions
https://en.wikipedia.org/wiki/1706%20in%20science
The year 1706 in science and technology involved some significant events. Mathematics William Jones publishes Synopsis palmariorum matheseos or, A New Introduction to the Mathematics, Containing the Principles of Arithmetic and Geometry Demonstrated in a Short and Easie Method ... Designed for ... Beginners in which he proposes using the symbol π (the Greek letter pi, as an abbreviation for perimeter) to represent the ratio of the circumference of a circle to its diameter. The same work introduces John Machin's quickly converging inverse-tangent series for π (pi), enabling it to be computed to 100 decimal places. Technology Francis Hauksbee produces his 'Influence machine' to generate static electricity. Publications Johann Jakob Scheuchzer begins publication in Zürich of his Beschreibung der Naturgeschichten des Schweitzerlands giving an account of the natural history and geology of Switzerland. Giovanni Battista Morgagni publishes Adversaria anatomica, the first in a series in which he describes his observations of human anatomy. Births January 17 – Benjamin Franklin, American scientist and inventor, known for his experiments with electricity (died 1790) January 28 – John Baskerville, English printer and inventor (died 1775) February 11 – Nils Rosén von Rosenstein, Swedish pediatrician (died 1773) May 12 – François Boissier de Sauvages de Lacroix, French physician and botanist (died 1767) June 10 – John Dollond, English optician (died 1761) December 17 – Émilie du Châtelet, French mathematician and physicist (died 1749) Date unknown – Giuseppe Asclepi, Italian astronomer and physicist (died 1776) Deaths June 15 – Giorgio Baglivi, Italian physician (born 1668) August 6 – Jean-Baptiste Du Hamel, French scientist, philosophe (born 1624) Date unknown – Jean Le Fèvre, French astronomer (born 1652) Date unknown – Jeanne Dumée, French astronomer (born 1660) References 18th century in science 1700s in science
https://en.wikipedia.org/wiki/Modular%20equation
In mathematics, a modular equation is an algebraic equation satisfied by moduli, in the sense of moduli problems. That is, given a number of functions on a moduli space, a modular equation is an equation holding between them, or in other words an identity for moduli. The most frequent use of the term modular equation is in relation to the moduli problem for elliptic curves. In that case the moduli space itself is of dimension one. That implies that any two rational functions F and G, in the function field of the modular curve, will satisfy a modular equation P(F,G) = 0 with P a non-zero polynomial of two variables over the complex numbers. For suitable non-degenerate choice of F and G, the equation P(X,Y) = 0 will actually define the modular curve. This can be qualified by saying that P, in the worst case, will be of high degree and the plane curve it defines will have singular points; and the coefficients of P may be very large numbers. Further, the 'cusps' of the moduli problem, which are the points of the modular curve not corresponding to honest elliptic curves but degenerate cases, may be difficult to read off from knowledge of P. In that sense a modular equation becomes the equation of a modular curve. Such equations first arose in the theory of multiplication of elliptic functions (geometrically, the n2-fold covering map from a 2-torus to itself given by the mapping x → n·x on the underlying group) expressed in terms of complex analysis. See also Modular lambda function Ramanujan's lost notebook References Modular forms
https://en.wikipedia.org/wiki/Oscar%20Buneman
Oscar Buneman (28 September 1913 – 24 January 1993) made advances in science, engineering, and mathematics. Buneman was a pioneer of computational plasma physics and plasma simulation. Career In 1940, upon completion of his PhD with Douglas Hartree, Buneman joined Hartree's magnetron research group, assisting the development of radar during World War II. They discovered the Buneman–Hartree criterion for the voltage threshold of magnetron operation. After the war, Buneman developed theories and simulations of collisionless dissipation of currents, called the Buneman instability. This is an example of anomalous resistivity or absorption. It is anomalous because the phenomenon does not depend on collisions. Buneman advanced elliptic equation solver methods and their associated applications (as well as fast Fourier transforms). Personal life On 24 January 1993, Oscar Buneman died near Stanford University at the age of 79. The computer scientist Peter Buneman is his son. Publications Buneman, O., "Time reversible difference procedures". Journal of Computational Physics. 1, 517 (1967). Buneman, O., "A compact non-iterative poisson-solver". SUIPR report 294, Stanford University (1969). Buneman, O., "Fast numerical procedures for computer experiments on relativistic plasmas", in "Relativistic Plasmas (The Coral Gables Conference)", Benjamin, NY, 1968. Buneman, O., et al., "Principles and capabilities of 3d EM particle simulations". Journal of Computational Physics. 38, 1 (1980). References External links and resources Langdon, Bruce, "Remembrances of Oscar Buneman". ICNSP'98. Oscar Buneman Papers Rita Meyer-Spasche/Rolf Tomas Nossum: Persecution and Patronage: Oscar Buneman's years in Britain. In: Almagest, International Journal for the History of Scientific Ideas, Vol. 7, Issue 2, 2016 1913 births 1993 deaths Plasma physicists Fellows of the American Physical Society
https://en.wikipedia.org/wiki/Smarandache%E2%80%93Wellin%20number
In mathematics, a Smarandache–Wellin number is an integer that in a given base is the concatenation of the first n prime numbers written in that base. Smarandache–Wellin numbers are named after Florentin Smarandache and Paul R. Wellin. The first decimal Smarandache–Wellin numbers are: 2, 23, 235, 2357, 235711, 23571113, 2357111317, 235711131719, 23571113171923, 2357111317192329, ... . Smarandache–Wellin prime A Smarandache–Wellin number that is also prime is called a Smarandache–Wellin prime. The first three are 2, 23 and 2357 . The fourth is 355 digits long: it is the result of concatenating the first 128 prime numbers, through 719. The primes at the end of the concatenation in the Smarandache–Wellin primes are 2, 3, 7, 719, 1033, 2297, 3037, 11927, ... . The indices of the Smarandache–Wellin primes in the sequence of Smarandache–Wellin numbers are: 1, 2, 4, 128, 174, 342, 435, 1429, ... . The 1429th Smarandache–Wellin number is a probable prime with 5719 digits ending in 11927, discovered by Eric W. Weisstein in 1998. If it is proven prime, it will be the eighth Smarandache–Wellin prime. In March 2009, Weisstein's search showed the index of the next Smarandache–Wellin prime (if one exists) is at least 22077. See also Copeland–Erdős constant Champernowne constant, another example of a number obtained by concatenating a representation in a given base. References External links List of first 54 Smarandache–Wellin numbers with factorizations Smarandache–Wellin primes at The Prime Glossary Smith, S. "A Set of Conjectures on Smarandache Sequences." Bull. Pure Appl. Sci. 15E, 101–107, 1996. Base-dependent integer sequences Prime numbers
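A short sketch for generating the numbers and flagging the prime ones (sympy is assumed for the primality tests; only the small leading cases are checked here):

```python
# Sketch: the first ten base-10 Smarandache-Wellin numbers, with primality flags.
from sympy import isprime, nextprime

def smarandache_wellin(count):
    digits, p, out = "", 2, []
    for _ in range(count):
        digits += str(p)        # concatenate the next prime's decimal digits
        out.append(int(digits))
        p = nextprime(p)
    return out

for i, n in enumerate(smarandache_wellin(10), start=1):
    print(f"{i:2d}: {n} ({'prime' if isprime(n) else 'composite'})")
# Primes appear at indices 1, 2 and 4 (the values 2, 23 and 2357), as above.
```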
https://en.wikipedia.org/wiki/Genichi%20Taguchi
Genichi Taguchi (1924–2012) was an engineer and statistician. From the 1950s on, Taguchi developed a methodology for applying statistics to improve the quality of manufactured goods. Taguchi methods have been controversial among some conventional Western statisticians, but others have accepted many of the concepts introduced by him as valid extensions to the body of knowledge. Biography Taguchi was born and raised in the textile town of Tokamachi, in Niigata prefecture. He initially studied textile engineering at Kiryu Technical College with the intention of entering the family kimono business. However, with the escalation of World War II in 1942, he was drafted into the Astronomical Department of the Navigation Institute of the Imperial Japanese Navy. After the war, in 1948 he joined the Ministry of Public Health and Welfare, where he came under the influence of eminent statistician Matosaburo Masuyama, who kindled his interest in the design of experiments. He also worked at the Institute of Statistical Mathematics during this time, and supported experimental work on the production of penicillin at Morinaga Pharmaceuticals, a Morinaga Seika company. In 1950, he joined the Electrical Communications Laboratory (ECL) of the Nippon Telegraph and Telephone Corporation just as statistical quality control was beginning to become popular in Japan, under the influence of W. Edwards Deming and the Union of Japanese Scientists and Engineers. ECL was engaged in a rivalry with Bell Labs to develop crossbar and telephone switching systems, and Taguchi spent his twelve years there developing methods for enhancing quality and reliability. Even at this point, he was beginning to consult widely in Japanese industry, with Toyota being an early adopter of his ideas. During the 1950s, he collaborated widely, and in 1954–1955 he was a visiting professor at the Indian Statistical Institute, where he worked with C. R. Rao, Ronald Fisher and Walter A. Shewhart. While working at the SQC Unit of ISI, he was introduced to the orthogonal arrays invented by C. R. Rao - a topic which was to be instrumental in enabling him to develop the foundation blocks of what is now known as Taguchi methods. On completing his doctorate at Kyushu University in 1962, he left ECL, though he maintained a consulting relationship. In the same year he visited Princeton University under the sponsorship of John Tukey, who arranged a spell at Bell Labs, his old ECL rivals. In 1964 he became professor of engineering at Aoyama Gakuin University, Tokyo. In 1966 he began a collaboration with Yuin Wu, who later emigrated to the U.S. and, in 1980, invited Taguchi to lecture. During his visit there, Taguchi himself financed a return to Bell Labs, where his initial teaching had made little enduring impact. This second visit began a collaboration with Madhav Phadke and a growing enthusiasm for his methodology in Bell Labs and elsewhere, including Ford Motor Company, Boeing, Xerox and ITT. Since 1982, Genichi Taguchi has been an advi
https://en.wikipedia.org/wiki/Risk-neutral%20measure
In mathematical finance, a risk-neutral measure (also called an equilibrium measure, or equivalent martingale measure) is a probability measure such that each share price is exactly equal to the discounted expectation of the share price under this measure. This is heavily used in the pricing of financial derivatives due to the fundamental theorem of asset pricing, which implies that in a complete market, a derivative's price is the discounted expected value of the future payoff under the unique risk-neutral measure. Such a measure exists if and only if the market is arbitrage-free. A risk-neutral measure is a probability measure The easiest way to remember what the risk-neutral measure is, or to explain it to a probability generalist who might not know much about finance, is to realize that it is: The probability measure of a transformed random variable. Typically this transformation is the utility function of the payoff. The risk-neutral measure would be the measure corresponding to an expectation of the payoff with a linear utility. An implied probability measure, that is one implied from the current observable/posted/traded prices of the relevant instruments. Relevant means those instruments that are causally linked to the events in the probability space under consideration (i.e. underlying prices plus derivatives), and It is the implied probability measure (solves a kind of inverse problem) that is defined using a linear (risk-neutral) utility in the payoff, assuming some known model for the payoff. This means that you try to find the risk-neutral measure by solving the equation where current prices are the expected present value of the future pay-offs under the risk-neutral measure. The concept of a unique risk-neutral measure is most useful when one imagines making prices across a number of derivatives that would make a unique risk-neutral measure, since it implies a kind of consistency in one's hypothetical untraded prices, and theoretically points to arbitrage opportunities in markets where bid/ask prices are visible. It is also worth noting that in most introductory applications in finance, the pay-offs under consideration are deterministic given knowledge of prices at some terminal or future point in time. This is not strictly necessary to make use of these techniques. Motivating the use of risk-neutral measures Prices of assets depend crucially on their risk as investors typically demand more profit for bearing more risk. Therefore, today's price of a claim on a risky amount realised tomorrow will generally differ from its expected value. Most commonly, investors are risk-averse and today's price is below the expectation, remunerating those who bear the risk (at least in large financial markets; examples of risk-seeking markets are casinos and lotteries). To price assets, consequently, the calculated expected values need to be adjusted for an investor's risk preferences (see also Sharpe ratio). Unfortunately, the discount rat
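A standard one-period binomial model (a textbook illustration, not from this article) makes the idea concrete: the risk-neutral probability q is the unique weight under which the discounted expected stock price equals today's price, and a derivative is then priced as a discounted expectation under q:

```python
# Sketch: one-period binomial model; q makes the discounted stock a martingale.
import math

S0, u, d, r, T, K = 100.0, 1.2, 0.8, 0.05, 1.0, 100.0   # illustrative values
q = (math.exp(r * T) - d) / (u - d)   # risk-neutral up-probability
assert 0 < q < 1                      # no-arbitrage requires d < e^{rT} < u

# Discounted expectation of S_T under q reproduces S0 ...
S_check = math.exp(-r * T) * (q * u * S0 + (1 - q) * d * S0)
# ... and the same expectation prices a European call with strike K.
call = math.exp(-r * T) * (q * max(u * S0 - K, 0.0) + (1 - q) * max(d * S0 - K, 0.0))
print(f"q = {q:.4f}, discounted E[S_T] = {S_check:.2f}, call price = {call:.2f}")
```

Note that q is implied by the posted prices (S0, u, d, r) alone; the investor's real-world probability of an up-move never enters the calculation.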
https://en.wikipedia.org/wiki/Axiom%20of%20dependent%20choice
In mathematics, the axiom of dependent choice, denoted by DC, is a weak form of the axiom of choice (AC) that is still sufficient to develop most of real analysis. It was introduced by Paul Bernays in a 1942 article that explores which set-theoretic axioms are needed to develop analysis. Formal statement A homogeneous relation R on X is called a total relation if for every a ∈ X there exists some b ∈ X such that a R b is true. The axiom of dependent choice can be stated as follows: For every nonempty set X and every total relation R on X there exists a sequence (xₙ) in X such that xₙ R xₙ₊₁ for all n ∈ ℕ. In fact, x0 may be taken to be any desired element of X. (To see this, apply the axiom as stated above to the set of finite sequences that start with x0 and in which subsequent terms are in relation R, together with the total relation on this set of the second sequence being obtained from the first by appending a single term.) If the set X above is restricted to be the set of all real numbers, then the resulting axiom is denoted by DC(ℝ). Use Even without such an axiom, for any n, one can use ordinary mathematical induction to form the first n terms of such a sequence. The axiom of dependent choice says that we can form a whole (countably infinite) sequence this way. The axiom DC is the fragment of AC that is required to show the existence of a sequence constructed by transfinite recursion of countable length, if it is necessary to make a choice at each step and if some of those choices cannot be made independently of previous choices. Equivalent statements Over ZF (Zermelo–Fraenkel set theory without the axiom of choice), DC is equivalent to the Baire category theorem for complete metric spaces. It is also equivalent over ZF to the Löwenheim–Skolem theorem. DC is also equivalent over ZF to the statement that every pruned tree with ω levels has a branch (proof below). Furthermore, DC is equivalent to a weakened form of Zorn's lemma; specifically DC is equivalent to the statement that any partial order such that every well-ordered chain is finite and bounded, must have a maximal element. Relation with other axioms Unlike full AC, DC is insufficient to prove (given ZF) that there is a non-measurable set of real numbers, or that there is a set of real numbers without the property of Baire or without the perfect set property. This follows because the Solovay model satisfies ZF + DC, and every set of real numbers in this model is Lebesgue measurable, has the Baire property and has the perfect set property. The axiom of dependent choice implies the axiom of countable choice and is strictly stronger. It is possible to generalize the axiom to produce transfinite sequences. If these are allowed to be arbitrarily long, then it becomes equivalent to the full axiom of choice. Notes References Axiom of choice
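For illustration, the statement is compact enough to formalize directly; here is a minimal Lean 4 sketch (the name and exact phrasing are illustrative, not Mathlib's):

```lean
-- Sketch: the axiom of dependent choice as a proposition, for a total
-- relation R on a type X, with the starting point x₀ made explicit.
def DependentChoice : Prop :=
  ∀ (X : Type) (R : X → X → Prop),
    (∀ a : X, ∃ b : X, R a b) →   -- R is a total relation
      ∀ x₀ : X,
        ∃ f : Nat → X, f 0 = x₀ ∧ ∀ n : Nat, R (f n) (f (n + 1))
```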
https://en.wikipedia.org/wiki/List%20of%20census%20divisions%20of%20Ontario
The Province of Ontario has 51 first-level administrative divisions, which collectively cover the whole province. With two exceptions, their areas match the 49 census divisions Statistics Canada has for Ontario. The Province has four types of first-level division: single-tier municipalities, regional municipalities, counties, and districts. The first three are types of municipal government but districts are not—they are defined geographic areas (some quite large) used in many contexts. The last three have within them multiple smaller, lower-tier municipalities but the single-tier municipalities do not. Regional municipalities and counties differ primarily in the services that they provide to their residents. (Lower-tier municipalities are generally treated as census subdivisions by Statistics Canada.) In some cases, an administrative division may retain its historical name even if it changes government type. For instance, Oxford County, Haldimand County, Norfolk County and Prince Edward County are no longer counties: Oxford is a regional municipality and the others are single-tier municipalities. Several administrative divisions in Ontario have significantly changed their borders or have been discontinued entirely. See: Historic counties of Ontario. Types of administrative divisions Single-tier municipalities A single-tier municipality is governed by one municipal administration, with neither a county nor regional government above it, nor further municipal subdivisions below it (cf. independent city). Single-tier municipalities are either former regional municipalities or counties whose municipal governments were amalgamated in the 1990s into a single administration. Some single-tier municipalities of this type (e.g., Toronto, Ottawa, Hamilton, Greater Sudbury) were created where a former regional municipality consisted of a single dominant urban centre and its suburbs or satellite towns or villages, while others (e.g., Brant County, Chatham-Kent, Haldimand-Norfolk, Kawartha Lakes, and Prince Edward County) were created from predominantly rural divisions with a collection of distinct communities. A single-tier municipality should not be confused with a separated municipality; such municipalities are considered as part of their surrounding county for census purposes, but are not administratively connected to the county. With the exception of Greater Sudbury, single-tier municipalities that are not considered to be part of a county, regional municipality, or district are found only in Southern Ontario. Current single-tier municipalities in Ontario that are also census divisions: Regional municipalities Regional municipalities (or regions) are upper-tier municipalities that generally have more servicing responsibilities than the counties. They generally provide the following services: maintenance and construction of arterial roads in both rural and urban areas, transit, policing, sewer and water systems, waste disposal, region-wide land u
https://en.wikipedia.org/wiki/Interaction%20%28statistics%29
In statistics, an interaction may arise when considering the relationship among three or more variables, and describes a situation in which the effect of one causal variable on an outcome depends on the state of a second causal variable (that is, when effects of the two causes are not additive). Although commonly thought of in terms of causal relationships, the concept of an interaction can also describe non-causal associations (then also called moderation or effect modification). Interactions are often considered in the context of regression analyses or factorial experiments. The presence of interactions can have important implications for the interpretation of statistical models. If two variables of interest interact, the relationship between each of the interacting variables and a third "dependent variable" depends on the value of the other interacting variable. In practice, this makes it more difficult to predict the consequences of changing the value of a variable, particularly if the variables it interacts with are hard to measure or difficult to control. The notion of "interaction" is closely related to that of moderation that is common in social and health science research: the interaction between an explanatory variable and an environmental variable suggests that the effect of the explanatory variable has been moderated or modified by the environmental variable. Introduction An interaction variable or interaction feature is a variable constructed from an original set of variables to try to represent either all of the interaction present or some part of it. In exploratory statistical analyses it is common to use products of original variables as the basis of testing whether interaction is present with the possibility of substituting other more realistic interaction variables at a later stage. When there are more than two explanatory variables, several interaction variables are constructed, with pairwise-products representing pairwise-interactions and higher order products representing higher order interactions. Thus, for a response Y and two variables x1 and x2 an additive model would be: Y = a + b1·x1 + b2·x2 + error. In contrast to this, Y = a + b1·x1 + b2·x2 + b3·x1·x2 + error is an example of a model with an interaction between variables x1 and x2 ("error" refers to the random variable whose value is that by which Y differs from the expected value of Y; see errors and residuals in statistics). Often, models are presented without the interaction term b3·x1·x2, but this confounds the main effect and interaction effect (i.e., without specifying the interaction term, it is possible that any main effect found is actually due to an interaction). In modeling In ANOVA A simple setting in which interactions can arise is a two-factor experiment analyzed using Analysis of Variance (ANOVA). Suppose we have two binary factors A and B. For example, these factors might indicate whether either of two treatments were administered to a patient, with the treatments applied either singly, or in combination. We can then co
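The difference between the additive and interaction models above is easy to see in a fit; this sketch (numpy only, synthetic data) estimates both by ordinary least squares:

```python
# Sketch: additive fit vs. fit with an x1*x2 interaction term (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.0 * x2 + 1.5 * x1 * x2 + rng.normal(0.0, 0.5, n)

X_add = np.column_stack([np.ones(n), x1, x2])            # additive model
X_int = np.column_stack([np.ones(n), x1, x2, x1 * x2])   # with interaction
beta_add, *_ = np.linalg.lstsq(X_add, y, rcond=None)
beta_int, *_ = np.linalg.lstsq(X_int, y, rcond=None)
print("additive:   ", np.round(beta_add, 2))  # cannot represent b3
print("interaction:", np.round(beta_int, 2))  # ~ [1.0, 2.0, -1.0, 1.5]
```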
https://en.wikipedia.org/wiki/Simple%20ring
In abstract algebra, a branch of mathematics, a simple ring is a non-zero ring that has no two-sided ideal besides the zero ideal and itself. In particular, a commutative ring is a simple ring if and only if it is a field. The center of a simple ring is necessarily a field. It follows that a simple ring is an associative algebra over this field. It is then called a simple algebra over this field. Several references (e.g., Lang (2002) or Bourbaki (2012)) require in addition that a simple ring be left or right Artinian (or equivalently semi-simple). Under such terminology a non-zero ring with no non-trivial two-sided ideals is called quasi-simple. Rings which are simple as rings but are not a simple module over themselves do exist: a full matrix ring over a field does not have any nontrivial two-sided ideals (since any ideal of Mn(F) is of the form Mn(I) with I an ideal of F), but it has nontrivial left ideals (for example, the sets of matrices which have some fixed zero columns). An immediate example of a simple ring is a division ring, where every nonzero element has a multiplicative inverse, for instance, the quaternions. Also, for any n ≥ 1, the algebra Mn(D) of n × n matrices with entries in a division ring D is simple. Joseph Wedderburn proved that if a ring R is a finite-dimensional simple algebra over a field k, it is isomorphic to a matrix algebra over some division algebra over k. In particular, the only simple rings that are finite-dimensional algebras over the real numbers are rings of matrices over either the real numbers, the complex numbers, or the quaternions. Wedderburn proved these results in 1907 in his doctoral thesis, On hypercomplex numbers, which appeared in the Proceedings of the London Mathematical Society. His thesis classified finite-dimensional simple and also semisimple algebras over fields. Simple algebras are building blocks of semisimple algebras: any finite-dimensional semisimple algebra is a Cartesian product, in the sense of algebras, of finite-dimensional simple algebras. One must be careful of the terminology: not every simple ring is a semisimple ring, and not every simple algebra is a semisimple algebra! However, every finite-dimensional simple algebra is a semisimple algebra, and every simple ring that is left or right artinian is a semisimple ring. An example of a simple ring that is not semisimple is the Weyl algebra. The Weyl algebra also gives an example of a simple algebra that is not a matrix algebra over a division algebra over its center: the Weyl algebra is infinite-dimensional, so Wedderburn's theorem does not apply. Wedderburn's result was later generalized to semisimple rings in the Wedderburn–Artin theorem: this says that every semisimple ring is a finite product of matrix rings over division rings. As a consequence of this generalization, every simple ring that is left or right artinian is a matrix ring over a division ring. Examples Let R be the field of real numbers, C be the field of complex numbers
https://en.wikipedia.org/wiki/Parallelogram%20of%20force
The parallelogram of forces is a method for solving (or visualizing) the results of applying two forces to an object. When more than two forces are involved, the geometry is no longer parallelogrammatic, but the same principles apply. Forces, being vectors, are observed to obey the laws of vector addition, and so the overall (resultant) force due to the application of a number of forces can be found geometrically by drawing vector arrows for each force. For example, see Figure 1. This construction has the same result as moving F2 so its tail coincides with the head of F1, and taking the net force as the vector joining the tail of F1 to the head of F2. This procedure can be repeated to add F3 to the resultant F1 + F2, and so forth. Newton's proof Preliminary: the parallelogram of velocity Suppose a particle moves at a uniform rate along a line from A to B (Figure 2) in a given time (say, one second), while in the same time, the line AB moves uniformly from its position at AB to a position at DC, remaining parallel to its original orientation throughout. Accounting for both motions, the particle traces the line AC. Because a displacement in a given time is a measure of velocity, the length of AB is a measure of the particle's velocity along AB, the length of AD is a measure of the line's velocity along AD, and the length of AC is a measure of the particle's velocity along AC. The particle's motion is the same as if it had moved with a single velocity along AC. Newton's proof of the parallelogram of force Suppose two forces act on a particle at the origin (the "tails" of the vectors) of Figure 1. Let the lengths of the vectors F1 and F2 represent the velocities the two forces could produce in the particle by acting for a given time, and let the direction of each represent the direction in which they act. Each force acts independently and will produce its particular velocity whether the other force acts or not. At the end of the given time, the particle has both velocities. By the above proof, they are equivalent to a single velocity, Fnet. By Newton's second law, this vector is also a measure of the force which would produce that velocity, thus the two forces are equivalent to a single force. Bernoulli's proof for perpendicular vectors We model forces as Euclidean vectors or members of ℝ². Our first assumption is that the resultant of two forces is in fact another force, so that for any two forces a and b there is another force a ∘ b. Our final assumption is that the resultant of two forces doesn't change when rotated. If R is any rotation (any orthogonal map for the usual vector space structure of ℝ² with det R = 1), then for all forces a and b, R(a ∘ b) = R(a) ∘ R(b). Consider two perpendicular forces x of length a and y of length b, with c being the length of x ∘ y. Let and , where is the rotation between and , so . Under the invariance of the rotation, we get Similarly, consider two more forces and . Let be the rotation from to : , which by inspection makes . Applying these two equations Since
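Numerically, the construction described in the opening paragraph is just componentwise vector addition; a small sketch (the force values are arbitrary examples):

```python
# Sketch: composing two forces by vector addition (arbitrary example values).
import numpy as np

F1 = np.array([3.0, 0.0])   # 3 N along x
F2 = np.array([0.0, 4.0])   # 4 N along y
F_net = F1 + F2             # the diagonal of the parallelogram

magnitude = np.linalg.norm(F_net)                   # 5.0 N (3-4-5 triangle)
angle = np.degrees(np.arctan2(F_net[1], F_net[0]))  # 53.13 degrees from x
print(F_net, magnitude, angle)
```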
https://en.wikipedia.org/wiki/Borel%E2%80%93Kolmogorov%20paradox
In probability theory, the Borel–Kolmogorov paradox (sometimes known as Borel's paradox) is a paradox relating to conditional probability with respect to an event of probability zero (also known as a null set). It is named after Émile Borel and Andrey Kolmogorov. A great circle puzzle Suppose that a random variable has a uniform distribution on a unit sphere. What is its conditional distribution on a great circle? Because of the symmetry of the sphere, one might expect that the distribution is uniform and independent of the choice of coordinates. However, two analyses give contradictory results. First, note that choosing a point uniformly on the sphere is equivalent to choosing the longitude λ uniformly from [−π, π] and choosing the latitude φ from [−π/2, π/2] with density (1/2)cos φ. Then we can look at two different great circles: If the coordinates are chosen so that the great circle is an equator (latitude φ = 0), the conditional density for the longitude λ defined on the interval [−π, π] is f(λ | φ = 0) = 1/(2π). If the great circle is a line of longitude with λ = 0, the conditional density for φ on the interval [−π/2, π/2] is f(φ | λ = 0) = (1/2)cos φ. One distribution is uniform on the circle, the other is not. Yet both seem to be referring to the same great circle in different coordinate systems. Explanation and implications In case (1) above, the conditional probability that the longitude λ lies in a set E given that φ = 0 can be written P(λ ∈ E | φ = 0). Elementary probability theory suggests this can be computed as P(λ ∈ E and φ = 0)/P(φ = 0), but that expression is not well-defined since P(φ = 0) = 0. Measure theory provides a way to define a conditional probability, using the family of events Rab = {φ : a < φ < b} which are horizontal rings consisting of all points with latitude between a and b. The resolution of the paradox is to notice that in case (2), P(φ ∈ F | λ = 0) is defined using the events Lab = {λ : a < λ < b}, which are lunes (vertical wedges), consisting of all points whose longitude varies between a and b. So although P(λ ∈ E | φ = 0) and P(φ ∈ F | λ = 0) each provide a probability distribution on a great circle, one of them is defined using rings, and the other using lunes. Thus it is not surprising after all that P(λ ∈ E | φ = 0) and P(φ ∈ F | λ = 0) have different distributions. Mathematical explication Measure theoretic perspective To understand the problem we need to recognize that a distribution on a continuous random variable is described by a density f only with respect to some measure μ. Both are important for the full description of the probability distribution. Or, equivalently, we need to fully define the space on which we want to define f. Let Φ and Λ denote two random variables taking values in Ω1 = [−π/2, π/2] respectively Ω2 = [−π, π]. An event {Φ = φ, Λ = λ} gives a point on the sphere S(r) with radius r. We define the coordinate transform x = r cos φ cos λ, y = r cos φ sin λ, z = r sin φ, for which we obtain the volume element ωr(φ, λ) = r² cos φ. Furthermore, if either φ or λ is fixed, we get the volume elements ωr(λ) = r, respectively ωr(φ) = r cos φ. Let μ denote the joint measure on Ω1 × Ω2, which has a density with
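The contrast between the two conditional distributions can be reproduced by Monte Carlo; in this sketch (not from the article; the band half-width eps is arbitrary), conditioning on a thin band about the equator gives roughly uniform longitudes, while conditioning on a thin lune about a meridian gives cos φ-shaped latitudes:

```python
# Sketch: conditioning a uniform point on the sphere on two "equivalent"
# great circles gives visibly different histograms.
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=(2_000_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)   # uniform on the unit sphere
lon = np.arctan2(v[:, 1], v[:, 0])              # longitude in (-pi, pi]
lat = np.arcsin(v[:, 2])                        # latitude in [-pi/2, pi/2]

eps = 0.01
equator_band = np.abs(lat) < eps     # case (1): condition near phi = 0
meridian_lune = np.abs(lon) < eps    # case (2): condition near lambda = 0

# Longitudes within the equator band: roughly equal bin counts (uniform).
print(np.histogram(lon[equator_band], bins=4, range=(-np.pi, np.pi))[0])
# Latitudes within the meridian lune: middle bins dominate (cos-shaped).
print(np.histogram(lat[meridian_lune], bins=4, range=(-np.pi/2, np.pi/2))[0])
```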
https://en.wikipedia.org/wiki/Simon%20Newcomb
Simon Newcomb (March 12, 1835 – July 11, 1909) was a Canadian–American astronomer, applied mathematician, and autodidactic polymath. He served as Professor of Mathematics in the United States Navy and at Johns Hopkins University. Born in Nova Scotia, at the age of 19 Newcomb left an apprenticeship to join his father in Massachusetts, where the latter was teaching. Though Newcomb had little conventional schooling, he completed a BSc at Harvard in 1858. He later made important contributions to timekeeping, as well as to other fields in applied mathematics, such as economics and statistics. Fluent in several languages, he also wrote and published several popular science books and a science fiction novel. Biography Early life Simon Newcomb was born in the town of Wallace, Nova Scotia. His parents were John Burton Newcomb and his wife Emily Prince. His father was an itinerant school teacher, and frequently moved in order to teach in different parts of Canada, particularly in Nova Scotia and Prince Edward Island. Through his mother, Simon Newcomb was a distant cousin of William Henry Steeves, a Canadian Father of Confederation. Their immigrant ancestor in that line was Heinrich Stief, who immigrated from Germany and settled in New Brunswick about 1760. Newcomb seems to have had little conventional schooling and was taught by his father. He also had a short apprenticeship in 1851 to Dr. Foshay, a charlatan herbalist in New Brunswick. But his father gave him an excellent foundation for the youth's future studies. Newcomb was apprenticed to Dr. Foshay at the age of 16. Their agreement was that Newcomb would serve a five-year apprenticeship, during which time Foshay would train him in using herbs to treat illnesses. After two years Newcomb had become increasingly unhappy and disillusioned, as he realized that Foshay had an unscientific approach and was a charlatan. He left Foshay and broke their agreement. He walked to the port of Calais, Maine. There he met a ship's captain who agreed to take him to Salem, Massachusetts, where his father had moved for a teaching job. In about 1854, Newcomb joined his father in Salem, and the two journeyed together to Maryland. Newcomb taught for two years in Maryland, from 1854 to 1856; for the first year in a country school in Massey's Cross Roads, Kent County, then for a year nearby in Sudlersville in Queen Anne's County. Both were located in the largely rural area of the Eastern Shore. In his spare time Newcomb studied a variety of subjects, such as political economy and religion, but his deepest studies were made in mathematics and astronomy. In particular he read Isaac Newton's Principia (1687) at this time. In 1856 Newcomb took a position as a private tutor close to Washington, DC. He often traveled to the city to study mathematics in its libraries. He borrowed a copy of Nathaniel Bowditch's translation of Pierre-Simon Laplace's Traité de mécanique céleste from the library of the Smithsonian Institution,
https://en.wikipedia.org/wiki/Weierstrass%20function
In mathematics, the Weierstrass function is an example of a real-valued function that is continuous everywhere but differentiable nowhere. It is an example of a fractal curve. It is named after its discoverer Karl Weierstrass. The Weierstrass function has historically served the role of a pathological function, being the first published example (1872) specifically concocted to challenge the notion that every continuous function is differentiable except on a set of isolated points. Weierstrass's demonstration that continuity did not imply almost-everywhere differentiability upended mathematics, overturning several proofs that relied on geometric intuition and vague definitions of smoothness. These types of functions were denounced by contemporaries: Henri Poincaré famously described them as "monsters" and called Weierstrass' work "an outrage against common sense", while Charles Hermite wrote that they were a "lamentable scourge". The functions were difficult to visualize until the arrival of computers in the next century, and the results did not gain wide acceptance until practical applications such as models of Brownian motion necessitated infinitely jagged functions (nowadays known as fractal curves). Construction In Weierstrass's original paper, the function was defined as a Fourier series: f(x) = Σ_{n=0}^∞ aⁿ cos(bⁿπx), where 0 < a < 1, b is a positive odd integer, and ab > 1 + 3π/2. The minimum value of b for which there exists 0 < a < 1 such that these constraints are satisfied is b = 7. This construction, along with the proof that the function is not differentiable over any interval, was first delivered by Weierstrass in a paper presented to the Königliche Akademie der Wissenschaften on 18 July 1872. Despite never being differentiable, the function is continuous: Since the terms of the infinite series which defines it are bounded by ±aⁿ and this has finite sum for 0 < a < 1, convergence of the sum of the terms is uniform by the Weierstrass M-test with Mₙ = aⁿ. Since each partial sum is continuous, by the uniform limit theorem, it follows that f is continuous. Additionally, since each partial sum is uniformly continuous, it follows that f is also uniformly continuous. It might be expected that a continuous function must have a derivative, or that the set of points where it is not differentiable should be countably infinite or finite. According to Weierstrass in his paper, earlier mathematicians including Gauss had often assumed that this was true. This might be because it is difficult to draw or visualise a continuous function whose set of nondifferentiable points is something other than a countable set of points. Analogous results for better behaved classes of continuous functions do exist, for example the Lipschitz functions, whose set of non-differentiability points must be a Lebesgue null set (Rademacher's theorem). When we try to draw a general continuous function, we usually draw the graph of a function which is Lipschitz or otherwise well-behaved. The Weierstrass function was one of the first fr
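Partial sums of the series are easy to evaluate; this sketch (with a = 0.5 and b = 13, which satisfy the constraints above since ab = 6.5 > 1 + 3π/2, and with matplotlib assumed for plotting) also prints the M-test tail bound to show how few terms suffice:

```python
# Sketch: a truncated Weierstrass series with a = 0.5, b = 13 (0 < a < 1,
# b a positive odd integer, ab = 6.5 > 1 + 3*pi/2).
import numpy as np
import matplotlib.pyplot as plt

a, b, N = 0.5, 13, 20
x = np.linspace(-2.0, 2.0, 20_000)
f = sum(a**n * np.cos(b**n * np.pi * x) for n in range(N))

# M-test tail bound on the truncation error: sum_{n>=N} a^n = a^N / (1 - a).
print("uniform error bound:", a**N / (1 - a))   # ~2e-6, invisible at plot scale
# (For large n the phase of cos(b^n * pi * x) loses float accuracy, but those
# terms have amplitudes a^n far below plotting resolution anyway.)
plt.plot(x, f, linewidth=0.3)
plt.title("Weierstrass function, partial sum with 20 terms")
plt.show()
```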
https://en.wikipedia.org/wiki/Piecewise%20linear%20function
In mathematics and statistics, a piecewise linear, PL or segmented function is a real-valued function of a real variable, whose graph is composed of straight-line segments. Definition A piecewise linear function is a function defined on a (possibly unbounded) interval of real numbers, such that there is a collection of intervals on each of which the function is an affine function. (Thus "piecewise linear" is actually defined to mean "piecewise affine".) If the domain of the function is compact, there needs to be a finite collection of such intervals; if the domain is not compact, it may either be required to be finite or to be locally finite in the reals. Examples Consider, for example, a continuous function whose slope changes at −3, 0, and 3: it is piecewise linear with four pieces. The graph of this function is shown to the right. Since the graph of an affine(*) function is a line, the graph of a piecewise linear function consists of line segments and rays. The x values (in the above example −3, 0, and 3) where the slope changes are typically called breakpoints, changepoints, threshold values or knots. As in many applications, this function is also continuous. The graph of a continuous piecewise linear function on a compact interval is a polygonal chain. Other examples of piecewise linear functions include the absolute value function, the sawtooth function, and the floor function. (*) A linear function satisfies f(αx + βy) = αf(x) + βf(y) by definition and therefore in particular f(0) = 0; functions whose graph is a straight line are affine rather than linear. Fitting to a curve An approximation to a known curve can be found by sampling the curve and interpolating linearly between the points. An algorithm for computing the most significant points subject to a given error tolerance has been published. Fitting to data If partitions, and then breakpoints, are already known, linear regression can be performed independently on these partitions. However, continuity is not preserved in that case, and also there is no unique reference model underlying the observed data. A stable algorithm with this case has been derived. If partitions are not known, the residual sum of squares can be used to choose optimal separation points. However efficient computation and joint estimation of all model parameters (including the breakpoints) may be obtained by an iterative procedure currently implemented in the package segmented for the R language. A variant of decision tree learning called model trees learns piecewise linear functions. Notation The notion of a piecewise linear function makes sense in several different contexts. Piecewise linear functions may be defined on n-dimensional Euclidean space, or more generally any vector space or affine space, as well as on piecewise linear manifolds and simplicial complexes (see simplicial map). In each case, the function may be real-valued, or it may take values from a vector space, an affine space, a piecewise linear manifold, or a simplicial complex. (In these contexts, the term “linear” does no
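As noted under "Fitting to data" above, when the breakpoints are known the problem reduces to ordinary least squares; this sketch (numpy only, synthetic data with a single knot at 0) fits a continuous piecewise linear function on a "hinge" basis — unlike the segmented package mentioned above, it does not estimate the breakpoints themselves:

```python
# Sketch: continuous piecewise linear least squares with a known knot at 0,
# using the hinge basis [1, x, max(0, x - c)].
import numpy as np

def hinge_design(x, knots):
    cols = [np.ones_like(x), x] + [np.maximum(0.0, x - c) for c in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-5.0, 5.0, size=300))
true = np.where(x < 0.0, -x, 2.0 * x)       # slope -1 then 2, knot at 0
y = true + rng.normal(0.0, 0.3, size=x.size)

beta, *_ = np.linalg.lstsq(hinge_design(x, [0.0]), y, rcond=None)
print(np.round(beta, 2))   # ~ [0.0, -1.0, 3.0]: slope -1 + 3 = 2 past the knot
```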
https://en.wikipedia.org/wiki/Weierstrass%20M-test
In mathematics, the Weierstrass M-test is a test for determining whether an infinite series of functions converges uniformly and absolutely. It applies to series whose terms are bounded functions with real or complex values, and is analogous to the comparison test for determining the convergence of series of real or complex numbers. It is named after the German mathematician Karl Weierstrass (1815–1897). Statement Weierstrass M-test. Suppose that (fn) is a sequence of real- or complex-valued functions defined on a set A, and that there is a sequence of non-negative numbers (Mn) satisfying the conditions |fn(x)| ≤ Mn for all n ≥ 1 and all x ∈ A, and Σ_{n=1}^{∞} Mn converges. Then the series Σ_{n=1}^{∞} fn(x) converges absolutely and uniformly on A. The result is often used in combination with the uniform limit theorem. Together they say that if, in addition to the above conditions, the set A is a topological space and the functions fn are continuous on A, then the series converges to a continuous function. Proof Consider the sequence of functions Sn(x) = f1(x) + f2(x) + ⋯ + fn(x). Since the series Σ Mn converges and Mn ≥ 0 for every n, then by the Cauchy criterion, for every ε > 0 there exists N such that M_{n+1} + M_{n+2} + ⋯ + M_m < ε for all m > n > N. For the chosen N and all m > n > N, |Sm(x) − Sn(x)| = |f_{n+1}(x) + ⋯ + f_m(x)| ≤ |f_{n+1}(x)| + ⋯ + |f_m(x)| ≤ M_{n+1} + ⋯ + M_m < ε for every x ∈ A. (Inequality (1) follows from the triangle inequality.) The sequence (Sn(x)) is thus a Cauchy sequence in R or C, and by completeness, it converges to some number S(x) that depends on x. For n > N we can write |S(x) − Sn(x)| = lim_{m→∞} |Sm(x) − Sn(x)| ≤ ε. Since N does not depend on x, this means that the sequence of partial sums converges uniformly to the function S. Hence, by definition, the series converges uniformly. Analogously, one can prove that Σ |fn(x)| converges uniformly. Generalization A more general version of the Weierstrass M-test holds if the common codomain of the functions (fn) is a Banach space, in which case the premise is to be replaced by ‖fn‖ ≤ Mn, where ‖·‖ is the norm on the Banach space. For an example of the use of this test on a Banach space, see the article Fréchet derivative. See also Example of Weierstrass M-test References Functional analysis Convergence tests Articles containing proofs
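A numerical illustration of the test for the series with terms fn(x) = cos(nx)/n² on A = [0, 2π]: here |fn(x)| ≤ Mn = 1/n² and Σ Mn converges, so the supremum of any tail of the series over A stays below the corresponding tail of Σ Mn, exactly as in the proof (the supremum is sampled on a grid here).

```python
import numpy as np

# f_n(x) = cos(n x) / n**2 on A = [0, 2*pi]: |f_n(x)| <= M_n = 1/n**2
# for all x, and sum M_n converges (to pi**2/6), so convergence is uniform.
x = np.linspace(0, 2 * np.pi, 1001)
N, M = 50, 2000                         # partial-sum cut-off, stand-in for infinity
n = np.arange(1, M + 1)
terms = np.cos(np.outer(n, x)) / (n[:, None] ** 2)

tail = np.abs(terms[N:].sum(axis=0)).max()   # sup over sampled x of a series tail
bound = (1.0 / n[N:] ** 2).sum()             # the matching tail of sum M_n
print(tail <= bound, tail, bound)            # True: the tail sits under the M-bound
```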
https://en.wikipedia.org/wiki/Plural%20quantification
In mathematics and logic, plural quantification is the theory that an individual variable x may take on plural, as well as singular, values. As well as substituting individual objects such as Alice, the number 1, the tallest building in London etc. for x, we may substitute both Alice and Bob, or all the numbers between 0 and 10, or all the buildings in London over 20 stories. The point of the theory is to give first-order logic the power of set theory, but without any "existential commitment" to such objects as sets. The classic expositions are Boolos 1984 and Lewis 1991. History The view is commonly associated with George Boolos, though it is older (see notably Simons 1982), and is related to the view of classes defended by John Stuart Mill and other nominalist philosophers. Mill argued that universals or "classes" are not a peculiar kind of thing, having an objective existence distinct from the individual objects that fall under them, but "is neither more nor less than the individual things in the class". (Mill 1904, II. ii. 2,also I. iv. 3). A similar position was also discussed by Bertrand Russell in chapter VI of Russell (1903), but later dropped in favour of a "no-classes" theory. See also Gottlob Frege 1895 for a critique of an earlier view defended by Ernst Schroeder. The general idea can be traced back to Leibniz. (Levey 2011, pp. 129–133) Interest revived in plurals with work in linguistics in the 1970s by Remko Scha, Godehard Link, Fred Landman, Friederike Moltmann, Roger Schwarzschild, Peter Lasersohn and others, who developed ideas for a semantics of plurals. Background and motivation Multigrade (variably polyadic) predicates and relations Sentences like Alice and Bob cooperate. Alice, Bob and Carol cooperate. are said to involve a multigrade (also known as variably polyadic, also anadic) predicate or relation ("cooperate" in this example), meaning that they stand for the same concept even though they don't have a fixed arity (cf. Linnebo & Nicolas 2008). The notion of multigrade relation/predicate has appeared as early as the 1940s and has been notably used by Quine (cf. Morton 1975). Plural quantification deals with formalizing the quantification over the variable-length arguments of such predicates, e.g. "xx cooperate" where xx is a plural variable. Note that in this example it makes no sense, semantically, to instantiate xx with the name of a single person. Nominalism Broadly speaking, nominalism denies the existence of universals (abstract entities), like sets, classes, relations, properties, etc. Thus the plural logics were developed as an attempt to formalize reasoning about plurals, such as those involved in multigrade predicates, apparently without resorting to notions that nominalists deny, e.g. sets. Standard first-order logic has difficulties in representing some sentences with plurals. Most well-known is the Geach–Kaplan sentence "some critics admire only one another". Kaplan proved that it is nonfir
https://en.wikipedia.org/wiki/Icosagon
In geometry, an icosagon or 20-gon is a twenty-sided polygon. The sum of any icosagon's interior angles is 3240 degrees. Regular icosagon The regular icosagon has Schläfli symbol {20}, and can also be constructed as a truncated decagon, t{10}, or a twice-truncated pentagon, tt{5}. One interior angle in a regular icosagon is 162°, meaning that one exterior angle would be 18°. The area of a regular icosagon with edge length t is A = 5t²(1 + √5 + √(5 + 2√5)) ≈ 31.5687 t². In terms of the radius R of its circumcircle, the area is A = (5/2)R²(√5 − 1); since the area of the circle is πR², the regular icosagon fills approximately 98.36% of its circumcircle. Uses The Big Wheel on the popular US game show The Price Is Right has an icosagonal cross-section. The Globe, the outdoor theater used by William Shakespeare's acting company, was discovered to have been built on an icosagonal foundation when a partial excavation was done in 1989. As a golygonal path, the swastika is considered to be an irregular icosagon. A regular square, pentagon, and icosagon can completely fill a plane vertex. Construction As 20 = 2² × 5, the regular icosagon is constructible using a compass and straightedge, or by an edge-bisection of a regular decagon, or a twice-bisected regular pentagon: The golden ratio in an icosagon In the construction with given side length, the circular arc drawn in the construction divides a segment in the ratio of the golden ratio. Symmetry The regular icosagon has Dih20 symmetry, order 40. There are 5 subgroup dihedral symmetries: (Dih10, Dih5), and (Dih4, Dih2, Dih1), and 6 cyclic group symmetries: (Z20, Z10, Z5), and (Z4, Z2, Z1). These 10 symmetries can be seen in 16 distinct symmetries on the icosagon, a larger number because the lines of reflections can either pass through vertices or edges. John Conway labels these by a letter and group order. Full symmetry of the regular form is r40 and no symmetry is labeled a1. The dihedral symmetries are divided depending on whether they pass through vertices (d for diagonal) or edges (p for perpendiculars), and i when reflection lines pass through both edges and vertices. Cyclic symmetries in the middle column are labeled as g for their central gyration orders. Each subgroup symmetry allows one or more degrees of freedom for irregular forms. Only the g20 subgroup has no degrees of freedom but can be seen as directed edges. The highest symmetry irregular icosagons are d20, an isogonal icosagon constructed by ten mirrors which can alternate long and short edges, and p20, an isotoxal icosagon, constructed with equal edge lengths, but vertices alternating two different internal angles. These two forms are duals of each other and have half the symmetry order of the regular icosagon. Dissection Coxeter states that every zonogon (a 2m-gon whose opposite sides are parallel and of equal length) can be dissected into m(m − 1)/2 parallelograms. In particular this is true for regular polygons with evenly many sides, in which case the parallelograms are all rhombi. For the icosagon, m = 10, and it can be divided into 45: 5 squares and 4 sets of 10 rhombs. This decomposition is based on a Petrie poly
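The two area formulas quoted above can be checked against the general regular n-gon formulas, as in this short sketch:

```python
import math

n, t, R = 20, 1.0, 1.0

# Edge-length form: the general (n/4) * t**2 * cot(pi/n) specialised to n = 20
# agrees with the closed form quoted above.
area_from_edge = n / 4 * t**2 / math.tan(math.pi / n)
area_closed_form = 5 * t**2 * (1 + math.sqrt(5) + math.sqrt(5 + 2 * math.sqrt(5)))
print(math.isclose(area_from_edge, area_closed_form))  # True

# Circumradius form: (n/2) * R**2 * sin(2*pi/n) fills ~98.36% of the circle.
area_from_R = n / 2 * R**2 * math.sin(2 * math.pi / n)
print(math.isclose(area_from_R, 2.5 * R**2 * (math.sqrt(5) - 1)))  # True
print(area_from_R / (math.pi * R**2))                              # ~0.9836
```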
https://en.wikipedia.org/wiki/Locally%20cyclic%20group
In mathematics, a locally cyclic group is a group (G, *) in which every finitely generated subgroup is cyclic. Some facts Every cyclic group is locally cyclic, and every locally cyclic group is abelian. Every finitely-generated locally cyclic group is cyclic. Every subgroup and quotient group of a locally cyclic group is locally cyclic. Every homomorphic image of a locally cyclic group is locally cyclic. A group is locally cyclic if and only if every pair of elements in the group generates a cyclic group. A group is locally cyclic if and only if its lattice of subgroups is distributive (Ore's theorem). The torsion-free rank of a locally cyclic group is 0 or 1. The endomorphism ring of a locally cyclic group is commutative. Examples of locally cyclic groups that are not cyclic The additive group of rational numbers (Q, +) is locally cyclic but not cyclic: any finite set of fractions lies in the cyclic subgroup generated by the reciprocal of a common denominator. The Prüfer p-group is locally cyclic for every prime p, since any finite set of its elements lies in one of its finite cyclic subgroups. Examples of abelian groups that are not locally cyclic The additive group of real numbers (R, +); the subgroup generated by 1 and √2 (comprising all numbers of the form a + b√2) is isomorphic to the direct sum Z + Z, which is not cyclic. References Abelian group theory Properties of groups
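The claim that every pair of elements generates a cyclic group can be made concrete in (Q, +): a single generator of the subgroup generated by two fractions can be computed directly. A small sketch (the helper name generator is ours):

```python
from fractions import Fraction
from math import gcd

def generator(p: Fraction, q: Fraction) -> Fraction:
    """A generator of the subgroup of (Q, +) generated by p and q.

    Writing both fractions over a common denominator d, the subgroup is
    {(m*a + n*b)/d : m, n in Z} = (gcd(a, b)/d) * Z, hence cyclic.
    """
    d = p.denominator * q.denominator // gcd(p.denominator, q.denominator)  # lcm
    a, b = int(p * d), int(q * d)        # numerators over the common denominator
    return Fraction(gcd(a, b), d)

g = generator(Fraction(3, 4), Fraction(5, 6))
print(g)                                       # 1/12
print(Fraction(3, 4) / g, Fraction(5, 6) / g)  # 9 and 10: integer multiples of g
```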
https://en.wikipedia.org/wiki/Pi%20function
In mathematics, three different functions are known as the pi or Pi function: (pi function) – the prime-counting function (Pi function) – the gamma function when offset to coincide with the factorial Rectangular function You might also be looking for: – the Infinite product of a sequence Capital pi notation
https://en.wikipedia.org/wiki/Polylogarithmic%20function
In mathematics, a polylogarithmic function in n is a polynomial in the logarithm of n: a_k (log n)^k + a_{k−1} (log n)^{k−1} + ⋯ + a_1 (log n) + a_0. The notation log^k n is often used as a shorthand for (log n)^k, analogous to sin² θ for (sin θ)². In computer science, polylogarithmic functions occur as the order of time or memory used by some algorithms (e.g., "it has polylogarithmic order"), such as in the definition of QPTAS (see PTAS). All polylogarithmic functions of n are o(n^ε) for every exponent ε > 0 (for the meaning of this symbol, see small o notation); that is, a polylogarithmic function grows more slowly than any positive power of n. This observation is the basis for the soft O notation Õ(n). References Mathematical analysis Polynomials Analysis of algorithms
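The o(n^ε) claim is easy to see numerically: the ratio (log n)^k / n^ε grows at first but eventually decays to 0 for any fixed k and ε > 0. A quick sketch with the illustrative values k = 3, ε = 0.1 (n^ε is computed as exp(ε log n) to avoid float overflow for very large n):

```python
import math

k, eps = 3, 0.1
for e in (3, 12, 48, 192, 768):
    n = 10 ** e
    # (log n)**k / n**eps, written to stay within float range for huge n.
    ratio = math.log(n) ** k / math.exp(eps * math.log(n))
    print(f"n = 1e{e}:  (log n)^{k} / n^{eps} = {ratio:.3e}")
# The ratio peaks near ln n = k/eps and then falls off towards 0.
```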
https://en.wikipedia.org/wiki/Von%20Neumann%20universe
In set theory and related branches of mathematics, the von Neumann universe, or von Neumann hierarchy of sets, denoted by V, is the class of hereditary well-founded sets. This collection, which is formalized by Zermelo–Fraenkel set theory (ZFC), is often used to provide an interpretation or motivation of the axioms of ZFC. The concept is named after John von Neumann, although it was first published by Ernst Zermelo in 1930. The rank of a well-founded set is defined inductively as the smallest ordinal number greater than the ranks of all members of the set. In particular, the rank of the empty set is zero, and every ordinal has a rank equal to itself. The sets in V are divided into the transfinite hierarchy Vα, called the cumulative hierarchy, based on their rank. Definition The cumulative hierarchy is a collection of sets Vα indexed by the class of ordinal numbers; in particular, Vα is the set of all sets having ranks less than α. Thus there is one set Vα for each ordinal number α. Vα may be defined by transfinite recursion as follows: Let V0 be the empty set: V0 = ∅. For any ordinal number β, let Vβ+1 be the power set of Vβ: Vβ+1 = P(Vβ). For any limit ordinal λ, let Vλ be the union of all the V-stages so far: Vλ = ⋃_{β<λ} Vβ. A crucial fact about this definition is that there is a single formula φ(α,x) in the language of ZFC that states "the set x is in Vα". The sets Vα are called stages or ranks. The class V is defined to be the union of all the V-stages: V = ⋃_α Vα. An equivalent definition sets Vα = ⋃_{β<α} P(Vβ) for each ordinal α, where P(X) is the powerset of X. The rank of a set S is the smallest α such that S ⊆ Vα. Another way to calculate rank is: rank(S) = sup { rank(z) + 1 : z ∈ S }. Finite and low cardinality stages of the hierarchy The first five von Neumann stages V0 to V4 may be visualized as follows. (An empty box represents the empty set. A box containing only an empty box represents the set containing only the empty set, and so forth.) This sequence exhibits tetrational growth. The set V5 contains 2^16 = 65536 elements; the set V6 contains 2^65536 elements, which very substantially exceeds the number of atoms in the known universe; and for any natural n, the set Vn+1 contains 2 ↑↑ n elements using Knuth's up-arrow notation. So the finite stages of the cumulative hierarchy cannot be written down explicitly after stage 5. The set Vω has the same cardinality as ω. The set Vω+1 has the same cardinality as the set of real numbers. Applications and interpretations Applications of V as models for set theories If ω is the set of natural numbers, then Vω is the set of hereditarily finite sets, which is a model of set theory without the axiom of infinity. Vω+ω is the universe of "ordinary mathematics", and is a model of Zermelo set theory (but not a model of ZF). A simple argument in favour of the adequacy of Vω+ω is the observation that Vω+1 is adequate for the integers, while Vω+2 is adequate for the real numbers, and most other normal mathematics can be built as relations of various kinds from these sets without needing the a
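The finite stages can be built literally as nested frozensets, and doing so reproduces the cardinalities quoted above; V6 is already out of reach, since it would need 2^65536 elements. A small sketch:

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, each returned as a (hashable) frozenset."""
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

# Finite stages: V_0 = {} and V_{n+1} = P(V_n), modelled with frozensets.
V = [set()]
for _ in range(5):
    V.append(powerset(V[-1]))

print([len(stage) for stage in V])  # [0, 1, 2, 4, 16, 65536]
```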
https://en.wikipedia.org/wiki/Constructible%20set
In mathematics, constructible set may refer to either: a notion in Gödel's constructible universe. a finite union of locally closed sets in a topological space. See constructible set (topology).
https://en.wikipedia.org/wiki/Decision%20theory
Decision theory (or the theory of choice; not to be confused with choice theory) is a branch of applied probability theory and analytic philosophy concerned with the theory of making decisions based on assigning probabilities to various factors and assigning numerical consequences to the outcome. There are three branches of decision theory: Normative decision theory: Concerned with the identification of optimal decisions, where optimality is often determined by considering an ideal decision-maker who is able to calculate with perfect accuracy and is in some sense fully rational. Prescriptive decision theory: Concerned with describing observed behaviors through the use of conceptual models, under the assumption that those making the decisions are behaving under some consistent rules. Descriptive decision theory: Analyzes how individuals actually make the decisions that they do. Decision theory is a broad field from management sciences and is an interdisciplinary topic, studied by management scientists, medical researchers, mathematicians, data scientists, psychologists, biologists, social scientists, philosophers and computer scientists. Empirical applications of this theory are usually done with the help of statistical and discrete mathematical approaches from computer science. Normative and descriptive Normative decision theory is concerned with identification of optimal decisions where optimality is often determined by considering an ideal decision maker who is able to calculate with perfect accuracy and is in some sense fully rational. The practical application of this prescriptive approach (how people ought to make decisions) is called decision analysis and is aimed at finding tools, methodologies, and software (decision support systems) to help people make better decisions. In contrast, descriptive decision theory is concerned with describing observed behaviors often under the assumption that those making decisions are behaving under some consistent rules. These rules may, for instance, have a procedural framework (e.g. Amos Tversky's elimination by aspects model) or an axiomatic framework (e.g. stochastic transitivity axioms), reconciling the Von Neumann-Morgenstern axioms with behavioral violations of the expected utility hypothesis, or they may explicitly give a functional form for time-inconsistent utility functions (e.g. Laibson's quasi-hyperbolic discounting). Prescriptive decision theory is concerned with predictions about behavior that positive decision theory produces to allow for further tests of the kind of decision-making that occurs in practice. In recent decades, there has also been increasing interest in "behavioral decision theory", contributing to a re-evaluation of what useful decision-making requires. Types of decisions Choice under uncertainty The area of choice under uncertainty represents the heart of decision theory. Known from the 17th century (Blaise Pascal invoked it in his famous wager, which is conta
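In the normative spirit described above, the basic computation is an expected-utility maximization over the available acts. A toy sketch (the states, probabilities and utilities are invented for illustration, not taken from the text):

```python
# Utility of each outcome, per act; values are purely illustrative.
acts = {
    "take umbrella":  {"rain": 0.0,   "sun": -1.0},
    "leave umbrella": {"rain": -10.0, "sun": 0.0},
}
p = {"rain": 0.3, "sun": 0.7}   # assumed probabilities of the states

# Expected utility of each act, and the normatively optimal choice.
expected = {a: sum(p[s] * u for s, u in outcomes.items())
            for a, outcomes in acts.items()}
print(expected)                          # {'take umbrella': -0.7, 'leave umbrella': -3.0}
print(max(expected, key=expected.get))   # 'take umbrella'
```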
https://en.wikipedia.org/wiki/360%20%28number%29
360 (three hundred sixty) is the natural number following 359 and preceding 361. In mathematics 360 is a highly composite number and one of only seven numbers such that no number less than twice as much has more divisors; the others are 1, 2, 6, 12, 60, and 2520. 360 is also a superior highly composite number, a colossally abundant number, a refactorable number, a 5-smooth number, and a Harshad number in decimal since the sum of its digits (9) is a divisor of 360. 360 is divisible by the number of its divisors (24), and it is the smallest number divisible by every natural number from 1 to 10, except for 7. Furthermore, one of the divisors of 360 is 72, which is the number of primes below it. 360 is the sum of twin primes (179 + 181) and the sum of four consecutive powers of 3 (9 + 27 + 81 + 243). The sum of Euler's totient function φ(x) over the first thirty-four integers is 360. 360 is a triangular matchstick number. A circle is divided into 360 degrees for angular measurement. An angle of 360° is also called a round angle. This unit choice divides round angles into equal sectors measured in integer rather than fractional degrees. Many angles commonly appearing in planimetrics have an integer number of degrees. For a simple non-self-intersecting quadrilateral, the sum of the internal angles always equals 360 degrees. Integers from 361 to 369 361 361 = 19², centered triangular number, centered octagonal number, centered decagonal number, member of the Mian–Chowla sequence; also the number of positions on a standard 19 × 19 Go board. 362 362 = 2 × 181 = σ₂(19): sum of squares of divisors of 19, Mertens function returns 0, nontotient, noncototient. 363 364 364 = 2² × 7 × 13, tetrahedral number, sum of twelve consecutive primes (11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 + 53), Mertens function returns 0, nontotient. It is a repdigit in base 3 (111111), base 9 (444), base 25 (EE), base 27 (DD), base 51 (77) and base 90 (44), and the sum of six consecutive powers of 3 (1 + 3 + 9 + 27 + 81 + 243); it is the twelfth non-zero tetrahedral number. 365 366 366 = 2 × 3 × 61, sphenic number, Mertens function returns 0, noncototient, number of complete partitions of 20, 26-gonal and 123-gonal. 367 367 is a prime number, Perrin number, happy number, prime index prime and a strictly non-palindromic number. 368 368 = 2⁴ × 23. It is also a Leyland number. 369 References Sources Wells, D. (1987). The Penguin Dictionary of Curious and Interesting Numbers (p. 152). London: Penguin Group. External links Integers
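Several of the divisor claims above are cheap to verify by brute force:

```python
def tau(m):
    """Number of divisors of m (naive trial division; fine for small m)."""
    return sum(1 for d in range(1, m + 1) if m % d == 0)

n = 360
print(tau(n))                                            # 24 divisors
print(n % tau(n) == 0)                                   # refactorable: 24 | 360
print(all(n % k == 0 for k in range(1, 11) if k != 7))   # divisible by 1..10 except 7
print(all(tau(m) <= tau(n) for m in range(1, 2 * n)))    # no m < 720 has more divisors
```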
https://en.wikipedia.org/wiki/Dedekind%20eta%20function
In mathematics, the Dedekind eta function, named after Richard Dedekind, is a modular form of weight 1/2 and is a function defined on the upper half-plane of complex numbers, where the imaginary part is positive. It also occurs in bosonic string theory. Definition For any complex number with , let ; then the eta function is defined by, Raising the eta equation to the 24th power and multiplying by gives where is the modular discriminant. The presence of 24 can be understood by connection with other occurrences, such as in the 24-dimensional Leech lattice. The eta function is holomorphic on the upper half-plane but cannot be continued analytically beyond it. The eta function satisfies the functional equations In the second equation the branch of the square root is chosen such that when . More generally, suppose are integers with , so that is a transformation belonging to the modular group. We may assume that either , or and . Then where Here is the Dedekind sum Because of these functional equations the eta function is a modular form of weight and level 1 for a certain character of order 24 of the metaplectic double cover of the modular group, and can be used to define other modular forms. In particular the modular discriminant of Weierstrass can be defined as and is a modular form of weight 12. Some authors omit the factor of , so that the series expansion has integral coefficients. The Jacobi triple product implies that the eta is (up to a factor) a Jacobi theta function for special values of the arguments: where is "the" Dirichlet character modulo 12 with and . Explicitly, The Euler function has a power series by the Euler identity: Because the eta function is easy to compute numerically from either power series, it is often helpful in computation to express other functions in terms of it when possible, and products and quotients of eta functions, called eta quotients, can be used to express a great variety of modular forms. The picture on this page shows the modulus of the Euler function: the additional factor of between this and eta makes almost no visual difference whatsoever. Thus, this picture can be taken as a picture of eta as a function of . Combinatorial identities The theory of the algebraic characters of the affine Lie algebras gives rise to a large class of previously unknown identities for the eta function. These identities follow from the Weyl–Kac character formula, and more specifically from the so-called "denominator identities". The characters themselves allow the construction of generalizations of the Jacobi theta function which transform under the modular group; this is what leads to the identities. An example of one such new identity is where is the -analog or "deformation" of the highest weight of a module. Special values From the above connection with the Euler function together with the special values of the latter, it can be easily deduced that Eta quotients Eta quotients are defined
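Numerically, the eta function is usually computed from the q-product. Using the standard definition η(τ) = q^{1/24} ∏_{n≥1} (1 − q^n) with q = e^{2πiτ}, a truncated product already reproduces the known special value η(i) = Γ(1/4)/(2π^{3/4}); the sketch below assumes that normalization.

```python
import cmath, math

def eta(tau, terms=200):
    """Truncated Dedekind eta product q**(1/24) * prod_{n=1..terms} (1 - q**n),
    with q = exp(2*pi*i*tau). The factors approach 1 geometrically, so a
    modest cut-off suffices when Im(tau) is not too small."""
    q = cmath.exp(2j * cmath.pi * tau)
    prod = 1 + 0j
    for n in range(1, terms + 1):
        prod *= 1 - q**n
    return cmath.exp(2j * cmath.pi * tau / 24) * prod

# Known special value: eta(i) = Gamma(1/4) / (2 * pi**(3/4)) ~ 0.768225.
print(eta(1j))
print(math.gamma(0.25) / (2 * math.pi ** 0.75))
```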
https://en.wikipedia.org/wiki/Dirichlet%20eta%20function
In mathematics, in the area of analytic number theory, the Dirichlet eta function is defined by the following Dirichlet series, which converges for any complex number having real part > 0: η(s) = Σ_{n=1}^{∞} (−1)^{n−1}/n^s = 1/1^s − 1/2^s + 1/3^s − ⋯. This Dirichlet series is the alternating sum corresponding to the Dirichlet series expansion of the Riemann zeta function, ζ(s) — and for this reason the Dirichlet eta function is also known as the alternating zeta function, also denoted ζ*(s). The following relation holds: η(s) = (1 − 2^{1−s}) ζ(s). Both the Dirichlet eta function and the Riemann zeta function are special cases of the polylogarithm. While the Dirichlet series expansion for the eta function is convergent only for any complex number s with real part > 0, it is Abel summable for any complex number. This serves to define the eta function as an entire function. (The above relation and the facts that the eta function is entire and η(1) = ln 2 ≠ 0 together show the zeta function is meromorphic with a simple pole at s = 1, and possibly additional poles at the other zeros of the factor 1 − 2^{1−s}, although in fact these hypothetical additional poles do not exist.) Equivalently, we may begin by defining η(s) = (1/Γ(s)) ∫_0^∞ x^{s−1}/(e^x + 1) dx, which is also defined in the region of positive real part (Γ represents the gamma function). This gives the eta function as a Mellin transform. Hardy gave a simple proof of the functional equation for the eta function, which is η(−s) = 2 (1 − 2^{−s−1})/(1 − 2^{−s}) π^{−s−1} s sin(πs/2) Γ(s) η(s + 1). From this, one immediately has the functional equation of the zeta function also, as well as another means to extend the definition of eta to the entire complex plane. Zeros The zeros of the eta function include all the zeros of the zeta function: the negative even integers (real equidistant simple zeros); the zeros along the critical line, none of which are known to be multiple and over 40% of which have been proven to be simple, and the hypothetical zeros in the critical strip but not on the critical line, which if they do exist must occur at the vertices of rectangles symmetrical around the x-axis and the critical line and whose multiplicity is unknown. In addition, the factor 1 − 2^{1−s} adds an infinite number of complex simple zeros, located at equidistant points on the line Re(s) = 1, at s_n = 1 + 2nπi/ln 2 where n is any nonzero integer. Under the Riemann hypothesis, the zeros of the eta function would be located symmetrically with respect to the real axis on two parallel lines Re(s) = 1/2 and Re(s) = 1, and on the perpendicular half line formed by the negative real axis. Landau's problem with ζ(s) = η(s)/0 and solutions In the equation η(s) = (1 − 2^{1−s}) ζ(s), "the pole of ζ(s) at s = 1 is cancelled by the zero of the other factor" (Titchmarsh, 1986, p. 17), and as a result η(1) is neither infinite nor zero (see above). However, in the equation ζ(s) = η(s)/(1 − 2^{1−s}), η must be zero at all the points s_n = 1 + 2nπi/ln 2, where the denominator is zero, if the Riemann zeta function is analytic and finite there. The problem of proving this without defining the zeta function first was signaled and left open by E. Landau in his 1909 treatise on number theory: "Whether the eta series is different from zero or not at the points s_n, i.e., whether these are poles of zeta or no
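Both the alternating series and the relation to ζ(s) can be checked numerically; the sketch below compares a direct partial sum with (1 − 2^{1−s})ζ(s) at s = 2, where ζ(2) = π²/6 is known, and evaluates η(1) = ln 2.

```python
import math

def dirichlet_eta(s, terms=10**5):
    """Direct partial sum of the alternating series sum_{n>=1} (-1)**(n-1) / n**s."""
    return sum((-1) ** (n - 1) / n ** s for n in range(1, terms + 1))

# eta(1) is the alternating harmonic series, which sums to ln 2.
print(dirichlet_eta(1.0), math.log(2))   # agree to about 5 decimal places

# eta(s) = (1 - 2**(1 - s)) * zeta(s); at s = 2 both sides are pi**2 / 12.
print(dirichlet_eta(2.0), (1 - 2 ** (1 - 2.0)) * math.pi ** 2 / 6)
```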
https://en.wikipedia.org/wiki/Statistics%20Belgium
Statistics Belgium (formerly known as the NSI) is part of the Federal Public Service Economy, SMEs, Self-Employed and Energy. Statistics Belgium conducts surveys among households and enterprises in Belgium. It uses and processes existing administrative databases (the national register) and provides data to Belgian and international authorities and organisations. Third parties may also call on its statistical expertise. Statistics Belgium is also the official representative of Belgium to international organisations such as Eurostat and OECD. Sets of figures, press releases and studies are published on its website, Statbel. Moreover, a number of databases can be queried through the online application be.STAT. Mission and tasks Its main mission can be summarized in three words: collecting, processing and disseminating relevant, reliable and commented data on Belgian society. “Collecting” means to seek information among economic and social actors. Data are collected directly or indirectly. For surveys respondents are directly interviewed by interviewers. Indirect data collection refers to the use of administrative files (NSSO, VAT, Crossroads Bank for Enterprises...). This method is increasingly used and reduces the response burden and costs for enterprises and individuals. “Processing” primarily means to assess data in a critical way. Results are then checked and validated by assessing their quality and their plausibility. Finally summary tables are drawn using basic data. By aggregating and comparing different types of data, Statistics Belgium adds value to initial figures. “Disseminating” means to make collected data available to the widest public (in accordance with the rules on personal privacy): Authorities (all Belgian political authorities at any level, but also international authorities and organisations); Enterprises (all enterprises regardless of their size or their interests); Society (researchers, journalists, professors, students and others) Statistics Belgium disseminates statistical information on: Households (population figures, household budget, time use, holiday habits...); Industry, construction, trade and other services (data from surveys among enterprises); Economic situation such as indexes (e.g. consumer price index) and economic indicators on industrial production, investments, export, employment, retail activity, services sector... Surveys The surveys conducted by Statistics Belgium include surveys on labour force, income and living conditions, structure and distribution of earnings, structural business statistics, agriculture, Census 2011 (formerly known as “population count”) and household budget. Statistics Belgium also conducts several major surveys in collaboration with other institutions. Examples include the health survey conducted by the Scientific Institute of Public Health and the time use survey carried out by the TOR research group from the Vrije Universiteit Brussel. Other public institution
https://en.wikipedia.org/wiki/Weierstrass%20elliptic%20function
In mathematics, the Weierstrass elliptic functions are elliptic functions that take a particularly simple form. They are named for Karl Weierstrass. This class of functions are also referred to as ℘-functions and they are usually denoted by the symbol ℘, a uniquely fancy script p. They play an important role in the theory of elliptic functions. A ℘-function together with its derivative can be used to parameterize elliptic curves and they generate the field of elliptic functions with respect to a given period lattice. Symbol for Weierstrass -function Definition Let be two complex numbers that are linearly independent over and let be the lattice generated by those numbers. Then the -function is defined as follows: This series converges locally uniformly absolutely in . Oftentimes instead of only is written. The Weierstrass -function is constructed exactly in such a way that it has a pole of the order two at each lattice point. Because the sum alone would not converge it is necessary to add the term . It is common to use and in the upper half-plane as generators of the lattice. Dividing by maps the lattice isomorphically onto the lattice with . Because can be substituted for , without loss of generality we can assume , and then define . Motivation A cubic of the form , where are complex numbers with , cannot be rationally parameterized. Yet one still wants to find a way to parameterize it. For the quadric , the unit circle, there exists a (non-rational) parameterization using the sine function and its derivative the cosine function: Because of the periodicity of the sine and cosine is chosen to be the domain, so the function is bijective. In a similar way one can get a parameterization of by means of the doubly periodic -function (see in the section "Relation to elliptic curves"). This parameterization has the domain , which is topologically equivalent to a torus. There is another analogy to the trigonometric functions. Consider the integral function It can be simplified by substituting and : That means . So the sine function is an inverse function of an integral function. Elliptic functions are also inverse functions of integral functions, namely of elliptic integrals. In particular the -function is obtained in the following way: Let Then can be extended to the complex plane and this extension equals the -function. Properties is an even function. That means for all , which can be seen in the following way: The second last equality holds because . Since the sum converges absolutely this rearrangement does not change the limit. is meromorphic and its derivative is and are doubly periodic with the periods and . This means: It follows that and for all . Functions which are meromorphic and doubly periodic are also called elliptic functions. Laurent expansion Let . Then for the -function has the following Laurent expansion where for are so called Eisenstein series. Differential equation Set and
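A rough numerical sketch of the standard lattice-sum definition ℘(z) = 1/z² + Σ_{0≠λ∈Λ} (1/(z − λ)² − 1/λ²), truncated to a finite symmetric block of lattice points; this is for illustration only, since the raw sum converges slowly and practical implementations use theta-function or Laurent-series machinery instead.

```python
def wp(z, w1=1.0, w2=1j, N=60):
    """Truncated Weierstrass p-function for the lattice Z*w1 + Z*w2,
    summing over the symmetric block |m|, |n| <= N. The truncation error
    is roughly O(1/N), so results are only approximate."""
    total = 1 / z**2
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            lam = m * w1 + n * w2
            total += 1 / (z - lam) ** 2 - 1 / lam**2
    return total

z = 0.3 + 0.2j
print(abs(wp(z) - wp(-z)))      # ~0: evenness holds even for the truncated sum
print(abs(wp(z + 1) - wp(z)))   # small: periodicity in w1, up to truncation error
```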
https://en.wikipedia.org/wiki/Algebraic%20function%20field
In mathematics, an algebraic function field (often abbreviated as function field) of n variables over a field k is a finitely generated field extension K/k which has transcendence degree n over k. Equivalently, an algebraic function field of n variables over k may be defined as a finite field extension of the field K = k(x1,...,xn) of rational functions in n variables over k. Example As an example, in the polynomial ring k[X,Y] consider the ideal generated by the irreducible polynomial Y2 − X3 and form the field of fractions of the quotient ring k[X,Y]/(Y2 − X3). This is a function field of one variable over k; it can also be written as (with degree 2 over ) or as (with degree 3 over ). We see that the degree of an algebraic function field is not a well-defined notion. Category structure The algebraic function fields over k form a category; the morphisms from function field K to L are the ring homomorphisms f : K → L with f(a) = a for all a in k. All these morphisms are injective. If K is a function field over k of n variables, and L is a function field in m variables, and n > m, then there are no morphisms from K to L. Function fields arising from varieties, curves and Riemann surfaces The function field of an algebraic variety of dimension n over k is an algebraic function field of n variables over k. Two varieties are birationally equivalent if and only if their function fields are isomorphic. (But note that non-isomorphic varieties may have the same function field!) Assigning to each variety its function field yields a duality (contravariant equivalence) between the category of varieties over k (with dominant rational maps as morphisms) and the category of algebraic function fields over k. (The varieties considered here are to be taken in the scheme sense; they need not have any k-rational points, like the curve defined over the reals, that is with .) The case n = 1 (irreducible algebraic curves in the scheme sense) is especially important, since every function field of one variable over k arises as the function field of a uniquely defined regular (i.e. non-singular) projective irreducible algebraic curve over k. In fact, the function field yields a duality between the category of regular projective irreducible algebraic curves (with dominant regular maps as morphisms) and the category of function fields of one variable over k. The field M(X) of meromorphic functions defined on a connected Riemann surface X is a function field of one variable over the complex numbers C. In fact, M yields a duality (contravariant equivalence) between the category of compact connected Riemann surfaces (with non-constant holomorphic maps as morphisms) and function fields of one variable over C. A similar correspondence exists between compact connected Klein surfaces and function fields in one variable over R. Number fields and finite fields The function field analogy states that almost all theorems on number fields have a counterpart on function fields
https://en.wikipedia.org/wiki/Ernst%20Schr%C3%B6der%20%28mathematician%29
Friedrich Wilhelm Karl Ernst Schröder (25 November 1841 in Mannheim, Baden, Germany – 16 June 1902 in Karlsruhe, Germany) was a German mathematician mainly known for his work on algebraic logic. He is a major figure in the history of mathematical logic, by virtue of summarizing and extending the work of George Boole, Augustus De Morgan, Hugh MacColl, and especially Charles Peirce. He is best known for his monumental Vorlesungen über die Algebra der Logik (Lectures on the Algebra of Logic, 1890–1905), in three volumes, which prepared the way for the emergence of mathematical logic as a separate discipline in the twentieth century by systematizing the various systems of formal logic of the day. Life Schröder learned mathematics at Heidelberg, Königsberg, and Zürich, under Otto Hesse, Gustav Kirchhoff, and Franz Neumann. After teaching school for a few years, he moved to the Technische Hochschule Darmstadt in 1874. Two years later, he took up a chair in mathematics at the Karlsruhe Polytechnische Schule, where he spent the remainder of his life. He never married. Work Schröder's early work on formal algebra and logic was written in ignorance of the British logicians George Boole and Augustus De Morgan. Instead, his sources were texts by Ohm, Hankel, Hermann Grassmann, and Robert Grassmann (Peckhaus 1997: 233–296). In 1873, Schröder learned of Boole's and De Morgan's work on logic. To their work he subsequently added several important concepts due to Charles Sanders Peirce, including subsumption and quantification. Schröder also made original contributions to algebra, set theory, lattice theory, ordered sets and ordinal numbers. Along with Georg Cantor, he codiscovered the Cantor–Bernstein–Schröder theorem, although Schröder's proof (1898) is flawed. Felix Bernstein (1878–1956) subsequently corrected the proof as part of his Ph.D. dissertation. Schröder (1877) was a concise exposition of Boole's ideas on algebra and logic, which did much to introduce Boole's work to continental readers. The influence of the Grassmanns, especially Robert's little-known Formenlehre, is clear. Unlike Boole, Schröder fully appreciated duality. John Venn and Christine Ladd-Franklin both warmly cited this short book of Schröder's, and Charles Sanders Peirce used it as a text while teaching at Johns Hopkins University. Schröder's masterwork, his Vorlesungen über die Algebra der Logik, was published in three volumes between 1890 and 1905, at the author's expense. Vol. 2 is in two parts, the second published posthumously, edited by Eugen Müller. The Vorlesungen was a comprehensive and scholarly survey of algebraic logic up to the end of the 19th century, one that had a considerable influence on the emergence of mathematical logic in the 20th century. He developed Boole's algebra into a calculus of relations, based on composition of relations as a multiplication. The Schröder rules relate alternative interpretations of a product of relations. The Vorlesungen is a proli
https://en.wikipedia.org/wiki/Almost%20disjoint%20sets
In mathematics, two sets are almost disjoint if their intersection is small in some sense; different definitions of "small" will result in different definitions of "almost disjoint". Definition The most common choice is to take "small" to mean finite. In this case, two sets are almost disjoint if their intersection is finite, i.e. if (Here, '|X|' denotes the cardinality of X, and '< ∞' means 'finite'.) For example, the closed intervals [0, 1] and [1, 2] are almost disjoint, because their intersection is the finite set {1}. However, the unit interval [0, 1] and the set of rational numbers Q are not almost disjoint, because their intersection is infinite. This definition extends to any collection of sets. A collection of sets is pairwise almost disjoint or mutually almost disjoint if any two distinct sets in the collection are almost disjoint. Often the prefix "pairwise" is dropped, and a pairwise almost disjoint collection is simply called "almost disjoint". Formally, let I be an index set, and for each i in I, let Ai be a set. Then the collection of sets {Ai : i in I} is almost disjoint if for any i and j in I, For example, the collection of all lines through the origin in R2 is almost disjoint, because any two of them only meet at the origin. If {Ai} is an almost disjoint collection consisting of more than one set, then clearly its intersection is finite: However, the converse is not true—the intersection of the collection is empty, but the collection is not almost disjoint; in fact, the intersection of any two distinct sets in this collection is infinite. The possible cardinalities of a maximal almost disjoint family (commonly referred to as a MAD family) on the set of the natural numbers has been the object of intense study. The minimum infinite such cardinal is one of the classical Cardinal characteristics of the continuum. Other meanings Sometimes "almost disjoint" is used in some other sense, or in the sense of measure theory or topological category. Here are some alternative definitions of "almost disjoint" that are sometimes used (similar definitions apply to infinite collections): Let κ be any cardinal number. Then two sets A and B are almost disjoint if the cardinality of their intersection is less than κ, i.e. if The case of κ = 1 is simply the definition of disjoint sets; the case of is simply the definition of almost disjoint given above, where the intersection of A and B is finite. Let m be a complete measure on a measure space X. Then two subsets A and B of X are almost disjoint if their intersection is a null-set, i.e. if Let X be a topological space. Then two subsets A and B of X are almost disjoint if their intersection is meagre in X. References Families of sets
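For infinite sets the definition has real content; with finite samples one can at least see why distinct lines through the origin are almost disjoint: any two of them meet only at the origin. A toy sketch sampling two such lines at integer parameters:

```python
def line(slope, n=1000):
    """A finite sample of the line y = slope * x through the origin."""
    return {(t, slope * t) for t in range(-n, n + 1)}

L1, L2 = line(2), line(3)
print(L1 & L2)            # {(0, 0)}: the two lines share only the origin
print(len(L1 & L2) < 10)  # the intersection is finite, so the lines are
                          # almost disjoint in the "finite intersection" sense
```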
https://en.wikipedia.org/wiki/Heisenberg%20group
In mathematics, the Heisenberg group, named after Werner Heisenberg, is the group of 3×3 upper triangular matrices of the form (1, a, c; 0, 1, b; 0, 0, 1) (rows separated by semicolons) under the operation of matrix multiplication. Elements a, b and c can be taken from any commutative ring with identity, often taken to be the ring of real numbers (resulting in the "continuous Heisenberg group") or the ring of integers (resulting in the "discrete Heisenberg group"). The continuous Heisenberg group arises in the description of one-dimensional quantum mechanical systems, especially in the context of the Stone–von Neumann theorem. More generally, one can consider Heisenberg groups associated to n-dimensional systems, and most generally, to any symplectic vector space. The three-dimensional case In the three-dimensional case, the product of two Heisenberg matrices is given by: (1, a, c; 0, 1, b; 0, 0, 1)(1, a′, c′; 0, 1, b′; 0, 0, 1) = (1, a + a′, c + c′ + ab′; 0, 1, b + b′; 0, 0, 1). As one can see from the term ab′, the group is non-abelian. The neutral element of the Heisenberg group is the identity matrix, and inverses are given by (1, a, c; 0, 1, b; 0, 0, 1)⁻¹ = (1, −a, ab − c; 0, 1, −b; 0, 0, 1). The group is a subgroup of the 2-dimensional affine group Aff(2): acting on the column vector (x, y, 1)ᵀ corresponds to the affine transform (x, y) ↦ (x + ay + c, y + b). There are several prominent examples of the three-dimensional case. Continuous Heisenberg group If a, b, c are real numbers (in the ring R) then one has the continuous Heisenberg group H3(R). It is a nilpotent real Lie group of dimension 3. In addition to the representation as real 3×3 matrices, the continuous Heisenberg group also has several different representations in terms of function spaces. By the Stone–von Neumann theorem, there is, up to isomorphism, a unique irreducible unitary representation of H in which its centre acts by a given nontrivial character. This representation has several important realizations, or models. In the Schrödinger model, the Heisenberg group acts on the space of square integrable functions. In the theta representation, it acts on the space of holomorphic functions on the upper half-plane; it is so named for its connection with the theta functions. Discrete Heisenberg group If a, b, c are integers (in the ring Z) then one has the discrete Heisenberg group H3(Z). It is a non-abelian nilpotent group. It has two generators, x and y, and relations z = xyx⁻¹y⁻¹, xz = zx, yz = zy, where z is the generator of the center of H3. (Note that the inverses of x, y, and z replace the 1 above the diagonal with −1.) By Bass's theorem, it has a polynomial growth rate of order 4. One can generate any element through a product x^a y^b z^c. Heisenberg group modulo an odd prime p If one takes a, b, c in Z/pZ for an odd prime p, then one has the Heisenberg group modulo p. It is a group of order p³ with generators x, y and relations: z = xyx⁻¹y⁻¹, x^p = y^p = z^p = 1, xz = zx, yz = zy. Analogues of Heisenberg groups over finite fields of odd prime order p are called extra special groups, or more properly, extra special groups of exponent p. More generally, if the derived subgroup of a group G is contained in the center Z of G, then the map from G/Z × G/Z → Z is a skew-symmetric bilinear operator on abelian groups. However, requiring that G/Z be a finite
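The defining computations are easy to replay with matrices. The sketch below checks the ab′ term of the product rule, the non-commutativity of the generators x and y of the discrete group, and that their commutator is the central element z; it uses the closed-form inverse H(a, b, c)⁻¹ = H(−a, −b, ab − c) from above to stay within integer matrices.

```python
import numpy as np

def H(a, b, c):
    """Upper-triangular Heisenberg matrix with parameters (a, b, c)."""
    return np.array([[1, a, c],
                     [0, 1, b],
                     [0, 0, 1]])

def Hinv(a, b, c):
    """Inverse via the closed form H(a, b, c)**-1 = H(-a, -b, a*b - c)."""
    return H(-a, -b, a * b - c)

x, y = H(1, 0, 0), H(0, 1, 0)          # the two generators
print((x @ y)[0, 2], (y @ x)[0, 2])    # corner entries 1 and 0: xy != yx
z = x @ y @ Hinv(1, 0, 0) @ Hinv(0, 1, 0)
print(z)                               # equals H(0, 0, 1), the central generator
assert (x @ Hinv(1, 0, 0) == np.eye(3)).all()  # sanity check of the inverse
```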
https://en.wikipedia.org/wiki/Pardo%20Brazilians
In Brazil, Pardo () is an ethnic and skin color category used by the Brazilian Institute of Geography and Statistics (IBGE) in the Brazilian censuses. The term "pardo" is a complex one, more commonly used to refer to Brazilians of mixed ethnic ancestries. Pardo Brazilians represent a diverse range of skin colors and ethnic backgrounds. The other recognized census categories are branco ("white"), preto ("black"), amarelo ("yellow", meaning ethnic East Asians), and indígena ("indigene" or "indigenous person", meaning Amerindians). The term was and is still commonly used, in popular culture and the media, to refer to Brazilians of multi ethnic backgrounds. Definitions According to IBGE (Brazilian Institute of Geography and Statistics), pardo is a broad classification that encompasses multiracial Brazilians such as mulatos and cafuzos, as well as assimilated Amerindians known as caboclos, mixed with Southern Europeans or not. The term "pardo" was first used in a Brazilian census in 1872. The following census, in 1890, replaced the word pardo by mestiço (that of mixed origins). The censuses of 1900 and 1920 did not ask about race, arguing that "the answers largely hide the truth". In Brazil, the term pardo has had a general meaning since the beginning of the Portuguese colonization. In the famous letter by Pero Vaz de Caminha, for example, in which Brazil was first described by the Portuguese, the Native Americans were called "pardo": "Pardo, naked, without clothing". A reading of colonial wills and testaments also shows it. Diogo de Vasconcelos, a widely known historian from Minas Gerais, mentions, for example, the story of Andresa de Castilhos. According to the information from the 18th century, Andresa de Castilhos was thus described: "I declare that Andresa de Castilhos, pardo woman ... has been freed ... is a descendant of the natives of the land ... I declare that Andresa de Castilhos is the daughter of a white man and a native woman". The historian Maria Leônia Chaves de Resende also explains that the word pardo was employed to name people with native ancestry or even Native Americans themselves: a Manoel, natural son of Ana carijó, was baptized as a 'pardo'; in Campanha several Native Americans were classified as 'pardo'; the natives João Ferreira, Joana Rodriges and Andreza Pedrosa, for example, were named 'freed pardo'; a Damaso called himself 'freed pardo' of the 'native of the land'; etc. According to Maria Leônia Chaves de Resende, the growth of the pardo population in Brazil includes the descendants of natives and not only those of African descent: "the growth of the 'pardo' segment had not only to do with the descendants of Africans, but also with the descendants of the natives, in particular the carijós and bastards, included in the condition of 'pardo'". The American historian Muriel Nazzari specifically pointed out the "pardo" category absorbed those of Native American descent in São Paulo: "This paper seeks to demonstrate th
https://en.wikipedia.org/wiki/Cyclic%20permutation
In mathematics, and in particular in group theory, a cyclic permutation is a permutation consisting of a single cycle. In some cases, cyclic permutations are referred to as cycles; if a cyclic permutation has k elements, it may be called a k-cycle. Some authors widen this definition to include permutations with fixed points in addition to at most one non-trivial cycle. In cycle notation, cyclic permutations are denoted by the list of their elements enclosed with parentheses, in the order to which they are permuted. For example, the permutation (1 3 2 4) that sends 1 to 3, 3 to 2, 2 to 4 and 4 to 1 is a 4-cycle, and the permutation (1 3 2)(4) that sends 1 to 3, 3 to 2, 2 to 1 and 4 to 4 is considered a 3-cycle by some authors. On the other hand, the permutation (1 3)(2 4) that sends 1 to 3, 3 to 1, 2 to 4 and 4 to 2 is not a cyclic permutation because it separately permutes the pairs {1, 3} and {2, 4}. The set of elements that are not fixed by a cyclic permutation is called the orbit of the cyclic permutation. Every permutation on finitely many elements can be decomposed into cyclic permutations on disjoint orbits. The individual cyclic parts of a permutation are also called cycles, thus the second example is composed of a 3-cycle and a 1-cycle (or fixed point) and the third is composed of two 2-cycles. Definition There is not widespread consensus about the precise definition of a cyclic permutation. Some authors define a permutation of a set to be cyclic if "successive application would take each object of the permuted set successively through the positions of all the other objects", or, equivalently, if its representation in cycle notation consists of a single cycle. Others provide a more permissive definition which allows fixed points. A nonempty subset of is a cycle of if the restriction of to is a cyclic permutation of . If is finite, its cycles are disjoint, and their union is . That is, they form a partition, called the cycle decomposition of So, according to the more permissive definition, a permutation of is cyclic if and only if is its unique cycle. For example, the permutation, written in cycle notation and two-line notation (in two ways) as has one 6-cycle and two 1-cycles its cycle diagram is shown at right. Some authors consider this permutation cyclic while others do not. With the enlarged definition, there are cyclic permutations that do not consist of a single cycle. More formally, for the enlarged definition, a permutation of a set X, viewed as a bijective function , is called a cycle if the action on X of the subgroup generated by has at most one orbit with more than a single element. This notion is most commonly used when X is a finite set; then the largest orbit, S, is also finite. Let be any element of S, and put for any . If S is finite, there is a minimal number for which . Then , and is the permutation defined by for 0 ≤ i < k and for any element of . The elements not fixed by can be pictu
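Cycle decomposition is a few lines of code once a permutation is stored as a mapping; the sketch below reproduces the classifications of the examples above (the helper name cycle_decomposition is ours):

```python
def cycle_decomposition(perm: dict) -> list:
    """Decompose a permutation, given as a dict mapping each element to its
    image, into disjoint cycles, each returned as a tuple."""
    seen, cycles = set(), []
    for start in perm:
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = perm[x]
        cycles.append(tuple(cycle))
    return cycles

# (1 3 2 4) sends 1->3, 3->2, 2->4, 4->1: a single 4-cycle, hence cyclic.
print(cycle_decomposition({1: 3, 3: 2, 2: 4, 4: 1}))   # [(1, 3, 2, 4)]
# (1 3)(2 4) splits into two 2-cycles, so it is not a cyclic permutation.
print(cycle_decomposition({1: 3, 3: 1, 2: 4, 4: 2}))   # [(1, 3), (2, 4)]
```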
https://en.wikipedia.org/wiki/Horseshoe%20map
In the mathematics of chaos theory, a horseshoe map is any member of a class of chaotic maps of the square into itself. It is a core example in the study of dynamical systems. The map was introduced by Stephen Smale while studying the behavior of the orbits of the van der Pol oscillator. The action of the map is defined geometrically by squishing the square, then stretching the result into a long strip, and finally folding the strip into the shape of a horseshoe. Most points eventually leave the square under the action of the map. They go to the side caps where they will, under iteration, converge to a fixed point in one of the caps. The points that remain in the square under repeated iteration form a fractal set and are part of the invariant set of the map. The squishing, stretching and folding of the horseshoe map are typical of chaotic systems, but not necessary or even sufficient. In the horseshoe map, the squeezing and stretching are uniform. They compensate each other so that the area of the square does not change. The folding is done neatly, so that the orbits that remain forever in the square can be simply described. For a horseshoe map: there are an infinite number of periodic orbits; periodic orbits of arbitrarily long period exist; the number of periodic orbits grows exponentially with the period; and close to any point of the fractal invariant set there is a point of a periodic orbit. The horseshoe map The horseshoe map is a diffeomorphism defined from a region of the plane into itself. The region is a square capped by two semi-disks. The codomain of (the "horseshoe") is a proper subset of its domain . The action of is defined through the composition of three geometrically defined transformations. First the square is contracted along the vertical direction by a factor . The caps are contracted so as to remain semi-disks attached to the resulting rectangle. Contracting by a factor smaller than one half assures that there will be a gap between the branches of the horseshoe. Next the rectangle is stretched horizontally by a factor of ; the caps remain unchanged. Finally the resulting strip is folded into a horseshoe-shape and placed back into . The interesting part of the dynamics is the image of the square into itself. Once that part is defined, the map can be extended to a diffeomorphism by defining its action on the caps. The caps are made to contract and eventually map inside one of the caps (the left one in the figure). The extension of f to the caps adds a fixed point to the non-wandering set of the map. To keep the class of horseshoe maps simple, the curved region of the horseshoe should not map back into the square. The horseshoe map is one-to-one, which means that an inverse f−1 exists when restricted to the image of under . By folding the contracted and stretched square in different ways, other types of horseshoe maps are possible. To ensure that the map remains one-to-one, the contracted square
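A piecewise-affine caricature of the construction makes the escape behaviour visible: contract vertically by 1/3 (less than one half, so a gap appears), stretch horizontally by 3, and fold the right branch back. Points whose orbit ever lands in the middle strip escape to the caps, and the fraction of survivors decays geometrically, leaving a Cantor-like invariant set. This toy map is an illustration in the spirit of the horseshoe, not Smale's exact construction.

```python
import random

def horseshoe(p):
    """One step of a piecewise-affine horseshoe-style map on the unit square.
    Returns None when the point escapes (the middle third of the strip is
    mapped out of the square, into the side caps)."""
    x, y = p
    if x <= 1 / 3:
        return (3 * x, y / 3)            # left branch: stretch, contract
    if x >= 2 / 3:
        return (3 - 3 * x, 1 - y / 3)    # right branch: folded back
    return None                          # escapes through the caps

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(100_000)]
for step in range(1, 6):
    pts = [q for q in (horseshoe(p) for p in pts) if q is not None]
    print(step, len(pts) / 100_000)      # survivors thin out roughly like (2/3)**n
```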
https://en.wikipedia.org/wiki/Bilinear%20form
In mathematics, a bilinear form is a bilinear map on a vector space (the elements of which are called vectors) over a field K (the elements of which are called scalars). In other words, a bilinear form is a function that is linear in each argument separately: and and The dot product on is an example of a bilinear form. The definition of a bilinear form can be extended to include modules over a ring, with linear maps replaced by module homomorphisms. When is the field of complex numbers , one is often more interested in sesquilinear forms, which are similar to bilinear forms but are conjugate linear in one argument. Coordinate representation Let be an -dimensional vector space with basis . The matrix A, defined by is called the matrix of the bilinear form on the basis . If the matrix represents a vector with respect to this basis, and similarly, the matrix represents another vector , then: A bilinear form has different matrices on different bases. However, the matrices of a bilinear form on different bases are all congruent. More precisely, if is another basis of , then where the form an invertible matrix . Then, the matrix of the bilinear form on the new basis is . Maps to the dual space Every bilinear form on defines a pair of linear maps from to its dual space . Define by This is often denoted as where the dot ( ⋅ ) indicates the slot into which the argument for the resulting linear functional is to be placed (see Currying). For a finite-dimensional vector space , if either of or is an isomorphism, then both are, and the bilinear form is said to be nondegenerate. More concretely, for a finite-dimensional vector space, non-degenerate means that every non-zero element pairs non-trivially with some other element: for all implies that and for all implies that . The corresponding notion for a module over a commutative ring is that a bilinear form is if is an isomorphism. Given a finitely generated module over a commutative ring, the pairing may be injective (hence "nondegenerate" in the above sense) but not unimodular. For example, over the integers, the pairing is nondegenerate but not unimodular, as the induced map from to is multiplication by 2. If is finite-dimensional then one can identify with its double dual . One can then show that is the transpose of the linear map (if is infinite-dimensional then is the transpose of restricted to the image of in ). Given one can define the transpose of to be the bilinear form given by The left radical and right radical of the form are the kernels of and respectively; they are the vectors orthogonal to the whole space on the left and on the right. If is finite-dimensional then the rank of is equal to the rank of . If this number is equal to then and are linear isomorphisms from to . In this case is nondegenerate. By the rank–nullity theorem, this is equivalent to the condition that the left and equivalently right radicals be tri
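In coordinates this is all matrix algebra: B(x, y) = xᵀAy, and a change of basis with invertible matrix S replaces A by the congruent matrix SᵀAS. A short sketch with an invented A:

```python
import numpy as np

# The matrix of a bilinear form on a chosen basis; A is illustrative.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x, y = np.array([1.0, 2.0]), np.array([0.0, 1.0])
print(x @ A @ y)          # B(x, y) = x^T A y

# New basis vectors as the columns of S: the form's matrix becomes S^T A S,
# and coordinates transform by S^{-1}, leaving the value of B unchanged.
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
A_new = S.T @ A @ S
xp, yp = np.linalg.solve(S, x), np.linalg.solve(S, y)
print(xp @ A_new @ yp)    # same value of B(x, y), computed in the new basis
```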
https://en.wikipedia.org/wiki/%E2%88%921
In mathematics, −1 (negative one or minus one) is the additive inverse of 1, that is, the number that when added to 1 gives the additive identity element, 0. It is the negative integer greater than negative two (−2) and less than 0. Algebraic properties Multiplication Multiplying a number by −1 is equivalent to changing the sign of the number – that is, for any x we have (−1) · x = −x. This can be proved using the distributive law and the axiom that 1 is the multiplicative identity: x + (−1) · x = 1 · x + (−1) · x = (1 + (−1)) · x = 0 · x = 0. Here we have used the fact that any number x times 0 equals 0, which follows by cancellation from the equation 0 · x = (0 + 0) · x = 0 · x + 0 · x. In other words, x + (−1) · x = 0, so (−1) · x is the additive inverse of x, i.e. (−1) · x = −x, as was to be shown. Square of −1 The square of −1, i.e. −1 multiplied by −1, equals 1. As a consequence, a product of two negative numbers is positive. For an algebraic proof of this result, start with the equation 0 = −1 · 0 = −1 · (1 + (−1)). The first equality follows from the above result, and the second follows from the definition of −1 as additive inverse of 1: it is precisely that number which when added to 1 gives 0. Now, using the distributive law, it can be seen that 0 = −1 · (1 + (−1)) = −1 · 1 + (−1) · (−1) = −1 + (−1) · (−1). The third equality follows from the fact that 1 is a multiplicative identity. But now adding 1 to both sides of this last equation implies (−1) · (−1) = 1. The above arguments hold in any ring, a concept of abstract algebra generalizing integers and real numbers. Square roots of −1 Although there are no real square roots of −1, the complex number i satisfies i² = −1, and as such can be considered as a square root of −1. The only other complex number whose square is −1 is −i, because there are exactly two square roots of any non‐zero complex number, which follows from the fundamental theorem of algebra. In the algebra of quaternions – where the fundamental theorem does not apply – which contains the complex numbers, the equation x² = −1 has infinitely many solutions. Exponentiation to negative integers Exponentiation of a non‐zero real number can be extended to negative integers. We make the definition that x⁻¹ = 1/x, meaning that we define raising a number to the power −1 to have the same effect as taking its reciprocal. This definition is then extended to negative integers, preserving the exponential law x^a · x^b = x^{a+b} for real numbers a and b. Exponentiation to negative integers can be extended to invertible elements of a ring, by defining x⁻¹ as the multiplicative inverse of x. A −1 that appears as a superscript of a function does not mean taking the (pointwise) reciprocal of that function, but rather the inverse function of the function. For example, sin⁻¹(x) is a notation for the arcsine function, and in general f⁻¹(x) denotes the inverse function of f. When a subset of the codomain is specified inside the function, it instead denotes the preimage of that subset under the function. Uses In software development, −1 is a common initial value for integers and is also used to show that a variable contains no useful information. −1 bears relation to Euler's identity since e^{iπ} = −1. See also Balanced ternary Menelau
https://en.wikipedia.org/wiki/Fixed%20point%20%28mathematics%29
In mathematics, a fixed point (sometimes shortened to fixpoint), also known as an invariant point, is a value that does not change under a given transformation. Specifically, for functions, a fixed point is an element that is mapped to itself by the function.
Fixed point of a function
Formally, c is a fixed point of a function f if c belongs to both the domain and the codomain of f, and f(c) = c. For example, if f is defined on the real numbers by f(x) = x² − 3x + 4, then 2 is a fixed point of f, because f(2) = 2.
Not all functions have fixed points: for example, f(x) = x + 1 has no fixed points, since x + 1 is never equal to x for any real number. In graphical terms, a fixed point x means the point (x, f(x)) is on the line y = x, or in other words the graph of f has a point in common with that line.
Fixed point iteration
In numerical analysis, fixed-point iteration is a method of computing fixed points of a function. Specifically, given a function f with the same domain and codomain and a point x₀ in the domain of f, the fixed-point iteration is
x_{n+1} = f(x_n), n = 0, 1, 2, …,
which gives rise to the sequence x₀, f(x₀), f(f(x₀)), … of iterated function applications, which is hoped to converge to a point x_fix. If f is continuous, then one can prove that the obtained x_fix is a fixed point of f.
The notions of attracting fixed points, repelling fixed points, and periodic points are defined with respect to fixed-point iteration.
Fixed-point theorems
A fixed-point theorem is a result saying that at least one fixed point exists, under some general condition. For example, the Banach fixed-point theorem (1922) gives a general criterion guaranteeing that, if it is satisfied, fixed-point iteration will always converge to a fixed point. The Brouwer fixed-point theorem (1911) says that any continuous function from the closed unit ball in n-dimensional Euclidean space to itself must have a fixed point, but it doesn't describe how to find the fixed point. The Lefschetz fixed-point theorem (and the Nielsen fixed-point theorem) from algebraic topology give a way to count fixed points.
Fixed point of a group action
In algebra, for a group G acting on a set X with a group action ⋅, x in X is said to be a fixed point of g if g ⋅ x = x.
The fixed-point subgroup G^f of an automorphism f of a group G is the subgroup of G:
G^f = {g ∈ G : f(g) = g}.
Similarly, the fixed-point subring R^f of an automorphism f of a ring R is the subring of the fixed points of f, that is,
R^f = {r ∈ R : f(r) = r}.
In Galois theory, the set of the fixed points of a set of field automorphisms is a field called the fixed field of the set of automorphisms.
Topological fixed point property
A topological space X is said to have the fixed point property (FPP) if for any continuous function f : X → X there exists x ∈ X such that f(x) = x.
The FPP is a topological invariant, i.e. it is preserved by any homeomorphism. The FPP is also preserved by any retraction.
According to the Brouwer fixed-point theorem, every compact and convex subset of a Euclidean space has the FPP.
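The iteration described above is short to implement. The following Python sketch, not from the article, assumes f is a contraction near the fixed point (as in the Banach fixed-point theorem) and uses the cosine function as a classic example: iterating cos converges to the Dottie number 0.739085…, the unique real solution of cos(x) = x. The function name and tolerances are illustrative.

    # Minimal fixed-point iteration sketch.
    import math

    def fixed_point(f, x0, tol=1e-12, max_iter=1000):
        # Iterate x_{n+1} = f(x_n) until successive iterates agree to within tol.
        x = x0
        for _ in range(max_iter):
            x_next = f(x)
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        raise RuntimeError("fixed-point iteration did not converge")

    x_fix = fixed_point(math.cos, 1.0)
    print(x_fix)             # 0.7390851332151607 (the Dottie number)
    print(math.cos(x_fix))   # agrees with x_fix, so it is a fixed point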
https://en.wikipedia.org/wiki/Theta%20function
In mathematics, theta functions are special functions of several complex variables. They show up in many topics, including Abelian varieties, moduli spaces, quadratic forms, and solitons. As Grassmann algebras, they appear in quantum field theory.
The most common form of theta function is that occurring in the theory of elliptic functions. With respect to one of the complex variables (conventionally called z), a theta function has a property expressing its behavior with respect to the addition of a period of the associated elliptic functions, making it a quasiperiodic function. In the abstract theory this quasiperiodicity comes from the cohomology class of a line bundle on a complex torus, a condition of descent.
One interpretation of theta functions when dealing with the heat equation is that "a theta function is a special function that describes the evolution of temperature on a segment domain subject to certain boundary conditions".
Throughout this article, (e^(πiτ))^α should be interpreted as e^(απiτ) (in order to resolve issues of choice of branch).
Jacobi theta function
There are several closely related functions called Jacobi theta functions, and many different and incompatible systems of notation for them. One Jacobi theta function (named after Carl Gustav Jacob Jacobi) is a function defined for two complex variables z and τ, where z can be any complex number and τ is the half-period ratio, confined to the upper half-plane, which means it has positive imaginary part. It is given by the formula
θ(z; τ) = Σ_{n=−∞}^{∞} exp(πin²τ + 2πinz) = 1 + 2 Σ_{n=1}^{∞} q^(n²) cos(2πnz) = Σ_{n=−∞}^{∞} q^(n²) η^n,
where q = exp(πiτ) is the nome and η = exp(2πiz). It is a Jacobi form. The restriction Im τ > 0 ensures that it is an absolutely convergent series. At fixed τ, this is a Fourier series for a 1-periodic entire function of z. Accordingly, the theta function is 1-periodic in z:
θ(z + 1; τ) = θ(z; τ).
By completing the square, it is also τ-quasiperiodic in z, with
θ(z + τ; τ) = exp(−πi(τ + 2z)) θ(z; τ).
Thus, in general,
θ(z + a + bτ; τ) = exp(−πib²τ − 2πibz) θ(z; τ)
for any integers a and b.
For any fixed τ, the function is an entire function on the complex plane, so by Liouville's theorem, it cannot be doubly periodic in z (with periods 1 and τ) unless it is constant, and so the best we could do is to make it periodic with period 1 and quasi-periodic with quasi-period τ. Indeed, since |θ(z + bτ; τ)| = exp(π(b² Im τ + 2b Im z)) |θ(z; τ)| and Im τ > 0, the function is unbounded, as required by Liouville's theorem. It is in fact the most general entire function with 2 quasi-periods, in the following sense: if an entire function f satisfies f(z + 1) = f(z) and f(z + τ) = exp(−πi(τ + 2z)) f(z) for all z, then f is a constant multiple of θ(z; τ).
Auxiliary functions
The Jacobi theta function defined above is sometimes considered along with three auxiliary theta functions, in which case it is written with a double 0 subscript:
θ₀₀(z; τ) = θ(z; τ).
The auxiliary (or half-period) functions are defined by
θ₀₁(z; τ) = θ(z + 1/2; τ)
θ₁₀(z; τ) = exp(πiτ/4 + πiz) θ(z + τ/2; τ)
θ₁₁(z; τ) = exp(πiτ/4 + πi(z + 1/2)) θ(z + (τ + 1)/2; τ)
This notation follows Riemann and Mumford; Jacobi's original formulation was in terms of the nome q = e^(πiτ) rather than τ. In Jacobi's notation the θ-functions are written:
θ₁(z; q) = −θ₁₁(z; τ)
θ₂(z; q) = θ₁₀(z; τ)
θ₃(z; q) = θ₀₀(z; τ)
θ₄(z; q) = θ₀₁(z; τ)
The above definitions of the Jacobi theta functions are by no means unique. See Jacobi theta functions (notational variations) for further discussion.
If we set z = 0 in the above theta functions, we obtain four functions of τ only, defined on the upper half-plane. These functions are called Theta Nullwert functions, based on the German term Nullwert ("zero value").
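Because the defining series converges very rapidly for τ in the upper half-plane, the periodicity and quasiperiodicity relations can be checked numerically. The Python sketch below is not part of the article; it truncates the sum at |n| ≤ 50, and the sample values of z and τ are arbitrary.

    # Numerical check of theta's periodicity and quasiperiodicity via a truncated series.
    import cmath

    def theta(z, tau, n_max=50):
        # Partial sum of theta(z; tau) = sum over n of exp(pi*i*n^2*tau + 2*pi*i*n*z).
        return sum(cmath.exp(cmath.pi * 1j * (n * n * tau + 2 * n * z))
                   for n in range(-n_max, n_max + 1))

    z, tau = 0.3 + 0.1j, 0.5 + 1.0j     # tau has positive imaginary part

    # 1-periodicity in z: theta(z + 1; tau) = theta(z; tau)
    print(abs(theta(z + 1, tau) - theta(z, tau)))                    # ~0

    # quasiperiodicity: theta(z + tau; tau) = exp(-pi*i*(tau + 2z)) * theta(z; tau)
    lhs = theta(z + tau, tau)
    rhs = cmath.exp(-cmath.pi * 1j * (tau + 2 * z)) * theta(z, tau)
    print(abs(lhs - rhs))                                            # ~0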