https://en.wikipedia.org/wiki/Decile
|
In descriptive statistics, a decile is any of the nine values that divide the sorted data into ten equal parts, so that each part represents 1/10 of the sample or population. A decile is one possible form of a quantile; others include the quartile and percentile. A decile rank arranges the data in order from lowest to highest and is done on a scale of one to ten where each successive number corresponds to an increase of 10 percentage points.
Special Usage: The decile mean
A moderately robust measure of central tendency, known as the decile mean, can be computed by making use of a sample's deciles $D_1$ to $D_9$ ($D_1$ = 10th percentile, $D_2$ = 20th percentile, and so on). It is calculated as follows:
$DM = \frac{D_1 + D_2 + \cdots + D_9}{9}$
Apart from serving as an alternative for the mean and the truncated mean, it also forms the basis for robust measures of skewness and kurtosis, and even a normality test.
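The calculation above is easy to sketch in code. The snippet below is illustrative rather than part of the article; it assumes NumPy's default (linear-interpolation) percentile method:

```python
import numpy as np

def decile_mean(data):
    """Average of the nine deciles D1..D9 (the 10th..90th percentiles)."""
    deciles = np.percentile(data, np.arange(10, 100, 10))
    return deciles.mean()

# For the symmetric sample 1..99 the deciles pair up around 50,
# so the decile mean comes out at 50.
sample = np.arange(1, 100)
print(decile_mean(sample))
```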
See also
Summary statistics
Socio-economic decile (for New Zealand schools)
References
|
https://en.wikipedia.org/wiki/Cofinal
|
Cofinal may refer to:
Cofinal (mathematics), the property of a subset B of a preordered set A such that for every element of A there is a "larger element" in B
Cofinality (mathematics), the least cardinality of a cofinal subset in this sense
Cofinal (music), a part of some Gregorian chants
|
https://en.wikipedia.org/wiki/Sir%20John%20Sinclair%2C%201st%20Baronet
|
Colonel Sir John Sinclair, 1st Baronet, (10 May 1754 – 21 December 1835), was a Scottish politician, military officer, planter and writer who was one of the first people to use the word "statistics" in the English language in his pioneering work, Statistical Accounts of Scotland, which was published in 21 volumes.
Life
Sinclair was the eldest son of George Sinclair of Ulbster (d. 1770), a member of the family of the earls of Caithness, and his wife Lady Janet Sutherland. He was born at Thurso Castle, Caithness. He was educated at the High School in Edinburgh.
After studying law at the universities of Edinburgh and Glasgow and Trinity College, Oxford, he completed his legal studies at Lincoln's Inn in London in 1774. He was admitted to the Faculty of Advocates in Scotland in 1775, and also called to the English bar, although he never practised. He had inherited his father's estates in 1770 and had no financial need to work.
In 1780, he was returned to the House of Commons for the Caithness constituency, and subsequently represented several English constituencies, his parliamentary career extending, with few interruptions, until 1811. Sinclair established at Edinburgh a society for the improvement of British wool, and was mainly instrumental in the creation of the Board of Agriculture, of which he was the first president.
In 1788 he played a leading part in the formation of the African Association, founded to promote knowledge of Africa.
In 1794, Sinclair raised the Rothesay and Caithness Fencibles, the first of the Highland Fencible Corps which could be called to serve in the entirety of Great Britain and not merely Scotland. He later raised a second fencible unit, the Caithness Highlanders, who would go on to serve in Ireland during the Irish Rebellion of 1798.
His reputation as a financier and economist had been established by the publication, in 1784, of his History of the Public Revenue of the British Empire; in 1793 widespread ruin was prevented by the adoption of his plan for the issue of Exchequer Bills; and it was on his advice that, in 1797, Pitt issued the "loyalty loan" of £18 million for the prosecution of the war.
From 1800 until 1816, he lived with his family at 6 Charlotte Square (now known as Bute House) in Edinburgh.
During his life, Sinclair acquired ownership of slave plantations in Saint Vincent and 610 slaves. After Parliament abolished slavery in the British Empire with the Slavery Abolition Act 1833, Sinclair claimed partial compensation for the loss of his slaves under the Slave Compensation Act 1837, but died before he received his payout.
He died at home, 133 George Street, in the centre of Edinburgh's New Town. He is buried in the Royal Chapel at Holyrood Abbey. His stone sarcophagus lies towards the north-east.
Family
Sinclair, who was made a baronet in 1786, married twice. On 26 March 1776 he married his first wife Sarah Maitland, the only child and heir of Alexander Maitland of Stoke Newington. Together they h
|
https://en.wikipedia.org/wiki/Statistical%20classification
|
In statistics, classification is the problem of identifying which of a set of categories (sub-populations) an observation (or observations) belongs to. Examples are assigning a given email to the "spam" or "non-spam" class, and assigning a diagnosis to a given patient based on observed characteristics of the patient (sex, blood pressure, presence or absence of certain symptoms, etc.).
Often, the individual observations are analyzed into a set of quantifiable properties, known variously as explanatory variables or features. These properties may variously be categorical (e.g. "A", "B", "AB" or "O", for blood type), ordinal (e.g. "large", "medium" or "small"), integer-valued (e.g. the number of occurrences of a particular word in an email) or real-valued (e.g. a measurement of blood pressure). Other classifiers work by comparing observations to previous observations by means of a similarity or distance function.
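The last sentence can be made concrete with a minimal nearest-neighbour classifier; this sketch is not from the article, and the feature vectors and labels are invented for illustration:

```python
import math

def nearest_neighbor(train, query):
    """Classify `query` by the label of the closest training observation,
    using Euclidean distance as the comparison function."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Pick the (features, label) pair whose features are nearest the query.
    return min(train, key=lambda item: dist(item[0], query))[1]

# Toy feature vectors: (blood pressure, symptom count) -> diagnosis label.
train = [((120, 0), "healthy"), ((160, 3), "ill"), ((125, 1), "healthy")]
print(nearest_neighbor(train, (155, 2)))  # "ill"
```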
An algorithm that implements classification, especially in a concrete implementation, is known as a classifier. The term "classifier" sometimes also refers to the mathematical function, implemented by a classification algorithm, that maps input data to a category.
Terminology across fields is quite varied. In statistics, where classification is often done with logistic regression or a similar procedure, the properties of observations are termed explanatory variables (or independent variables, regressors, etc.), and the categories to be predicted are known as outcomes, which are considered to be possible values of the dependent variable. In machine learning, the observations are often known as instances, the explanatory variables are termed features (grouped into a feature vector), and the possible categories to be predicted are classes. Other fields may use different terminology: e.g. in community ecology, the term "classification" normally refers to cluster analysis.
Relation to other problems
Classification and clustering are examples of the more general problem of pattern recognition, which is the assignment of some sort of output value to a given input value. Other examples are regression, which assigns a real-valued output to each input; sequence labeling, which assigns a class to each member of a sequence of values (for example, part of speech tagging, which assigns a part of speech to each word in an input sentence); parsing, which assigns a parse tree to an input sentence, describing the syntactic structure of the sentence; etc.
A common subclass of classification is probabilistic classification. Algorithms of this nature use statistical inference to find the best class for a given instance. Unlike other algorithms, which simply output a "best" class, probabilistic algorithms output a probability of the instance being a member of each of the possible classes. The best class is normally then selected as the one with the highest probability. However, such an algorithm has numerous advantages over non-probabilistic classi
|
https://en.wikipedia.org/wiki/CompStat
|
CompStat—or COMPSTAT, short for Computer Statistics—is a computerization and quantification program used by police departments. It was originally set up by the New York City Police Department in the 1990s. Variations of the program have since been used in police departments across the world. According to a 2022 podcast by Peter Moskos with John Yohe and Billy Gorta, the name CompStat was suggested by detective Richard Mahere for the computer file name of the original program to comply with 8.3 filename conventions, short for "Comparative Statistics" and "Computer Statistics".
Origins
CompStat was started under the direction of Jack Maple when he was a transit police officer in New York City. The system was called Charts of the Future and was simple: it tracked crime through pins stuck in maps. Charts of the Future is credited with cutting subway crime by 27 percent.
The original commanding officer of the Transit Police Crime Analysis Unit was Lieutenant Richard Vasconi. Chief of New York City Transit Police William J. Bratton was later appointed police commissioner by Rudolph Giuliani, and he brought Maple's Charts of the Future with him. Maple eventually persuaded the NYPD to adopt it after it was rebranded as CompStat, and it was credited with helping to bring down crime by around 60%. There was a CompStat meeting every month, and it was mandatory for police officials to attend. The year after CompStat was adopted, 1995, murders dropped to 1,181. By 2012, there were 417 murders, the lowest number since records began in 1964.
Operations
Weekly crime reports
On a weekly basis, personnel from each of the NYPD's 77 precincts, nine police service areas and 12 transit districts compile a statistical summary of the week's crime complaints, arrests and summons activity, as well as a written report of significant cases, crime patterns and police activities. This data, with specific crime and enforcement locations and times, is forwarded to the chief of the department's CompStat Unit, where information is collated and loaded into a citywide database.
The unit runs computer analysis on the data and generates a weekly CompStat report. The report captures crime complaints and arrest activity at the precinct, patrol borough and citywide levels, presenting a summary of these and other important performance indicators.
The data is presented on a week-to-date, prior 28 days and year-to-date basis, with comparisons to previous years' activity. Precinct commanders and members of the department's senior officers can easily discern emerging and established crime trends, as well as deviations and anomalies. With the report, department leadership can easily make comparisons between commands. Each precinct is also ranked in each complaint and arrest category.
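The reporting logic described above can be sketched in a few lines. This is illustrative only; the field names and sample data are hypothetical, not the NYPD's actual schema:

```python
from datetime import date, timedelta
from collections import Counter

def compstat_summary(complaints, as_of):
    """Tally complaints per precinct over week-to-date, prior-28-day,
    and year-to-date windows ending at `as_of` (a sketch; the real
    CompStat report carries many more indicators)."""
    windows = {
        "week_to_date": as_of - timedelta(days=6),
        "prior_28_days": as_of - timedelta(days=27),
        "year_to_date": date(as_of.year, 1, 1),
    }
    report = {name: Counter() for name in windows}
    for precinct, when in complaints:
        for name, start in windows.items():
            if start <= when <= as_of:
                report[name][precinct] += 1
    return report

complaints = [("75th", date(2024, 3, 4)), ("75th", date(2024, 2, 20)),
              ("44th", date(2024, 3, 5)), ("44th", date(2024, 1, 2))]
print(compstat_summary(complaints, date(2024, 3, 6)))
```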
Accountability
The CompStat program involves weekly crime control strategy meetings. These gatherings increase information flow between the agency's executives and the commanders of operational units, with particu
|
https://en.wikipedia.org/wiki/Generalized%20function
|
In mathematics, generalized functions are objects extending the notion of functions. There is more than one recognized theory, for example the theory of distributions. Generalized functions are especially useful in making discontinuous functions more like smooth functions, and describing discrete physical phenomena such as point charges. They are applied extensively, especially in physics and engineering.
A common feature of some of the approaches is that they build on operator aspects of everyday, numerical functions. The early history is connected with some ideas on operational calculus, and more contemporary developments in certain directions are closely related to ideas of Mikio Sato, on what he calls algebraic analysis. Important influences on the subject have been the technical requirements of theories of partial differential equations, and group representation theory.
Some early history
In the mathematics of the nineteenth century, aspects of generalized function theory appeared, for example in the definition of the Green's function, in the Laplace transform, and in Riemann's theory of trigonometric series, which were not necessarily the Fourier series of an integrable function. These were disconnected aspects of mathematical analysis at the time.
The intensive use of the Laplace transform in engineering led to the heuristic use of symbolic methods, called operational calculus. Since justifications were given that used divergent series, these methods had a bad reputation from the point of view of pure mathematics. They are typical of later application of generalized function methods. An influential book on operational calculus was Oliver Heaviside's Electromagnetic Theory of 1899.
When the Lebesgue integral was introduced, there was for the first time a notion of generalized function central to mathematics. An integrable function, in Lebesgue's theory, is equivalent to any other which is the same almost everywhere. That means its value at a given point is (in a sense) not its most important feature. In functional analysis a clear formulation is given of the essential feature of an integrable function, namely the way it defines a linear functional on other functions. This allows a definition of weak derivative.
During the late 1920s and 1930s further steps were taken, basic to future work. The Dirac delta function was boldly defined by Paul Dirac (an aspect of his scientific formalism); this was to treat measures, thought of as densities (such as charge density) like genuine functions. Sergei Sobolev, working in partial differential equation theory, defined the first adequate theory of generalized functions, from the mathematical point of view, in order to work with weak solutions of partial differential equations. Others proposing related theories at the time were Salomon Bochner and Kurt Friedrichs. Sobolev's work was further developed in an extended form by Laurent Schwartz.
Schwartz distributions
The realization of such a conce
|
https://en.wikipedia.org/wiki/L%C4%ABl%C4%81vat%C4%AB
|
Līlāvatī is Indian mathematician Bhāskara II's treatise on mathematics, written in 1150 AD. It is the first volume of his main work, the Siddhānta Shiromani, alongside the Bijaganita, the Grahaganita and the Golādhyāya.
Name
His book on arithmetic is the source of interesting legends that assert that it was written for his daughter, Lilavati. Lilavati was Bhaskara II's daughter. Bhaskara II studied Lilavati's horoscope and predicted that she would remain both childless and unmarried. To avoid this fate, he ascertained an auspicious moment for his daughter's wedding and to alert his daughter at the correct time, he placed a cup with a small hole at the bottom of a vessel filled with water, arranged so that the cup would sink at the beginning of the propitious hour. He put the device in a room with a warning to Lilavati to not go near it. In her curiosity though, she went to look at the device and a pearl from her bridal dress accidentally dropped into it, thus upsetting it. The auspicious moment for the wedding thus passed unnoticed leaving a devastated Bhaskara II. It is then that he promised his daughter to write a book in her name, one that would remain till the end of time as a good name is akin to a second life.
Many of the problems are addressed to Līlāvatī herself who must have been a very bright young woman. For example "Oh Līlāvatī, intelligent girl, if you understand addition and subtraction, tell me the sum of the amounts 2, 5, 32, 193, 18, 10, and 100, as well as [the remainder of] those when subtracted from 10000." and "Fawn-eyed child Līlāvatī, tell me, how much is the number [resulting from] 135 multiplied by 12, if you understand multiplication by separate parts and by separate digits. And tell [me], beautiful one, how much is that product divided by the same multiplier?"
The word Līlāvatī itself means playful or one possessing play (from Sanskrit, Līlā = play, -vatī = female possessing the quality).
Contents
The book contains thirteen chapters, mainly definitions, arithmetical terms, interest computation, arithmetical and geometrical progressions, plane geometry, solid geometry, the shadow of the gnomon, the Kuṭṭaka (a method to solve indeterminate equations), and combinations. Bhaskara II gives the value of pi as 22/7 in the book but suggests a more accurate ratio of 3927/1250 for use in astronomical calculations. Also according to the book, the largest number is the parardha, equal to one hundred thousand billion.
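A quick numerical comparison of the two ratios (not part of the text) shows why the second was preferred for astronomy:

```python
import math

# Compare Bhaskara II's two ratios for pi with the true value.
rough = 22 / 7            # everyday value given in the Lilavati
precise = 3927 / 1250     # ratio suggested for astronomical work
print(abs(rough - math.pi))    # about 1.3e-3
print(abs(precise - math.pi))  # about 7.3e-6
```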
Lilavati includes a number of methods of computing numbers such as multiplications, squares, and progressions, with examples using kings and elephants, objects which a common man could understand.
Excerpt from Lilavati (Appears as an additional problem attached to stanza 54, Chapter 3. Translated by T N Colebrook)
Whilst making love a necklace broke.
A row of pearls mislaid.
One sixth fell to the floor.
One fifth upon the bed.
The young woman saved one third of them.
One tenth were caught by her lover.
If six pe
|
https://en.wikipedia.org/wiki/Minimum-variance%20unbiased%20estimator
|
In statistics a minimum-variance unbiased estimator (MVUE) or uniformly minimum-variance unbiased estimator (UMVUE) is an unbiased estimator that has lower variance than any other unbiased estimator for all possible values of the parameter.
For practical statistics problems, it is important to determine the MVUE if one exists, since less-than-optimal procedures would naturally be avoided, other things being equal. This has led to substantial development of statistical theory related to the problem of optimal estimation.
While combining the constraint of unbiasedness with the desirability metric of least variance leads to good results in most practical settings—making MVUE a natural starting point for a broad range of analyses—a targeted specification may perform better for a given problem; thus, MVUE is not always the best stopping point.
Definition
Consider estimation of $g(\theta)$ based on data $X_1, X_2, \ldots, X_n$ i.i.d. from some member of a family of densities $p_\theta$, $\theta \in \Omega$, where $\Omega$ is the parameter space. An unbiased estimator $\delta(X_1, X_2, \ldots, X_n)$ of $g(\theta)$ is UMVUE if, for all $\theta \in \Omega$,
$\operatorname{var}(\delta(X_1, X_2, \ldots, X_n)) \leq \operatorname{var}(\tilde{\delta}(X_1, X_2, \ldots, X_n))$
for any other unbiased estimator $\tilde{\delta}$.
If an unbiased estimator of exists, then one can prove there is an essentially unique MVUE. Using the Rao–Blackwell theorem one can also prove that determining the MVUE is simply a matter of finding a complete sufficient statistic for the family and conditioning any unbiased estimator on it.
Further, by the Lehmann–Scheffé theorem, an unbiased estimator that is a function of a complete, sufficient statistic is the UMVUE estimator.
Put formally, suppose $\delta(X)$ is unbiased for $g(\theta)$, and that $T$ is a complete sufficient statistic for the family of densities. Then
$\eta(X) := \operatorname{E}[\delta(X) \mid T]$
is the MVUE for $g(\theta)$.
A Bayesian analog is a Bayes estimator, particularly with minimum mean square error (MMSE).
Estimator selection
An efficient estimator need not exist, but if it does and if it is unbiased, it is the MVUE. Since the mean squared error (MSE) of an estimator $\delta$ is
$\operatorname{MSE}(\delta) = \operatorname{var}(\delta) + [\operatorname{bias}(\delta)]^2,$
the MVUE minimizes MSE among unbiased estimators. In some cases biased estimators have lower MSE because they have a smaller variance than does any unbiased estimator; see estimator bias.
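A small Monte Carlo experiment illustrates the point for normal data: dividing the sum of squares by n (the biased variance estimator) trades a little bias for a smaller variance, and wins on MSE. This is a sketch, not from the article:

```python
import random

random.seed(0)
# Monte Carlo comparison of two variance estimators for N(0, 1) samples:
# the unbiased one (divide by n - 1) and the biased one (divide by n).
n, reps, true_var = 10, 20000, 1.0
se_unbiased = se_biased = 0.0
for _ in range(reps):
    xs = [random.gauss(0, 1) for _ in range(n)]
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    se_unbiased += (ss / (n - 1) - true_var) ** 2
    se_biased += (ss / n - true_var) ** 2
print(se_unbiased / reps, se_biased / reps)  # the /n version has lower MSE here
```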
Example
Consider the data to be a single observation from an absolutely continuous distribution on $\mathbb{R}$ with density
$p_\theta(x) = \frac{\theta e^{-x}}{(1 + e^{-x})^{\theta + 1}}, \qquad \theta > 0,$
and we wish to find the UMVU estimator of
$g(\theta) = \frac{1}{\theta^2}.$
First we recognize that the density can be written as
$\frac{e^{-x}}{1 + e^{-x}} \exp\!\left(-\theta \log(1 + e^{-x}) + \log\theta\right),$
which is an exponential family with sufficient statistic $T = \log(1 + e^{-X})$. In fact this is a full rank exponential family, and therefore $T$ is complete sufficient. See the exponential family article for a derivation which shows
$\operatorname{E}(T) = \frac{1}{\theta}, \qquad \operatorname{E}(T^2) = \frac{2}{\theta^2}.$
Therefore
$\eta(X) = \frac{T^2}{2} = \frac{\log^2(1 + e^{-X})}{2}$
is unbiased for $g(\theta)$. Here we use the Lehmann–Scheffé theorem to get the MVUE: clearly $\eta(X)$ is unbiased and $T$ is complete sufficient, thus the UMVU estimator is $\eta(X)$.
This example illustrates that an unbiased function of the complete sufficient statistic will be UMVU, as the Lehmann–Scheffé theorem states.
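A quick way to sanity-check a claim of this shape: if a statistic T is exponentially distributed with rate θ, then E(T²) = 2/θ², so T²/2 is unbiased for 1/θ². A Monte Carlo sketch (θ and the sample size are arbitrary choices for the demonstration):

```python
import random

random.seed(1)
# If T ~ Exponential(rate = theta), then E[T^2] = 2 / theta^2,
# so T^2 / 2 is an unbiased estimator of 1 / theta^2.
theta, reps = 2.0, 200_000
est = sum(random.expovariate(theta) ** 2 / 2 for _ in range(reps)) / reps
print(est)  # close to 1 / theta^2 = 0.25
```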
Other examples
For a normal distribution with unknown mean and variance, the sample mean and (unbiased) sample variance are the MVUEs for the population mean and population variance.
However, the sample stand
|
https://en.wikipedia.org/wiki/Sufficiency
|
Sufficiency may refer to:
Logical sufficiency; see necessary and sufficient conditions
sufficiency (statistics), sufficiency in statistical inference
The sufficiency of Scripture, a Christian doctrine
See also
Self-sufficiency
Eco-sufficiency
Sufficiency of disclosure, a patent law requirement
|
https://en.wikipedia.org/wiki/Weierstrass%20factorization%20theorem
|
In mathematics, and particularly in the field of complex analysis, the Weierstrass factorization theorem asserts that every entire function can be represented as a (possibly infinite) product involving its zeroes. The theorem may be viewed as an extension of the fundamental theorem of algebra, which asserts that every polynomial may be factored into linear factors, one for each root.
The theorem, which is named for Karl Weierstrass, is closely related to a second result that every sequence tending to infinity has an associated entire function with zeroes at precisely the points of that sequence.
A generalization of the theorem extends it to meromorphic functions and allows one to consider a given meromorphic function as a product of three factors: terms depending on the function's zeros and poles, and an associated non-zero holomorphic function.
Motivation
It is clear that any finite set $\{c_n\}$ of points in the complex plane has an associated polynomial $p(z) = \prod_n (z - c_n)$ whose zeroes are precisely at the points of that set. The converse is a consequence of the fundamental theorem of algebra: any polynomial function $p(z)$ in the complex plane has a factorization
$p(z) = a \prod_n (z - c_n),$
where $a$ is a non-zero constant and $\{c_n\}$ is the set of zeroes of $p(z)$.
The two forms of the Weierstrass factorization theorem can be thought of as extensions of the above to entire functions. The necessity of additional terms in the product is demonstrated when one considers the product $\prod_n (z - c_n)$ where the sequence $\{c_n\}$ is not finite. It can never define an entire function, because the infinite product does not converge. Thus one cannot, in general, define an entire function from a sequence of prescribed zeroes or represent an entire function by its zeroes using the expressions yielded by the fundamental theorem of algebra.
A necessary condition for convergence of the infinite product in question is that for each $z$, the factors must approach $1$ as $n \to \infty$. So it stands to reason that one should seek a function that could be 0 at a prescribed point, yet remain near 1 when not at that point and furthermore introduce no more zeroes than those prescribed.
Weierstrass' elementary factors have these properties and serve the same purpose as the factors above.
The elementary factors
Consider the functions of the form $\exp\!\left(-\frac{z^{n+1}}{n+1}\right)$ for $n \in \mathbb{N}$. At $z = 0$, they evaluate to $1$ and have a flat slope at order up to $n$. Right after $z = 1$, they sharply fall to some small positive value. In contrast, consider the function $1 - z$, which has no flat slope but, at $z = 1$, evaluates to exactly zero. Also note that for $|z| < 1$,
$1 - z = \exp(\ln(1 - z)) = \exp\!\left(-\frac{z}{1} - \frac{z^2}{2} - \frac{z^3}{3} - \cdots\right).$
[Figure: plot of the first five Weierstrass elementary factors $E_n(x)$ for $n = 0, \ldots, 4$ and $x$ in the interval $[-1, 1]$.]
The elementary factors, also referred to as primary factors, are functions that combine the properties of zero slope and zero value (see graphic):
$E_0(z) = 1 - z, \qquad E_n(z) = (1 - z)\exp\!\left(\frac{z}{1} + \frac{z^2}{2} + \cdots + \frac{z^n}{n}\right) \text{ for } n \geq 1.$
For $|z| < 1$ and $n > 0$, one may express it as
$E_n(z) = \exp\!\left(-\sum_{k=n+1}^{\infty} \frac{z^k}{k}\right),$
and one can read off how those properties are enforced.
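The defining properties of the elementary factors (a zero exactly at z = 1, values near 1 elsewhere in the unit disk, improving with n) can be checked numerically. This sketch uses the standard formula E_n(z) = (1 − z) exp(z + z²/2 + ⋯ + zⁿ/n):

```python
import cmath

def elementary_factor(n, z):
    """Weierstrass elementary factor E_n(z)."""
    if n == 0:
        return 1 - z
    return (1 - z) * cmath.exp(sum(z ** k / k for k in range(1, n + 1)))

# E_n vanishes exactly at z = 1 ...
print(elementary_factor(3, 1))             # 0j
# ... while staying close to 1 inside the unit disk, better as n grows.
print(abs(elementary_factor(3, 0.5) - 1))  # about 0.026
print(abs(elementary_factor(8, 0.5) - 1))  # about 0.0004
```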
The utility of the elementary factors lies in the following lemma:
L
|
https://en.wikipedia.org/wiki/Young%20symmetrizer
|
In mathematics, a Young symmetrizer is an element of the group algebra of the symmetric group $S_n$, constructed in such a way that, for the homomorphism from the group algebra to the endomorphisms of a vector space $V^{\otimes n}$ obtained from the action of $S_n$ on $V^{\otimes n}$ by permutation of indices, the image of the endomorphism determined by that element corresponds to an irreducible representation of the symmetric group over the complex numbers. A similar construction works over any field, and the resulting representations are called Specht modules. The Young symmetrizer is named after British mathematician Alfred Young.
Definition
Given a finite symmetric group $S_n$ and a specific Young tableau $\lambda$ corresponding to a numbered partition of $n$, consider the action of $S_n$ given by permuting the boxes of $\lambda$. Define two permutation subgroups $P_\lambda$ and $Q_\lambda$ of $S_n$ as follows:
$P_\lambda = \{ g \in S_n : g \text{ preserves each row of } \lambda \}$
and
$Q_\lambda = \{ g \in S_n : g \text{ preserves each column of } \lambda \}.$
Corresponding to these two subgroups, define two vectors in the group algebra $\mathbb{C}S_n$ as
$a_\lambda = \sum_{g \in P_\lambda} e_g$
and
$b_\lambda = \sum_{g \in Q_\lambda} \operatorname{sgn}(g)\, e_g,$
where $e_g$ is the unit vector corresponding to $g$, and $\operatorname{sgn}(g)$ is the sign of the permutation. The product
$c_\lambda := a_\lambda b_\lambda$
is the Young symmetrizer corresponding to the Young tableau λ. Each Young symmetrizer corresponds to an irreducible representation of the symmetric group, and every irreducible representation can be obtained from a corresponding Young symmetrizer. (If we replace the complex numbers by more general fields the corresponding representations will not be irreducible in general.)
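As a concrete sketch (not part of the article), one can realize the group algebra of S3 in code, build the symmetrizer for the partition (2,1) with tableau rows {1,2} and column {1,3}, and check that c·c is a scalar multiple of c:

```python
from collections import Counter

# Permutations of {0, 1, 2} as tuples p, with p[i] the image of i.
# Group-algebra elements are Counters mapping permutation -> coefficient.
def compose(g, h):                       # apply h first, then g
    return tuple(g[h[i]] for i in range(3))

def multiply(x, y):
    out = Counter()
    for g, cg in x.items():
        for h, ch in y.items():
            out[compose(g, h)] += cg * ch
    return Counter({g: v for g, v in out.items() if v != 0})

e = (0, 1, 2)
s12, s13 = (1, 0, 2), (2, 1, 0)
# Tableau [[1, 2], [3]]: row group P = {e, (12)}, column group Q = {e, (13)}.
a = Counter({e: 1, s12: 1})              # a_lambda: sum over row group
b = Counter({e: 1, s13: -1})             # b_lambda: signed sum over column group
c = multiply(a, b)                       # Young symmetrizer c = a * b
cc = multiply(c, c)
print(cc == Counter({g: 3 * k for g, k in c.items()}))  # True: c*c = 3c
```

The observed scalar 3 matches the general formula n!/dim V_λ = 6/2 for this partition.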
Construction
Let $V$ be any vector space over the complex numbers. Consider then the tensor product vector space $V^{\otimes n} = V \otimes V \otimes \cdots \otimes V$ ($n$ times). Let $S_n$ act on this tensor product space by permuting the indices. One then has a natural group algebra representation $\mathbb{C}[S_n] \to \operatorname{End}(V^{\otimes n})$ (i.e. $V^{\otimes n}$ is a right $\mathbb{C}[S_n]$-module).
Given a partition $\lambda$ of $n$, so that $n = \lambda_1 + \lambda_2 + \cdots + \lambda_j$, the image of $a_\lambda$ is
$\operatorname{Im}(a_\lambda) := V^{\otimes n} a_\lambda \cong \operatorname{Sym}^{\lambda_1} V \otimes \operatorname{Sym}^{\lambda_2} V \otimes \cdots \otimes \operatorname{Sym}^{\lambda_j} V.$
For instance, if $n = 4$ and $\lambda = (2, 2)$, with the canonical Young tableau $\{\{1, 2\}, \{3, 4\}\}$, then the corresponding $a_\lambda$ is given by
$a_\lambda = e_{\text{id}} + e_{(1,2)} + e_{(3,4)} + e_{(1,2)(3,4)}.$
For any product vector $v_{1,2,3,4} := v_1 \otimes v_2 \otimes v_3 \otimes v_4$ of $V^{\otimes 4}$ we then have
$v_{1,2,3,4} a_\lambda = v_{1,2,3,4} + v_{2,1,3,4} + v_{1,2,4,3} + v_{2,1,4,3}.$
Thus the set of all such vectors clearly spans $\operatorname{Sym}^2 V \otimes \operatorname{Sym}^2 V$, and since the product vectors span $V^{\otimes 4}$ we obtain $\operatorname{Im}(a_\lambda) = \operatorname{Sym}^2 V \otimes \operatorname{Sym}^2 V$, where we wrote informally $\operatorname{Im}(a_\lambda) \equiv V^{\otimes 4} a_\lambda$.
Notice also how this construction can be reduced to the construction for $n = 2$.
Let $\mathbf{1} \in \operatorname{End}(V \otimes V)$ be the identity operator and $P \in \operatorname{End}(V \otimes V)$ the swap operator defined by $P(v \otimes w) = w \otimes v$, thus $\mathbf{1}^2 = \mathbf{1}$ and $P^2 = \mathbf{1}$. We have that
$\tfrac{1}{2}(\mathbf{1} + P)$
maps $V \otimes V$ into $\operatorname{Sym}^2 V$; more precisely, it is the projector onto $\operatorname{Sym}^2 V$. Then
$\tfrac{1}{2}(\mathbf{1} - P)$
is the projector onto $\Lambda^2 V$.
The image of $b_\lambda$ is
$\operatorname{Im}(b_\lambda) = V^{\otimes n} b_\lambda \cong \Lambda^{\mu_1} V \otimes \cdots \otimes \Lambda^{\mu_k} V,$
where $\mu$ is the conjugate partition to $\lambda$. Here, $\operatorname{Sym}^i V$ and $\Lambda^i V$ are the symmetric and alternating tensor product spaces.
The image of $c_\lambda = a_\lambda b_\lambda$ in $V^{\otimes n}$ is an irreducible representation of $S_n$, called a Specht module. We write
$\operatorname{Im}(c_\lambda) = V_\lambda$
for the irreducible representation.
Some scalar multiple of $c_\lambda$ is idempotent, that is $c_\lambda^2 = \alpha_\lambda c_\lambda$ for some rational number $\alpha_\lambda \in \mathbb{Q}$. Specifically, one finds $\alpha_\lambda = n! / \dim V_\lambda$. In particular, this implies that representations of the symmetric group can be defined over the rational numbers; that is, over the rational group algebra $\mathbb{Q}[S_n]$.
Consider, for example, $S_3$ and the partition $(2, 1)$, with the canonical tableau having rows $\{1, 2\}$ and $\{3\}$. Then one has
$c_{(2,1)} = e_{\text{id}} + e_{(1,2)} - e_{(1,3)} - e_{(1,2)(1,3)}.$
If V is a complex vector space, then the images of on spaces provides essentially all the finite-dimensional irreducible representations of GL(V).
See also
Representation t
|
https://en.wikipedia.org/wiki/Dessin%20d%27enfant
|
In mathematics, a dessin d'enfant is a type of graph embedding used to study Riemann surfaces and to provide combinatorial invariants for the action of the absolute Galois group of the rational numbers. The name of these embeddings is French for a "child's drawing"; its plural is either dessins d'enfant, "child's drawings", or dessins d'enfants, "children's drawings".
A dessin d'enfant is a graph, with its vertices colored alternately black and white, embedded in an oriented surface that, in many cases, is simply a plane. For the coloring to exist, the graph must be bipartite. The faces of the embedding are required to be topological disks. The surface and the embedding may be described combinatorially using a rotation system, a cyclic order of the edges surrounding each vertex of the graph that describes the order in which the edges would be crossed by a path that travels clockwise on the surface in a small loop around the vertex.
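Equivalently, a dessin can be encoded by a pair of permutations on the edge set, one recording the cyclic order around black vertices and one around white vertices. As an illustrative sketch (the encoding is standard, the helper names are invented), the genus of the carrying surface then falls out of the Euler formula V − E + F = 2 − 2g:

```python
def cycle_count(perm):
    """Number of cycles of a permutation given as a dict i -> perm(i)."""
    seen, count = set(), 0
    for start in perm:
        if start not in seen:
            count += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return count

def genus(sigma_black, sigma_white):
    """Genus of the surface carrying a dessin encoded by two permutations
    on the edge set: vertices are cycles of each permutation, faces are
    cycles of their product."""
    edges = list(sigma_black)
    product = {i: sigma_black[sigma_white[i]] for i in edges}
    v = cycle_count(sigma_black) + cycle_count(sigma_white)
    e = len(edges)
    f = cycle_count(product)
    return (2 - (v - e + f)) // 2

# Two edges meeting at one black vertex, each ending at its own white
# vertex: a path, which embeds in the sphere (genus 0).
print(genus({0: 1, 1: 0}, {0: 0, 1: 1}))  # 0
```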
Any dessin can provide the surface it is embedded in with a structure as a Riemann surface. It is natural to ask which Riemann surfaces arise in this way. The answer is provided by Belyi's theorem, which states that the Riemann surfaces that can be described by dessins are precisely those that can be defined as algebraic curves over the field of algebraic numbers. The absolute Galois group transforms these particular curves into each other, and thereby also transforms the underlying dessins.
History
19th century
Early proto-forms of dessins d'enfants appeared as early as 1856 in the icosian calculus of William Rowan Hamilton; in modern terms, these are Hamiltonian paths on the icosahedral graph.
Recognizable modern dessins d'enfants and Belyi functions were used by Felix Klein. Klein called these diagrams Linienzüge (German, plural of Linienzug "line-track", also used as a term for polygon); he used a white circle for the preimage of 0 and a '+' for the preimage of 1, rather than a black circle for 0 and white circle for 1 as in modern notation. He used these diagrams to construct an 11-fold cover of the Riemann sphere by itself, with monodromy group PSL(2,11), following earlier constructions of a 7-fold cover with monodromy group PSL(2,7) connected to the Klein quartic. These were all related to his investigations of the geometry of the quintic equation and the group A5 ≅ PSL(2,5), collected in his famous 1884/88 Lectures on the Icosahedron. The three surfaces constructed in this way from these three groups were much later shown to be closely related through the phenomenon of trinity.
20th century
Dessins d'enfant in their modern form were then rediscovered over a century later and named by Alexander Grothendieck in 1984 in his Esquisse d'un Programme, where he recounted his discovery of the Galois action on dessins d'enfants.
Part of the theory had already been developed independently some time before Grothendieck. That earlier work outlines the correspondence between maps on topological su
|
https://en.wikipedia.org/wiki/Fr%C3%A9chet%20filter
|
In mathematics, the Fréchet filter, also called the cofinite filter, on a set $X$ is a certain collection of subsets of $X$ (that is, it is a particular subset of the power set of $X$).
A subset $A$ of $X$ belongs to the Fréchet filter if and only if the complement of $A$ in $X$ is finite.
Any such set $A$ is said to be cofinite in $X$, which is why it is alternatively called the cofinite filter on $X$.
The Fréchet filter is of interest in topology, where filters originated, and relates to order and lattice theory because a set's power set is a partially ordered set under set inclusion (more specifically, it forms a lattice).
The Fréchet filter is named after the French mathematician Maurice Fréchet (1878-1973), who worked in topology.
Definition
A subset $A$ of a set $X$ is said to be cofinite in $X$ if its complement in $X$ (that is, the set $X \setminus A$) is finite.
If the empty set is allowed to be in a filter, the Fréchet filter on $X$, denoted by $F$, is the set of all cofinite subsets of $X$.
That is:
$F = \{ A \subseteq X : X \setminus A \text{ is finite} \}.$
If $X$ is an infinite set, then every cofinite subset of $X$ is necessarily not empty, so that in this case, it is not necessary to make the empty set assumption made before.
This makes $F$ a filter on the lattice $(\wp(X), \subseteq)$, the power set of $X$ with set inclusion, because, given that $A^c$ denotes the complement of a set $A$ in $X$, the following two conditions hold:
Intersection condition: If two sets are finitely complemented in $X$, then so is their intersection, since $(A \cap B)^c = A^c \cup B^c$ and the union of two finite sets is finite.
Upper-set condition: If a set is finitely complemented in $X$, then so are its supersets in $X$.
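The two conditions can be checked mechanically if cofinite sets are represented by their finite complements; this sketch (not from the article) does exactly that:

```python
class Cofinite:
    """A cofinite subset of an ambient infinite set, stored by its
    finite complement."""
    def __init__(self, complement):
        self.complement = frozenset(complement)

    def intersection(self, other):
        # (A ∩ B)^c = A^c ∪ B^c: a union of two finite sets is finite,
        # so the intersection of two cofinite sets is again cofinite.
        return Cofinite(self.complement | other.complement)

    def contains(self, x):
        return x not in self.complement

a = Cofinite({1, 2})       # everything except 1 and 2
b = Cofinite({2, 3})       # everything except 2 and 3
print(a.intersection(b).complement)  # frozenset({1, 2, 3})
```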
Properties
If the base set $X$ is finite, then $F = \wp(X)$, since every subset of $X$, and in particular every complement, is then finite.
This case is sometimes excluded by definition, or else called the improper filter on $X$. Allowing $X$ to be finite creates a single exception to the Fréchet filter's being free and non-principal, since a filter on a finite set cannot be free and a non-principal filter cannot contain any singletons as members.
If $X$ is infinite, then every member of $F$ is infinite, since it is simply $X$ minus finitely many of its members.
Additionally, $F$ is infinite, since one of its subsets is the set of all singleton complements $X \setminus \{x\}$, where $x \in X$.
The Fréchet filter is both free and non-principal, excepting the finite case mentioned above, and is included in every free filter.
It is also the dual filter of the ideal of all finite subsets of (infinite) $X$.
The Fréchet filter is not necessarily an ultrafilter (or maximal proper filter).
Consider the power set $\wp(\mathbb{N})$, where $\mathbb{N}$ is the natural numbers.
The set of even numbers is the complement of the set of odd numbers. Since neither of these sets is finite, neither set is in the Fréchet filter on $\mathbb{N}$.
However, an ultrafilter (and any other non-degenerate filter) is free if and only if it includes the Fréchet filter.
The ultrafilter lemma states that every non-degenerate filter is contained in some ultrafilter.
The existence of free ultrafilters was established by Tarski in 1930, relying on a theorem equivalent to the axiom of choice and is used in the construction of the hyperreals in nonstandard analysis.
Examples
If is a fini
|
https://en.wikipedia.org/wiki/Zolt%C3%A1n%20Tibor%20Balogh
|
Zoltán "Zoli" Tibor Balogh (December 7, 1953 – June 19, 2002) was a Hungarian-born mathematician, specializing in set-theoretic topology. His father, Tibor Balogh, was also a mathematician. His best-known work concerned solutions to problems involving normality of products, most notably the first ZFC construction of a small (cardinality continuum) Dowker space. He also solved
Nagami's problem (normal + screenable does not imply paracompact), and the second and third Morita conjectures about normality in products.
References
External links
Memorial with photograph
Zoli -- Topology Proceedings 27 (2003)
Author profile in the database zbMATH
20th-century Hungarian mathematicians
1953 births
2002 deaths
Topologists
|
https://en.wikipedia.org/wiki/Reuben%20Goodstein
|
Reuben Louis Goodstein (15 December 1912 – 8 March 1985) was an English mathematician with a strong interest in the philosophy and teaching of mathematics.
Education
Goodstein was educated at St Paul's School in London. He received his Master's degree from Magdalene College, Cambridge. After this, he worked at the University of Reading but ultimately spent most of his academic career at the University of Leicester. He earned his PhD from the University of London in 1946 while still working in Reading.
Goodstein also studied under Ludwig Wittgenstein.
Research
He published many works on finitism and the reconstruction of analysis from a finitistic viewpoint, for example "Constructive Formalism. Essays on the foundations of mathematics." Goodstein's theorem was among the earliest examples of theorems found to be unprovable in Peano arithmetic but provable in stronger logical systems (such as second-order arithmetic). He also introduced a variant of the Ackermann function that is now known as the hyperoperation sequence, together with the naming convention now used for these operations (tetration, pentation, hexation, etc.).
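The hyperoperation sequence that grew out of Goodstein's variant of the Ackermann function can be sketched recursively in Python. The indexing convention below (1 = addition, 2 = multiplication, 3 = exponentiation, 4 = tetration) is one common choice and not necessarily Goodstein's exact formulation:

```python
def hyper(n, a, b):
    """Hyperoperation sequence: n=1 addition, n=2 multiplication,
    n=3 exponentiation, n=4 tetration, and so on, each level
    defined by iterating the level below it."""
    if n == 1:
        return a + b
    if b == 0:
        # Base cases: a*0 = 0; for n >= 3 the empty iteration gives 1.
        return 0 if n == 2 else 1
    return hyper(n - 1, a, hyper(n, a, b - 1))

assert hyper(2, 3, 4) == 12     # multiplication
assert hyper(3, 2, 5) == 32     # exponentiation
assert hyper(4, 2, 3) == 16     # tetration: 2^(2^2)
```

The recursion makes the naming convention concrete: pentation iterates tetration exactly as tetration iterates exponentiation.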
Besides mathematical logic (in which he held the first professorial chair in the U.K.), mathematical analysis, and the philosophy of mathematics, Goodstein was keenly interested in the teaching of mathematics. From 1956 to 1962 he was editor of The Mathematical Gazette. In 1962 he was an invited speaker at the International Congress of Mathematicians (with an address on A recursive lattice) in Stockholm. Among his doctoral students are Martin Löb and Alan Bundy.
Publications
Fundamental concepts of mathematics, Pergamon Press, 1962, 2nd edn. 1979
Essays in the philosophy of mathematics, Leicester University Press 1965
Recursive Analysis, North Holland 1961, Dover 2010
Mathematical Logic, Leicester University Press 1957
Development of mathematical logic, London, Logos Press 1971
Complex functions, McGraw Hill 1965
Boolean Algebra, Pergamon Press 1963, Dover 2007
Recursive number theory - a development of recursive arithmetic in a logic-free equation calculus, North Holland 1957
Constructive formalism - essays on the foundations of mathematics, Leicester University College 1951
with E. J. F. Primrose: Axiomatic projective geometry, Leicester University College 1953
References
English mathematicians
1912 births
1985 deaths
People educated at St Paul's School, London
Alumni of the University of London
Academics of the University of Reading
Academics of the University of Leicester
20th-century English mathematicians
Alumni of Magdalene College, Cambridge
|
https://en.wikipedia.org/wiki/Herman%20te%20Riele
|
Hermanus Johannes Joseph te Riele (born 5 January 1947) is a Dutch mathematician at CWI in Amsterdam with a specialization in computational number theory. He is known for verifying the Riemann hypothesis for the first 1.5 billion non-trivial zeros of the Riemann zeta function with Jan van de Lune and Dik Winter, for disproving the Mertens conjecture with Andrew Odlyzko, and for factoring large numbers of world record size. In 1987, he found a new upper bound for π(x) − Li(x).
In 1970, Te Riele received an engineer's degree in mathematical engineering from Delft University of Technology and, in 1976, a PhD degree in mathematics and physics from the University of Amsterdam.
References
External links
20th-century Dutch mathematicians
21st-century Dutch mathematicians
1947 births
Delft University of Technology alumni
Living people
Number theorists
Scientists from The Hague
|
https://en.wikipedia.org/wiki/Neal%20Koblitz
|
Neal I. Koblitz (born December 24, 1948) is a Professor of Mathematics at the University of Washington. He is also an adjunct professor with the Centre for Applied Cryptographic Research at the University of Waterloo. He is the creator of hyperelliptic curve cryptography and the independent co-creator of elliptic curve cryptography.
Biography
Koblitz received his B.A. in mathematics from Harvard University in 1969. While at Harvard, he was a Putnam Fellow in 1968. He received his Ph.D. from Princeton University in 1974 under the direction of Nick Katz. From 1975 to 1979 he was an instructor at Harvard University. In 1979 he began working at the University of Washington.
Koblitz's 1981 article "Mathematics as Propaganda" criticized the misuse of mathematics in the social sciences and helped motivate Serge Lang's successful challenge to the nomination of political scientist Samuel P. Huntington to the National Academy of Sciences. In The Mathematical Intelligencer, Koblitz, Steven Weintraub, and Saunders Mac Lane later criticized the arguments of Herbert A. Simon, who had attempted to defend Huntington's work.
He and Victor S. Miller independently invented elliptic-curve cryptography in 1985, and for this he was awarded the Levchin Prize in 2021.
With his wife Ann Hibner Koblitz, he in 1985 founded the Kovalevskaia Prize, to honor women scientists in developing countries. It was financed from the royalties of Ann Hibner Koblitz's 1983 biography of Sofia Kovalevskaia. Although the awardees have ranged over many fields of science, one of the 2011 winners was a Vietnamese mathematician, Lê Thị Thanh Nhàn. Koblitz is an atheist.
See also
List of University of Waterloo people
Gross–Koblitz formula
Selected publications
References
External links
Neal Koblitz's home page
1948 births
Living people
20th-century American mathematicians
21st-century American mathematicians
American atheists
Modern cryptographers
Public-key cryptographers
Putnam Fellows
Number theorists
Harvard University alumni
Princeton University alumni
University of Washington faculty
|
https://en.wikipedia.org/wiki/Unimodality
|
In mathematics, unimodality means possessing a unique mode. More generally, unimodality means there is only a single highest value, somehow defined, of some mathematical object.
Unimodal probability distribution
In statistics, a unimodal probability distribution or unimodal distribution is a probability distribution which has a single peak. The term "mode" in this context refers to any peak of the distribution, not just to the strict definition of mode which is usual in statistics.
If there is a single mode, the distribution function is called "unimodal". If it has more modes it is "bimodal" (2), "trimodal" (3), etc., or in general, "multimodal". Figure 1 illustrates normal distributions, which are unimodal. Other examples of unimodal distributions include Cauchy distribution, Student's t-distribution, chi-squared distribution and exponential distribution. Among discrete distributions, the binomial distribution and Poisson distribution can be seen as unimodal, though for some parameters they can have two adjacent values with the same probability.
Figure 2 and Figure 3 illustrate bimodal distributions.
Other definitions
Other definitions of unimodality in distribution functions also exist.
In continuous distributions, unimodality can be defined through the behavior of the cumulative distribution function (cdf). If the cdf is convex for x < m and concave for x > m, then the distribution is unimodal, m being the mode. Note that under this definition the uniform distribution is unimodal, as is any other distribution in which the maximum is achieved for a range of values, e.g. the trapezoidal distribution. This definition also allows for a discontinuity at the mode; usually in a continuous distribution the probability of any single value is zero, while this definition allows for a non-zero probability, or an "atom of probability", at the mode.
Criteria for unimodality can also be defined through the characteristic function of the distribution or through its Laplace–Stieltjes transform.
Another way to define a unimodal discrete distribution is by the occurrence of sign changes in the sequence of differences of the probabilities. A discrete distribution with a probability mass function {p_n} is called unimodal if the sequence of differences p_{n+1} − p_n has exactly one sign change (when zeroes don't count).
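This sign-change criterion is easy to check mechanically; a minimal Python sketch (the function name is ours, for illustration only):

```python
def is_unimodal(pmf):
    """True iff the first differences of the pmf sequence have exactly
    one sign change, with zero differences not counted."""
    diffs = [b - a for a, b in zip(pmf, pmf[1:]) if b != a]
    signs = [1 if d > 0 else -1 for d in diffs]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t) == 1

assert is_unimodal([0.1, 0.2, 0.4, 0.2, 0.1])        # single interior peak
assert not is_unimodal([0.3, 0.1, 0.3, 0.2, 0.1])    # two peaks
```

Note that a monotone pmf produces no sign change at all, so this literal reading of the criterion applies to distributions whose mode lies in the interior of the support.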
Uses and results
One reason for the importance of distribution unimodality is that it allows for several important results. Several inequalities are given below which are only valid for unimodal distributions. Thus, it is important to assess whether or not a given data set comes from a unimodal distribution. Several tests for unimodality are given in the article on multimodal distribution.
Inequalities
Gauss's inequality
A first important result is Gauss's inequality. Gauss's inequality gives an upper bound on the probability that a value lies more than any given distance from its mode. This inequality depends on unimodality.
Vysochanskiï–Petunin
|
https://en.wikipedia.org/wiki/Restriction%20%28mathematics%29
|
In mathematics, the restriction of a function f is a new function, denoted f|_A, obtained by choosing a smaller domain A for the original function f.
The function f is then said to extend f|_A.
Formal definition
Let f : E → F be a function from a set E to a set F. If a set A is a subset of E, then the restriction of f to A is the function
f|_A : A → F
given by f|_A(x) = f(x) for x in A. Informally, the restriction of f to A is the same function as f, but it is only defined on A.
If the function f is thought of as a relation on the Cartesian product E × F, then the restriction of f to A can be represented by its graph {(x, f(x)) : x ∈ A}, where the pairs (x, f(x)) represent ordered pairs in the graph.
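For finite functions represented as graphs (dictionaries of input–output pairs), the definition can be sketched directly in Python. This is an illustrative sketch, not a standard library API:

```python
# A finite function as a dict of (input, output) pairs; restriction
# keeps exactly the pairs whose first coordinate lies in A.
def restrict(f, A):
    return {x: y for x, y in f.items() if x in A}

f = {1: "a", 2: "b", 3: "a"}     # f is not injective (1 and 3 both map to "a")
g = restrict(f, {1, 2})          # but the restriction to {1, 2} is injective
assert g == {1: "a", 2: "b"}
# Restricting twice is the same as restricting once to the smaller set:
assert restrict(restrict(f, {1, 2}), {1}) == restrict(f, {1})
```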
Extensions
A function g is said to be an extension of another function f if, whenever x is in the domain of f, x is also in the domain of g and f(x) = g(x).
That is, if dom(f) ⊆ dom(g) and g|_dom(f) = f.
A linear extension (respectively, continuous extension, etc.) of a function f is an extension of f that is also a linear map (respectively, a continuous map, etc.).
Examples
The restriction of the non-injective function f(x) = x², defined on all of ℝ, to the domain [0, ∞) is the injection x ↦ x².
The factorial function is the restriction of the gamma function to the positive integers, with the argument shifted by one: n! = Γ(n + 1).
Properties of restrictions
Restricting a function to its entire domain gives back the original function, that is, f|_dom(f) = f.
Restricting a function twice is the same as restricting it once, that is, if A ⊆ B ⊆ dom(f), then (f|_B)|_A = f|_A.
The restriction of the identity function on a set X to a subset A of X is just the inclusion map from A into X.
The restriction of a continuous function is continuous.
Applications
Inverse functions
For a function to have an inverse, it must be one-to-one. If a function f is not one-to-one, it may be possible to define a partial inverse of f by restricting the domain. For example, the function
f(x) = x²
defined on the whole of ℝ is not one-to-one, since x² = (−x)² for any x in ℝ. However, the function becomes one-to-one if we restrict to the domain x ≥ 0, in which case
f⁻¹(y) = √y.
(If we instead restrict to the domain x ≤ 0, then the inverse is the negative of the square root of y.) Alternatively, there is no need to restrict the domain if we allow the inverse to be a multivalued function.
Selection operators
In relational algebra, a selection (sometimes called a restriction to avoid confusion with SQL's use of SELECT) is a unary operation written as
σ_{a θ b}(R) or σ_{a θ v}(R), where:
a and b are attribute names,
θ is a binary operation in the set {<, ≤, =, ≠, ≥, >},
v is a value constant,
R is a relation.
The selection σ_{a θ b}(R) selects all those tuples in R for which θ holds between the a attribute and the b attribute.
The selection σ_{a θ v}(R) selects all those tuples in R for which θ holds between the a attribute and the value v.
Thus, the selection operator restricts to a subset of the entire database.
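Modelling a relation as a list of dictionaries, the selection with a value constant can be sketched in Python; the relation, attribute names, and function name below are our own illustrative choices:

```python
import operator

# Selection: keep the tuples of relation R whose attribute `a`
# stands in the comparison relation `theta` to the constant v.
def select(R, a, theta, v):
    return [t for t in R if theta(t[a], v)]

R = [{"id": 1, "age": 34}, {"id": 2, "age": 29}, {"id": 3, "age": 41}]
adults = select(R, "age", operator.ge, 34)
assert adults == [{"id": 1, "age": 34}, {"id": 3, "age": 41}]
```

Passing the comparison as a function (here `operator.ge`) mirrors the role of θ as a parameter of the operator rather than a fixed predicate.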
The pasting lemma
The pasting lemma is a result in topology that relates the continuity of a function with the continuity of its restrictions to subsets.
Let A, B be two closed subsets (or two open subsets) of a topological space X such that X = A ∪ B, and let Y also be a topological space. If f : X → Y is continuous when restricted to both A and B, then f is continuous.
This result allows one to take two continuous functions defined on closed (or open) subsets of a topological space and glue them together into a continuous function on their union.
|
https://en.wikipedia.org/wiki/White%20Light%20%28novel%29
|
White Light is a work of science fiction by Rudy Rucker published in 1980 by Virgin Books in the UK and Ace Books in the US. It was written while Rucker was teaching mathematics at the University of Heidelberg from 1978 to 1980, at roughly the same time he was working on the non-fiction book Infinity and the Mind.
On one level, the book is an exploration of the mathematics of infinity through fiction, in much the same way the novel Flatland: A Romance of Many Dimensions explored the concept of multiple dimensions. More specifically, White Light uses an imaginary universe to elucidate the set theory concept of aleph numbers, which are more or less the idea that some infinities are bigger than others.
Plot summary
The book is the story of Felix Rayman, a down-and-out mathematics teacher at SUCAS (a state college in New York, a play on SUNY) with a troubled family life and dead-in-the-water career. In the fictional town of Bernho (Geneseo), he begins experimenting with lucid dreaming—aided by "fuzz weed" (marijuana)—hoping to gain insight into Cantor's continuum hypothesis.
During an out-of-body experience, Felix loses his physical body and nearly falls victim to the Devil, who hunts the Earth for souls like his to take to Hell; Felix calls upon Jesus, who saves him. Jesus asks Felix to do him a favor: to take a restless ghost named Kathy to a place called "Cimön", and bring her to God/Absolute Infinite, which can be found there.
Cimön is permeated with the notion of infinity in its various guises: just getting there involves grappling with infinity, as Cimön is an infinite distance away from Earth. Felix and Kathy get there in their astral bodies by doubling their speed in half the time so that they asymptotically approach infinite speed at four hours. Eventually, at the speed of light, they turn into the eponymous "white light" and merge with Cimön.
In this new world, Felix encounters famous scientists and mathematicians such as Albert Einstein and Georg Cantor, who all reside in a hotel that is based on Hilbert's paradox of the Grand Hotel. Felix stays there after Kathy leaves him; the hotel is full, but Felix has the desk clerk move everybody one room up, leaving an empty room for him.
He falls in with a loquacious beetle named "Franx", reminiscent of Franz Kafka's The Metamorphosis, which is mentioned in Rucker's Afterword. The two decide to climb "Mount On", which itself is infinite (not aleph-null infinite, but perhaps instead cardinality of the continuum or greater).
After many adventures, Franx and Felix find Kathy. They leave off climbing Mount On, and instead try the other side of Cimön, the Deserts, littered with portholes to Hell. Felix merges with the Absolute Infinite, but Kathy is scared and refuses.
Eventually, Felix wakes back up on Earth in his body; everybody attributes his dreams to a spectacular binge-drinking and marijuana-smoking episode, until Felix remembers an insight he had regarding the continuum hypothesis: if
|
https://en.wikipedia.org/wiki/Kleinian%20model
|
In mathematics, a Kleinian model is a model of a three-dimensional hyperbolic manifold N by the quotient space H³/Γ, where Γ is a discrete subgroup of PSL(2,C). Here, the subgroup Γ, a Kleinian group, is defined so that it is isomorphic to the fundamental group π₁(N) of the manifold N. Many authors use the terms Kleinian group and Kleinian model interchangeably, letting one stand for the other. The concept is named after Felix Klein.
Many properties of Kleinian models are in direct analogy to those of Fuchsian models; however, overall, the theory is less well developed. A number of unsolved conjectures on Kleinian models are the analogs to theorems on Fuchsian models.
See also
Hyperbolic 3-manifold
References
Hyperbolic geometry
Kleinian groups
|
https://en.wikipedia.org/wiki/Hyperbolic%20manifold
|
In mathematics, a hyperbolic manifold is a space where every point looks locally like hyperbolic space of some dimension. They are especially studied in dimensions 2 and 3, where they are called hyperbolic surfaces and hyperbolic 3-manifolds, respectively. In these dimensions, they are important because most manifolds can be made into a hyperbolic manifold by a homeomorphism. This is a consequence of the uniformization theorem for surfaces and the geometrization theorem for 3-manifolds proved by Perelman.
Rigorous definition
A hyperbolic n-manifold is a complete Riemannian n-manifold of constant sectional curvature −1.
Every complete, connected, simply-connected manifold of constant negative curvature −1 is isometric to the real hyperbolic space ℍⁿ. As a result, the universal cover of any closed manifold M of constant negative curvature −1 is ℍⁿ. Thus, every such M can be written as ℍⁿ/Γ, where Γ is a torsion-free discrete group of isometries on ℍⁿ. That is, Γ is a discrete subgroup of Isom(ℍⁿ). The manifold has finite volume if and only if Γ is a lattice.
Its thick–thin decomposition has a thin part consisting of tubular neighborhoods of closed geodesics and ends which are the product of a Euclidean (n−1)-manifold and the closed half-ray. The manifold is of finite volume if and only if its thick part is compact.
Examples
The simplest example of a hyperbolic manifold is hyperbolic space, as each point in hyperbolic space has a neighborhood isometric to hyperbolic space.
A simple non-trivial example, however, is the once-punctured torus. This is an example of an (Isom(ℍ²), ℍ²)-manifold. It can be formed by taking an ideal rectangle in ℍ² – that is, a rectangle whose vertices lie on the boundary at infinity, and thus don't exist in the resulting manifold – and identifying opposite sides.
In a similar fashion, we can construct the thrice-punctured sphere, shown below, by gluing two ideal triangles together. This also shows how to draw curves on the surface – the black line in the diagram becomes the closed curve when the green edges are glued together. As we are working with a punctured sphere, the colored circles in the surface – including their boundaries – are not part of the surface, and hence are represented in the diagram as ideal vertices.
Many knots and links, including some of the simpler knots such as the figure-eight knot and the Borromean rings, are hyperbolic, and so the complement of the knot or link in the 3-sphere S³ is a hyperbolic 3-manifold of finite volume.
Important results
For n ≥ 3, the hyperbolic structure on a finite-volume hyperbolic n-manifold is unique by Mostow rigidity, and so geometric invariants are in fact topological invariants. One of these geometric invariants used as a topological invariant is the hyperbolic volume of a knot or link complement, which can allow us to distinguish two knots from each other by studying the geometry of their respective manifolds.
See also
Hyperbolic 3-manifold
Hyperbolic space
Hyperbolization theorem
Margulis lemma
Normally
|
https://en.wikipedia.org/wiki/Georg%20Heinrich%20Thiessen
|
Georg Heinrich Thiessen (19 January 1914 – 3 July 1961) was a German astronomer.
After graduating, Georg Thiessen studied physics and mathematics and received his doctorate in 1940 under Richard Becker at Göttingen's Georg August University. In 1943 he joined the Fraunhofer Institute of the Institute for High Frequency Research in Freiburg im Breisgau, where he met Karl-Otto Kiepenheuer. In January 1945 he was transferred to the observatory in Hamburg-Bergedorf, where he was employed as an assistant and later as 'Observator' from 1946 to 1953. In 1953 he habilitated on the subject of the magnetic fields of the Sun; he believed in the existence of a global solar magnetic field. He was promoted to professor in 1959.
On 3 July 1961 he was killed in a frontal collision with a tram; his wife was seriously injured in the same accident.
A crater on the far side of the Moon (Thiessen) has been named after him since 1970.
Sunspots
Thiessen extensively studied sunspots. He discovered that the granulation, filling the entire solar surface outside sunspots, cannot be observed in the umbra. However, his observations revealed that there are small brighter spots (so-called umbra dots) inside the umbra. They are difficult to observe due to their small size and because of the high brightness contrast between the sunspot umbra and the surrounding photosphere.
External links
MitAG 15 (1962) 17 (obituary, in German)
Katalog der Deutschen Nationalbibliothek
Author Query Results
Nachruf auf Georg Thiessen
20th-century German astronomers
1914 births
1961 deaths
|
https://en.wikipedia.org/wiki/Cusp%20neighborhood
|
In mathematics, a cusp neighborhood is defined as a set of points near a cusp singularity.
Cusp neighborhood for a Riemann surface
The cusp neighborhood for a hyperbolic Riemann surface can be defined in terms of its Fuchsian model.
Suppose that the Fuchsian group G contains a parabolic element g. For example, the element t ∈ SL(2,Z), which acts on the upper half-plane as the translation z ↦ z + 1,
is a parabolic element. Note that all parabolic elements of SL(2,C) are conjugate to this element. That is, if g ∈ SL(2,Z) is parabolic, then g is conjugate to a power of t.
The set
U = {z ∈ H : Im z > 1},
where H is the upper half-plane, has
γ(U) ∩ U = ∅
for any γ ∈ G ∖ ⟨g⟩, where ⟨g⟩ is understood to mean the group generated by g. That is, ⟨g⟩ acts properly discontinuously on U. Because of this, it can be seen that the projection of U onto H/G is thus
E = U/⟨g⟩.
Here, E is called the neighborhood of the cusp corresponding to g.
Note that the hyperbolic area of E is exactly 1, when computed using the canonical Poincaré metric. This is most easily seen by example: consider the intersection of U defined above with the fundamental domain
{z : |z| > 1, |Re z| < 1/2}
of the modular group, as would be appropriate for the choice of t as the parabolic element. When integrated over the volume element
dμ = dx dy / y²,
the result is trivially 1. Areas of all cusp neighborhoods are equal to this, by the invariance of the area under conjugation.
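The area computation is a one-line integral (the strip has x-width 1, and ∫₁^∞ y⁻² dy = 1), which can be confirmed numerically with a simple midpoint rule; the function name and cutoff are illustrative choices:

```python
# Numerically confirm that the strip |Re z| < 1/2, Im z > 1 has
# hyperbolic area 1 under the Poincaré volume element dx dy / y^2.
# Analytically: x-width 1 times the integral of 1/y^2 from 1 to infinity.
def cusp_area(y_max=10_000.0, n=200_000):
    total, dy = 0.0, (y_max - 1.0) / n
    for i in range(n):
        y = 1.0 + (i + 0.5) * dy   # midpoint rule in y; the x-width is 1
        total += dy / (y * y)
    return total

assert abs(cusp_area() - 1.0) < 1e-3   # truncation at y_max costs only 1/y_max
```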
References
Hyperbolic geometry
Riemann surfaces
|
https://en.wikipedia.org/wiki/Closeness%20%28mathematics%29
|
Closeness is a basic concept in topology and related areas in mathematics. Intuitively, we say two sets are close if they are arbitrarily near to each other. The concept can be defined naturally in a metric space where a notion of distance between elements of the space is defined, but it can be generalized to topological spaces where we have no concrete way to measure distances.
The closure operator closes a given set by mapping it to a closed set which contains the original set and all points close to it. The concept of closeness is related to limit point.
Definition
Given a metric space (X, d), a point p is called close or near to a set A if
d(p, A) = 0,
where the distance between a point and a set is defined as
d(p, A) = inf {d(p, a) : a ∈ A},
where inf stands for infimum. Similarly a set A is called close to a set B if
d(A, B) = 0, where
d(A, B) = inf {d(a, b) : a ∈ A, b ∈ B}.
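For a finite set in the Euclidean plane the infimum is simply a minimum, so the point-to-set distance can be computed directly; a small Python sketch with illustrative names:

```python
import math

# d(p, A) = inf over a in A of d(p, a); for a finite A the inf is a min.
def dist_point_set(p, A):
    return min(math.dist(p, a) for a in A)

A = [(0.0, 0.0), (3.0, 4.0)]
assert dist_point_set((0.0, 0.0), A) == 0.0   # a point of A is close to A
assert dist_point_set((3.0, 0.0), A) == 3.0   # min(3, 4) = 3: not close
```

For infinite sets the infimum need not be attained (a point can be close to a set without belonging to it), which is exactly why the closure operator adds such points.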
Properties
if a point p is close to a set A and to a set B, then A and B are close (the converse is not true!).
closeness between a point and a set is preserved by continuous functions
closeness between two sets is preserved by uniformly continuous functions
Closeness relation between a point and a set
Let X be some set. A relation between the points of X and the subsets of X is a closeness relation if it satisfies the following conditions:
Let A and B be two subsets of X and p a point in X.
If p ∈ A, then p is close to A.
If p is close to A, then A is non-empty.
If p is close to A and B ⊇ A, then p is close to B.
If p is close to A ∪ B, then p is close to A or p is close to B.
If p is close to A and, for every point a ∈ A, a is close to B, then p is close to B.
Topological spaces have a closeness relationship built into them: defining a point to be close to a subset if and only if is in the closure of satisfies the above conditions. Likewise, given a set with a closeness relation, defining a point to be in the closure of a subset if and only if is close to satisfies the Kuratowski closure axioms. Thus, defining a closeness relation on a set is exactly equivalent to defining a topology on that set.
Closeness relation between two sets
Let A, B and C be sets.
If A and B are close, then A ≠ ∅ and B ≠ ∅.
If A and B are close, then B and A are close.
If A and B are close and C ⊇ B, then A and C are close.
If A and B ∪ C are close, then either A and B are close or A and C are close.
If A ∩ B ≠ ∅, then A and B are close.
Generalized definition
The closeness relation between a set and a point can be generalized to any topological space. Given a topological space X and a point p, p is called close to a set A if p ∈ cl(A), the closure of A.
To define a closeness relation between two sets the topological structure is too weak and we have to use a uniform structure. Given a uniform space, sets A and B are called close to each other if they intersect all entourages, that is, for any entourage U, (A×B)∩U is non-empty.
See also
Topological space
Uniform space
References
General topology
|
https://en.wikipedia.org/wiki/Quasiperiodic%20motion
|
In mathematics and theoretical physics, quasiperiodic motion is in rough terms the type of motion executed by a dynamical system containing a finite number (two or more) of incommensurable frequencies.
That is, if we imagine that the phase space is modelled by a torus T (that is, the variables are periodic like angles), the trajectory of the system is modelled by a curve on T that wraps around the torus without ever exactly coming back on itself.
A quasiperiodic function on the real line is the type of function (continuous, say) obtained from a function on T, by means of a curve
R → T
which is linear (when lifted from T to its covering Euclidean space), by composition. It is therefore oscillating, with a finite number of underlying frequencies. (NB the sense in which theta functions and the Weierstrass zeta function in complex analysis are said to have quasi-periods with respect to a period lattice is something distinct from this.)
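The non-closing behaviour of such a trajectory can be seen numerically: sampling the second angle of a two-frequency torus curve at the integer times where the first angle returns to zero, an irrational frequency ratio (here √2) produces points that never repeat. A small Python sketch with our own naming:

```python
import math

# Trajectory on the 2-torus with incommensurable frequencies 1 and sqrt(2):
# the angles are (t mod 1, sqrt(2)*t mod 1). At integer times the first
# angle is back at 0, but the second never is, so the curve never closes.
def second_angle(t):
    return (math.sqrt(2) * t) % 1.0

visited = {round(second_angle(t), 12) for t in range(1, 200)}
assert len(visited) == 199   # all 199 integer-time samples are distinct
```

Distinctness is exact, not numerical luck: √2·(t₁ − t₂) can never be an integer for distinct integers t₁, t₂, and the gaps between the sampled points stay far above the rounding tolerance used here.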
The theory of almost periodic functions is, roughly speaking, for the same situation but allowing T to be a torus with an infinite number of dimensions.
References
See also
Quasiperiodicity
Dynamical systems
|
https://en.wikipedia.org/wiki/Allan%20Birnbaum
|
Allan Birnbaum (May 27, 1923 – July 1, 1976) was an American statistician who contributed to statistical inference, foundations of statistics, statistical genetics, statistical psychology, and history of statistics.
Life and career
Birnbaum was born in San Francisco. His parents were Russian-born Orthodox Jews. He studied mathematics at the University of California, Berkeley, doing a premedical programme at the same time. After taking a bachelor's degree in mathematics in 1945, he spent two years doing graduate courses in science, mathematics and philosophy, planning perhaps a career in the philosophy of science. One of his philosophy teachers, Hans Reichenbach, suggested he combine philosophy with science.
He went to Columbia University to do a PhD with Abraham Wald but, when Wald died in a plane crash, Birnbaum asked Erich Leo Lehmann, who was visiting Columbia to take him on. Birnbaum's thesis and his early work was very much in the spirit of Lehmann's classic text Testing Statistical Hypotheses.
Birnbaum stayed at Columbia until 1959 when he moved to the Courant Institute of Mathematical Sciences, becoming a full Professor of Statistics in 1963. He travelled a good deal and liked Britain especially. In 1975 he accepted a post at the City University, London, and worked with The Open University on their course M341 "Fundamentals of statistical inference" (with Adrian Smith). He took his life in 1976.
The article in the Leading Personalities volume opens with the declaration, "Allan Birnbaum was one of the most profound thinkers in the field of foundations of statistics." The assessment is based on Birnbaum's 1962 article and the publications surrounding it. Birnbaum's argument for the likelihood principle generated great controversy; it implied, amongst other things, a repudiation of the approach of Wald and Lehmann, that Birnbaum had followed in his own research. Leonard Jimmie Savage opened the discussion by saying
Without any intent to speak with exaggeration or rhetorically, it seems to me that this is really a historic occasion. This paper is a landmark in statistics because it seems to me improbable that many people will be able to read this paper or to have heard it tonight without coming away with considerable respect for the likelihood principle.
Although Birnbaum made other contributions, none compared with this for impact or continuing resonance.
Publications of Allan Birnbaum
41 papers are listed by Barnard and Godambe. The first appeared in 1953 and the last, posthumously, in 1977. The most celebrated is the 1962 paper on the likelihood principle.
(With discussion.)
Discussions
– originally published in Encyclopedia of Statistical Science.
See also
CLs method (particle physics)#Allan Birnbaum
External links
For Birnbaum's PhD students see
For information about Birnbaum's correspondence with R. A. Fisher (and a copy of one letter) see
Correspondence of Sir R.A. Fisher: Calendar of Correspondence with Allan Birn
|
https://en.wikipedia.org/wiki/Uniformly%20connected%20space
|
In topology and related areas of mathematics a uniformly connected space or Cantor connected space is a uniform space U such that every uniformly continuous function from U to a discrete uniform space is constant.
A uniform space U is called uniformly disconnected if it is not uniformly connected.
Properties
A compact uniform space is uniformly connected if and only if it is connected.
Examples
every connected space is uniformly connected
the rational numbers and the irrational numbers are disconnected but uniformly connected
See also
connectedness
References
Cantor, Georg, "Über unendliche, lineare Punktmannigfaltigkeiten", Mathematische Annalen 21 (1883), 545–591.
Uniform spaces
|
https://en.wikipedia.org/wiki/Dilation%20%28metric%20space%29
|
In mathematics, a dilation is a function f from a metric space M into itself that satisfies the identity
d(f(x), f(y)) = r d(x, y)
for all points x, y ∈ M, where d(x, y) is the distance from x to y and r is some positive real number.
In Euclidean space, such a dilation is a similarity of the space. Dilations change the size but not the shape of an object or figure.
Every dilation of a Euclidean space that is not a congruence has a unique fixed point that is called the center of dilation. Some congruences have fixed points and others do not.
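A Euclidean dilation with center c and ratio r sends x to c + r(x − c); a quick numeric check of both the distance-scaling identity and the fixed center (a minimal sketch, with our own function names and sample points):

```python
# A Euclidean dilation x -> c + r*(x - c) scales all distances by r
# and fixes the center c.
def dilate(x, c=(1.0, 2.0), r=3.0):
    return tuple(ci + r * (xi - ci) for xi, ci in zip(x, c))

def dist(p, q):
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

p, q = (4.0, 6.0), (0.0, -1.0)
assert abs(dist(dilate(p), dilate(q)) - 3.0 * dist(p, q)) < 1e-9
assert dilate((1.0, 2.0)) == (1.0, 2.0)   # the center is the fixed point
```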
See also
Homothety
Dilation (operator theory)
References
Metric geometry
|
https://en.wikipedia.org/wiki/Wess%E2%80%93Zumino%E2%80%93Witten%20model
|
In theoretical physics and mathematics, a Wess–Zumino–Witten (WZW) model, also called a Wess–Zumino–Novikov–Witten model, is a type of two-dimensional conformal field theory named after Julius Wess, Bruno Zumino, Sergei Novikov and Edward Witten. A WZW model is associated to a Lie group (or supergroup), and its symmetry algebra is the affine Lie algebra built from the corresponding Lie algebra (or Lie superalgebra). By extension, the name WZW model is sometimes used for any conformal field theory whose symmetry algebra is an affine Lie algebra.
Action
Definition
For Σ a Riemann surface, G a Lie group, and k a (generally complex) number, let us define the G-WZW model on Σ at the level k. The model is a nonlinear sigma model whose action is a functional of a field γ : Σ → G:
S_k(γ) = − k/(8π) ∫_Σ d²x K(γ⁻¹ ∂^μ γ, γ⁻¹ ∂_μ γ) + 2π k S^{WZ}(γ).
Here, Σ is equipped with a flat Euclidean metric, ∂_μ is the partial derivative, and K is the Killing form on the Lie algebra of G. The Wess–Zumino term of the action is
S^{WZ}(γ) = − 1/(48π²) ∫_{B³} d³y ε^{ijk} K(γ⁻¹ ∂_i γ, [γ⁻¹ ∂_j γ, γ⁻¹ ∂_k γ]).
Here ε^{ijk} is the completely antisymmetric tensor, and [·, ·] is the Lie bracket.
The Wess–Zumino term is an integral over a three-dimensional manifold B³ whose boundary is ∂B³ = Σ.
Topological properties of the Wess–Zumino term
For the Wess–Zumino term to make sense, we need the field γ to have an extension to the three-dimensional manifold B³. This requires the homotopy group π₂(G) to be trivial, which is the case in particular for any compact Lie group G.
The extension of a given γ : Σ → G to B³ is in general not unique.
For the WZW model to be well-defined, e^{iS_k(γ)} should not depend on the choice of the extension.
The Wess–Zumino term is invariant under small deformations of γ, and only depends on its homotopy class.
Possible homotopy classes are controlled by the homotopy group π₃(G).
For any compact, connected simple Lie group G, we have π₃(G) = ℤ, and different extensions of γ lead to values of S^{WZ}(γ) that differ by integers. Therefore, they lead to the same value of e^{iS_k(γ)} provided the level obeys
k ∈ ℤ.
Integer values of the level also play an important role in the representation theory of the model's symmetry algebra, which is an affine Lie algebra. If the level is a positive integer, the affine Lie algebra has unitary highest weight representations with highest weights that are dominant integral. Such representations decompose into finite-dimensional subrepresentations with respect to the subalgebras spanned by each simple root, the corresponding negative root and their commutator, which is a Cartan generator.
In the case of the noncompact simple Lie group SL(2, ℝ),
the homotopy group π₃(SL(2, ℝ)) is trivial, and the level is not constrained to be an integer.
Geometrical interpretation of the Wess–Zumino term
If e_a are the basis vectors for the Lie algebra, then f_{abc} = K(e_a, [e_b, e_c]) are the structure constants of the Lie algebra. The structure constants are completely antisymmetric, and thus they define a 3-form on the group manifold of G. Thus, the integrand above is just the pullback of the harmonic 3-form to the ball B³. Denoting the harmonic 3-form by c and the pullback by γ*, one then has
S^{WZ}(γ) = ∫_{B³} γ*c.
This form leads directly to a topological analysis of the WZ ter
|
https://en.wikipedia.org/wiki/ZP
|
ZP may refer to:
Mathematics and science
Zp, the ring of p-adic integers
Zona pellucida (or egg coat), a glycoprotein layer around an oocyte
Z/pZ, the cyclic group of integers modulo p
Organisations
Zila Parishad (district council):
District Councils of Bangladesh
District Councils of India
Zjednoczona Prawica, the Polish United Right party
ZP, US Navy prefix for airship patrol squadrons, 1942–1961
People
Zach Parise, American ice hockey player
ZP Theart, former vocalist for British power metal band DragonForce
José Luis Rodríguez Zapatero, former Spanish prime minister, via popular nickname "ZP" (Zapatero Presidente)
|
https://en.wikipedia.org/wiki/Cahiers%20de%20Topologie%20et%20G%C3%A9om%C3%A9trie%20Diff%C3%A9rentielle%20Cat%C3%A9goriques
|
The Cahiers de Topologie et Géométrie Différentielle Catégoriques (French: Notebooks of categorical topology and categorical differential geometry) is a French mathematical scientific journal established by Charles Ehresmann in 1957. It concentrates on category theory "and its applications, [e]specially in topology and differential geometry". Its older papers (two years or more after publication) are freely available on the internet through the French NUMDAM service.
It was originally published by the Institut Henri Poincaré under the name Cahiers de Topologie; after the first volume, Ehresmann changed the publisher to the Institut Henri Poincaré and later Dunod/Bordas. In the eighth volume he changed the name to Cahiers de Topologie et Géométrie Différentielle. After Ehresmann's death in 1979 the editorship passed to his wife Andrée Ehresmann; in 1984, at the suggestion of René Guitart, the name was changed again, to add "Catégoriques".
References
External links
Official website as of January 2018 ; previous official website
Archive at Numdam: Volumes 1 (1957) - 7 (1965) : Séminaire Ehresmann. Topologie et géométrie différentielle; Volumes 8 (1966) - 52 (2011) : Cahiers de Topologie et Géométrie Différentielle Catégoriques
Table of Contents for Volumes 38 (1997) through 57 (2016) maintained at the electronic journal Theory and Applications of Categories
Mathematics journals
Academic journals established in 1957
Quarterly journals
Multilingual journals
|
https://en.wikipedia.org/wiki/Exotic
|
Exotic may refer to:
Mathematics and physics
Exotic R4, a differentiable 4-manifold, homeomorphic but not diffeomorphic to the Euclidean space R4
Exotic sphere, a differentiable n-manifold, homeomorphic but not diffeomorphic to the ordinary n-sphere
Exotic atom, an atom with one or more electrons replaced by other negatively charged particles
Exotic hadron
Exotic baryon, bound states of 3 quarks and additional particles
Exotic meson, non-quark model mesons
Exotic matter, a hypothetical concept of particle physics
Music
"Exotic" (1963 song), a song by The Sentinals from the 1963 album Surf Crazy - Original Surfin' Hits
"Exotic" (Lil Baby song), 2018
"Exotic" (Priyanka Chopra song), a 2012 song by Priyanka Chopra featuring Pitbull
Flora and fauna
Exotic pet
Exotic Shorthair, a breed of cat
Exotic species (or introduced species), a species not native to an area
Other
Exotic dancer, a type of dancer or stripper
Exotic derivative, a type of financial derivative
See also
Exoticism
Exotica (disambiguation)
|
https://en.wikipedia.org/wiki/Lie%20theory
|
In mathematics, the mathematician Sophus Lie initiated lines of study involving integration of differential equations, transformation groups, and contact of spheres that have come to be called Lie theory. For instance, the latter subject is Lie sphere geometry. This article addresses his approach to transformation groups, which is one of the areas of mathematics, and was worked out by Wilhelm Killing and Élie Cartan.
The foundation of Lie theory is the exponential map relating Lie algebras to Lie groups which is called the Lie group–Lie algebra correspondence. The subject is part of differential geometry since Lie groups are differentiable manifolds. Lie groups evolve out of the identity (1) and the tangent vectors to one-parameter subgroups generate the Lie algebra. The structure of a Lie group is implicit in its algebra, and the structure of the Lie algebra is expressed by root systems and root data.
Lie theory has been particularly useful in mathematical physics since it describes the standard transformation groups: the Galilean group, the Lorentz group, the Poincaré group and the conformal group of spacetime.
Elementary Lie theory
The one-parameter groups are the first instance of Lie theory. The compact case arises through Euler's formula in the complex plane: the circle {exp(iθ) = cos θ + i sin θ : θ ∈ ℝ}. Other one-parameter groups occur in the split-complex number plane as the unit hyperbola
{exp(aj) = cosh a + j sinh a : a ∈ ℝ},
and in the dual number plane as the line
{exp(aε) = 1 + aε : a ∈ ℝ}.
In these cases the Lie algebra parameters have names: angle, hyperbolic angle, and slope. These species of angle are useful for providing polar decompositions which describe sub-algebras of 2 x 2 real matrices.
There is a classical 3-parameter Lie group and algebra pair: the quaternions of unit length which can be identified with the 3-sphere. Its Lie algebra is the subspace of quaternion vectors. Since the commutator ij − ji = 2k, the Lie bracket in this algebra is twice the cross product of ordinary vector analysis.
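This bracket relation can be checked directly with a small Hamilton-product helper (a sketch; the (w, x, y, z) tuple encoding of a quaternion is just a convenient choice):

```python
def qmul(a, b):
    """Hamilton product of quaternions encoded as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def bracket(a, b):
    """Lie bracket [a, b] = ab - ba."""
    ab, ba = qmul(a, b), qmul(b, a)
    return tuple(p - q for p, q in zip(ab, ba))

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

# ij - ji = 2k, and cyclically: the bracket of pure quaternions is
# twice the cross product of the corresponding 3-vectors.
assert bracket(i, j) == (0, 0, 0, 2)   # 2k
assert bracket(j, k) == (0, 2, 0, 0)   # 2i
assert bracket(k, i) == (0, 0, 2, 0)   # 2j
```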
Another elementary 3-parameter example is given by the Heisenberg group and its Lie algebra.
Standard treatments of Lie theory often begin with the classical groups.
History and scope
Early expressions of Lie theory are found in books composed by Sophus Lie with Friedrich Engel and Georg Scheffers from 1888 to 1896.
In Lie's early work, the idea was to construct a theory of continuous groups, to complement the theory of discrete groups that had developed in the theory of modular forms, in the hands of Felix Klein and Henri Poincaré. The initial application that Lie had in mind was to the theory of differential equations. On the model of Galois theory and polynomial equations, the driving conception was of a theory capable of unifying, by the study of symmetry, the whole area of ordinary differential equations.
According to historian Thomas W. Hawkins, it was Élie Cartan that made Lie theory what it is:
While Lie had many fertile ideas, Cartan was primarily responsible for the extensions and applications of his theory that have ma
|
https://en.wikipedia.org/wiki/Cantor%E2%80%93Dedekind%20axiom
|
In mathematical logic, the Cantor–Dedekind axiom is the thesis that the real numbers are order-isomorphic to the linear continuum of geometry. In other words, the axiom states that there is a one-to-one correspondence between real numbers and points on a line.
This axiom became a theorem proved by Emil Artin in his book Geometric Algebra. More precisely, Euclidean spaces defined over the field of real numbers satisfy the axioms of Euclidean geometry, and, from the axioms of Euclidean geometry, one can construct a field that is isomorphic to the real numbers.
Analytic geometry was developed from the Cartesian coordinate system introduced by René Descartes. It implicitly assumed this axiom by blending the distinct concepts of real numbers and points on a line, sometimes referred to as the real number line. Artin's proof not only makes this blend explicit, but also shows that analytic geometry is strictly equivalent to traditional synthetic geometry, in the sense that exactly the same theorems can be proved in the two frameworks.
Another consequence is that Alfred Tarski's proof of the decidability of first-order theories of the real numbers could be seen as an algorithm to solve any first-order problem in Euclidean geometry.
References
Ehrlich, P. (1994). "General introduction". Real Numbers, Generalizations of the Reals, and Theories of Continua, vi–xxxii. Edited by P. Ehrlich, Kluwer Academic Publishers, Dordrecht
Bruce E. Meserve (1953)
B.E. Meserve (1955)
Real numbers
Mathematical axioms
|
https://en.wikipedia.org/wiki/Mollifier
|
In mathematics, mollifiers (also known as approximations to the identity) are smooth functions with special properties, used for example in distribution theory to create sequences of smooth functions approximating nonsmooth (generalized) functions, via convolution. Intuitively, given a function which is rather irregular, by convolving it with a mollifier the function gets "mollified", that is, its sharp features are smoothed, while still remaining close to the original nonsmooth (generalized) function.
They are also known as Friedrichs mollifiers after Kurt Otto Friedrichs, who introduced them.
Historical notes
Mollifiers were introduced by Kurt Otto Friedrichs in his paper , which is considered a watershed in the modern theory of partial differential equations. The name of this mathematical object had a curious genesis, and Peter Lax tells the whole story in his commentary on that paper published in Friedrichs' "Selecta". According to him, at that time the mathematician Donald Alexander Flanders was a colleague of Friedrichs; since Friedrichs liked to consult colleagues about English usage, he asked Flanders for advice on how to name the smoothing operator he was using. Flanders was a Puritan, nicknamed Moll by his friends after Moll Flanders in recognition of his moral qualities: he suggested calling the new mathematical concept a "mollifier", as a pun incorporating both Flanders' nickname and the verb 'to mollify', meaning 'to smooth over' in a figurative sense.
Previously, Sergei Sobolev had used mollifiers in his epoch-making 1938 paper, which contains the proof of the Sobolev embedding theorem; Friedrichs himself acknowledged Sobolev's work on mollifiers, stating that "These mollifiers were introduced by Sobolev and the author...".
It must be pointed out that the term "mollifier" has undergone linguistic drift since the time of these foundational works: Friedrichs defined as "mollifier" the integral operator whose kernel is one of the functions nowadays called mollifiers. However, since the properties of a linear integral operator are completely determined by its kernel, the name mollifier was inherited by the kernel itself as a result of common usage.
Definition
Modern (distribution based) definition
If φ is a smooth function on ℝn, n ≥ 1, satisfying the following three requirements:
it is compactly supported,
∫_{ℝn} φ(x) dx = 1,
lim_{ε→0} φ_ε(x) := lim_{ε→0} ε^{−n} φ(x/ε) = δ(x),
where δ(x) is the Dirac delta function and the limit must be understood in the space of Schwartz distributions, then φ is a mollifier. The function φ could also satisfy further conditions: for example, if it satisfies
φ(x) ≥ 0 for all x ∈ ℝn, then it is called a positive mollifier;
φ(x) = μ(|x|) for some infinitely differentiable function μ : ℝ+ → ℝ, then it is called a symmetric mollifier.
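A numerical sketch using the standard bump function (the grid step and the test function |x| are arbitrary choices): convolving with φ_ε reproduces a function exactly where it is linear, while rounding off the corner of |x| at 0.

```python
import math

def phi(x):
    """Unnormalized standard mollifier: smooth, supported on (-1, 1)."""
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

h = 1e-3
grid = [i * h for i in range(-1000, 1001)]
Z = sum(phi(t) for t in grid) * h      # normalization, so phi/Z integrates to 1

def mollify(f, x, eps):
    """(f * phi_eps)(x) by a Riemann sum; substituting y = eps*t turns the
    convolution into an integral of f(x - eps*t) against phi(t)/Z."""
    return sum(f(x - eps * t) * phi(t) for t in grid) * h / Z

# Away from the corner, |x| is linear and the symmetric mollifier leaves it fixed:
assert abs(mollify(abs, 1.0, 0.1) - 1.0) < 1e-9
# At the corner the mollified value is positive but O(eps):
assert 0.0 < mollify(abs, 0.0, 0.01) < 0.01
```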
Notes on Friedrichs' definition
Note 1. When the theory of distributions was still not widely known nor used, property above was formulated by saying that the convolution of the function with a given function belonging to a proper Hilbert or Banach space converges a
|
https://en.wikipedia.org/wiki/G.%20S.%20Carr
|
George Shoobridge Carr (1837–1914) was a British mathematician. He wrote Synopsis of Pure Mathematics (1886). This book, first published in England in 1880, was read and studied closely by mathematician Srinivasa Ramanujan when he was a teenager. Ramanujan had already produced many theorems by the age of 15.
Carr was a private coach for the Tripos mathematics examinations at the University of Cambridge, and the Synopsis was written as a study guide for those examinations.
External links
Amitabha Sen, The Legacy of Mr. Carr, A Gift for the Gifted, parabaas.com, 1999
1837 births
19th-century British mathematicians
1914 deaths
20th-century British mathematicians
|
https://en.wikipedia.org/wiki/Proof%20procedure
|
In logic, and in particular proof theory, a proof procedure for a given logic is a systematic method for producing proofs in some proof calculus of (provable) statements.
Types of proof calculi used
There are several types of proof calculi. The most popular are natural deduction, sequent calculi (i.e., Gentzen-type systems), Hilbert systems, and semantic tableaux or trees. A given proof procedure will target a specific proof calculus, but can often be reformulated so as to produce proofs in other proof styles.
Completeness
A proof procedure for a logic is complete if it produces a proof for each provable statement. The theorems of logical systems are typically recursively enumerable, which implies the existence of a complete but usually extremely inefficient proof procedure; however, a proof procedure is only of interest if it is reasonably efficient.
Faced with an unprovable statement, a complete proof procedure may sometimes succeed in detecting and signalling its unprovability. In the general case, where provability is only a semidecidable property, this is not possible, and instead the procedure will diverge (not terminate).
See also
Automated theorem proving
Proof complexity
Proof tableaux
Deductive system
Proof (truth)
References
Willard Quine 1982 (1950). Methods of Logic. Harvard Univ. Press.
Proof theory
|
https://en.wikipedia.org/wiki/Categorification
|
In mathematics, categorification is the process of replacing set-theoretic theorems with category-theoretic analogues. Categorification, when done successfully, replaces sets with categories, functions with functors, and equations with natural isomorphisms of functors satisfying additional properties. The term was coined by Louis Crane.
The reverse of categorification is the process of decategorification. Decategorification is a systematic process by which isomorphic objects in a category are identified as equal. Whereas decategorification is a straightforward process, categorification is usually much less straightforward. In the representation theory of Lie algebras, modules over specific algebras are the principal objects of study, and there are several frameworks for what a categorification of such a module should be, e.g., so called (weak) abelian categorifications.
Categorification and decategorification are not precise mathematical procedures, but rather a class of possible analogues. They are used in a way similar to words like 'generalization', and not like 'sheafification'.
Examples
One form of categorification takes a structure described in terms of sets, and interprets the sets as isomorphism classes of objects in a category. For example, the set of natural numbers can be seen as the set of cardinalities of finite sets (and any two sets with the same cardinality are isomorphic). In this case, operations on the set of natural numbers, such as addition and multiplication, can be seen as carrying information about coproducts and products of the category of finite sets. Less abstractly, the idea here is that manipulating sets of actual objects, and taking coproducts (combining two sets in a union) or products (building arrays of things to keep track of large numbers of them) came first. Later, the concrete structure of sets was abstracted away – taken "only up to isomorphism", to produce the abstract theory of arithmetic. This is a "decategorification" – categorification reverses this step.
Other examples include homology theories in topology. Emmy Noether gave the modern formulation of homology as the rank of certain free abelian groups by categorifying the notion of a Betti number. See also Khovanov homology as a knot invariant in knot theory.
An example in finite group theory is that the ring of symmetric functions is categorified by the category of representations of the symmetric group. The decategorification map sends the Specht module indexed by partition to the Schur function indexed by the same partition,
essentially following the character map from a favorite basis of the associated Grothendieck group to a representation-theoretic favorite basis of the ring of symmetric functions. This map reflects how the structures are similar; for example
have the same decomposition numbers over their respective bases, both given by Littlewood–Richardson coefficients.
Abelian categorifications
For a category , let be the Gr
|
https://en.wikipedia.org/wiki/Pickover%20stalk
|
Pickover stalks are certain kinds of details to be found empirically in the Mandelbrot set, in the study of fractal geometry. They are so named after the researcher Clifford Pickover, whose "epsilon cross" method was instrumental in their discovery. An "epsilon cross" is a cross-shaped orbit trap.
According to Vepstas (1997) "Pickover hit on the novel concept of looking to see how closely the orbits of interior points come to the x and y axes. In these pictures, the closer that the point approaches, the higher up the color scale, with red denoting the closest approach. The logarithm of the distance is taken to accentuate the details".
Biomorphs
Biomorphs are biological-looking Pickover Stalks. At the end of the 1980s, Pickover developed biological feedback organisms similar to Julia sets and the fractal Mandelbrot set. According to Pickover (1999) in summary, he "described an algorithm which could be used for the creation of diverse and complicated forms resembling invertebrate organisms. The shapes are complicated and difficult to predict before actually experimenting with the mappings. He hoped these techniques would encourage others to explore further and discover new forms, by accident, that are on the edge of science and art".
Pickover developed an algorithm (which uses neither random perturbations nor natural laws) to create very complicated forms resembling invertebrate organisms. The iteration, or recursion, of mathematical transformations is used to generate biological morphologies. He called them "biomorphs." At the same time he coined "biomorph" for these patterns, the famous evolutionary biologist Richard Dawkins used the word to refer to his own set of biological shapes that were arrived at by a very different procedure. More rigorously, Pickover's "biomorphs" encompass the class of organismic morphologies created by small changes to traditional convergence tests in the field of "Julia set" theory.
Pickover's biomorphs show a self-similarity at different scales, a common feature of dynamical systems with feedback. Real systems, such as shorelines and mountain ranges, also show self-similarity over some scales. A 2-dimensional parametric 0L system can “look” like Pickover's biomorphs.
Implementation
The below example, written in pseudocode, renders a Mandelbrot set colored using a Pickover Stalk with a transformation vector and a color dividend.
The transformation vector is used to offset the (x, y) position when sampling the point's distance to the horizontal and vertical axis.
The color dividend is a float used to determine how thick the stalk is when it is rendered.
For each pixel (x, y) on the target, do:
{
zx = scaled x coordinate of pixel (scaled to lie in the Mandelbrot X scale (-2.5, 1))
zy = scaled y coordinate of pixel (scaled to lie in the Mandelbrot Y scale (-1, 1))
float2 c = (zx, zy) //Offset in the Mandelbrot formulae
float x = zx; //Coordinates to be iterated
float y = zy;
float trapDistance =
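A plain-Python sketch of the same epsilon-cross idea, completing the truncated loop above (the escape radius 2 and the capped color value are conventional choices):

```python
def pickover_stalk_value(cx, cy, transform=(0.0, 0.0), dividend=0.05, max_iter=100):
    """Epsilon-cross orbit trap: track the orbit's closest approach to the
    (offset) coordinate axes while iterating z -> z^2 + c."""
    tx, ty = transform
    x, y = cx, cy
    trap = float("inf")
    for _ in range(max_iter):
        # One Mandelbrot iteration, done with simultaneous assignment.
        x, y = x * x - y * y + cx, 2 * x * y + cy
        if x * x + y * y > 4.0:          # escaped the radius-2 disc
            break
        # Distance to the vertical and horizontal axes, offset by the transform.
        trap = min(trap, abs(x - tx), abs(y - ty))
    # Thicker stalks for larger dividends; cap the intensity at 1.
    return min(dividend / trap, 1.0) if trap > 0 else 1.0

# c = 0 stays on the axes forever (maximal intensity); c = 2 + 2i escapes at once.
assert pickover_stalk_value(0.0, 0.0) == 1.0
assert pickover_stalk_value(2.0, 2.0) == 0.0
```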
|
https://en.wikipedia.org/wiki/Digit%20sum
|
In mathematics, the digit sum of a natural number in a given number base is the sum of all its digits. For example, the digit sum of the decimal number 9045 would be 9 + 0 + 4 + 5 = 18.
Definition
Let n be a natural number. We define the digit sum for base b > 1, F_b : ℕ → ℕ, to be the following:
F_b(n) = Σ_{i=0}^{k} d_i,
where k = ⌊log_b n⌋ is one less than the number of digits in the number in base b, and
d_i = (n mod b^{i+1} − n mod b^i) / b^i
is the value of each digit of the number.
For example, in base 10, the digit sum of 84001 is F₁₀(84001) = 8 + 4 + 0 + 0 + 1 = 13.
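The definition reduces to a familiar divmod loop; a minimal sketch:

```python
def digit_sum(n, base=10):
    """Sum of the digits of the natural number n written in the given base."""
    total = 0
    while n > 0:
        n, d = divmod(n, base)   # strip the least significant digit d
        total += d
    return total

assert digit_sum(84001) == 13        # 8 + 4 + 0 + 0 + 1
assert digit_sum(0b1011, 2) == 3     # the Hamming weight of binary 1011
```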
For any two bases and for sufficiently large natural numbers
The sum of the base 10 digits of the integers 0, 1, 2, ... is given by a sequence in the On-Line Encyclopedia of Integer Sequences. The generating function of this integer sequence (and of the analogous sequence for binary digit sums) has been used to derive several rapidly converging series with rational and transcendental sums.
Extension to negative integers
The digit sum can be extended to the negative integers by use of a signed-digit representation to represent each integer.
Applications
The concept of a decimal digit sum is closely related to, but not the same as, the digital root, which is the result of repeatedly applying the digit sum operation until the remaining value is only a single digit. The decimal digital root of any non-zero integer will be a number in the range 1 to 9, whereas the digit sum can take any value. Digit sums and digital roots can be used for quick divisibility tests: a natural number is divisible by 3 or 9 if and only if its digit sum (or digital root) is divisible by 3 or 9, respectively. For divisibility by 9, this test is called the rule of nines and is the basis of the casting out nines technique for checking calculations.
Digit sums are also a common ingredient in checksum algorithms to check the arithmetic operations of early computers. Earlier, in an era of hand calculation, it was suggested to use sums of 50 digits taken from mathematical tables of logarithms as a form of random number generation; if one assumes that each digit is random, then by the central limit theorem, these digit sums will have a random distribution closely approximating a Gaussian distribution.
The digit sum of the binary representation of a number is known as its Hamming weight or population count; algorithms for performing this operation have been studied, and it has been included as a built-in operation in some computer architectures and some programming languages. These operations are used in computing applications including cryptography, coding theory, and computer chess.
Harshad numbers are defined in terms of divisibility by their digit sums, and Smith numbers are defined by the equality of their digit sums with the digit sums of their prime factorizations.
See also
Arithmetic dynamics
Casting out nines
Checksum
Digital root
Hamming weight
Harshad number
Perfect digital invariant
Sideways sum
Smith number
Sum-product number
References
External links
Simple applications of digit sum
Addition
Arithmetic dynamics
Base-dependent integer sequences
Numb
|
https://en.wikipedia.org/wiki/Score%20test
|
In statistics, the score test assesses constraints on statistical parameters based on the gradient of the likelihood function—known as the score—evaluated at the hypothesized parameter value under the null hypothesis. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. While the finite sample distributions of score tests are generally unknown, they have an asymptotic χ2-distribution under the null hypothesis as first proved by C. R. Rao in 1948, a fact that can be used to determine statistical significance.
Since function maximization subject to equality constraints is most conveniently done using a Lagrangean expression of the problem, the score test can be equivalently understood as a test of the magnitude of the Lagrange multipliers associated with the constraints where, again, if the constraints are non-binding at the maximum likelihood, the vector of Lagrange multipliers should not differ from zero by more than sampling error. The equivalence of these two approaches was first shown by S. D. Silvey in 1959, which led to the name Lagrange multiplier test that has become more commonly used, particularly in econometrics, since Breusch and Pagan's much-cited 1980 paper.
The main advantage of the score test over the Wald test and likelihood-ratio test is that the score test only requires the computation of the restricted estimator. This makes testing feasible when the unconstrained maximum likelihood estimate is a boundary point in the parameter space. Further, because the score test only requires the estimation of the likelihood function under the null hypothesis, it is less specific than the likelihood ratio test about the alternative hypothesis.
Single-parameter test
The statistic
Let L(θ ∣ x) be the likelihood function, which depends on a univariate parameter θ, and let x be the data. The score U(θ) is defined as
U(θ) = ∂ log L(θ ∣ x) / ∂θ.
The Fisher information is
I(θ) = −E[ ∂² log f(X; θ) / ∂θ² ∣ θ ],
where f is the probability density.
The statistic to test H₀ : θ = θ₀ is
S(θ₀) = U(θ₀)² / I(θ₀),
which has an asymptotic distribution of χ²₁ when H₀ is true. While asymptotically identical, calculating the LM statistic using the outer-gradient-product estimator of the Fisher information matrix can lead to bias in small samples.
Note on notation
Note that some texts use an alternative notation, in which the statistic S*(θ₀) = √S(θ₀) is tested against a normal distribution. This approach is equivalent and gives identical results.
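As a concrete sketch, for n Bernoulli trials with k successes both the score and the Fisher information at p₀ have closed forms (the data 60/100 and the hypothesized p₀ = 0.5 are illustrative choices):

```python
def score_test_binomial(k, n, p0):
    """Score (Lagrange multiplier) test of H0: p = p0 for k successes in n
    Bernoulli trials.  Only the restricted model (p = p0) is needed."""
    score = (k - n * p0) / (p0 * (1.0 - p0))   # U(p0) = d/dp log L(p) at p0
    fisher = n / (p0 * (1.0 - p0))             # I(p0)
    return score ** 2 / fisher                 # ~ chi-squared(1) under H0

# 60 successes in 100 trials against H0: p = 0.5:
s = score_test_binomial(60, 100, 0.5)
assert abs(s - 4.0) < 1e-12       # (60 - 50)^2 / (100 * 0.25) = 4
# 4.0 exceeds 3.841, the 5% critical value of chi-squared with 1 df, so reject H0.
```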
As most powerful test for small deviations
The score test rejects H₀ when ∂ log L(θ ∣ x)/∂θ, evaluated at θ₀, exceeds a threshold C, where L is the likelihood function, θ₀ is the value of the parameter of interest under the null hypothesis, and C is a constant set depending on the size of the test desired (i.e. the probability of rejecting H₀ if H₀ is true; see Type I error).
The score test is the most powerful test for small deviations from θ₀. To see this, consider testing θ = θ₀ versus θ = θ₀ + h. By the Neyman–Pearson lemma, the most powerful test has the form
L(θ₀ + h ∣ x) / L(θ₀ ∣ x) ≥ K.
Taking the log of both sides yields
log L(θ₀ + h ∣ x) − log L(θ₀ ∣ x) ≥ log K.
The score test follows making the substitution (by
|
https://en.wikipedia.org/wiki/Polydisc
|
In the theory of functions of several complex variables, a branch of mathematics, a polydisc is a Cartesian product of discs.
More specifically, if we denote by D(z, r) the open disc of center z and radius r in the complex plane, then an open polydisc is a set of the form
D(z₁, r₁) × ⋯ × D(zₙ, rₙ).
It can be equivalently written as
{w = (w₁, …, wₙ) ∈ Cn : |wₖ − zₖ| < rₖ for all k = 1, …, n}.
One should not confuse the polydisc with the open ball in Cn, which is defined as
{w ∈ Cn : ‖z − w‖ < r}.
Here, the norm is the Euclidean distance in Cn.
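The distinction between the two sets can be probed numerically; a minimal sketch (the chosen test point is arbitrary):

```python
def in_polydisc(w, centers, radii):
    """Membership in the open polydisc D(z1, r1) x ... x D(zn, rn)."""
    return all(abs(wk - zk) < rk for wk, zk, rk in zip(w, centers, radii))

def in_ball(w, center, r):
    """Membership in the open Euclidean ball of radius r in C^n."""
    return sum(abs(wk - zk) ** 2 for wk, zk in zip(w, center)) < r * r

# The unit polydisc in C^2 strictly contains the unit ball:
w = (0.9, 0.9)
assert in_polydisc(w, (0, 0), (1, 1)) and not in_ball(w, (0, 0), 1.0)
```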
When , open balls and open polydiscs are not biholomorphically equivalent, that is, there is no biholomorphic mapping between the two. This was proven by Poincaré in 1907 by showing that their automorphism groups have different dimensions as Lie groups.
When the term bidisc is sometimes used.
A polydisc is an example of logarithmically convex Reinhardt domain.
References
Several complex variables
|
https://en.wikipedia.org/wiki/Edward%20F.%20Moore
|
Edward Forrest Moore (November 23, 1925 in Baltimore, Maryland – June 14, 2003 in Madison, Wisconsin) was an American professor of mathematics and computer science, the inventor of the Moore finite state machine, and an early pioneer of artificial life.
Biography
Moore received a B.S. in chemistry from the Virginia Polytechnic Institute in Blacksburg, Virginia in 1947 and a Ph.D. in Mathematics from Brown University in Providence, Rhode Island in June 1950. He worked at the University of Illinois at Urbana–Champaign from 1950 to 1952 and was a visiting professor at MIT and visiting lecturer at Harvard University simultaneously in 1961-1962. He worked at Bell Labs from 1952 to 1966. After that, he was a professor at the University of Wisconsin–Madison from 1966 until he retired in 1985.
He married Elinor Constance Martin and they had three children.
Scientific work
He was the first to use the type of finite state machine (FSM) that is commonly used today, the Moore FSM. With Claude Shannon he did seminal work on computability theory and built reliable circuits using less reliable relays. He also spent a great deal of his later years on a fruitless effort to prove the Four Color Theorem.
With John Myhill, Moore proved the Garden of Eden theorem characterizing the cellular automaton rules that have patterns with no predecessor. He is also the namesake of the Moore neighborhood for cellular automata, used by Conway's Game of Life, and was the first to publish on the firing squad synchronization problem in cellular automata.
In a 1956 article in Scientific American, he proposed "Artificial Living Plants," which would be floating factories that could create copies of themselves. They could be programmed to perform some function (extracting fresh water, harvesting minerals from seawater) for an investment that would be relatively small compared to the huge returns from the exponentially growing numbers of factories.
Moore also asked which regular graphs can have their diameter matching a simple lower bound for the problem given by a regular tree with the same degree. The graphs matching this bound were subsequently named Moore graphs in his honor.
Publications
With Claude Shannon, before and during his time at Bell Labs, he coauthored "Gedanken-experiments on sequential machines", "Computability by Probabilistic Machines", "Machine Aid for Switching Circuit Design", and "Reliable Circuits Using Less Reliable Relays".
At Bell Labs he authored "Variable Length Binary Encodings", "The Shortest Path Through a Maze", "A simplified universal Turing machine", and "Complete Relay Decoding Networks".
"Machine models of self-reproduction," Proceedings of Symposia in Applied Mathematics, volume 14, pages 17–33. The American Mathematical Society, 1962.
"Artificial Living Plants," Scientific American, (Oct 1956):118-126
"Gedanken-experiments on Sequential Machines," pp 129 – 153, Automata Studies, Annals of Mathematical Studies, no. 34, Princeton University Press, Prince
|
https://en.wikipedia.org/wiki/Wald%20test
|
In statistics, the Wald test (named after Abraham Wald) assesses constraints on statistical parameters based on the weighted distance between the unrestricted estimate and its hypothesized value under the null hypothesis, where the weight is the precision of the estimate. Intuitively, the larger this weighted distance, the less likely it is that the constraint is true. While the finite sample distributions of Wald tests are generally unknown, it has an asymptotic χ2-distribution under the null hypothesis, a fact that can be used to determine statistical significance.
Together with the Lagrange multiplier test and the likelihood-ratio test, the Wald test is one of three classical approaches to hypothesis testing. An advantage of the Wald test over the other two is that it only requires the estimation of the unrestricted model, which lowers the computational burden as compared to the likelihood-ratio test. However, a major disadvantage is that (in finite samples) it is not invariant to changes in the representation of the null hypothesis; in other words, algebraically equivalent expressions of non-linear parameter restriction can lead to different values of the test statistic. That is because the Wald statistic is derived from a Taylor expansion, and different ways of writing equivalent nonlinear expressions lead to nontrivial differences in the corresponding Taylor coefficients. Another aberration, known as the Hauck–Donner effect, can occur in binomial models when the estimated (unconstrained) parameter is close to the boundary of the parameter space—for instance a fitted probability being extremely close to zero or one—which results in the Wald test no longer monotonically increasing in the distance between the unconstrained and constrained parameter.
Mathematical details
Under the Wald test, the estimated parameter θ̂ that was found as the maximizing argument of the unconstrained likelihood function is compared with a hypothesized value θ₀. In particular, the squared difference θ̂ − θ₀ is weighted by the curvature of the log-likelihood function.
Test on a single parameter
If the hypothesis involves only a single parameter restriction, then the Wald statistic takes the following form:

W = (θ̂ − θ₀)² / var(θ̂)

which under the null hypothesis follows an asymptotic χ²-distribution with one degree of freedom. The square root of the single-restriction Wald statistic can be understood as a (pseudo) t-ratio

√W = (θ̂ − θ₀) / se(θ̂)

that is, however, not actually t-distributed except for the special case of linear regression with normally distributed errors. In general, it follows an asymptotic z distribution. Here se(θ̂) is the standard error of the maximum likelihood estimate (MLE), the square root of the variance. There are several ways to consistently estimate the variance matrix, which in finite samples leads to alternative estimates of standard errors and associated test statistics and p-values.
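As a minimal sketch of the single-restriction case, the statistic and its asymptotic χ²(1) p-value can be computed directly (the estimate, hypothesized value, and standard error below are made-up illustration numbers):

```python
from scipy import stats

def wald_test(theta_hat, theta0, se):
    """Single-restriction Wald statistic W = ((theta_hat - theta0)/se)**2
    with its asymptotic chi-squared(1) p-value."""
    w = ((theta_hat - theta0) / se) ** 2
    p_value = stats.chi2.sf(w, df=1)
    return w, p_value

# Illustrative values: estimate 1.8, null value 1.0, standard error 0.4.
w, p = wald_test(theta_hat=1.8, theta0=1.0, se=0.4)
print(w, p)  # W = 4.0, p ≈ 0.0455
```

The square root of W, here 2.0, is the (pseudo) t-ratio discussed above.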
Test(s) on multiple parameters
The Wald test can be used to test a single hypothesis on multiple parameters, as well as
|
https://en.wikipedia.org/wiki/Inverse-chi-squared%20distribution
|
In probability and statistics, the inverse-chi-squared distribution (or inverted-chi-square distribution) is a continuous probability distribution of a positive-valued random variable. It is closely related to the chi-squared distribution. It arises in Bayesian inference, where it can be used as the prior and posterior distribution for an unknown variance of the normal distribution.
Definition
The inverse-chi-squared distribution (or inverted-chi-square distribution) is the probability distribution of a random variable whose multiplicative inverse (reciprocal) has a chi-squared distribution. It is also often defined as the distribution of a random variable whose reciprocal divided by its degrees of freedom is a chi-squared distribution. That is, if X has the chi-squared distribution with ν degrees of freedom, then according to the first definition, 1/X has the inverse-chi-squared distribution with ν degrees of freedom; while according to the second definition, ν/X has the inverse-chi-squared distribution with ν degrees of freedom. Information associated with the first definition is depicted on the right side of the page.
The first definition yields a probability density function given by

f₁(x; ν) = 2^(−ν/2)/Γ(ν/2) · x^(−ν/2−1) e^(−1/(2x)),

while the second definition yields the density function

f₂(x; ν) = (ν/2)^(ν/2)/Γ(ν/2) · x^(−ν/2−1) e^(−ν/(2x)).

In both cases, x > 0 and ν is the degrees of freedom parameter. Further, Γ is the gamma function. Both definitions are special cases of the scaled-inverse-chi-squared distribution. For the first definition the variance of the distribution is 2/((ν − 2)²(ν − 4)) for ν > 4, while for the second definition it is 2ν²/((ν − 2)²(ν − 4)).
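The first definition can be checked numerically: sampling reciprocals of chi-squared draws should match the equivalent inverse-gamma law with shape ν/2 and scale 1/2 (both the sample size and ν = 6 below are arbitrary illustration choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nu = 6  # degrees of freedom, chosen > 4 so the variance exists

# First definition: if X ~ chi-squared(nu), then 1/X is inverse-chi-squared(nu).
samples = 1.0 / rng.chisquare(df=nu, size=1_000_000)

# The same law, expressed as an inverse-gamma with shape nu/2 and scale 1/2.
inv_chi2 = stats.invgamma(a=nu / 2, scale=0.5)

print(samples.mean(), inv_chi2.mean())  # both near 1/(nu - 2) = 0.25
```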
Related distributions
chi-squared: If X ~ χ²(ν), then 1/X ~ Inv-χ²(ν)
scaled-inverse chi-squared: If X ~ Scale-inv-χ²(ν, 1/ν), then X ~ Inv-χ²(ν)
Inverse gamma: Inv-χ²(ν) = Inv-Gamma(α = ν/2, β = 1/2) (first definition)
Inverse chi-squared distribution is a special case of type 5 Pearson distribution
See also
Scaled-inverse-chi-squared distribution
Inverse-Wishart distribution
References
External links
InvChisquare in geoR package for the R Language.
Continuous distributions
Exponential family distributions
Probability distributions with non-finite variance
|
https://en.wikipedia.org/wiki/Instituto%20Nacional%20de%20Estad%C3%ADstica%20e%20Inform%C3%A1tica
|
The Instituto Nacional de Estadística e Informática (INEI) ("National Institute of Statistics and Informatics") is a semi-autonomous Peruvian government agency which coordinates, compiles, and evaluates statistical information for the country. Its current director is Renán Quispe Llanos.
As stated on its website, the INEI supports decision-making with quality statistical information and the use of information technology, and thus contributes to the development of society.
Censuses
The latest census performed by the INEI is the 2017 Census, which was conducted from August 22 through November 5 of that year. Its preliminary results will be released to the public in 3 months, and final results in January 2018. An earlier census is the 2007 Census.
Coding systems
In its reports INEI uses standard coding systems for geographical location (Ubicación Geográfica) and classification of economical activities (Clasificación Nacional de Actividades Económicas del Perú):
UBIGEO
ClaNAE
See also
Census in Peru
External links
INEI website
Peru
Government agencies of Peru
|
https://en.wikipedia.org/wiki/Direct%20comparison%20test
|
In mathematics, the comparison test, sometimes called the direct comparison test to distinguish it from similar related tests (especially the limit comparison test), provides a way of deducing the convergence or divergence of an infinite series or an improper integral. In both cases, the test works by comparing the given series or integral to one whose convergence properties are known.
For series
In calculus, the comparison test for series typically consists of a pair of statements about infinite series with non-negative (real-valued) terms:
If the infinite series Σbₙ converges and 0 ≤ aₙ ≤ bₙ for all sufficiently large n (that is, for all n > N for some fixed value N), then the infinite series Σaₙ also converges.
If the infinite series Σbₙ diverges and 0 ≤ bₙ ≤ aₙ for all sufficiently large n, then the infinite series Σaₙ also diverges.
Note that the series having larger terms is sometimes said to dominate (or eventually dominate) the series with smaller terms.
Alternatively, the test may be stated in terms of absolute convergence, in which case it also applies to series with complex terms:
If the infinite series Σbₙ is absolutely convergent and |aₙ| ≤ |bₙ| for all sufficiently large n, then the infinite series Σaₙ is also absolutely convergent.
If the infinite series Σaₙ is not absolutely convergent and |aₙ| ≤ |bₙ| for all sufficiently large n, then the infinite series Σbₙ is also not absolutely convergent.
Note that in this last statement, the series could still be conditionally convergent; for real-valued series, this could happen if the an are not all nonnegative.
The second pair of statements are equivalent to the first in the case of real-valued series, because Σaₙ converges absolutely if and only if Σ|aₙ|, a series with nonnegative terms, converges.
Proof
The proofs of all the statements given above are similar. Here is a proof of the third statement.
Let Σaₙ and Σbₙ be infinite series such that Σbₙ converges absolutely (thus Σ|bₙ| converges), and without loss of generality assume that |aₙ| ≤ |bₙ| for all positive integers n. Consider the partial sums

Sₙ = |a₁| + |a₂| + ⋯ + |aₙ|,  Tₙ = |b₁| + |b₂| + ⋯ + |bₙ|.

Since Σbₙ converges absolutely, Tₙ → T for some real number T. For all n,

0 ≤ Sₙ ≤ Sₙ + (T − Tₙ) ≤ Tₙ + (T − Tₙ) = T.

Sₙ is a nondecreasing sequence and Sₙ + (T − Tₙ) is nonincreasing.
Given m, n > N, both Sₘ and Sₙ belong to the interval [S_N, S_N + (T − T_N)], whose length T − T_N decreases to zero as N goes to infinity.
This shows that (Sₙ) is a Cauchy sequence, and so must converge to a limit. Therefore, Σaₙ is absolutely convergent.
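The series form of the test can be illustrated numerically. Since 0 ≤ 1/(n² + 1) ≤ 1/n² for every n ≥ 1 and the p-series Σ1/n² converges (to π²/6), the comparison test guarantees Σ1/(n² + 1) converges; the truncation length below is an arbitrary choice:

```python
import math

# 0 <= 1/(n**2 + 1) <= 1/n**2 for every n >= 1, and sum 1/n**2
# converges to pi**2/6, so sum 1/(n**2 + 1) converges as well.
N = 100_000
partial_small = sum(1.0 / (n * n + 1) for n in range(1, N + 1))
partial_large = sum(1.0 / (n * n) for n in range(1, N + 1))

print(partial_small, partial_large)  # both bounded above by pi**2/6 ≈ 1.6449
```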
For integrals
The comparison test for integrals may be stated as follows, assuming continuous real-valued functions f and g on [a, b) with b either +∞ or a real number at which f and g each have a vertical asymptote:
If the improper integral ∫ₐᵇ g(x) dx converges and 0 ≤ f(x) ≤ g(x) for a ≤ x < b, then the improper integral ∫ₐᵇ f(x) dx also converges, with ∫ₐᵇ f(x) dx ≤ ∫ₐᵇ g(x) dx.
If the improper integral ∫ₐᵇ f(x) dx diverges and 0 ≤ f(x) ≤ g(x) for a ≤ x < b, then the improper integral ∫ₐᵇ g(x) dx also diverges.
Ratio comparison test
Another test for convergence of real-valued series, similar to both the direct comparison test above and the ratio test, is called the ratio comparison test:
If the infinite series Σbₙ converges and aₙ > 0, bₙ > 0, and aₙ₊₁/aₙ ≤ bₙ₊₁/bₙ for all sufficiently large n, then the infinite series Σaₙ also converges.
|
https://en.wikipedia.org/wiki/Solution%20in%20radicals
|
A solution in radicals or algebraic solution is a closed-form expression, and more specifically a closed-form algebraic expression, that is the solution of a polynomial equation, and relies only on addition, subtraction, multiplication, division, raising to integer powers, and the extraction of th roots (square roots, cube roots, and other integer roots).
A well-known example is the solution

x = (−b ± √(b² − 4ac)) / (2a)

of the quadratic equation ax² + bx + c = 0 with a ≠ 0.
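The quadratic case is easy to sketch in code, since the radical formula is the entire algorithm (using complex square roots so a negative discriminant is handled too):

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the radical formula."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

r1, r2 = quadratic_roots(1, -3, 2)  # x**2 - 3x + 2 = (x - 1)(x - 2)
print(r1, r2)  # roots 2 and 1
```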
There exist more complicated algebraic solutions for cubic equations and quartic equations. The Abel–Ruffini theorem, and, more generally Galois theory, state that some quintic equations, such as
do not have any algebraic solution. The same is true for every higher degree. However, for any degree there are some polynomial equations that have algebraic solutions; for example, the equation can be solved as The eight other solutions are nonreal complex numbers, which are also algebraic and have the form where is a fifth root of unity, which can be expressed with two nested square roots. See also for various other examples in degree 5.
Évariste Galois introduced a criterion allowing one to decide which equations are solvable in radicals. See Radical extension for the precise formulation of his result.
Algebraic solutions form a subset of closed-form expressions, because the latter permit transcendental functions (non-algebraic functions) such as the exponential function, the logarithmic function, and the trigonometric functions and their inverses.
See also
Solvable quintics
Solvable sextics
Solvable septics
References
Algebra
Equations
|
https://en.wikipedia.org/wiki/David%20Berlinski
|
David Berlinski (born 1942) is an American author who has written books about mathematics and the history of science as well as fiction. An opponent of evolution, he is a senior fellow of the Discovery Institute's Center for Science and Culture, an organization which promotes the pseudoscience of intelligent design.
Early life
David Berlinski was born in the United States in 1942 to German-born Jewish refugees who had emigrated to New York City after escaping from France while the Vichy government was collaborating with the Germans. His father was Herman Berlinski, a composer, organist, pianist, musicologist and choir conductor, and his mother was Sina Berlinski (née Goldfein), a pianist, piano teacher and voice coach. Both were born and raised in Leipzig, where they studied at the Conservatory, before fleeing to Paris, where they were married and undertook further studies. German was David Berlinski's first spoken language. He earned his BA from Columbia University and PhD in philosophy from Princeton University.
Academic career
After his PhD, Berlinski was a research assistant in the Department of Biology at Columbia University. He has taught philosophy, mathematics and English at Stanford University, Rutgers, the City University of New York and the Université de Paris. He was a research fellow at the International Institute for Applied Systems Analysis (IIASA) in Austria and the Institut des Hautes Études Scientifiques (IHES) in France.
Author
Mathematics and biology
Berlinski has written works on systems analysis, the history of differential topology, analytic philosophy, and the philosophy of mathematics. Berlinski has authored books for the general public on mathematics and the history of mathematics. These include A Tour of the Calculus (1995) on calculus, The Advent of the Algorithm (2000) on algorithms, Newton's Gift (2000) on Isaac Newton, and Infinite Ascent: A Short History of Mathematics (2005). Another book, The Secrets of the Vaulted Sky (2003), aimed to redeem astrology as "rationalistic"; Publishers Weekly described the book as offering "self-consciously literary vignettes ... ostentatious erudition and metaphysical pseudo-profundities". In Black Mischief (1988), Berlinski wrote "Our paper became a monograph. When we had completed the details, we rewrote everything so that no one could tell how we came upon our ideas or why. This is the standard in mathematics."
Berlinski's books have received mixed reviews. Newton's Gift, The King of Infinite Space and The Advent of the Algorithm were criticized on MathSciNet for containing historical and mathematical inaccuracies. While the Mathematical Association of America review of A Tour of the Calculus by Fernando Q. Gouvêa recommended that professors have students read the book to appreciate the overarching historical and philosophical picture of calculus, a review in The Mathematical Gazette criticized it for inaccuracy and lack of clarity, declaring, "I haven't learned anythi
|
https://en.wikipedia.org/wiki/Mittag-Leffler%20function
|
In mathematics, the Mittag-Leffler function E_{α,β} is a special function, a complex function which depends on two complex parameters α and β. It may be defined by the following series when the real part of α is strictly positive:

E_{α,β}(z) = Σ_{k=0}^∞ z^k / Γ(αk + β),

where Γ is the gamma function. When β = 1, it is abbreviated as E_α(z) = E_{α,1}(z).
For α = 0, the series above equals the Taylor expansion of the geometric series, and consequently E_{0,β}(z) = 1/(Γ(β)(1 − z)) for |z| < 1.
In the case α and β are real and positive, the series converges for all values of the argument z, so the Mittag-Leffler function is an entire function. This function is named after Gösta Mittag-Leffler. This class of functions is important in the theory of the fractional calculus.
For α > 0, the Mittag-Leffler function E_{α,1} is an entire function of order 1/α, and is in some sense the simplest entire function of its order.
The Mittag-Leffler function satisfies the recurrence property (Theorem 5.1 of )
from which the following Poincaré asymptotic expansion holds : for and real such that
then for all , we can show the following asymptotic expansions (Section 6. of ):
-as :
,
-and as :
,
where we used the notation .
Special cases
For α = 1/2, 0, 1, 2 we find: (Section 2 of )
Error function: E_{1/2}(z) = exp(z²) erfc(−z).
The sum of a geometric progression: E_0(z) = Σ_{k=0}^∞ z^k = 1/(1 − z), for |z| < 1.
Exponential function: E_1(z) = exp(z).
Hyperbolic cosine: E_2(z) = cosh(√z).
For β = 2, we have E_{1,2}(z) = (e^z − 1)/z and E_{2,2}(z) = sinh(√z)/√z.
For , the integral
gives, respectively: , , .
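Under the series definition above, a minimal numeric sketch is direct (the truncation length is an arbitrary choice, adequate only for moderate |z|; production code would switch to the asymptotic expansions for large arguments):

```python
import math

def mittag_leffler(z, alpha, beta=1.0, terms=60):
    """Truncated series E_{alpha,beta}(z) = sum_{k>=0} z**k / Gamma(alpha*k + beta)."""
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(terms))

# Sanity checks against the closed forms in the special cases above:
print(mittag_leffler(1.0, 1.0))  # E_1(1) = e ≈ 2.718281828
print(mittag_leffler(4.0, 2.0))  # E_2(4) = cosh(sqrt(4)) = cosh(2) ≈ 3.762195691
```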
Mittag-Leffler's integral representation
The integral representation of the Mittag-Leffler function is (Section 6 of )
where the contour starts and ends at and circles around the singularities and branch points of the integrand.
Related to the Laplace transform and Mittag-Leffler summation is the expression (Eq (7.5) of with )
Applications of Mittag-Leffler function
One of the applications of the Mittag-Leffler function is in modeling fractional order viscoelastic materials. Experimental investigations into the time-dependent relaxation behavior of viscoelastic materials are characterized by a very fast decrease of the stress at the beginning of the relaxation process and an extremely slow decay for large times. It can even take a long time before a constant asymptotic value is reached. Therefore, a lot of Maxwell elements are required to describe relaxation behavior with sufficient accuracy. This ends in a difficult optimization problem in order to identify a large number of material parameters. On the other hand, over the years, the concept of fractional derivatives has been introduced to the theory of viscoelasticity. Among these models, the fractional Zener model was found to be very effective to predict the dynamic nature of rubber-like materials with only a small number of material parameters. The solution of the corresponding constitutive equation leads to a relaxation function of the Mittag-Leffler type. It is defined by the power series with negative arguments. This function represents all essential properties of the relaxation process under the influence of an arbitrary and continuous signal with a jump at the origin.
See also
Mittag-Leffler summation
|
https://en.wikipedia.org/wiki/Perfect%20Bayesian%20equilibrium
|
In game theory, a Perfect Bayesian Equilibrium (PBE) is a solution with Bayesian probability to a turn-based game with incomplete information. More specifically, it is an equilibrium concept that uses Bayesian updating to describe player behavior in dynamic games with incomplete information. Perfect Bayesian equilibria are used to solve the outcome of games where players take turns but are unsure of the "type" of their opponent, which occurs when players don't know their opponent's preference between individual moves. A classic example of a dynamic game with types is a war game where the player is unsure whether their opponent is a risk-taking "hawk" type or a pacifistic "dove" type. Perfect Bayesian Equilibria are a refinement of Bayesian Nash equilibrium (BNE), which is a solution concept with Bayesian probability for non-turn-based games.
Any perfect Bayesian equilibrium has two components: strategies and beliefs.
The strategy of a player in a given information set specifies his choice of action in that information set, which may depend on the history (on actions taken previously in the game). This is similar to a sequential game.
The belief of a player in a given information set determines what node in that information set he believes the game has reached. The belief may be a probability distribution over the nodes in the information set, and is typically a probability distribution over the possible types of the other players. Formally, a belief system is an assignment of probabilities to every node in the game such that the sum of probabilities in any information set is 1.
The strategies and beliefs also must satisfy the following conditions:
Sequential rationality: each strategy should be optimal in expectation, given the beliefs.
Consistency: each belief should be updated according to the equilibrium strategies, the observed actions, and Bayes' rule on every path reached in equilibrium with positive probability. On paths of zero probability, known as off-equilibrium paths, the beliefs must be specified but can be arbitrary.
A perfect Bayesian equilibrium is always a Nash equilibrium.
Examples of perfect Bayesian equilibria
Gift game 1
Consider the following game:
The sender has two possible types: either a "friend" (with probability p) or an "enemy" (with probability 1 − p). Each type has two strategies: either give a gift, or not give.
The receiver has only one type, and two strategies: either accept the gift, or reject it.
The sender's utility is 1 if his gift is accepted, -1 if his gift is rejected, and 0 if he does not give any gift.
The receiver's utility depends on who gives the gift:
If the sender is a friend, then the receiver's utility is 1 (if he accepts) or 0 (if he rejects).
If the sender is an enemy, then the receiver's utility is -1 (if he accepts) or 0 (if he rejects).
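The receiver's sequentially rational choice follows from these payoffs: with belief q that the gift comes from a friend, accepting yields expected utility q·1 + (1 − q)·(−1), so accepting is optimal exactly when q ≥ 1/2. A tiny sketch of that best-response calculation (the function names are illustrative):

```python
def accept_utility(q):
    """Receiver's expected utility from accepting, given belief q
    that the sender is a friend: +1 vs friend, -1 vs enemy."""
    return q * 1 + (1 - q) * (-1)

def best_response(q):
    return "accept" if accept_utility(q) >= 0 else "reject"

print(best_response(0.7))  # accept: belief favors the friend type
print(best_response(0.3))  # reject
```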
For any value of p, Equilibrium 1 exists, a pooling equilibrium in which both types of sender choose the same action:
Equilibrium 1. Sender: Not
|
https://en.wikipedia.org/wiki/Weierstrass%20functions
|
In mathematics, the Weierstrass functions are special functions of a complex variable that are auxiliary to the Weierstrass elliptic function. They are named for Karl Weierstrass. The relation between the sigma, zeta, and ℘ functions is analogous to that between the sine, cotangent, and squared cosecant functions: the logarithmic derivative of the sine is the cotangent, whose derivative is negative the squared cosecant.
Weierstrass sigma function
The Weierstrass sigma function associated to a two-dimensional lattice Λ ⊂ ℂ is defined to be the product

σ(z; Λ) = z ∏_{w ∈ Λ∖{0}} (1 − z/w) exp(z/w + z²/(2w²)),

where Λ denotes {mω₁ + nω₂ : m, n ∈ ℤ} and ω₁, ω₂ are a fundamental pair of periods.
Through careful manipulation of the Weierstrass factorization theorem as it relates also to the sine function, another potentially more manageable infinite product definition is
for any with and where we have used the notation (see zeta function below).
Weierstrass zeta function
The Weierstrass zeta function is defined by the sum

ζ(z; Λ) = 1/z + Σ_{w ∈ Λ∖{0}} (1/(z − w) + 1/w + z/w²).

The Weierstrass zeta function is the logarithmic derivative of the sigma-function. The zeta function can be rewritten as:

ζ(z; Λ) = 1/z − Σ_{k=1}^∞ G_{2k+2}(Λ) z^{2k+1},

where G_{2k+2} is the Eisenstein series of weight 2k + 2.
The derivative of the zeta function is −℘(z), where ℘ is the Weierstrass elliptic function.
The Weierstrass zeta function should not be confused with the Riemann zeta function in number theory.
Weierstrass eta function
The Weierstrass eta function is defined to be

η(w; Λ) = ζ(z + w; Λ) − ζ(z; Λ), for any z ∈ ℂ and any w in the lattice Λ.
This is well-defined, i.e. only depends on the lattice vector w. The Weierstrass eta function should not be confused with either the Dedekind eta function or the Dirichlet eta function.
Weierstrass ℘-function
The Weierstrass ℘-function is related to the zeta function by ℘(z; Λ) = −ζ′(z; Λ).
The Weierstrass ℘-function is an even elliptic function of order N=2 with a double pole at each lattice point and no other poles.
Degenerate case
Consider the situation where one period is real, which we can scale to be and the other is taken to the limit of so that the functions are only singly-periodic. The corresponding invariants are of discriminant . Then we have and thus from the above infinite product definition the following equality:
A generalization for other sine-like functions on other doubly-periodic lattices is
Elliptic functions
Analytic functions
|
https://en.wikipedia.org/wiki/Greater%20Montreal
|
Greater Montreal () is the most populous metropolitan area in Quebec and the second most populous in Canada after Greater Toronto. In 2015, Statistics Canada identified Montreal's Census Metropolitan Area (CMA) as having a population of 4,027,100, almost half that of the province.
A smaller area is governed by the Montreal Metropolitan Community (MMC) (, CMM). This level of government is headed by a president (currently Montreal mayor Valérie Plante).
The inner ring is composed of densely populated municipalities located in close proximity to Downtown Montreal. It includes the entire Island of Montreal, Laval, and the Urban Agglomeration of Longueuil. Due to their proximity to Montreal's downtown core, some additional suburbs on the South Shore (Brossard, Saint-Lambert, and Boucherville) are usually included in the inner ring, despite their location on the mainland.
The outer ring is composed of low-density municipalities located on the fringe of Metropolitan Montreal. Most of these cities and towns are semi-rural. Specifically, the term off-island suburbs refers to those suburbs that are located on the North Shore of the Mille-Îles River, those on the South Shore that were never included in the megacity of Longueuil, and those on the Vaudreuil-Soulanges Peninsula.
Largest cities
Cities and towns
Only a portion of the municipalities and MRCs located in geographical entities highlighted in light gray are part of the CMM/CMA.
There are 82 municipalities that are part of the MMC and 91 municipalities that are part of the CMA.
There are 79 municipalities that overlap between the two, with 3 municipalities being part of the MMC but not the CMA, and 12 municipalities being part of the CMA but not the MMC.
Kanesatake and Kahnawake are not included in the previous counts.
Demographics
Ethnicity
Note: Totals greater than 100% due to multiple origin responses.
Language
Transportation
Exo operates the region's commuter rail and metropolitan bus services, and is the second busiest such system in Canada after Toronto's GO Transit. Established in June 2007, Exo's commuter rail system has six lines linking the downtown core with communities as far west as Hudson, as far south as Mont-Saint-Hilaire, as far east as Mascouche, and as far north as Saint-Jérôme.
Along with Exo, a sister agency, the Autorité régionale de transport métropolitain (ARTM) plans, integrates, and coordinates public transport across Greater Montreal, including the Island of Montreal, Laval (Île Jésus), and communities along both the north shore of the Rivière des Mille-Îles and the south shore of the Saint Lawrence River. The ARTM's mandate also includes the management of reserved High-occupancy vehicle lanes, metropolitan bus terminuses, park-and-ride lots, and a budget of $163 million, which is shared amongst the transit corporations and inter-municipal public transit organizations.
The Exo/ARTM's territory spans 63 municipalities and one native reserve, 13 regional co
|
https://en.wikipedia.org/wiki/Index%20%28economics%29
|
In statistics, economics, and finance, an index is a statistical measure of change in a representative group of individual data points. These data may be derived from any number of sources, including company performance, prices, productivity, and employment. Economic indices track economic health from different perspectives. Examples include the consumer price index, which measures changes in retail prices paid by consumers, and the cost-of-living index (COLI), which measures the relative cost of living over time.
Influential global financial indices such as the Global Dow, and the NASDAQ Composite track the performance of selected large and powerful companies in order to evaluate and predict economic trends.
The Dow Jones Industrial Average and the S&P 500 primarily track U.S. markets, though some legacy international companies are included. The consumer price index tracks the variation in prices for different consumer goods and services over time in a constant geographical location and is integral to calculations used to adjust salaries, bond interest rates, and tax thresholds for inflation.
The GDP Deflator Index, or real GDP, measures the level of prices of all-new, domestically produced, final goods and services in an economy. Market performance indices include the labour market index/job index and proprietary stock market index investment instruments offered by brokerage houses.
Some indices display market variations. For example, the Economist provides a Big Mac Index that expresses the adjusted cost of a globally ubiquitous Big Mac as a percentage over or under the cost of a Big Mac in the U.S. in USD. Such indices can be used to help forecast currency values.
Index numbers
An index number is an economic data figure reflecting price or quantity compared with a standard or base value. The base usually equals 100 and the index number is usually expressed as 100 times the ratio to the base value. For example, if a commodity costs twice as much in 1970 as it did in 1960, its index number would be 200 relative to 1960. Index numbers are used especially to compare business activity, the cost of living, and employment. They enable economists to reduce unwieldy business data into easily understood terms.
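The arithmetic in the commodity example above is just a ratio scaled so the base period equals 100:

```python
def index_number(value, base_value, base=100.0):
    """Index of a current value relative to a base-period value (base = 100)."""
    return base * value / base_value

# A commodity costing twice as much in 1970 as it did in 1960:
print(index_number(2.0, 1.0))  # 200.0, i.e. an index of 200 relative to 1960
```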
In contrast to a cost-of-living index based on the true but unknown utility function, a superlative index number is an index number that can be calculated. Thus, superlative index numbers are used to provide a fairly close approximation to the underlying cost-of-living index number in a wide range of circumstances.
Some indexes are not time series. Spatial indexes summarize real estate prices, or toxins in the environment, or availability of services, across geographic locations. Indexes may also be used to summarize comparisons between distributions of data within categories. For example, purchasing power parity comparisons of currencies are often constructed with indexes.
There is a substantial body of economic analysis concerning t
|
https://en.wikipedia.org/wiki/Peter%20Jones%20%28mathematician%29
|
Peter Wilcox Jones (born 1952) is a mathematician at Yale University, known for his work in harmonic analysis and fractal geometry. He received his Ph.D. at the University of California, Los Angeles in 1978, under the supervision of John B. Garnett. He received the Salem Prize in 1981. He is an elected member of the U.S. National Academy of Sciences (2008), the Royal Swedish Academy of Sciences (2008), and the American Academy of Arts and Sciences (1998). He is not related to the mathematician Vaughan Jones.
References
External links
Faculty page at Yale
1952 births
Living people
20th-century American mathematicians
21st-century American mathematicians
University of California, Los Angeles alumni
Yale University faculty
Members of the United States National Academy of Sciences
|
https://en.wikipedia.org/wiki/Steric%20factor
|
The steric factor, usually denoted ρ, is a quantity used in collision theory.
Also called the probability factor, the steric factor is defined as the ratio between the experimental value of the rate constant and the one predicted by collision theory. It can also be defined as the ratio between the pre-exponential factor and the collision frequency, and it is most often less than unity. Physically, the steric factor can be interpreted as the ratio of the cross section for reactive collisions to the total collision cross section.
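As a sketch of the second definition (the numeric values below are purely illustrative, not measurements for any real reaction):

```python
def steric_factor(a_exp, z_collision):
    """rho = experimental pre-exponential factor / collision frequency factor."""
    return a_exp / z_collision

# Illustrative magnitudes only: a pre-exponential factor much smaller than
# the collision frequency implies few collisions have the right geometry.
rho = steric_factor(a_exp=2.0e9, z_collision=1.0e11)
print(rho)  # 0.02: roughly 2% of collisions would be reactive
```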
Usually, the more complex the reactant molecules, the lower the steric factors. Nevertheless, some reactions exhibit steric factors greater than unity: the harpoon reactions, which involve atoms that exchange electrons, producing ions. The deviation from unity can have different causes: the molecules are not spherical, so different geometries are possible; not all the kinetic energy is delivered into the right spot; the presence of a solvent (when applied to solutions); and so on.
When collision theory is applied to reactions in solution, the solvent cage has an effect on the reactant molecules, as several collisions can take place in a single encounter, which leads to predicted preexponential factors being too large. ρ values greater than unity can be attributed to favorable entropic contributions.
Usually there is no simple way to accurately estimate steric factors without performing trajectory or scattering calculations. The pre-exponential factor, for its part, is more commonly known as the frequency factor.
Notes
Chemical kinetics
Physical chemistry
|
https://en.wikipedia.org/wiki/Checking%20whether%20a%20coin%20is%20fair
|
In statistics, the question of checking whether a coin is fair is one whose importance lies, firstly, in providing a simple problem on which to illustrate basic ideas of statistical inference and, secondly, in providing a simple problem that can be used to compare various competing methods of statistical inference, including decision theory. The practical problem of checking whether a coin is fair might be considered as easily solved by performing a sufficiently large number of trials, but statistics and probability theory can provide guidance on two types of question; specifically those of how many trials to undertake and of the accuracy of an estimate of the probability of turning up heads, derived from a given sample of trials.
A fair coin is an idealized randomizing device with two states (usually named "heads" and "tails") which are equally likely to occur. It is based on the coin flip used widely in sports and other situations where it is required to give two parties the same chance of winning. Either a specially designed chip or more usually a simple currency coin is used, although the latter might be slightly "unfair" due to an asymmetrical weight distribution, which might cause one state to occur more frequently than the other, giving one party an unfair advantage. So it might be necessary to test experimentally whether the coin is in fact "fair" – that is, whether the probability of the coin's falling on either side when it is tossed is exactly 50%. It is of course impossible to rule out arbitrarily small deviations from fairness such as might be expected to affect only one flip in a lifetime of flipping; also it is always possible for an unfair (or "biased") coin to happen to turn up exactly 10 heads in 20 flips. Therefore, any fairness test must only establish a certain degree of confidence in a certain degree of fairness (a certain maximum bias). In more rigorous terminology, the problem is of determining the parameters of a Bernoulli process, given only a limited sample of Bernoulli trials.
Preamble
This article describes experimental procedures for determining whether a coin is fair or unfair. There are many statistical methods for analyzing such an experimental procedure. This article illustrates two of them.
Both methods prescribe an experiment (or trial) in which the coin is tossed many times and the result of each toss is recorded. The results can then be analysed statistically to decide whether the coin is "fair" or "probably not fair".
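As one minimal illustration of such a statistical analysis, an exact binomial test of the null hypothesis P(heads) = 0.5 can be run on a recorded trial (the counts below are invented; the article itself develops other approaches):

```python
from scipy import stats

# Suppose a trial of 100 tosses produced 58 heads.
result = stats.binomtest(k=58, n=100, p=0.5)
print(result.pvalue)  # ≈ 0.13: not enough evidence to call the coin unfair
```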
Posterior probability density function, or PDF (Bayesian approach). Initially, the true probability of obtaining a particular side when a coin is tossed is unknown, but the uncertainty is represented by the "prior distribution". The theory of Bayesian inference is used to derive the posterior distribution by combining the prior distribution and the likelihood function which represents the information obtained from the experiment. The probability that this particular coin is a "fair coin
|
https://en.wikipedia.org/wiki/Infinity%20symbol
|
The infinity symbol (∞) is a mathematical symbol representing the concept of infinity. This symbol is also called a lemniscate, after the lemniscate curves of a similar shape studied in algebraic geometry, or "lazy eight", in the terminology of livestock branding.
This symbol was first used mathematically by John Wallis in the 17th century, although it has a longer history of other uses. In mathematics, it often refers to infinite processes (potential infinity) rather than infinite values (actual infinity). It has other related technical meanings, such as the use of long-lasting paper in bookbinding, and has been used for its symbolic value of the infinite in modern mysticism and literature. It is a common element of graphic design, for instance in corporate logos as well as in older designs such as the Métis flag.
Both the infinity symbol itself and several variations of the symbol are available in various character encodings.
History
The lemniscate has been a common decorative motif since ancient times; for instance it is commonly seen on Viking Age combs.
The English mathematician John Wallis is credited with introducing the infinity symbol with its mathematical meaning in 1655, in his De sectionibus conicis. Wallis did not explain his choice of this symbol. It has been conjectured to be a variant form of a Roman numeral, but which Roman numeral is unclear. One theory proposes that the infinity symbol was based on the numeral for 100 million, which resembled the same symbol enclosed within a rectangular frame. Another proposes instead that it was based on the notation CIↃ used to represent 1,000. Instead of a Roman numeral, it may alternatively be derived from a variant of the lower-case form of omega, the last letter in the Greek alphabet.
Perhaps in some cases because of typographic limitations, other symbols resembling the infinity sign have been used for the same meaning. Leonhard Euler used an open letterform more closely resembling a reflected and sideways S than a lemniscate, and even has been used as a stand-in for the infinity symbol itself.
Usage
Mathematics
In mathematics, the infinity symbol is used more often to represent a potential infinity, rather than an actually infinite quantity as included in the cardinal numbers and the ordinal numbers (which use other notations, such as and ω, for infinite values). For instance, in mathematical expressions with summations and limits such as
the infinity sign is conventionally interpreted as meaning that the variable grows arbitrarily large towards infinity, rather than actually taking an infinite value, although other interpretations are possible.
The infinity symbol may also be used to represent a point at infinity, especially when there is only one such point under consideration. This usage includes, in particular, the infinite point of a projective line, and the point added to a topological space to form its one-point compactification.
Other technical uses
In areas other tha
|
https://en.wikipedia.org/wiki/Fundamental%20pair%20of%20periods
|
In mathematics, a fundamental pair of periods is an ordered pair of complex numbers that defines a lattice in the complex plane. This type of lattice is the underlying object with which elliptic functions and modular forms are defined.
Definition
A fundamental pair of periods is a pair of complex numbers ω₁, ω₂ such that their ratio ω₂/ω₁ is not real. If considered as vectors in ℝ², the two are not collinear. The lattice generated by ω₁ and ω₂ is
This lattice is also sometimes denoted as Λ(ω₁, ω₂) to make clear that it depends on ω₁ and ω₂. The two generators ω₁ and ω₂ are called the lattice basis. The parallelogram with vertices 0, ω₁, ω₁ + ω₂, ω₂ is called the fundamental parallelogram.
While a fundamental pair generates a lattice, a lattice does not have any unique fundamental pair; in fact, an infinite number of fundamental pairs correspond to the same lattice.
Algebraic properties
A number of properties, listed below, can be seen.
Equivalence
Two pairs of complex numbers (ω₁, ω₂) and (α₁, α₂) are called equivalent if they generate the same lattice: that is, if
No interior points
The fundamental parallelogram contains no further lattice points in its interior or boundary. Conversely, any pair of lattice points with this property constitute a fundamental pair, and furthermore, they generate the same lattice.
Modular symmetry
Two pairs (ω₁, ω₂) and (α₁, α₂) are equivalent if and only if there exists a 2 × 2 matrix with integer entries a, b, c, d and determinant ad − bc = ±1 such that
that is, so that
This matrix belongs to the modular group This equivalence of lattices can be thought of as underlying many of the properties of elliptic functions (especially the Weierstrass elliptic function) and modular forms.
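Numerically, this gives a simple equivalence test for two bases: compute the change-of-basis matrix between them and check that it is integral with determinant ±1. The sketch below, assuming NumPy, treats each period as a vector in ℝ²; the function name and tolerance are illustrative choices:

```python
import numpy as np

def same_lattice(w1, w2, a1, a2, tol=1e-9):
    """Return True if bases (w1, w2) and (a1, a2) generate the same lattice,
    i.e. if (a1, a2) = M (w1, w2) for an integer matrix M with det M = +/-1."""
    W = np.array([[w1.real, w2.real], [w1.imag, w2.imag]])
    A = np.array([[a1.real, a2.real], [a1.imag, a2.imag]])
    M = A @ np.linalg.inv(W)                 # change-of-basis matrix
    M_int = np.round(M)
    integral = np.allclose(M, M_int, atol=tol)
    unimodular = np.isclose(abs(np.linalg.det(M_int)), 1.0)
    return bool(integral and unimodular)

# (1, i) and (1, 1 + i) both generate the Gaussian integers:
print(same_lattice(1 + 0j, 1j, 1 + 0j, 1 + 1j))   # True
# (2, i) generates a strictly coarser lattice:
print(same_lattice(1 + 0j, 1j, 2 + 0j, 1j))       # False
```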
Topological properties
The abelian group ℤ² maps the complex plane into the fundamental parallelogram. That is, every point z can be written as z = p + mω₁ + nω₂ for integers m, n, with p a point in the fundamental parallelogram.
Since this mapping identifies opposite sides of the parallelogram as being the same, the fundamental parallelogram has the topology of a torus. Equivalently, one says that the quotient manifold ℂ/Λ is a torus.
Fundamental region
Define τ = ω₂/ω₁ to be the half-period ratio. Then the lattice basis can always be chosen so that τ lies in a special region, called the fundamental domain. Alternately, there always exists an element of the projective special linear group that maps a lattice basis to another basis so that τ lies in the fundamental domain.
The fundamental domain is given by the set which is composed of a set plus a part of the boundary of
where ℍ is the upper half-plane.
The fundamental domain is then built by adding the boundary on the left plus half the arc on the bottom:
Three cases pertain:
If and , then there are exactly two lattice bases with the same in the fundamental region: and
If , then four lattice bases have the same the above two , and ,
If , then there are six lattice bases with the same , , and their negatives.
In the closure of the
|
https://en.wikipedia.org/wiki/European%20Union%20statistics
|
Statistics in the European Union are collected by Eurostat (European statistics body).
Area and population
As of 1 January 2006, the population of the EU was about 493 million people, although in 2020 the EU lost over 10% of its population as a result of the UK leaving the bloc. Many countries are expected to experience a decline in population over the coming decades, though this could be offset with new countries planning to join the EU within the next 20 years. The most populous member state is Germany, with an estimated 80.4 million people. France and Ireland have the highest birth-rates. The most densely populated country is the island of Malta, which is also the smallest, while the largest in area is France. The least densely populated country is Finland.
Population figures in the table below are from 2006 or 2007 estimates. The highest and lowest figures in each column have been marked in bold.
Economy
For statistics relating to economy, please see Economy of the European Union.
EU budget
The primary resource for funding the European Union is the contributions sought from member states. Each member state contributes to the EU budget, and receives funding back from the EU, depending on the relative wealth of the states, i.e. their ability to pay.
The table below shows the contributions as a percentage of the total budget. This takes into account the special considerations given to the United Kingdom to reduce its contribution through a rebate. Expenditure in Luxembourg, Belgium and France include items for the EU administrative centres in each of those countries.
Economic and governance-related rankings
There are many indices available on issues such as corruption, development, and freedom. The rankings below include all EU member states, EU candidates (with the exception of Turkey, as its accession negotiations have stalled since 2016) and EFTA countries.
Freedom of the press
Reporters sans frontières (Reporters Without Borders) conducts an annual survey on the freedom of the press and produces scores (not shown here) for each country, resulting in the Press Freedom Index. In 2019 and 2020, Finland was proclaimed as having the freest press in the European Union, and the second in the world behind Norway. Bulgaria was ranked as having the least free press in the European Union in 2019 and 2020.
Economic freedom
The Index of Economic Freedom, published by The Wall Street Journal and The Heritage Foundation, uses 50 different variables to compile the survey, in areas such as trade policy and government intervention.
A similar index produced by the World Economic Forum is its Global Competitiveness Index.
Perception of corruption
Transparency International is an international NGO publishing an annual Global Corruption Report indicating the perception of corruption
around the world. The rankings of the table refer to the Corruption Perceptions Index 2022. A high ranking means low corruption.
Human development
The Human Dev
|
https://en.wikipedia.org/wiki/Fractional%20Brownian%20motion
|
In probability theory, fractional Brownian motion (fBm), also called a fractal Brownian motion, is a generalization of Brownian motion. Unlike classical Brownian motion, the increments of fBm need not be independent. fBm is a continuous-time Gaussian process B_H(t) on [0, T], that starts at zero, has expectation zero for all t in [0, T], and has the following covariance function:
where H is a real number in (0, 1), called the Hurst index or Hurst parameter associated with the fractional Brownian motion. The Hurst exponent describes the raggedness of the resultant motion, with a higher value leading to a smoother motion. It was introduced by Mandelbrot and Van Ness (1968).
The value of H determines what kind of process the fBm is:
if H = 1/2 then the process is in fact a Brownian motion or Wiener process;
if H > 1/2 then the increments of the process are positively correlated;
if H < 1/2 then the increments of the process are negatively correlated.
Fractional Brownian motion has stationary increments X(t) = B_H(s + t) − B_H(s) (the distribution of X(t) is the same for any s). The increment process X(t) is known as fractional Gaussian noise.
There is also a generalization of fractional Brownian motion: n-th order fractional Brownian motion, abbreviated as n-fBm. n-fBm is a Gaussian, self-similar, non-stationary process whose increments of order n are stationary. For n = 1, n-fBm is classical fBm.
Like the Brownian motion that it generalizes, fractional Brownian motion is named after 19th century biologist Robert Brown; fractional Gaussian noise is named after mathematician Carl Friedrich Gauss.
Background and definition
Prior to the introduction of the fractional Brownian motion, Lévy used the Riemann–Liouville fractional integral to define the process
where integration is with respect to the white noise measure dB(s). This integral turns out to be ill-suited as a definition of fractional Brownian motion because of its over-emphasis of the origin . It does not have stationary increments.
The idea instead is to use a different fractional integral of white noise to define the process: the Weyl integral
for t > 0 (and similarly for t < 0).
The resulting process has stationary increments.
The main difference between fractional Brownian motion and regular Brownian motion is that while the increments in Brownian Motion are independent, increments for fractional Brownian motion are not. If H > 1/2, then there is positive autocorrelation: if there is an increasing pattern in the previous steps, then it is likely that the current step will be increasing as well. If H < 1/2, the autocorrelation is negative.
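The covariance function above suffices to simulate fBm exactly on a finite grid: form the covariance matrix of the process at the grid times and multiply its Cholesky factor by a standard normal vector. A sketch assuming NumPy; the grid size, horizon, and seed are arbitrary illustrative choices:

```python
import numpy as np

def fbm_sample(n, H, T=1.0, seed=0):
    """One exact fBm sample path at n grid points on (0, T], prepended with
    B_H(0) = 0, using E[B_H(t)B_H(s)] = (t^2H + s^2H - |t - s|^2H) / 2."""
    t = np.linspace(T / n, T, n)                  # strictly positive grid
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov)                   # cov is positive definite
    z = np.random.default_rng(seed).standard_normal(n)
    return np.concatenate(([0.0], L @ z))

path = fbm_sample(256, H=0.7)
print(path.shape)   # (257,)
```

H = 0.5 reproduces ordinary Brownian motion; H above 0.5 yields visibly smoother paths, in line with the positive autocorrelation of the increments.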
Properties
Self-similarity
The process is self-similar, since in terms of probability distributions:
This property is due to the fact that the covariance function is homogeneous of order 2H and can be considered as a fractal property. FBm can also be defined as the unique mean-zero Gaussian process, null
at the origin, with stationary and self-similar increments.
Stationary increments
It has statio
|
https://en.wikipedia.org/wiki/Vi%C3%A8te%27s%20formula
|
In mathematics, Viète's formula is the following infinite product of nested radicals representing twice the reciprocal of the mathematical constant π:
It can also be represented as:
The formula is named after François Viète, who published it in 1593. As the first formula of European mathematics to represent an infinite process, it can be given a rigorous meaning as a limit expression, and marks the beginning of mathematical analysis. It has linear convergence, and can be used for calculations of π, but other methods before and since have led to greater accuracy. It has also been used in calculations of the behavior of systems of springs and masses, and as a motivating example for the concept of statistical independence.
The formula can be derived as a telescoping product of either the areas or perimeters of nested polygons converging to a circle. Alternatively, repeated use of the half-angle formula from trigonometry leads to a generalized formula, discovered by Leonhard Euler, that has Viète's formula as a special case. Many similar formulas involving nested roots or infinite products are now known.
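The nested radicals can be evaluated term by term: with a₁ = √2 and a_{k+1} = √(2 + a_k), the partial products of a_k/2 approach 2/π, and the linear convergence mentioned above shows up as the error shrinking by roughly a constant factor per extra term. A minimal sketch; the function name is illustrative:

```python
import math

def viete_pi(n_terms):
    """Approximate pi from Viete's product 2/pi = prod_k (a_k / 2),
    where a_1 = sqrt(2) and a_{k+1} = sqrt(2 + a_k)."""
    a = math.sqrt(2.0)
    product = a / 2.0
    for _ in range(n_terms - 1):
        a = math.sqrt(2.0 + a)
        product *= a / 2.0
    return 2.0 / product

for k in (5, 10, 20, 30):
    print(k, viete_pi(k), abs(viete_pi(k) - math.pi))
```

Around thirty terms already exhaust double-precision accuracy, consistent with the observation that far better methods exist for high-precision computation.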
Significance
François Viète (1540–1603) was a French lawyer, privy councillor to two French kings, and amateur mathematician. He published this formula in 1593 in his work Variorum de rebus mathematicis responsorum, liber VIII. At this time, methods for approximating π to (in principle) arbitrary accuracy had long been known. Viète's own method can be interpreted as a variation of an idea of Archimedes of approximating the circumference of a circle by the perimeter of a many-sided polygon, used by Archimedes to find the approximation
By publishing his method as a mathematical formula, Viète formulated the first instance of an infinite product known in mathematics, and the first example of an explicit formula for the exact value of π. As the first representation in European mathematics of a number as the result of an infinite process rather than of a finite calculation, Eli Maor highlights Viète's formula as marking the beginning of mathematical analysis and Jonathan Borwein calls its appearance "the dawn of modern mathematics".
Using his formula, Viète calculated π to an accuracy of nine decimal digits. However, this was not the most accurate approximation to π known at the time, as the Persian mathematician Jamshīd al-Kāshī had calculated 2π to an accuracy of nine sexagesimal digits and 16 decimal digits in 1424. Not long after Viète published his formula, Ludolph van Ceulen used a method closely related to Viète's to calculate 35 digits of π, which were published only after van Ceulen's death in 1610.
Beyond its mathematical and historical significance, Viète's formula can be used to explain the different speeds of waves of different frequencies in an infinite chain of springs and masses, and the appearance of π in the limiting behavior of these speeds. Additionally, a derivation of this formula as a product of integrals involving the Rademacher syste
|
https://en.wikipedia.org/wiki/Thomas%20Tooke
|
Thomas Tooke (28 February 1774 – 26 February 1858) was an English economist known for writing on money and economic statistics. After Tooke's death the Statistical Society endowed the Tooke Chair of economics at King's College London, and a Tooke Prize.
In business, he served several terms between 1840 and 1852 as governor of the Royal Exchange Corporation. Likewise, he served for several terms as chairman of the St Katharine's Docks company. He was also an early director of the London and Birmingham Railway.
Life
Born at Kronstadt on 28 February 1774, he was the eldest son of William Tooke, at that time chaplain to the British factory there. Thomas began his professional life at the age of fifteen in a house of business at St Petersburg, and subsequently became a partner in the London firms of Stephen Thornton & Co., and Astell, Tooke, & Thornton.
He took no serious part in discussion of economic questions until 1819, when he gave evidence before committees of both Houses of Parliament on the resumption of cash payments by the Bank of England. Tooke was one of the earliest supporters of the free trade movement, which took shape in the petition of the merchants of the City of London presented to the House of Commons by Alexander Baring on 8 May 1820. This document was drawn up by Tooke; and the circumstances which led to its preparation are described in the sixth volume of his History of Prices. Lord Liverpool's government, especially through William Huskisson after 1828, moved in the direction sought.
It was to support the principles of the merchants' petition that Tooke, with David Ricardo, Robert Malthus, James Mill, and others, founded the Political Economy Club in April 1821. From the beginning Tooke took part in its discussions, and continued to attend its meetings to the end of his life.
Out of controversy over paper money emerged the Bank Charter Act 1844, the main object of which was to prevent the over-issue of notes. Tooke was opposed to the provisions of the act. He thought that by some changes in the management of the Bank of England, coupled with the compulsory maintenance of a much larger reserve of bullion, more satisfactory results would be achieved.
Besides giving evidence on economic questions before several parliamentary committees, such as those of 1821 on agricultural depression and on foreign trade, of 1832, 1840, and 1848 on the Bank Acts, Tooke was a member of the factories inquiry commission of 1833. He retired from active business on his own account in 1836, but was governor of the Royal Exchange Assurance Corporation from 1840 to 1852, and was also chairman of the St. Katharine's Dock Company. He was elected a Fellow of the Royal Society in March 1821, and membre correspondant de l'Institut de France (Académie des Sciences Morales et Politiques) in February 1853. He resided in London at 12 Russell Square, then later in Richmond Terrace, and at 31 Spring Gardens, where he died on 26 February 1858. He is buri
|
https://en.wikipedia.org/wiki/Oskar%20Anderson
|
Oskar Johann Viktor Anderson (1887 – 12 February 1960) was a Russian-German mathematician of Baltic German descent. He is best known for his work on mathematical statistics and econometrics.
Life
Anderson was born from a Baltic German family in Minsk (now in Belarus), but soon moved to Kazan (Russia). His father, Nikolai Anderson, was professor in Finno-Ugric languages at the University of Kazan. His older brothers were the folklorist Walter Anderson and the astrophysicist Wilhelm Anderson.
Oskar Anderson graduated from Kazan Gymnasium with a gold medal in 1906. After studying mathematics for one year at the University of Kazan, he moved to St. Petersburg to study economics at the Polytechnic Institute. From 1907 to 1915, he was Aleksandr Chuprov's student and assistant. In 1912 he married Margarethe Natalie von Hindenburg-Hirtenberg, a granddaughter of who was commemorated in "The Funeral of 'The Universal Man'" in Dostoyevsky's A Writer's Diary, and started lecturing at a commercial school in St. Petersburg while also studying for a law degree at the University of Saint Petersburg, graduating in 1914.
In 1918 he took on a professorship in Kiev but he was forced to flee Russia in 1920 due to the Russian Revolution, first taking a post in Budapest (Hungary) before becoming a professor at the University of Economics at Varna (Bulgaria) in 1924.
Anderson was one of the charter members of the Econometric Society, whose members also elected him to be a fellow of the society in 1933. In the same year he also received a fellowship from the Rockefeller Foundation.
Supported by the foundation, in 1935 he established and became director of the Statistical Institute for Economic Research at the University of Sofia. For the remainder of the decade he also served the League of Nations as an associate member of its Committee of Statistical Experts.
In 1942 he joined the Kiel Institute for the World Economy as head of the Department of Eastern Studies and also took up a full professorship of statistics at the University of Kiel, where he was joined by his brother Walter after the end of the second world war. In 1947 he took a position at the University of Munich, teaching there until 1956, when he retired.
Writings
Einführung in die Mathematische Statistik, Wien : Springer-Verlag, 1935,
Über die repräsentative Methode und deren Anwendung auf die Aufarbeitung der Ergebnisse der bulgarischen landwirtschaftlichen Betriebszählung vom 31. Dezember 1926, München, 1949
Die Saisonschwankungen in der deutschen Stromproduktion vor und nach dem Kriege, München : Inst. f. Wirtschaftsforschung, 1950
External links
References/Further reading
1887 births
1960 deaths
Mathematicians from Kazan
Baltic German people from the Russian Empire
Mathematicians from the Russian Empire
German statisticians
Statisticians from the Russian Empire
20th-century German mathematicians
Kazan Federal University alumni
Peter the Great St. Petersburg Polytechnic Univers
|
https://en.wikipedia.org/wiki/Linear%20probability%20model
|
In statistics, a linear probability model (LPM) is a special case of a binary regression model. Here the dependent variable for each observation takes values which are either 0 or 1. The probability of observing a 0 or 1 in any one case is treated as depending on one or more explanatory variables. For the "linear probability model", this relationship is a particularly simple one, and allows the model to be fitted by linear regression.
The model assumes that, for a binary outcome (Bernoulli trial), , and its associated vector of explanatory variables, ,
For this model,
and hence the vector of parameters β can be estimated using least squares. This method of fitting would be inefficient, and can be improved by adopting an iterative scheme based on weighted least squares, in which the model from the previous iteration is used to supply estimates of the conditional variances, , which would vary between observations. This approach can be related to fitting the model by maximum likelihood.
A drawback of this model is that, unless restrictions are placed on , the estimated coefficients can imply probabilities outside the unit interval . For this reason, models such as the logit model or the probit model are more commonly used.
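The boundary problem is easy to exhibit in a simulation. The sketch below, assuming NumPy, fits the LPM by ordinary least squares to data generated from a logistic model and counts fitted "probabilities" falling outside [0, 1]; the data-generating process, names, and sample size are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.uniform(-3, 3, n)
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))   # logistic data-generating process
y = rng.binomial(1, p_true)

# Linear probability model: OLS of the 0/1 outcome on a constant and x.
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
p_hat = X @ beta_hat

print("estimated (intercept, slope):", beta_hat)
print("fitted values outside [0, 1]:", int(np.sum((p_hat < 0) | (p_hat > 1))))
```

Because the fitted line is unbounded while the true response curve flattens near 0 and 1, observations with extreme regressor values receive fitted probabilities below 0 or above 1, which is the drawback noted above.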
Latent-variable formulation
More formally, the LPM can arise from a latent-variable formulation (usually to be found in the econometrics literature ), as follows: assume the following regression model with a latent (unobservable) dependent variable:
The critical assumption here is that the error term of this regression is a uniform random variable symmetric around zero, and hence has mean zero. The cumulative distribution function of here is
Define the indicator variable if , and zero otherwise, and consider the conditional probability
But this is the Linear Probability Model,
with the mapping
This method is a general device to obtain a conditional probability model of a binary variable: if we assume that the distribution of the error term is logistic, we obtain the logit model; if we assume that it is normal, we obtain the probit model; and if we assume that it is the logarithm of a Weibull distribution, the complementary log-log model.
See also
Linear approximation
References
Further reading
Horrace, William C., and Ronald L. Oaxaca. "Results on the Bias and Inconsistency of Ordinary Least Squares for the Linear Probability Model." Economics Letters, 2006: Vol. 90, P. 321–327
Generalized linear models
|
https://en.wikipedia.org/wiki/Probit%20model
|
In statistics, a probit model is a type of regression where the dependent variable can take only two values, for example married or not married. The word is a portmanteau, coming from probability + unit. The purpose of the model is to estimate the probability that an observation with particular characteristics will fall into a specific one of the categories; moreover, classifying observations based on their predicted probabilities yields a type of binary classification model.
A probit model is a popular specification for a binary response model. As such it treats the same set of problems as does logistic regression using similar techniques. When viewed in the generalized linear model framework, the probit model employs a probit link function. It is most often estimated using the maximum likelihood procedure, such an estimation being called a probit regression.
Conceptual framework
Suppose a response variable Y is binary, that is it can have only two possible outcomes which we will denote as 1 and 0. For example, Y may represent presence/absence of a certain condition, success/failure of some device, answer yes/no on a survey, etc. We also have a vector of regressors X, which are assumed to influence the outcome Y. Specifically, we assume that the model takes the form
where P is the probability and is the cumulative distribution function of the standard normal distribution. The parameters β are typically estimated by maximum likelihood.
It is possible to motivate the probit model as a latent variable model. Suppose there exists an auxiliary random variable
where ε ~ N(0, 1). Then Y can be viewed as an indicator for whether this latent variable is positive:
The use of the standard normal distribution causes no loss of generality compared with the use of a normal distribution with an arbitrary mean and standard deviation, because adding a fixed amount to the mean can be compensated by subtracting the same amount from the intercept, and multiplying the standard deviation by a fixed amount can be compensated by multiplying the weights by the same amount.
To see that the two models are equivalent, note that
Model estimation
Maximum likelihood estimation
Suppose data set contains n independent statistical units corresponding to the model above.
For the single observation, conditional on the vector of inputs of that observation, we have:
where is a vector of inputs, and is a vector of coefficients.
The likelihood of a single observation is then
In fact, if , then , and if , then .
Since the observations are independent and identically distributed, then the likelihood of the entire sample, or the joint likelihood, will be equal to the product of the likelihoods of the single observations:
The joint log-likelihood function is thus
The estimator which maximizes this function will be consistent, asymptotically normal and efficient provided that exists and is not singular. It can be shown that this log-likelihood function is global
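The estimation just described can be sketched end to end: simulate from the latent-variable model and maximize the log-likelihood numerically. This assumes SciPy is available; the design, sample size, and true coefficients are arbitrary illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.3, 1.0])
# Latent-variable form: y = 1 exactly when x'beta + eps > 0, eps ~ N(0, 1).
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

def neg_loglik(b):
    # -log L = -sum[ y log Phi(x'b) + (1 - y) log(1 - Phi(x'b)) ]
    p = np.clip(norm.cdf(X @ b), 1e-12, 1 - 1e-12)   # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print(res.x)   # close to beta_true for a sample this large
```

In practice one would use a packaged routine that also reports standard errors, but the hand-rolled objective makes the likelihood structure explicit.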
|
https://en.wikipedia.org/wiki/Law%20of%20total%20cumulance
|
In probability theory and mathematical statistics, the law of total cumulance is a generalization to cumulants of the law of total probability, the law of total expectation, and the law of total variance. It has applications in the analysis of time series. It was introduced by David Brillinger.
It is most transparent when stated in its most general form, for joint cumulants, rather than for cumulants of a specified order for just one random variable. In general, we have
where
κ(X1, ..., Xn) is the joint cumulant of n random variables X1, ..., Xn, and
the sum is over all partitions π of the set { 1, ..., n } of indices, and
"B ∈ π" means B runs through the whole list of "blocks" of the partition π, and
κ(Xi : i ∈ B | Y) is a conditional cumulant given the value of the random variable Y. It is therefore a random variable in its own right—a function of the random variable Y.
Examples
The special case of just one random variable and n = 2 or 3
Only for n = 2 or 3 is the nth cumulant the same as the nth central moment. The case n = 2 is well known (see law of total variance). Below is the case n = 3. The notation μ3 means the third central moment.
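The n = 2 case can be checked by simulation against the decomposition Var(X) = E[Var(X | Y)] + Var(E[X | Y]). The sketch below, assuming NumPy, conditions on a Bernoulli variable Y; all parameters are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
p = 0.3
y = rng.binomial(1, p, n)
# X | Y = 1 ~ N(2, 1)  and  X | Y = 0 ~ N(0, 4)
x = np.where(y == 1, rng.normal(2.0, 1.0, n), rng.normal(0.0, 2.0, n))

# Law of total variance (the n = 2 case of the law of total cumulance):
# Var(X) = E[Var(X|Y)] + Var(E[X|Y])
empirical = x.var()
exact = p * 1.0 + (1 - p) * 4.0 + p * (1 - p) * (2.0 - 0.0) ** 2
print(empirical, exact)   # both close to 3.94
```

The first two terms of `exact` average the conditional variances; the third is the variance of the conditional mean, which for a Bernoulli Y reduces to p(1 − p) times the squared difference of the two conditional means.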
General 4th-order joint cumulants
For general 4th-order cumulants, the rule gives a sum of 15 terms, as follows:
Cumulants of compound Poisson random variables
Suppose Y has a Poisson distribution with expected value λ, and X is the sum of Y copies of W that are independent of each other and of Y.
All of the cumulants of the Poisson distribution are equal to each other, and so in this case are equal to λ. Also recall that if random variables W1, ..., Wm are independent, then the nth cumulant is additive:
We will find the 4th cumulant of X. We have:
We recognize the last sum as the sum over all partitions of the set { 1, 2, 3, 4 }, of the product over all blocks of the partition, of cumulants of W of order equal to the size of the block. That is precisely the 4th raw moment of W (see cumulant for a more leisurely discussion of this fact). Hence the cumulants of X are the moments of W multiplied by λ.
In this way we see that every moment sequence is also a cumulant sequence (the converse cannot be true, since cumulants of even order ≥ 4 are in some cases negative, and also because the cumulant sequence of the normal distribution is not a moment sequence of any probability distribution).
Conditioning on a Bernoulli random variable
Suppose Y = 1 with probability p and Y = 0 with probability q = 1 − p. Suppose the conditional probability distribution of X given Y is F if Y = 1 and G if Y = 0. Then we have
where the sum is over all partitions π of the set { 1, ..., n } that are finer than the coarsest partition; that is, over all partitions except the one-block partition itself. For example, if n = 3, then we have
References
Algebra of random variables
Theory of probability distributions
Theorems in statistics
Statistical laws
|
https://en.wikipedia.org/wiki/Reinhold%20Baer
|
Reinhold Baer (22 July 1902 – 22 October 1979) was a German mathematician, known for his work in algebra. He introduced injective modules in 1940. He is the eponym of Baer rings and Baer groups.
Biography
Baer studied mechanical engineering for a year at Leibniz University Hannover. He then went to study philosophy at Freiburg in 1921. While he was at Göttingen in 1922 he was influenced by Emmy Noether and Hellmuth Kneser. In 1924 he won a scholarship for specially gifted students. Baer wrote up his doctoral dissertation and it was published in Crelle's Journal in 1927.
Baer accepted a post at Halle in 1928. There, he published Ernst Steinitz's "Algebraische Theorie der Körper" with Helmut Hasse, first published in Crelle's Journal in 1910.
While Baer was with his wife in Austria, Adolf Hitler and the Nazis came into power. Both of Baer's parents were Jewish, and he was for this reason informed that his services at Halle were no longer required. Louis Mordell invited him to go to Manchester and Baer accepted.
Baer stayed at Princeton University and was a visiting scholar at the nearby Institute for Advanced Study from 1935 to 1937. For a short while he lived in North Carolina. From 1938 to 1956 he worked at the University of Illinois at Urbana-Champaign. He returned to Germany in 1956.
According to biographer K. W. Gruenberg,
The rapid development of lattice theory in the mid-thirties suggested that projective geometry should be viewed as a special kind of lattice, the lattice of all subspaces of a vector space... [Linear Algebra and Projective Geometry (1952)] is an account of the representation of vector spaces over division rings, of projectivities by semi-linear transformations and of dualities by semi-bilinear forms.
He died of heart failure on 22 October 1979.
In 2016 the Reinhold Baer Prize for the best Ph.D. thesis in group theory was set up in his honour.
Bibliography
1934: "Erweiterung von Gruppen und ihren Isomorphismen", Mathematische Zeitschrift 38(1): 375–416 (German)
1940: "Nilpotent groups and their generalizations", Transactions of the American Mathematical Society 47: 393–434
1944: "The higher commutator subgroups of a group", Bulletin of the American Mathematical Society 50: 143–160
1945: "Representations of groups as quotient groups. II. Minimal central chains of a group", Transactions of the American Mathematical Society 58: 348–389
1945: "Representations of groups as quotient groups. III. Invariants of classes of related representations", Transactions of the American Mathematical Society 58: 390–419
See also
Capable group
Dedekind group
Retract (group theory)
Radical of a ring
Semiprime ring
Nielsen-Schreier theorem
References
O. H. Kegel (1979) "Reinhold Baer (1902–1979)", Mathematical Intelligencer 2: 181–182.
External links
K.W. Gruenberg & Derek Robinson (2003) The Mathematical Legacy of Reinhold Baer, Illinois Journal of Mathematics 47(1-2) from Project Euclid.
Author profile in the d
|
https://en.wikipedia.org/wiki/2001%20Canadian%20census
|
The 2001 Canadian census was a detailed enumeration of the Canadian population. Census day was May 15, 2001. On that day, Statistics Canada attempted to count every person in Canada. The total population count of Canada was 30,007,094. This was a 4% increase over the 1996 census count of 28,846,761. In contrast, the official Statistics Canada population estimate for 2001 was 31,021,300. This is considered a more accurate population number than the actual count.
The previous census was the 1996 census and the following census was the 2006 census.
Canada by the numbers
A summary of information about Canada.
Census summary
Canada experienced one of the smallest census-to-census population growth rates in its history. From 1996 to 2001, the nation's population increased only 4.0%. The census counted 30,007,094 people on May 15, 2001, compared with 28,846,761 on May 14, 1996.
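The 4.0% growth figure can be reproduced from the two counts quoted above; a quick arithmetic sketch:

```python
# Intercensal growth of Canada's population, 1996 -> 2001,
# computed from the census counts quoted above.
pop_1996 = 28_846_761
pop_2001 = 30_007_094

growth_rate = (pop_2001 - pop_1996) / pop_1996 * 100  # in percent
print(round(growth_rate, 1))  # 4.0
```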
Only three provinces and one territory had growth rates above the national average. Alberta's population soared 10.3%, Ontario gained 6.1% and British Columbia, 4.9%. Nunavut's population rose 8.1%. The population of Newfoundland and Labrador declined for the second consecutive census period.
Urbanization continued. In 2001, 79.4% of Canadians lived in an urban centre of 10,000 people or more, compared with 78.5% in 1996. Outside the urban centres, the population of rural and small-town areas declined 0.4%.
In 2001, just over 64% of the nation's population, or about 19,297,000 people, lived in the 27 census metropolitan areas (CMAs), up slightly from 63% in 1996. Seven of these 27 CMAs saw their populations grow at a rate of at least double the national average. The strongest rise, by far, occurred in Calgary.
From 1996 to 2001, the nation's population concentrated further in four broad urban regions: the extended Golden Horseshoe in southern Ontario; Montreal and environs; British Columbia's Lower Mainland and southern Vancouver Island; and the Calgary-Edmonton corridor. In 2001, 51% of Canada's population lived in these regions, compared with 49% in 1996.
Population by province/territory
Demographics
Mother tongue
Population by mother tongue of Canada's official languages:
Aboriginal peoples
Population of Aboriginal peoples in Canada:
Ethnic origin
Population by ethnic origin. Only those origins with more than 250,000 respondents are included here. This is based entirely on self reporting.
Religion
Population by religion. Only those religions with more than 250,000 respondents are included here. The census question was partly aided—that is, the questionnaire form gave examples of some of the denominations but not others. The actual question asked is noted below.
The actual question asked: "What is this person's religion? Indicate a specific denomination or religion even if this person is not currently a practising member of that group.For example, Roman Catholic, Ukrainian Catholic, United Church, Anglican, Lutheran, Baptist, Coptic Orthodox, Greek Orthodox, J
|
https://en.wikipedia.org/wiki/138%20%28number%29
|
138 (one hundred [and] thirty-eight) is the natural number following 137 and preceding 139.
In mathematics
138 is a sphenic number, and the smallest product of three primes such that in base 10, the third prime is a concatenation of the other two: 138 = 2 × 3 × 23. It is also a one-step palindrome in decimal (138 + 831 = 969).
138 has eight total divisors that generate an arithmetic mean of 36, which is the eighth triangular number. While the sum of the digits of 138 is 12, the product of its digits is 24.
138 is an Ulam number, the thirty-first abundant number, and a primitive (square-free) congruent number. It is the third 47-gonal number.
As an interprime, 138 lies between the eleventh pair of twin primes (137, 139), respectively the 33rd and 34th prime numbers.
It is the sum of two consecutive primes (67 + 71), and the sum of four consecutive primes (29 + 31 + 37 + 41).
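Several of these claims can be checked directly; a minimal sketch (the two helper functions are ad hoc, not from any library):

```python
# Direct checks of some properties of 138 quoted above.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

d = divisors(138)
assert d == [1, 2, 3, 6, 23, 46, 69, 138]   # eight divisors ...
assert sum(d) // len(d) == 36               # ... with arithmetic mean 36
assert 67 + 71 == 138 and is_prime(67) and is_prime(71)
assert 29 + 31 + 37 + 41 == 138             # four consecutive primes
assert sum(map(int, "138")) == 12 and 1 * 3 * 8 == 24
print("all checks pass")
```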
There are a total of 44 numbers that are relatively prime with 138 (and up to), while 22 is its reduced totient.
138 is the denominator of the twenty-second Bernoulli number (whose numerator is 854513).
A magic sum of 138 is generated inside four magic circles that features the first thirty-three non-zero integers, with a 9 in the center (first constructed by Yang Hui).
The simplest Catalan solid, the triakis tetrahedron, produces 138 stellations (depending on rules chosen), 44 of which are fully symmetric and 94 of which are enantiomorphs.
Using two radii to divide a circle according to the golden ratio yields sectors of approximately 138 degrees (the golden angle), and 222 degrees.
In science
The Saros number of the solar eclipse series which began on June 6, 1472, and will end on July 11, 2716. The duration of Saros series 138 is 1244 years, and it contains 70 solar eclipses.
138 Tolosa is a brightly colored, stony main belt asteroid
The New General Catalogue object NGC 138, a spiral galaxy in the constellation Pisces
138P/Shoemaker-Levy is a periodic comet in the Solar System
In media
"We Are 138", a 1978 song by the American punk rock band Misfits.
Who's Afraid of 138!? is a trance record label operated by Dutch DJ Armin van Buuren, as a reference to the use of 138 BPM in some forms of trance music.
See also
The year AD 138 or 138 BC
List of highways numbered 138
Notes
References
Integers
|
https://en.wikipedia.org/wiki/Life%20table
|
In actuarial science and demography, a life table (also called a mortality table or actuarial table) is a table which shows, for each age, what the probability is that a person of that age will die before their next birthday ("probability of death"). In other words, it represents the survivorship of people from a certain population. They can also be explained as a long-term mathematical way to measure a population's longevity. Tables have been created by demographers including John Graunt, Reed and Merrell, Keyfitz, and Greville.
There are two types of life tables used in actuarial science. The period life table represents mortality rates during a specific time period for a certain population. A cohort life table, often referred to as a generation life table, represents the overall mortality rates over a certain population's entire lifetime; all members of the cohort must have been born during the same time interval. A cohort life table is more frequently used because it can predict expected changes in a population's future mortality rates, and it also reveals patterns in mortality rates observed over time. Both types of life table are based on an actual present population together with an educated prediction of its experience in the near future. Finding a cohort's true average life expectancy would require waiting until all its members had died, roughly 100 years, by which time the data would be of little use, since healthcare continually advances.
Other life tables in historical demography may be based on historical records, although these often undercount infants and understate infant mortality, on comparison with other regions with better records, and on mathematical adjustments for varying mortality levels and life expectancies at birth.
From this starting point, a number of inferences can be derived.
The probability of surviving any particular year of age
The remaining life expectancy for people at different ages
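Both inferences can be sketched with a toy period life table. The mortality probabilities q_x below are invented illustrative numbers, not real data:

```python
# Minimal period life-table sketch. q[x] is the (illustrative, made-up)
# probability of dying between exact age x and x+1.
q = [0.01, 0.02, 0.05, 0.10, 1.00]   # final age group: everyone dies

radix = 100_000                      # conventional starting cohort size
l = [radix]                          # l[x]: survivors to exact age x
for qx in q:
    l.append(l[-1] * (1 - qx))       # survive a year with prob. 1 - q[x]

# Curtate life expectancy at age 0: expected number of whole years lived.
e0 = sum(l[1:]) / l[0]
print([round(x) for x in l])         # [100000, 99000, 97020, 92169, 82952, 0]
print(round(e0, 2))                  # 3.71
```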
Life tables are also used extensively in biology and epidemiology. An area that uses this tool is Social Security. It examines the mortality rates of all the people who have Social Security to decide which actions to take.
The concept is also of importance in product life cycle management.
All mortality tables are specific to environmental and life circumstances, and are used to probabilistically determine expected maximum age within those environmental conditions.
Background
There are two types of life tables:
Period or static life tables show the current probability of death (for people of different ages, in the current year)
Cohort life tables show the probability of death of people from a given cohort (especially birth year) over the course of their lifetime.
Static life tables sample individuals assuming a stationary population with overlapping generations. "Static life tables" and "cohort life tables" will be identical if population is in equilibri
|
https://en.wikipedia.org/wiki/Connectivity%20%28graph%20theory%29
|
In mathematics and computer science, connectivity is one of the basic concepts of graph theory: it asks for the minimum number of elements (nodes or edges) that need to be removed to separate the remaining nodes into two or more isolated subgraphs. It is closely related to the theory of network flow problems. The connectivity of a graph is an important measure of its resilience as a network.
Connected vertices and graphs
In an undirected graph G, two vertices u and v are called connected if G contains a path from u to v. Otherwise, they are called disconnected. If the two vertices are additionally connected by a path of length 1, i.e. by a single edge, the vertices are called adjacent.
A graph is said to be connected if every pair of vertices in the graph is connected. This means that there is a path between every pair of vertices. An undirected graph that is not connected is called disconnected. An undirected graph G is therefore disconnected if there exist two vertices in G such that no path in G has these vertices as endpoints. A graph with just one vertex is connected. An edgeless graph with two or more vertices is disconnected.
A directed graph is called weakly connected if replacing all of its directed edges with undirected edges produces a connected (undirected) graph. It is unilaterally connected or unilateral (also called semiconnected) if it contains a directed path from u to v or a directed path from v to u for every pair of vertices u, v. It is strongly connected, or simply strong, if it contains a directed path from u to v and a directed path from v to u for every pair of vertices u, v.
Components and cuts
A connected component is a maximal connected subgraph of an undirected graph. Each vertex belongs to exactly one connected component, as does each edge. A graph is connected if and only if it has exactly one connected component.
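The definition above suggests a direct algorithm: a breadth-first search started from any unvisited vertex finds exactly the connected component containing it. A minimal sketch (the adjacency-dict representation is just one convenient choice):

```python
# Connected components of an undirected graph via breadth-first search.
from collections import deque

def components(adj):
    """adj: dict mapping each vertex to an iterable of its neighbours."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    comp.add(v)
                    queue.append(v)
        comps.append(comp)
    return comps

# A path a-b-c plus an isolated vertex d: two components, so disconnected.
g = {"a": ["b"], "b": ["a", "c"], "c": ["b"], "d": []}
print(len(components(g)))  # 2
```

A graph is connected exactly when this function returns a single component.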
The strong components are the maximal strongly connected subgraphs of a directed graph.
A vertex cut or separating set of a connected graph G is a set of vertices whose removal renders G disconnected. The vertex connectivity κ(G) (where G is not a complete graph) is the size of a minimal vertex cut. A graph is called k-vertex-connected or k-connected if its vertex connectivity is k or greater.
More precisely, any graph G (complete or not) is said to be k-vertex-connected if it contains at least k + 1 vertices, but does not contain a set of k − 1 vertices whose removal disconnects the graph; and κ(G) is defined as the largest k such that G is k-connected. In particular, a complete graph with n vertices, denoted Kn, has no vertex cuts at all, but κ(Kn) = n − 1.
A vertex cut for two vertices u and v is a set of vertices whose removal from the graph disconnects u and v. The local connectivity κ(u, v) is the size of a smallest vertex cut separating u and v. Local connectivity is symmetric for undirected graphs; that is, κ(u, v) = κ(v, u). Moreover, except for complete graphs, κ(G) equals the minimum of κ(u, v) over all nonadjacent pairs of vertices u, v.
2-connectivity is also called biconnectivity and 3-connec
|
https://en.wikipedia.org/wiki/HJ
|
HJ may refer to:
Science, technology, and mathematics
Hall–Janko group, a mathematical group
U.S. code for a cryptographic key change; see cryptoperiod
Other uses
, a two-letter combination used in some languages
hj-reduction in English, dropping the /h/ sound before /j/
Hajji (Hj.), an Islamic honorific
Handjob
hic jacet ('here lies'), Latin phrase on gravestones
Hilal-i-Jurat, post-nominal for Pakistan honour
Hitler-Jugend (Hitler Youth)
Holden HJ, an Australian car 1974-1976
Hot Jupiter, a type of planet
Tasman Cargo Airlines, IATA airline designator
|
https://en.wikipedia.org/wiki/Topological%20module
|
In mathematics, a topological module is a module over a topological ring such that scalar multiplication and addition are continuous.
Examples
A topological vector space is a topological module over a topological field.
An abelian topological group can be considered as a topological module over Z, where Z is the ring of integers with the discrete topology.
A topological ring is a topological module over each of its subrings.
A more complicated example is the I-adic topology on a ring and its modules. Let I be an ideal of a ring R. The sets of the form x + I^n for all x in R and all positive integers n form a base for a topology on R that makes R into a topological ring. Then for any left R-module M, the sets of the form x + I^n M for all x in M and all positive integers n form a base for a topology on M that makes M into a topological module over the topological ring R.
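In the simplest special case R = Z with I = (p) for a prime p, the base sets x + p^n Z are the familiar p-adic neighbourhoods: y lies in the level-n neighbourhood of x exactly when p^n divides y − x. A small sketch of that membership test:

```python
# I-adic neighbourhoods for R = Z, I = (p): the base set around x at
# level n is x + p**n * Z, i.e. all y with y ≡ x (mod p**n).  Two
# integers are "close" when a high power of p divides their difference.

def in_neighbourhood(y, x, p, n):
    return (y - x) % p ** n == 0

p = 5
assert in_neighbourhood(3 + 5**3, 3, p, 3)      # differs by 5^3: inside
assert not in_neighbourhood(3 + 5**2, 3, p, 3)  # differs only by 5^2: outside
# Smaller n gives a coarser (larger) neighbourhood:
assert in_neighbourhood(3 + 5**2, 3, p, 2)
print("membership checks pass")
```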
See also
References
Algebra
Topology
Topological algebra
Topological groups
|
https://en.wikipedia.org/wiki/141%20%28number%29
|
141 (one hundred [and] forty-one) is the natural number following 140 and preceding 142.
In mathematics
141 is:
a centered pentagonal number.
the sum of the sums of the divisors of the first 13 positive integers.
the second n to give a prime Cullen number (of the form n·2^n + 1).
an undulating number in base 10, with the previous being 131, and the next being 151.
the sixth hendecagonal (11-gonal) number.
a semiprime: a product of two prime numbers, namely 3 and 47. Since those prime factors are Gaussian primes, this means that 141 is a Blum integer.
a Hilbert prime
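Several of these statements can be checked by direct computation; a sketch (the helper name `sigma` is ad hoc):

```python
# Direct checks of several of the properties listed above.
def sigma(n):
    """Sum of the divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

# Sum of the sums of divisors of the first 13 positive integers:
assert sum(sigma(k) for k in range(1, 14)) == 141

# Centered pentagonal numbers 1, 6, 16, 31, ... via (5n^2 - 5n + 2)/2:
cp = [(5 * n * n - 5 * n + 2) // 2 for n in range(1, 10)]
assert 141 in cp

# Semiprime: product of the primes 3 and 47.
assert 3 * 47 == 141

# Undulating in base 10: the digits follow the pattern aba.
assert str(141)[0] == str(141)[2]
print("all checks pass")
```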
In the military
The Lockheed C-141 Starlifter was a United States Air Force military strategic airlifter
K-141 Kursk was a Russian nuclear cruise missile submarine, which sank in the Barents Sea on 12 August 2000
was a United States Navy ship during World War II
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy following World War I
was a United States Navy during World War II
In transportation
London Buses route 141 is a Transport for London contracted bus route in London
141 Nottingham–Sutton-in-Ashfield is a bus route in England
The 141 C Ouest was a 2-8-2 steam locomotive of the Chemin de fer de l'État
British Rail Class 141 was the first production model of the Pacer diesel multiple units
Union des Transports Africains de Guinée Flight 141, which crashed in the Bight of Benin on December 25, 2003
The Saipa 141 car produced by SAIPA
The Córas Iompair Éireann 141 class locomotive from General Motors Electro-Motive Division in 1962
In other fields
141 is also:
The year AD 141 or 141 BC
141 AH is a year in the Islamic calendar that corresponds to 759 – 760 CE
141 Lumen is a dark C-type, rocky asteroid orbiting in the asteroid belt
The atomic number of unquadunium, a temporary chemical element
The telephone dialing prefix for withholding one's Caller ID in the United Kingdom
Psalm 141
Sonnet 141 by William Shakespeare
See also
List of highways numbered 141
United Nations Security Council Resolution 141
United States Supreme Court cases, Volume 141
References
Integers
|
https://en.wikipedia.org/wiki/Courant%20Institute%20of%20Mathematical%20Sciences
|
The Courant Institute of Mathematical Sciences (commonly known as Courant or CIMS) is the mathematics research school of New York University (NYU), and is among the most prestigious mathematics schools and mathematical sciences research centers in the world. Founded in 1935, it is named after Richard Courant, one of the founders of the Courant Institute and also a mathematics professor at New York University from 1936 to 1972, and serves as a center for research and advanced training in computer science and mathematics. It is located on Gould Plaza next to the Stern School of Business and the economics department of the College of Arts and Science.
NYU is ranked #1 in applied mathematics in the US (as per US News), #5 in citation impact worldwide, and #12 in citation worldwide. It is also ranked #19 worldwide in computer science and information systems. It is also known for its extensive research in pure mathematical areas, such as partial differential equations, probability and geometry, as well as applied mathematical areas, such as computational biology, computational neuroscience, and mathematical finance. The Mathematics Department of the institute has 15 members of the United States National Academy of Sciences (joint third globally with Princeton University, after the University of California at Berkeley and Harvard University, which are joint first globally with 17 members each, and just ahead of other top research universities such as Stanford University, which has 14 members) and five members of the National Academy of Engineering. Four faculty members have been awarded the National Medal of Science, one was honored with the Kyoto Prize, and nine have received career awards from the National Science Foundation. Courant Institute professors Peter Lax, S. R. Srinivasa Varadhan, Mikhail Gromov, and Louis Nirenberg won the 2005, 2007, 2009 and 2015 Abel Prizes respectively for their research in partial differential equations, probability and geometry. Louis Nirenberg also received the Chern Medal in 2010, and Subhash Khot won the Nevanlinna Prize in 2014. Amir Pnueli and Yann LeCun won the 1996 and 2018 Turing Awards respectively. In addition, Jeff Cheeger was awarded the Shaw Prize in Mathematical Sciences in 2021.
The director of the Courant Institute directly reports to New York University's provost and president and works closely with deans and directors of other NYU colleges and divisions respectively. The undergraduate programs and graduate programs at the Courant Institute are run independently by the institute, and formally associated with the NYU College of Arts and Science, NYU Tandon School Of Engineering, and NYU Graduate School of Arts and Science respectively.
Academics
Rankings
The Courant Institute specializes in applied mathematics, mathematical analysis and scientific computation. There is emphasis on partial differential equations and their applications. The mathematics department is consistently ranked in the United
|
https://en.wikipedia.org/wiki/Giuseppe%20Veronese
|
Giuseppe Veronese (7 May 1854 – 17 July 1917) was an Italian mathematician. He was born in Chioggia, near Venice.
Education
Veronese earned his laurea in mathematics from the Istituto Tecnico di Venezia in 1872.
Work
Although Veronese's work was severely criticised as unsound by Peano, he is now recognised as having priority on many ideas that have since become parts of transfinite numbers and model theory, and as one of the respected authorities of the time, his work served to focus Peano and others on the need for greater rigor.
He is particularly noted for his hypothesis of relative continuity which was the foundation for his development of the first non-Archimedean linear continuum.
Veronese produced several significant monographs. The most famous appeared in 1891, Fondamenti di geometria a più dimensioni e a più specie di unità rettilinee esposti in forma elementare, normally referred to as Fondamenti di geometria to distinguish it from Veronese's other works also styled Fondamenti. It was this work that was most severely criticised by both Peano and Cantor; however, Levi-Civita described it as masterful and Hilbert as profound.
See also
Veronese surface
References
Philip Ehrlich (ed) Real Numbers, Generalisations of the Reals, and Theories of Continua, 1994.
Paola Cantù, Giuseppe Veronese e i fondamenti della geometria [Giuseppe Veronese and the Foundations of Geometry], Milano, Unicopli, "Biblioteca di cultura filosofica, 10", 1999, 270 pp.
Philip Ehrlich: The rise of non-Archimedean mathematics and the roots of a misconception. I. The emergence of non-Archimedean systems of magnitudes. Archive for History of Exact Sciences 60 (2006), no. 1, 1–121.
External links
Fondamenti di geometria, full text in Italian, as HTML and as image files.
Foundations of geometry in higher dimensions and more species of rectilinear units exposed in elemental form. Lessons for school teaching in mathematics, full text in Google-English translation.
Grundzüge der Geometrie von mehreren Dimensionen und mehreren Arten gradliniger Einheiten in elementarer Form entwickelt, 1894, German translation.
Peano's dismissal of Veronese's work.
'Generic points' attributed to Veronese.
Biography & Bibliography by P. Cantù
1854 births
1917 deaths
People from Chioggia
Algebraic geometers
Italian algebraic geometers
20th-century Italian mathematicians
19th-century Italian mathematicians
Historians of mathematics
Academic staff of the University of Padua
|
https://en.wikipedia.org/wiki/List%20of%20Welsh%20mathematicians
|
This is a list of Welsh mathematicians, who have contributed to the development of mathematics.
References
Chambers, Ll. G. Mathemategwyr Cymru (Mathematicians of Wales), Cyd Bwyllgor Addysg Cymru, 1994.
External links
Welsh scientists Mathematicians, Scientists and Inventors
Welsh
|
https://en.wikipedia.org/wiki/Quadrature%20%28geometry%29
|
In mathematics, particularly in geometry, quadrature (also called squaring) is a historical process of drawing a square with the same area as a given plane figure or computing the numerical value of that area. A classical example is the quadrature of the circle (or squaring the circle).
Quadrature problems served as one of the main sources of problems in the development of calculus. They introduce important topics in mathematical analysis.
History
Antiquity
Greek mathematicians understood the determination of an area of a figure as the process of geometrically constructing a square having the same area (squaring), thus the name quadrature for this process. The Greek geometers were not always successful (see squaring the circle), but they did carry out quadratures of some figures whose sides were not simply line segments, such as the lune of Hippocrates and the parabola. By a certain Greek tradition, these constructions had to be performed using only a compass and straightedge, though not all Greek mathematicians adhered to this dictum.
For a quadrature of a rectangle with the sides a and b it is necessary to construct a square with the side √(ab) (the geometric mean of a and b). For this purpose it is possible to use the following: if one draws the circle with diameter made from joining line segments of lengths a and b, then the height (BH in the diagram) of the line segment drawn perpendicular to the diameter, from the point of their connection to the point where it crosses the circle, equals the geometric mean of a and b. A similar geometrical construction solves the problems of quadrature of a parallelogram and of a triangle.
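The construction can be verified numerically: placing segments of lengths a and b end to end along a diameter, the perpendicular raised at their junction meets the circle at height √(ab). A small sketch with illustrative values:

```python
# Numeric check of the semicircle construction: segments of lengths a
# and b laid end to end form a diameter; the perpendicular at the
# junction meets the circle at height sqrt(a*b), the side of the square.
import math

a, b = 9.0, 4.0
r = (a + b) / 2                           # radius of the circle
h = math.sqrt(r * r - (r - a) ** 2)       # height above the junction point
assert math.isclose(h, math.sqrt(a * b))  # h = 6 = sqrt(9 * 4)
assert math.isclose(h * h, a * b)         # square on h has area a * b
print(h)  # 6.0
```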
Problems of quadrature for curvilinear figures are much more difficult. The quadrature of the circle with compass and straightedge was proved in the 19th century to be impossible. Nevertheless, for some figures a quadrature can be performed. The quadratures of the surface of a sphere and a parabola segment discovered by Archimedes became the highest achievement of analysis in antiquity.
The area of the surface of a sphere is equal to four times the area of the circle formed by a great circle of this sphere.
The area of a segment of a parabola determined by a straight line cutting it is 4/3 the area of a triangle inscribed in this segment.
For the proofs of these results, Archimedes used the method of exhaustion attributed to Eudoxus.
Medieval mathematics
In medieval Europe, quadrature meant the calculation of area by any method. Most often the method of indivisibles was used; it was less rigorous than the geometric constructions of the Greeks, but it was simpler and more powerful. With its help, Galileo Galilei and Gilles de Roberval found the area of a cycloid arch, Grégoire de Saint-Vincent investigated the area under a hyperbola (Opus Geometricum, 1647), and Alphonse Antonio de Sarasa, de Saint-Vincent's pupil and commentator, noted the relation of this area to logarithms.
Integral calculus
John Wallis algebrised
|
https://en.wikipedia.org/wiki/Q-analog
|
In mathematics, a q-analog of a theorem, identity or expression is a generalization involving a new parameter q that returns the original theorem, identity or expression in the limit as q → 1. Typically, mathematicians are interested in q-analogs that arise naturally, rather than in arbitrarily contriving q-analogs of known results. The earliest q-analog studied in detail is the basic hypergeometric series, which was introduced in the 19th century.
q-analogs are most frequently studied in the mathematical fields of combinatorics and special functions. In these settings, the limit q → 1 is often formal, as q is often discrete-valued (for example, it may represent a prime power).
q-analogs find applications in a number of areas, including the study of fractals and multi-fractal measures, and expressions for the entropy of chaotic dynamical systems. The relationship to fractals and dynamical systems results from the fact that many fractal patterns have the symmetries of Fuchsian groups in general (see, for example Indra's pearls and the Apollonian gasket) and the modular group in particular. The connection passes through hyperbolic geometry and ergodic theory, where the elliptic integrals and modular forms play a prominent role; the q-series themselves are closely related to elliptic integrals.
q-analogs also appear in the study of quantum groups and in q-deformed superalgebras. The connection here is similar, in that much of string theory is set in the language of Riemann surfaces, resulting in connections to elliptic curves, which in turn relate to q-series.
"Classical" q-theory
Classical q-theory begins with the q-analogs of the nonnegative integers. The equality
lim (q → 1) (1 − q^n)/(1 − q) = n
suggests that we define the q-analog of n, also known as the q-bracket or q-number of n, to be
[n]q = (1 − q^n)/(1 − q) = 1 + q + q^2 + ... + q^(n−1).
By itself, the choice of this particular q-analog among the many possible options is unmotivated. However, it appears naturally in several contexts. For example, having decided to use [n]q as the q-analog of n, one may define the q-analog of the factorial, known as the q-factorial, by
[n]q! = [1]q · [2]q · ... · [n]q.
This q-analog appears naturally in several contexts. Notably, while n! counts the number of permutations of length n, [n]q! counts permutations while keeping track of the number of inversions. That is, if inv(w) denotes the number of inversions of the permutation w and Sn denotes the set of permutations of length n, we have
[n]q! = ∑ (w in Sn) q^(inv(w)).
In particular, one recovers the usual factorial by taking the limit as q → 1.
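The inversion-counting identity can be checked by brute force for small n; a sketch representing polynomials in q as coefficient lists (all helper names are ad hoc):

```python
# q-factorial as the inversion generating function over S_n.
# A polynomial c0 + c1*q + ... is represented as the list [c0, c1, ...].
from itertools import permutations

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def q_bracket(n):            # [n]_q = 1 + q + ... + q^(n-1)
    return [1] * n

def q_factorial(n):          # [n]_q! = [1]_q [2]_q ... [n]_q
    result = [1]
    for k in range(1, n + 1):
        result = poly_mul(result, q_bracket(k))
    return result

def inversion_polynomial(n):  # sum over w in S_n of q^inv(w)
    coeffs = [0] * (n * (n - 1) // 2 + 1)
    for w in permutations(range(n)):
        inv = sum(w[i] > w[j] for i in range(n) for j in range(i + 1, n))
        coeffs[inv] += 1
    return coeffs

assert q_factorial(4) == inversion_polynomial(4)  # both [1,3,5,6,5,3,1]
assert sum(q_factorial(4)) == 24                  # setting q = 1 gives 4!
```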
The q-factorial also has a concise definition in terms of the q-Pochhammer symbol, a basic building-block of all q-theories:
[n]q! = (q; q)n / (1 − q)^n.
From the q-factorials, one can move on to define the q-binomial coefficients, also known as Gaussian coefficients, Gaussian polynomials, or Gaussian binomial coefficients:
[n choose k]q = [n]q! / ([k]q! · [n − k]q!).
The q-exponential is defined as:
eq(x) = ∑ (n = 0 to ∞) x^n / [n]q!.
q-trigonometric functions, along with a q-Fourier transform, have been defined in this context.
Combinatorial q-analogs
The Gaussian coefficients count subspaces of a finite vector space
|
https://en.wikipedia.org/wiki/Arc%20elasticity
|
In mathematics and economics, the arc elasticity is the elasticity of one variable with respect to another between two given points. It is the ratio of the percentage change of one of the variables between the two points to the percentage change of the other variable. It contrasts with the point elasticity, which is the limit of the arc elasticity as the distance between the two points approaches zero and which hence is defined at a single point rather than for a pair of points.
Like the point elasticity, the arc elasticity can vary in value depending on the starting point. For example, the arc elasticity of supply of a product with respect to the product's price could be large when the starting and ending prices are both low, but could be small when they are both high.
Formula
The y arc elasticity of x is defined as:
E(x, y) = (% change in x) / (% change in y)
where the percentage change in going from point 1 to point 2 is usually calculated relative to the midpoint:
% change in x = (x2 − x1) / ((x1 + x2)/2),  % change in y = (y2 − y1) / ((y1 + y2)/2).
The use of the midpoint arc elasticity formula (with the midpoint used for the base of the change, rather than the initial point (x1, y1) which is used in almost all other contexts for calculating percentages) was advocated by R. G. D. Allen for use when x refers to the quantity of a good demanded or supplied and y refers to its price, due to the following properties: (1) it is symmetric with respect to the two prices and quantities, (2) it is independent of the units of measurement, and (3) it yields a value of unity if the total revenues (price times quantity) at the two points are equal.
The arc elasticity is used when there is not a general function for the relationship of two variables, but two points on the relationship are known. In contrast, calculation of the point elasticity requires detailed knowledge of the functional relationship and can be calculated wherever the function is defined.
For comparison, the y point elasticity of x is given by
E(x, y) = (dx/dy) · (y/x).
Application in economics
The arc elasticity of quantity demanded (or quantity supplied) Q with respect to price P, also known as the arc price elasticity of demand (or supply), is calculated as
E = [(Q2 − Q1) / ((Q1 + Q2)/2)] / [(P2 − P1) / ((P1 + P2)/2)].
Example
Suppose that two points on a demand curve, (Q1, P1) and (Q2, P2), are known. (Nothing else might be known about the demand curve.) Then the arc elasticity is obtained using the formula
Suppose the quantity of hot dogs demanded at halftime of football games is measured at two different games at which two different prices are charged: at one measurement the quantity demanded is 80 units, and at the other measurement it is 120 units. The percent change, measured against the average, would be (120 − 80)/((120 + 80)/2) = 40%. If the measurements were taken in reverse sequence (first 120 and then 80), the absolute value of the percentage change would be the same.
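The midpoint calculation can be sketched as follows; the two prices in the final lines are illustrative values I have added, not taken from the text:

```python
# Midpoint (arc) elasticity between two observed points.  Unlike the
# initial-point formula, it is symmetric in the direction of measurement.
def pct_change_midpoint(v1, v2):
    return (v2 - v1) / ((v1 + v2) / 2)

def arc_elasticity(q1, p1, q2, p2):
    return pct_change_midpoint(q1, q2) / pct_change_midpoint(p1, p2)

# Hot-dog example: quantity 80 -> 120 is a 40% change against the average.
assert pct_change_midpoint(80, 120) == 0.4
assert pct_change_midpoint(120, 80) == -0.4   # same magnitude, reversed

# Illustrative (made-up) prices: $4 at Q = 80, $3 at Q = 120.
e = arc_elasticity(80, 4.0, 120, 3.0)
print(round(e, 2))  # -1.4
```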
In contrast, if the percentage change in quantity demanded were measured against the initial value, the calculated percentage change would be (120-80)/80= 50%. The percent change for the reverse sequence of observations,
|
https://en.wikipedia.org/wiki/Algebraic%20graph%20theory
|
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants.
Branches of algebraic graph theory
Using linear algebra
The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. In particular, it studies the spectrum of the adjacency matrix, or the Laplacian matrix, of a graph (this part of algebraic graph theory is also called spectral graph theory). For the Petersen graph, for example, the spectrum of the adjacency matrix is (−2, −2, −2, −2, 1, 1, 1, 1, 1, 3). Several theorems relate properties of the spectrum to other graph properties. As a simple example, a connected graph with diameter D will have at least D+1 distinct values in its spectrum. Aspects of graph spectra have been used in analysing the synchronizability of networks.
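The quoted spectrum can be verified without an eigenvalue solver: the Petersen graph is strongly regular with parameters (10, 3, 0, 1), so its adjacency matrix A satisfies A² + A − 2I = J (the all-ones matrix), which forces every eigenvalue off the all-ones vector to satisfy t² + t − 2 = 0, i.e. t = 1 or t = −2, alongside the degree eigenvalue 3. A pure-Python sketch:

```python
# The Petersen graph as the Kneser graph K(5,2): vertices are the
# 2-element subsets of {0,...,4}, adjacent exactly when disjoint.
from itertools import combinations

verts = list(combinations(range(5), 2))
n = len(verts)                                   # 10 vertices
A = [[1 if not set(u) & set(v) else 0 for v in verts] for u in verts]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Strong regularity check: A^2 + A - 2I must equal J (all ones).
A2 = matmul(A, A)
for i in range(n):
    for j in range(n):
        assert A2[i][j] + A[i][j] - (2 if i == j else 0) == 1

assert all(sum(row) == 3 for row in A)           # 3-regular
print("spectrum confined to {3, 1, -2}")
```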
Using group theory
The second branch of algebraic graph theory involves the study of graphs in connection to group theory, particularly automorphism groups and geometric group theory. The focus is placed on various families of graphs based on symmetry (such as symmetric graphs, vertex-transitive graphs, edge-transitive graphs, distance-transitive graphs, distance-regular graphs, and strongly regular graphs), and on the inclusion relationships between these families. Certain of such categories of graphs are sparse enough that lists of graphs can be drawn up. By Frucht's theorem, all groups can be represented as the automorphism group of a connected graph (indeed, of a cubic graph). Another connection with group theory is that, given any group, symmetrical graphs known as Cayley graphs can be generated, and these have properties related to the structure of the group.
This second branch of algebraic graph theory is related to the first, since the symmetry properties of a graph are reflected in its spectrum. In particular, the spectrum of a highly symmetrical graph, such as the Petersen graph, has few distinct values (the Petersen graph has 3, which is the minimum possible, given its diameter). For Cayley graphs, the spectrum can be related directly to the structure of the group, in particular to its irreducible characters.
Studying graph invariants
Finally, the third branch of algebraic graph theory concerns algebraic properties of invariants of graphs, and especially the chromatic polynomial, the Tutte polynomial and knot invariants. The chromatic polynomial of a graph, for example, counts the number of its proper vertex colorings. For the Petersen graph, this polynomial is . In particular, this means that the Petersen graph cannot be properly colored with one or two colors, but can be colored in 120 different ways with 3 colors. Much work in this area of algebraic graph theory was motiva
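The coloring counts above can be checked by brute force. This sketch evaluates the chromatic polynomial of the Petersen graph at k = 2 and k = 3 by enumerating all vertex colorings (feasible only for small graphs):

```python
from itertools import product

def count_proper_colorings(n, edges, k):
    """Evaluate the chromatic polynomial at k by brute-force enumeration."""
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(k), repeat=n)
    )

# Petersen graph edges: outer 5-cycle, spokes, inner pentagram.
petersen = ([(i, (i + 1) % 5) for i in range(5)]
            + [(i, i + 5) for i in range(5)]
            + [(i + 5, (i + 2) % 5 + 5) for i in range(5)])

print(count_proper_colorings(10, petersen, 2))  # 0: not properly 2-colorable
print(count_proper_colorings(10, petersen, 3))  # 120 proper 3-colorings
```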
|
https://en.wikipedia.org/wiki/Probability%20distribution%20function
|
Probability distribution function may refer to:
Probability distribution
Cumulative distribution function
Probability mass function
Probability density function
|
https://en.wikipedia.org/wiki/Segre%20embedding
|
In mathematics, the Segre embedding is used in projective geometry to consider the cartesian product (of sets) of two projective spaces as a projective variety. It is named after Corrado Segre.
Definition
The Segre map may be defined as the map
taking a pair of points to their product
(the XiYj are taken in lexicographical order).
Here, and are projective vector spaces over some arbitrary field, and the notation
is that of homogeneous coordinates on the space. The image of the map is a variety, called a Segre variety. It is sometimes written as .
Discussion
In the language of linear algebra, for given vector spaces U and V over the same field K, there is a natural way to map their cartesian product to their tensor product.
In general, this need not be injective because, for , and any nonzero ,
Considering the underlying projective spaces P(U) and P(V), this mapping becomes a morphism of varieties
This is not only injective in the set-theoretic sense: it is a closed immersion in the sense of algebraic geometry. That is, one can give a set of equations for the image. Except for notational trouble, it is easy to say what such equations are: they express two ways of factoring products of coordinates from the tensor product, obtained in two different ways as something from U times something from V.
This mapping or morphism σ is the Segre embedding. Counting dimensions, it shows how the product of projective spaces of dimensions m and n embeds in dimension
Classical terminology calls the coordinates on the product multihomogeneous, and the product generalised to k factors k-way projective space.
Properties
The Segre variety is an example of a determinantal variety; it is the zero locus of the 2×2 minors of the matrix . That is, the Segre variety is the common zero locus of the quadratic polynomials
Here, is understood to be the natural coordinate on the image of the Segre map.
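For m = n = 1 the determinantal description can be checked numerically: every point in the image of the Segre map satisfies the single 2×2 minor equation. A small sketch (the function name is illustrative):

```python
import random

def segre(x, y):
    """Segre map on homogeneous coordinates: all products x_i * y_j in lex order."""
    return [xi * yj for xi in x for yj in y]

random.seed(1)
for _ in range(100):
    x = [random.randint(-9, 9) for _ in range(2)]
    y = [random.randint(-9, 9) for _ in range(2)]
    z00, z01, z10, z11 = segre(x, y)
    # Every image point satisfies the 2x2 minor equation Z00*Z11 - Z01*Z10 = 0,
    # the defining quadric of the Segre variety for m = n = 1.
    assert z00 * z11 - z01 * z10 == 0
print("all sampled image points lie on the quadric")
```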
The Segre variety is the categorical product of and .
The projection
to the first factor can be specified by m+1 maps on open subsets covering the Segre variety, which agree on intersections of the subsets. For fixed , the map is given by sending to . The equations ensure that these maps agree with each other, because if we have .
The fibers of the product are linear subspaces. That is, let
be the projection to the first factor; and likewise for the second factor. Then the image of the map
for a fixed point p is a linear subspace of the codomain.
Examples
Quadric
For example with m = n = 1 we get an embedding of the product of the projective line with itself in P3. The image is a quadric, and is easily seen to contain two one-parameter families of lines. Over the complex numbers this is a quite general non-singular quadric. Letting
be the homogeneous coordinates on P3, this quadric is given as the zero locus of the quadratic polynomial given by the determinant
Segre threefold
The map
is known as the Segre threefold. It is an example of
|
https://en.wikipedia.org/wiki/Closed%20immersion
|
In algebraic geometry, a closed immersion of schemes is a morphism of schemes that identifies Z as a closed subset of X such that locally, regular functions on Z can be extended to X. The latter condition can be formalized by saying that is surjective.
An example is the inclusion map induced by the canonical map .
Other characterizations
The following are equivalent:
is a closed immersion.
For every open affine , there exists an ideal such that as schemes over U.
There exists an open affine covering and for each j there exists an ideal such that as schemes over .
There is a quasi-coherent sheaf of ideals on X such that and f is an isomorphism of Z onto the global Spec of over X.
Definition for locally ringed spaces
In the case of locally ringed spaces a morphism is a closed immersion if a similar list of criteria is satisfied
The map is a homeomorphism of onto its image
The associated sheaf map is surjective with kernel
The kernel is locally generated by sections as an -module
The only varying condition is the third. It is instructive to look at a counter-example to get a feel for what the third condition yields, by looking at a map which is not a closed immersion. If we look at the stalk of at then there are no sections. This implies for any open subscheme containing the sheaf has no sections. This violates the third condition since at least one open subscheme covering contains .
Properties
A closed immersion is finite and radicial (universally injective). In particular, a closed immersion is universally closed. A closed immersion is stable under base change and composition. The notion of a closed immersion is local in the sense that f is a closed immersion if and only if for some (equivalently every) open covering the induced map is a closed immersion.
If the composition is a closed immersion and is separated, then is a closed immersion. If X is a separated S-scheme, then every S-section of X is a closed immersion.
If is a closed immersion and is the quasi-coherent sheaf of ideals cutting out Z, then the direct image from the category of quasi-coherent sheaves over Z to the category of quasi-coherent sheaves over X is exact, fully faithful with the essential image consisting of such that .
A flat closed immersion of finite presentation is the open immersion of an open closed subscheme.
See also
Segre embedding
Regular embedding
Notes
References
The Stacks Project
Morphisms of schemes
|
https://en.wikipedia.org/wiki/List%20of%20United%20States%20regional%20mathematics%20competitions
|
Many math competitions in the United States have regional restrictions. Of these, most are statewide.
The contests include:
Alabama
Alabama Statewide High School Mathematics Contest
Virgil Grissom High School Math Tournament
Vestavia Hills High School Math Tournament
Arizona
Great Plains Math League
AATM State High School Contest
California
Bay Area Math Olympiad
Lawrence Livermore National Laboratories Annual High School Math Challenge
Cal Poly Math Contest and Trimathlon
Polya Competition
Bay Area Math Meet
College of Creative Studies Math Competition
LA Math Cup
Math Day at the Beach hosted by CSULB
Math Field Day for San Diego Middle Schools
Mesa Day Math Contest at UC Berkeley
Santa Barbara County Math Superbowl
Pomona College Mathematical Talent Search
Redwood Empire Mathematics Tournament hosted by Humboldt State (middle and high school)
San Diego Math League and San Diego Math Olympiad hosted by the San Diego Math Circle
Santa Clara University High School Mathematics Contest
SC Mathematics Competition (SCMC) hosted by RSO@USC
Stanford Mathematics Tournament
UCSD/GSDMC High School Honors Mathematics Contest
Colorado
Colorado Mathematics Olympiad
District of Columbia
Moody's Mega Math
Florida
Florida-Stuyvesant Alumni Mathematics Competition
David Essner Mathematics Competition
James S. Rickards High School Fall Invitational
FAMAT Regional Competitions:
January Regional
February Regional
March Regional
FGCU Math Competition
Georgia
Central Math Meet (grades 9–12)
GA Council of Teachers of Mathematics State Varsity Math Tournament
STEM Olympiads Of America Math, Science & Cyber Olympiads (grades 3–8)
Valdosta State University Middle Grades Mathematics Competition
Illinois
ICTM math contest (grades 3–12)
Indiana
IUPUI High School Math Contest (grades 9–12)
Huntington University Math Competition (grades 6–12)
Indiana Math League
IASP Academic Super Bowl
Rose-Hulman High School Mathematics Contest (grades 9–12)
Trine University Math Competition
Iowa
Great Plains Math League
Kansas
Great Plains Math League
Louisiana
Louisiana State University Mathematics Contest for high school students
Maine
Pi-Cone South Math League
Maine Association of Math Leagues
Maryland
The University of Maryland High School Mathematics Competition
The Eastern Shore High School Mathematics Competition
Maryland Trig-Star
Maryland Math League
JHU Math Competition
Montgomery Blair Math Tournament
Massachusetts
Harvard–MIT Mathematics Tournament
Worcester Polytechnic Institute Mathematics Meet
Massachusetts Mathematics Olympiad
Greater Boston Mathematics League
Massachusetts Mathematics League
Southeastern Massachusetts Mathematics League
Southern Massachusetts Conference Mathematics League
Western Massachusetts Mathematics League
Worcester County Mathematics League
Intermediate Math League of Eastern Massachusetts
Lexington Mathematics Tournament
Winchest
|
https://en.wikipedia.org/wiki/Fuzzy%20measure%20theory
|
In mathematics, fuzzy measure theory considers generalized measures in which the additive property is replaced by the weaker property of monotonicity. The central concept of fuzzy measure theory is the fuzzy measure (also capacity, see ), which was introduced by Choquet in 1953 and independently defined by Sugeno in 1974 in the context of fuzzy integrals. There exists a number of different classes of fuzzy measures including plausibility/belief measures, possibility/necessity measures, and probability measures, which are a subset of classical measures.
Definitions
Let be a universe of discourse, be a class of subsets of , and . A function where
is called a fuzzy measure.
A fuzzy measure is called normalized or regular if .
Properties of fuzzy measures
A fuzzy measure is:
additive if for any such that , we have ;
supermodular if for any , we have ;
submodular if for any , we have ;
superadditive if for any such that , we have ;
subadditive if for any such that , we have ;
symmetric if for any , we have implies ;
Boolean if for any , we have or .
Understanding the properties of fuzzy measures is useful in application. When a fuzzy measure is used to define a function such as the Sugeno integral or Choquet integral, these properties will be crucial in understanding the function's behavior. For instance, the Choquet integral with respect to an additive fuzzy measure reduces to the Lebesgue integral. In discrete cases, a symmetric fuzzy measure will result in the ordered weighted averaging (OWA) operator. Submodular fuzzy measures result in convex functions, while supermodular fuzzy measures result in concave functions when used to define a Choquet integral.
Möbius representation
Let g be a fuzzy measure. The Möbius representation of g is given by the set function M, where for every ,
The equivalent axioms in Möbius representation are:
.
, for all and all
A fuzzy measure in Möbius representation M is called normalized
if
Möbius representation can be used to give an indication of which subsets of X interact with one another. For instance, an additive fuzzy measure has Möbius values all equal to zero except for singletons. The fuzzy measure g in standard representation can be recovered from the Möbius form using the Zeta transform:
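In discrete terms, the Möbius transform and the inverting Zeta transform can be computed directly on the power set of a small universe; this sketch uses integer weights to keep the arithmetic exact, and the function names are illustrative:

```python
from itertools import combinations

def subsets(s):
    """All subsets of s as frozensets."""
    s = sorted(s)
    for r in range(len(s) + 1):
        for c in combinations(s, r):
            yield frozenset(c)

def moebius(g, universe):
    """Moebius representation: M(A) = sum over B subset of A of (-1)^|A\\B| * g(B)."""
    return {A: sum((-1) ** (len(A) - len(B)) * g[B] for B in subsets(A))
            for A in subsets(universe)}

def zeta(M, universe):
    """Zeta transform: recover g(A) = sum over B subset of A of M(B)."""
    return {A: sum(M[B] for B in subsets(A)) for A in subsets(universe)}

X = {1, 2, 3}
weights = {1: 2, 2: 3, 3: 5}
g = {A: sum(weights[i] for i in A) for A in subsets(X)}  # an additive measure

M = moebius(g, X)
# Additivity shows up as M vanishing everywhere except on singletons.
assert all(v == 0 for A, v in M.items() if len(A) != 1)
assert zeta(M, X) == g  # the Zeta transform inverts the Moebius transform
print({tuple(sorted(A)): v for A, v in M.items()})
```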
Simplification assumptions for fuzzy measures
Fuzzy measures are defined on a semiring of sets or monotone class, which may be as granular as the power set of X, and even in discrete cases the number of variables can be as large as 2|X|. For this reason, in the context of multi-criteria decision analysis and other disciplines, simplification assumptions on the fuzzy measure have been introduced so that it is less computationally expensive to determine and use. For instance, when it is assumed the fuzzy measure is additive, it will hold that and the values of the fuzzy measure can be evaluated from the values on X. Similarly, a symmetric fuzzy measure is defined uniq
|
https://en.wikipedia.org/wiki/Edward%20O.%20Thorp
|
Edward Oakley Thorp (born August 14, 1932) is an American mathematics professor, author, hedge fund manager, and blackjack researcher. He pioneered the modern applications of probability theory, including the harnessing of very small correlations for reliable financial gain.
Thorp is the author of Beat the Dealer, which mathematically proved that the house advantage in blackjack could be overcome by card counting. He also developed and applied effective hedge fund techniques in the financial markets, and collaborated with Claude Shannon in creating the first wearable computer.
Thorp received his Ph.D. in mathematics from the University of California, Los Angeles in 1958, and worked at the Massachusetts Institute of Technology (MIT) from 1959 to 1961. He was a professor of mathematics from 1961 to 1965 at New Mexico State University, and then joined the University of California, Irvine where he was a professor of mathematics from 1965 to 1977 and a professor of mathematics and finance from 1977 to 1982.
Background
Thorp was born in Chicago, but moved to southern California in his childhood. He had an early aptitude for science, and often tinkered with experiments of his own creation. He was one of the youngest amateur radio operators when he was certified at age 12. Thorp went on to win scholarships by doing well in chemistry and physics competitions (one instance led him to meeting President Truman), ultimately electing to go to UC Berkeley for his undergraduate degree. However, he transferred to UCLA after one year, majoring in physics. This was eventually followed by a PhD in Mathematics at UCLA. He met his future wife Vivian during his first year at UCLA. They married in January 1956.
Computer-aided research in blackjack
Thorp used the IBM 704 as a research tool in order to investigate the probabilities of winning while developing his blackjack game theory, which was based on the Kelly criterion, which he learned about from the 1956 paper by Kelly. He learned Fortran in order to program the equations needed for his theoretical research model on the probabilities of winning at blackjack. Thorp analyzed the game of blackjack to a great extent this way, while devising card counting schemes with the aid of the IBM 704 in order to improve his odds, especially near the end of a card deck that is not being reshuffled after every deal.
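For a simple bet paying b-to-1 with win probability p, the Kelly criterion prescribes staking the fraction f* = (bp − q)/b of the bankroll, where q = 1 − p. The sketch below illustrates that formula generically; the numbers are illustrative and this is not a reconstruction of Thorp's actual betting scheme:

```python
def kelly_fraction(p, b):
    """Kelly bet fraction for a bet won with probability p paying b-to-1:
    f* = (b*p - q) / b, where q = 1 - p."""
    q = 1 - p
    return (b * p - q) / b

# A 2% edge at even money (b = 1) prescribes betting 2% of the bankroll.
print(kelly_fraction(0.51, 1.0))  # approximately 0.02
# No edge means no bet.
print(kelly_fraction(0.50, 1.0))  # 0.0
```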
Applied research in casinos
Thorp decided to test his theory in practice in Reno, Lake Tahoe, and Las Vegas, Nevada.
Thorp started his applied research using $10,000, with Manny Kimmel, a wealthy professional gambler and former bookmaker, providing the venture capital. First they visited Reno and Lake Tahoe establishments where they tested Thorp's theory at the local blackjack tables. The experimental results proved successful and his theory was verified since he won $11,000 in a single weekend. As a countermeasure to his methods, casinos now shuffle long before the end of the deck is reached. During his Las Ve
|
https://en.wikipedia.org/wiki/Institute%20of%20Mathematics%2C%20Physics%2C%20and%20Mechanics
|
Institute of Mathematics, Physics, and Mechanics (; IMFM) is the leading research institution in the areas of mathematics and theoretical computer science in Slovenia. It includes researchers from University of Ljubljana, University of Maribor and University of Primorska. It was founded in 1960.
The IMFM is composed of the following departments:
Department of Mathematics
Department of Physics
Department of Theoretical Computer Science
The director is Jernej Kozak.
References
External links
Research institutes in Slovenia
Mathematical institutes
Physics research institutes
Scientific organizations established in 1960
Scientific organizations in Ljubljana
|
https://en.wikipedia.org/wiki/STW%20%28disambiguation%29
|
STW or StW may refer to:
Business
Scott Tallon Walker Architects
Stop the War Coalition, an anti-war group in the United Kingdom
Mathematics
The Shimura–Taniyama–Weil conjecture (now the modularity theorem), from which Fermat's Last Theorem follows.
Music
Salt the Wound, a deathcore band
Silence the World, third album by the Swedish band Adept
Television
KSTW, a television station
Secrets of a Teenage Witch, a 3D animated series
STW-9, a television station in Perth, Australia
Scott the Woz, a web comedy review series
Utility companies
Severn Trent Water
Sewage treatment, Sewage Treatment Works
Transport
MTR station code for Sha Tin Wai station, Hong Kong
National Rail station code for Strawberry Hill railway station, London, England
Other
Fortnite: Save the World, a survival game
Search The Web, a milder version of computer jargon acronym STFW
Shogun: Total War, a PC strategy game.
Speed through water, nautical term
Sport Touring Wagon, an alternative marketing name for Crossover (automobile) style vehicles.
IATA code for Stavropol Shpakovskoye Airport
Super Tourenwagen Cup, the German Supertouring car championship (until 1999)
Surviving the World, a daily webcomic.
"Stop the world", a global pause in a computer program for garbage collection.
ScrewTurn Wiki, software
|
https://en.wikipedia.org/wiki/LGBT%20culture%20in%20Singapore
|
There are no statistics on how many LGBT people there are in Singapore or what percentage of the population they constitute. While homosexuality is legal, the country remains largely conservative.
Notable persons identifying as LGBT
Historical
Paddy Chew was the first Singaporean to publicly declare his HIV-positive status. He came out on 12 December 1998 during the First National AIDS Conference in Singapore. He identified his orientation as bisexual. His affliction was dramatised in a play called Completely With/Out Character produced by The Necessary Stage, directed by Alvin Tan and written by Haresh Sharma, staged in May 1999. He died on 21 August 1999, shortly after the play's run ended.
Arthur Yap was a poet who was awarded the 1983 Singapore Cultural Medallion for Literature. He died of laryngeal carcinoma on 19 June 2006, bequeathing $500,000/-, part of his estate which included his apartment off Killiney Road, to the National Cancer Centre Singapore where he was a patient.
Arts personalities
Cyril Wong, poet.
Alfian Sa'at, writer, poet and playwright. He had a weekly column on gay website Trevvy titled, "Iced Bandung".
Ng Yi-Sheng, writer and performance artist. Ng is the author of a collection of personally written poems, including ones with queer-theming.
Sean Foo, entrepreneur, filmmaker and LGBT advocate who founded Dear Straight People. Sean is also credited as the creator of Singapore's first gay Boys Love web drama series, "Getaway."
Politicians
Vincent Wijeysingha; first Singaporean politician to openly declare that he was gay when he made a post on Facebook ahead of the annual Pink Dot SG event.
Internet
Singapore has particularly established LGBT portals owing to its high Internet penetration rates and the restriction on LGBT content in print and broadcast media.
Blowing Wind Gay Forum is an online discussion forum for gay men in Singapore started in 1997 to discuss any issues which concern them. It eschews political, religious, and anti-racial topics.
Dear Straight People is a media platform focused on content related to and concerning the LGBTQ+ community. Founded by Sean Foo in July 2015, Dear Straight People has become Singapore's leading LGBTQ+ publication.
Fridae.asia is an English-language LGBT news and social networking portal founded in 2000 by Stuart Koe. Fridae was popular during the early 2000s, but has become largely inactive during the 2010s.
Gay SG Confessions, also known as 'GSC' – Started in February 2013 in the footsteps of a host of popular "confessions" websites, GSC is a Facebook page that hosts a collection of user-contributed stories by gay, bisexual, lesbian, straight, transgender and curious members. The page published over 500 'confessions' or posts within less than 2 weeks of its creation and garnered over 10,000 page 'Likes' in slightly over 6 months. The site is run by an anonymous moderator, an account director in an advertising firm in his 30s who wants t
|
https://en.wikipedia.org/wiki/Columbia-Shuswap%20D
|
The Columbia-Shuswap Electoral Area D, referred to by Statistics Canada as Columbia-Shuswap D, is a regional district electoral area in the South-west corner of the Columbia-Shuswap Regional District of British Columbia. It contains the communities of Falkland, Ranchero, and Silver Creek. The population of this area, exclusive of any residents of Indian Reserves, is around 4000 people. Agriculture is the main economy for the area. The Salmon River flows through it before going into Shuswap Lake.
References
Regional district electoral areas in British Columbia
Columbia-Shuswap Regional District
|
https://en.wikipedia.org/wiki/Attenuation%20length
|
In physics, the attenuation length or absorption length is the distance into a material at which the probability that a particle has not been absorbed has dropped to 1/e. Alternatively, if there is a beam of particles incident on the material, the attenuation length is the distance at which the intensity of the beam has dropped to 1/e, i.e. about 63% of the particles have been stopped.
Mathematically, the probability P(x) of finding a particle at depth x into the material is calculated by the Beer–Lambert law:

P(x) = e^(−x/λ).
In general, the attenuation length λ is material- and energy-dependent.
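A minimal numerical check of the Beer–Lambert relation (the attenuation-length value is an arbitrary illustration):

```python
import math

def survival_probability(x, attenuation_length):
    """Beer-Lambert law: probability a particle reaches depth x unabsorbed,
    P(x) = exp(-x / attenuation_length)."""
    return math.exp(-x / attenuation_length)

lam = 2.0  # attenuation length in arbitrary units (illustrative value)
p = survival_probability(lam, lam)  # at depth x = lambda the beam drops to 1/e
print(round(p, 4), round(1 - p, 4))  # 0.3679 0.6321 -> about 63% absorbed
```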
See also
Beer's Law
Mean free path
Attenuation coefficient
Attenuation (electromagnetic radiation)
Radiation length
References
https://web.archive.org/web/20050215215652/http://www.ct.infn.it/~rivel/Glossario/node2.html
External links
http://henke.lbl.gov/optical_constants/atten2.html
Particle physics
Experimental particle physics
|
https://en.wikipedia.org/wiki/Prime%20power
|
In mathematics, a prime power is a positive integer which is a positive integer power of a single prime number.
For example: 7 = 7^1, 9 = 3^2, and 64 = 2^6 are prime powers, while
6 = 2 × 3, 12 = 2^2 × 3, and 36 = 2^2 × 3^2 are not.
The sequence of prime powers begins:
2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29, 31, 32, 37, 41, 43, 47, 49, 53, 59, 61, 64, 67, 71, 73, 79, 81, 83, 89, 97, 101, 103, 107, 109, 113, 121, 125, 127, 128, 131, 137, 139, 149, 151, 157, 163, 167, 169, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 243, 251, … .
The prime powers are those positive integers that are divisible by exactly one prime number; in particular, the number 1 is not a prime power. Prime powers are also called primary numbers, as in the primary decomposition.
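A simple (unoptimized) membership test, based on the characterization as numbers divisible by exactly one prime, reproduces the start of the sequence:

```python
def is_prime_power(n):
    """True iff n = p**k for a single prime p and some k >= 1 (naive check)."""
    if n < 2:
        return False
    p = next(d for d in range(2, n + 1) if n % d == 0)  # smallest prime factor
    while n % p == 0:
        n //= p
    return n == 1  # prime power iff nothing other than p divides it

sequence = [n for n in range(2, 50) if is_prime_power(n)]
print(sequence)
# [2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29, 31, 32, 37, 41, 43, 47, 49]
```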
Properties
Algebraic properties
Prime powers are powers of prime numbers. Every prime power (except powers of 2 greater than 4) has a primitive root; thus the multiplicative group of integers modulo pn (that is, the group of units of the ring Z/pnZ) is cyclic.
The number of elements of a finite field is always a prime power and conversely, every prime power occurs as the number of elements in some finite field (which is unique up to isomorphism).
Combinatorial properties
A property of prime powers used frequently in analytic number theory is that the set of prime powers which are not prime is a small set in the sense that the infinite sum of their reciprocals converges, although the primes are a large set.
Divisibility properties
The totient function (φ) and sigma functions (σ0) and (σ1) of a prime power p^n are calculated by the formulas
φ(p^n) = p^(n−1)(p − 1), σ0(p^n) = n + 1, σ1(p^n) = (p^(n+1) − 1)/(p − 1).
All prime powers are deficient numbers. A prime power pn is an n-almost prime. It is not known whether a prime power pn can be a member of an amicable pair. If there is such a number, then pn must be greater than 101500 and n must be greater than 1400.
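The standard closed forms φ(p^n) = p^(n−1)(p − 1), σ0(p^n) = n + 1, and σ1(p^n) = (p^(n+1) − 1)/(p − 1) can be cross-checked against brute-force divisor enumeration; a small sketch for 8 = 2^3, which also spot-checks the deficiency claim:

```python
import math

def phi_prime_power(p, n):
    return p ** (n - 1) * (p - 1)          # phi(p^n) = p^(n-1) * (p - 1)

def sigma0_prime_power(p, n):
    return n + 1                           # divisors are 1, p, p^2, ..., p^n

def sigma1_prime_power(p, n):
    return (p ** (n + 1) - 1) // (p - 1)   # geometric series 1 + p + ... + p^n

# Cross-check against brute force for 8 = 2^3.
divisors = [d for d in range(1, 9) if 8 % d == 0]
assert sigma0_prime_power(2, 3) == len(divisors) == 4
assert sigma1_prime_power(2, 3) == sum(divisors) == 15
assert phi_prime_power(2, 3) == sum(1 for k in range(1, 9) if math.gcd(k, 8) == 1) == 4

# Every prime power is deficient: sigma1(p^n) < 2 * p^n.
assert all(sigma1_prime_power(p, n) < 2 * p ** n
           for p in (2, 3, 5, 7) for n in range(1, 6))
print("formulas verified")
```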
See also
Almost prime
Fermi–Dirac prime
Perfect power
Semiprime
References
Further reading
Elementary Number Theory. Jones, Gareth A. and Jones, J. Mary. Springer-Verlag London Limited. 1998.
Prime numbers
Exponentials
Number theory
Integer sequences
|
https://en.wikipedia.org/wiki/List%20of%20mathematical%20identities
|
This article lists mathematical identities, that is, identically true relations holding in mathematics.
Bézout's identity (despite its usual name, it is not, properly speaking, an identity)
Binomial inverse theorem
Binomial identity
Brahmagupta–Fibonacci two-square identity
Candido's identity
Cassini and Catalan identities
Degen's eight-square identity
Difference of two squares
Euler's four-square identity
Euler's identity
Fibonacci's identity see Brahmagupta–Fibonacci identity or Cassini and Catalan identities
Heine's identity
Hermite's identity
Lagrange's identity
Lagrange's trigonometric identities
MacWilliams identity
Matrix determinant lemma
Newton's identity
Parseval's identity
Pfister's sixteen-square identity
Sherman–Morrison formula
Sophie Germain identity
Sun's curious identity
Sylvester's determinant identity
Vandermonde's identity
Woodbury matrix identity
Identities for classes of functions
Exterior calculus identities
Fibonacci identities: Combinatorial Fibonacci identities and Other Fibonacci identities
Hypergeometric function identities
List of integrals of logarithmic functions
List of topics related to
List of trigonometric identities
Inverse trigonometric functions
Logarithmic identities
Summation identities
Vector calculus identities
See also
External links
A Collection of Algebraic Identities
Matrix Identities
Identities
|
https://en.wikipedia.org/wiki/Topological%20graph%20theory
|
In mathematics, topological graph theory is a branch of graph theory. It studies the embedding of graphs in surfaces, spatial embeddings of graphs, and graphs as topological spaces. It also studies immersions of graphs.
Embedding a graph in a surface means that we want to draw the graph on a surface, a sphere for example, without two edges intersecting. A basic embedding problem often presented as a mathematical puzzle is the three utilities problem. Other applications can be found in printing electronic circuits where the aim is to print (embed) a circuit (the graph) on a circuit board (the surface) without two connections crossing each other and resulting in a short circuit.
Graphs as topological spaces
To an undirected graph we may associate an abstract simplicial complex C with a single-element set per vertex and a two-element set per edge. The geometric realization |C| of the complex consists of a copy of the unit interval [0,1] per edge, with the endpoints of these intervals glued together at vertices. In this view, embeddings of graphs into a surface or as subdivisions of other graphs are both instances of topological embedding, homeomorphism of graphs is just the specialization of topological homeomorphism, the notion of a connected graph coincides with topological connectedness, and a connected graph is a tree if and only if its fundamental group is trivial.
Other simplicial complexes associated with graphs include the Whitney complex or clique complex, with a set per clique of the graph, and the matching complex, with a set per matching of the graph (equivalently, the clique complex of the complement of the line graph). The matching complex of a complete bipartite graph is called a chessboard complex, as it can be also described as the complex of sets of nonattacking rooks on a chessboard.
Example studies
John Hopcroft and Robert Tarjan derived a means of testing the planarity of a graph in time linear to the number of edges. Their algorithm does this by constructing a graph embedding which they term a "palm tree". Efficient planarity testing is fundamental to graph drawing.
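Hopcroft and Tarjan's linear-time algorithm is intricate; as a much weaker illustration, Euler's formula yields the necessary condition e ≤ 3v − 6 for a simple planar graph with v ≥ 3 vertices, which already rules out K5 (this sketch is a toy bound check, not a planarity test):

```python
def fails_planarity_bound(num_vertices, num_edges):
    """Necessary condition from Euler's formula: a simple planar graph
    with v >= 3 vertices has at most 3v - 6 edges."""
    return num_vertices >= 3 and num_edges > 3 * num_vertices - 6

# K5 has 5 vertices and 10 edges; 10 > 3*5 - 6 = 9, so K5 is non-planar.
print(fails_planarity_bound(5, 10))  # True
# The bound is only necessary: K3,3 (6 vertices, 9 edges) passes it yet is non-planar.
print(fails_planarity_bound(6, 9))   # False
```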
Fan Chung et al. studied the problem of embedding a graph into a book with the graph's vertices in a line along the spine of the book. Its edges are drawn on separate pages in such a way that edges residing on the same page do not cross. This problem abstracts layout problems arising in the routing of multilayer printed circuit boards.
Graph embeddings are also used to prove structural results about graphs, via graph minor theory and the graph structure theorem.
See also
Crossing number (graph theory)
Genus
Planar graph
Real tree
Toroidal graph
Topological combinatorics
Voltage graph
Notes
|
https://en.wikipedia.org/wiki/Centro%20de%20Investigaci%C3%B3n%20en%20Matem%C3%A1ticas
|
The Centro de Investigación en Matemáticas (lit. "Center for Research in Mathematics"), commonly known by its acronym in Spanish as CIMAT, is a Mexican scientific research institution based in the city of Guanajuato, in the state of the same name, in central Mexico; it was established in 1980. It belongs to the Mexican National System of Public Centers of Research under administration of the country's National Council of Science and Technology (CONACyT).
CIMAT is oriented to scientific research under the auspices of the Mexican government. It is also devoted to the generation, dissemination and application of knowledge in specialized fields, as well as to the formation of human resources in the areas of pure and applied mathematics, probability and statistics, and computer science. Of CIMAT's faculty, more than 80% of the researchers belong to the Mexican National System of Researchers (SNI). Academically, the center is organized in four main areas: pure mathematics, applied mathematics, probability and statistics, and computer science.
The research groups of the center interact strongly with similar institutions in Mexico and in foreign countries. This provides a continuous flow of visitors from around the world and provides conferences, workshops, and seminars.
The educational programs at CIMAT currently have more than 200 students, who come from all over the country and from abroad (mainly from Central and South American countries, but also from African countries, the rest of North America, Spain and other countries). The Master's and Doctorate programs offered at the center are registered in the Excellency Graduate Studies Registry of the National Council of Science and Technology, CONACyT.
CIMAT's infrastructure includes offices, an auditorium, many seminar rooms, a specialized mathematical library, computing equipment, electronic communication devices, and a lodge known as CIMATEL, for the arrangement of national and international conferences, courses and academic reunions.
Education
The center offers undergraduate and graduate programs that are attended by students from all over the country and abroad. The undergraduate programs are offered jointly with the University of Guanajuato, yet the teaching is in charge of the center's faculty. Many of the undergraduate students are former international/Iberoamerican mathematical Olympiad or informatics Olympiad contestants and medal winners.
CIMAT has an important role in the teaching of the mathematics and computer science undergraduate programs of the mathematics department of the University of Guanajuato.
Also, CIMAT offers a thesis-writing program for students affiliated with other universities in the country.
Courses are usually offered in Spanish only.
Departments
The center has three departments:
Pure and Applied mathematics. Research groups are devoted to differential geometry, algebraic geometry, applied mathematics, dynamical systems, functional analysis
|
https://en.wikipedia.org/wiki/Surface%20bundle
|
In mathematics, a surface bundle is a bundle in which the fiber is a surface. When the base space is a circle the total space is three-dimensional and is often called a surface bundle over the circle.
See also
Mapping torus
Geometric topology
|
https://en.wikipedia.org/wiki/Hurwitz%20matrix
|
In mathematics, a Hurwitz matrix, or Routh–Hurwitz matrix, in engineering stability matrix, is a structured real square matrix constructed with coefficients of a real polynomial.
Hurwitz matrix and the Hurwitz stability criterion
Namely, given a real polynomial

$$p(z) = a_0 z^n + a_1 z^{n-1} + \cdots + a_{n-1} z + a_n,$$

the $n \times n$ square matrix

$$H = \begin{pmatrix}
a_1 & a_3 & a_5 & \cdots & 0 \\
a_0 & a_2 & a_4 & \cdots & 0 \\
0 & a_1 & a_3 & \cdots & 0 \\
0 & a_0 & a_2 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & a_n
\end{pmatrix}$$

is called the Hurwitz matrix corresponding to the polynomial $p$. It was established by Adolf Hurwitz in 1895 that a real polynomial with $a_0 > 0$ is stable
(that is, all its roots have strictly negative real part) if and only if all the leading principal minors of the matrix $H$ are positive:

$$\Delta_1 = a_1 > 0, \qquad
\Delta_2 = \det\begin{pmatrix} a_1 & a_3 \\ a_0 & a_2 \end{pmatrix} = a_1 a_2 - a_0 a_3 > 0, \qquad
\Delta_3 = \det\begin{pmatrix} a_1 & a_3 & a_5 \\ a_0 & a_2 & a_4 \\ 0 & a_1 & a_3 \end{pmatrix} > 0,$$

and so on. The minors $\Delta_k$ are called the Hurwitz determinants. Similarly, if $a_0 < 0$ then the polynomial is stable if and only if the principal minors have alternating signs starting with a negative one.
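The criterion above translates directly into a numerical check. The sketch below (function names are illustrative, not from the article) builds the Hurwitz matrix from the coefficient list $[a_0, a_1, \dots, a_n]$ and tests the leading principal minors with NumPy:

```python
import numpy as np

def hurwitz_matrix(coeffs):
    """Build the n x n Hurwitz matrix for
    p(z) = a0*z^n + a1*z^(n-1) + ... + an,
    where coeffs = [a0, a1, ..., an]."""
    a = list(coeffs)
    n = len(a) - 1                      # degree of the polynomial
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = 2 * (j + 1) - (i + 1)   # coefficient index a_k at row i, column j
            if 0 <= k <= n:
                H[i, j] = a[k]
    return H

def is_stable(coeffs):
    """Routh-Hurwitz test: a0 > 0 and all leading principal minors positive."""
    if coeffs[0] <= 0:
        return False
    H = hurwitz_matrix(coeffs)
    return all(np.linalg.det(H[:k, :k]) > 0 for k in range(1, H.shape[0] + 1))

# p(z) = z^3 + 6z^2 + 11z + 6 = (z+1)(z+2)(z+3): all roots negative, so stable
print(is_stable([1, 6, 11, 6]))    # True
# p(z) = z^2 - z + 1 has roots with positive real part, so unstable
print(is_stable([1, -1, 1]))       # False
```

For the stable example the matrix is $H = \begin{pmatrix} 6 & 6 & 0 \\ 1 & 11 & 0 \\ 0 & 6 & 6 \end{pmatrix}$ with minors $6$, $60$, $360$, all positive. Evaluating minors via determinants is fine for small degrees; for large polynomials a Routh-array recursion is numerically preferable.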
Hurwitz stable matrices
In engineering and stability theory, a square matrix $A$ is called a Hurwitz matrix if every eigenvalue of $A$ has strictly negative real part, that is,

$$\operatorname{Re}(\lambda_i) < 0$$

for each eigenvalue $\lambda_i$. $A$ is also called a stable matrix, because then the differential equation

$$\dot{x} = A x$$

is asymptotically stable, that is, $x(t) \to 0$ as $t \to \infty.$

If $G(s)$ is a (matrix-valued) transfer function, then $G$ is called Hurwitz if the poles of all elements of $G$ have negative real part. Note that it is not necessary that $G(s)$ for a specific argument $s$ be a Hurwitz matrix; it need not even be square. The connection is that if $A$ is a Hurwitz matrix, then the dynamical system

$$\dot{x}(t) = A x(t) + B u(t), \qquad y(t) = C x(t) + D u(t)$$

has a Hurwitz transfer function.
Any hyperbolic fixed point (or equilibrium point) of a continuous dynamical system is locally asymptotically stable if and only if the Jacobian of the dynamical system is Hurwitz stable at the fixed point.
The Hurwitz stability matrix is a crucial part of control theory. A system is stable if its control matrix is a Hurwitz matrix. The negative real components of the eigenvalues of the matrix represent negative feedback. Similarly, a system is inherently unstable if any of the eigenvalues have positive real components, representing positive feedback.
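The eigenvalue condition is easy to check numerically. As a minimal sketch (the helper name is illustrative), a matrix is declared Hurwitz when every eigenvalue returned by NumPy has negative real part; the examples use a damped oscillator in companion form and a positive-feedback variant that destabilizes it:

```python
import numpy as np

def is_hurwitz(A):
    """A square matrix is Hurwitz (stable) iff every eigenvalue
    has strictly negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# Damped oscillator x'' + 3x' + 2x = 0 in companion form:
# eigenvalues are -1 and -2, so the matrix is Hurwitz.
A_stable = np.array([[0.0, 1.0],
                     [-2.0, -3.0]])
print(is_hurwitz(A_stable))      # True

# Flipping the sign of the restoring term (positive feedback)
# moves one eigenvalue into the right half-plane.
A_unstable = np.array([[0.0, 1.0],
                       [2.0, -3.0]])
print(is_hurwitz(A_unstable))    # False
```

In floating point, eigenvalues very close to the imaginary axis are unreliable witnesses either way; a practical implementation would compare against a tolerance rather than an exact zero.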
See also
Liénard–Chipart criterion
M-matrix
P-matrix
Perron–Frobenius theorem
Z-matrix
Jury stability criterion, for the analogue criterion for discrete-time systems.
References
External links
Matrices
Differential equations