https://en.wikipedia.org/wiki/Ice%20hockey%20statistics
|
The following are statistics commonly tracked in ice hockey.
Team statistics
STK – Streak – Current winning or losing streak
GD – Goal difference – Goals for minus goals against (used as a standings tie-breaker)
GP – Games played – Number of games the team has played
W – Wins – Games the team has won in regulation.
L – Losses – Games the team has lost in regulation.
T – Ties – Games that have ended in a tie (Note: The NHL no longer uses ties. Instead games are determined by OT or SO.)
OTL – Overtime losses – Games the team has lost in overtime
SOL – Shootout losses – Games the team has lost in a shootout (Note: Many leagues, most notably the NHL, do not separate overtime losses and shootout losses, including all losses past regulation in the overtime losses statistic.)
P or PTS – Points – Team points, calculated from W, OTW, SOW, T, OTL, SOL and L: 2 points for a W, OTW or SOW; 1 point for a T, OTL or SOL; and zero for an L (see the sketch after this list).
GF – Goals for – Number of goals the team has scored
GA – Goals against – Number of goals scored against the team
OTW – Overtime wins – Games the team has won in overtime
SOW – Shootout wins – Games the team has won in a shootout
ROW – Regulation plus overtime wins – Wins not including shootouts, used as a secondary tie-breaker.
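The points arithmetic above can be made concrete with a short calculation; the record below is a made-up example, not taken from any real standings.

# A minimal sketch of the standings arithmetic described above (hypothetical record).
w, otw, sow, t, otl, sol, l = 30, 5, 4, 0, 6, 3, 34

# 2 points for any kind of win, 1 for a tie or any post-regulation loss, 0 for a regulation loss.
pts = 2 * (w + otw + sow) + (t + otl + sol)

# ROW excludes shootout wins and serves as a secondary tie-breaker.
row = w + otw

print(pts, row)  # 87, 35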
Individual statistics
GP – Games played – Number of games the player has set foot on the ice in the current season.
G – Goals – Total number of goals the player has scored in the current season.
A – Assists – Number of goals the player has assisted in the current season.
P or PTS – Points – Scoring points, calculated as the sum of G and A.
S – Shots on goal – Total number of shots taken on net in the current season.
PN – Penalties – Number of penalties the player has been assessed.
PIM – Penalty Infraction Minutes, Penalties in minutes, or Penalty Minutes – Number of penalty minutes the player has been assessed. For statistical purposes, ten minutes are recorded for a game misconduct, gross misconduct, or match penalty.
PPG – Power play goals – Number of goals the player has scored while his team was on the power play.
PPA – Power play assists – Number of goals the player has assisted in while his team was on the power play.
SHG – Shorthanded goals – Number of goals the player has scored while his team was shorthanded.
SHA – Shorthanded assists – Number of goals the player has assisted in while his team was shorthanded.
GWG – Game-winning goals – Number of game-winning goals the player has scored (a goal is considered game winning when the team would win the game without scoring any more goals, for example, the winning team's third goal in a 5–2 win).
GTG – Game-tying goals – Number of game-tying goals (that is, the last goal scored in a tie game) the player has scored.
ENG – Empty net goals – Number of goals scored on an empty net.
+/- or P/M – Plus/minus – The number of team even strength or shorthanded goals for minus the number of team even strength or shorthanded goals against while the player is on the ice (see plus/minus).
TOI (or TOT) – Time on ice – Total time on i
|
https://en.wikipedia.org/wiki/Christos%20Papakyriakopoulos
|
Christos Dimitriou Papakyriakopoulos (; June 29, 1914 – June 29, 1976), commonly known as Papa, was a Greek mathematician specializing in geometric topology.
Early life
Papakyriakopoulos was born in Chalandri, then in the Municipality of Athens, now in North Athens.
Career
Papakyriakopoulos worked in isolation at Athens Polytechnic as a research assistant to Professor Nikolaos Kritikos, but he was enrolled as a research student at Athens University, which awarded him a PhD in 1943 on the recommendation of Constantin Carathéodory. In 1948, he was invited by Ralph Fox to come as his guest to the Princeton University mathematics department, Fox having been impressed by a letter from Papakyriakopoulos that purported to prove Dehn's lemma. The proof, as it turned out, was faulty, but Fox's sponsorship continued for many years and enabled Papakyriakopoulos to work on his mathematics without concern for financial support.
Papakyriakopoulos is best known for his proofs of Dehn's lemma, the loop theorem, and the sphere theorem, three foundational results for the study of 3-manifolds. In honor of this work, he was awarded the first Oswald Veblen Prize in Geometry in 1964. From the early 1960s on, he mostly worked on the Poincaré conjecture; Bernard Maskit three times produced counterexamples to his attempted proofs.
Tribute
The following unusual limerick was composed by John Milnor, shortly after learning of several graduate students' frustration at completing a project where the work of every Princeton mathematics faculty member was to be summarized in a limerick:
The perfidious lemma of Dehn
Was every topologist's bane
'Til Christos D. Pap-
akyriakop-
oulos proved it without any strain.
This may be the only limerick where one word spans three lines. The phrase "without any strain" is not meant to indicate that Papa did not expend much energy in his efforts. Rather, it refers to Papa's "tower construction", which quite nicely circumvents much of the difficulty in the cut-and-paste efforts that preceded Papa's proof.
Other activities
Papakyriakopoulos sympathized with leftist politics and in 1941 joined the student branch of the National Liberation Front (EAM). When he went to live in the US in 1948, the Greek authorities reported him to the American authorities as a "dangerous communist" and asked for his extradition, but the Institute for Advanced Study in Princeton gave him protection, as it had done for others suffering political persecution.
He was a reclusive character, spending most of his time in his office listening to his beloved Richard Wagner. Legend has it that in the United States he lived for 25 years in the same hotel room he used when he first arrived in the country, all of his belongings inside his original luggage.
Death
Papakyriakopoulos died of stomach cancer at age 62 in Princeton, New Jersey.
See also
List of Greek mathematicians
|
https://en.wikipedia.org/wiki/Birkhoff%20interpolation
|
In mathematics, Birkhoff interpolation is an extension of polynomial interpolation. It refers to the problem of finding a polynomial $p$ of degree $d$ such that only certain derivatives have specified values at specified points:
$p^{(n_i)}(x_i) = y_i \qquad \text{for } i = 1, \ldots, d + 1,$
where the data points $(x_i, y_i)$ and the nonnegative integers $n_i$ are given. It differs from Hermite interpolation in that it is possible to specify derivatives of $p$ at some points without specifying the lower derivatives or the polynomial itself. The name refers to George David Birkhoff, who first studied the problem in 1906.
Existence and uniqueness of solutions
In contrast to Lagrange interpolation and Hermite interpolation, a Birkhoff interpolation problem does not always have a unique solution. For instance, there is no quadratic polynomial $p$ such that $p(-1) = p(1) = 0$ and $p'(0) = 1$. On the other hand, the Birkhoff interpolation problem where the values of $p'(-1)$, $p(0)$ and $p'(1)$ are given always has a unique solution.
An important problem in the theory of Birkhoff interpolation is to classify those problems that have a unique solution. Schoenberg formulated the problem as follows. Let $d + 1$ denote the number of conditions (as above) and let $k$ be the number of interpolation points. Given a matrix $E$, all of whose entries are either $0$ or $1$, such that exactly $d + 1$ entries are $1$, the corresponding problem is to determine $p$ such that
$p^{(j)}(x_i) = c_{i,j} \qquad \text{whenever } e_{i,j} = 1.$
The matrix $E$ is called the incidence matrix. For example, the incidence matrices for the interpolation problems mentioned in the previous paragraph are:
$\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad \text{and} \qquad \begin{pmatrix} 0 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}.$
Now the question is: Does a Birkhoff interpolation problem with a given incidence matrix have a unique solution for any choice of the interpolation points?
The case with $k = 2$ interpolation points was tackled by George Pólya in 1931. Let $m_j$ denote the sum of the entries in the first $j$ columns of the incidence matrix:
$m_j = \sum_{i=1}^{k} \sum_{l=0}^{j-1} e_{i,l}.$
Then the Birkhoff interpolation problem with $k = 2$ has a unique solution if and only if $m_j \ge j$ for all $j$. Schoenberg showed that this is a necessary condition for all values of $k$.
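Both behaviours can be checked directly by writing the interpolation conditions as a linear system in the coefficients of p; the sketch below (with made-up data values) does this for the two quadratic problems discussed above.

import numpy as np

# The solvable problem p'(-1) = d1, p(0) = v, p'(1) = d2 for p(x) = a0 + a1*x + a2*x^2;
# since p'(x) = a1 + 2*a2*x, each condition is a linear equation in (a0, a1, a2).
d1, v, d2 = -1.0, 2.0, 3.0   # illustrative data values
A = np.array([[0.0, 1.0, -2.0],   # p'(-1) = d1
              [1.0, 0.0,  0.0],   # p(0)   = v
              [0.0, 1.0,  2.0]])  # p'(1)  = d2
a = np.linalg.solve(A, [d1, v, d2])   # nonsingular matrix, hence a unique solution
p = np.polynomial.Polynomial(a)
print(p.deriv()(-1.0), p(0.0), p.deriv()(1.0))   # reproduces d1, v, d2

# The unsolvable problem p(-1) = p(1) = 0, p'(0) = 1 yields a singular matrix:
B = np.array([[1.0, -1.0, 1.0],   # p(-1) = 0
              [1.0,  1.0, 1.0],   # p(1)  = 0
              [0.0,  1.0, 0.0]])  # p'(0) = 1
print(np.linalg.det(B))   # 0.0: no unique quadratic exists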
Some examples
Consider a differentiable function $f$ on $[-1, 1]$ such that $f(-1) = f(1)$ and $f'(0) \neq 0$. Let us see that there is no Birkhoff interpolation quadratic polynomial $p$ such that $p(-1) = f(-1)$, $p(1) = f(1)$ and $p'(0) = f'(0)$: since $p(-1) = p(1)$, one may write the polynomial as $p(x) = a(x^2 - 1) + b$ (by completing the square), where $a, b$ are merely the interpolation coefficients. The derivative of the interpolation polynomial is given by $p'(x) = 2ax$. This implies $p'(0) = 0$, however this is absurd, since $f'(0)$ is not necessarily $0$. The incidence matrix is given by:
$\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{pmatrix}.$
Consider a differentiable function $f$ on $[-1, 1]$, and denote its derivative with $f'$. Let us see that there is indeed a Birkhoff interpolation quadratic polynomial $p$ such that $p'(-1) = f'(-1)$, $p(0) = f(0)$ and $p'(1) = f'(1)$. Construct the interpolating polynomial $q$ of $f'$ at the nodes $-1$ and $1$, such that $q(-1) = f'(-1)$ and $q(1) = f'(1)$. Thus the polynomial
$p(x) = f(0) + \int_0^x q(t)\, dt$
is the Birkhoff interpolating polynomial. The incidence matrix is given by:
$\begin{pmatrix} 0 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}.$
Given a natural number $n$ and a differentiable function $f$ on $[a, b]$, is there a polynomial $p$ such that $p(a) = f(a)$ and $p'(x_i) = f'(x_i)$ for $i = 1, \ldots, n$, with $a < x_1 < \cdots < x_n < b$? Construct the Lagrange/Newton polynomial $q$ (the same interpolating polynomial, in different forms of calculating and expressing it) that satisfies $q(x_i) = f'(x_i)$ for $i = 1, \ldots, n$; then the polynomial $p(x) = f(a) + \int_a^x q(t)\, dt$ is the Birkhoff interpol
|
https://en.wikipedia.org/wiki/Superadditivity
|
In mathematics, a function $f$ is superadditive if
$f(x + y) \ge f(x) + f(y)$
for all $x$ and $y$ in the domain of $f.$
Similarly, a sequence $a_1, a_2, \ldots$ is called superadditive if it satisfies the inequality
$a_{n+m} \ge a_n + a_m$
for all $m$ and $n.$
The term "superadditive" is also applied to functions from a Boolean algebra to the real numbers, such as lower probabilities.
Examples of superadditive functions
The map $f(x) = x^2$ is a superadditive function for nonnegative real numbers because the square of $x + y$ is always greater than or equal to the square of $x$ plus the square of $y$, for nonnegative real numbers $x$ and $y$:
$f(x + y) = (x + y)^2 = x^2 + y^2 + 2xy \ge x^2 + y^2 = f(x) + f(y).$
The determinant is superadditive for nonnegative Hermitian matrices, that is, if $A, B$ are nonnegative Hermitian $n \times n$ matrices, then $\det(A + B) \ge \det(A) + \det(B).$ This follows from the Minkowski determinant theorem, which more generally states that $\det(\cdot)^{1/n}$ is superadditive (equivalently, concave) for nonnegative Hermitian matrices of size $n$: if $A, B$ are nonnegative Hermitian then
$\det(A + B)^{1/n} \ge \det(A)^{1/n} + \det(B)^{1/n}.$
Horst Alzer proved that Hadamard's gamma function $H(x)$ is superadditive for all real numbers $x, y \ge 1.5031.$
Mutual information
Properties
If $f$ is a superadditive function whose domain contains $0,$ then $f(0) \le 0.$ To see this, take the inequality at the top with $x = y = 0$: $f(0 + 0) \ge f(0) + f(0).$ Hence $f(0) \le f(0 + 0) - f(0) = 0.$
The negative of a superadditive function is subadditive.
Fekete's lemma
The major reason for the use of superadditive sequences is the following lemma due to Michael Fekete.
Lemma: (Fekete) For every superadditive sequence $a_1, a_2, \ldots,$ the limit $\lim_{n \to \infty} a_n / n$ is equal to the supremum $\sup_n a_n / n.$ (The limit may be positive infinity, as is the case with the sequence $a_n = \log n!,$ for example.)
The analogue of Fekete's lemma holds for subadditive functions as well.
There are extensions of Fekete's lemma that do not require the definition of superadditivity above to hold for all $m$ and $n.$
There are also results that allow one to deduce the rate of convergence to the limit whose existence is stated in Fekete's lemma if some kind of both superadditivity and subadditivity is present. A good exposition of this topic may be found in Steele (1997).
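The content of the lemma is easy to see numerically; the sketch below uses the superadditive sequence a_n = -sqrt(n) (superadditive because sqrt(n + m) <= sqrt(n) + sqrt(m)), chosen here purely for illustration.

import math

# Fekete's lemma: for a superadditive sequence, a_n / n converges to sup a_n / n.
a = lambda n: -math.sqrt(n)

for n in (10, 1000, 100000):
    print(n, a(n) / n)   # tends to 0, which is also sup_n a_n / n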
|
https://en.wikipedia.org/wiki/Plateau%20%28mathematics%29
|
A plateau of a function is a part of its domain where the function has constant value.
More formally, let U, V be topological spaces. A plateau for a function f: U → V is a path-connected set of points P of U such that for some y we have
f (p) = y
for all p in P.
Examples
Plateaus can be observed in mathematical models as well as natural systems. In nature, plateaus can be observed in physical, chemical and biological systems. An example of an observed plateau in the natural world is in the tabulation of biodiversity of life through time.
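As a small computational illustration (with a made-up function), the clamp f(x) = min(1, max(0, x)) has a plateau at the value 1: the set of points where it attains that value is an interval, hence path-connected.

# Locating the plateau of the clamp function on a sample grid.
f = lambda x: min(1.0, max(0.0, x))

xs = [i * 0.01 for i in range(-200, 401)]   # grid on [-2, 4]
plateau = [x for x in xs if f(x) == 1.0]    # P = {p : f(p) = 1}
print(plateau[0], plateau[-1])              # approx 1.0 and 4.0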
See also
Level set
Contour line
Minimal surface
References
Topology
|
https://en.wikipedia.org/wiki/Lindel%C3%B6f%20hypothesis
|
In mathematics, the Lindelöf hypothesis is a conjecture by Finnish mathematician Ernst Leonard Lindelöf about the rate of growth of the Riemann zeta function on the critical line. This hypothesis is implied by the Riemann hypothesis. It says that for any ε > 0,
ζ(1/2 + it) = O(t^ε)
as t tends to infinity (see big O notation). Since ε can be replaced by a smaller value, the conjecture can be restated as follows: for any positive ε,
ζ(1/2 + it) = o(t^ε).
The μ function
If σ is real, then μ(σ) is defined to be the infimum of all real numbers a such that ζ(σ + iT) = O(T^a). It is trivial to check that μ(σ) = 0 for σ > 1, and the functional equation of the zeta function implies that μ(σ) = μ(1 − σ) − σ + 1/2. The Phragmén–Lindelöf theorem implies that μ is a convex function. The Lindelöf hypothesis states μ(1/2) = 0, which together with the above properties of μ implies that μ(σ) is 0 for σ ≥ 1/2 and 1/2 − σ for σ ≤ 1/2.
Lindelöf's convexity result together with μ(1) = 0 and μ(0) = 1/2 implies that 0 ≤ μ(1/2) ≤ 1/4. The upper bound of 1/4 was lowered by Hardy and Littlewood to 1/6 by applying Weyl's method of estimating exponential sums to the approximate functional equation. It has since been lowered to slightly less than 1/6 by several authors using long and technical proofs.
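The conjectured behaviour can at least be watched numerically; the sketch below uses mpmath to sample |ζ(1/2 + it)| at a few heights (no numeric experiment can prove a limsup statement, so this is illustration only).

import mpmath

# Under the Lindelöf hypothesis, log|zeta(1/2 + it)| / log t should be
# eventually smaller than any fixed epsilon > 0.
for t in (1e2, 1e4, 1e6):
    z = abs(mpmath.zeta(mpmath.mpc(0.5, t)))
    print(int(t), z, float(mpmath.log(z) / mpmath.log(t)))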
Relation to the Riemann hypothesis
Backlund (1918–1919) showed that the Lindelöf hypothesis is equivalent to the following statement about the zeros of the zeta function: for every ε > 0, the number of zeros with real part at least 1/2 + ε and imaginary part between T and T + 1 is o(log(T)) as T tends to infinity. The Riemann hypothesis implies that there are no zeros at all in this region and so implies the Lindelöf hypothesis. The number of zeros with imaginary part between T and T + 1 is known to be O(log(T)), so the Lindelöf hypothesis seems only slightly stronger than what has already been proved, but in spite of this it has resisted all attempts to prove it.
Means of powers (or moments) of the zeta function
The Lindelöf hypothesis is equivalent to the statement that
$\int_0^T |\zeta(1/2 + it)|^{2k}\, dt = O(T^{1 + \varepsilon})$
for all positive integers k and all positive real numbers ε. This has been proved for k = 1 or 2, but the case k = 3 seems much harder and is still an open problem.
There is a much more precise conjecture about the asymptotic behavior of the integral: it is believed that
$\int_0^T |\zeta(1/2 + it)|^{2k}\, dt = T \sum_{j=0}^{k^2} c_{k,j} \log^j(T) + o(T)$
for some constants c_{k,j}. This has been proved by Littlewood for k = 1 and by Heath-Brown for k = 2 (extending a result of Ingham, who found the leading term).
Conrey and Ghosh suggested the value of the leading coefficient when k is 6, and Keating and Snaith used random matrix theory to suggest some conjectures for the values of the coefficients for higher k. The leading coefficients are conjectured to be the product of an elementary factor, a certain product over primes, and the number of n × n Young tableaux given by the sequence
1, 1, 2, 42, 24024, 701149020, ... .
Other consequences
Denoting by pn the n-th prime number, a result by Albert Ingham shows that the Lindelöf hypothesis
|
https://en.wikipedia.org/wiki/Alfred%20Gray%20%28mathematician%29
|
Alfred Gray (October 22, 1939 – October 27, 1998) was an American mathematician whose main research interests were in differential geometry. He also made contributions in the fields of complex variables and differential equations.
Short biography
Alfred Gray was born in Dallas, Texas, to Alfred James Gray and Eloise Evans, and studied mathematics at the University of Kansas.
He received a Ph.D. from the University of California, Los Angeles in 1964 and spent four years at University of California, Berkeley.
From 1970–1998 he was a professor at the University of Maryland, College Park.
He died in Bilbao, Spain of a heart attack while working with students in a computer lab at Colegio Mayor Miguel de Unamuno around 4 AM, on October 27, 1998.
Mathematical contributions
In the broad area of differential geometry, he made specific contributions to classifying various types of geometric structures, such as Kähler manifolds and almost Hermitian manifolds.
Gray introduced the concept of a nearly Kähler manifold, gave topological obstructions to the existence of geometrical structures, made several contributions in the computation of the volume of tubes and balls, curvature identities, etc.
He published a book on tubes and is the author of two textbooks and over one hundred scientific articles.
His books were translated into Spanish, Italian, Russian and German.
He was a pioneer in the use of computer graphics in teaching differential geometry (particularly the geometry of curves and surfaces) and of using electronic computation in teaching both differential geometry and ordinary differential equations.
External links
List of publications
Differential geometry web site maintained in his honor
Webpage of the International Congress on Differential Geometry in Memory of Alfred Gray
A short film in remembrance of Prof. Alfred Gray
|
https://en.wikipedia.org/wiki/Invariance%20theorem
|
Invariance theorem may refer to:
Invariance of domain, a theorem in topology
A theorem pertaining to Kolmogorov complexity
A result in classical mechanics for adiabatic invariants
A theorem of algorithmic probability
See also
Invariant (mathematics)
|
https://en.wikipedia.org/wiki/The%20Man%20Who%20Counted
|
The Man Who Counted (original Portuguese title: O Homem que Calculava) is a book on recreational mathematics and curious word problems by Brazilian writer Júlio César de Mello e Souza, published under the pen name Malba Tahan. Since its first publication in 1938, the book has been immensely popular in Brazil and abroad, not only among mathematics teachers but among the general public as well.
The book has been published in many other languages, including Catalan, English (in the UK and in the US), German, Italian, and Spanish, and is recommended as a paradidactic source in many countries. It earned its author a prize from the Brazilian Literary Academy.
Plot summary
First published in Brazil in 1949, O Homem que Calculava is a series of tales in the style of the Arabian Nights, but revolving around mathematical puzzles and curiosities. The book is ostensibly a translation by Brazilian scholar Breno de Alencar Bianco of an original manuscript by Malba Tahan, a thirteenth-century Persian scholar of the Islamic Empire – both equally fictitious.
The first two chapters tell how Hanak Tade Maia was traveling from Samarra to Baghdad when he met Beremiz Samir, a young lad from Khoy with amazing mathematical abilities. The traveler then invited Beremiz to come with him to Baghdad, where a man with his abilities will certainly find profitable employment. The rest of the book tells of various incidents that befell the two men along the road and in Baghdad. In all those events, Beremiz Samir uses his abilities with calculation like a magic wand to amaze and entertain people, settle disputes, and find wise and just solutions to seemingly unsolvable problems.
In the first incident along their trip (chapter III), Beremiz settles a heated inheritance dispute between three brothers. Their father had left them 35 camels, of which 1/2 (17.5 camels) should go to his eldest son, 1/3 (11.666... camels) to the middle one, and 1/9 (3.888... camels) to the youngest. To solve the brothers' dilemma, Beremiz convinces Hanak to donate his only camel to the dead man's estate. Then, with 36 camels, Beremiz gives 18, 12, and 4 animals to the three heirs, so that each profits from the new division. Of the remaining two camels, one is returned to Hanak, and the other is claimed by Beremiz as his reward.
The translator's notes observe that the 17-animal inheritance puzzle, a mathematical puzzle whose first publication is in the works of Muhaqiqi Naraqi, is a variant of this problem, with 17 camels to be divided in the same proportions. It is found in hundreds of recreational mathematics books, such as those of E. Fourrey (1949) and G. Boucheny (1939). However, the 17-camel version leaves only one camel at the end, with no net profit for the estate's executor.
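The trick works because the bequeathed fractions do not sum to a whole estate, which a few lines of arithmetic make explicit:

from fractions import Fraction

# The three shares leave a surplus: 1/2 + 1/3 + 1/9 = 17/18 < 1.
print(Fraction(1, 2) + Fraction(1, 3) + Fraction(1, 9))   # 17/18

# With the borrowed camel the estate is 36 and every share divides evenly;
# 18 + 12 + 4 = 34 camels are handed out, and 2 camels remain.
print(36 // 2, 36 // 3, 36 // 9, 36 - (18 + 12 + 4))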
At the end of the book, Beremiz uses his abilities to win the hand of his student and secret love Telassim, the daughter of one of the Caliph's advisers. (The caliph mentioned is Al-Musta'sim, the only real charac
|
https://en.wikipedia.org/wiki/Variational%20inequality
|
In mathematics, a variational inequality is an inequality involving a functional, which has to be solved for all possible values of a given variable, belonging usually to a convex set. The mathematical theory of variational inequalities was initially developed to deal with equilibrium problems, precisely the Signorini problem: in that model problem, the functional involved was obtained as the first variation of the involved potential energy. Therefore, it has a variational origin, recalled by the name of the general abstract problem. The applicability of the theory has since been expanded to include problems from economics, finance, optimization and game theory.
History
The first problem involving a variational inequality was the Signorini problem, posed by Antonio Signorini in 1959 and solved by Gaetano Fichera in 1963. Later on, Guido Stampacchia proved his generalization of the Lax–Milgram theorem in order to study the regularity problem for partial differential equations, and coined the name "variational inequality" for all the problems involving inequalities of this kind. Georges Duvaut encouraged his graduate students to study and expand on Fichera's work after attending a conference in Brixen in 1965 where Fichera presented his study of the Signorini problem; thus the theory became widely known throughout France. Also in 1965, Stampacchia and Jacques-Louis Lions extended earlier results, announcing them in a short note; full proofs of their results appeared later.
Definition
The definition of a variational inequality is the following one.
Given a Banach space $E$, a subset $K$ of $E$, and a functional $F \colon K \to E^{\ast}$ from $K$ to the dual space $E^{\ast}$ of the space $E$, the variational inequality problem is the problem of solving, for the variable $x$ belonging to $K$, the following inequality:
$\langle F(x), y - x \rangle \ge 0 \qquad \text{for all } y \in K,$
where $\langle \cdot,\cdot \rangle \colon E^{\ast} \times E \to \mathbb{R}$ is the duality pairing.
In general, the variational inequality problem can be formulated on any finite- or infinite-dimensional Banach space. The three obvious steps in the study of the problem are the following ones:
Prove the existence of a solution: this step implies the mathematical correctness of the problem, showing that there is at least one solution.
Prove the uniqueness of the given solution: this step implies the physical correctness of the problem, showing that the solution can be used to represent a physical phenomenon. It is a particularly important step since most of the problems modeled by variational inequalities are of physical origin.
Find the solution or prove its regularity.
Examples
The problem of finding the minimal value of a real-valued function of real variable
This is a standard example problem: consider the problem of finding the minimal value of a differentiable function $f$ over a closed interval $I = [a, b]$. Let $x^{\ast}$ be a point in $I$ where the minimum occurs. Three cases can occur:
if $a < x^{\ast} < b$ then $f'(x^{\ast}) = 0$;
if $x^{\ast} = a$ then $f'(x^{\ast}) \ge 0$;
if $x^{\ast} = b$ then $f'(x^{\ast}) \le 0$.
These necessary conditions can be summarized as the problem of finding $x^{\ast} \in I$ such that
$f'(x^{\ast})(y - x^{\ast}) \ge 0 \quad \text{for all } y \in I.$
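This one-dimensional problem can also be solved numerically with the classical projection iteration x ← P_I(x − γ f′(x)), whose fixed points are exactly the solutions of the variational inequality; the function and step size below are chosen only for illustration.

# Projection method for f'(x*)(y - x*) >= 0 on I = [0, 1], with f(x) = (x - 2)^2.
df = lambda x: 2.0 * (x - 2.0)
a, b, gamma = 0.0, 1.0, 0.1
project = lambda x: min(b, max(a, x))

x = 0.5
for _ in range(200):
    x = project(x - gamma * df(x))
print(x, df(x))   # x* = 1.0 with f'(x*) = -2 <= 0: the boundary case above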
|
https://en.wikipedia.org/wiki/Riesz%20function
|
In mathematics, the Riesz function is an entire function defined by Marcel Riesz in connection with the Riemann hypothesis, by means of the power series
$\mathrm{Riesz}(x) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1} x^k}{(k - 1)!\, \zeta(2k)}.$
If we set $F(x) = \tfrac{1}{2} \mathrm{Riesz}(4\pi^2 x)$, we may define it in terms of the coefficients of the Laurent series development of the hyperbolic (or equivalently, the ordinary) cotangent around zero. If
$\frac{x}{2} \coth\frac{x}{2} = \sum_{n=0}^{\infty} c_n x^{2n},$
then F may be defined as
$F(x) = \sum_{k=1}^{\infty} \frac{x^k}{c_k\,(k-1)!}.$
The values of ζ(2k) approach one for increasing k, and comparing the series for the Riesz function with that for $x e^x$ shows that it defines an entire function. Alternatively, F may be defined as
$F(x) = \sum_{k=1}^{\infty} \frac{k^{\overline{k+1}} x^k}{B_{2k}},$
where $x^{\overline{n}} = x(x+1)\cdots(x+n-1)$ denotes the rising factorial power in the notation of D. E. Knuth and the numbers $B_n$ are the Bernoulli numbers. The series is one of alternating terms and the function quickly tends to minus infinity for increasingly negative values of x. Positive values of x are more interesting and delicate.
Riesz criterion
It can be shown that
$\mathrm{Riesz}(x) = O(x^e)$
for any exponent e larger than 1/2, where this is big O notation, taking values both positive and negative. Riesz showed that the Riemann hypothesis is equivalent to the claim that the above is true for any e larger than 1/4. In the same paper, he added a slightly pessimistic note too: «Je ne sais pas encore décider si cette condition facilitera la vérification de l'hypothèse» ("I do not yet know how to decide if this condition will facilitate the verification of the hypothesis").
Mellin transform of the Riesz function
The Riesz function is related to the Riemann zeta function via its Mellin transform. If we take
$\mathcal{M}\{\mathrm{Riesz}(x)\}(s) = \int_0^{\infty} \mathrm{Riesz}(x)\, x^{s-1}\, dx,$
we see that if $\Re s > -1$ then
$\int_0^1 \mathrm{Riesz}(x)\, x^{s-1}\, dx$
converges, whereas from the growth condition we have that if $\Re s < -\tfrac{1}{2}$ then
$\int_1^{\infty} \mathrm{Riesz}(x)\, x^{s-1}\, dx$
converges. Putting this together, we see the Mellin transform of the Riesz function is defined on the strip $-1 < \Re s < -\tfrac{1}{2}$.
On this strip, we have (cf. Ramanujan's master theorem)
$\mathcal{M}\{\mathrm{Riesz}(x)\}(s) = \frac{\Gamma(s + 1)}{\zeta(-2s)}.$
From the inverse Mellin transform, we now get an expression for the Riesz function, as
$\mathrm{Riesz}(x) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} \frac{\Gamma(s + 1)}{\zeta(-2s)}\, x^{-s}\, ds,$
where c is between minus one and minus one-half. If the Riemann hypothesis is true, we can move the line of integration to any value less than minus one-fourth, and hence we get the equivalence between the fourth-root rate of growth for the Riesz function and the Riemann hypothesis.
J. García (see references) gave the integral representation of $\mathrm{Riesz}(x)$ using Borel resummation as
where $\{x\}$ is the fractional part of $x$.
Calculation of the Riesz function
The Maclaurin series coefficients of F increase in absolute value until they reach their maximum at the 40th term, where the coefficient is about $-1.753 \times 10^{17}$. By the 109th term they have dropped below one in absolute value. Taking the first 1000 terms suffices to give a very accurate value of $F(x)$ for moderate values of x. However, this would require evaluating a polynomial of degree 1000 either using rational arithmetic with the coefficients of large numerator or denominator, or using floating point computations of over 100 digits. An alternative is to use the inverse Mellin transform defined above and numerically integrate. Neither approach is computationally easy.
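Direct summation of the reconstructed power series is nonetheless workable with high-precision arithmetic, as the following mpmath sketch shows (the precision and truncation point are chosen ad hoc):

import mpmath

# Riesz(x) = sum_{k>=1} (-1)^(k+1) x^k / ((k-1)! zeta(2k)); the terms grow
# large before the factorial takes over, so extra working precision is needed.
mpmath.mp.dps = 120

def riesz(x, terms=400):
    x = mpmath.mpf(x)
    return mpmath.fsum((-1) ** (k + 1) * x ** k / (mpmath.factorial(k - 1) * mpmath.zeta(2 * k))
                       for k in range(1, terms + 1))

print(riesz(1), riesz(100))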
Another approach is to use acceleration of convergence. We have
Since ζ(2k) approaches one as
|
https://en.wikipedia.org/wiki/Braided%20monoidal%20category
|
In mathematics, a commutativity constraint $\gamma$ on a monoidal category $\mathcal{C}$ is a choice of isomorphism $\gamma_{A,B} \colon A \otimes B \to B \otimes A$ for each pair of objects A and B which form a "natural family." In particular, to have a commutativity constraint, one must have $A \otimes B \cong B \otimes A$ for all pairs of objects $A, B \in \mathcal{C}$.
A braided monoidal category is a monoidal category equipped with a braiding—that is, a commutativity constraint that satisfies axioms including the hexagon identities defined below. The term braided references the fact that the braid group plays an important role in the theory of braided monoidal categories. Partly for this reason, braided monoidal categories and other topics are related in the theory of knot invariants.
Alternatively, a braided monoidal category can be seen as a tricategory with one 0-cell and one 1-cell.
Braided monoidal categories were introduced by André Joyal and Ross Street in a 1986 preprint. A modified version of this paper was published in 1993.
The hexagon identities
For $\mathcal{C}$ along with the commutativity constraint $\gamma$ to be called a braided monoidal category, the following hexagonal diagrams must commute for all objects $A, B, C$. Here $\alpha$ is the associativity isomorphism coming from the monoidal structure on $\mathcal{C}$:
Properties
Coherence
It can be shown that the natural isomorphism $\gamma$, along with the maps coming from the monoidal structure on the category $\mathcal{C}$, satisfies various coherence conditions, which state that various compositions of structure maps are equal. In particular:
The braiding commutes with the units. That is, the following diagram commutes:
The action of $\gamma$ on an $N$-fold tensor product factors through the braid group. In particular,
as maps $A_1 \otimes \cdots \otimes A_N \to A_N \otimes \cdots \otimes A_1$. Here we have left out the associator maps.
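Concretely, factoring through the braid group forces the braid relation. The sketch below checks it numerically for a standard U_q(sl2)-type braiding matrix on the tensor square of C^2, with a made-up sample value of q:

import numpy as np

# Braid relation (R x I)(I x R)(R x I) = (I x R)(R x I)(I x R) for a
# braiding matrix R acting on C^2 (x) C^2.
q = 1.3
R = np.array([[q, 0,         0, 0],
              [0, q - 1 / q, 1, 0],
              [0, 1,         0, 0],
              [0, 0,         0, q]])
I2 = np.eye(2)

lhs = np.kron(R, I2) @ np.kron(I2, R) @ np.kron(R, I2)
rhs = np.kron(I2, R) @ np.kron(R, I2) @ np.kron(I2, R)
print(np.max(np.abs(lhs - rhs)))   # ~1e-15: the relation holds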
Variations
There are several variants of braided monoidal categories that are used in various contexts. See, for example, the expository paper of Savage (2009) for an explanation of symmetric and coboundary monoidal categories, and the book by Chari and Pressley (1995) for ribbon categories.
Symmetric monoidal categories
A braided monoidal category is called symmetric if $\gamma$ also satisfies $\gamma_{B,A} \circ \gamma_{A,B} = \mathrm{id}_{A \otimes B}$ for all pairs of objects A and B. In this case the action of $\gamma$ on an $N$-fold tensor product factors through the symmetric group.
Ribbon categories
A braided monoidal category is a ribbon category if it is rigid, and it may preserve quantum trace and co-quantum trace. Ribbon categories are particularly useful in constructing knot invariants.
Coboundary monoidal categories
A coboundary or "cactus" monoidal category is a monoidal category together with a family of natural isomorphisms $\gamma_{A,B} \colon A \otimes B \to B \otimes A$ with the following properties:
$\gamma_{B,A} \circ \gamma_{A,B} = \mathrm{id}_{A \otimes B}$ for all pairs of objects A and B.
The first property shows us that $\gamma_{A,B}^{-1} = \gamma_{B,A}$, thus allowing us to omit the analog to the second defining diagram of a braided monoidal category and ignore the associator maps as implied.
Examples
The category of representations of a group (or a Lie algebra) is a symmetric monoidal category where $\gamma_{A,B}(a \otimes b) = b \otimes a$.
The category of representations of a quantized universal enveloping al
|
https://en.wikipedia.org/wiki/Pierre%20Cartier%20%28mathematician%29
|
Pierre Émile Cartier (born 10 June 1932) is a French mathematician. An associate of the Bourbaki group and at one time a colleague of Alexander Grothendieck, his interests have ranged over algebraic geometry, representation theory, mathematical physics, and category theory.
He studied at the École Normale Supérieure in Paris under Henri Cartan and André Weil. Since his 1958 thesis on algebraic geometry he has worked in a number of fields. He is known for the introduction of the Cartier operator in algebraic geometry in characteristic p, and for work on duality of abelian varieties and on formal groups. He is the eponym of Cartier divisors and Cartier duality.
From 1961 to 1971 he was a professor at the University of Strasbourg. In 1970 he was an Invited Speaker at the International Congress of Mathematicians in Nice. He was awarded the 1978 Prize Ampère of the French Academy of Sciences. In 2012 he became a fellow of the American Mathematical Society.
Publications
Freedom in Mathematics, Springer India, 2016 (with Cédric Villani, Jean Dhombres, Gerhard Heinzmann), .
Translation from the French language edition: Mathématiques en liberté, La Ville Brûle, Montreuil 2012, .
Pierre Cartier: Alexander Grothendieck. A country known only by name. Notices AMS, vol. 62, 2015, no. 4, pp. 373–382, PDF.
See also
Cotangent complex
Dieudonné module
MacMahon's master theorem
External links
Cartier's website at the Institut des Hautes Études Scientifiques, with a photograph, CV, and list of publications
Issue of Moscow Mathematical Journal dedicated to Pierre Cartier
Javier Fresán The Castle of Groups, Interview with Pierre Cartier, EMS Newsletter 2009, pdf
|
https://en.wikipedia.org/wiki/Root%20test
|
In mathematics, the root test is a criterion for the convergence (a convergence test) of an infinite series. It depends on the quantity
$\limsup_{n \to \infty} \sqrt[n]{|a_n|},$
where $a_n$ are the terms of the series, and states that the series converges absolutely if this quantity is less than one, but diverges if it is greater than one. It is particularly useful in connection with power series.
Root test explanation
The root test was developed first by Augustin-Louis Cauchy, who published it in his textbook Cours d'analyse (1821). Thus, it is sometimes known as the Cauchy root test or Cauchy's radical test. For a series
$\sum_{n=1}^{\infty} a_n,$
the root test uses the number
$C = \limsup_{n \to \infty} \sqrt[n]{|a_n|},$
where "lim sup" denotes the limit superior, possibly +∞. Note that if
$\lim_{n \to \infty} \sqrt[n]{|a_n|}$
converges then it equals C and may be used in the root test instead.
The root test states that:
if C < 1 then the series converges absolutely,
if C > 1 then the series diverges,
if C = 1 and the limit approaches strictly from above then the series diverges,
otherwise the test is inconclusive (the series may diverge, converge absolutely or converge conditionally).
There are some series for which C = 1 and the series converges, e.g. $\sum 1/n^2$, and there are others for which C = 1 and the series diverges, e.g. $\sum 1/n$; a numeric illustration of both situations follows.
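The following sketch estimates C for a convergent series and exhibits the inconclusive C = 1 cases (the sample series are standard textbook examples):

# For a_n = n / 2^n the n-th roots tend to C = 1/2 < 1: absolute convergence.
for n in (10, 100, 1000):
    print(n, (n / 2 ** n) ** (1 / n))          # approaches 0.5

# Both 1/n^2 (convergent) and 1/n (divergent) give C = 1: test inconclusive.
for n in (10, 100, 1000):
    print(n, (1 / n ** 2) ** (1 / n), (1 / n) ** (1 / n))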
Application to power series
This test can be used with a power series
where the coefficients cn, and the center p are complex numbers and the argument z is a complex variable.
The terms of this series would then be given by $a_n = c_n (z - p)^n$. One then applies the root test to the $a_n$ as above. Note that sometimes a series like this is called a power series "around p", because the radius of convergence is the radius R of the largest interval or disc centred at p such that the series will converge for all points z strictly in the interior (convergence on the boundary of the interval or disc generally has to be checked separately). A corollary of the root test applied to such a power series is the Cauchy–Hadamard theorem: the radius of convergence is exactly
$R = \frac{1}{\limsup_{n \to \infty} \sqrt[n]{|c_n|}},$
taking care that we really mean ∞ if the denominator is 0.
Proof
The proof of the convergence of a series Σa_n is an application of the comparison test. If for all n ≥ N (N some fixed natural number) we have $\sqrt[n]{|a_n|} \le k < 1$, then $|a_n| \le k^n < 1$. Since the geometric series $\sum_{n=N}^{\infty} k^n$ converges, so does $\sum_{n=N}^{\infty} |a_n|$ by the comparison test. Hence Σa_n converges absolutely.
If $\sqrt[n]{|a_n|} > 1$ for infinitely many n, then a_n fails to converge to 0, hence the series is divergent.
Proof of corollary:
For a power series Σa_n = Σc_n(z − p)^n, we see by the above that the series converges if there exists an N such that for all n ≥ N we have
$\sqrt[n]{|a_n|} = \sqrt[n]{|c_n (z - p)^n|} < 1,$
equivalent to
$\sqrt[n]{|c_n|} \cdot |z - p| < 1$
for all n ≥ N, which implies that in order for the series to converge we must have $|z - p| < 1 / \sqrt[n]{|c_n|}$ for all sufficiently large n. This is equivalent to saying
$|z - p| < \frac{1}{\limsup_{n \to \infty} \sqrt[n]{|c_n|}},$
so $R \le \frac{1}{\limsup_{n \to \infty} \sqrt[n]{|c_n|}}.$ Now the only other place where convergence is possible is when
$\sqrt[n]{|a_n|} = 1$
(since points > 1 will diverge), and this will not change the radius of convergence since these are just the points lying on the boundary of the interval or disc, so
$R = \frac{1}{\limsup_{n \to \infty} \sqrt[n]{|c_n|}}.$
Examples
Example 1:
Applying the root test and using the fa
|
https://en.wikipedia.org/wiki/Linear%20discriminant%20analysis
|
Linear discriminant analysis (LDA), normal discriminant analysis (NDA), or discriminant function analysis is a generalization of Fisher's linear discriminant, a method used in statistics and other fields, to find a linear combination of features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier, or, more commonly, for dimensionality reduction before later classification.
LDA is closely related to analysis of variance (ANOVA) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements. However, ANOVA uses categorical independent variables and a continuous dependent variable, whereas discriminant analysis has continuous independent variables and a categorical dependent variable (i.e. the class label). Logistic regression and probit regression are more similar to LDA than ANOVA is, as they also explain a categorical variable by the values of continuous independent variables. These other methods are preferable in applications where it is not reasonable to assume that the independent variables are normally distributed, which is a fundamental assumption of the LDA method.
LDA is also closely related to principal component analysis (PCA) and factor analysis in that they both look for linear combinations of variables which best explain the data. LDA explicitly attempts to model the difference between the classes of data. PCA, in contrast, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities. Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made.
LDA works when the measurements made on independent variables for each observation are continuous quantities. When dealing with categorical independent variables, the equivalent technique is discriminant correspondence analysis.
Discriminant analysis is used when groups are known a priori (unlike in cluster analysis). Each case must have a score on one or more quantitative predictor measures, and a score on a group measure. In simple terms, discriminant function analysis is classification - the act of distributing things into groups, classes or categories of the same type.
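As a concrete, minimal sketch (the data are synthetic and the class means made up), scikit-learn's implementation shows the uses named above: fitting the discriminant, classifying, and reducing dimension:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)),    # class 0: continuous features
               rng.normal(2, 1, (50, 2))])   # class 1, shifted mean
y = np.array([0] * 50 + [1] * 50)            # categorical dependent variable

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.coef_)                   # the fitted linear combination of features
print(lda.predict([[1.0, 1.0]]))   # used as a linear classifier
print(lda.transform(X).shape)      # (100, 1): dimensionality reduction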
History
The original dichotomous discriminant analysis was developed by Sir Ronald Fisher in 1936. It is different from an ANOVA or MANOVA, which is used to predict one (ANOVA) or multiple (MANOVA) continuous dependent variables by one or more independent categorical variables. Discriminant function analysis is useful in determining whether a set of variables is effective in predicting category membership.
LDA for two classes
Consider a set of observations (also called features, attributes, variables or measurements)
|
https://en.wikipedia.org/wiki/Nome%20%28mathematics%29
|
In mathematics, specifically the theory of elliptic functions, the nome is a special function that belongs to the non-elementary functions. This function is of great importance in the description of the elliptic functions, especially in the description of the modular identity of the Jacobi theta function, the Hermite elliptic transcendents and the Weber modular functions, that are used for solving equations of higher degrees.
Definition
The nome function is given by
$q = e^{-\pi K'/K} = e^{i\pi \omega_2/\omega_1} = e^{i\pi\tau},$
where $K$ and $iK'$ are the quarter periods, $\omega_1$ and $\omega_2$ are the fundamental pair of periods, and $\tau = iK'/K = \omega_2/\omega_1$ is the half-period ratio. The nome can be taken to be a function of any one of these quantities; conversely, any one of these quantities can be taken as a function of the nome. Each of them uniquely determines the others when $0 < q < 1$. That is, when $0 < q < 1$, the mappings between these various symbols are both 1-to-1 and onto, and so can be inverted: the quarter periods, the half-periods and the half-period ratio can be explicitly written as functions of the nome. For general complex $q$ with $0 < |q| < 1$, $\tau$ is not a single-valued function of $q$. Explicit expressions for the quarter periods, in terms of the nome, are given in the linked article.
Notationally, the quarter periods and are usually used only in the context of the Jacobian elliptic functions, whereas the half-periods and are usually used only in the context of Weierstrass elliptic functions. Some authors, notably Apostol, use and to denote whole periods rather than half-periods.
The nome is frequently used as a value with which elliptic functions and modular forms can be described; on the other hand, it can also be thought of as function, because the quarter periods are functions of the elliptic modulus : .
The complementary nome is given by
$q_1 = e^{-\pi K/K'}.$
Sometimes the notation $q = e^{2\pi i \tau}$ is used for the square of the nome.
The mentioned functions $K$ and $K'$ are called complete elliptic integrals of the first kind. They are defined as follows:
$K(k) = \int_0^{\pi/2} \frac{d\theta}{\sqrt{1 - k^2 \sin^2\theta}}, \qquad K'(k) = K\!\left(\sqrt{1 - k^2}\right).$
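A short numeric sketch (the sample modulus is chosen arbitrarily) computes the nome directly from these integrals; note that mpmath's ellipk takes the parameter m = k²:

import mpmath

k = mpmath.mpf(1) / 2                 # sample modulus
m = k ** 2                            # parameter used by mpmath.ellipk
q = mpmath.exp(-mpmath.pi * mpmath.ellipk(1 - m) / mpmath.ellipk(m))
print(q)                              # approx 0.0180
print(mpmath.qfrom(m=m))              # mpmath's built-in nome, same value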
Applications
The nome solves the following equation:
$k = \frac{\vartheta_2^2(q)}{\vartheta_3^2(q)}, \qquad q = e^{-\pi K(\sqrt{1 - k^2})/K(k)}.$
This analogon is valid for the Pythagorean complementary modulus:
$k' = \sqrt{1 - k^2} = \frac{\vartheta_4^2(q)}{\vartheta_3^2(q)},$
where $\vartheta_2, \vartheta_3, \vartheta_4$ are the complete Jacobi theta functions and $K(k)$ is the complete elliptic integral of the first kind with modulus shown in the formula above. For the complete theta functions these definitions introduced by Sir Edmund Taylor Whittaker and George Neville Watson are valid:
These three definition formulas are written down in the fourth edition of the book A Course in Modern Analysis written by Whittaker and Watson on the pages 469 and 470. The nome is commonly used as the starting point for the construction of Lambert series, the q-series and more generally the q-analogs. That is, the half-period ratio is commonly used as a coordinate on the complex upper half-plane, typically endowed with the Poincaré metric to obtain the Poincaré half-plane model. The nome then serves as a coordinate on a punctured disk of unit radius; it is punctured because is not part of the disk (or rather, corresponds to ). This endows the punctured
|
https://en.wikipedia.org/wiki/Quarter%20period
|
In mathematics, the quarter periods K(m) and iK ′(m) are special functions that appear in the theory of elliptic functions.
The quarter periods K and iK ′ are given by
$K(m) = \int_0^{\pi/2} \frac{d\theta}{\sqrt{1 - m \sin^2\theta}}$
and
$K'(m) = K(1 - m).$
When m is a real number, 0 < m < 1, then both K and K ′ are real numbers. By convention, K is called the real quarter period and iK ′ is called the imaginary quarter period. Any one of the numbers m, K, K ′, or K ′/K uniquely determines the others.
These functions appear in the theory of Jacobian elliptic functions; they are called quarter periods because the elliptic functions $\operatorname{sn} u$ and $\operatorname{cn} u$ are periodic functions with periods $4K$ and $4iK'.$ However, the function $\operatorname{sn}$ is also periodic with a smaller period (in terms of the absolute value) than $4iK'$, namely $2iK'$.
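These period statements are easy to probe numerically (the parameter and base point below are arbitrary samples):

import mpmath

m = mpmath.mpf('0.7')                      # sample parameter
K = mpmath.ellipk(m)
Kp = mpmath.ellipk(1 - m)
u = mpmath.mpc('0.3', '0.2')               # arbitrary base point

sn = lambda z: mpmath.ellipfun('sn', z, m=m)
print(abs(sn(u + 4 * K) - sn(u)))          # ~0: 4K is a period of sn
print(abs(sn(u + 2j * Kp) - sn(u)))        # ~0: so is the smaller 2iK'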
Notation
The quarter periods are essentially the elliptic integral of the first kind, by making the substitution $m = k^2$. In this case, one writes $K(k)$ instead of $K(m)$, understanding the difference between the two depends notationally on whether $k$ or $m$ is used. This notational difference has spawned a terminology to go with it:
$m$ is called the parameter
$m_1 = 1 - m$ is called the complementary parameter
$k$ is called the elliptic modulus
$k'$ is called the complementary elliptic modulus, where $k'^2 = m_1$
$\alpha$ the modular angle, where $m = \sin^2\alpha$
$\alpha' = \pi/2 - \alpha$ the complementary modular angle. Note that
$m_1 = \sin^2\alpha'.$
The elliptic modulus can be expressed in terms of the quarter periods as
$k = \operatorname{ns}(K + iK')$
and
$k' = \operatorname{dn}(K),$
where $\operatorname{ns}$ and $\operatorname{dn}$ are Jacobian elliptic functions.
The nome $q$ is given by
$q = e^{-\pi K'/K}.$
The complementary nome is given by
$q_1 = e^{-\pi K/K'}.$
The real quarter period can be expressed as a Lambert series involving the nome:
$K = \frac{\pi}{2} + 2\pi \sum_{n=1}^{\infty} \frac{q^n}{1 + q^{2n}}.$
Additional expansions and relations can be found on the page for elliptic integrals.
References
Milton Abramowitz and Irene A. Stegun (1964), Handbook of Mathematical Functions, Dover Publications, New York. See chapters 16 and 17.
|
https://en.wikipedia.org/wiki/Rad%C3%B3%27s%20theorem%20%28harmonic%20functions%29
|
See also Rado's theorem (Ramsey theory)
In mathematics, Radó's theorem is a result about harmonic functions, named after Tibor Radó. Informally, it says that any "nice looking" shape without holes can be smoothly deformed into a disk.
Suppose Ω is an open, connected and convex subset of the Euclidean space R2 with smooth boundary ∂Ω and suppose that D is the unit disk. Then, given any homeomorphism
μ : ∂D → ∂Ω, there exists a unique harmonic function u : D → Ω such that u = μ on ∂D and u is a diffeomorphism.
References
R. Schoen, S. T. Yau (1997), Lectures on Harmonic Maps, International Press, Inc., Boston, Massachusetts, page 4.
|
https://en.wikipedia.org/wiki/Search%20problem
|
In the mathematics of computational complexity theory, computability theory, and decision theory, a search problem is a type of computational problem represented by a binary relation. Intuitively, the problem consists in finding a structure "y" in an object "x". An algorithm is said to solve the problem if, whenever at least one corresponding structure exists, it outputs one occurrence of this structure; otherwise, the algorithm stops with an appropriate output ("not found" or any message of the like).
Every search problem also has a corresponding decision problem, namely
L(R) = {x | there exists y such that R(x, y)}.
This definition may be generalized to n-ary relations using any suitable encoding which allows multiple strings to be compressed into one string (for instance by listing them consecutively with a delimiter).
More formally, a relation R can be viewed as a search problem, and a Turing machine which calculates R is also said to solve it. More formally, if R is a binary relation such that field(R) ⊆ Γ+ and T is a Turing machine, then T calculates R if:
If x is such that there is some y such that R(x, y) then T accepts x with output z such that R(x, z) (there may be multiple y, and T need only find one of them)
If x is such that there is no y such that R(x, y) then T rejects x
(Note that the graph of a partial function is a binary relation, and if T calculates a partial function then there is at most one possible output.)
Such problems occur very frequently in graph theory and combinatorial optimization, for example, where searching for structures such as particular matchings, optimal cliques, particular stable sets, etc. are subjects of interest.
Definition
A search problem is often characterized by:
A set of states
A start state
A goal state or goal test: a boolean function which tells us whether a given state is a goal state
A successor function: a mapping from a state to a set of new states
Objective
Find a solution when not given an algorithm to solve a problem, but only a specification of what a solution looks like.
Search method
Generic search algorithm: given a graph, start nodes, and goal nodes, incrementally explore paths from the start nodes.
Maintain a frontier of paths from the start node that have been explored.
As search proceeds, the frontier expands into the unexplored nodes until a goal node is encountered.
The way in which the frontier is expanded defines the search strategy.
Input: a graph,
a set of start nodes,
Boolean procedure goal(n) that tests if n is a goal node.
frontier := {<s> : s is a start node};
while frontier is not empty:
    select and remove path <n0, ..., nk> from frontier;
    if goal(nk)
        return <n0, ..., nk>;
    for every neighbor n of nk
        add <n0, ..., nk, n> to frontier;
end while
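A runnable version of this generic algorithm, with the frontier discipline left as the tunable choice that fixes the search strategy (the graph is a toy example):

from collections import deque

def generic_search(neighbors, start_nodes, is_goal):
    # The frontier holds paths; popleft() gives breadth-first search,
    # pop() from the right end would give depth-first search instead.
    frontier = deque([s] for s in start_nodes)
    while frontier:
        path = frontier.popleft()          # select and remove a path
        if is_goal(path[-1]):
            return path
        for n in neighbors(path[-1]):
            frontier.append(path + [n])    # expand into unexplored nodes
    return None                            # "not found"

graph = {1: [2, 3], 2: [4], 3: [4], 4: []}
print(generic_search(lambda v: graph[v], [1], lambda v: v == 4))   # [1, 2, 4]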
See also
Unbounded search operator
Decision problem
Optimization problem
Counting problem (complexity)
Function problem
Search games
|
https://en.wikipedia.org/wiki/Maximal%20subgroup
|
In mathematics, the term maximal subgroup is used to mean slightly different things in different areas of algebra.
In group theory, a maximal subgroup H of a group G is a proper subgroup, such that no proper subgroup K contains H strictly. In other words, H is a maximal element of the partially ordered set of subgroups of G that are not equal to G. Maximal subgroups are of interest because of their direct connection with primitive permutation representations of G. They are also much studied for the purposes of finite group theory: see for example Frattini subgroup, the intersection of the maximal subgroups.
In semigroup theory, a maximal subgroup of a semigroup S is a subgroup (that is, a subsemigroup which forms a group under the semigroup operation) of S which is not properly contained in another subgroup of S. Notice that, here, there is no requirement that a maximal subgroup be proper, so if S is in fact a group then its unique maximal subgroup (as a semigroup) is S itself. Considering subgroups, and in particular maximal subgroups, of semigroups often allows one to apply group-theoretic techniques in semigroup theory. There is a one-to-one correspondence between idempotent elements of a semigroup and maximal subgroups of the semigroup: each idempotent element is the identity element of a unique maximal subgroup.
Existence of maximal subgroup
Any proper subgroup of a finite group is contained in some maximal subgroup, since the proper subgroups form a finite partially ordered set under inclusion. There are, however, infinite abelian groups that contain no maximal subgroups, for example the Prüfer group.
Maximal normal subgroup
Similarly, a normal subgroup N of G is said to be a maximal normal subgroup (or maximal proper normal subgroup) of G if N < G and there is no normal subgroup K of G such that N < K < G. We have the following theorem:
Theorem: A normal subgroup N of a group G is a maximal normal subgroup if and only if the quotient G/N is simple. For example, the alternating group An is a maximal normal subgroup of the symmetric group Sn, since the quotient Sn/An is cyclic of order 2 and hence simple.
Hasse diagrams
These Hasse diagrams show the lattices of subgroups of the symmetric group S4, the dihedral group D4, and C2³, the third direct power of the cyclic group C2.
The maximal subgroups are linked to the group itself (on top of the Hasse diagram) by an edge of the Hasse diagram.
|
https://en.wikipedia.org/wiki/Minkowski%27s%20question-mark%20function
|
In mathematics, Minkowski's question-mark function, denoted , is a function with unusual fractal properties, defined by Hermann Minkowski in 1904. It maps quadratic irrational numbers to rational numbers on the unit interval, via an expression relating the continued fraction expansions of the quadratics to the binary expansions of the rationals, given by Arnaud Denjoy in 1938. It also maps rational numbers to dyadic rationals, as can be seen by a recursive definition closely related to the Stern–Brocot tree.
Definition and intuition
One way to define the question-mark function involves the correspondence between two different ways of representing fractional numbers using finite or infinite binary sequences. Most familiarly, a string of 0's and 1's with a single point mark ".", like "11.001001000011111..." can be interpreted as the binary representation of a number. In this case this number is π, whose binary expansion begins with exactly these digits.
There is a different way of interpreting the same sequence, however, using continued fractions.
Interpreting the fractional part "0.001001000011111..." as a binary number in the same way, replace each consecutive block of 0's or 1's by its run length (or, for the first block of zeroes, its run length plus one), in this case generating the sequence (3, 1, 2, 1, 4, 5, ...). Then, use this sequence as the coefficients of a continued fraction:
$x = [0; 3, 1, 2, 1, 4, 5, \ldots] = \cfrac{1}{3 + \cfrac{1}{1 + \cfrac{1}{2 + \ddots}}}$
The question-mark function reverses this process: it translates the continued fraction of a given real number into a run-length encoded binary sequence, and then reinterprets that sequence as a binary number. For instance, for the example above, $?(x)$ is the fractional part of π. To define this formally, if an irrational number $x$ has the (non-terminating) continued-fraction representation
$x = [a_0; a_1, a_2, \ldots],$
then the value of the question-mark function on $x$ is defined as the value of the infinite series
$?(x) = a_0 + 2 \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{2^{a_1 + a_2 + \cdots + a_k}}.$
In the same way, if a rational number $x$ has the terminating continued-fraction representation $[a_0; a_1, a_2, \ldots, a_m],$ then
the value of the question-mark function on $x$ is a finite sum,
$?(x) = a_0 + 2 \sum_{k=1}^{m} \frac{(-1)^{k+1}}{2^{a_1 + a_2 + \cdots + a_k}}.$
Analogously to the way the question-mark function reinterprets continued fractions as binary numbers, the Cantor function can be understood as reinterpreting ternary numbers as binary numbers.
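The finite sum for rationals translates directly into a short program (exact rational arithmetic keeps the continued-fraction steps honest):

from fractions import Fraction

# ?(x) for rational x via ?([a0; a1, ..., am]) = a0 + 2*sum_k (-1)^(k+1) / 2^(a1+...+ak).
def question_mark(x):
    x = Fraction(x)
    a0, x = divmod(x, 1)
    result, sign, exponent = float(a0), 1.0, 0
    while x:
        x = 1 / x                                # next continued-fraction coefficient
        a, x = divmod(x, 1)
        exponent += a
        result += sign * 2.0 ** (1 - exponent)   # adds 2 / 2^(a1+...+ak)
        sign = -sign
    return result

print(question_mark(Fraction(1, 3)))   # 0.25, i.e. ?(1/3) = 1/4
print(question_mark(Fraction(2, 5)))   # 0.375, i.e. ?(2/5) = 3/8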
Self-symmetry
The question mark is clearly visually self-similar. A monoid of self-similarities may be generated by two operators $S$ and $R$ acting on the unit square and defined as follows:
$S(x, y) = \left( \frac{x}{x + 1}, \frac{y}{2} \right), \qquad R(x, y) = (1 - x, 1 - y).$
Visually, $S$ shrinks the unit square to its bottom-left quarter, while $R$ performs a point reflection through its center.
A point on the graph of $?$ has coordinates $(x, ?(x))$ for some $x$ in the unit interval. Such a point is transformed by $S$ and $R$ into another point of the graph, because $?$ satisfies the following identities for all $x \in [0, 1]$:
$?\!\left( \frac{x}{x + 1} \right) = \frac{?(x)}{2}, \qquad ?(1 - x) = 1 - ?(x).$
These two operators may be repeatedly combined, forming a monoid. A general element of the monoid is then
$S^{a_1} R\, S^{a_2} R\, S^{a_3} \cdots$
for positive integers $a_1, a_2, a_3, \ldots$. Each such element describes a self-similarity of the question-mark function. This monoid is sometimes called the period-doubling monoid, and all period-doubling fractal curves have a self-symmetry described by it (the de
|
https://en.wikipedia.org/wiki/Glossary%20of%20game%20theory
|
Game theory is the branch of mathematics in which games are studied: that is, models describing human behaviour. This is a glossary of some terms of the subject.
Definitions of a game
Notational conventions
Real numbers $\mathbb{R}$.
The set of players $N$.
Strategy space $\Sigma = \prod_{i \in N} \Sigma^i$, where
Player i's strategy space $\Sigma^i$ is the space of all possible ways in which player i can play the game.
A strategy for player i, $\sigma_i$, is an element of $\Sigma^i$.
Complements: $\sigma_{-i}$, an element of $\Sigma^{-i} = \prod_{j \neq i} \Sigma^j$, is a tuple of strategies for all players other than i.
Outcome space $\Gamma$ is in most textbooks identical to $\Sigma$.
Payoffs $\pi \in \mathbb{R}^N$, describing how much gain (money, pleasure, etc.) the players are allocated by the end of the game.
Normal form game
A game in normal form is a function
$\pi \colon \prod_{i \in N} \Sigma^i \to \mathbb{R}^N.$
Given the tuple of strategies chosen by the players, one is given an allocation of payments (given as real numbers).
A further generalization can be achieved by splitting the game into a composition of two functions:
$\prod_{i \in N} \Sigma^i \to \Gamma,$
the outcome function of the game (some authors call this function "the game form"), and
$\nu \colon \Gamma \to \mathbb{R}^N,$
the allocation of payoffs (or preferences) to players, for each outcome of the game.
Extensive form game
This is given by a tree, where at each vertex of the tree a different player has the choice of choosing an edge. The outcome set of an extensive form game is usually the set of tree leaves.
Cooperative game
A game in which players are allowed to form coalitions (and to enforce coalitionary discipline). A cooperative game is given by stating a value for every coalition:
$\nu \colon 2^N \to \mathbb{R}.$
It is always assumed that the empty coalition gains nil. Solution concepts for cooperative games usually assume that the players are forming the grand coalition $N$, whose value $\nu(N)$ is then divided among the players to give an allocation.
Simple game
A simple game is a simplified form of a cooperative game, where the possible gain is assumed to be either '0' or '1'. A simple game is a couple (N, W), where W is the list of "winning" coalitions, capable of gaining the loot ('1'), and N is the set of players.
Glossary
Acceptable game is a game form such that for every possible preference profile, the game has pure Nash equilibria, all of which are Pareto efficient.
Allocation of goods is a function $\nu \colon \Gamma \to \mathbb{R}^N$. The allocation is a cardinal approach for determining the good (e.g. money) the players are granted under the different outcomes of the game.
Best reply: the best reply to a given complement $\sigma_{-i}$ is a strategy $\sigma_i$ that maximizes player i's payment. Formally, we want $\pi_i(\sigma_i, \sigma_{-i}) \ge \pi_i(\sigma'_i, \sigma_{-i})$ for every $\sigma'_i \in \Sigma^i$ (a numeric sketch appears at the end of this glossary).
Coalition is any subset of the set of players: $C \subseteq N$.
Condorcet winner: Given a preference ν on the outcome space, an outcome a is a Condorcet winner if all non-dummy players prefer a to all other outcomes.
Decidability In relation to game theory, refers to the question of the existence of an algorithm that can and will return an answer as to whether a game can be solved or not.
Determinacy A subfield of set theory that examines the conditions under which one or the other player of a game
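The best-reply sketch referenced above, for a two-player bimatrix game with made-up payoffs:

import numpy as np

# Row player's payoffs; entry [i, j] is the payoff when the row player picks i
# and the column player picks j (prisoner's-dilemma-style sample numbers).
row_payoffs = np.array([[3, 0],
                        [5, 1]])

def best_reply(payoffs, opponent_mix):
    expected = payoffs @ opponent_mix      # expected payoff of each pure strategy
    return int(np.argmax(expected))

print(best_reply(row_payoffs, np.array([0.5, 0.5])))   # 1: the second row is best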
|
https://en.wikipedia.org/wiki/Jacobi%20triple%20product
|
In mathematics, the Jacobi triple product is the mathematical identity:
$\prod_{m=1}^{\infty} \left(1 - x^{2m}\right)\left(1 + x^{2m-1} y^2\right)\left(1 + \frac{x^{2m-1}}{y^2}\right) = \sum_{n=-\infty}^{\infty} x^{n^2} y^{2n}$
for complex numbers x and y, with |x| < 1 and y ≠ 0.
It was introduced by Carl Gustav Jacob Jacobi in 1829 in his work Fundamenta Nova Theoriae Functionum Ellipticarum.
The Jacobi triple product identity is the Macdonald identity for the affine root system of type A1, and is the Weyl denominator formula for the corresponding affine Kac–Moody algebra.
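Truncating both sides gives a quick numeric sanity check of the identity (the sample values of x and y are arbitrary):

# Compare a truncated product against a truncated sum for |x| < 1, y != 0.
x, y = 0.3, 0.7

prod = 1.0
for m in range(1, 60):
    prod *= (1 - x ** (2 * m)) * (1 + x ** (2 * m - 1) * y ** 2) * (1 + x ** (2 * m - 1) / y ** 2)

s = sum(x ** (n * n) * y ** (2 * n) for n in range(-40, 41))
print(prod, s)   # agree to machine precision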
Properties
The basis of Jacobi's proof relies on Euler's pentagonal number theorem, which is itself a specific case of the Jacobi Triple Product Identity.
Let $x = q\sqrt{q}$ and $y^2 = -\sqrt{q}$. Then we have
$\phi(q) = \prod_{m=1}^{\infty} \left(1 - q^m\right) = \sum_{n=-\infty}^{\infty} (-1)^n q^{\frac{3n^2 - n}{2}}.$
The Jacobi Triple Product also allows the Jacobi theta function to be written as an infinite product as follows:
Let $x = e^{i\pi\tau}$ and $y = e^{i\pi z}.$
Then the Jacobi theta function
$\vartheta(z; \tau) = \sum_{n=-\infty}^{\infty} e^{\pi i n^2 \tau + 2\pi i n z}$
can be written in the form
$\sum_{n=-\infty}^{\infty} x^{n^2} y^{2n}.$
Using the Jacobi Triple Product Identity we can then write the theta function as the product
$\vartheta(z; \tau) = \prod_{m=1}^{\infty} \left(1 - e^{2m\pi i\tau}\right)\left(1 + e^{(2m-1)\pi i\tau + 2\pi i z}\right)\left(1 + e^{(2m-1)\pi i\tau - 2\pi i z}\right).$
There are many different notations used to express the Jacobi triple product. It takes on a concise form when expressed in terms of q-Pochhammer symbols:
$\sum_{n=-\infty}^{\infty} q^{\frac{n(n+1)}{2}} z^n = (q; q)_\infty \left(-\tfrac{1}{z}; q\right)_\infty (-zq; q)_\infty,$
where $(q; q)_\infty$ is the infinite q-Pochhammer symbol.
It enjoys a particularly elegant form when expressed in terms of the Ramanujan theta function. For $|ab| < 1$ it can be written as
$\sum_{n=-\infty}^{\infty} a^{\frac{n(n+1)}{2}} b^{\frac{n(n-1)}{2}} = (-a; ab)_\infty (-b; ab)_\infty (ab; ab)_\infty.$
Proof
Let
$f_x(y) = \prod_{m=1}^{\infty} \left(1 - x^{2m}\right)\left(1 + x^{2m-1} y^2\right)\left(1 + x^{2m-1} y^{-2}\right).$
Substituting $xy$ for $y$ and multiplying the new terms out gives
$f_x(xy) = \frac{1}{x y^2}\, f_x(y).$
Since $f_x$ is meromorphic for $|y| > 0$, it has a Laurent series
$f_x(y) = \sum_{n=-\infty}^{\infty} c_n(x)\, y^{2n},$
which satisfies
$c_{n+1}(x) = c_n(x)\, x^{2n+1},$
so that
$c_n(x) = c_0(x)\, x^{n^2},$
and hence
Evaluating
Showing that $c_0(x) = 1$ is technical. One way is to set $x = e^{i\pi\tau}$ and show that both the numerator and the denominator of the resulting expression for $1/c_0$
are weight 1/2 modular; since they are also 1-periodic and bounded on the upper half plane, the quotient has to be constant, so that $c_0(x) = 1$.
Other proofs
A different proof is given by G. E. Andrews based on two identities of Euler.
For the analytic case, see Apostol.
References
Peter J. Cameron, Combinatorics: Topics, Techniques, Algorithms (1994), Cambridge University Press.
|
https://en.wikipedia.org/wiki/Malfatti%20circles
|
In geometry, the Malfatti circles are three circles inside a given triangle such that each circle is tangent to the other two and to two sides of the triangle. They are named after Gian Francesco Malfatti, who made early studies of the problem of constructing these circles in the mistaken belief that they would have the largest possible total area of any three disjoint circles within the triangle.
Malfatti's problem has been used to refer both to the problem of constructing the Malfatti circles and to the problem of finding three area-maximizing circles within a triangle.
A simple construction of the Malfatti circles was given by Jakob Steiner in 1826, and many mathematicians have since studied the problem. Malfatti himself supplied a formula for the radii of the three circles, and they may also be used to define two triangle centers, the Ajima–Malfatti points of a triangle.
The problem of maximizing the total area of three circles in a triangle is never solved by the Malfatti circles. Instead, the optimal solution can always be found by a greedy algorithm that finds the largest circle within the given triangle, the largest circle within the three connected subsets of the triangle outside of the first circle, and the largest circle within the five connected subsets of the triangle outside of the first two circles. Although this procedure was first formulated in 1930, its correctness was not proven until 1994.
Malfatti's problem
Malfatti (1803) posed the problem of cutting three cylindrical columns out of a triangular prism of marble, maximizing the total volume of the columns. He assumed that the solution to this problem was given by three tangent circles within the triangular cross-section of the wedge. That is, more abstractly, he conjectured that the three Malfatti circles have the maximum total area of any three disjoint circles within a given triangle.
Malfatti's work was popularized for a wider readership in French by Joseph Diaz Gergonne in the first volume of his Annales (1811), with further discussion in the second and tenth. However, Gergonne only stated the circle-tangency problem, not the area-maximizing one.
Malfatti's assumption that the two problems are equivalent is incorrect. Lob and Richmond (1930), who went back to the original Italian text, observed that for some triangles a larger area can be achieved by a greedy algorithm that inscribes a single circle of maximal radius within the triangle, inscribes a second circle within one of the three remaining corners of the triangle, the one with the smallest angle, and inscribes a third circle within the largest of the five remaining pieces. The difference in area for an equilateral triangle is small, just over 1%, but as Howard Eves (1946) pointed out, for an isosceles triangle with a very sharp apex, the optimal circles (stacked one atop each other above the base of the triangle) have nearly twice the area of the Malfatti circles.
In fact, the Malfatti circles are never optimal. It was discovered through numerical computations in the 1960s, and lat
|
https://en.wikipedia.org/wiki/Snowball%20sampling
|
In sociology and statistics research, snowball sampling (or chain sampling, chain-referral sampling, referral sampling) is a nonprobability sampling technique where existing study subjects recruit future subjects from among their acquaintances. Thus the sample group is said to grow like a rolling snowball. As the sample builds up, enough data are gathered to be useful for research. This sampling technique is often used in hidden populations, such as drug users or sex workers, which are difficult for researchers to access.
As sample members are not selected from a sampling frame, snowball samples are subject to numerous biases. For example, people who have many friends are more likely to be recruited into the sample. When virtual social networks are used, this technique is called virtual snowball sampling.
It was widely believed that it was impossible to make unbiased estimates from snowball samples, but a variation of snowball sampling called respondent-driven sampling
has been shown to allow researchers to make asymptotically unbiased estimates from snowball samples under certain conditions. Snowball sampling and respondent-driven sampling also allow researchers to make estimates about the social network connecting the hidden population.
Description
Snowball sampling uses a small pool of initial informants to nominate, through their social networks, other participants who meet the eligibility criteria and could potentially contribute to a specific study. The term "snowball sampling" reflects an analogy to a snowball increasing in size as it rolls downhill.
Method
Draft a participation program (likely to be subject to change, but indicative).
Approach stakeholders and ask for contacts.
Gain contacts and ask them to participate.
Community issues groups may emerge that can be included in the participation program.
Continue the snowballing with contacts to gain more stakeholders if necessary.
Ensure a diversity of contacts by widening the profile of persons involved in the snowballing exercise; a minimal simulation of this referral process is sketched below.
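The referral process sketched in the steps above can be simulated. The following minimal sketch uses Python with the networkx library (the graph model, function and parameter names are illustrative, not from any particular study):

  import random
  import networkx as nx

  def snowball_sample(graph, seeds, waves=3, referrals_per_person=2, rng=None):
      """Simulate snowball sampling: each recruit names a few acquaintances."""
      rng = rng or random.Random(0)
      sampled = set(seeds)
      frontier = list(seeds)
      for _ in range(waves):
          next_frontier = []
          for person in frontier:
              contacts = [n for n in graph.neighbors(person) if n not in sampled]
              for contact in rng.sample(contacts, min(referrals_per_person, len(contacts))):
                  sampled.add(contact)
                  next_frontier.append(contact)
          frontier = next_frontier
      return sampled

  G = nx.watts_strogatz_graph(200, 6, 0.1, seed=1)  # stand-in "hidden population" network
  print(len(snowball_sample(G, seeds=[0, 50, 100])))

Because each wave recruits through existing ties, well-connected individuals are recruited disproportionately often, which is exactly the degree bias noted earlier.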
Applications
Requirement
The participants are likely to know others who share the characteristics that make them eligible for inclusion in the study.
Applicable situation
Snowball sampling is quite suitable to use when members of a population are hidden and difficult to locate (e.g. samples of the homeless or users of illegal drugs) and these members are closely connected (e.g. organized crime, sharing similar interests, involvement in the same groups that are relevant to the project at hand).
Application field
Social computing
Snowball sampling can be perceived as an evaluation sampling in the social computing field. For example, in the interview phase, snowball sampling can be used to reach hard-to-reach populations. Participants or informants with whom contact has already been made can use their social networks to refer the researcher to other people who could potentially participate in or contribute to the study.
Conflict environments
|
https://en.wikipedia.org/wiki/UK%20Singles%20Chart%20records%20and%20statistics
|
The UK Singles Chart was first compiled in 1969. However, the records and statistics listed here date back to 1952, because the Official Charts Company counts a selected period of the New Musical Express chart (only from 1952 to 1960) and the Record Retailer chart from 1960 to 1969 as predecessors for the period prior to 11 February 1969, when multiple competing charts coexisted side by side. For example, the BBC compiled its own chart based on an average of the music papers of the time; many songs announced as having reached number one on BBC Radio and Top of the Pops prior to 1969 may not be listed here as chart-toppers since they do not meet the legacy criteria of the Charts Company.
Number one hits
Most number ones
The following is a list of all the acts with an individual credit on eight or more UK number one songs (that is, credited as the main artist or named separately as a featured artist – being part of a group does not count towards an individual's total).
Simply playing or singing on a single without credit will not count, or the top positions would almost certainly belong to session musicians such as Clem Cattini who is reported to have played drums on over 40 number ones.
Most weeks at number one by artist
Most weeks at number one by single
The record for most non-consecutive weeks at number one is 18 by Frankie Laine's "I Believe" in 1953. It spent nine weeks at number one, dropped down for a week, returned to number one for six weeks, dropped down for a further week and returned to number one for a third time for three weeks.
The longest unbroken run at number one is "(Everything I Do) I Do It for You" by Bryan Adams, which spent 16 consecutive weeks in 1991.
Ed Sheeran is the only artist to ever have multiple songs spend 10 or more weeks at the top of the charts, achieving the feat with both "Shape of You" in 2017 and "Bad Habits" in 2021.
Below is a table of all singles that have spent 10 or more weeks at the top of the charts:
Note: Songs denoted with an asterisk (*) spent non-consecutive weeks at number one.
Biggest climb to number one
The single with the biggest climb to number one is "Marvin Gaye" by Charlie Puth featuring Meghan Trainor, which climbed from number 90 on the week ending 20 August 2015.
The biggest climb to number one within the top 40 goes to "So What" by Pink, which climbed from number 38 on the week ending 11 October 2008.
Biggest drop from number one
The biggest drop from number one within the top 100 is to number 97. "Three Lions" by Baddiel, Skinner and The Lightning Seeds returned to number one for a third non-consecutive week on the week ending 19 July 2018, but in the following week it experienced a large drop after England's loss at the semifinals of the 2018 FIFA World Cup. However, two singles have since fallen completely out of the chart after a week at number one: "Last Christmas" by Wham! on the weeks ending 14 January 2021 and 12 January 2023, and "Merry Christmas" by Ed Sheeran a
|
https://en.wikipedia.org/wiki/Stieltjes%20constants
|
In mathematics, the Stieltjes constants are the numbers \gamma_n that occur in the Laurent series expansion of the Riemann zeta function:
\zeta(s) = \frac{1}{s-1} + \sum_{n=0}^\infty \frac{(-1)^n}{n!}\, \gamma_n\, (s-1)^n.
The constant \gamma_0 = \gamma = 0.577\ldots is known as the Euler–Mascheroni constant.
Representations
The Stieltjes constants are given by the limit
\gamma_n = \lim_{m \to \infty} \left( \sum_{k=1}^m \frac{(\ln k)^n}{k} - \frac{(\ln m)^{n+1}}{n+1} \right).
(In the case n = 0, the first summand requires evaluation of 0^0, which is taken to be 1.)
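For a quick numerical check (a sketch, not from the article; stieltjes_via_limit is an illustrative name), the limit representation converges slowly, while the mpmath library ships a ready-made stieltjes function that can serve as a reference value:

  from math import log
  import mpmath

  def stieltjes_via_limit(n, m=200000):
      """Approximate gamma_n from the limit representation above (slow convergence)."""
      # Python evaluates 0.0 ** 0 as 1.0, matching the convention for the k = 1, n = 0 summand.
      s = sum(log(k) ** n / k for k in range(1, m + 1))
      return s - log(m) ** (n + 1) / (n + 1)

  print(stieltjes_via_limit(1))    # crude approximation of gamma_1
  print(mpmath.stieltjes(1))       # reference value: -0.0728158...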
Cauchy's differentiation formula leads to the integral representation
\gamma_n = \frac{(-1)^n n!}{2\pi} \int_0^{2\pi} e^{-n i x}\, \zeta\!\left(e^{i x} + 1\right) dx.
Various representations in terms of integrals and infinite series are given in works of Jensen, Franel, Hermite, Hardy, Ramanujan, Ainsworth, Howell, Coppo, Connon, Coffey, Choi, Blagouchine and some other authors. In particular, Jensen-Franel's integral formula, often erroneously attributed to Ainsworth and Howell, states that
where \delta_{n,k} is the Kronecker symbol (Kronecker delta). Among other formulae, we find
see.
As concerns series representations, a famous series implying an integer part of a logarithm was given by Hardy in 1912
Israilov gave semi-convergent series in terms of Bernoulli numbers
Connon, Blagouchine and Coppo gave several series with the binomial coefficients
where Gn are Gregory's coefficients, also known as reciprocal logarithmic numbers (G1=+1/2, G2=−1/12, G3=+1/24, G4=−19/720,... ).
More general series of the same nature include these examples
and
or
where are the Bernoulli polynomials of the second kind and are the polynomials given by the generating equation
respectively (note that ).
Oloa and Tauraso showed that series with harmonic numbers may lead to Stieltjes constants
Blagouchine obtained slowly-convergent series involving unsigned Stirling numbers of the first kind
as well as semi-convergent series with rational terms only
where m = 0, 1, 2, ... In particular, the series for the first Stieltjes constant has a surprisingly simple form
where H_n is the nth harmonic number.
More complicated series for Stieltjes constants are given in works of Lehmer, Liang, Todd, Lavrik, Israilov, Stankus, Keiper, Nan-You, Williams, Coffey.
Bounds and asymptotic growth
The Stieltjes constants satisfy the bound
given by Berndt in 1972. Better bounds in terms of elementary functions were obtained by Lavrik
by Israilov
with k=1,2,... and C(1)=1/2, C(2)=7/12,... , by Nan-You and Williams
by Blagouchine
where Bn are Bernoulli numbers, and by Matsuoka
As concerns estimations resorting to non-elementary functions and solutions, Knessl, Coffey and Fekih-Ahmed obtained quite accurate results. For example, Knessl and Coffey give the following formula that approximates the Stieltjes constants relatively well for large n. If v is the unique solution of
with , and if , then
where
Up to n = 100000, the Knessl-Coffey approximation correctly predicts the sign of γn with the single exception of n = 137.
In 2022 K. Maślanka gave an asymptotic expression for the Stieltjes constants, which is both simpler and more accurate than those previously known. In particular, it reproduces with a relatively small err
|
https://en.wikipedia.org/wiki/Normal%20family
|
In mathematics, with special application to complex analysis, a normal family is a pre-compact subset of the space of continuous functions. Informally, this means that the functions in the family are not widely spread out, but rather stick together in a somewhat "clustered" manner. Note that a compact family of continuous functions is automatically a normal family.
Sometimes, if each function in a normal family F satisfies a particular property (e.g. is holomorphic),
then the property also holds for each limit point of the set F.
More formally, let X and Y be topological spaces. The set C(X, Y) of continuous functions from X to Y has a natural topology called the compact-open topology. A normal family is a pre-compact subset with respect to this topology.
If Y is a metric space, then the compact-open topology is equivalent to the topology of compact convergence, and we obtain a definition which is closer to the classical one: A collection F of continuous functions is called a normal family
if every sequence of functions in F contains a subsequence which converges uniformly on compact subsets of X to a continuous function from X to Y. That is, for every sequence of functions in F, there is a subsequence f_{n_k} and a continuous function f from X to Y such that the following holds for every compact subset K contained in X:
\lim_{k \to \infty} \sup_{x \in K} d_Y\!\left(f_{n_k}(x), f(x)\right) = 0,
where d_Y is the metric of Y.
Normal families of holomorphic functions
The concept arose in complex analysis, that is the study of holomorphic functions. In this case, X is an open subset of the complex plane, Y is the complex plane, and the metric on Y is given by d(z_1, z_2) = |z_1 - z_2|. As a consequence of Cauchy's integral theorem, a sequence of holomorphic functions that converges uniformly on compact sets must converge to a holomorphic function. That is, each limit point of a normal family is holomorphic.
Normal families of holomorphic functions provide the quickest way of proving the Riemann mapping theorem.
More generally, if the spaces X and Y are Riemann surfaces, and Y is equipped with the metric coming from the uniformization theorem, then each limit point of a normal family of holomorphic functions is also holomorphic.
For example, if Y is the Riemann sphere, then the metric of uniformization is the spherical distance. In this case, a holomorphic function from X to Y is called a meromorphic function, and so each limit point of a normal family of meromorphic functions is a meromorphic function.
Criteria
In the classical context of holomorphic functions, there are several criteria that can be used to establish that a family is normal:
Montel's theorem states that a family of locally bounded holomorphic functions is normal. The Montel–Carathéodory theorem states that the family of meromorphic functions that omit three distinct values in the extended complex plane is normal. For a family of holomorphic functions, this reduces to requiring two values omitted by viewing each function as a meromorphic function omitting the value infinity.
Marty's theorem
p
|
https://en.wikipedia.org/wiki/Menelaus%27s%20theorem
|
In Euclidean geometry, Menelaus's theorem, named for Menelaus of Alexandria, is a proposition about triangles in plane geometry. Suppose we have a triangle ABC, and a transversal line that crosses BC, AC, AB at points D, E, F respectively, with D, E, F distinct from A, B, C. A weak version of the theorem states that
\left|\frac{AF}{FB}\right| \cdot \left|\frac{BD}{DC}\right| \cdot \left|\frac{CE}{EA}\right| = 1,
where "| |" denotes absolute value (i.e., all segment lengths are positive).
The theorem can be strengthened to a statement about signed lengths of segments, which provides some additional information about the relative order of collinear points. Here, the length AB is taken to be positive or negative according to whether A is to the left or right of B in some fixed orientation of the line; for example, AF/FB is defined as having positive value when F is between A and B and negative otherwise. The signed version of Menelaus's theorem states
\frac{AF}{FB} \cdot \frac{BD}{DC} \cdot \frac{CE}{EA} = -1.
Equivalently,
AF \cdot BD \cdot CE = -\, FB \cdot DC \cdot EA.
Some authors organize the factors differently and obtain the seemingly different relation
\frac{FA}{FB} \cdot \frac{DB}{DC} \cdot \frac{EC}{EA} = 1,
but as each of these factors is the negative of the corresponding factor above, the relation is seen to be the same.
The converse is also true: If points D, E, F are chosen on BC, AC, AB respectively so that
\frac{AF}{FB} \cdot \frac{BD}{DC} \cdot \frac{CE}{EA} = -1,
then D, E, F are collinear. The converse is often included as part of the theorem. (Note that the converse of the weaker, unsigned statement is not necessarily true.)
The theorem is very similar to Ceva's theorem in that their equations differ only in sign. By re-writing each in terms of cross-ratios, the two theorems may be seen as projective duals.
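The signed relation is easy to test numerically. In the sketch below (plain Python; the point names follow the statement above, and the transversal is arbitrary), the product of the three signed ratios comes out as −1:

  def signed_ratio(p, q, r):
      """Signed ratio pq/qr for three collinear points, using the dominant coordinate."""
      i = 0 if abs(r[0] - p[0]) >= abs(r[1] - p[1]) else 1
      return (q[i] - p[i]) / (r[i] - q[i])

  def line_intersect(p1, p2, p3, p4):
      """Intersection of line p1p2 with line p3p4 (standard determinant formula)."""
      x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
      d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
      a = x1 * y2 - y1 * x2
      b = x3 * y4 - y3 * x4
      return ((a * (x3 - x4) - (x1 - x2) * b) / d,
              (a * (y3 - y4) - (y1 - y2) * b) / d)

  A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
  P, Q = (-1.0, 1.2), (5.0, 1.7)          # an arbitrary transversal
  D = line_intersect(B, C, P, Q)           # on line BC
  E = line_intersect(C, A, P, Q)           # on line CA
  F = line_intersect(A, B, P, Q)           # on line AB (extended)
  print(signed_ratio(A, F, B) * signed_ratio(B, D, C) * signed_ratio(C, E, A))  # ~ -1.0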
Proofs
A standard proof
First, the sign of the left-hand side will be negative since either all three of the ratios are negative, the case where the line DEF misses the triangle (lower diagram), or one is negative and the other two are positive, the case where DEF crosses two sides of the triangle. (See Pasch's axiom.)
To check the magnitude, construct perpendiculars from A, B, C to the line DEF and let their lengths be a, b, c respectively. Then by similar triangles it follows that
\left|\frac{AF}{FB}\right| = \frac{a}{b}, \quad \left|\frac{BD}{DC}\right| = \frac{b}{c}, \quad \left|\frac{CE}{EA}\right| = \frac{c}{a}.
Therefore,
\left|\frac{AF}{FB} \cdot \frac{BD}{DC} \cdot \frac{CE}{EA}\right| = \frac{a}{b} \cdot \frac{b}{c} \cdot \frac{c}{a} = 1.
For a simpler, if less symmetrical way to check the magnitude, draw CK parallel to AB, where the line DEF meets CK at K. Then by similar triangles
\frac{BD}{DC} = \frac{BF}{CK}, \quad \frac{CE}{EA} = \frac{CK}{AF},
and the result follows by eliminating CK from these equations.
The converse follows as a corollary. Let D, E, F be given on the lines BC, AC, AB so that the equation holds. Let F′ be the point where DE crosses AB. Then by the theorem, the equation also holds for D, E, F′. Comparing the two,
\frac{AF}{FB} = \frac{AF'}{F'B}.
But at most one point can cut a segment in a given ratio, so F = F′.
A proof using homothecies
The following proof uses only notions of affine geometry, notably homothecies.
Whether or not D, E, F are collinear, there are three homothecies with centers D, E, F that respectively send B to C, C to A, and A to B. The composition of the three then is an element of the group of homothecy-translations that fixes B, so it is a homothecy with center B, possibly with ratio 1 (in which case it is the identity). This composition fixes the line DE if and only if F is collinear with D, E (since the first two homothecies certainly fix DE, and the third does so only if F lies on DE). Therefore D, E, F are collinear
|
https://en.wikipedia.org/wiki/History%20of%20the%20separation%20axioms
|
The history of the separation axioms in general topology has been convoluted, with many meanings competing for the same terms and many terms competing for the same concept.
Origins
Before the current general definition of topological space, there were many definitions offered, some of which assumed (what we now think of as) some separation axioms. For example, the definition given by Felix Hausdorff in 1914 is equivalent to the modern definition plus the Hausdorff separation axiom.
The separation axioms, as a group, became important in the study of metrisability: the question of which topological spaces can be given the structure of a metric space. Metric spaces satisfy all of the separation axioms; but in fact, studying spaces that satisfy only some axioms helps build up to the notion of full metrisability.
The separation axioms that were first studied together in this way were the axioms for accessible spaces, Hausdorff spaces, regular spaces, and normal spaces. Topologists assigned these classes of spaces the names T1, T2, T3, and T4. Later this system of numbering was extended to include T0, T2½, T3½ (or Tπ), T5, and T6.
But this sequence had its problems. The idea was supposed to be that every Ti space is a special kind of Tj space if i > j. But this is not necessarily true, as definitions vary. For example, a regular space (called T3) does not have to be a Hausdorff space (called T2), at least not according to the simplest definition of regular spaces.
Different definitions
Every author agreed on T0, T1, and T2. For the other axioms, however, different authors could use significantly different definitions, depending on what they were working on. These differences could develop because, if one assumes that a topological space satisfies the T1 axiom, then the various definitions are (in most cases) equivalent. Thus, if one is going to make that assumption, then one would want to use the simplest definition. But if one did not make that assumption, then the simplest definition might not be the right one for the most useful concept; in any case, it would destroy the (transitive) entailment of Ti by Tj, allowing (for example) non-Hausdorff regular spaces.
Topologists working on the metrisation problem generally did assume T1; after all, all metric spaces are T1. Thus, they used the simplest definitions for the Ti. Then, for those occasions when they did not assume T1, they used words ("regular" and "normal") for the more complicated definitions, in order to contrast them with the simpler ones. This approach was used as late as 1970 with the publication of Counterexamples in Topology by Lynn A. Steen and J. Arthur Seebach, Jr.
In contrast, general topologists, led by John L. Kelley in 1955, usually did not assume T1, so they studied the separation axioms in the greatest generality from the beginning. They used the more complicated definitions for Ti, so that they would always have a nice property relating Ti to Tj. Then, for the simple
|
https://en.wikipedia.org/wiki/Maria%20%28reachability%20analyzer%29
|
Maria: The Modular Reachability Analyzer is a reachability analyzer for concurrent systems that uses Algebraic System Nets (a high-level variant of Petri nets) as its modelling formalism.
External links
Petri nets
|
https://en.wikipedia.org/wiki/PFD
|
Science, technology, and medicine
Personal flotation device
Pelvic floor dysfunction
Phase frequency detector in electronics
Primary flight display, in an aircraft
Probability of Failure on Demand, see Safety integrity level#Certification
Process flow diagram, in process engineering
Prepared for dyeing
Professional Disc, recordable optical disc format
PFD allowance in work systems
Partial fraction decomposition
Perfluorodecalin, a molecule capable of dissolving large amounts of gas
Pediatric feeding disorder, a unifying diagnostic term encompassing the medical, nutrition, feeding skill, and psychosocial domains
Organizations
Philadelphia Fire Department
Pigespejdernes Fællesråd Danmark, Guiding federation of Denmark
Peters, Fraser & Dunlop, an English literary and talent agency
Other uses
Permanent Fund Dividend of Alaska Permanent Fund
See also
PDF (disambiguation)
|
https://en.wikipedia.org/wiki/Real%20coordinate%20space
|
In mathematics, the real coordinate space of dimension n, denoted \mathbb{R}^n or R^n, is the set of the n-tuples of real numbers, that is the set of all sequences of n real numbers.
Special cases are called the real line \mathbb{R}^1 and the real coordinate plane \mathbb{R}^2.
With component-wise addition and scalar multiplication, it is a real vector space, and its elements are called coordinate vectors.
The coordinates over any basis of the elements of a real vector space form a real coordinate space of the same dimension as that of the vector space. Similarly, the Cartesian coordinates of the points of a Euclidean space of dimension form a real coordinate space of dimension .
These one to one correspondences between vectors, points and coordinate vectors explain the names of coordinate space and coordinate vector. It allows using geometric terms and methods for studying real coordinate spaces, and, conversely, to use methods of calculus in geometry. This approach of geometry was introduced by René Descartes in the 17th century. It is widely used, as it allows locating points in Euclidean spaces, and computing with them.
Definition and structures
For any natural number n, the set \mathbb{R}^n consists of all n-tuples of real numbers. It is called the "n-dimensional real space" or the "real n-space".
An element of \mathbb{R}^n is thus an n-tuple, and is written
(x_1, x_2, \ldots, x_n),
where each x_i is a real number. So, in multivariable calculus, the domain of a function of several real variables and the codomain of a real vector valued function are subsets of \mathbb{R}^n for some n.
The real n-space has several further properties, notably:
With componentwise addition and scalar multiplication, it is a real vector space. Every n-dimensional real vector space is isomorphic to it.
With the dot product (sum of the term by term product of the components), it is an inner product space. Every n-dimensional real inner product space is isomorphic to it.
Like every inner product space, it is a topological space, and a topological vector space.
It is a Euclidean space and a real affine space, and every Euclidean or affine space of dimension n is isomorphic to it.
It is an analytic manifold, and can be considered as the prototype of all manifolds, as, by definition, a manifold is, near each point, isomorphic to an open subset of \mathbb{R}^n.
It is an algebraic variety, and every real algebraic variety is a subset of some \mathbb{R}^n.
These properties and structures of \mathbb{R}^n make it fundamental in almost all areas of mathematics and their application domains, such as statistics, probability theory, and many parts of physics.
The domain of a function of several variables
Any function f(x_1, x_2, \ldots, x_n) of n real variables can be considered as a function on \mathbb{R}^n (that is, with \mathbb{R}^n as its domain). The use of the real n-space, instead of several variables considered separately, can simplify notation and suggest reasonable definitions. Consider, for n = 2, a function composition of the following form:
F(t) = f(g_1(t), g_2(t)),
where functions g_1 and g_2 are continuous. If
x_1 \mapsto f(x_1, x_2) is continuous (for each fixed x_2)
x_2 \mapsto f(x_1, x_2) is continuous (for each fixed x_1)
then F is not necessarily continuous. Continuity
|
https://en.wikipedia.org/wiki/Euclidean%20topology
|
In mathematics, and especially general topology, the Euclidean topology is the natural topology induced on n-dimensional Euclidean space \mathbb{R}^n by the Euclidean metric.
Definition
The Euclidean norm on \mathbb{R}^n is the non-negative function \|\cdot\| defined by
\|(p_1, \ldots, p_n)\| = \sqrt{p_1^2 + \cdots + p_n^2}.
Like all norms, it induces a canonical metric defined by d(p, q) = \|p - q\|. The metric induced by the Euclidean norm is called the Euclidean metric or the Euclidean distance, and the distance between points p = (p_1, \ldots, p_n) and q = (q_1, \ldots, q_n) is
d(p, q) = \sqrt{(p_1 - q_1)^2 + \cdots + (p_n - q_n)^2}.
In any metric space, the open balls form a base for a topology on that space.
The Euclidean topology on \mathbb{R}^n is the topology generated by these balls.
In other words, the open sets of the Euclidean topology on \mathbb{R}^n are given by (arbitrary) unions of the open balls B_r(p) defined as
B_r(p) := \{ x \in \mathbb{R}^n : d(p, x) < r \}
for all real r > 0 and all p \in \mathbb{R}^n, where d is the Euclidean metric.
Properties
When endowed with this topology, the real line \mathbb{R} is a T5 space.
Given two subsets, say S and T, of \mathbb{R} with \overline{S} \cap T = S \cap \overline{T} = \varnothing, where \overline{S} denotes the closure of S, there exist open sets U and V with S \subseteq U and T \subseteq V such that U \cap V = \varnothing.
See also
References
Topology
Euclid
|
https://en.wikipedia.org/wiki/Total%20relation
|
In mathematics, a binary relation R ⊆ X×Y between two sets X and Y is total (or left total) if the source set X equals the domain {x : there is a y with xRy }. Conversely, R is called right total if Y equals the range {y : there is an x with xRy }.
When f: X → Y is a function, the domain of f is all of X, hence f is a total relation. On the other hand, if f is a partial function, then the domain may be a proper subset of X, in which case f is not a total relation.
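For finite sets the definition can be checked directly; a minimal sketch in Python (the sets and relations are made up for illustration):

  def is_left_total(R, X):
      """R is a set of (x, y) pairs; left total iff every x in X appears as a first component."""
      domain = {x for (x, y) in R}
      return set(X) <= domain

  X = {1, 2, 3}
  R_total = {(1, 'a'), (2, 'a'), (3, 'b')}
  R_partial = {(1, 'a'), (3, 'b')}          # 2 is missing from the domain
  print(is_left_total(R_total, X))    # True
  print(is_left_total(R_partial, X))  # False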
"A binary relation is said to be total with respect to a universe of discourse just in case everything in that universe of discourse stands in that relation to something else."
Algebraic characterization
Total relations can be characterized algebraically by equalities and inequalities involving compositions of relations. To this end, let X, Y be two sets, and let R \subseteq X \times Y. For any two sets A, B, let L_{A,B} = A \times B be the universal relation between A and B, and let I_A be the identity relation on A. We use the notation R^T for the converse relation of R.
is total iff for any set and any implies
R is total iff I_X \subseteq R R^T.
If is total, then The converse is true if
If is total, then The converse is true if
If is total, then The converse is true if
More generally, if is total, then for any set and any The converse is true if
Notes
References
Gunther Schmidt & Michael Winter (2018) Relational Topology
C. Brink, W. Kahl, and G. Schmidt (1997) Relational Methods in Computer Science, Advances in Computer Science, page 5,
Gunther Schmidt & Thomas Strohlein (2012)[1987]
Gunther Schmidt (2011)
Binary relations
|
https://en.wikipedia.org/wiki/Ranking
|
A ranking is a relationship between a set of items such that, for any two items, the first is either "ranked higher than", "ranked lower than", or "ranked equal to" the second. In mathematics, this is known as a weak order or total preorder of objects. It is not necessarily a total order of objects because two different objects can have the same ranking. The rankings themselves are totally ordered. For example, materials are totally preordered by hardness, while degrees of hardness are totally ordered. If two items are the same in rank it is considered a tie.
By reducing detailed measures to a sequence of ordinal numbers, rankings make it possible to evaluate complex information according to certain criteria. Thus, for example, an Internet search engine may rank the pages it finds according to an estimation of their relevance, making it possible for the user quickly to select the pages they are likely to want to see.
Analysis of data obtained by ranking commonly requires non-parametric statistics.
Strategies for handling ties
It is not always possible to assign rankings uniquely. For example, in a race or competition two (or more) entrants might tie for a place in the ranking. When computing an ordinal measurement, two (or more) of the quantities being ranked might measure equal. In these cases, one of the strategies below for assigning the rankings may be adopted.
A common shorthand way to distinguish these ranking strategies is by the ranking numbers that would be produced for four items, with the first item ranked ahead of the second and third (which compare equal) which are both ranked ahead of the fourth. These names are also shown below.
Standard competition ranking ("1224" ranking)
In competition ranking, items that compare equal receive the same ranking number, and then a gap is left in the ranking numbers. The number of ranking numbers that are left out in this gap is one less than the number of items that compared equal. Equivalently, each item's ranking number is 1 plus the number of items ranked above it. This ranking strategy is frequently adopted for competitions, as it means that if two (or more) competitors tie for a position in the ranking, the position of all those ranked below them is unaffected (i.e., a competitor only comes second if exactly one person scores better than them, third if exactly two people score better than them, fourth if exactly three people score better than them, etc.).
Thus if A ranks ahead of B and C (which compare equal) which are both ranked ahead of D, then A gets ranking number 1 ("first"), B gets ranking number 2 ("joint second"), C also gets ranking number 2 ("joint second") and D gets ranking number 4 ("fourth").
This method is called "Low" by IBM SPSS and "min" by the R programming language in their methods to handle ties.
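A minimal sketch in Python of standard competition ranking, assuming higher scores rank better (the function name is illustrative):

  def competition_rank(scores):
      """Standard competition ranks: rank = 1 + number of items with a strictly better score."""
      return [1 + sum(other > s for other in scores) for s in scores]

  print(competition_rank([9, 7, 7, 4]))  # [1, 2, 2, 4]

This reproduces the "min"/"Low" tie handling mentioned above; the quadratic running time is fine for short lists.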
Modified competition ranking ("1334" ranking)
Sometimes, competition ranking is done by leaving the gaps in the ranking numbers before the sets of equal-ranking items
|
https://en.wikipedia.org/wiki/Circuit%20rank
|
In graph theory, a branch of mathematics, the circuit rank, cyclomatic number, cycle rank, or nullity of an undirected graph is the minimum number of edges that must be removed from the graph to break all its cycles, making it into a tree or forest. It is equal to the number of independent cycles in the graph (the size of a cycle basis). Unlike the corresponding feedback arc set problem for directed graphs, the circuit rank is easily computed using the formula
r = m - n + c,
where m is the number of edges in the given graph, n is the number of vertices, and c is the number of connected components.
It is also possible to construct a minimum-size set of edges that breaks all cycles efficiently, either using a greedy algorithm or by complementing a spanning forest.
The circuit rank can be explained in terms of algebraic graph theory as the dimension of the cycle space of a graph, in terms of matroid theory as the corank of a graphic matroid, and in terms of topology as one of the Betti numbers of a topological space derived from the graph. It counts the ears in an ear decomposition of the graph, forms the basis of parameterized complexity on almost-trees, and has been applied in software metrics as part of the definition of cyclomatic complexity of a piece of code. Under the name of cyclomatic number, the concept was introduced by Gustav Kirchhoff.
Matroid rank and construction of a minimum feedback edge set
The circuit rank of a graph G may be described using matroid theory as the corank of the graphic matroid of G. Using the greedy property of matroids, this means that one can find a minimum set of edges that breaks all cycles using a greedy algorithm that at each step chooses an edge that belongs to at least one cycle of the remaining graph.
Alternatively, a minimum set of edges that breaks all cycles can be found by constructing a spanning forest of G and choosing the complementary set of edges that do not belong to the spanning forest.
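The two observations above combine into a few lines of code. This sketch (plain Python; names are illustrative) grows a spanning forest with a union-find structure, so the edges it rejects form a minimum feedback edge set and their count is the circuit rank m − n + c:

  def circuit_rank(n, edges):
      """Return (rank, feedback_edges) for an undirected graph on vertices 0..n-1."""
      parent = list(range(n))

      def find(v):                        # union-find with path compression
          while parent[v] != v:
              parent[v] = parent[parent[v]]
              v = parent[v]
          return v

      feedback = []
      for u, v in edges:
          ru, rv = find(u), find(v)
          if ru == rv:                    # edge closes a cycle: not in the spanning forest
              feedback.append((u, v))
          else:
              parent[ru] = rv
      return len(feedback), feedback

  # K4 has m = 6, n = 4, c = 1, so the rank is 6 - 4 + 1 = 3
  edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
  print(circuit_rank(4, edges))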
The number of independent cycles
In algebraic graph theory, the circuit rank is also the dimension of the cycle space of G. Intuitively, this can be explained as meaning that the circuit rank counts the number of independent cycles in the graph, where a collection of cycles is independent if it is not possible to form one of the cycles as the symmetric difference of some subset of the others.
This count of independent cycles can also be explained using homology theory, a branch of topology. Any graph may be viewed as an example of a 1-dimensional simplicial complex, a type of topological space formed by representing each graph edge by a line segment and gluing these line segments together at their endpoints.
The cyclomatic number is the rank of the first (integer) homology group of this complex,
r = \operatorname{rank}\, H_1(G; \mathbb{Z}).
Because of this topological connection, the cyclomatic number of a graph G is also called the first Betti number of G. More generally, the first Betti number of any topological space, defined in the same way, counts the number of ind
|
https://en.wikipedia.org/wiki/Montel%27s%20theorem
|
In complex analysis, an area of mathematics, Montel's theorem refers to one of two theorems about families of holomorphic functions. These are named after French mathematician Paul Montel, and give conditions under which a family of holomorphic functions is normal.
Locally uniformly bounded families are normal
The first, and simpler, version of the theorem states that a family of holomorphic functions defined on an open subset of the complex numbers is normal if and only if it is locally uniformly bounded.
This theorem has the following formally stronger corollary. Suppose that \mathcal{F} is a family of meromorphic functions on an open set D. If z_0 \in D is such that \mathcal{F} is not normal at z_0, and U \subseteq D is a neighborhood of z_0, then \bigcup_{f \in \mathcal{F}} f(U) is dense in the complex plane.
Functions omitting two values
The stronger version of Montel's Theorem (occasionally referred to as the Fundamental Normality Test) states that a family of holomorphic functions, all of which omit the same two values a, b \in \mathbb{C}, is normal.
Necessity
The conditions in the above theorems are sufficient, but not necessary for normality. Indeed,
the family is normal, but does not omit any complex value.
Proofs
The first version of Montel's theorem is a direct consequence of Marty's Theorem (which
states that a family is normal if and only if the spherical derivatives are locally bounded)
and Cauchy's integral formula.
This theorem has also been called the Stieltjes–Osgood theorem, after Thomas Joannes Stieltjes and William Fogg Osgood.
The Corollary stated above is deduced as follows. Suppose that all the functions in \mathcal{F} omit the same neighborhood D of the point z_0. By postcomposing with the map z \mapsto \frac{1}{z - z_0} we obtain a uniformly bounded family, which is normal by the first version of the theorem.
The second version of Montel's theorem can be deduced from the first by using the fact that there exists a holomorphic universal covering from the unit disk to the twice punctured plane \mathbb{C} \setminus \{0, 1\}. (Such a covering is given by the elliptic modular function).
This version of Montel's theorem can be also derived from Picard's theorem,
by using Zalcman's lemma.
Relationship to theorems for entire functions
A heuristic principle known as Bloch's Principle (made precise by Zalcman's lemma) states that properties that imply that an entire function is constant correspond to properties that ensure that a family of holomorphic functions is normal.
For example, the first version of Montel's theorem stated above is the analog of Liouville's theorem, while the second version corresponds to Picard's theorem.
See also
Montel space
Fundamental normality test
Riemann mapping theorem
Notes
References
Compactness theorems
Theorems in complex analysis
|
https://en.wikipedia.org/wiki/Mathematical%20Reviews
|
Mathematical Reviews is a journal published by the American Mathematical Society (AMS) that contains brief synopses, and in some cases evaluations, of many articles in mathematics, statistics, and theoretical computer science. The AMS also publishes an associated online bibliographic database called MathSciNet, which contains an electronic version of Mathematical Reviews and additionally contains citation information for over 3.5 million items.
Reviews
Mathematical Reviews was founded by Otto E. Neugebauer in 1940 as an alternative to the German journal Zentralblatt für Mathematik, which Neugebauer had also founded a decade earlier, but which under the Nazis had begun censoring reviews by and of Jewish mathematicians. The goal of the new journal was to give reviews of every mathematical research publication. As of November 2007, the Mathematical Reviews database contained information on over 2.2 million articles. The authors of reviews are volunteers, usually chosen by the editors because of some expertise in the area of the article. It and Zentralblatt für Mathematik are the only comprehensive resources of this type. (The Mathematics section of Referativny Zhurnal is available only in Russian and is smaller in scale and difficult to access.) Often reviews give detailed summaries of the contents of the paper, sometimes with critical comments by the reviewer and references to related work. However, reviewers are not encouraged to criticize the paper, because the author does not have an opportunity to respond. The author's summary may be quoted when it is not possible to give an independent review, or when the summary is deemed adequate by the reviewer or the editors. Only bibliographic information may be given when a work is in an unusual language, when it is a brief paper in a conference volume, or when it is outside the primary scope of the Reviews. Originally the reviews were written in several languages, but later an "English only" policy was introduced. Selected reviews (called "featured reviews") were also published as a book by the AMS, but this program has been discontinued.
Online database
In 1980, all the contents of Mathematical Reviews since 1940 were integrated into an electronic searchable database. Eventually the contents became part of MathSciNet, which was officially launched in 1996. MathSciNet also has extensive citation information.
Mathematical citation quotient
Mathematical Reviews computes a mathematical citation quotient (MCQ) for each journal. Like the impact factor and other similar citation rates, this is a numerical statistic that measures the frequency of citations to a journal. The MCQ is calculated by counting the total number of citations into the journal that have been indexed by Mathematical Reviews over a five-year period, and dividing this total by the total number of papers published by the journal during that five-year period.
For the period 2012 – 2014, the top five journals in Mathematical Reviews by M
|
https://en.wikipedia.org/wiki/Citizen%20Information%20Project
|
In the United Kingdom, the Citizen Information Project (CIP) was a plan by the Office for National Statistics to build a national population register.
On 18 April 2006 it was announced that instead of continuing as a separate project, it would be integrated into the National Identity Register, the database behind the proposed national identity cards. It has been estimated that this might add £200 million to the cost of the identity cards. The National Identity Register was destroyed as the Identity Cards Act 2006 was repealed in 2011.
Scope and purpose
The register was to have been used as a single reference point for government contact, for the exchange of personal contact data, and for the collection of statistics, so reducing duplication in government departments and agencies. Government databases would have been linked together using National Insurance or other personal numbers.
In late 2003 the project moved into a definition phase. It was hoped that the CIP would be able to use data from the proposed National Identity Register.
A report on preliminary testing was due in April 2005, and it had been expected that it would have been implemented before the end of 2007 if approval had been given by Government. Initial estimates in 2004 suggested that the costs might have been £1.2 - £2.4 billion (240 million annually for a period of 5 to 10 years).
References
External links
Citizen Information Project
Testing of technology involved in a UK population register is soon to begin
Evidence to the Homes Affairs Committee on the CIP in relation to ID cards
Big Brother Awards
Privacy International
Government databases in the United Kingdom
Programmes of the Government of the United Kingdom
Office for National Statistics
|
https://en.wikipedia.org/wiki/Multicollinearity
|
In statistics, multicollinearity (also collinearity) is a phenomenon in which one predictor variable in a multiple regression model can be perfectly predicted from the others. In this situation, the coefficient estimates of the multiple regression may change erratically in response to small changes in the data or the procedure used to fit the model.
Contrary to popular belief, including collinear variables does not reduce the predictive power or reliability of the model as a whole, nor does it reduce how accurately coefficients are estimated. In fact, high collinearity indicates that it is exceptionally important to include all variables, as excluding any variable will cause strong confounding.
Note that in statements of the assumptions underlying regression analyses such as ordinary least squares, the phrase "no multicollinearity" usually refers to the absence of perfect multicollinearity, which is an exact (non-stochastic) linear relation among the predictors. In such a case, the design matrix has less than full rank, and therefore the moment matrix cannot be inverted. Under these circumstances, for a general linear model, the ordinary least squares estimator does not exist.
Definition
Multicollinearity refers to a situation in which explanatory variables in a multiple regression model are highly linearly related. Mathematically, a set of variables is perfectly multicollinear if there exist one or more exact linear relationships among some of the variables. That is, for all observations i,
\lambda_0 + \lambda_1 X_{1i} + \lambda_2 X_{2i} + \cdots + \lambda_k X_{ki} = 0,
where the \lambda_j are constants and X_{ji} is the i-th observation on the j-th explanatory variable.
To explore one issue caused by multicollinearity, consider the process of attempting to obtain estimates for the parameters of the multiple regression equation
Y_i = \beta_0 + \beta_1 X_{1i} + \cdots + \beta_k X_{ki} + \varepsilon_i.
The ordinary least squares estimates involve inverting the matrix X^{\mathsf T} X, where X is the N \times (k+1) design matrix whose first column is all ones and whose remaining columns are the observations of the k explanatory variables; here N is the number of observations and N \geq k + 1. If there is an exact linear relationship (perfect multicollinearity) among the independent variables, then at least one of the columns of X is a linear combination of the others, and so the rank of X (and therefore of X^{\mathsf T} X) is less than k + 1, and the matrix X^{\mathsf T} X will not be invertible.
Perfect collinearity is common when working with raw datasets, which frequently contain redundant information. Once redundancies are identified and removed, however, nearly collinear variables often remain due to correlations inherent in the system being studied. In such a case, the exact linear relation above may be modified to include an error term v_i:
\lambda_0 + \lambda_1 X_{1i} + \lambda_2 X_{2i} + \cdots + \lambda_k X_{ki} + v_i = 0.
In this case, there is no exact linear relationship among the variables, but the variables are nearly collinear if the variance of v_i is small. In this case, the matrix X^{\mathsf T} X has an inverse, but it is ill-conditioned. A computer algorithm may or may not be able to compute an approximate inverse; even if it can, the resulting inverse may have large rounding errors.
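The ill-conditioning is easy to exhibit numerically. A minimal sketch with NumPy (synthetic data; names are illustrative) computes the condition number of the moment matrix and variance inflation factors, using the fact that VIFs are the diagonal entries of the inverse correlation matrix of the predictors:

  import numpy as np

  rng = np.random.default_rng(0)
  n = 500
  x1 = rng.normal(size=n)
  x2 = x1 + 0.01 * rng.normal(size=n)      # nearly collinear with x1
  x3 = rng.normal(size=n)
  X = np.column_stack([np.ones(n), x1, x2, x3])

  print(np.linalg.cond(X.T @ X))           # huge: near-singular moment matrix

  # VIFs are the diagonal of the inverse correlation matrix of the predictors
  corr = np.corrcoef(np.column_stack([x1, x2, x3]), rowvar=False)
  print(np.diag(np.linalg.inv(corr)))      # VIFs for x1 and x2 are very large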
Measures
The following are measures of multicollinearity:
Variance inflation factor (
|
https://en.wikipedia.org/wiki/Poincar%C3%A9%20metric
|
In mathematics, the Poincaré metric, named after Henri Poincaré, is the metric tensor describing a two-dimensional surface of constant negative curvature. It is the natural metric commonly used in a variety of calculations in hyperbolic geometry or Riemann surfaces.
There are three equivalent representations commonly used in two-dimensional hyperbolic geometry. One is the Poincaré half-plane model, defining a model of hyperbolic space on the upper half-plane. The Poincaré disk model defines a model for hyperbolic space on the unit disk. The disk and the upper half plane are related by a conformal map, and isometries are given by Möbius transformations. A third representation is on the punctured disk, where relations for q-analogues are sometimes expressed. These various forms are reviewed below.
Overview of metrics on Riemann surfaces
A metric on the complex plane may be generally expressed in the form
ds^2 = \lambda^2(z, \bar z)\, dz\, d\bar z,
where \lambda is a real, positive function of z and \bar z. The length of a curve \gamma in the complex plane is thus given by
l(\gamma) = \int_\gamma \lambda(z, \bar z)\, |dz|.
The area of a subset M of the complex plane is given by
\text{Area}(M) = \int_M \lambda^2(z, \bar z)\, \frac{i}{2}\, dz \wedge d\bar z,
where \wedge is the exterior product used to construct the volume form. The determinant of the metric is equal to \lambda^4, so the square root of the determinant is \lambda^2. The Euclidean volume form on the plane is dx \wedge dy, and so one has
dz \wedge d\bar z = (dx + i\, dy) \wedge (dx - i\, dy) = -2i\, dx \wedge dy.
A function \Phi(z, \bar z) is said to be the potential of the metric if
4 \frac{\partial}{\partial z} \frac{\partial}{\partial \bar z} \Phi(z, \bar z) = \lambda^2(z, \bar z).
The Laplace–Beltrami operator is given by
\Delta = \frac{4}{\lambda^2} \frac{\partial}{\partial z} \frac{\partial}{\partial \bar z} = \frac{1}{\lambda^2} \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right).
The Gaussian curvature of the metric is given by
K = -\Delta \ln \lambda.
This curvature is one-half of the Ricci scalar curvature.
Isometries preserve angles and arc-lengths. On Riemann surfaces, isometries are identical to changes of coordinate: that is, both the Laplace–Beltrami operator and the curvature are invariant under isometries. Thus, for example, let S be a Riemann surface with metric \lambda^2(z, \bar z)\, dz\, d\bar z and T be a Riemann surface with metric \mu^2(w, \bar w)\, dw\, d\bar w. Then a map
f: S \to T with f = w(z)
is an isometry if and only if it is conformal and if
\mu^2(w, \bar w)\, \frac{\partial w}{\partial z} \frac{\partial \bar w}{\partial \bar z} = \lambda^2(z, \bar z).
Here, the requirement that the map is conformal is nothing more than the statement
\frac{\partial}{\partial \bar z} w(z, \bar z) = 0,
that is,
w = w(z).
Metric and volume element on the Poincaré plane
The Poincaré metric tensor in the Poincaré half-plane model is given on the upper half-plane H as
ds^2 = \frac{dx^2 + dy^2}{y^2} = \frac{dz\, d\bar z}{y^2},
where we write z = x + iy and \bar z = x - iy.
This metric tensor is invariant under the action of SL(2,R). That is, if we write
z' = x' + iy' = \frac{az + b}{cz + d}
for ad - bc = 1, then we can work out that
x' = \frac{ac(x^2 + y^2) + x(ad + bc) + bd}{|cz + d|^2}
and
y' = \frac{y}{|cz + d|^2}.
The infinitesimal dz transforms as
dz' = \frac{dz}{(cz + d)^2}
and so
dz'\, d\bar z' = \frac{dz\, d\bar z}{|cz + d|^4},
thus making it clear that the metric tensor is invariant under SL(2,R). Indeed,
\frac{dz'\, d\bar z'}{y'^2} = \frac{dz\, d\bar z}{y^2}.
The invariant volume element is given by
d\mu = \frac{dx\, dy}{y^2}.
The metric is given by
\rho(z_1, z_2) = 2 \operatorname{arctanh} \frac{|z_1 - z_2|}{|z_1 - \bar z_2|} = \log \frac{|z_1 - \bar z_2| + |z_1 - z_2|}{|z_1 - \bar z_2| - |z_1 - z_2|}
for z_1, z_2 \in H.
Another interesting form of the metric can be given in terms of the cross-ratio. Given any four points z_1, z_2, z_3, z_4 in the compactified complex plane \hat{\mathbb{C}} = \mathbb{C} \cup \{\infty\}, the cross-ratio is defined by
(z_1, z_2; z_3, z_4) = \frac{(z_1 - z_3)(z_2 - z_4)}{(z_1 - z_4)(z_2 - z_3)}.
Then the metric is given by
\rho(z_1, z_2) = \log\left(z_1, z_2; z_1^\times, z_2^\times\right).
Here, z_1^\times and z_2^\times are the endpoints, on the real number line, of the geodesic joining z_1 and z_2. These are numbered so that z_1 lies in between z_1^\times and z_2.
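As a numerical sanity check (a sketch, not from the original text), the distance formula above can be compared with the equivalent form cosh ρ(z1, z2) = 1 + |z1 − z2|² / (2 Im z1 Im z2):

  import math

  def dist_arctanh(z1, z2):
      """Hyperbolic distance on the upper half-plane, arctanh form."""
      return 2 * math.atanh(abs(z1 - z2) / abs(z1 - z2.conjugate()))

  def dist_arccosh(z1, z2):
      """Equivalent arccosh form of the same distance."""
      return math.acosh(1 + abs(z1 - z2) ** 2 / (2 * z1.imag * z2.imag))

  z1, z2 = 0.5 + 1.0j, 2.0 + 0.5j
  print(dist_arctanh(z1, z2), dist_arccosh(z1, z2))  # equal values, ~1.925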
The geodesics for this metric tensor are circular arcs perpendicular to the real axis (half-circles whose origin is on the real axis) and straight vertical lines ending on the real axis.
|
https://en.wikipedia.org/wiki/Schwarz%E2%80%93Ahlfors%E2%80%93Pick%20theorem
|
In mathematics, the Schwarz–Ahlfors–Pick theorem is an extension of the Schwarz lemma for hyperbolic geometry, such as the Poincaré half-plane model.
The Schwarz–Pick lemma states that every holomorphic function from the unit disk U to itself, or from the upper half-plane H to itself, will not increase the Poincaré distance between points. The unit disk U with the Poincaré metric has negative Gaussian curvature −1. In 1938, Lars Ahlfors generalised the lemma to maps from the unit disk to other negatively curved surfaces:
Theorem (Schwarz–Ahlfors–Pick). Let U be the unit disk with Poincaré metric \rho; let S be a Riemann surface endowed with a Hermitian metric d_S whose Gaussian curvature is ≤ −1; let f: U \to S be a holomorphic function. Then
d_S(f(z_1), f(z_2)) \leq \rho(z_1, z_2)
for all z_1, z_2 \in U.
A generalization of this theorem was proved by Shing-Tung Yau in 1973.
References
Hyperbolic geometry
Riemann surfaces
Theorems in complex analysis
Theorems in differential geometry
|
https://en.wikipedia.org/wiki/Face%20%28disambiguation%29
|
The face is a part of the body, the front of the head.
Face may also refer to:
Generic meanings
Face (geometry), a flat (planar) surface that forms part of the boundary of a solid object
Face (hieroglyph), a portrayal of the human face, frontal view.
Face (mining), the surface where the mining work is advancing
Face (sociological concept), dignity or prestige in social relations
Face (graph theory)
Clock face
Rock face, a cliff or vertical surface on a large rock or mountain, especially a pyramidal peak
Typeface in typography
Books and publications
Face (novel), a novel by Benjamin Zephaniah
The Face (Vance novel), a 1979 science fiction novel by Jack Vance
The Face (Koontz novel), a 2003 novel by Dean Koontz
The Face (Whitaker novel), a 2002 novel by Phil Whitaker
The Face (magazine), a British music, fashion, and culture magazine
The Face, a novel by Angela Elwell Hunt
The Face (comics), a 1940s Columbia Comics superhero
Film and TV
Films
The Magician (1958 film) or The Face
The Face (1996 film), an American television film
Face (1997 film), a British crime drama by Antonia Bird
Face (2000 film), a Japanese dark comedy by Junji Sakamoto and starring Naomi Fujiyama
Face (2002 film), an American drama by Bertha Bay-Sa Pan and starring Bai Ling
Face (2004 film), a Korean horror film by Yoo Sang-gon
Face (2009 film), a Taiwanese-French comedy-drama by Tsai Ming-liang
FACE Film Award of the Council of Europe, a human-rights award bestowed at the Istanbul International Film Festival
Television
The Face (TV series), a multinational reality modeling-themed show
The Face (American TV series), the original series
The Face (Australian TV series)
The Face Thailand
The Face (British TV series)
The Face (Vietnamese TV series)
"Face" (Ghost in the Shell episode)
Face (Nick Jr. mascot)
Templeton Peck or Face, a character in The A-Team
Music
Performers
Face (a cappella group), an American rock a cappella group
The Face (band), a Chinese rock band formed in 1989
Face (musician), a member of So Solid Crew
Face (rapper) (born 1997), Russian rapper
David Morales or the Face (born 1961), American house music DJ and producer
Albums
Face (Of Cabbages and Kings album), 1988
Face (Key album), 2018
Face (Jimin album), 2023
Face, a 2006 album by Kenna
The Face: The Very Best of Visage, a 2010 album by Visage
Face (EP), a 2022 EP by Solar
The Face (album), a 2008 album by BoA
The Face (EP), a 2012 EP by Disclosure
Songs
"Face", a song by Brockhampton from Saturation
"Face", a song by Got7 from 7 for 7
"Face", a song by Rick Ross (featuring Trina) from Deeper Than Rap
"Face", a song by Sevendust from Sevendust
"The Face", a 1990 song by And Why Not?
"The Face", ' a 1974 song by Gentle Giant from The Power and the Glory
Other
Chery A1 Chery Face, a compact car produced by Chery Automobile
The Face, the world's first-ever graded rock climb by Jerry Moffatt.
Face (professional wrestling), a hero character, mean
|
https://en.wikipedia.org/wiki/Pappus%27s%20hexagon%20theorem
|
In mathematics, Pappus's hexagon theorem (attributed to Pappus of Alexandria) states that
given one set of collinear points A, B, C, and another set of collinear points a, b, c, then the intersection points X, Y, Z of line pairs Ab and aB, Ac and aC, Bc and bC are collinear, lying on the Pappus line. These three points are the points of intersection of the "opposite" sides of the hexagon AbCaBc.
It holds in a projective plane over any field, but fails for projective planes over any noncommutative division ring. Projective planes in which the "theorem" is valid are called pappian planes.
If one restricts the projective plane such that the Pappus line is the line at infinity, one gets the affine version of Pappus's theorem shown in the second diagram.
If the Pappus line and the two lines carrying the collinear triples have a point in common, one gets the so-called little version of Pappus's theorem.
The dual of this incidence theorem states that given one set of concurrent lines A, B, C, and another set of concurrent lines a, b, c, then the lines x, y, z defined by pairs of points resulting from pairs of intersections A∩b and a∩B, A∩c and a∩C, B∩c and b∩C are concurrent. (Concurrent means that the lines pass through one point.)
Pappus's theorem is a special case of Pascal's theorem for a conic—the limiting case when the conic degenerates into 2 straight lines. Pascal's theorem is in turn a special case of the Cayley–Bacharach theorem.
The Pappus configuration is the configuration of 9 lines and 9 points that occurs in Pappus's theorem, with each line meeting 3 of the points and each point meeting 3 lines. In general, the Pappus line does not pass through the point of intersection of the two lines carrying the collinear triples. This configuration is self dual. Since, in particular, the lines of the configuration have the properties of the lines x, y, z of the dual theorem, and collinearity of X, Y, Z is equivalent to concurrence of the corresponding lines, the dual theorem is therefore just the same as the theorem itself. The Levi graph of the Pappus configuration is the Pappus graph, a bipartite distance-regular graph with 18 vertices and 27 edges.
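The incidence statement can be checked numerically. A minimal sketch (plain Python; the sample points are illustrative) works in homogeneous coordinates, where both the line through two points and the meet of two lines are cross products, and collinearity is a vanishing 3×3 determinant:

  def cross(u, v):
      """Cross product: the line through two points, or the meet of two lines."""
      return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

  def det3(p, q, r):
      """Zero iff the three homogeneous points are collinear."""
      return (p[0]*(q[1]*r[2] - q[2]*r[1])
            - p[1]*(q[0]*r[2] - q[2]*r[0])
            + p[2]*(q[0]*r[1] - q[1]*r[0]))

  def meet(p1, p2, p3, p4):
      """Intersection of line p1p2 with line p3p4."""
      return cross(cross(p1, p2), cross(p3, p4))

  # Collinear triples: A, B, C on y = 0 and a, b, c on y = x + 1 (affine points, last coordinate 1)
  A, B, C = (1, 0, 1), (3, 0, 1), (5, 0, 1)
  a, b, c = (0, 1, 1), (1, 2, 1), (3, 4, 1)

  X = meet(A, b, a, B)
  Y = meet(A, c, a, C)
  Z = meet(B, c, b, C)
  print(det3(X, Y, Z))  # 0: X, Y, Z are collinear, as the theorem asserts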
Proof: affine form
If the affine form of the statement can be proven, then the projective form of Pappus's theorem is proven, as the extension of a pappian plane to a projective plane is unique.
Because of the parallelity in an affine plane, one has to distinguish two cases: the case where the two lines carrying the triples intersect, and the case where they are parallel. The key for a simple proof is the possibility of introducing a "suitable" coordinate system:
Case 1: The lines intersect at point .
In this case coordinates are introduced, such that (see diagram).
have the coordinates .
From the parallelity of the lines one gets and the parallelity of the lines yields . Hence line has slope and is parallel line .
Case 2: (little theorem).
In this case the coordinates are chosen such that . From the parallelity of and one gets and , respectively, and at least the parallelity .
Proof with homogeneous coordinates
Choose homogeneous coordinates with
.
On the lines , given by , take the points to be
for some . The three lines are , so they pass through the same point if
|
https://en.wikipedia.org/wiki/Generalized%20extreme%20value%20distribution
|
In probability theory and statistics, the generalized extreme value (GEV) distribution is a family of continuous probability distributions developed within extreme value theory to combine the Gumbel, Fréchet and Weibull families also known as type I, II and III extreme value distributions. By the extreme value theorem the GEV distribution is the only possible limit distribution of properly normalized maxima of a sequence of independent and identically distributed random variables. Note that a limit distribution needs to exist, which requires regularity conditions on the tail of the distribution. Despite this, the GEV distribution is often used as an approximation to model the maxima of long (finite) sequences of random variables.
In some fields of application the generalized extreme value distribution is known as the Fisher–Tippett distribution, named after Ronald Fisher and L. H. C. Tippett who recognised three different forms outlined below. However usage of this name is sometimes restricted to mean the special case of the Gumbel distribution. The origin of the common functional form for all 3 distributions dates back to at least Jenkinson, A. F. (1955), though allegedly it could also have been given by von Mises, R. (1936).
Specification
Using the standardized variable s = (x - \mu)/\sigma, where \mu, the location parameter, can be any real number, and \sigma > 0 is the scale parameter, the cumulative distribution function of the GEV distribution is then
F(s; \xi) = \exp(-e^{-s}) \quad \text{for } \xi = 0,
F(s; \xi) = \exp\!\left(-(1 + \xi s)^{-1/\xi}\right) \quad \text{for } \xi \neq 0 \text{ and } 1 + \xi s > 0,
where \xi, the shape parameter, can be any real number. Thus, for \xi > 0, the second expression is valid for s > -1/\xi, while for \xi < 0 it is valid for s < -1/\xi. In the first case, -1/\xi is the negative, lower end-point, where F is 0; in the second case, -1/\xi is the positive, upper end-point, where F is 1. For \xi = 0 the second expression is formally undefined and is replaced with the first expression, which is the result of taking the limit of the second as \xi \to 0, in which case s can be any real number.
In the special case of x = \mu we have s = 0, so F = e^{-1} \approx 0.368 for whatever values \sigma and \xi might have.
The probability density function of the standardized distribution is
f(s; \xi) = e^{-s} \exp(-e^{-s}) \quad \text{for } \xi = 0,
f(s; \xi) = (1 + \xi s)^{-1/\xi - 1} \exp\!\left(-(1 + \xi s)^{-1/\xi}\right) \quad \text{for } \xi \neq 0,
again valid for 1 + \xi s > 0 in the case \xi \neq 0, and for all real s in the case \xi = 0. The density is zero outside of the relevant range. In the case \xi = 0 the density is positive on the whole real line.
Since the cumulative distribution function is invertible, the quantile function for the GEV distribution has an explicit expression, namely
Q(p; \mu, \sigma, \xi) = \mu + \frac{\sigma}{\xi}\left[(-\ln p)^{-\xi} - 1\right] \quad (\xi \neq 0), \qquad Q(p; \mu, \sigma, 0) = \mu - \sigma \ln(-\ln p),
and therefore the quantile density function q = dQ/dp is
q(p; \sigma, \xi) = \frac{\sigma\, (-\ln p)^{-\xi - 1}}{p},
valid for 0 < p < 1, \sigma > 0, and any real \xi.
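Since the quantile function is explicit, inverse-transform sampling is immediate; a minimal sketch in plain Python (parameter values are arbitrary):

  import math
  import random

  def gev_quantile(p, mu=0.0, sigma=1.0, xi=0.1):
      """Quantile function of the GEV distribution; xi = 0 is the Gumbel case. Requires 0 < p < 1."""
      if xi == 0.0:
          return mu - sigma * math.log(-math.log(p))
      return mu + sigma * ((-math.log(p)) ** (-xi) - 1.0) / xi

  rng = random.Random(42)
  # rng.random() lies in [0, 1); hitting exactly 0 is astronomically unlikely
  sample = [gev_quantile(rng.random(), mu=2.0, sigma=0.5, xi=0.2) for _ in range(10000)]
  print(min(sample), max(sample))  # heavy upper tail for xi > 0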
Summary statistics
Some simple statistics of the distribution are:
for
The skewness is for ξ>0
For ξ<0, the sign of the numerator is reversed.
The excess kurtosis is:
where , , and is the gamma function.
Link to Fréchet, Weibull and Gumbel families
The shape parameter governs the tail behavior of the distribution. The sub-families defined by , and correspond, respectively, to the Gumbel, Fréchet and Weibull families, whose cumulative distribution functions are displayed below.
Gumbel or type I extreme value distribution ()
Fréchet or type II extreme value distribution ()
|
https://en.wikipedia.org/wiki/Sal%20Restivo
|
Sal Restivo (born 1940) is a sociologist/anthropologist.
Work
Restivo is a leading contributor to science studies and in particular to the sociology of mathematics. His current work focuses on the sociology of mind and brain, and the sociology of god and religion. He has also done work in the sociology of social and sociable robotics. He helped launch the ethnographic study of science in the 1970s, and is a founding member (1975) and former president (1994/95) of the Society for Social Studies of Science. He was a founding member of the Association for Humanist Sociology, and was also involved with Science for the People in its formative years and active in the Radical Science Movement.
His pioneering work in the sociology of mathematics has been a key factor in bringing social constructionism into mathematics education and the philosophy of mathematics education. He also helped to develop the science and technology studies curriculum which has become a popular major at universities throughout the US and the world. He is based in the US and worked as a professor for many years at Rensselaer Polytechnic Institute, Troy, NY. He has been awarded multiple NSF and NEH grants and fellowships as well as support from other agencies. He has been a Nordisk Forskerutdanningsakademi Professor simultaneously at Roskilde University (Denmark) and the University of Gothenburg (Sweden); a Belgian National Research Foundation Professor, Free University of Brussels (Belgium); and a Special Professor of Mathematics Education at Nottingham University (United Kingdom). He is a former Hixon/Riggs Professor of Science, Technology, and Society at Harvey Mudd College, and currently holds the title of Special Lecture Professor at the Research Institute for the Philosophy of Science and Technology at Northeastern University in Shenyang, China.
At RPI, he was Professor of Sociology, Science Studies, and Information Technology. He retired from RPI on June 30, 2012 and then spent six months as a Senior Fellow at the University of Ghent in Belgium. He now lives in Ridgewood, NY, and taught in the Department of Technology, Culture, and Society at New York University Tandon School of Engineering in Brooklyn NY from 2015 to 2017. He attended and graduated from Brooklyn Technical High School with honors in electrical engineering, and earned his BA with honors at the City College of New York. He has a PhD, earned with distinction, from Michigan State University.
Published works
Comparative Studies in Science and Society (C.E. Merrill, Columbus, 1974). Co-edited with C. K. Vanderpool.
The Sociological Worldview (B. Blackwell, Oxford, 1991); Swedish edition published by Bokforlaget Korpen, Goteborg, Sweden, 1995.
Mathematics in Society and History (Kluwer Academic Publishers, Dordrecht, 1992). Nominated for the Morris D. Forkosch Book Award of the Journal of the History of Ideas.
Math Worlds: Philosophical and Social Studies of Mathematics and Mathematics Education (SUNY Press, Alban
|
https://en.wikipedia.org/wiki/Intrinsic%20equation
|
In geometry, an intrinsic equation of a curve is an equation that defines the curve using a relation between the curve's intrinsic properties, that is, properties that do not depend on the location and possibly the orientation of the curve. Therefore an intrinsic equation defines the shape of the curve without specifying its position relative to an arbitrarily defined coordinate system.
The intrinsic quantities used most often are arc length , tangential angle , curvature or radius of curvature, and, for 3-dimensional curves, torsion . Specifically:
The natural equation is the curve given by its curvature and torsion.
The Whewell equation is obtained as a relation between arc length and tangential angle.
The Cesàro equation is obtained as a relation between arc length and curvature.
The equation of a circle (including a line) for example is given by the equation where is the arc length, the curvature and the radius of the circle.
These coordinates greatly simplify some physical problems. For elastic rods, for example, the potential energy is given by
where is the bending modulus. Moreover, as , the elasticity of rods can be given a simple variational form.
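As an illustration of how intrinsic equations determine a curve up to position and orientation, the following Python sketch (the function and step counts are illustrative choices) reconstructs a plane curve from its Cesàro equation by integrating the tangential angle and then the unit tangent; a constant curvature 1/R recovers a circle of radius R:

```python
# Reconstruct a plane curve from its curvature kappa(s) by integrating
# phi' = kappa (Whewell relation), x' = cos(phi), y' = sin(phi).
import numpy as np

def curve_from_curvature(kappa, s_max, n=10_000):
    s, ds = np.linspace(0.0, s_max, n, retstep=True)
    phi = np.cumsum(kappa(s)) * ds          # tangential angle
    x = np.cumsum(np.cos(phi)) * ds
    y = np.cumsum(np.sin(phi)) * ds
    return x, y

R = 2.0
x, y = curve_from_curvature(lambda s: np.full_like(s, 1.0 / R), 2 * np.pi * R)
# The curve should close up on itself: end point ~ start point.
print(abs(x[-1] - x[0]), abs(y[-1] - y[0]))  # both small
```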
References
External links
Curves
Equations
|
https://en.wikipedia.org/wiki/List%20of%20census%20divisions%20of%20Saskatchewan
|
The province of Saskatchewan, Canada is divided into 18 census divisions according to Statistics Canada. Unlike in some other provinces, census divisions do not reflect the organization of local government in Saskatchewan. These areas exist solely for the purposes of statistical analysis and presentation; they have no government of their own.
Saskatchewan's census divisions consist of numerous census subdivisions, which include:
Urban municipalities (cities, towns, villages, and resort villages);
Rural municipalities;
Northern municipalities (northern towns, northern villages, and northern hamlets); and
Indian reserves
List of census divisions
See also
Administrative divisions of Canada
List of communities in Saskatchewan
List of cities in Saskatchewan
List of Indian reserves in Saskatchewan
List of resort villages in Saskatchewan
List of rural municipalities in Saskatchewan
List of towns in Saskatchewan
List of villages in Saskatchewan
Notes
References
Census divisions
|
https://en.wikipedia.org/wiki/Pentadecagon
|
In geometry, a pentadecagon or pentakaidecagon or 15-gon is a fifteen-sided polygon.
Regular pentadecagon
A regular pentadecagon is represented by Schläfli symbol {15}.
A regular pentadecagon has interior angles of 156°, and, with a side length a, has an area given by
$A = \frac{15}{4}a^2 \cot\frac{\pi}{15} \approx 17.6424\,a^2.$
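For concreteness, the following Python sketch evaluates the interior angle, area, and circumradius of a regular pentadecagon from the standard regular-polygon formulas (the circumradius formula is a standard fact, not stated above):

```python
# Regular 15-gon quantities from the side length a; a small sketch.
import math

n, a = 15, 1.0
interior = (n - 2) * 180 / n                    # 156 degrees
area = n * a**2 / (4 * math.tan(math.pi / n))   # (15/4) a^2 cot(pi/15)
circumradius = a / (2 * math.sin(math.pi / n))
print(interior, round(area, 4), round(circumradius, 4))
# 156.0 17.6424 2.4049
```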
Construction
As 15 = 3 × 5, a product of distinct Fermat primes, a regular pentadecagon is constructible using compass and straightedge:
The following constructions of regular pentadecagons with given circumcircle are similar to the illustration of the proposition XVI in Book IV of Euclid's Elements.
Compare the construction according to Euclid in this image: Pentadecagon
In the construction for given circumcircle: is a side of equilateral triangle and is a side of a regular pentagon.
The point divides the radius in golden ratio:
Compared with the first animation (with green lines), the following two images show the two circular arcs (for the angles 36° and 24°) rotated 90° counterclockwise. They do not use the segment , but rather use the segment as radius for the second circular arc (angle 36°).
A compass and straightedge construction for a given side length: the construction is nearly the same as that of the pentagon with a given side; the presentation likewise proceeds by extending one side, which generates a segment, here , that is divided according to the golden ratio:
Symmetry
The regular pentadecagon has Dih15 dihedral symmetry, order 30, represented by 15 lines of reflection. Dih15 has 3 dihedral subgroups: Dih5, Dih3, and Dih1. And four more cyclic symmetries: Z15, Z5, Z3, and Z1, with Zn representing 2π/n radian rotational symmetry.
On the pentadecagon, there are 8 distinct symmetries. John Conway labels these symmetries with a letter and order of the symmetry follows the letter. He gives r30 for the full reflective symmetry, Dih15. He gives d (diagonal) with reflection lines through vertices, p with reflection lines through edges (perpendicular), and for the odd-sided pentadecagon i with mirror lines through both vertices and edges, and g for cyclic symmetry. a1 labels no symmetry.
These lower symmetries allow degrees of freedom in defining irregular pentadecagons. Only the g15 subgroup has no degrees of freedom, but it can be seen as having directed edges.
Pentadecagrams
There are three regular star polygons: {15/2}, {15/4}, {15/7}, constructed from the same 15 vertices of a regular pentadecagon, but connected by skipping every second, fourth, or seventh vertex respectively.
There are also three regular star figures: {15/3}, {15/5}, {15/6}, the first being a compound of three pentagons, the second a compound of five equilateral triangles, and the third a compound of three pentagrams.
The compound figure {15/3} can be loosely seen as the two-dimensional equivalent of the 3D compound of five tetrahedra.
Isogonal pentadecagons
Deeper truncations of the regular pentadecagon and pentadecagrams can produce isogonal (vertex-transitive) intermediate star polygon forms.
|
https://en.wikipedia.org/wiki/Abelian%20von%20Neumann%20algebra
|
In functional analysis, an abelian von Neumann algebra is a von Neumann algebra of operators on a Hilbert space in which all elements commute.
The prototypical example of an abelian von Neumann algebra is the algebra L∞(X, μ) for μ a σ-finite measure on X realized as an algebra of operators on the Hilbert space L2(X, μ) as follows: Each f ∈ L∞(X, μ) is identified with the multiplication operator
Of particular importance are the abelian von Neumann algebras on separable Hilbert spaces, particularly since they are completely classifiable by simple invariants.
Though there is a theory for von Neumann algebras on non-separable Hilbert spaces (and indeed much of the general theory still holds in that case) the theory is considerably simpler for algebras on separable spaces and most applications to other areas of mathematics or physics only use separable Hilbert spaces. Note that if the measure space (X, μ) is a standard measure space (that is X − N is a standard Borel space for some null set N and μ is a σ-finite measure) then L2(X, μ) is separable.
Classification
The relationship between commutative von Neumann algebras and measure spaces is analogous to that between commutative C*-algebras and locally compact Hausdorff spaces. Every commutative von Neumann algebra on a separable Hilbert space is isomorphic to L∞(X) for some standard measure space (X, μ) and conversely, for every standard measure space X, L∞(X) is a von Neumann algebra. This isomorphism as stated is an algebraic isomorphism.
In fact we can state this more precisely as follows:
Theorem. Any abelian von Neumann algebra of operators on a separable Hilbert space is *-isomorphic to exactly one of the following
The isomorphism can be chosen to preserve the weak operator topology.
In the above list, the interval [0,1] has Lebesgue measure and the sets {1, 2, ..., n} and N have counting measure. The unions are disjoint unions. This classification is essentially a variant of Maharam's classification theorem for separable measure algebras. The version of Maharam's classification theorem that is most useful involves a point realization of the equivalence, and is somewhat of a folk theorem.
Although every standard measure space is isomorphic to one of the above and the list is exhaustive in this sense, there is a more canonical choice for the measure space in the case of abelian von Neumann algebras A: The set of all projections is a σ-complete Boolean algebra, that is, a pointfree σ-algebra. In the special case one recovers the abstract σ-algebra . This pointfree approach can be turned into a duality theorem analogous to Gelfand duality between the category of abelian von Neumann algebras and the category of abstract σ-algebras.
Let μ and ν be non-atomic probability measures on standard Borel spaces X and Y respectively. Then there is a μ null subset N of X, a ν null subset M of Y and a Borel isomorphism
which carries μ into ν.
Notice that in the above result, it
|
https://en.wikipedia.org/wiki/Maximal%20compact%20subgroup
|
In mathematics, a maximal compact subgroup K of a topological group G is a subgroup K that is a compact space, in the subspace topology, and maximal amongst such subgroups.
Maximal compact subgroups play an important role in the classification of Lie groups and especially semi-simple Lie groups. Maximal compact subgroups of Lie groups are not in general unique, but are unique up to conjugation – they are essentially unique.
Example
An example would be the subgroup O(2), the orthogonal group, inside the general linear group GL(2, R). A related example is the circle group SO(2) inside SL(2, R). Evidently SO(2) inside GL(2, R) is compact and not maximal. The non-uniqueness of these examples can be seen as any inner product has an associated orthogonal group, and the essential uniqueness corresponds to the essential uniqueness of the inner product.
Definition
A maximal compact subgroup is a maximal subgroup amongst compact subgroups – a maximal (compact subgroup) – rather than (as one might alternately read it) a maximal subgroup that happens to be compact; the latter would probably be called a compact (maximal subgroup), but in any case it is not the intended meaning (and in fact maximal proper subgroups are not in general compact).
Existence and uniqueness
The Cartan-Iwasawa-Malcev theorem asserts that every connected Lie group (and indeed every connected locally compact group) admits maximal compact subgroups and that they are all conjugate to one another. For a semisimple Lie group uniqueness is a consequence of the Cartan fixed point theorem, which asserts that if a compact group acts by isometries on a complete simply connected negatively curved Riemannian manifold then it has a fixed point.
Maximal compact subgroups of connected Lie groups are usually not unique, but they are unique up to conjugation, meaning that given two maximal compact subgroups K and L, there is an element g ∈ G such that gKg−1 = L. Hence a maximal compact subgroup is essentially unique, and people often speak of "the" maximal compact subgroup.
For the example of the general linear group GL(n, R), this corresponds to the fact that any inner product on Rn defines a (compact) orthogonal group (its isometry group) – and that it admits an orthonormal basis: the change of basis defines the conjugating element conjugating the isometry group to the classical orthogonal group O(n, R).
Proofs
For a real semisimple Lie group, Cartan's proof of the existence and uniqueness of a maximal compact subgroup can be found in and . and discuss the extension to connected Lie groups and connected locally compact groups.
For semisimple groups, existence is a consequence of the existence of a compact real form of the noncompact semisimple Lie group and the corresponding Cartan decomposition. The proof of uniqueness relies on the fact that the corresponding Riemannian symmetric space G/K has negative curvature and
Cartan's fixed point theorem. showed that the derivative of the exponential
|
https://en.wikipedia.org/wiki/Kaprekar%27s%20routine
|
In number theory, Kaprekar's routine is an iterative algorithm named after its inventor, Indian mathematician D. R. Kaprekar. Each iteration starts with a number, sorts the digits into descending and ascending order, and calculates the difference between the two new numbers.
As an example, starting with the number 8991 in base 10:
9981 − 1899 = 8082
8820 − 0288 = 8532
8532 − 2358 = 6174
7641 − 1467 = 6174
6174, known as Kaprekar's constant, is a fixed point of this algorithm. Any four-digit number (in base 10) with at least two distinct digits will reach 6174 within seven iterations. The algorithm runs on any natural number in any given number base.
Definition and properties
The algorithm is as follows:
Choose any natural number in a given number base . This is the first number of the sequence.
Create a new number by sorting the digits of in descending order, and another number by sorting the digits of in ascending order. These numbers may have leading zeros, which can be ignored. Subtract the ascending number from the descending number to produce the next number of the sequence.
Repeat step 2. A minimal implementation of the mapping is sketched below.
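The following Python sketch uses a fixed four-digit width in base 10, matching the example below; the names and the fixed-width convention are illustrative choices:

```python
# One Kaprekar step: sort digits descending and ascending, subtract.
def kaprekar_step(n, base=10, width=4):
    digits = []
    for _ in range(width):               # fixed-width digit expansion
        n, d = divmod(n, base)
        digits.append(d)
    digits.sort()
    asc = desc = 0
    for d in digits:                     # value with ascending digits
        asc = asc * base + d
    for d in reversed(digits):           # value with descending digits
        desc = desc * base + d
    return desc - asc

n = 3524
seen = []
while n not in seen:                     # iterate until a fixed point/cycle
    seen.append(n)
    n = kaprekar_step(n)
print(seen)  # [3524, 3087, 8352, 6174] -- reaches Kaprekar's constant
```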
The sequence is called a Kaprekar sequence and the function is the Kaprekar mapping. Some numbers map to themselves; these are the fixed points of the Kaprekar mapping, and are called Kaprekar's constants. Zero is a Kaprekar's constant for all bases , and so is called a trivial Kaprekar's constant. All other Kaprekar's constants are nontrivial.
For example, in base 10, starting with 3524,
with 6174 as a Kaprekar's constant.
All Kaprekar sequences will either reach one of these fixed points or will result in a repeating cycle. Either way, the end result is reached in a fairly small number of steps.
Note that the two numbers formed in each step have the same digit sum and hence the same remainder modulo b − 1. Therefore, each number in a Kaprekar sequence of base-b numbers (other than possibly the first) is a multiple of b − 1.
When leading zeroes are retained, only repdigits lead to the trivial Kaprekar's constant.
Families of Kaprekar's constants
In base 4, it can easily be shown that all numbers of the form 3021, 310221, 31102221, 3...111...02...222...1 (where the length of the "1" sequence and the length of the "2" sequence are the same) are fixed points of the Kaprekar mapping.
In base 10, it can easily be shown that all numbers of the form 6174, 631764, 63317664, 6...333...17...666...4 (where the length of the "3" sequence and the length of the "6" sequence are the same) are fixed points of the Kaprekar mapping.
b = 2k
It can be shown that all natural numbers
are fixed points of the Kaprekar mapping in even base for all natural numbers .
Kaprekar's constants and cycles of the Kaprekar mapping for specific base b
All numbers are expressed in base , using A−Z to represent digit values 10 to 35.
Kaprekar's constants in base 10
Numbers of length four digits
In 1949 D. R. Kaprekar discovered that if the above process is applied to four-digit numbers in base 10, the sequence converges to 6174 within seven iterations or, more rarely (for repdigits), converges to 0.
|
https://en.wikipedia.org/wiki/Conformable%20matrix
|
In mathematics, a matrix is conformable if its dimensions are suitable for defining some operation (e.g. addition, multiplication, etc.).
Examples
If two matrices have the same dimensions (number of rows and number of columns), they are conformable for addition.
Multiplication of two matrices is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix. That is, if A is an m × n matrix and B is a p × q matrix, then n needs to be equal to p for the matrix product AB to be defined. In this case, we say that A and B are conformable for multiplication (in that sequence).
Since squaring a matrix involves multiplying it by itself (AA), a matrix must be n × n (that is, a square matrix) to be conformable for squaring. Thus, for example, only a square matrix can be idempotent.
Only a square matrix is conformable for matrix inversion. However, the Moore–Penrose pseudoinverse and other generalized inverses do not have this requirement.
Only a square matrix is conformable for matrix exponentiation.
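These conformability rules are exactly what numerical libraries enforce at run time. A short Python/NumPy illustration (the shapes are chosen arbitrarily):

```python
# Conformability in practice: numpy only defines these operations when
# the dimensions line up as described above.
import numpy as np

A = np.ones((2, 3))        # 2 x 3
B = np.ones((3, 4))        # 3 x 4

print((A + A).shape)       # (2, 3): same dimensions, conformable for addition
print((A @ B).shape)       # (2, 4): columns of A (3) == rows of B (3)
try:
    B @ A                  # 3 x 4 times 2 x 3: inner dimensions 4 != 2
except ValueError as e:
    print("not conformable:", e)
```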
See also
Linear algebra
References
Linear algebra
Matrices
|
https://en.wikipedia.org/wiki/Consistent%20estimator
|
In statistics, a consistent estimator or asymptotically consistent estimator is an estimator—a rule for computing estimates of a parameter θ0—having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to θ0. This means that the distributions of the estimates become more and more concentrated near the true value of the parameter being estimated, so that the probability of the estimator being arbitrarily close to θ0 converges to one.
In practice one constructs an estimator as a function of an available sample of size n, and then imagines being able to keep collecting data and expanding the sample ad infinitum. In this way one would obtain a sequence of estimates indexed by n, and consistency is a property of what occurs as the sample size “grows to infinity”. If the sequence of estimates can be mathematically shown to converge in probability to the true value θ0, it is called a consistent estimator; otherwise the estimator is said to be inconsistent.
Consistency as defined here is sometimes referred to as weak consistency. When we replace convergence in probability with almost sure convergence, then the estimator is said to be strongly consistent. Consistency is related to bias; see bias versus consistency.
Definition
Formally speaking, an estimator Tn of parameter θ is said to be weakly consistent, if it converges in probability to the true value of the parameter:
i.e. if, for all ε > 0, $\lim_{n\to\infty}\Pr\left(\left|T_n-\theta\right|>\varepsilon\right)=0.$
An estimator Tn of parameter θ is said to be strongly consistent, if it converges almost surely to the true value of the parameter:
A more rigorous definition takes into account the fact that θ is actually unknown, and thus, the convergence in probability must take place for every possible value of this parameter. Suppose {pθ : θ ∈ Θ} is a family of distributions (the parametric model), and Xθ = {X1, X2, …}, where Xi ~ pθ, is an infinite sample from the distribution pθ. Let { Tn(Xθ) } be a sequence of estimators for some parameter g(θ). Usually, Tn will be based on the first n observations of a sample. Then this sequence {Tn} is said to be (weakly) consistent if
This definition uses g(θ) instead of simply θ, because often one is interested in estimating a certain function or a sub-vector of the underlying parameter. In the next example, we estimate the location parameter of the model, but not the scale:
Examples
Sample mean of a normal random variable
Suppose one has a sequence of statistically independent observations {X1, X2, ...} from a normal N(μ, σ2) distribution. To estimate μ based on the first n observations, one can use the sample mean: Tn = (X1 + ... + Xn)/n. This defines a sequence of estimators, indexed by the sample size n.
From the properties of the normal distribution, we know the sampling distribution of this statistic: Tn is itself normally distributed, with mean μ and variance σ2/n. Equivalently, $\sqrt{n}\,(T_n-\mu)/\sigma$ has a standard normal distribution:
as n tends to infinity, for any fixed ε > 0. Therefore, the sequence Tn of sample means is consistent for the population mean μ.
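The convergence can be illustrated by simulation. The following Python sketch (sample sizes, tolerance, and replication count are arbitrary choices) estimates the probability that the sample mean deviates from μ by at least ε:

```python
# Simulation sketch of weak consistency: the sample mean of N(mu, sigma^2)
# observations concentrates around mu as n grows.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, eps = 5.0, 2.0, 0.1

for n in (10, 100, 10_000):
    # Estimate P(|T_n - mu| >= eps) over many replications.
    samples = rng.normal(mu, sigma, size=(1_000, n))
    T_n = samples.mean(axis=1)
    print(n, np.mean(np.abs(T_n - mu) >= eps))
# The estimated probability falls toward 0 as n increases.
```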
|
https://en.wikipedia.org/wiki/Finitely%20generated
|
In mathematics, finitely generated may refer to:
Finitely generated object
Finitely generated group
Finitely generated monoid
Finitely generated abelian group
Finitely generated module
Finitely generated ideal
Finitely generated algebra
Finitely generated space
|
https://en.wikipedia.org/wiki/Mellin%20inversion%20theorem
|
In mathematics, the Mellin inversion formula (named after Hjalmar Mellin) tells us conditions under which the inverse Mellin transform, or equivalently the inverse two-sided Laplace transform, is defined and recovers the transformed function.
Method
If $\varphi(s)$ is analytic in the strip $a < \operatorname{Re}(s) < b$, and if it tends to zero uniformly as $\operatorname{Im}(s) \to \pm\infty$ for any real value c between a and b, with its integral along such a line converging absolutely, then if
we have that
Conversely, suppose $f(x)$ is piecewise continuous on the positive real numbers, taking a value halfway between the limit values at any jump discontinuities, and suppose the integral
$\varphi(s)=\int_0^{\infty} f(x)\,x^{s-1}\,dx$
is absolutely convergent when $a < \operatorname{Re}(s) < b$. Then $f$ is recoverable via the inverse Mellin transform from its Mellin transform $\varphi$. These results can be obtained by relating the Mellin transform to the Fourier transform by a change of variables and then applying an appropriate version of the Fourier inversion theorem.
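For a concrete instance of the transform pair, SymPy provides symbolic Mellin transforms; the classic example below recovers exp(−x) from Γ(s) on the strip Re(s) > 0 (an illustration only, assuming SymPy's mellin_transform/inverse_mellin_transform interface):

```python
# Symbolic sanity check of the inversion statement using SymPy.
from sympy import symbols, exp, oo, gamma
from sympy import mellin_transform, inverse_mellin_transform

x = symbols('x', positive=True)
s = symbols('s')

F, strip, cond = mellin_transform(exp(-x), x, s)
print(F, strip)      # gamma(s) (0, oo)

f = inverse_mellin_transform(gamma(s), s, x, (0, oo))
print(f)             # exp(-x): the original function is recovered
```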
Boundedness condition
The boundedness condition on $\varphi(s)$ can be strengthened if $f(x)$ is continuous. If $\varphi(s)$ is analytic in the strip $a < \operatorname{Re}(s) < b$, and if $|\varphi(s)| < K|s|^{-2}$, where K is a positive constant, then $f(x)$ as defined by the inversion integral exists and is continuous; moreover, the Mellin transform of $f$ is $\varphi$ for at least $a < \operatorname{Re}(s) < b$.
On the other hand, if we are willing to accept an original $f$ which is a generalized function, we may relax the boundedness condition on $\varphi$ to simply make it of polynomial growth in any closed strip contained in the open strip $a < \operatorname{Re}(s) < b$.
We may also define a Banach space version of this theorem. If we call by $L_{\nu,p}(\mathbb{R}^{+})$ the weighted $L^p$ space of complex-valued functions $f$ on the positive reals such that
$\|f\| = \left(\int_0^{\infty} \left|x^{\nu} f(x)\right|^p\, \frac{dx}{x}\right)^{1/p} < \infty,$
where ν and p are fixed real numbers with $p > 1$, then if $f(x)$ is in $L_{\nu,p}(\mathbb{R}^{+})$ with $1 < p \le 2$, then its Mellin transform belongs to $L_{\nu,q}(\mathbb{R}^{+})$ with $q = p/(p-1)$, and the corresponding norm inequality holds.
Here functions, identical everywhere except on a set of measure zero, are identified.
Since the two-sided Laplace transform can be defined as the Mellin transform composed with an exponential change of variables, $\{\mathcal{B}f\}(s) = \{\mathcal{M}f(-\ln x)\}(s)$,
these theorems can be immediately applied to it also.
See also
Mellin transform
Nachbin's theorem
References
External links
Tables of Integral Transforms at EqWorld: The World of Mathematical Equations.
Integral transforms
Theorems in complex analysis
Laplace transforms
|
https://en.wikipedia.org/wiki/It%C3%B4%20calculus
|
Itô calculus, named after Kiyosi Itô, extends the methods of calculus to stochastic processes such as Brownian motion (see Wiener process). It has important applications in mathematical finance and stochastic differential equations.
The central concept is the Itô stochastic integral, a stochastic generalization of the Riemann–Stieltjes integral in analysis. The integrands and the integrators are now stochastic processes:
where H is a locally square-integrable process adapted to the filtration generated by X, which is a Brownian motion or, more generally, a semimartingale. The result of the integration is then another stochastic process. Concretely, the integral from 0 to any particular t is a random variable, defined as a limit of a certain sequence of random variables. The paths of Brownian motion fail to satisfy the requirements to be able to apply the standard techniques of calculus. So with the integrand a stochastic process, the Itô stochastic integral amounts to an integral with respect to a function which is not differentiable at any point and has infinite variation over every time interval.
The main insight is that the integral can be defined as long as the integrand H is adapted, which loosely speaking means that its value at time t can only depend on information available up until this time. Roughly speaking, one chooses a sequence of partitions of the interval from 0 to t and constructs Riemann sums. Every time we are computing a Riemann sum, we are using a particular instantiation of the integrator. It is crucial which point in each of the small intervals is used to compute the value of the function. The limit is then taken in probability as the mesh of the partition goes to zero. Numerous technical details have to be taken care of to show that this limit exists and is independent of the particular sequence of partitions. Typically, the left end of the interval is used.
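The left-endpoint convention and the resulting quadratic-variation correction can be seen numerically. A Python sketch (step count and seed are arbitrary choices) approximating the Itô integral of Brownian motion against itself, which Itô calculus evaluates to $W_t^2/2 - t/2$:

```python
# Left-endpoint Riemann sums for the Ito integral of W against itself.
# The extra -t/2 is the quadratic-variation term absent from ordinary
# calculus; with right endpoints one would get +t/2 instead.
import numpy as np

rng = np.random.default_rng(42)
t, n = 1.0, 1_000_000
dt = t / n

dW = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments
W = np.concatenate(([0.0], np.cumsum(dW)))  # W_0 = 0

ito_sum = np.sum(W[:-1] * dW)               # left endpoints: adapted
print(ito_sum, W[-1]**2 / 2 - t / 2)        # the two agree closely
```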
Important results of Itô calculus include the integration by parts formula and Itô's lemma, which is a change of variables formula. These differ from the formulas of standard calculus, due to quadratic variation terms.
In mathematical finance, the described evaluation strategy of the integral is conceptualized as first deciding what to do, then observing the change in the prices. The integrand is how much stock we hold, the integrator represents the movement of the prices, and the integral is how much money we have in total, including what our stock is worth, at any given moment. The prices of stocks and other traded financial assets can be modeled by stochastic processes such as Brownian motion or, more often, geometric Brownian motion (see Black–Scholes). Then, the Itô stochastic integral represents the payoff of a continuous-time trading strategy consisting of holding an amount Ht of the stock at time t. In this situation, the condition that H is adapted corresponds to the necessary restriction that the trading strategy can only make use of the information available up to each time.
|
https://en.wikipedia.org/wiki/Kenkichi%20Iwasawa
|
Kenkichi Iwasawa ( Iwasawa Kenkichi, September 11, 1917 – October 26, 1998) was a Japanese mathematician who is known for his influence on algebraic number theory.
Biography
Iwasawa was born in Shinshuku-mura, a town near Kiryū, in Gunma Prefecture. He attended elementary school there, but later moved to Tokyo to attend Musashi High School.
From 1937 to 1940 Iwasawa studied as an undergraduate at Tokyo Imperial University, after which he entered graduate school at University of Tokyo and became an assistant in the Department of Mathematics. In 1945 he was awarded a Doctor of Science degree. However, this same year Iwasawa became sick with pleurisy, and was unable to return to his position at the university until April 1947. From 1949 to 1955 he worked as assistant professor at Tokyo University.
In 1950, Iwasawa was invited to Cambridge, Massachusetts to give a lecture at the International Congress of Mathematicians on his method to study Dedekind zeta functions using integration over ideles and duality of adeles; this method was also independently obtained by John Tate and is sometimes called Iwasawa–Tate theory. Iwasawa spent the next two years at the Institute for Advanced Study in Princeton, and in Spring of 1952 was offered a job at the Massachusetts Institute of Technology, where he worked until 1967.
From 1967 until his retirement in 1986, Iwasawa served as Professor of Mathematics at Princeton. He returned to Tokyo with his wife in 1987.
Among Iwasawa's most famous students are Robert F. Coleman, Bruce Ferrero, Ralph Greenberg, Gustave Solomon, Larry Washington, and Eugene M. Luks.
Research
Iwasawa is known for introducing what is now called Iwasawa theory, which developed from researches on cyclotomic fields from the later 1950s. Before that he worked on Lie groups and Lie algebras, introducing the general Iwasawa decomposition.
List of books available in English
Lectures on p-adic L-functions / by Kenkichi Iwasawa (1972)
Local class field theory / Kenkichi Iwasawa (1986)
Algebraic functions / Kenkichi Iwasawa ; translated by Goro Kato (1993)
See also
Iwasawa group
Anabelian geometry
Fermat's Last Theorem
References
Sources
External links
1917 births
1998 deaths
People from Gunma Prefecture
20th-century Japanese mathematicians
Number theorists
Institute for Advanced Study visiting scholars
Massachusetts Institute of Technology faculty
Princeton University faculty
University of Tokyo alumni
|
https://en.wikipedia.org/wiki/Hardy%27s%20theorem
|
In mathematics, Hardy's theorem is a result in complex analysis describing the behavior of holomorphic functions.
Let $f$ be a holomorphic function on the open ball centered at zero and of radius $R$ in the complex plane, and assume that $f$ is not a constant function. If one defines
$I(r) = \frac{1}{2\pi}\int_0^{2\pi} \left| f(re^{i\theta}) \right| d\theta$ for $0 < r < R,$ then this function is strictly increasing and $\log I(r)$ is a convex function of $\log r$.
See also
Maximum principle
Hadamard three-circle theorem
References
John B. Conway. (1978) Functions of One Complex Variable I. Springer-Verlag, New York, New York.
Theorems in complex analysis
|
https://en.wikipedia.org/wiki/Hadamard%20three-circle%20theorem
|
In complex analysis, a branch of mathematics, the
Hadamard three-circle theorem is a result about the behavior of holomorphic functions.
Let $f(z)$ be a holomorphic function on the annulus
$r_1 \le |z| \le r_3.$ Let $M(r)$ be the maximum of $|f(z)|$ on the circle $|z| = r.$ Then $\log M(r)$ is a convex function of the logarithm $\log r.$ Moreover, if $f(z)$ is not of the form $cz^{\lambda}$ for some constants $\lambda$ and $c$, then $\log M(r)$ is strictly convex as a function of $\log r.$
The conclusion of the theorem can be restated as
$\log\frac{r_3}{r_1}\,\log M(r_2) \;\le\; \log\frac{r_3}{r_2}\,\log M(r_1) + \log\frac{r_2}{r_1}\,\log M(r_3)$
for any three concentric circles of radii $r_1 < r_2 < r_3.$
History
A statement and proof for the theorem was given by J.E. Littlewood in 1912, but he attributes it to no one in particular, stating it as a known theorem. Harald Bohr and Edmund Landau attribute the theorem to Jacques Hadamard, writing in 1896; Hadamard published no proof.
Proof
The three circles theorem follows from the fact that for any real a, the function $\operatorname{Re}\log\left(z^a f(z)\right)$ is harmonic between two circles, and therefore takes its maximum value on one of the circles. The theorem follows by choosing the constant a so that this harmonic function has the same maximum value on both circles.
The theorem can also be deduced directly from Hadamard's three-lines theorem.
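The convexity statement is easy to probe numerically. A Python sketch (the test function, radii, and grid size are illustrative choices) checking the three-circle inequality for a function holomorphic on the punctured plane:

```python
# Numeric illustration of the three-circle inequality for
# f(z) = exp(z) + 1/z on radii 0.5 < 1.0 < 2.0 (not a proof).
import numpy as np

f = lambda z: np.exp(z) + 1.0 / z
theta = np.linspace(0.0, 2 * np.pi, 20_000)

def log_M(r):
    """log of the maximum modulus of f on the circle |z| = r."""
    return np.log(np.max(np.abs(f(r * np.exp(1j * theta)))))

r1, r2, r3 = 0.5, 1.0, 2.0
lhs = np.log(r3 / r1) * log_M(r2)
rhs = np.log(r3 / r2) * log_M(r1) + np.log(r2 / r1) * log_M(r3)
print(lhs <= rhs, lhs, rhs)   # True: log M(r) is convex in log r
```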
See also
Maximum principle
Logarithmically convex function
Hardy's theorem
Hadamard three-lines theorem
Borel–Carathéodory theorem
Phragmén–Lindelöf principle
Notes
References
E. C. Titchmarsh, The theory of the Riemann Zeta-Function, (1951) Oxford at the Clarendon Press, Oxford. (See chapter 14)
External links
"proof of Hadamard three-circle theorem"
Inequalities
Theorems in complex analysis
|
https://en.wikipedia.org/wiki/Projection-slice%20theorem
|
In mathematics, the projection-slice theorem, central slice theorem or Fourier slice theorem in two dimensions states that the results of the following two calculations are equal:
Take a two-dimensional function f(r), project (e.g. using the Radon transform) it onto a (one-dimensional) line, and do a Fourier transform of that projection.
Take that same function, but do a two-dimensional Fourier transform first, and then slice it through its origin along a line parallel to the projection line.
In operator terms, if
F1 and F2 are the 1- and 2-dimensional Fourier transform operators mentioned above,
P1 is the projection operator (which projects a 2-D function onto a 1-D line),
S1 is a slice operator (which extracts a 1-D central slice from a function),
then $F_1 P_1 = S_1 F_2.$
This idea can be extended to higher dimensions.
This theorem is used, for example, in the analysis of medical
CT scans where a "projection" is an x-ray
image of an internal organ. The Fourier transforms of these images are
seen to be slices through the Fourier transform of the 3-dimensional
density of the internal organ, and these slices can be interpolated to build
up a complete Fourier transform of that density. The inverse Fourier transform
is then used to recover the 3-dimensional density of the object. This technique was first derived by Ronald N. Bracewell in 1956 for a radio-astronomy problem.
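In the discrete setting the theorem can be verified in a few lines with the FFT. A Python sketch (array size and the random test image are arbitrary choices): summing a 2-D array along one axis and taking a 1-D FFT reproduces the corresponding central row of the 2-D FFT:

```python
# FFT check of the projection-slice theorem: the 1-D Fourier transform
# of the x-axis projection equals the k_y = 0 row of the 2-D transform.
import numpy as np

rng = np.random.default_rng(7)
f = rng.normal(size=(64, 64))        # arbitrary 2-D "image" f(y, x)

projection = f.sum(axis=0)           # project onto the x axis (sum over y)
slice_1d = np.fft.fft(projection)    # F1 P1 f

F2 = np.fft.fft2(f)                  # 2-D transform
central_slice = F2[0, :]             # k_y = 0 row: S1 F2 f

print(np.allclose(slice_1d, central_slice))  # True
```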
The projection-slice theorem in N dimensions
In N dimensions, the projection-slice theorem states that the
Fourier transform of the projection of an N-dimensional function
f(r) onto an m-dimensional linear submanifold
is equal to an m-dimensional slice of the N-dimensional Fourier transform of that
function consisting of an m-dimensional linear submanifold through the origin in the Fourier space which is parallel to the projection submanifold. In operator terms: $F_m P_m = S_m F_N.$
The generalized Fourier-slice theorem
In addition to generalizing to N dimensions, the projection-slice theorem can be further generalized with an arbitrary change of basis. For convenience of notation, we consider the change of basis to be represented as B, an N-by-N invertible matrix operating on N-dimensional column vectors. Then the generalized Fourier-slice theorem can be stated as
where is the transpose of the inverse of the change of basis transform.
Proof in two dimensions
The projection-slice theorem is easily proven for the case of two dimensions.
Without loss of generality, we can take the projection line to be the x-axis.
There is no loss of generality because if we use a shifted and rotated line, the law still applies. Using a shifted line (in y) gives the same projection and therefore the same 1D Fourier transform results. The rotated function is the Fourier pair of the rotated Fourier transform, for which the theorem again holds.
If f(x, y) is a two-dimensional function, then the projection of f(x, y) onto the x axis is p(x) where
$p(x) = \int_{-\infty}^{\infty} f(x,y)\,dy.$
The Fourier transform of $f(x,y)$ is
$F(k_x, k_y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(x,y)\, e^{-2\pi i (xk_x + yk_y)}\, dx\, dy.$
The slice is then
$s(k_x) = F(k_x, 0) = \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} f(x,y)\, dy \right] e^{-2\pi i x k_x}\, dx = \int_{-\infty}^{\infty} p(x)\, e^{-2\pi i x k_x}\, dx,$
which is just the Fourier transform of p(x).
|
https://en.wikipedia.org/wiki/Abel%20transform
|
In mathematics, the Abel transform, named for Niels Henrik Abel, is an integral transform often used in the analysis of spherically symmetric or axially symmetric functions. The Abel transform of a function f(r) is given by
$F(y) = 2\int_y^{\infty} \frac{f(r)\, r}{\sqrt{r^2 - y^2}}\, dr.$
Assuming that f(r) drops to zero more quickly than 1/r, the inverse Abel transform is given by
$f(r) = -\frac{1}{\pi}\int_r^{\infty} \frac{F'(y)}{\sqrt{y^2 - r^2}}\, dy.$
In image analysis, the forward Abel transform is used to project an optically thin, axially symmetric emission function onto a plane, and the inverse Abel transform is used to calculate the emission function given a projection (i.e. a scan or a photograph) of that emission function.
In absorption spectroscopy of cylindrical flames or plumes, the forward Abel transform is the integrated absorbance along a ray with closest distance y from the center of the flame, while the inverse Abel transform gives the local absorption coefficient at a distance r from the center. The Abel transform is limited to applications with axially symmetric geometries. For more general asymmetrical cases, more general-oriented reconstruction algorithms such as algebraic reconstruction technique (ART), maximum likelihood expectation maximization (MLEM), and filtered back-projection (FBP) algorithms should be employed.
In recent years, the inverse Abel transform (and its variants) has become the cornerstone of data analysis in photofragment-ion imaging and photoelectron imaging. Among recent most notable extensions of inverse Abel transform are the "onion peeling" and "basis set expansion" (BASEX) methods of photoelectron and photoion image analysis.
Geometrical interpretation
In two dimensions, the Abel transform F(y) can be interpreted as the projection of a circularly symmetric function f(r) along a set of parallel lines of sight at a distance y from the origin. Referring to the figure on the right, the observer (I) will see
$F(y) = \int_{-\infty}^{\infty} f\!\left(\sqrt{x^2 + y^2}\right) dx,$
where f(r) is the circularly symmetric function represented by the gray color in the figure. It is assumed that the observer is actually at x = ∞, so that the limits of integration are ±∞, and all lines of sight are parallel to the x axis.
Realizing that the radius r is related to x and y as r2 = x2 + y2, it follows that
$dx = \frac{r\,dr}{\sqrt{r^2 - y^2}}$
for x > 0. Since f(r) is an even function in x, we may write
$F(y) = 2\int_y^{\infty} \frac{f(r)\, r}{\sqrt{r^2 - y^2}}\, dr,$
which yields the Abel transform of f(r).
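As a numerical check of the forward transform, the Abel transform of a Gaussian is again a Gaussian: $F(y) = \sqrt{\pi}\,e^{-y^2}$ for $f(r) = e^{-r^2}$. A Python sketch (grid parameters are arbitrary; the substitution removes the integrable endpoint singularity):

```python
# Numeric Abel transform via the substitution r^2 = y^2 + u^2, which
# turns F(y) = 2 int_y^inf f(r) r dr / sqrt(r^2 - y^2)
# into        F(y) = 2 int_0^inf f(sqrt(y^2 + u^2)) du.
import numpy as np

def abel_transform(f, y, u_max=10.0, n=200_000):
    u, du = np.linspace(0.0, u_max, n, retstep=True)
    return 2.0 * np.sum(f(np.sqrt(y**2 + u**2))) * du

f = lambda r: np.exp(-r**2)
for y in (0.0, 0.5, 1.0):
    print(abel_transform(f, y), np.sqrt(np.pi) * np.exp(-y**2))
# The numeric values closely match sqrt(pi) * exp(-y^2).
```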
The Abel transform may be extended to higher dimensions. Of particular interest is the extension to three dimensions. If we have an axially symmetric function f(ρ, z), where ρ2 = x2 + y2 is the cylindrical radius, then we may want to know the projection of that function onto a plane parallel to the z axis. Without loss of generality, we can take that plane to be the yz plane, so that
$F(y,z) = \int_{-\infty}^{\infty} f(\rho, z)\, dx = 2\int_y^{\infty} \frac{f(\rho,z)\,\rho}{\sqrt{\rho^2 - y^2}}\, d\rho,$
which is just the Abel transform of f(ρ, z) in ρ and y.
A particular type of axial symmetry is spherical symmetry. In this case, we have a function f(r), where r2 = x2 + y2 + z2.
The projection onto, say, the yz plane will then be circularly symmetric and expressible as F(s), where s2 = y2 + z2. Carrying out the integration:
|
https://en.wikipedia.org/wiki/Retail%20Price%20Index
|
In the United Kingdom, the Retail Prices Index or Retail Price Index (RPI) is a measure of inflation published monthly by the Office for National Statistics. It measures the change in the cost of a representative sample of retail goods and services.
As the RPI was held not to meet international statistical standards, since 2013, the Office for National Statistics no longer classifies it as a "national statistic", emphasising the Consumer Price Index instead. However, as of 2018, the UK Treasury still uses the RPI measure of inflation for various index-linked tax rises.
History
RPI was first introduced in 1956, replacing the previous Interim Index of Retail Prices that had been in use since June 1947. It was once the principal official measure of inflation. It has been superseded in that regard by the Consumer Price Index (CPI).
The RPI is still used by the government as a base for various purposes, such as the amounts payable on index-linked securities, including index-linked gilts, and social housing rent increases. Many employers also use it as a starting point in wage negotiation. Since 2003, it is no longer used by the government for the inflation target for the Bank of England's Monetary Policy Committee nor, from April 2011, as the basis for the indexation of pensions of former public sector employees. Currently, the UK state pension is indexed by the highest of the increase in average earnings, CPI or 2.5% ("the triple lock").
The highest annual inflation since the introduction of the RPI came in June 1975, with an increase in retail prices of 26.9% from a year earlier. By 1978, this had fallen to less than 10%, but it rose again towards 20% over the following two years before falling again. By 1982, it had fallen below 10% and a year later was down to 4%, remaining low for several years until approaching double figures again by 1990. Aided by a recession in the early 1990s, increased interest rates brought inflation down again to an even lower level.
From March to October 2009, the change in RPI measured over a 12-month period was negative, indicating an overall annual reduction in prices, for the first time since 1960. The change in RPI in the 12 months ending in April 2009, at −1.2%, was the lowest since the index started in 1948.
Housing associations lobbied the government to allow them to freeze rents at current levels rather than reduce them in line with the RPI, but the Treasury concluded that rents should follow RPI down as far as −2% per annum, leading to savings in housing benefit.
In February 2011, annual RPI inflation jumped to 5.1% putting pressure on the Bank of England to raise interest rates despite disappointing projected GDP growth of only 1.6% in 2011. The September 2011 figure of 5.6%, the highest for 20 years, was described by the Daily Telegraph as "shockingly bad".
After a thorough review, in 2012 the National Statistician's Consumer Prices Advisory Committee (CPAC) determined that, due to the use of the Carli formula, the RPI does not meet international statistical standards.
|
https://en.wikipedia.org/wiki/Coefficient%20of%20determination
|
In statistics, the coefficient of determination, denoted R2 or r2 and pronounced "R squared", is the proportion of the variation in the dependent variable that is predictable from the independent variable(s).
It is a statistic used in the context of statistical models whose main purpose is either the prediction of future outcomes or the testing of hypotheses, on the basis of other related information. It provides a measure of how well observed outcomes are replicated by the model, based on the proportion of total variation of outcomes explained by the model.
There are several definitions of R2 that are only sometimes equivalent. One class of such cases includes that of simple linear regression where r2 is used instead of R2. When only an intercept is included, then r2 is simply the square of the sample correlation coefficient (i.e., r) between the observed outcomes and the observed predictor values. If additional regressors are included, R2 is the square of the coefficient of multiple correlation. In both such cases, the coefficient of determination normally ranges from 0 to 1.
There are cases where R2 can yield negative values. This can arise when the predictions that are being compared to the corresponding outcomes have not been derived from a model-fitting procedure using those data. Even if a model-fitting procedure has been used, R2 may still be negative, for example when linear regression is conducted without including an intercept, or when a non-linear function is used to fit the data. In cases where negative values arise, the mean of the data provides a better fit to the outcomes than do the fitted function values, according to this particular criterion.
The coefficient of determination can be more (intuitively) informative than MAE, MAPE, MSE, and RMSE in regression analysis evaluation, as the former can be expressed as a percentage, whereas the latter measures have arbitrary ranges. It also proved more robust for poor fits compared to SMAPE on the test datasets in the cited study.
When evaluating the goodness-of-fit of simulated (Ypred) vs. measured (Yobs) values, it is not appropriate to base this on the R2 of the linear regression (i.e., Yobs= m·Ypred + b). The R2 quantifies the degree of any linear correlation between Yobs and Ypred, while for the goodness-of-fit evaluation only one specific linear correlation should be taken into consideration: Yobs = 1·Ypred + 0 (i.e., the 1:1 line).
Definitions
A data set has n values marked y1,...,yn (collectively known as yi or as a vector y = [y1,...,yn]T), each associated with a fitted (or modeled, or predicted) value f1,...,fn (known as fi, or sometimes ŷi, as a vector f).
Define the residuals as ei = yi − fi (forming a vector e).
If $\bar{y}$ is the mean of the observed data:
$\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i,$
then the variability of the data set can be measured with two sums of squares formulas:
The sum of squares of residuals, also called the residual sum of squares: $SS_{\text{res}} = \sum_i (y_i - f_i)^2 = \sum_i e_i^2$
The total sum of squares (proportional to the variance of the data): $SS_{\text{tot}} = \sum_i (y_i - \bar{y})^2$
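A minimal Python sketch computing the common definition $R^2 = 1 - SS_{\text{res}}/SS_{\text{tot}}$ from these two sums (the data values below are illustrative):

```python
# R^2 from observations y and fitted values f, using the two sums of
# squares defined above.
import numpy as np

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # observed outcomes
f = np.array([1.1, 1.9, 3.2, 3.9, 4.8])       # fitted/predicted values

ss_res = np.sum((y - f) ** 2)                 # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)          # total sum of squares
r2 = 1.0 - ss_res / ss_tot
print(r2)                                     # ~0.989
```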
|
https://en.wikipedia.org/wiki/Two-sided%20Laplace%20transform
|
In mathematics, the two-sided Laplace transform or bilateral Laplace transform is an integral transform equivalent to probability's moment generating function. Two-sided Laplace transforms are closely related to the Fourier transform, the Mellin transform, the Z-transform and the ordinary or one-sided Laplace transform. If f(t) is a real- or complex-valued function of the real variable t defined for all real numbers, then the two-sided Laplace transform is defined by the integral
$\mathcal{B}\{f\}(s) = F(s) = \int_{-\infty}^{\infty} e^{-st} f(t)\, dt.$
The integral is most commonly understood as an improper integral, which converges if and only if both integrals
$\int_0^{\infty} e^{-st} f(t)\, dt \quad\text{and}\quad \int_{-\infty}^{0} e^{-st} f(t)\, dt$
exist. There seems to be no generally accepted notation for the two-sided transform; the $\mathcal{B}$ used here recalls "bilateral". The two-sided transform used by some authors is
$\{\mathcal{T}f\}(s) = s\,\mathcal{B}\{f\}(s) = s\int_{-\infty}^{\infty} e^{-st} f(t)\, dt.$
In pure mathematics the argument t can be any variable, and Laplace transforms are used to study how differential operators transform the function.
In science and engineering applications, the argument t often represents time (in seconds), and the function f(t) often represents a signal or waveform that varies with time. In these cases, the signals are transformed by filters, which work like a mathematical operator, but with a restriction. They have to be causal, which means that the output at a given time t cannot depend on inputs at later times.
In population ecology, the argument t often represents spatial displacement in a dispersal kernel.
When working with functions of time, f(t) is called the time domain representation of the signal, while F(s) is called the s-domain (or Laplace domain) representation. The inverse transformation then represents a synthesis of the signal as the sum of its frequency components taken over all frequencies, whereas the forward transformation represents the analysis of the signal into its frequency components.
Relationship to the Fourier transform
The Fourier transform can be defined in terms of the two-sided Laplace transform:
$\mathcal{F}\{f\}(\omega) = \mathcal{B}\{f\}(i\omega).$
Note that definitions of the Fourier transform differ, and in particular
$\mathcal{F}\{f\}(\omega) = \frac{1}{\sqrt{2\pi}}\,\mathcal{B}\{f\}(i\omega)$
is often used instead. In terms of the Fourier transform, we may also obtain the two-sided Laplace transform, as
$\mathcal{B}\{f\}(s) = \mathcal{F}\{f\}(-is).$
The Fourier transform is normally defined so that it exists for real values; the above definition defines the image in a strip which may not include the real axis where the Fourier transform is supposed to converge.
This is then why Laplace transforms retain their value in control theory and signal processing: the convergence of a Fourier transform integral within its domain only means that a linear, shift-invariant system described by it is stable or critical. The Laplace one on the other hand will somewhere converge for every impulse response which is at most exponentially growing, because it involves an extra term which can be taken as an exponential regulator. Since there are no superexponentially growing linear feedback networks, Laplace transform based analysis and solution of linear, shift-invariant systems, takes its
|
https://en.wikipedia.org/wiki/Local%20boundedness
|
In mathematics, a function is locally bounded if it is bounded around every point. A family of functions is locally bounded if for any point in their domain all the functions are bounded around that point and by the same number.
Locally bounded function
A real-valued or complex-valued function $f$ defined on some topological space $X$ is called locally bounded if for any $x_0 \in X$ there exists a neighborhood $A$ of $x_0$ such that $f(A)$ is a bounded set. That is, for some number $M > 0$ one has
$|f(x)| \le M \quad\text{for all } x \in A.$
In other words, for each $x_0$ one can find a constant, depending on $x_0$, which is larger than all the values of the function in the neighborhood of $x_0$. Compare this with a bounded function, for which the constant does not depend on $x_0$. Obviously, if a function is bounded then it is locally bounded. The converse is not true in general (see below).
This definition can be extended to the case when takes values in some metric space Then the inequality above needs to be replaced with
where is some point in the metric space. The choice of does not affect the definition; choosing a different will at most increase the constant for which this inequality is true.
Examples
The function $f : \mathbb{R} \to \mathbb{R}$ defined by $f(x) = \frac{1}{x^2+1}$ is bounded, because $0 \le f(x) \le 1$ for all $x$. Therefore, it is also locally bounded.
The function $f : \mathbb{R} \to \mathbb{R}$ defined by $f(x) = 2x$ is not bounded, as it becomes arbitrarily large. However, it is locally bounded, because for each $x_0$ it is bounded in the neighborhood $(x_0 - 1, x_0 + 1)$.
The function $f : \mathbb{R} \to \mathbb{R}$ defined by $f(x) = \frac{1}{x}$ for $x \neq 0$ and $f(0) = 0$ is neither bounded nor locally bounded. In any neighborhood of 0 this function takes values of arbitrarily large magnitude.
Any continuous function is locally bounded. Here is a proof for functions of a real variable. Let $f : U \to \mathbb{R}$ be continuous, where $U \subseteq \mathbb{R}$, and we will show that $f$ is locally bounded at $x_0$ for all $x_0 \in U.$ Taking ε = 1 in the definition of continuity, there exists $\delta > 0$ such that $|f(x) - f(x_0)| < 1$ for all $x$ with $|x - x_0| < \delta$. Now by the triangle inequality, $|f(x)| \le |f(x_0)| + 1$, which means that $f$ is locally bounded at $x_0$ (taking $M = |f(x_0)| + 1$ and the neighborhood $(x_0 - \delta, x_0 + \delta)$). This argument generalizes easily to when the domain of $f$ is any topological space.
The converse of the above result is not true, however; that is, a discontinuous function may be locally bounded. For example, consider the function $f$ given by $f(0) = 1$ and $f(x) = 0$ for all $x \neq 0.$ Then $f$ is discontinuous at 0 but is locally bounded; it is locally constant apart from at zero, where we can take $M = 1$ and the neighborhood $(-1, 1)$, for example.
Locally bounded family
A set (also called a family) U of real-valued or complex-valued functions defined on some topological space $X$ is called locally bounded if for any $x_0 \in X$ there exists a neighborhood $A$ of $x_0$ and a positive number $M > 0$ such that
$|f(x)| \le M$
for all $x \in A$ and $f \in U.$ In other words, all the functions in the family must be locally bounded, and around each point they need to be bounded by the same constant.
This definition can also be extended to the case when the functions in the family U take values in some metric space, by again replacing the absolute value with the distance function.
Examples
The family of functions where is locally bounded. Indeed, if is a real number, one can choose the neighborh
|
https://en.wikipedia.org/wiki/Greek%20mathematics
|
Greek mathematics refers to mathematics texts and ideas stemming from the Archaic through the Hellenistic and Roman periods, mostly from the late 7th century BC to the 6th century AD, around the shores of the Mediterranean. Greek mathematicians lived in cities spread over the entire region, from Anatolia to Italy and North Africa, but were united by Greek culture and the Greek language. The development of mathematics as a theoretical discipline and the use of deductive reasoning in proofs is an important difference between Greek mathematics and those of preceding civilizations.
Origins and etymology
Greek mathēmatikē ("mathematics") derives from the Ancient Greek máthēma ("that which is learnt"), from the verb manthanein, "to learn". Strictly speaking, a máthēma could be any branch of learning, or anything learnt; however, since antiquity certain mathēmata (mainly arithmetic, geometry, astronomy, and harmonics) were granted special status.
The origins of Greek mathematics are not well documented. The earliest advanced civilizations in Greece and Europe were the Minoan and later Mycenaean civilizations, both of which flourished during the 2nd millennium BC. While these civilizations possessed writing and were capable of advanced engineering, including four-story palaces with drainage and beehive tombs, they left behind no mathematical documents.
Though no direct evidence is available, it is generally thought that the neighboring Babylonian and Egyptian civilizations had an influence on the younger Greek tradition. Unlike the flourishing of Greek literature in the span of 800 to 600 BC, not much is known about Greek mathematics in this early period—nearly all of the information was passed down through later authors, beginning in the mid-4th century BC.
Archaic and Classical periods
Greek mathematics allegedly began with Thales of Miletus (c. 624–548 BC). Very little is known about his life, although it is generally agreed that he was one of the Seven Wise Men of Greece. According to Proclus, he traveled to Babylon, from where he learned mathematics and other subjects, coming up with the proof of what is now called Thales' Theorem.
An equally enigmatic figure is Pythagoras of Samos (c. 580–500 BC), who supposedly visited Egypt and Babylon, and ultimately settled in Croton, Magna Graecia, where he started a kind of brotherhood. Pythagoreans supposedly believed that "all is number" and were keen on looking for mathematical relations between numbers and things. Pythagoras himself was given credit for many later discoveries, including the construction of the five regular solids. However, Aristotle refused to attribute anything specifically to Pythagoras and only discussed the work of the Pythagoreans as a group.
Almost half of the material in Euclid's Elements is customarily attributed to the Pythagoreans, including the discovery of irrationals, attributed to Hippasus (c. 530–450 BC) and Theodorus (fl. 450 BC). The greatest mathematician associated with the group, however, may have been Archytas.
|
https://en.wikipedia.org/wiki/Logarithmically%20convex%20function
|
In mathematics, a function f is logarithmically convex or superconvex if $\log \circ f$, the composition of the logarithm with f, is itself a convex function.
Definition
Let $X$ be a convex subset of a real vector space, and let $f : X \to \mathbb{R}$ be a function taking non-negative values. Then $f$ is:
Logarithmically convex if $\log \circ f$ is convex, and
Strictly logarithmically convex if $\log \circ f$ is strictly convex.
Here we interpret $\log 0$ as $-\infty$.
Explicitly, $f$ is logarithmically convex if and only if, for all $x_1, x_2 \in X$ and all $t \in [0, 1]$, the two following equivalent conditions hold:
$\log f(tx_1 + (1-t)x_2) \le t \log f(x_1) + (1-t) \log f(x_2),$
$f(tx_1 + (1-t)x_2) \le f(x_1)^t f(x_2)^{1-t}.$
Similarly, $f$ is strictly logarithmically convex if and only if, in the above two expressions, strict inequality holds for all $t \in (0, 1)$ and $x_1 \neq x_2$.
The above definition permits $f$ to be zero, but if $f$ is logarithmically convex and vanishes anywhere in $X$, then it vanishes everywhere in the interior of $X$.
Equivalent conditions
If is a differentiable function defined on an interval , then is logarithmically convex if and only if the following condition holds for all and in :
This is equivalent to the condition that, whenever and are in and ,
Moreover, is strictly logarithmically convex if and only if these inequalities are always strict.
If $f$ is twice differentiable, then it is logarithmically convex if and only if, for all $x$ in $I$,
$f''(x)\, f(x) \ge f'(x)^2.$
If the inequality is always strict, then $f$ is strictly logarithmically convex. However, the converse is false: It is possible that $f$ is strictly logarithmically convex and that, for some $x$, we have $f''(x)\, f(x) = f'(x)^2$. For example, if $f(x) = \exp(x^4)$, then $f$ is strictly logarithmically convex, but $f''(0)\, f(0) = f'(0)^2 = 0$.
Furthermore, is logarithmically convex if and only if is convex for all .
Sufficient conditions
If $f_1, \ldots, f_n$ are logarithmically convex, and if $w_1, \ldots, w_n$ are non-negative real numbers, then $f_1^{w_1} \cdots f_n^{w_n}$ is logarithmically convex.
If $\{f_i\}_{i \in I}$ is any family of logarithmically convex functions, then $g = \sup_{i \in I} f_i$ is logarithmically convex.
If $f$ is convex and $g$ is logarithmically convex and non-decreasing, then $g \circ f$ is logarithmically convex.
Properties
A logarithmically convex function f is a convex function since it is the composite of the increasing convex function $\exp$ and the function $\log \circ f$, which is by definition convex. However, being logarithmically convex is a strictly stronger property than being convex. For example, the squaring function $f(x) = x^2$ is convex, but its logarithm $\log f(x) = 2\log|x|$ is not. Therefore the squaring function is not logarithmically convex.
Examples
$f(x) = \exp(|x|^p)$ is logarithmically convex when $p \ge 1$ and strictly logarithmically convex when $p > 1$.
$f(x) = \frac{1}{x^p}$ is strictly logarithmically convex on $(0, \infty)$ for all $p > 0.$
Euler's gamma function is strictly logarithmically convex when restricted to the positive real numbers. In fact, by the Bohr–Mollerup theorem, this property can be used to characterize Euler's gamma function among the possible extensions of the factorial function to real arguments.
See also
Logarithmically concave function
Notes
References
John B. Conway. Functions of One Complex Variable I, second edition. Springer-Verlag, 1995. .
.
.
Real analysis
|
https://en.wikipedia.org/wiki/Donato%20Acciaioli
|
Donato Acciaioli (15 March 142828 August 1478) was an Italian scholar and statesman. He was known for his learning, especially in Greek and mathematics, and for his services to his native state, the Republic of Florence.
Biography
He was born in Florence, Italy. He was educated under the patronage or guidance of Jacopo Piccolomini-Ammannati (1422–1479), who subsequently was named cardinal. He also putatively gained his knowledge of the classics from Lionardo and Carlo Marsuppini (1399–1453) and from the refugee scholar from Byzantium, Giovanni Argiropolo.
Having previously been entrusted with several important embassies, in 1473 he became Gonfalonier of Florence, one of the nine citizens selected by drawing lots every two months, who formed the government. He died at Milan in 1478, when on his way to Paris to ask the aid of Louis XI on behalf of the Florentines against Pope Sixtus IV. His body was taken back to Florence and buried in the church of the Carthusian order at the public expense, and his daughters were endowed by his fellow-citizens, since he had little in terms of wealth.
He wrote Latin translations of some of Plutarch's Lives (Florence, 1478); Commentaries on Aristotle's Ethics, Politics, Physics, and De anima; the lives of Hannibal, Scipio and Charlemagne as well as the biography of the grand seneschal of the Kingdom of Naples, Niccolò Acciaioli by Matteo Palmieri. In the work on Aristotle he had the cooperation of his master John Argyropulus.
See also
Zanobi Acciaioli, Librarian of the Vatican, of the same family
References
1429 births
1478 deaths
Politicians from Florence
Italian classical scholars
Greek–Latin translators
15th-century people from the Republic of Florence
Italian mathematicians
|
https://en.wikipedia.org/wiki/Chebyshev%20distance
|
In mathematics, Chebyshev distance (or Tchebychev distance), maximum metric, or L∞ metric is a metric defined on a vector space where the distance between two vectors is the greatest of their differences along any coordinate dimension. It is named after Pafnuty Chebyshev.
It is also known as chessboard distance, since in the game of chess the minimum number of moves needed by a king to go from one square on a chessboard to another equals the Chebyshev distance between the centers of the squares, if the squares have side length one, as represented in 2-D spatial coordinates with axes aligned to the edges of the board. For example, the Chebyshev distance between f6 and e2 equals 4.
Definition
The Chebyshev distance between two vectors or points x and y, with standard coordinates $x_i$ and $y_i$, respectively, is
$D_{\mathrm{Chebyshev}}(x, y) := \max_i |x_i - y_i|.$
This equals the limit of the Lp metrics:
$\lim_{p \to \infty} \left( \sum_{i=1}^{n} |x_i - y_i|^p \right)^{1/p},$
hence it is also known as the L∞ metric.
Mathematically, the Chebyshev distance is a metric induced by the supremum norm or uniform norm. It is an example of an injective metric.
In two dimensions, i.e. plane geometry, if the points p and q have Cartesian coordinates $(x_1, y_1)$ and $(x_2, y_2)$, their Chebyshev distance is
$D_{\mathrm{Chebyshev}} = \max(|x_2 - x_1|, |y_2 - y_1|).$
Under this metric, a circle of radius r, which is the set of points with Chebyshev distance r from a center point, is a square whose sides have the length 2r and are parallel to the coordinate axes.
On a chessboard, where one is using a discrete Chebyshev distance, rather than a continuous one, the circle of radius r is a square of side lengths 2r, measuring from the centers of squares, and thus each side contains 2r+1 squares; for example, the circle of radius 1 on a chess board is a 3×3 square.
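The definition and the Lp limit are short to check numerically. A minimal Python sketch; treating f6 and e2 as the coordinate pairs (6, 6) and (5, 2) is an assumption about file/rank numbering:

def chebyshev(x, y):
    # Greatest coordinate-wise difference.
    return max(abs(a - b) for a, b in zip(x, y))

def minkowski(x, y, p):
    # Lp distance; approaches the Chebyshev distance as p grows.
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

f6, e2 = (6, 6), (5, 2)
print(chebyshev(f6, e2))             # 4, the king-move distance
for p in (1, 2, 8, 64):
    print(p, minkowski(f6, e2, p))   # tends to 4 as p increases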
Properties
In one dimension, all Lp metrics are equal – they are just the absolute value of the difference.
The two dimensional Manhattan distance has "circles", i.e. level sets, in the form of squares with sides of length $\sqrt{2}\,r$ oriented at an angle of π/4 (45°) to the coordinate axes, so the planar Chebyshev distance can be viewed as equivalent by rotation and scaling to (i.e. a linear transformation of) the planar Manhattan distance.
However, this geometric equivalence between L1 and L∞ metrics does not generalize to higher dimensions. A sphere formed using the Chebyshev distance as a metric is a cube with each face perpendicular to one of the coordinate axes, but a sphere formed using Manhattan distance is an octahedron: these are dual polyhedra, but among cubes, only the square (and 1-dimensional line segment) are self-dual polytopes. Nevertheless, it is true that in all finite-dimensional spaces the L1 and L∞ metrics are mathematically dual to each other.
On a grid (such as a chessboard), the points at a Chebyshev distance of 1 of a point are the Moore neighborhood of that point.
The Chebyshev distance is the limiting case of the order-$p$ Minkowski distance, when $p$ reaches infinity.
Applications
The Chebyshev distance is sometimes used in warehouse logistics, as it effectively measures the time an overhea
|
https://en.wikipedia.org/wiki/Possibility%20theory
|
Possibility theory is a mathematical theory for dealing with certain types of uncertainty and is an alternative to probability theory. It uses measures of possibility and necessity between 0 and 1, ranging from impossible to possible and unnecessary to necessary, respectively. Professor Lotfi Zadeh first introduced possibility theory in 1978 as an extension of his theory of fuzzy sets and fuzzy logic. Didier Dubois and Henri Prade further contributed to its development. Earlier, in the 1950s, economist G. L. S. Shackle proposed the min/max algebra to describe degrees of potential surprise.
Formalization of possibility
For simplicity, assume that the universe of discourse Ω is a finite set. A possibility measure is a function $\operatorname{pos}$ from $2^\Omega$ to [0, 1] such that:
Axiom 1: $\operatorname{pos}(\varnothing) = 0$
Axiom 2: $\operatorname{pos}(\Omega) = 1$
Axiom 3: $\operatorname{pos}(U \cup V) = \max(\operatorname{pos}(U), \operatorname{pos}(V))$ for any disjoint subsets $U$ and $V$.
It follows that, like probability on finite probability spaces, the possibility measure is determined by its behavior on singletons:
$\operatorname{pos}(U) = \max_{\omega \in U} \operatorname{pos}(\{\omega\}),$ provided $U$ is non-empty.
Axiom 1 can be interpreted as the assumption that Ω is an exhaustive description of future states of the world, because it means that no belief weight is given to elements outside Ω.
Axiom 2 could be interpreted as the assumption that the evidence from which $\operatorname{pos}$ was constructed is free of any contradiction. Technically, it implies that there is at least one element in Ω with possibility 1.
Axiom 3 corresponds to the additivity axiom in probabilities. However there is an important practical difference. Possibility theory is computationally more convenient because Axioms 1–3 imply that:
$\operatorname{pos}(U \cup V) = \max(\operatorname{pos}(U), \operatorname{pos}(V))$
for any subsets $U$ and $V$, whether or not they are disjoint.
Because one can know the possibility of the union from the possibility of each component, it can be said that possibility is compositional with respect to the union operator. Note however that it is not compositional with respect to the intersection operator. Generally:
$\operatorname{pos}(U \cap V) \leq \min(\operatorname{pos}(U), \operatorname{pos}(V)).$
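Because a possibility measure on a finite universe is determined by its values on singletons, it is easy to sketch in code. A minimal Python illustration; the singleton values are hypothetical, with one element given possibility 1 so that Axiom 2 holds:

pos_singleton = {'a': 0.2, 'b': 1.0, 'c': 0.7}

def pos(event):
    # Possibility of an event is the max over its singletons (0 for the empty set).
    return max((pos_singleton[w] for w in event), default=0.0)

def nec(event):
    # Necessity is one minus the possibility of the complement.
    complement = set(pos_singleton) - set(event)
    return 1.0 - pos(complement)

U, V = {'a', 'b'}, {'a', 'c'}
print(pos(U | V) == max(pos(U), pos(V)))   # True: compositional for unions
print(pos(U & V), min(pos(U), pos(V)))     # 0.2 vs 0.7: only an inequality for intersections
print(nec(U), pos(U))                      # 0.3 vs 1.0: necessity never exceeds possibility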
When Ω is not finite, Axiom 3 can be replaced by:
For all index sets $I$, if the subsets $U_i$, $i \in I$, are pairwise disjoint,
$\operatorname{pos}\left(\bigcup_{i \in I} U_i\right) = \sup_{i \in I} \operatorname{pos}(U_i).$
Necessity
Whereas probability theory uses a single number, the probability, to describe how likely an event is to occur, possibility theory uses two concepts, the possibility and the necessity of the event. For any set $U$, the necessity measure is defined by
$\operatorname{nec}(U) = 1 - \operatorname{pos}(\overline{U}).$
In the above formula, $\overline{U}$ denotes the complement of $U$, that is the elements of $\Omega$ that do not belong to $U$. It is straightforward to show that:
$\operatorname{nec}(U) \leq \operatorname{pos}(U)$ for any $U$
and that:
$\operatorname{nec}(U) + \operatorname{nec}(\overline{U}) \leq 1.$
Note that contrary to probability theory, possibility is not self-dual. That is, for any event $U$, we only have the inequality:
$\operatorname{pos}(U) + \operatorname{pos}(\overline{U}) \geq 1.$
However, the following duality rule holds:
For any event $U$, either $\operatorname{pos}(U) = 1$, or $\operatorname{nec}(U) = 0$.
Accordingly, beliefs about an event can be represented by a number and a bit.
Interpretation
There are four cases that can be interpreted as follows:
$\operatorname{nec}(U) = 1$ means that $U$ is necessary. $U$ is certainly true. It implies that $\operatorname{pos}(U) = 1$.
$\operatorname{pos}(U) = 0$ means that $U$ is impossible. $U$ is certainly false. It implies that $\operatorname{nec}(U) = 0$.
$\operatorname{pos}(U) = 1$ means that $U$ is possible. I would not be surprised at all if $U$ occurs. It leaves $\operatorname{nec}(U)$ unconstrained.
$\operatorname{nec}(U) = 0$ means that $U$ is unnecessary
|
https://en.wikipedia.org/wiki/MSMS
|
MSMS may refer to:
Master of Science in Medical Sciences
Tandem mass spectrometry (MS/MS)
Michigan State Medical Society
Miami Springs Middle School
Mississippi School for Mathematics and Science
Master of Science in Management Studies
Making Science Make Sense, an outreach program from Bayer Corporation
MSMs, or men who have sex with men
See also
MS2 (disambiguation)
MSM (disambiguation)
MS (disambiguation)
|
https://en.wikipedia.org/wiki/Variational
|
Variational may refer to:
Calculus of variations, a field of mathematical analysis that deals with maximizing or minimizing functionals
Variational method (quantum mechanics), a way of finding approximations to the lowest energy eigenstate or ground state in quantum physics
Variational Bayesian methods, a family of techniques for approximating integrals in Bayesian inference and machine learning
Variational properties, properties of an organism relating to the production of variation among its offspring in evolutionary biology
Variationist sociolinguistics or variational sociolinguistics, the study of variation in language use among speakers or groups of speakers
See also
List of variational topics in mathematics and physics
Variation (disambiguation)
|
https://en.wikipedia.org/wiki/Coplanarity
|
In geometry, a set of points in space are coplanar if there exists a geometric plane that contains them all. For example, three points are always coplanar, and if the points are distinct and non-collinear, the plane they determine is unique. However, a set of four or more distinct points will, in general, not lie in a single plane.
Two lines in three-dimensional space are coplanar if there is a plane that includes them both. This occurs if the lines are parallel, or if they intersect each other. Two lines that are not coplanar are called skew lines.
Distance geometry provides a solution technique for the problem of determining whether a set of points is coplanar, knowing only the distances between them.
Properties in three dimensions
In three-dimensional space, two linearly independent vectors with the same initial point determine a plane through that point. Their cross product is a normal vector to that plane, and any vector orthogonal to this cross product through the initial point will lie in the plane. This leads to the following coplanarity test using a scalar triple product:
Four distinct points, $x_1, x_2, x_3, x_4$, are coplanar if and only if
$(x_2 - x_1) \cdot [(x_3 - x_1) \times (x_4 - x_1)] = 0,$
which is also equivalent to
$(x_1 - x_4) \cdot [(x_2 - x_4) \times (x_3 - x_4)] = 0.$
If three vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$ are coplanar and $\mathbf{a} \cdot \mathbf{b} = 0$ (i.e., $\mathbf{a}$ and $\mathbf{b}$ are orthogonal), then
$\mathbf{c} = (\mathbf{c} \cdot \hat{\mathbf{a}})\hat{\mathbf{a}} + (\mathbf{c} \cdot \hat{\mathbf{b}})\hat{\mathbf{b}},$
where $\hat{\mathbf{a}}$ denotes the unit vector in the direction of $\mathbf{a}$. That is, the vector projections of $\mathbf{c}$ on $\mathbf{a}$ and of $\mathbf{c}$ on $\mathbf{b}$ add to give the original $\mathbf{c}$.
Coplanarity of points in n dimensions whose coordinates are given
Since three or fewer points are always coplanar, the problem of determining when a set of points are coplanar is generally of interest only when there are at least four points involved. In the case that there are exactly four points, several ad hoc methods can be employed, but a general method that works for any number of points uses vector methods and the property that a plane is determined by two linearly independent vectors.
In an $n$-dimensional space where $n \geq 3$, a set of $k$ points $\{p_1, p_2, \ldots, p_k\}$ are coplanar if and only if the matrix of their relative differences, that is, the matrix whose columns (or rows) are the vectors $p_2 - p_1, p_3 - p_1, \ldots, p_k - p_1$, is of rank 2 or less.
For example, given four points $p_1, p_2, p_3, p_4$, if the matrix
$\begin{bmatrix} p_2 - p_1 & p_3 - p_1 & p_4 - p_1 \end{bmatrix}$
is of rank 2 or less, the four points are coplanar.
In the special case of a plane that contains the origin, the property can be simplified in the following way:
A set of points and the origin are coplanar if and only if the matrix of the coordinates of the points is of rank 2 or less.
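The rank criterion translates directly into code. A minimal sketch with NumPy; the tolerance argument is a practical floating-point choice, not part of the mathematical statement:

import numpy as np

def are_coplanar(points, tol=1e-9):
    # Points given as rows; coplanar iff the relative differences have rank <= 2.
    p = np.asarray(points, dtype=float)
    diffs = p[1:] - p[0]
    return np.linalg.matrix_rank(diffs, tol=tol) <= 2

print(are_coplanar([(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]))  # True: all in the z = 0 plane
print(are_coplanar([(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # False: a tetrahedron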
Geometric shapes
A skew polygon is a polygon whose vertices are not coplanar. Such a polygon must have at least four vertices; there are no skew triangles.
A polyhedron that has positive volume has vertices that are not all coplanar.
See also
Collinearity
Plane of incidence
References
External links
Planes (geometry)
|
https://en.wikipedia.org/wiki/Scalar%20projection
|
In mathematics, the scalar projection of a vector $\mathbf{a}$ on (or onto) a vector $\mathbf{b}$, also known as the scalar resolute of $\mathbf{a}$ in the direction of $\mathbf{b}$, is given by:
$s = \|\mathbf{a}\| \cos\theta = \mathbf{a} \cdot \hat{\mathbf{b}},$
where the operator $\cdot$ denotes a dot product, $\hat{\mathbf{b}}$ is the unit vector in the direction of $\mathbf{b}$, $\|\mathbf{a}\|$ is the length of $\mathbf{a}$, and $\theta$ is the angle between $\mathbf{a}$ and $\mathbf{b}$.
The term scalar component refers sometimes to scalar projection, as, in Cartesian coordinates, the components of a vector are the scalar projections in the directions of the coordinate axes.
The scalar projection is a scalar, equal to the length of the orthogonal projection of $\mathbf{a}$ on $\mathbf{b}$, with a negative sign if the projection has an opposite direction with respect to $\mathbf{b}$.
Multiplying the scalar projection of $\mathbf{a}$ on $\mathbf{b}$ by $\hat{\mathbf{b}}$ converts it into the above-mentioned orthogonal projection, also called the vector projection of $\mathbf{a}$ on $\mathbf{b}$.
Definition based on angle θ
If the angle $\theta$ between $\mathbf{a}$ and $\mathbf{b}$ is known, the scalar projection of $\mathbf{a}$ on $\mathbf{b}$ can be computed using
$s = \|\mathbf{a}\| \cos\theta$ ($= \|\mathbf{a}_1\|$ in the figure).
The formula above can be inverted to obtain the angle, θ.
Definition in terms of a and b
When $\theta$ is not known, the cosine of $\theta$ can be computed in terms of $\mathbf{a}$ and $\mathbf{b}$ by the following property of the dot product $\mathbf{a} \cdot \mathbf{b}$:
$\cos\theta = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\| \, \|\mathbf{b}\|}.$
By this property, the definition of the scalar projection becomes:
$s = \|\mathbf{a}\| \cos\theta = \|\mathbf{a}\| \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\| \, \|\mathbf{b}\|} = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{b}\|}.$
Properties
The scalar projection has a negative sign if $90^\circ < \theta \leq 180^\circ$. It coincides with the length of the corresponding vector projection if the angle is smaller than 90°. More exactly, if the vector projection is denoted $\mathbf{a}_1$ and its length $\|\mathbf{a}_1\|$:
$s = \|\mathbf{a}_1\|$ if $0^\circ \leq \theta \leq 90^\circ$,
$s = -\|\mathbf{a}_1\|$ if $90^\circ < \theta \leq 180^\circ$.
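The dot-product formula makes the computation one line of code. A minimal NumPy sketch; the sample vectors are arbitrary:

import numpy as np

def scalar_projection(a, b):
    # s = a . b / |b|; negative when the angle exceeds 90 degrees.
    b = np.asarray(b, dtype=float)
    return np.dot(a, b) / np.linalg.norm(b)

a, b = np.array([3.0, 4.0]), np.array([1.0, 0.0])
s = scalar_projection(a, b)                # 3.0
vector_proj = s * b / np.linalg.norm(b)    # multiplying by the unit vector gives the vector projection
print(s, vector_proj)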
See also
Scalar product
Cross product
Vector projection
Sources
Dot products - www.mit.org
Scalar projection - Flexbooks.ck12.org
Scalar Projection & Vector Projection - medium.com
Lesson Explainer: Scalar Projection | Nagwa
Operations on vectors
|
https://en.wikipedia.org/wiki/Demographics%20of%20Serbia
|
Demographic features of the population of Serbia include vital statistics, ethnicity, religious affiliations, education level, health of the populace, and other aspects of the population.
History
Censuses in Serbia ordinarily take place every 10 years, organized by the Statistical Office of the Republic of Serbia. The Principality of Serbia conducted its first population census in 1834; subsequent censuses were conducted in 1841, 1843, 1846, 1850, 1854, 1859, 1863, 1866 and 1874. During the era of the Kingdom of Serbia, six censuses were conducted, in 1884, 1890, 1895, 1900, 1905 and 1910. During the Kingdom of Yugoslavia, censuses were conducted in 1921 and 1931; the census planned for 1941 was never conducted due to the outbreak of World War II. Socialist Yugoslavia conducted censuses in 1948, 1953, 1961, 1971, 1981, and 1991. The two most recent censuses were held in 2011 and 2022.
The years since the first 1834 Census saw frequent border changes of Serbia, first amidst the disintegration of the Ottoman Empire and Austria-Hungary, then subsequent formation and later disintegration of Yugoslavia and, finally, 2008 partially recognized independence of Kosovo which affected territorial scope in which all these censuses have been conducted.
Total fertility rate 1860–1949
The total fertility rate is the number of children born per woman. It is based on fairly good data for the entire period. Sources: Our World In Data and Gapminder Foundation.
Vital statistics
Source: Statistical Office of the Republic of Serbia Data for Serbia excluding Kosovo.
Current vital statistics
Birth statistics by districts
Birth rate by municipalities 1961–2020
Marriages and divorces
Data for Serbia excluding Kosovo.
Ethnic groups
Situated in the middle of the Balkans, Serbia is home to many different ethnic groups. According to the 2022 census, Serbs are the largest ethnic group in the country and constitute 80.6% of the population (86.6% if the undeclared and unknown categories are excluded). Hungarians are the largest ethnic minority in Serbia, concentrated predominately in northern Vojvodina and representing 2.8% of the country's population (3% if undeclared and unknown are excluded). Bosniaks are the second largest ethnic minority, mainly inhabiting the Sandžak region in the southwestern and southernmost parts of the country and representing 2.3% of the country's population (2.5% if undeclared and unknown are excluded). Romani people constitute 2% of the total population, or 2.1% if undeclared and unknown are not taken into account. Other minority groups include Albanians (0.9%), Slovaks and Croats (0.6%), Yugoslavs (0.4%), Romanians, Vlachs and Montenegrins (0.3%). The Chinese and Arabs are the only two significant immigrant minorities, with the former often using Serbia as a transit country on their way to Western Europe. In 2022, 140 thousand migrants arrived in Serbia from Russia, and th
|
https://en.wikipedia.org/wiki/Georg%20Hamel
|
Georg Karl Wilhelm Hamel (12 September 1877 – 4 October 1954) was a German mathematician with interests in mechanics, the foundations of mathematics and function theory.
Biography
Hamel was born in Düren, Rhenish Prussia. He studied at Aachen, Berlin, Göttingen, and Karlsruhe. His doctoral adviser was David Hilbert. He taught at Brünn in 1905, Aachen in 1912, and at the Technical University of Berlin in 1919. In 1927, Hamel studied the size of the key space for the Kryha encryption device. He was an Invited Speaker of the ICM in 1932 at Zurich and in 1936 at Oslo. He was the author of several important treatises on mechanics. He became a member of the Prussian Academy of Sciences in 1938 and the Bavarian Academy of Sciences in 1953. He died in Landshut, Bavaria.
Selected publications
("On the geometries in which the straight lines are the shortest", Hamel's doctoral dissertation on Hilbert's fourth problem. A version may be found in Mathematische Annalen 57, 1903.)
See also
Hamel basis
Hamel dimension
Cauchy's functional equation
Hilbert's fourth problem
References
1877 births
1954 deaths
19th-century German mathematicians
20th-century German mathematicians
Members of the Prussian Academy of Sciences
Modern cryptographers
People from the Rhine Province
RWTH Aachen University alumni
Academic staff of RWTH Aachen University
Humboldt University of Berlin alumni
University of Göttingen alumni
Karlsruhe Institute of Technology alumni
Academic staff of the Technical University of Berlin
German cryptographers
Fluid dynamicists
Linear algebraists
Members of the German Academy of Sciences at Berlin
|
https://en.wikipedia.org/wiki/Cohort%20%28statistics%29
|
In statistics, epidemiology, marketing and demography, a cohort is a group of subjects who share a defining characteristic (typically subjects who experienced a common event in a selected time period, such as birth or graduation).
Cohort data can often be more advantageous to demographers than period data. Because cohort data is honed to a specific time period, it is usually more accurate, since it can be tailored to retrieve custom data for a specific study.
In addition, cohort data is not affected by tempo effects, unlike period data. However, cohort data can be disadvantageous in the sense that collecting the data necessary for a cohort study can take a long time. Another disadvantage of cohort studies is that they can be extremely costly to carry out: since the study goes on for a long period of time, demographers often require substantial funding to sustain it.
Demography often contrasts cohort perspectives and period perspectives. For instance, the total cohort fertility rate is an index of the average completed family size for cohorts of women, but since it can only be known for women who have finished child-bearing, it cannot be measured for currently fertile women. It can be calculated as the sum of the cohort's age-specific fertility rates that obtain as it ages through time. In contrast, the total period fertility rate uses current age-specific fertility rates to calculate the completed family size for a notional woman, were she to experience these fertility rates through her life.
A study on a cohort is a cohort study.
Two important types of cohort studies are:
Prospective Cohort Study: In this type of study, exposure data (baseline data) are collected from the subjects recruited before development of the outcomes of interest. The subjects are then followed through time (future) to record when the subject develops the outcome of interest. Ways to follow up with subjects of the study include: phone interviews, face-to-face interviews, physical exams, medical/laboratory tests, and mail questionnaires. An example of a prospective cohort study would be a demographer measuring all the males born in the year 2018: the demographer would have to wait for the event to be over, that is, for the year 2018 to come to an end, in order to have all the necessary data.
Retrospective Cohort Study: Retrospective studies start with subjects that are at risk of having the outcome or disease of interest and identify the exposure by looking from where the subject is when the study starts back into the subject's past. Retrospective studies use records: clinical, educational, birth certificates, death certificates, etc., but this may be difficult because there may not be data for the study that is being initiated. These studies may involve multiple exposures, which may make them difficult to carry out. On the other hand, an example of a retrospective cohort study is, if
|
https://en.wikipedia.org/wiki/Order%20of%20a%20kernel
|
In statistics, the order of a kernel is the degree of the first non-zero moment of a kernel.
Definitions
The literature knows two major definitions of the order of a kernel:
Definition 1
Let $\ell \geq 1$ be an integer. Then, $K : \mathbb{R} \to \mathbb{R}$ is a kernel of order $\ell$ if the functions $u \mapsto u^j K(u)$, $j = 0, 1, \ldots, \ell$, are integrable and satisfy
$\int K(u)\,du = 1$, $\int u^j K(u)\,du = 0$ for $j = 1, \ldots, \ell - 1$, and $\int u^{\ell} K(u)\,du \neq 0$.
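As an illustration of Definition 1, the moments of the standard Gaussian kernel can be checked numerically. A minimal sketch, assuming SciPy is available; finite integration limits stand in for the whole real line since the Gaussian decays fast:

from math import exp, pi, sqrt
from scipy.integrate import quad

def gaussian_kernel(u):
    return exp(-u * u / 2) / sqrt(2 * pi)

def moment(j):
    # j-th moment of the kernel over (effectively) the real line.
    return quad(lambda u: u ** j * gaussian_kernel(u), -50, 50)[0]

print([round(moment(j), 6) for j in range(4)])
# approximately [1.0, 0.0, 1.0, 0.0]: the first non-zero moment beyond j = 0 is j = 2, so the order is 2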
Definition 2
References
Nonparametric statistics
|
https://en.wikipedia.org/wiki/Reduced%20form
|
In statistics, and particularly in econometrics, the reduced form of a system of equations is the result of solving the system for the endogenous variables. This gives the latter as functions of the exogenous variables, if any. In econometrics, the equations of a structural form model are estimated in their theoretically given form, while an alternative approach to estimation is to first solve the theoretical equations for the endogenous variables to obtain reduced form equations, and then to estimate the reduced form equations.
Let Y be the vector of the variables to be explained (endogenous variables) by a statistical model and X be the vector of explanatory (exogenous) variables. In addition let $\varepsilon$ be a vector of error terms. Then the general expression of a structural form is $f(Y, X, \varepsilon) = 0$, where f is a function, possibly from vectors to vectors in the case of a multiple-equation model. The reduced form of this model is given by $Y = g(X, \varepsilon)$, with g a function.
Structural and reduced forms
Exogenous variables are variables which are not determined by the system. If we assume that demand is influenced not only by price, but also by an exogenous variable, Z, we can consider the structural supply and demand model
supply: $Q = a_S + b_S P + u_S$
demand: $Q = a_D + b_D P + d Z + u_D$
where the terms $u_S, u_D$ are random errors (deviations of the quantities supplied and demanded from those implied by the rest of each equation). By solving for the unknowns (endogenous variables) P and Q, this structural model can be rewritten in the reduced form:
$P = \pi_{10} + \pi_{11} Z + v_1,$
$Q = \pi_{20} + \pi_{21} Z + v_2,$
where the parameters $\pi_{ij}$ depend on the parameters of the structural model, and where the reduced form errors $v_1, v_2$ each depend on the structural parameters and on both structural errors. Note that both endogenous variables depend on the exogenous variable Z.
If the reduced form model is estimated using empirical data, obtaining estimated values for the coefficients $\pi_{ij}$, some of the structural parameters can be recovered: By combining the two reduced form equations to eliminate Z, the structural coefficients of the supply side model ($b_S$ and $a_S$) can be derived:
$b_S = \pi_{21} / \pi_{11}, \qquad a_S = \pi_{20} - b_S\,\pi_{10}.$
Note however, that this still does not allow us to identify the structural parameters of the demand equation. For that, we would need an exogenous variable which is included in the supply equation of the structural model, but not in the demand equation.
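The algebra of moving from the structural to the reduced form can be delegated to a computer algebra system. A minimal SymPy sketch; the symbol names follow the reconstructed equations above and are illustrative:

import sympy as sp

P, Q, Z, uS, uD = sp.symbols('P Q Z u_S u_D')
aS, bS, aD, bD, d = sp.symbols('a_S b_S a_D b_D d')

supply = sp.Eq(Q, aS + bS * P + uS)
demand = sp.Eq(Q, aD + bD * P + d * Z + uD)

# Solve for the endogenous variables P and Q in terms of Z and the shocks.
sol = sp.solve([supply, demand], [P, Q], dict=True)[0]
print(sp.simplify(sol[P]))
print(sp.simplify(sol[Q]))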
The general linear case
Let y be a column vector of M endogenous variables. In the case above with Q and P, we had M = 2. Let z be a column vector of K exogenous variables; in the case above z consisted only of Z. The structural linear model is
$A y = B z + u,$
where $u$ is a vector of structural shocks, and A and B are matrices; A is a square M × M matrix, while B is M × K. The reduced form of the system is:
$y = A^{-1} B z + A^{-1} u = \Pi z + v,$
with vector $v = A^{-1} u$ of reduced form errors that each depends on all structural errors, where the matrix A must be nonsingular for the reduced form to exist and be unique. Again, each endogenous variable depends on potentially each exogenous variable.
Without restrictions on the A
|
https://en.wikipedia.org/wiki/Autoregressive%20integrated%20moving%20average
|
In statistics and econometrics, and in particular in time series analysis, an autoregressive integrated moving average (ARIMA) model is a generalization of an autoregressive moving average (ARMA) model. Both of these models are fitted to time series data either to better understand the data or to forecast future points in the series. ARIMA models are applied in some cases where data show evidence of non-stationarity in the sense of mean (but not variance/autocovariance), where an initial differencing step (corresponding to the "integrated" part of the model) can be applied one or more times to eliminate the non-stationarity of the mean function (i.e., the trend). When a time series exhibits seasonality, seasonal differencing can be applied to eliminate the seasonal component. Since the ARMA model, according to Wold's decomposition theorem, is theoretically sufficient to describe a regular (a.k.a. purely nondeterministic) wide-sense stationary time series, we are motivated to make a non-stationary time series stationary, e.g., by using differencing, before we can use the ARMA model. Note that if the time series contains a predictable sub-process (a.k.a. pure sine or complex-valued exponential process), the predictable component is treated as a non-zero-mean but periodic (i.e., seasonal) component in the ARIMA framework so that it is eliminated by the seasonal differencing.
The autoregressive (AR) part of ARIMA indicates that the evolving variable of interest is regressed on its own lagged (i.e., prior) values. The moving average (MA) part indicates that the regression error is actually a linear combination of error terms whose values occurred contemporaneously and at various times in the past. The I (for "integrated") indicates that the data values have been replaced with the difference between their values and the previous values (and this differencing process may have been performed more than once). The purpose of each of these features is to make the model fit the data as well as possible.
Non-seasonal ARIMA models are generally denoted ARIMA(p,d,q) where parameters p, d, and q are non-negative integers, p is the order (number of time lags) of the autoregressive model, d is the degree of differencing (the number of times the data have had past values subtracted), and q is the order of the moving-average model. Seasonal ARIMA models are usually denoted ARIMA(p,d,q)(P,D,Q)m, where m refers to the number of periods in each season, and the uppercase P,D,Q refer to the autoregressive, differencing, and moving average terms for the seasonal part of the ARIMA model.
When two out of the three terms are zeros, the model may be referred to based on the non-zero parameter, dropping "AR", "I" or "MA" from the acronym describing the model. For example, ARIMA(1,0,0) is AR(1), ARIMA(0,1,0) is I(1), and ARIMA(0,0,1) is MA(1).
ARIMA models can be estimated following the Box–Jenkins approach.
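A minimal sketch of fitting an ARIMA model in Python, assuming the statsmodels package is installed; the series is synthetic (a random walk), so d = 1 is the natural choice:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))   # random walk: non-stationary in the mean

model = ARIMA(y, order=(1, 1, 1))     # ARIMA(p=1, d=1, q=1)
result = model.fit()
print(result.summary())
print(result.forecast(steps=5))       # five-step-ahead forecast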
Definition
Given time series data Xt where t is an integer index and the Xt are real numbers, an $\mathrm{ARMA}(p', q)$ model is given by
$X_t - \alpha_1 X_{t-1} - \cdots - \alpha_{p'} X_{t-p'} = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q},$
or equivalently, in terms of the lag operator $L$,
$\left(1 - \sum_{i=1}^{p'} \alpha_i L^i\right) X_t = \left(1 + \sum_{i=1}^{q} \theta_i L^i\right) \varepsilon_t,$
where the $\alpha_i$ are the parameters of the autoregressive part of the model, the $\theta_i$ are the parameters of the moving average part and the $\varepsilon_t$ are error terms.
|
https://en.wikipedia.org/wiki/Bayesian%20search%20theory
|
Bayesian search theory is the application of Bayesian statistics to the search for lost objects. It has been used several times to find lost sea vessels, for example USS Scorpion, and has played a key role in the recovery of the flight recorders in the Air France Flight 447 disaster of 2009. It has also been used in the attempts to locate the remains of Malaysia Airlines Flight 370.
Procedure
The usual procedure is as follows:
Formulate as many reasonable hypotheses as possible about what may have happened to the object.
For each hypothesis, construct a probability density function for the location of the object.
Construct a function giving the probability of actually finding an object in location X when searching there if it really is in location X. In an ocean search, this is usually a function of water depth — in shallow water chances of finding an object are good if the search is in the right place. In deep water chances are reduced.
Combine the above information coherently to produce an overall probability density map. (Usually this simply means multiplying the two functions together.) This gives the probability of finding the object by looking in location X, for all possible locations X. (This can be visualized as a contour map of probability.)
Construct a search path which starts at the point of highest probability and 'scans' over high probability areas, then intermediate probabilities, and finally low probability areas.
Revise all the probabilities continuously during the search. For example, if the hypotheses for location X imply the likely disintegration of the object and the search at location X has yielded no fragments, then the probability that the object is somewhere around there is greatly reduced (though not usually to zero) while the probabilities of its being at other locations is correspondingly increased. The revision process is done by applying Bayes' theorem.
In other words, first search where it most probably will be found, then search where finding it is less probable, then search where the probability is even less (but still possible due to limitations on fuel, range, water currents, etc.), until insufficient hope of locating the object at acceptable cost remains.
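A minimal sketch of the update step on a discretized map, in Python with NumPy; the prior and detection probabilities are hypothetical:

import numpy as np

prior = np.array([0.1, 0.4, 0.3, 0.2])      # probability the object is in each cell
p_detect = np.array([0.9, 0.5, 0.8, 0.3])   # chance of finding it there if it really is there

def update_after_failed_search(prior, p_detect, searched):
    # Bayes' theorem after searching one cell and finding nothing.
    likelihood = np.ones_like(prior)
    likelihood[searched] = 1.0 - p_detect[searched]
    posterior = prior * likelihood
    return posterior / posterior.sum()

posterior = update_after_failed_search(prior, p_detect, searched=1)
print(posterior)   # cell 1 drops from 0.4 to 0.25; the other cells rise proportionally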
The advantages of the Bayesian method are that all information available is used coherently (i.e., in a "leak-proof" manner) and the method automatically produces estimates of the cost for a given success probability. That is, even before the start of searching, one can say, hypothetically, "there is a 65% chance of finding it in a 5-day search. That probability will rise to 90% after a 10-day search and 97% after 15 days" or a similar statement. Thus the economic viability of the search can be estimated before committing resources to a search.
Apart from the USS Scorpion, other vessels located by Bayesian search theory include the MV Derbyshire, the largest British vessel ever lost at sea, and the SS Central America. It also proved successful in the
|
https://en.wikipedia.org/wiki/Witt%20algebra
|
In mathematics, the complex Witt algebra, named after Ernst Witt, is the Lie algebra of meromorphic vector fields defined on the Riemann sphere that are holomorphic except at two fixed points. It is also the complexification of the Lie algebra of polynomial vector fields on a circle, and the Lie algebra of derivations of the ring $\mathbb{C}[z, z^{-1}]$.
There are some related Lie algebras defined over finite fields, that are also called Witt algebras.
The complex Witt algebra was first defined by Élie Cartan (1909), and its analogues over finite fields were studied by Witt in the 1930s.
Basis
A basis for the Witt algebra is given by the vector fields $L_n = -z^{n+1} \frac{\partial}{\partial z}$, for n in $\mathbb{Z}$.
The Lie bracket of two basis vector fields is given by
$[L_m, L_n] = (m - n) L_{m+n}.$
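The bracket relation can be verified symbolically for any particular m and n. A minimal SymPy sketch:

import sympy as sp

z = sp.symbols('z')
f = sp.Function('f')

def L(n, expr):
    # The basis vector field L_n = -z**(n+1) d/dz, applied to an expression.
    return -z ** (n + 1) * sp.diff(expr, z)

m, n = 2, -1
lhs = L(m, L(n, f(z))) - L(n, L(m, f(z)))   # commutator applied to a test function
rhs = (m - n) * L(m + n, f(z))
print(sp.simplify(lhs - rhs))               # 0, confirming [L_m, L_n] = (m - n) L_{m+n}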
This algebra has a central extension called the Virasoro algebra that is important in two-dimensional conformal field theory and string theory.
Note that by restricting n to 1, 0, -1, one gets a subalgebra. Taken over the field of complex numbers, this is just $\mathfrak{sl}(2,\mathbb{C})$, the Lie algebra of the Lorentz group. Over the reals, it is the algebra sl(2,R) = su(1,1).
Conversely, su(1,1) suffices to reconstruct the original algebra in a presentation.
Over finite fields
Over a field k of characteristic p>0, the Witt algebra is defined to be the Lie algebra of derivations of the ring
$k[z]/(z^p).$
The Witt algebra is spanned by $L_m$ for $-1 \leq m \leq p-2$.
Images
See also
Virasoro algebra
Heisenberg algebra
References
Élie Cartan, Les groupes de transformations continus, infinis, simples. Ann. Sci. Ecole Norm. Sup. 26, 93-161 (1909).
Conformal field theory
Lie algebras
|
https://en.wikipedia.org/wiki/Wald%27s%20equation
|
In probability theory, Wald's equation, Wald's identity or Wald's lemma is an important identity that simplifies the calculation of the expected value of the sum of a random number of random quantities. In its simplest form, it relates the expectation of a sum of randomly many finite-mean, independent and identically distributed random variables to the expected number of terms in the sum and the random variables' common expectation under the condition that the number of terms in the sum is independent of the summands.
The equation is named after the mathematician Abraham Wald. An identity for the second moment is given by the Blackwell–Girshick equation.
Basic version
Let $(X_n)_{n\in\mathbb{N}}$ be a sequence of real-valued, independent and identically distributed random variables and let $N \geq 0$ be an integer-valued random variable that is independent of the sequence $(X_n)_{n\in\mathbb{N}}$. Suppose that $N$ and the $X_n$ have finite expectations. Then
$\operatorname{E}[X_1 + \cdots + X_N] = \operatorname{E}[N] \operatorname{E}[X_1].$
Example
Roll a six-sided die. Take the number on the die (call it $N$) and roll that number of six-sided dice to get the numbers $X_1, \ldots, X_N$, and add up their values. By Wald's equation, the resulting value on average is
$\operatorname{E}[N] \operatorname{E}[X] = 3.5 \times 3.5 = 12.25.$
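The example is easy to confirm by simulation. A minimal Python sketch using only the standard library:

import random

def sample_sum():
    n = random.randint(1, 6)                        # the first roll, N
    return sum(random.randint(1, 6) for _ in range(n))

trials = 200_000
mean = sum(sample_sum() for _ in range(trials)) / trials
print(mean)   # close to E[N] E[X] = 3.5 * 3.5 = 12.25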
General version
Let $(X_n)_{n\in\mathbb{N}}$ be an infinite sequence of real-valued random variables and let $N$ be a nonnegative integer-valued random variable.
Assume that:
1. $(X_n)_{n\in\mathbb{N}}$ are all integrable (finite-mean) random variables,
2. $\operatorname{E}[X_n \mathbf{1}_{\{N \geq n\}}] = \operatorname{E}[X_n] \operatorname{P}(N \geq n)$ for every natural number $n$, and
3. the infinite series satisfies $\sum_{n=1}^{\infty} \operatorname{E}[|X_n| \mathbf{1}_{\{N \geq n\}}] < \infty.$
Then the random sums
$S_N := \sum_{n=1}^{N} X_n, \qquad T_N := \sum_{n=1}^{N} \operatorname{E}[X_n]$
are integrable and $\operatorname{E}[S_N] = \operatorname{E}[T_N]$.
If, in addition,
4. $(X_n)_{n\in\mathbb{N}}$ all have the same expectation, and
5. $N$ has finite expectation,
then
$\operatorname{E}[S_N] = \operatorname{E}[N] \operatorname{E}[X_1].$
Remark: Usually, the name Wald's equation refers to this last equality.
Discussion of assumptions
Clearly, assumption (1) is needed to formulate assumption (2) and Wald's equation. Assumption (2) controls the amount of dependence allowed between the sequence $(X_n)_{n\in\mathbb{N}}$ and the number $N$ of terms; see the counterexample below for the necessity. Note that assumption (2) is satisfied when $N$ is a stopping time for a sequence of independent random variables $(X_n)_{n\in\mathbb{N}}$. Assumption (3) is of more technical nature, implying absolute convergence and therefore allowing arbitrary rearrangement of an infinite series in the proof.
If assumption (5) is satisfied, then assumption (3) can be strengthened to the simpler condition
6. there exists a real constant $C$ such that $\operatorname{E}[|X_n| \mathbf{1}_{\{N \geq n\}}] \leq C \operatorname{P}(N \geq n)$ for all natural numbers $n$.
Indeed, using assumption (6),
$\sum_{n=1}^{\infty} \operatorname{E}[|X_n| \mathbf{1}_{\{N \geq n\}}] \leq C \sum_{n=1}^{\infty} \operatorname{P}(N \geq n),$
and the last series equals the expectation of $N$ [Proof], which is finite by assumption (5). Therefore, (5) and (6) imply assumption (3).
Assume in addition to (1) and (5) that
7. $N$ is independent of the sequence $(X_n)_{n\in\mathbb{N}}$ and
8. there exists a constant $C$ such that $\operatorname{E}[|X_n|] \leq C$ for all natural numbers $n$.
Then all the assumptions (1), (2), (5) and (6), hence also (3) are satisfied. In particular, the conditions (4) and (8) are satisfied if
9. the random variables $(X_n)_{n\in\mathbb{N}}$ all have the same distribution.
Note that the random variables of the sequence $(X_n)_{n\in\mathbb{N}}$ don't need to be independent.
The interesting point is to admit some dependence between the random number $N$ of terms and the sequence $(X_n)_{n\in\mathbb{N}}$. A standard version is to ass
|
https://en.wikipedia.org/wiki/Michael%20I.%20Jordan
|
Michael Irwin Jordan (born February 25, 1956) is an American scientist, professor at the University of California, Berkeley and researcher in machine learning, statistics, and artificial intelligence.
Jordan was elected a member of the National Academy of Engineering in 2010 for contributions to the foundations and applications of machine learning.
He is one of the leading figures in machine learning, and in 2016 Science reported him as the world's most influential computer scientist.
In 2022, Jordan won the inaugural World Laureates Association Prize in Computer Science or Mathematics, "for fundamental contributions to the foundations of machine learning and its application."
Education
Jordan received his BS magna cum laude in Psychology in 1978 from the Louisiana State University, his MS in Mathematics in 1980 from Arizona State University and his PhD in Cognitive Science in 1985 from the University of California, San Diego. At the University of California, San Diego, Jordan was a student of David Rumelhart and a member of the Parallel Distributed Processing (PDP) Group in the 1980s.
Career and research
Jordan is the Pehong Chen Distinguished Professor at the University of California, Berkeley, where his appointment is split across EECS and Statistics. He was a professor at the Department of Brain and Cognitive Sciences at MIT from 1988 to 1998.
In the 1980s Jordan started developing recurrent neural networks as a cognitive model. In recent years, his work has been less driven by a cognitive perspective and more by the background of traditional statistics.
Jordan popularised Bayesian networks in the machine learning community and is known for pointing out links between machine learning and statistics. He was also prominent in the formalisation of variational methods for approximate inference and the popularisation of the expectation–maximization algorithm in machine learning.
Resignation from Machine Learning
In 2001, Jordan and others resigned from the editorial board of the journal Machine Learning. In a public letter, they argued for less restrictive access and pledged support for a new open access journal, the Journal of Machine Learning Research, which was created by Leslie Kaelbling to support the evolution of the field of machine learning.
Honors and awards
Jordan has received numerous awards, including a best student paper award (with X. Nguyen and M. Wainwright) at the International Conference on Machine Learning (ICML 2004), a best paper award (with R. Jacobs) at the American Control Conference (ACC 1991), the ACM-AAAI Allen Newell Award, the IEEE Neural Networks Pioneer Award, and an NSF Presidential Young Investigator Award. In 2002 he was named an AAAI Fellow "for significant contributions to reasoning under uncertainty, machine learning, and human motor control." In 2004 he was named an IMS Fellow "for contributions to graphical models and machine learning." In 2005 he was named an IEEE Fellow "for contributions to proba
|
https://en.wikipedia.org/wiki/Instrumental%20variables%20estimation
|
In statistics, econometrics, epidemiology and related disciplines, the method of instrumental variables (IV) is used to estimate causal relationships when controlled experiments are not feasible or when a treatment is not successfully delivered to every unit in a randomized experiment. Intuitively, IVs are used when an explanatory variable of interest is correlated with the error term, in which case ordinary least squares and ANOVA give biased results. A valid instrument induces changes in the explanatory variable but has no independent effect on the dependent variable, allowing a researcher to uncover the causal effect of the explanatory variable on the dependent variable.
Instrumental variable methods allow for consistent estimation when the explanatory variables (covariates) are correlated with the error terms in a regression model. Such correlation may occur when:
changes in the dependent variable change the value of at least one of the covariates ("reverse" causation),
there are omitted variables that affect both the dependent and explanatory variables, or
the covariates are subject to non-random measurement error.
Explanatory variables that suffer from one or more of these issues in the context of a regression are sometimes referred to as endogenous. In this situation, ordinary least squares produces biased and inconsistent estimates. However, if an instrument is available, consistent estimates may still be obtained. An instrument is a variable that does not itself belong in the explanatory equation but is correlated with the endogenous explanatory variables, conditionally on the value of other covariates.
In linear models, there are two main requirements for using IVs:
The instrument must be correlated with the endogenous explanatory variables, conditionally on the other covariates. If this correlation is strong, then the instrument is said to have a strong first stage. A weak correlation may provide misleading inferences about parameter estimates and standard errors.
The instrument cannot be correlated with the error term in the explanatory equation, conditionally on the other covariates. In other words, the instrument cannot suffer from the same problem as the original predicting variable. If this condition is met, then the instrument is said to satisfy the exclusion restriction.
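A minimal simulation sketch of the simple (single-instrument) IV estimator in NumPy; the data-generating process is hypothetical and chosen so that the instrument satisfies both requirements above:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)          # endogenous regressor, correlated with u
y = 2.0 * x + 3.0 * u + rng.normal(size=n)    # true causal effect of x on y is 2.0

ols = np.cov(x, y)[0, 1] / np.var(x)          # biased: x is correlated with the error through u
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]  # IV (Wald) estimator: Cov(z, y) / Cov(z, x)
print(ols, iv)                                # OLS drifts above 2.0; IV is close to 2.0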
Example
Informally, in attempting to estimate the causal effect of some variable X ("covariate" or "explanatory variable") on another Y ("dependent variable"), an instrument is a third variable Z which affects Y only through its effect on X.
For example, suppose a researcher wishes to estimate the causal effect of smoking (X) on general health (Y). Correlation between smoking and health does not imply that smoking causes poor health because other variables, such as depression, may affect both health and smoking, or because health may affect smoking. It is not possible to conduct controlled experiments on smoking status in the general population. T
|
https://en.wikipedia.org/wiki/Homothetic
|
Homothetic may refer to:
Geometry
Homothetic transformation, also known as homothety, homothecy, or homogeneous dilation
Homothetic center
Homothetic vector field
Economics
Homothetic preferences
|
https://en.wikipedia.org/wiki/Unary%20function
|
In mathematics, a unary function is a function that takes one argument. A unary operator belongs to a subset of unary functions, in that its range coincides with its domain. In contrast, a unary function's domain may or may not coincide with its range.
Examples
The successor function, denoted $\operatorname{succ}$, is a unary operator. Its domain and codomain are the natural numbers; its definition is as follows:
$\operatorname{succ}(n) = n + 1.$
In many programming languages such as C, executing this operation is denoted by postfixing ++ to the operand, i.e. the use of x++ is equivalent to executing the assignment x := x + 1.
Many of the elementary functions are unary functions, including the trigonometric functions, logarithm with a specified base, exponentiation to a particular power or base, and hyperbolic functions.
See also
Arity
Binary function
Binary operator
List of mathematical functions
Ternary operation
Unary operation
References
Foundations of Genetic Programming
Functions and mappings
Types of functions
|
https://en.wikipedia.org/wiki/Food%20engineering
|
Food engineering is a scientific, academic, and professional field that interprets and applies principles of engineering, science, and mathematics to food manufacturing and operations, including the processing, production, handling, storage, conservation, control, packaging and distribution of food products. Given its reliance on food science and broader engineering disciplines such as electrical, mechanical, civil, chemical, industrial and agricultural engineering, food engineering is considered a multidisciplinary and narrow field.
Due to the complex nature of food materials, food engineering also combines the study of more specific chemical and physical concepts such as biochemistry, microbiology, food chemistry, thermodynamics, transport phenomena, rheology, and heat transfer. Food engineers apply this knowledge to the cost-effective design, production, and commercialization of sustainable, safe, nutritious, healthy, appealing, affordable and high-quality ingredients and foods, as well as to the development of food systems, machinery, and instrumentation.
History
Although food engineering is a relatively recent and evolving field of study, it is based on long-established concepts and activities. The traditional focus of food engineering was preservation, which involved stabilizing and sterilizing foods, preventing spoilage, and preserving nutrients in food for prolonged periods of time. More specific traditional activities include food dehydration and concentration, protective packaging, canning and freeze-drying. The development of food technologies was greatly influenced and spurred by wars and long voyages, including space missions, where long-lasting and nutritious foods were essential for survival. Other ancient activities include milling, storage, and fermentation processes. Although several traditional activities remain of concern and form the basis of today's technologies and innovations, the focus of food engineering has recently shifted to food quality, safety, taste, health and sustainability.
Application and practices
The following are some of the applications and practices used in food engineering to produce safe, healthy, tasty, and sustainable food:
Refrigeration and freezing
The main objective of food refrigeration and/or freezing is to preserve the quality and safety of food materials. Refrigeration and freezing contribute to the preservation of perishable foods, and to the conservation of some food quality factors such as visual appearance, texture, taste, flavor and nutritional content. Freezing food slows the growth of bacteria that could potentially harm consumers.
Evaporation
Evaporation is used to pre-concentrate, increase the solid content, change the color, and reduce the water content of food and liquid products. This process is mostly seen when processing milk, starch derivatives, coffee, fruit juices, vegetable pastes and concentrates, seasonings, sauces, sugar, and edible oil. Evaporation is also used in f
|
https://en.wikipedia.org/wiki/Kummer%27s%20function
|
In mathematics, there are several functions known as Kummer's function. One is known as the confluent hypergeometric function of Kummer. Another one, defined below, is related to the polylogarithm. Both are named for Ernst Kummer.
Kummer's function is defined by
$\Lambda_n(z) = \int_0^z \frac{\log^{n-1} |t|}{1 + t}\, dt.$
The duplication formula is
$\Lambda_n(z) + \Lambda_n(-z) = 2^{1-n} \Lambda_n(-z^2).$
Compare this to the duplication formula for the polylogarithm:
$\operatorname{Li}_n(z) + \operatorname{Li}_n(-z) = 2^{1-n} \operatorname{Li}_n(z^2).$
An explicit link to the polylogarithm is given by
References
.
Special functions
|
https://en.wikipedia.org/wiki/Confluent%20hypergeometric%20function
|
In mathematics, a confluent hypergeometric function is a solution of a confluent hypergeometric equation, which is a degenerate form of a hypergeometric differential equation where two of the three regular singularities merge into an irregular singularity. The term confluent refers to the merging of singular points of families of differential equations; confluere is Latin for "to flow together". There are several common standard forms of confluent hypergeometric functions:
Kummer's (confluent hypergeometric) function $M(a, b, z)$, introduced by Kummer (1837), is a solution to Kummer's differential equation. This is also known as the confluent hypergeometric function of the first kind. There is a different and unrelated Kummer's function bearing the same name.
Tricomi's (confluent hypergeometric) function $U(a, b, z)$, introduced by Tricomi (1947), sometimes denoted by $\Psi(a; b; z)$, is another solution to Kummer's equation. This is also known as the confluent hypergeometric function of the second kind.
Whittaker functions (for Edmund Taylor Whittaker) are solutions to Whittaker's equation.
Coulomb wave functions are solutions to the Coulomb wave equation.
The Kummer functions, Whittaker functions, and Coulomb wave functions are essentially the same, and differ from each other only by elementary functions and change of variables.
Kummer's equation
Kummer's equation may be written as:
$z \frac{d^2 w}{dz^2} + (b - z) \frac{dw}{dz} - a w = 0,$
with a regular singular point at $z = 0$ and an irregular singular point at $z = \infty$. It has two (usually) linearly independent solutions $M(a, b, z)$ and $U(a, b, z)$.
Kummer's function of the first kind is a generalized hypergeometric series introduced by Kummer, given by:
$M(a, b, z) = \sum_{n=0}^{\infty} \frac{a^{(n)} z^n}{b^{(n)} n!},$
where:
$a^{(0)} = 1,$
$a^{(n)} = a(a+1)(a+2)\cdots(a+n-1)$
is the rising factorial. Another common notation for this solution is ${}_1F_1(a; b; z)$. Considered as a function of $a$, $b$, or $z$ with the other two held constant, this defines an entire function of $a$ or $z$, except when $b = 0, -1, -2, \ldots$ As a function of $b$ it is analytic except for poles at the non-positive integers.
Some values of and yield solutions that can be expressed in terms of other known functions. See #Special cases. When is a non-positive integer, then Kummer's function (if it is defined) is a generalized Laguerre polynomial.
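The series converges rapidly for moderate arguments and is easy to compare against a library implementation. A minimal sketch, assuming SciPy is available (its hyp1f1 computes Kummer's function):

from math import factorial
from scipy.special import hyp1f1

def kummer_series(a, b, z, terms=40):
    # Truncated series: sum over n of (a)_n / ((b)_n n!) z^n, with rising factorials.
    total, rising_a, rising_b = 0.0, 1.0, 1.0
    for n in range(terms):
        total += rising_a / rising_b * z ** n / factorial(n)
        rising_a *= a + n
        rising_b *= b + n
    return total

a, b, z = 0.5, 1.5, 2.0
print(kummer_series(a, b, z), hyp1f1(a, b, z))   # the two values agree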
Just as the confluent differential equation is a limit of the hypergeometric differential equation as the singular point at 1 is moved towards the singular point at ∞, the confluent hypergeometric function can be given as a limit of the hypergeometric function:
$M(a, c, z) = \lim_{b \to \infty} {}_2F_1(a, b; c; z/b),$
and many of the properties of the confluent hypergeometric function are limiting cases of properties of the hypergeometric function.
Since Kummer's equation is second order there must be another, independent, solution. The indicial equation of the method of Frobenius tells us that the lowest power of a power series solution to the Kummer equation is either 0 or $1 - b$. If we let $w(z)$ be
$w(z) = z^{1-b} v(z),$
then the differential equation gives
$z^{2-b} \frac{d^2 v}{dz^2} + (2 - b - z) z^{1-b} \frac{dv}{dz} - (a + 1 - b) z^{1-b} v = 0,$
which, upon dividing out $z^{1-b}$ and simplifying, becomes
$z \frac{d^2 v}{dz^2} + (2 - b - z) \frac{dv}{dz} - (a + 1 - b) v = 0.$
This means that $z^{1-b} M(a + 1 - b, 2 - b, z)$ is a solution so long as $b$ is not an integer greater than 1, just as $M(a, b, z)$ is a solution so long as $b$ is not an integer less than 1. We can also use the
|
https://en.wikipedia.org/wiki/Jotun%20Hein
|
Jotun John Piet Hein (born 19 July 1956) is Professor of Bioinformatics at the Department of Statistics of the University of Oxford and a professorial fellow of University College, Oxford. Hein was previously Director of the Bioinformatics Research Centre at Aarhus University, Denmark.
Hein is the fourth son of Piet Hein, the Danish scientist, mathematician, inventor, designer, author, and poet who wrote the famed Grooks poetry collections and invented the Superegg and the Soma cube. When he was 12 years old, Jotun proved the Soma cube's "Basalt Rock" construction impossible, which was published in the puzzle's instruction manual as "Jotun's Proof."
Hein's research interests are in molecular evolution, molecular population genetics and bioinformatics.
Selected books
Hein, J; Schierup, M. H., and Wiuf, C. Gene Genealogies, Variation and Evolution – A Primer in Coalescent Theory. Oxford University Press, 2005. .
References
External links
Personal home page
1956 births
Living people
British statisticians
Fellows of University College, Oxford
20th-century British mathematicians
21st-century British mathematicians
|
https://en.wikipedia.org/wiki/Lidstone%20series
|
In mathematics, a Lidstone series, named after George James Lidstone, is a kind of polynomial expansion that can express certain types of entire functions.
Let ƒ(z) be an entire function of exponential type less than (N + 1)π, as defined below. Then ƒ(z) can be expanded in terms of polynomials An as follows:
$f(z) = \sum_{n=0}^{\infty} \left[ A_n(z) f^{(2n)}(1) + A_n(1 - z) f^{(2n)}(0) \right] + \sum_{k=1}^{N} C_k \sin(k\pi z).$
Here An(z) is a polynomial in z of degree n, Ck a constant, and ƒ(n)(a) the nth derivative of ƒ at a.
A function is said to be of exponential type less than t if the function
is bounded above by t. Thus, the constant N used in the summation above is given by
with
References
Ralph P. Boas, Jr. and C. Creighton Buck, Polynomial Expansions of Analytic Functions, (1964) Academic Press, NY. Library of Congress Catalog 63-23263. Issued as volume 19 of Moderne Funktionentheorie ed. L.V. Ahlfors, series Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer-Verlag
Mathematical series
|
https://en.wikipedia.org/wiki/Circular%20distribution
|
In probability and statistics, a circular distribution or polar distribution is a probability distribution of a random variable whose values are angles, usually taken to be in the range A circular distribution is often a continuous probability distribution, and hence has a probability density, but such distributions can also be discrete, in which case they are called circular lattice distributions. Circular distributions can be used even when the variables concerned are not explicitly angles: the main consideration is that there is not usually any real distinction between events occurring at the lower or upper end of the range, and the division of the range could notionally be made at any point.
Graphical representation
If a circular distribution has a density
$p(\phi), \qquad 0 \leq \phi < 2\pi,$
it can be graphically represented as a closed curve
$[x(\phi), y(\phi)] = [r(\phi)\cos\phi,\ r(\phi)\sin\phi],$
where the radius $r(\phi)$ is set equal to
$r(\phi) = a + b\,p(\phi),$
and where a and b are chosen on the basis of appearance.
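A minimal NumPy sketch of this construction for a standard circular density (the von Mises density with concentration 1 is used as the example; the constants a and b are arbitrary appearance choices):

import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 361)
density = np.exp(np.cos(phi)) / (2.0 * np.pi * np.i0(1.0))   # von Mises density, kappa = 1

a, b = 1.0, 2.0
r = a + b * density
x, y = r * np.cos(phi), r * np.sin(phi)   # points of the closed curve to plot
print(x[:3], y[:3])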
Examples
By computing the probability distribution of angles along a handwritten ink trace, a lobe-shaped polar distribution emerges. The main direction of the lobe in the first quadrant corresponds to the slant of handwriting (see: graphonomics).
An example of a circular lattice distribution would be the probability of being born in a given month of the year, with each calendar month being thought of as arranged round a circle, so that "January" is next to "December".
See also
Circular mean
Circular uniform distribution
von Mises distribution
References
External links
Circular Values Math and Statistics with C++11, A C++11 infrastructure for circular values (angles, time-of-day, etc.) mathematics and statistics
Types of probability distributions
Directional statistics
Statistical charts and diagrams
|
https://en.wikipedia.org/wiki/Running%20angle
|
In mathematics, the running angle is the angle of consecutive vectors with respect to the base line, i.e.
$\theta_i = \arctan\!\left(\frac{y_{i+1} - y_i}{x_{i+1} - x_i}\right).$
Usually, it is more informative to compute it using a four-quadrant version of the arctan function in a mathematical software library.
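A minimal NumPy sketch; the trace coordinates are hypothetical:

import numpy as np

x = np.array([0.0, 1.0, 1.5, 1.0])   # pen positions along a trace
y = np.array([0.0, 0.5, 1.5, 2.0])

# Running angle of each consecutive segment via the four-quadrant arctangent.
angles = np.arctan2(np.diff(y), np.diff(x))
print(np.degrees(angles))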
See also
Differential geometry
Polar distribution
Penmanship
|
https://en.wikipedia.org/wiki/Gheorghe%20%C8%9Ai%C8%9Beica
|
Gheorghe Țițeica (4 October 1873 – 5 February 1939), publishing as George or Georges Tzitzéica, was a Romanian mathematician who made important contributions in geometry. He is recognized as the founder of the Romanian school of differential geometry.
Education
He was born in Turnu Severin, western Oltenia, the son of Anca (née Ciolănescu) and Radu Țiței, originally from Cilibia, in Buzău County. His name was registered as Țițeica–a combination of his parents' surnames. He showed an early interest in science, as well as music and literature. Țițeica was an accomplished violinist, having studied music since childhood: music was to remain his hobby. While studying at the Carol I High School in Craiova, he contributed to the school's magazine, writing the columns on mathematics and studies of literary critique. After graduation in 1892, he obtained a scholarship at the preparatory school in Bucharest, where he also was admitted as a student in the Mathematics Department of University of Bucharest's Faculty of Sciences. His teachers there included David Emmanuel, Spiru Haret, Constantin Gogu, Dimitrie Petrescu, and Iacob Lahovary. In June 1895, he graduated with a Bachelor of Mathematics.
In the summer of 1896, after a stint as a substitute teacher at the Bucharest theological seminary, Țițeica passed his exams for promotion to a secondary school position, becoming teacher in Galați.
In 1897, on the advice of teachers and friends, Țițeica completed his studies at a preparatory school in Paris. Among his mates were Henri Lebesgue and Paul Montel. After ranking first in his class and earning a second undergraduate degree from the Sorbonne in 1897, he was admitted at the École Normale Supérieure, where he took classes with Paul Appell, Gaston Darboux, Édouard Goursat, Charles Hermite, Gabriel Koenigs, Émile Picard, Henri Poincaré, and Jules Tannery. Țițeica chose Darboux to be his thesis advisor; after working for two years on his doctoral dissertation, titled Sur les congruences cycliques et sur les systèmes triplement conjugués, he defended it on 30 June 1899 before a board of examiners consisting of Darboux (as chair), Goursat, and Koenigs.
Career
Upon his return to Romania, Țițeica was appointed assistant professor at the University of Bucharest. He was promoted to full professor on 3 May 1903, retaining this position until his death in 1939. He also taught mathematics at the Polytechnic University of Bucharest, starting in 1928. In 1913, at age 40, Țițeica was elected as a permanent member of the Romanian Academy, replacing Spiru Haret. Later he was appointed in leading roles: in 1922, vice-president of the scientific section, in 1928, vice-president and in 1929 secretary general. Țițeica was also president of the , of the Romanian Association of Science, and of the Association of the development and the spreading of science. He was a vice-president of the Polytechnics Association of Romania and member of the High Council of Public Teaching.
|
https://en.wikipedia.org/wiki/Helly%E2%80%93Bray%20theorem
|
In probability theory, the Helly–Bray theorem relates the weak convergence of cumulative distribution functions to the convergence of expectations of certain measurable functions. It is named after Eduard Helly and Hubert Evelyn Bray.
Let F and F1, F2, ... be cumulative distribution functions on the real line. The Helly–Bray theorem states that if Fn converges weakly to F, then
$\int_{\mathbb{R}} g\, dF_n \to \int_{\mathbb{R}} g\, dF \quad \text{as } n \to \infty$
for each bounded, continuous function g: R → R, where the integrals involved are Riemann–Stieltjes integrals.
Note that if X and X1, X2, ... are random variables corresponding to these distribution functions, then the Helly–Bray theorem does not imply that E(Xn) → E(X), since g(x) = x is not a bounded function.
In fact, a stronger and more general theorem holds. Let P and P1, P2, ... be probability measures on some set S. Then Pn converges weakly to P if and only if
$\int_S f\, dP_n \to \int_S f\, dP \quad \text{as } n \to \infty$
for all bounded, continuous and real-valued functions f on S. (The integrals in this version of the theorem are Lebesgue–Stieltjes integrals.)
The more general theorem above is sometimes taken as defining weak convergence of measures (see Billingsley, 1999, p. 3).
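The defining property is easy to observe by simulation. A minimal NumPy sketch: sample means of n uniforms converge weakly to the constant 1/2, so the expectation of any bounded continuous g converges to g(1/2):

import numpy as np

rng = np.random.default_rng(0)
g = np.cos                                   # a bounded, continuous test function

for n in (1, 10, 100, 1000):
    samples = rng.uniform(size=(100_000, n)).mean(axis=1)
    print(n, g(samples).mean())              # approaches cos(0.5), approximately 0.8776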
References
Probability theorems
|
https://en.wikipedia.org/wiki/Nonmetricity%20tensor
|
In mathematics, the nonmetricity tensor in differential geometry is the covariant derivative of the metric tensor. It is therefore a tensor field of order three. It vanishes for the case of Riemannian geometry and can be
used to study non-Riemannian spacetimes.
Definition
By components, it is defined as follows.
$Q_{\mu\alpha\beta} = \nabla_{\mu} g_{\alpha\beta}.$
It measures the rate of change of the components of the metric tensor along the flow of a given vector field, since
$\nabla_{\partial_\mu} g = Q_{\mu\alpha\beta}\, dx^{\alpha} \otimes dx^{\beta},$
where $\{\partial_\mu\}_{\mu = 0, 1, 2, 3}$ is the coordinate basis of vector fields of the tangent bundle, in the case of having a 4-dimensional manifold.
Relation to connection
We say that a connection $\Gamma$ is compatible with the metric when its associated covariant derivative of the metric tensor (call it $Q^{\Gamma}$, for example) is zero, i.e.
$Q^{\Gamma}_{\mu\alpha\beta} = \nabla_{\mu} g_{\alpha\beta} = 0.$
If the connection is also torsion-free (i.e. totally symmetric) then it is known as the Levi-Civita connection, which is the only one without torsion and compatible with the metric tensor. If we see it from a geometrical point of view, a non-vanishing nonmetricity tensor for a metric tensor implies that the modulus of a vector defined on the tangent bundle to a certain point of the manifold, changes when it is evaluated along the direction (flow) of another arbitrary vector.
References
External links
Differential geometry
|
https://en.wikipedia.org/wiki/Lebesgue%20point
|
In mathematics, given a locally Lebesgue integrable function $f$ on $\mathbb{R}^k$, a point $x$ in the domain of $f$ is a Lebesgue point if
$\lim_{r \to 0^+} \frac{1}{|B(x, r)|} \int_{B(x, r)} |f(y) - f(x)|\, dy = 0.$
Here, $B(x, r)$ is a ball centered at $x$ with radius $r > 0$, and $|B(x, r)|$ is its Lebesgue measure. The Lebesgue points of $f$ are thus points where $f$ does not oscillate too much, in an average sense.
The Lebesgue differentiation theorem states that, given any $f \in L^1(\mathbb{R}^k)$, almost every $x$ is a Lebesgue point of $f$.
References
Mathematical analysis