source | text
---|---|
https://en.wikipedia.org/wiki/Fisher%27s%20equation
|
In mathematics, Fisher's equation (named after statistician and biologist Ronald Fisher), also known as the Kolmogorov–Petrovsky–Piskunov equation (named after Andrey Kolmogorov, Ivan Petrovsky, and Nikolai Piskunov), KPP equation or Fisher–KPP equation, is the partial differential equation
$$\frac{\partial u}{\partial t} = D\,\frac{\partial^2 u}{\partial x^2} + r\,u(1-u).$$
It is a kind of reaction–diffusion system that can be used to model population growth and wave propagation.
Details
Fisher's equation belongs to the class of reaction–diffusion equations: in fact, it is one of the simplest semilinear reaction–diffusion equations, the one which has the inhomogeneous term
$$f(u, x, t) = r\,u(1-u), \qquad r > 0,$$
which can exhibit traveling wave solutions that switch between equilibrium states given by $f(u) = 0$. Such equations occur, e.g., in ecology, physiology, combustion, crystallization, plasma physics, and in general phase transition problems.
Fisher proposed this equation in his 1937 paper The wave of advance of advantageous genes in the context of population dynamics to describe the spatial spread of an advantageous allele and explored its travelling wave solutions.
For every wave speed $c \ge 2\sqrt{rD}$ ($c \ge 2$ in dimensionless form) it admits travelling wave solutions of the form
$$u(x,t) = v(x \pm ct) \equiv v(z),$$
where $v$ is increasing and
$$\lim_{z\to -\infty} v(z) = 0, \qquad \lim_{z\to \infty} v(z) = 1.$$
That is, the solution switches from the equilibrium state u = 0 to the equilibrium state u = 1. No such solution exists for c < 2. The wave shape for a given wave speed is unique. The travelling-wave solutions are stable against near-field perturbations, but not to far-field perturbations, which can thicken the tail. One can prove using the comparison principle and super-solution theory that all solutions with compact initial data converge to waves with the minimum speed.
For the special wave speed $c = \pm 5/\sqrt{6}$, all solutions can be found in a closed form, with
$$v(z) = \left(1 + C\,\exp\!\left(\mp z/\sqrt{6}\right)\right)^{-2},$$
where $C$ is arbitrary, and the above limit conditions are satisfied for $C > 0$.
Proof of the existence of travelling wave solutions and analysis of their properties is often done by the phase space method.
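A rough numerical illustration of the minimum-speed front (not from the article; the grid, time step and initial data below are arbitrary choices): an explicit finite-difference integration of the dimensionless equation u_t = u_xx + u(1 − u) started from compact initial data develops a front whose measured speed is close to the minimum speed c = 2.

```python
import numpy as np

# Dimensionless Fisher-KPP equation u_t = u_xx + u(1 - u): explicit Euler in time,
# centred differences in space.  Illustrative sketch only; parameters are ad hoc.
L, N, T = 200.0, 2000, 60.0
dx = L / N
dt = 0.2 * dx**2            # keep the explicit scheme stable (dt <= dx^2 / 2)
x = np.linspace(0.0, L, N)
u = np.where(x < 10.0, 1.0, 0.0)        # compact initial data

def front_position(u, x, level=0.5):
    """Rightmost point where u exceeds the given level."""
    idx = np.where(u > level)[0]
    return x[idx[-1]] if idx.size else 0.0

pos0, t = front_position(u, x), 0.0
while t < T:
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]   # crude zero-flux boundaries
    u = u + dt * (lap + u * (1.0 - u))
    t += dt

speed = (front_position(u, x) - pos0) / T
print(f"estimated front speed ~ {speed:.2f} (theory: minimum speed c = 2)")
```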
KPP equation
In the same year (1937) as Fisher, Kolmogorov, Petrovsky and Piskunov introduced the more general reaction–diffusion equation
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + F(u),$$
where $F$ is a sufficiently smooth function with the properties that $F(0) = F(1) = 0$, $F'(0) = r > 0$ and $F'(u) < r$ for all $0 < u < 1$. This too has the travelling wave solutions discussed above.
Fisher's equation is obtained upon setting $F(u) = r\,u(1-u)$ and rescaling the coordinate by a factor of $\sqrt{D}$.
A more general example is given by $F(u) = r\,u^{q}(1-u)$ with $q > 0$.
Kolmogorov, Petrovsky and Piskunov discussed the example with $q = 2$ in the context of population genetics.
The minimum speed of a KPP-type traveling wave is given by
$$c_{\min} = 2\sqrt{\left.\frac{dF}{du}\right|_{u=0}},$$
which differs from other types of waves; see for example ZFK-type waves.
See also
ZFK equation
List of plasma (physics) articles
Allen–Cahn equation
References
External links
Fisher's equation on MathWorld.
Fisher equation on EqWorld.
Partial differential equations
Population ecology
|
https://en.wikipedia.org/wiki/Cartesian%20tensor
|
In geometry and linear algebra, a Cartesian tensor uses an orthonormal basis to represent a tensor in a Euclidean space in the form of components. Converting a tensor's components from one such basis to another is done through an orthogonal transformation.
The most familiar coordinate systems are the two-dimensional and three-dimensional Cartesian coordinate systems. Cartesian tensors may be used with any Euclidean space, or more technically, any finite-dimensional vector space over the field of real numbers that has an inner product.
Use of Cartesian tensors occurs in physics and engineering, such as with the Cauchy stress tensor and the moment of inertia tensor in rigid body dynamics. Sometimes general curvilinear coordinates are convenient, as in high-deformation continuum mechanics, or even necessary, as in general relativity. While orthonormal bases may be found for some such coordinate systems (e.g. tangent to spherical coordinates), Cartesian tensors may provide considerable simplification for applications in which rotations of rectilinear coordinate axes suffice. The transformation is a passive transformation, since the coordinates are changed and not the physical system.
Cartesian basis and related terminology
Vectors in three dimensions
In 3D Euclidean space, $\mathbb{R}^3$, the standard basis is $\mathbf{e}_x$, $\mathbf{e}_y$, $\mathbf{e}_z$. Each basis vector points along the x-, y-, and z-axes, and the vectors are all unit vectors (or normalized), so the basis is orthonormal.
Throughout, when referring to Cartesian coordinates in three dimensions, a right-handed system is assumed, as this is much more common than a left-handed system in practice; see orientation (vector space) for details.
For Cartesian tensors of order 1, a Cartesian vector $\mathbf{a}$ can be written algebraically as a linear combination of the basis vectors $\mathbf{e}_x$, $\mathbf{e}_y$, $\mathbf{e}_z$:
$$\mathbf{a} = a_x \mathbf{e}_x + a_y \mathbf{e}_y + a_z \mathbf{e}_z,$$
where the coordinates of the vector with respect to the Cartesian basis are denoted $a_x$, $a_y$, $a_z$. It is common and helpful to display the basis vectors as column vectors
$$\mathbf{e}_x = \begin{pmatrix}1\\0\\0\end{pmatrix}, \quad \mathbf{e}_y = \begin{pmatrix}0\\1\\0\end{pmatrix}, \quad \mathbf{e}_z = \begin{pmatrix}0\\0\\1\end{pmatrix},$$
when we have a coordinate vector in a column vector representation:
$$\mathbf{a} = \begin{pmatrix}a_x\\a_y\\a_z\end{pmatrix}.$$
A row vector representation is also legitimate, although in the context of general curvilinear coordinate systems the row and column vector representations are used separately for specific reasons – see Einstein notation and covariance and contravariance of vectors for why.
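As a small illustration (not part of the article; the vector, angle and axis are arbitrary), the following sketch applies a passive rotation to the column-vector components of a first-order Cartesian tensor and checks that the length, an invariant, is unchanged under the orthogonal transformation.

```python
import numpy as np

# Passive rotation of Cartesian vector components: the same vector a is expressed
# in a new orthonormal basis obtained by rotating the axes by angle theta about z.
theta = np.deg2rad(30.0)
R = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])   # orthogonal: R @ R.T = I

a_old = np.array([1.0, 2.0, 3.0])    # components in the original basis
a_new = R @ a_old                    # components in the rotated basis

# Lengths and inner products are invariant under the orthogonal change of basis.
assert np.isclose(np.linalg.norm(a_old), np.linalg.norm(a_new))
print(a_new)
```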
The term "component" of a vector is ambiguous: it could refer to:
a specific coordinate of the vector such as $a_x$ (a scalar), and similarly for $a_y$ and $a_z$, or
the coordinate scalar-multiplying the corresponding basis vector, in which case the "x-component" of $\mathbf{a}$ is $a_x \mathbf{e}_x$ (a vector), and similarly for the y- and z-components.
A more general notation is tensor index notation, which has the flexibility of numerical values rather than fixed coordinate labels. The Cartesian labels are replaced by tensor indices in the basis vectors $\mathbf{e}_x \to \mathbf{e}_1$, $\mathbf{e}_y \to \mathbf{e}_2$, $\mathbf{e}_z \to \mathbf{e}_3$ and coordinates $a_x \to a_1$, $a_y \to a_2$, $a_z \to a_3$. In general, the notation $\mathbf{e}_1$, $\mathbf{e}_2$, $\mathbf{e}_3$ refers to any basis, and $a_1$, $a_2$, $a_3$ refers to the corresponding coordinate system; although here they are restricted to th
|
https://en.wikipedia.org/wiki/Silverman%E2%80%93Toeplitz%20theorem
|
In mathematics, the Silverman–Toeplitz theorem, first proved by Otto Toeplitz, is a result in summability theory characterizing matrix summability methods that are regular. A regular matrix summability method is a matrix transformation of a convergent sequence which preserves the limit.
An infinite matrix $(a_{i,j})_{i,j \in \mathbb{N}}$ with complex-valued entries defines a regular summability method if and only if it satisfies all of the following properties:
$\lim_{i\to\infty} a_{i,j} = 0$ for each fixed $j$ (every column tends to zero),
$\lim_{i\to\infty} \sum_{j=0}^{\infty} a_{i,j} = 1$ (the row sums tend to one),
$\sup_{i} \sum_{j=0}^{\infty} |a_{i,j}| < \infty$ (the absolute row sums are uniformly bounded).
An example is Cesàro summation, a matrix summability method with
$$a_{m,n} = \begin{cases} \dfrac{1}{m} & n \le m, \\[4pt] 0 & n > m, \end{cases}$$
so that each row replaces the sequence by the average of its first $m$ terms.
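A quick numerical sanity check of regularity for the Cesàro matrix, assuming the entries given above; the test sequence s_n = 1 + 1/n is an arbitrary convergent sequence.

```python
import numpy as np

# Cesàro matrix: a[m, n] = 1/(m+1) for n <= m (0-indexed), 0 otherwise.
# Applying it to a convergent sequence gives the running averages, which
# converge to the same limit (regularity).  Illustrative sketch only.
M = 2000
s = 1.0 + 1.0 / np.arange(1, M + 1)                    # s_n = 1 + 1/n, limit 1
A = np.tril(np.ones((M, M))) / np.arange(1, M + 1)[:, None]
t = A @ s                                              # transformed sequence
print(s[-1], t[-1])                                    # both approach the limit 1
```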
References
Citations
Further reading
Toeplitz, Otto (1911) "Über allgemeine lineare Mittelbildungen." Prace mat.-fiz., 22, 113–118 (the original paper in German)
Silverman, Louis Lazarus (1913) "On the definition of the sum of a divergent series." University of Missouri Studies, Math. Series I, 1–96
Theorems in analysis
Summability methods
Summability theory
|
https://en.wikipedia.org/wiki/Warren%20Williams%20%28American%20football%29
|
Warren Williams Jr. (born July 29, 1965) is a former professional American football running back. He played college football at the University of Miami.
College statistics
1984: 29 carries for 140 yards. 13 catches for 154 yards and 1 touchdown.
1985: 89 carries for 522 yards and 4 touchdowns. 14 catches for 131 yards.
1986: 80 carries for 399 yards and 3 touchdowns. 13 catches for 114 yards and 2 touchdowns.
1987: 135 carries for 673 yards and 5 touchdowns. 30 catches for 309 yards and 1 touchdown.
NFL career
Williams played in the National Football League (NFL) for the Pittsburgh Steelers (1988–1992) and the Indianapolis Colts. He was drafted by the Steelers in the sixth round of the 1988 NFL Draft.
References
1965 births
Living people
Players of American football from Fort Myers, Florida
American football running backs
Miami Hurricanes football players
Pittsburgh Steelers players
Indianapolis Colts players
|
https://en.wikipedia.org/wiki/Factorial%20experiment
|
In statistics, a full factorial experiment is an experiment whose design consists of two or more factors, each with discrete possible values or "levels", and whose experimental units take on all possible combinations of these levels across all such factors. A full factorial design may also be called a fully crossed design. Such an experiment allows the investigator to study the effect of each factor on the response variable, as well as the effects of interactions between factors on the response variable.
For the vast majority of factorial experiments, each factor has only two levels. For example, with two factors each taking two levels, a factorial experiment would have four treatment combinations in total, and is usually called a 2×2 factorial design. In such a design, the interaction between the variables is often the most important. This applies even to scenarios where a main effect and an interaction are present.
If the number of combinations in a full factorial design is too high to be logistically feasible, a fractional factorial design may be done, in which some of the possible combinations (usually at least half) are omitted.
Other terms for "treatment combinations" are often used, such as runs (of an experiment), points (viewing the combinations as vertices of a graph), and cells (arising as intersections of rows and columns).
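As an illustrative sketch (the response values below are made up and not from the article), the runs of a 2×2 full factorial design can be enumerated and the two main effects and the interaction estimated from the usual contrasts:

```python
from itertools import product

# Enumerate the runs of a 2x2 full factorial design using coded levels -1/+1.
levels = [-1, +1]
runs = list(product(levels, levels))          # [(-1,-1), (-1,+1), (+1,-1), (+1,+1)]
response = {(-1, -1): 20.0, (-1, +1): 30.0,
            (+1, -1): 40.0, (+1, +1): 52.0}   # hypothetical measurements

# Classical effect estimates for a 2x2 design: the contrast divided by 2,
# i.e. average response at the high level minus average at the low level.
effect_A  = sum(a * response[(a, b)] for a, b in runs) / 2
effect_B  = sum(b * response[(a, b)] for a, b in runs) / 2
effect_AB = sum(a * b * response[(a, b)] for a, b in runs) / 2
print(effect_A, effect_B, effect_AB)          # 21.0, 11.0, 1.0
```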
History
Factorial designs were used in the 19th century by John Bennet Lawes and Joseph Henry Gilbert of the Rothamsted Experimental Station.
Ronald Fisher argued in 1926 that "complex" designs (such as factorial designs) were more efficient than studying one factor at a time. Fisher wrote,
Nature, he suggests, will best respond to "a logical and carefully thought out questionnaire". A factorial design allows the effect of several factors and even interactions between them to be determined with the same number of trials as are necessary to determine any one of the effects by itself with the same degree of accuracy.
Frank Yates made significant contributions, particularly in the analysis of designs, by the Yates analysis.
The term "factorial" may not have been used in print before 1935, when Fisher used it in his book The Design of Experiments.
Advantages and disadvantages of factorial experiments
Many people examine the effect of only a single factor or variable. Compared to such one-factor-at-a-time (OFAT) experiments, factorial experiments offer several advantages:
Factorial designs are more efficient than OFAT experiments. They provide more information at similar or lower cost. They can find optimal conditions faster than OFAT experiments.
When the effect of one factor is different for different levels of another factor, it cannot be detected by an OFAT experiment design. Factorial designs are required to detect such interactions. Use of OFAT when interactions are present can lead to serious misunderstanding of how the response changes with the factors.
Factorial designs allow the effects of a fact
|
https://en.wikipedia.org/wiki/Chinn
|
Chinn is a surname, originating both in England and among overseas Chinese communities.
Origins and statistics
As an English surname, it originated as a nickname for people with prominent chins, from the Middle English word for chin. It is also a spelling, based on the pronunciation in some varieties of Chinese including Hakka, of the surname pronounced Chen in Mandarin. The similarly spelled surname Chin also shares both of these origins.
According to statistics cited by Patrick Hanks, 1,316 people on the island of Great Britain and four on the island of Ireland bore the surname Chinn in 2011. In 1881 there were 1,032 people with the surname in Great Britain, primarily in Warwickshire and Cornwall.
The 2010 United States Census found 6,211 people with the surname Chinn, making it the 5,601st-most-common name in the country. This represented an increase in absolute numbers, but a decrease in relative frequency, from 6,146 (5,220th-most-common) in the 2000 Census. In both censuses, about half of the bearers of the surname identified as White, one-quarter as Asian, and one-fifth as Black.
People
Adrienne Chinn (1960), Canadian author
Alva Chinn (), American model
Andrew Chinn (1915–1996), American artist and art educator of Chinese descent
Anthony Chinn (1930–2000), Guyanese-born British actor
Benjamen Chinn (1921–2009), American photographer
Betty Kwan Chinn (), American philanthropist who works with homeless people
Bob Chinn (film director) (born 1943), American pornographic film director
Bob Chinn (restaurateur) (1923–2022), American restaurateur, owner of Bob Chinn's Crab House
Bobby Chinn (born 1954), New Zealand chef and television presenter
Carl Chinn (born 1956), English historian
Conor Chinn (born 1987), American soccer forward
George M. Chinn (1902–1987), United States Marine Corps colonel and weapons expert
Howard A. Chinn (1906–?), American audio engineer
Ian Chinn (1917–1956), Australian rules footballer
Jeanne Chinn, American actress
Jeremy Chinn (born 1998), American football player
Joseph W. Chinn (1866–1936), American lawyer and judge from Virginia
Julia Chinn
Kathy L. Chinn (), American politician from Missouri
Ken Chinn (1962–2020), Canadian punk rock musician
Lori Tan Chinn (), American actress
Marlin Chinn (born 1970), American basketball coach
Menzie Chinn (born 1961), American economist
May Edward Chinn (1896–1980), African-American woman physician
Mike Chinn (born 1954), English horror, fantasy, and comics writer
Nicky Chinn (born 1945), English songwriter and record producer
Oscar Chinn (), British transport company operator in Congo, involved in the Permanent Court of International Justice's Oscar Chinn Case
Phyllis Chinn (born 1941), American mathematician
Simon Chinn (), British film producer
Thomas Withers Chinn (1791–1852), American politician from Louisiana
Sir Trevor Chinn (born 1935), British businessman and philanthropist
Trevor Chinn (glaciologist) (–2018), New Zealand scientist
Fictional
|
https://en.wikipedia.org/wiki/Subclass%20%28set%20theory%29
|
In set theory and its applications throughout mathematics, a subclass is a class contained in some other class in the same way that a subset is a set contained in some other set.
That is, given classes A and B, A is a subclass of B if and only if every member of A is also a member of B.
If A and B are sets, then of course A is also a subset of B.
In fact, when using a definition of classes that requires them to be first-order definable, it is enough that B be a set; the axiom of specification essentially says that A must then also be a set.
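The definition and the role of specification can be rendered symbolically as follows (the notation is chosen here for illustration, not taken from the article):

```latex
% A is a subclass of B:
A \subseteq B \iff \forall x\,\bigl(x \in A \rightarrow x \in B\bigr).
% If B is a set and A = \{\, x \in B : \varphi(x) \,\} for a first-order formula \varphi,
% then the axiom of specification guarantees that A is a set as well.
```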
As with subsets, the empty set is a subclass of every class, and any class is a subclass of itself. But additionally, every class is a subclass of the class of all sets. Accordingly, the subclass relation makes the collection of all classes into a Boolean lattice, which the subset relation does not do for the collection of all sets. Instead, the collection of all sets is an ideal in the collection of all classes. (Of course, the collection of all classes is something larger than even a class!)
References
Set theory
|
https://en.wikipedia.org/wiki/Beta-dual%20space
|
In functional analysis and related areas of mathematics, the beta-dual or $\beta$-dual is a certain linear subspace of the algebraic dual of a sequence space.
Definition
Given a sequence space $X$, the $\beta$-dual of $X$ is defined as
$$X^{\beta} := \Big\{ y = (y_k) : \sum_{k=1}^{\infty} x_k y_k \text{ converges for all } x = (x_k) \in X \Big\}.$$
If $X$ is an FK-space, then each $y$ in $X^{\beta}$ defines a continuous linear form on $X$,
$$f_y(x) = \sum_{k=1}^{\infty} x_k y_k, \qquad x \in X.$$
Examples
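Some standard beta-duals of classical sequence spaces, stated here as an illustration (these are well-known facts, not necessarily the article's own list):

```latex
% Standard beta-duals (with 1 < p < \infty and 1/p + 1/q = 1):
(c_0)^{\beta} = \ell^{1}, \qquad
(\ell^{1})^{\beta} = \ell^{\infty}, \qquad
(\ell^{p})^{\beta} = \ell^{q}.
```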
Properties
The beta-dual of an FK-space $X$ is a linear subspace of the continuous dual of $X$. If $X$ is an FK-AK space then the beta-dual is linearly isomorphic to the continuous dual.
Functional analysis
|
https://en.wikipedia.org/wiki/Bijection%2C%20injection%20and%20surjection
|
In mathematics, injections, surjections, and bijections are classes of functions distinguished by the manner in which arguments (input expressions from the domain) and images (output expressions from the codomain) are related or mapped to each other.
A function maps elements from its domain to elements in its codomain. Given a function $f \colon X \to Y$:
The function is injective, or one-to-one, if each element of the codomain is mapped to by at most one element of the domain, or equivalently, if distinct elements of the domain map to distinct elements in the codomain. An injective function is also called an injection. Notationally:
$$\forall x, x' \in X, \quad f(x) = f(x') \implies x = x',$$
or, equivalently (using logical transposition),
$$\forall x, x' \in X, \quad x \neq x' \implies f(x) \neq f(x').$$
The function is surjective, or onto, if each element of the codomain is mapped to by at least one element of the domain. That is, the image and the codomain of the function are equal. A surjective function is a surjection. Notationally:
$$\forall y \in Y, \; \exists x \in X \text{ such that } y = f(x).$$
The function is bijective (one-to-one and onto, one-to-one correspondence, or invertible) if each element of the codomain is mapped to by exactly one element of the domain. That is, the function is both injective and surjective. A bijective function is also called a bijection. That is, combining the definitions of injective and surjective,
$$\forall y \in Y, \; \exists! x \in X \text{ such that } y = f(x),$$
where $\exists!$ means "there exists exactly one $x$".
In any case (for any function), the following holds:
$$\forall x \in X, \; \exists! y \in Y \text{ such that } y = f(x).$$
An injective function need not be surjective (not all elements of the codomain may be associated with arguments), and a surjective function need not be injective (some images may be associated with more than one argument). The four possible combinations of injective and surjective features are illustrated in the adjacent diagrams.
Injection
A function is injective (one-to-one) if each possible element of the codomain is mapped to by at most one argument. Equivalently, a function is injective if it maps distinct arguments to distinct images. An injective function is an injection. The formal definition is the following.
The function is injective, if for all $x, x' \in X$,
$$f(x) = f(x') \implies x = x'.$$
The following are some facts related to injections:
A function $f \colon X \to Y$ is injective if and only if $X$ is empty or $f$ is left-invertible; that is, there is a function $g \colon f(X) \to X$ such that $g \circ f =$ identity function on X. Here, $f(X)$ is the image of $f$.
Since every function is surjective when its codomain is restricted to its image, every injection induces a bijection onto its image. More precisely, every injection $f \colon X \to Y$ can be factored as a bijection followed by an inclusion as follows. Let $f_R \colon X \to f(X)$ be $f$ with codomain restricted to its image, and let $i \colon f(X) \to Y$ be the inclusion map from $f(X)$ into $Y$. Then $f = i \circ f_R$. A dual factorization is given for surjections below.
The composition of two injections is again an injection, but if $g \circ f$ is injective, then it can only be concluded that $f$ is injective (see figure).
Every embedding is injective.
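A small computational sketch of these definitions for finite functions (the example function and codomains are arbitrary):

```python
# Check injectivity/surjectivity of a finite function given as a dict
# (domain = keys, codomain passed explicitly).  Illustrative sketch only.
def is_injective(f: dict) -> bool:
    # distinct arguments must map to distinct images
    return len(set(f.values())) == len(f)

def is_surjective(f: dict, codomain: set) -> bool:
    # every element of the codomain must be hit at least once
    return set(f.values()) == codomain

f = {1: 'a', 2: 'b', 3: 'c'}
print(is_injective(f))                          # True
print(is_surjective(f, {'a', 'b', 'c'}))        # True: also bijective
print(is_surjective(f, {'a', 'b', 'c', 'd'}))   # False: 'd' has empty preimage
```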
Surjection
A function is surjective or onto if each element of the codomain is mapped to by at least one element of the domain. In other words, each element of the codomain has a non-empty preimage.
|
https://en.wikipedia.org/wiki/Gyration
|
In geometry, a gyration is a rotation in a discrete subgroup of symmetries of the Euclidean plane such that the subgroup does not also contain a reflection symmetry whose axis passes through the center of rotational symmetry. In the orbifold corresponding to the subgroup, a gyration corresponds to a rotation point that does not lie on a mirror, called a gyration point.
For example, if a sphere rotates about a point that is not its center, the sphere is gyrating; if it rotated about its own center, the rotation would be symmetrical and would not be considered a gyration.
References
Euclidean geometry
|
https://en.wikipedia.org/wiki/PEPA
|
Performance Evaluation Process Algebra (PEPA) is a stochastic process algebra designed for modelling computer and communication systems, introduced by Jane Hillston in the 1990s. The language extends classical process algebras such as Milner's CCS and Hoare's CSP by introducing probabilistic branching and timing of transitions.
Rates are drawn from the exponential distribution and PEPA models are finite-state and so give rise to a stochastic process, specifically a continuous-time Markov chain (CTMC). Thus the language can be used to study quantitative properties of models of computer and communication systems such as throughput, utilisation and response time as well as qualitative properties such as freedom from deadlock. The language is formally defined using a structured operational semantics in the style invented by Gordon Plotkin.
As with most process algebras, PEPA is a parsimonious language. It has only four combinators, prefix, choice, co-operation and hiding. Prefix is the basic building block of a sequential component: the process (a, r).P performs activity a at rate r before evolving to behave as component P. Choice sets up a competition between two possible alternatives: in the process (a, r).P + (b, s).Q either a wins the race (and the process subsequently behaves as P) or b wins the race (and the process subsequently behaves as Q).
The co-operation operator requires the two "co-operands" to join for those activities which are specified in the co-operation set: in the process P <a, b> Q the processes P and Q must co-operate on activities a and b, but any other activities may be performed independently. The reversed compound agent theorem gives a set of sufficient conditions for a co-operation to have a product form stationary distribution.
Finally, the process P/{a} hides the activity a from view (and prevents other processes from joining with it).
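A toy illustration of how a PEPA-style model gives rise to a CTMC (this is a hand-built sketch, not output of the PEPA tools; the component names and rates are invented): a sequential component that alternates between two activities yields a two-state generator matrix whose steady-state distribution gives utilisation-style measures.

```python
import numpy as np

# Toy sketch (not the PEPA toolset): the sequential component
#   P = (work, r).P'   and   P' = (rest, s).P
# gives a two-state CTMC.  Solve pi Q = 0 with sum(pi) = 1 for the steady state.
r, s = 2.0, 1.0                       # activity rates (exponential), arbitrary values
Q = np.array([[-r,  r],
              [ s, -s]])              # infinitesimal generator
A = np.vstack([Q.T, np.ones(2)])      # balance equations plus normalisation
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)                             # steady state: [s/(r+s), r/(r+s)] = [1/3, 2/3]
```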
Syntax
Given a set of action names, the set of PEPA processes is defined by the following BNF grammar:
$$P ::= (a, \lambda).P \;\mid\; P + Q \;\mid\; P \bowtie_L Q \;\mid\; P / L \;\mid\; A$$
The parts of the syntax are, in the order given above
action: the process $(a, \lambda).P$ can perform an action a at rate $\lambda$ and continue as the process P.
choice: the process P + Q may behave as either the process P or the process Q.
cooperation: in $P \bowtie_L Q$ the processes P and Q exist simultaneously and behave independently for actions whose names do not appear in L. For actions whose names appear in L, the action must be carried out jointly and a race condition determines the time this takes.
hiding: the process P/L behaves as usual for action names not in L, and performs a silent action for action names that appear in L.
process identifier: write $A \overset{\text{def}}{=} P$ to use the identifier A to refer to the process P.
Tools
PEPA Plug-in for Eclipse
ipc: the imperial PEPA compiler
GPAnalyser for fluid analysis of massively parallel systems
References
External links
PEPA: Performance Evaluation Process Algebra
Process calculi
Theoretical computer science
|
https://en.wikipedia.org/wiki/Parametric%20derivative
|
In calculus, a parametric derivative is a derivative of a dependent variable with respect to another dependent variable that is taken when both variables depend on an independent third variable, usually thought of as "time" (that is, when the dependent variables are x and y and are given by parametric equations in t).
First derivative
Let $x(t)$ and $y(t)$ be the coordinates of the points of the curve expressed as functions of a variable t:
$$y = y(t), \qquad x = x(t).$$
The first derivative implied by these parametric equations is
$$\frac{dy}{dx} = \frac{dy/dt}{dx/dt} = \frac{\dot{y}(t)}{\dot{x}(t)},$$
where the notation $\dot{x}(t)$ denotes the derivative of x with respect to t. This can be derived using the chain rule for derivatives:
$$\frac{dy}{dt} = \frac{dy}{dx} \cdot \frac{dx}{dt}$$
and dividing both sides by $\frac{dx}{dt}$ to give the equation above.
In general all of these derivatives — dy / dt, dx / dt, and dy / dx — are themselves functions of t and so can be written more explicitly as, for example, $\frac{dy}{dx}(t)$.
Second derivative
The second derivative implied by a parametric equation is given by
$$\frac{d^2y}{dx^2} = \frac{d}{dx}\!\left(\frac{dy}{dx}\right) = \frac{\dfrac{d}{dt}\!\left(\dfrac{dy}{dx}\right)}{\dfrac{dx}{dt}} = \frac{\dot{x}(t)\,\ddot{y}(t) - \dot{y}(t)\,\ddot{x}(t)}{\dot{x}(t)^3}$$
by making use of the quotient rule for derivatives. The latter result is useful in the computation of curvature.
Example
For example, consider the set of functions where:
and
Differentiating both functions with respect to t leads to
and
respectively. Substituting these into the formula for the parametric derivative, we obtain
where and are understood to be functions of t.
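A short symbolic check of the first- and second-derivative formulas, with the curve x(t) = t², y(t) = t³ chosen here purely for illustration (it is not the article's own example):

```python
import sympy as sp

# Verify the parametric derivative formulas for an illustrative curve.
t = sp.symbols('t')
x, y = t**2, t**3

dx, dy = sp.diff(x, t), sp.diff(y, t)
dydx = sp.simplify(dy / dx)                         # dy/dx = 3*t/2
d2ydx2 = sp.simplify(sp.diff(dydx, t) / dx)         # d2y/dx2 = 3/(4*t)
print(dydx, d2ydx2)
```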
See also
Generalizations of the derivative
External links
Differential calculus
|
https://en.wikipedia.org/wiki/Differential-algebraic%20system%20of%20equations
|
In electrical engineering, a differential-algebraic system of equations (DAE) is a system of equations that either contains differential equations and algebraic equations, or is equivalent to such a system.
In mathematics these are examples of differential algebraic varieties and correspond to ideals in differential polynomial rings (see the article on differential algebra for the algebraic setup).
We can write these differential equations for a dependent vector of variables x in one independent variable t, as
$$F\big(\dot{x}(t),\, x(t),\, t\big) = 0.$$
When considering these symbols as functions of a real variable (as is the case in applications in electrical engineering or control theory) we look at $x \colon [a,b] \to \mathbb{R}^{n}$ as a vector of dependent variables, and the system has as many equations, which we consider as functions $F \colon \mathbb{R}^{2n+1} \to \mathbb{R}^{n}$.
They are distinct from ordinary differential equations (ODE) in that a DAE is not completely solvable for the derivatives of all components of the function x because these may not all appear (i.e. some equations are algebraic); technically the distinction between an implicit ODE system [that may be rendered explicit] and a DAE system is that the Jacobian matrix $\partial F(\dot{x}, x, t)/\partial \dot{x}$ is a singular matrix for a DAE system. This distinction between ODEs and DAEs is made because DAEs have different characteristics and are generally more difficult to solve.
In practical terms, the distinction between DAEs and ODEs is often that the solution of a DAE system depends on the derivatives of the input signal and not just the signal itself as in the case of ODEs; this issue is commonly encountered in nonlinear systems with hysteresis, such as the Schmitt trigger.
This difference is more clearly visible if the system may be rewritten so that instead of x we consider a pair $(x, y)$ of vectors of dependent variables and the DAE has the form
$$\dot{x}(t) = f\big(x(t), y(t), t\big),$$
$$0 = g\big(x(t), y(t), t\big),$$
where $x(t) \in \mathbb{R}^{n}$, $y(t) \in \mathbb{R}^{m}$, $f \colon \mathbb{R}^{n+m+1} \to \mathbb{R}^{n}$ and $g \colon \mathbb{R}^{n+m+1} \to \mathbb{R}^{m}$.
A DAE system of this form is called semi-explicit. Every solution of the second half g of the equation defines a unique direction for x via the first half f of the equations, while the direction for y is arbitrary. But not every point (x,y,t) is a solution of g. The variables in x and the first half f of the equations get the attribute differential. The components of y and the second half g of the equations are called the algebraic variables or equations of the system. [The term algebraic in the context of DAEs only means free of derivatives and is not related to (abstract) algebra.]
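A minimal sketch of a semi-explicit DAE (the system below is invented for illustration): when the algebraic constraint g can be solved explicitly for the algebraic variable y, the DAE reduces to an ODE in the differential variable x, and a consistent initial value for y is forced by the constraint.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Invented semi-explicit DAE:
#   x' = f(x, y) = -x + y
#   0  = g(x, y) =  x + y - 1
# Here g can be solved explicitly for the algebraic variable, y = 1 - x,
# so the DAE reduces to an ODE in the differential variable x alone.
def reduced_rhs(t, x):
    y = 1.0 - x                 # enforce the algebraic constraint g(x, y) = 0
    return -x + y

x0 = 0.0                        # free differential initial value
y0 = 1.0 - x0                   # consistent algebraic initial value, forced by g = 0
sol = solve_ivp(reduced_rhs, (0.0, 5.0), [x0])
x_end = sol.y[0, -1]
print(x_end, 1.0 - x_end)       # x -> 0.5 and y -> 0.5 as t grows
```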
The solution of a DAE consists of two parts, first the search for consistent initial values and second the computation of a trajectory. To find consistent initial values it is often necessary to consider the derivatives of some of the component functions of the DAE. The highest order of a derivative that is necessary for this process is called the differentiation index. The equations derived in computing the index and consistent initial values may also be of use in the computation of the trajectory. A semi-explicit DAE system can be converted to an implicit one by decreasing the differenti
|
https://en.wikipedia.org/wiki/Exact%20Equation
|
In mathematics, the term exact equation can refer either of the following:
Exact differential equation
Exact differential form
|
https://en.wikipedia.org/wiki/Nakayama%27s%20lemma
|
In mathematics, more specifically abstract algebra and commutative algebra, Nakayama's lemma — also known as the Krull–Azumaya theorem — governs the interaction between the Jacobson radical of a ring (typically a commutative ring) and its finitely generated modules. Informally, the lemma immediately gives a precise sense in which finitely generated modules over a commutative ring behave like vector spaces over a field. It is an important tool in algebraic geometry, because it allows local data on algebraic varieties, in the form of modules over local rings, to be studied pointwise as vector spaces over the residue field of the ring.
The lemma is named after the Japanese mathematician Tadashi Nakayama and introduced in its present form in , although it was first discovered in the special case of ideals in a commutative ring by Wolfgang Krull and then in general by Goro Azumaya (1951). In the commutative case, the lemma is a simple consequence of a generalized form of the Cayley–Hamilton theorem, an observation made by Michael Atiyah (1969). The special case of the noncommutative version of the lemma for right ideals appears in Nathan Jacobson (1945), and so the noncommutative Nakayama lemma is sometimes known as the Jacobson–Azumaya theorem. The latter has various applications in the theory of Jacobson radicals.
Statement
Let be a commutative ring with identity 1. The following is Nakayama's lemma, as stated in :
Statement 1: Let $I$ be an ideal in $R$, and $M$ a finitely generated module over $R$. If $IM = M$, then there exists an $r \in R$ with $r \equiv 1 \pmod{I}$ such that $rM = 0$.
This is proven below. A useful mnemonic for Nakayama's lemma is "". This summarizes the following alternative formulation:
Statement 2: Let $I$ be an ideal in $R$, and $M$ a finitely generated module over $R$. If $IM = M$, then there exists an $i \in I$ such that $im = m$ for all $m \in M$.
Proof: Take $i = 1 - r$, with $r$ as in Statement 1.
The following corollary is also known as Nakayama's lemma, and it is in this form that it most often appears.
Statement 3: If $M$ is a finitely generated module over $R$, $J(R)$ is the Jacobson radical of $R$, and $J(R)M = M$, then $M = 0$.
Proof: $1 - r$ (with $r$ as in Statement 1) is in the Jacobson radical, so $r$ is invertible; hence $rM = 0$ gives $M = 0$.
More generally, one has that $J(R)M$ is a superfluous submodule of $M$ when $M$ is finitely generated.
Statement 4: If $M$ is a finitely generated module over $R$, $N$ is a submodule of $M$, and $M$ = $N + J(R)M$, then $M$ = $N$.
Proof: Apply Statement 3 to $M/N$.
The following result manifests Nakayama's lemma in terms of generators.
Statement 5: If $M$ is a finitely generated module over $R$ and the images of elements $m_1, \ldots, m_n$ of $M$ in $M/J(R)M$ generate $M/J(R)M$ as an $R$-module, then $m_1, \ldots, m_n$ also generate $M$ as an $R$-module.
Proof: Apply Statement 4 to $N = \sum_i R\,m_i$.
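A standard example, not part of the statements above, showing that the finite-generation hypothesis cannot be dropped:

```latex
% Take R = \mathbb{Z}_{(p)} (the integers localized at a prime p), so that
% J(R) = pR, and take M = \mathbb{Q}.  Then
J(R)\,M \;=\; p\,\mathbb{Q} \;=\; \mathbb{Q} \;=\; M
\quad\text{but}\quad M \neq 0,
% so the conclusion of Statement 3 fails: \mathbb{Q} is not finitely generated
% as an R-module.
```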
If one assumes instead that $R$ is complete and $M$ is separated with respect to the $I$-adic topology for an ideal $I$ in $R$, this last statement holds with $I$ in place of $J(R)$ and without assuming in advance that $M$ is finitely generated. Here separatedness means that the $I$-adic topology satisfies the T1 separation axiom, and is equivalent to $\bigcap_{k=1}^{\infty} I^{k} M = 0$.
Consequences
Local rings
In the special case of a fini
|
https://en.wikipedia.org/wiki/City%20block%20%28disambiguation%29
|
City block may refer to:
City block, an area of a city surrounded by streets
City Block (Judge Dredd), a part of the fictional universe recounted in the Judge Dredd comics
Taxicab geometry or city block distance, a special case of the Minkowski distance
|
https://en.wikipedia.org/wiki/Teichm%C3%BCller%20space
|
In mathematics, the Teichmüller space of a (real) topological (or differential) surface $S$ is a space that parametrizes complex structures on $S$ up to the action of homeomorphisms that are isotopic to the identity homeomorphism. Teichmüller spaces are named after Oswald Teichmüller.
Each point in a Teichmüller space $T(S)$ may be regarded as an isomorphism class of "marked" Riemann surfaces, where a "marking" is an isotopy class of homeomorphisms from $S$ to itself. It can be viewed as a moduli space for marked hyperbolic structures on the surface, and this endows it with a natural topology for which it is homeomorphic to a ball of dimension $6g - 6$ for a surface of genus $g \geq 2$. In this way Teichmüller space can be viewed as the universal covering orbifold of the Riemann moduli space.
The Teichmüller space has a canonical complex manifold structure and a wealth of natural metrics. The study of geometric features of these various structures is an active body of research.
The sub-field of mathematics that studies the Teichmüller space is called Teichmüller theory.
History
Moduli spaces for Riemann surfaces and related Fuchsian groups have been studied since the work of Bernhard Riemann (1826–1866), who knew that $6g - 6$ parameters were needed to describe the variations of complex structures on a surface of genus $g \geq 2$. The early study of Teichmüller space, in the late nineteenth–early twentieth century, was geometric and founded on the interpretation of Riemann surfaces as hyperbolic surfaces. Among the main contributors were Felix Klein, Henri Poincaré, Paul Koebe, Jakob Nielsen, Robert Fricke and Werner Fenchel.
The main contribution of Teichmüller to the study of moduli was the introduction of quasiconformal mappings to the subject. They allow us to give much more depth to the study of moduli spaces by endowing them with additional features that were not present in the previous, more elementary works. After World War II the subject was developed further in this analytic vein, in particular by Lars Ahlfors and Lipman Bers. The theory continues to be active, with numerous studies of the complex structure of Teichmüller space (introduced by Bers).
The geometric vein in the study of Teichmüller space was revived following the work of William Thurston in the late 1970s, who introduced a geometric compactification which he used in his study of the mapping class group of a surface. Other more combinatorial objects associated to this group (in particular the curve complex) have also been related to Teichmüller space, and this is a very active subject of research in geometric group theory.
Definitions
Teichmüller space from complex structures
Let $S$ be an orientable smooth surface (a differentiable manifold of dimension 2). Informally the Teichmüller space of $S$ is the space of Riemann surface structures on $S$ up to isotopy.
Formally it can be defined as follows. Two complex structures $X, Y$ on $S$ are said to be equivalent if there is a diffeomorphism $\varphi \colon S \to S$ such that:
It is holomorphic (the d
|
https://en.wikipedia.org/wiki/Riesz%E2%80%93Fischer%20theorem
|
In mathematics, the Riesz–Fischer theorem in real analysis is any of a number of closely related results concerning the properties of the space L2 of square integrable functions. The theorem was proven independently in 1907 by Frigyes Riesz and Ernst Sigismund Fischer.
For many authors, the Riesz–Fischer theorem refers to the fact that the Lp spaces from Lebesgue integration theory are complete.
Modern forms of the theorem
The most common form of the theorem states that a measurable function on $[-\pi, \pi]$ is square integrable if and only if the corresponding Fourier series converges in the space $L^2$. This means that if the Nth partial sum of the Fourier series corresponding to a square-integrable function f is given by
$$S_N f(x) = \sum_{n=-N}^{N} F_n\, e^{inx},$$
where $F_n$, the nth Fourier coefficient, is given by
$$F_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\, e^{-inx}\, dx,$$
then
$$\lim_{N\to\infty} \left\| S_N f - f \right\|_2 = 0,$$
where $\|\cdot\|_2$ is the $L^2$-norm.
Conversely, if $(c_n)_{n\in\mathbb{Z}}$ is a two-sided sequence of complex numbers (that is, its indices range from negative infinity to positive infinity) such that
$$\sum_{n=-\infty}^{\infty} |c_n|^2 < \infty,$$
then there exists a function f such that f is square-integrable and the values $c_n$ are the Fourier coefficients of f.
This form of the Riesz–Fischer theorem is a stronger form of Bessel's inequality, and can be used to prove Parseval's identity for Fourier series.
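A numerical illustration of the L² convergence asserted above (the test function, a square wave, and the truncation orders are arbitrary choices):

```python
import numpy as np

# L^2 convergence of Fourier partial sums for a square-integrable function:
# the square wave f(x) = sign(x) on [-pi, pi], whose Fourier series uses only
# the odd sine harmonics with coefficients 4/(pi*n).
x = np.linspace(-np.pi, np.pi, 20001)
f = np.sign(x)

def partial_sum(N):
    s = np.zeros_like(x)
    for n in range(1, N + 1, 2):
        s += (4.0 / (np.pi * n)) * np.sin(n * x)
    return s

for N in (5, 50, 500):
    err = np.sqrt(np.trapz((f - partial_sum(N))**2, x))   # L^2 error on [-pi, pi]
    print(N, err)                                          # decreases toward 0
```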
Other results are often called the Riesz–Fischer theorem. Among them is the theorem that, if A is an orthonormal set in a Hilbert space H, and $x \in H$, then
$$\langle x, y\rangle = 0$$
for all but countably many $y \in A$, and
$$\sum_{y\in A} |\langle x, y\rangle|^{2} \le \|x\|^{2}.$$
Furthermore, if A is an orthonormal basis for H and x an arbitrary vector, the series
$$\sum_{y\in A} \langle x, y\rangle\, y$$
converges (unconditionally) to x. This is equivalent to saying that for every $\varepsilon > 0$ there exists a finite set $B_0$ in A such that
$$\Big\| x - \sum_{y\in B} \langle x, y\rangle\, y \Big\| < \varepsilon$$
for every finite set B containing B0. Moreover, the following conditions on the set A are equivalent:
the set A is an orthonormal basis of H
for every vector $x \in H$,
$$\|x\|^{2} = \sum_{y\in A} |\langle x, y\rangle|^{2}.$$
Another result, which also sometimes bears the name of Riesz and Fischer, is the theorem that $L^2$ (or more generally $L^p$) is complete.
Example
The Riesz–Fischer theorem also applies in a more general setting. Let R be an inner product space consisting of functions (for example, measurable functions on the line, analytic functions in the unit disc; in old literature, sometimes called a Euclidean space), and let $\{\varphi_n\}$ be an orthonormal system in R (e.g. Fourier basis, Hermite or Laguerre polynomials, etc. – see orthogonal polynomials), not necessarily complete (in an inner product space, an orthonormal set is complete if no nonzero vector is orthogonal to every vector in the set). The theorem asserts that if the normed space R is complete (thus R is a Hilbert space), then any sequence $(c_n)$ that has finite $\ell^2$ norm defines a function f in the space R.
The function f is defined by
$$f = \lim_{n\to\infty} \sum_{k=0}^{n} c_k \varphi_k,$$
the limit being taken in the R-norm.
Combined with Bessel's inequality, we know the converse as well: if f is a function in R, then the Fourier coefficients $(f, \varphi_n)$ have finite $\ell^2$ norm.
History: the Note of Riesz and the Note of Fischer (1907)
In his Note, Riesz states the following result (translated here to modern language at one point: the notation $L^2([a,b])$ was not used in 1907).
Let be an ortho
|
https://en.wikipedia.org/wiki/Dobi%C5%84ski%27s%20formula
|
In combinatorial mathematics, Dobiński's formula states that the n-th Bell number Bn (i.e., the number of partitions of a set of size n) equals
$$B_n = \frac{1}{e}\sum_{k=0}^{\infty} \frac{k^{n}}{k!},$$
where $e$ denotes Euler's number.
The formula is named after G. Dobiński, who published it in 1877.
Probabilistic content
In the setting of probability theory, Dobiński's formula represents the nth moment of the Poisson distribution with mean 1. Sometimes Dobiński's formula is stated as saying that the number of partitions of a set of size n equals the nth moment of that distribution.
Reduced formula
The computation of the sum of Dobiński's series can be reduced to a finite sum of terms, taking into account the information that is an integer. Precisely one has, for any integer
provided (a condition that of course implies , but that is satisfied by some of size ). Indeed, since , one has
Therefore for all
so that the tail is dominated by the series , which implies , whence the reduced formula.
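A quick numerical check of Dobiński's formula (the truncation length K is an ad hoc choice, large enough for the small n used here): compare a truncated Dobiński sum with the Bell numbers computed from the standard binomial recurrence.

```python
import math

# Bell numbers two ways: (1) truncated Dobinski sum B_n ~ (1/e) * sum k^n / k!,
# (2) the recurrence B_{m+1} = sum_k C(m, k) * B_k.
def bell_dobinski(n, K=60):
    return round(math.exp(-1) * sum(k**n / math.factorial(k) for k in range(K)))

def bell_recurrence(n):
    B = [1]                                    # B_0 = 1
    for m in range(n):
        B.append(sum(math.comb(m, k) * B[k] for k in range(m + 1)))
    return B[n]

for n in range(8):
    print(n, bell_dobinski(n), bell_recurrence(n))   # 1, 1, 2, 5, 15, 52, 203, 877
```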
Generalization
Dobiński's formula can be seen as a particular case, for $x = 1$, of the formula for the Touchard polynomials,
$$T_n(x) = e^{-x}\sum_{k=0}^{\infty} \frac{x^{k} k^{n}}{k!},$$
since $B_n = T_n(1)$.
Proof
One proof relies on a formula for the generating function for Bell numbers,
$$e^{e^{x} - 1} = \sum_{n=0}^{\infty} \frac{B_n}{n!}\, x^{n}.$$
The power series for the exponential gives
$$e^{e^{x}} = \sum_{k=0}^{\infty} \frac{e^{kx}}{k!} = \sum_{k=0}^{\infty} \frac{1}{k!} \sum_{n=0}^{\infty} \frac{(kx)^{n}}{n!},$$
so
$$e^{e^{x} - 1} = \frac{1}{e} \sum_{k=0}^{\infty} \frac{1}{k!} \sum_{n=0}^{\infty} \frac{(kx)^{n}}{n!} = \sum_{n=0}^{\infty} \left( \frac{1}{e} \sum_{k=0}^{\infty} \frac{k^{n}}{k!} \right) \frac{x^{n}}{n!}.$$
The coefficient of $x^{n}$ in this power series must be $B_n / n!$, so
$$B_n = \frac{1}{e} \sum_{k=0}^{\infty} \frac{k^{n}}{k!}.$$
Another style of proof was given by Rota. Recall that if x and n are nonnegative integers then the number of one-to-one functions that map a size-n set into a size-x set is the falling factorial
$$(x)_n = x(x-1)(x-2)\cdots(x-n+1).$$
Let ƒ be any function from a size-n set A into a size-x set B. For any b ∈ B, let ƒ −1(b) = {a ∈ A : ƒ(a) = b}. Then {ƒ −1(b) : b ∈ B, ƒ −1(b) ≠ ∅} is a partition of A. Rota calls this partition the "kernel" of the function ƒ. Any function from A into B factors into
one function that maps a member of A to the element of the kernel to which it belongs, and
another function, which is necessarily one-to-one, that maps the kernel into B.
The first of these two factors is completely determined by the partition π that is the kernel. The number of one-to-one functions from π into B is $(x)_{|\pi|}$, where $|\pi|$ is the number of parts in the partition π. Thus the total number of functions from a size-n set A into a size-x set B is
$$\sum_{\pi} (x)_{|\pi|},$$
the index π running through the set of all partitions of A. On the other hand, the number of functions from A into B is clearly $x^{n}$. Therefore, we have
$$x^{n} = \sum_{\pi} (x)_{|\pi|}.$$
Rota continues the proof using linear algebra, but it is enlightening to introduce a Poisson-distributed random variable X with mean 1. The equation above implies that the nth moment of this random variable is
$$E(X^{n}) = \sum_{\pi} E\big((X)_{|\pi|}\big),$$
where E stands for expected value. But we shall show that all the quantities E((X)k) equal 1. It follows that
$$E(X^{n}) = \sum_{\pi} 1,$$
and this is just the number of partitions of the set A.
The quantity E((X)k) is called the kth factorial moment of the random variable X. To show that this equals 1 for all k when X is a Poisson-distributed random variable with mean 1, recall that this random variable assumes each integer value with p
|
https://en.wikipedia.org/wiki/CAL%20%28programming%20language%29
|
CAL, short for Conversational Algebraic Language, was a programming language and system designed and developed by Butler Lampson at Berkeley in 1967 for the SDS 940 mainframe computer. CAL is a version of the seminal JOSS language with several cleanups and new features to take advantage of the SDS platform.
The Berkeley SDS was used for the development of the Tymshare commercial time-sharing platform and an improved version of CAL was offered as a programming environment to its customers in 1969. Although CAL saw "almost no use", it had a lasting impact by influencing the design of Tymshare SUPER BASIC which copied a number of its features. Some of those features, in turn, appeared in BASIC-PLUS on the PDP-11, which is the direct ancestor of Microsoft BASIC.
Description
Basic concepts
JOSS had been designed to be used by non-programmers in the US Air Force and within Rand Corporation, and to aid with that, Rand designed custom computer terminals that were easier to set up and use. These terminals, based on the IBM Selectric typewriter, also included a custom character set that implemented common mathematical symbols.
To a large degree, CAL was a version of JOSS that replaced these sorts of customizations with more common solutions so that programs could run on common terminals. The other noticeable differences were that CAL was all upper-case, as opposed to sentence casing in JOSS, and it did not require a period at the end of the line. The commands were otherwise almost identical and the overall structure and syntax were the same.
As with JOSS, CAL had an interactive user interface that allowed the user to type in statements in "direct mode" or programs to be run in "indirect mode". In BASIC, the former is more commonly referred to as "immediate mode". Both CAL and JOSS used a two-part line number, known as the part and step, separated by a period; for instance, 1.100 for part 1, step 100. Parts were generally used to group related statements into subroutines. In CAL, the part number could be between 0 and 999999, and the step from 0 to 99999.
There were two main versions of CAL, released in 1967 and 1969. The following description will be based primarily on the former version unless otherwise noted.
Jumps and subroutines
As in JOSS, CAL supported the command to branch to a provided part or step, and for subroutine calls, either to perform the entire part or to run a single step and then return. The latter syntax was useful when there were many small subroutines, as they could be implemented on a single line without an associated return statement or similar concept.
Conditional branching and assignment
One of the more notable syntactic features of JOSS was the concept of "statement modifiers" which controlled the operation of other statements. JOSS used this for conditional branching.
In most languages, one would write something to the effect of "If this expression is true, then do this...". In JOSS, this order was reversed,
|
https://en.wikipedia.org/wiki/Complete%20Boolean%20algebra
|
In mathematics, a complete Boolean algebra is a Boolean algebra in which every subset has a supremum (least upper bound). Complete Boolean algebras are used to construct Boolean-valued models of set theory in the theory of forcing. Every Boolean algebra A has an essentially unique completion, which is a complete Boolean algebra containing A such that every element is the supremum of some subset of A. As a partially ordered set, this completion of A is the Dedekind–MacNeille completion.
More generally, if κ is a cardinal then a Boolean algebra is called κ-complete if every subset of cardinality less than κ has a supremum.
Examples
Complete Boolean algebras
Every finite Boolean algebra is complete.
The algebra of subsets of a given set is a complete Boolean algebra.
The regular open sets of any topological space form a complete Boolean algebra. This example is of particular importance because every forcing poset can be considered as a topological space (a base for the topology consisting of sets that are the set of all elements less than or equal to a given element). The corresponding regular open algebra can be used to form Boolean-valued models which are then equivalent to generic extensions by the given forcing poset.
The algebra of all measurable subsets of a σ-finite measure space, modulo null sets, is a complete Boolean algebra. When the measure space is the unit interval with the σ-algebra of Lebesgue measurable sets, the Boolean algebra is called the random algebra.
The Boolean algebra of all Baire sets modulo meager sets in a topological space with a countable base is complete; when the topological space is the real numbers the algebra is sometimes called the Cantor algebra.
Non-complete Boolean algebras
The algebra of all subsets of an infinite set that are finite or have finite complement is a Boolean algebra but is not complete.
The algebra of all measurable subsets of a measure space is an ℵ1-complete Boolean algebra, but is not usually complete.
Another example of a Boolean algebra that is not complete is the Boolean algebra P(ω) of all sets of natural numbers, quotiented out by the ideal Fin of finite subsets. The resulting object, denoted P(ω)/Fin, consists of all equivalence classes of sets of naturals, where the relevant equivalence relation is that two sets of naturals are equivalent if their symmetric difference is finite. The Boolean operations are defined analogously, for example, if A and B are two equivalence classes in P(ω)/Fin, we define $A \vee B$ to be the equivalence class of $a \cup b$, where a and b are some (any) elements of A and B respectively.
Now let a0, a1, … be pairwise disjoint infinite sets of naturals, and let A0, A1, … be their corresponding equivalence classes in P(ω)/Fin. Then given any upper bound X of A0, A1, … in P(ω)/Fin, we can find a lesser upper bound, by removing from a representative for X one element of each an. Therefore the An have no supremum.
Properties of complete Boolean algebras
Every subset
|
https://en.wikipedia.org/wiki/Power%20closed
|
In mathematics a p-group $G$ is called power closed if for every section $H$ of $G$ the product of $p^{k}$-th powers is again a $p^{k}$-th power.
Regular p-groups are an example of power closed groups. On the other hand, powerful p-groups, for which the product of $p^{k}$-th powers is again a $p^{k}$-th power, are not power closed, as this property does not hold for all sections of powerful p-groups.
The power closed 2-groups of exponent at least eight are described in .
References
Group theory
P-groups
|
https://en.wikipedia.org/wiki/Von%20Mises%20distribution
|
In probability theory and directional statistics, the von Mises distribution (also known as the circular normal distribution or Tikhonov distribution) is a continuous probability distribution on the circle. It is a close approximation to the wrapped normal distribution, which is the circular analogue of the normal distribution. A freely diffusing angle on a circle is a wrapped normally distributed random variable with an unwrapped variance that grows linearly in time. On the other hand, the von Mises distribution is the stationary distribution of a drift and diffusion process on the circle in a harmonic potential, i.e. with a preferred orientation. The von Mises distribution is the maximum entropy distribution for circular data when the real and imaginary parts of the first circular moment are specified. The von Mises distribution is a special case of the von Mises–Fisher distribution on the N-dimensional sphere.
Definition
The von Mises probability density function for the angle x is given by:
$$f(x \mid \mu, \kappa) = \frac{e^{\kappa \cos(x - \mu)}}{2\pi I_0(\kappa)},$$
where I0(κ) is the modified Bessel function of the first kind of order 0, with this scaling constant chosen so that the distribution sums to unity:
$$\int_{-\pi}^{\pi} e^{\kappa \cos x}\, dx = 2\pi I_0(\kappa).$$
The parameters μ and 1/κ are analogous to μ and σ² (the mean and variance) in the normal distribution:
μ is a measure of location (the distribution is clustered around μ), and
κ is a measure of concentration (a reciprocal measure of dispersion, so 1/κ is analogous to σ²).
If κ is zero, the distribution is uniform, and for small κ, it is close to uniform.
If κ is large, the distribution becomes very concentrated about the angle μ with κ being a measure of the concentration. In fact, as κ increases, the distribution approaches a normal distribution in x with mean μ and variance 1/κ.
The probability density can be expressed as a series of Bessel functions
$$f(x \mid \mu, \kappa) = \frac{1}{2\pi}\left(1 + \frac{2}{I_0(\kappa)} \sum_{j=1}^{\infty} I_j(\kappa)\, \cos\big(j(x - \mu)\big)\right),$$
where Ij(x) is the modified Bessel function of order j.
The cumulative distribution function is not analytic and is best found by integrating the above series. The indefinite integral of the probability density is:
$$\Phi(x \mid \mu, \kappa) = \frac{1}{2\pi}\left(x + \frac{2}{I_0(\kappa)} \sum_{j=1}^{\infty} \frac{I_j(\kappa)\, \sin\big(j(x - \mu)\big)}{j}\right).$$
The cumulative distribution function will be a function of the lower limit of
integration x0:
$$F(x \mid \mu, \kappa) = \Phi(x \mid \mu, \kappa) - \Phi(x_0 \mid \mu, \kappa).$$
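A brief numerical sketch using SciPy's implementation of the distribution (the parameter values are arbitrary): evaluate the density and confirm that it integrates to one over a full circle.

```python
import numpy as np
from scipy.stats import vonmises

# Evaluate the von Mises density for a chosen mean direction and concentration
# and check numerically that it integrates to 1 over one full circle.
mu, kappa = 0.5, 4.0
x = np.linspace(-np.pi, np.pi, 10001)
pdf = vonmises.pdf(x, kappa, loc=mu)

print(np.trapz(pdf, x))          # ~ 1.0 (normalization over the circle)
print(x[np.argmax(pdf)])         # density peaks near the mean direction mu
```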
Moments
The moments of the von Mises distribution are usually calculated as the moments of the complex exponential z = e^{ix} rather than the angle x itself. These moments are referred to as circular moments. The variance calculated from these moments is referred to as the circular variance. The one exception to this is that the "mean" usually refers to the argument of the complex mean.
The nth raw moment of z is:
$$m_n = \langle z^{n} \rangle = \int_{\Gamma} z^{n}\, f(x \mid \mu, \kappa)\, dx = \frac{I_n(\kappa)}{I_0(\kappa)}\, e^{i n \mu},$$
where the integral is over any interval Γ of length 2π. In calculating the above integral, we use the fact that z^{n} = cos(nx) + i sin(nx) and the Bessel function identity:
$$I_n(\kappa) = \frac{1}{\pi}\int_0^{\pi} e^{\kappa \cos x}\, \cos(n x)\, dx.$$
The mean of the complex exponential z is then just
$$m_1 = \frac{I_1(\kappa)}{I_0(\kappa)}\, e^{i\mu},$$
and the circular mean value of the angle x is then taken to be the argument μ. This is the expected or preferred direction of the angular random variables. The variance of z, or the circular variance of x is:
$$\operatorname{var}(x) = 1 - E\big[\cos(x - \mu)\big] = 1 - \frac{I_1(\kappa)}{I_0(\kappa)}.$$
Limiting behavi
|
https://en.wikipedia.org/wiki/Powerful%20p-group
|
In mathematics, in the field of group theory, especially in the study of p-groups and pro-p-groups, the concept of powerful p-groups plays an important role. They were introduced in , where a number of applications are given, including results on Schur multipliers. Powerful p-groups are used in the study of automorphisms of p-groups, the solution of the restricted Burnside problem, the classification of finite p-groups via the coclass conjectures, and provide an excellent method of understanding analytic pro-p-groups.
Formal definition
A finite p-group $G$ is called powerful if the commutator subgroup $[G,G]$ is contained in the subgroup $G^{p} = \langle g^{p} \mid g \in G \rangle$ for odd $p$, or if $[G,G]$ is contained in the subgroup $G^{4}$ for $p = 2$.
Properties of powerful p-groups
Powerful p-groups have many properties similar to abelian groups, and thus provide a good basis for studying p-groups. Every finite p-group can be expressed as a section of a powerful p-group.
Powerful p-groups are also useful in the study of pro-p groups as they provide a simple means for characterising p-adic analytic groups (groups that are manifolds over the p-adic numbers): A finitely generated pro-p group is p-adic analytic if and only if it contains an open normal subgroup that is powerful: this is a special case of a deep result of Michel Lazard (1965).
Some properties similar to abelian p-groups are: if $G$ is a powerful p-group then:
The Frattini subgroup $\Phi(G)$ of $G$ has the property $\Phi(G) = G^{p}$.
$G^{p^{k}} = \{ g^{p^{k}} \mid g \in G \}$ for all $k$. That is, the group generated by $p^{k}$-th powers is precisely the set of $p^{k}$-th powers.
If $G = \langle g_1, \ldots, g_d \rangle$ then $G^{p^{k}} = \langle g_1^{p^{k}}, \ldots, g_d^{p^{k}} \rangle$ for all $k$.
The $k$th entry of the lower central series of $G$ has the property $\gamma_k(G) \leq G^{p^{k-1}}$ for all $k$.
Every quotient group of a powerful p-group is powerful.
The Prüfer rank of $G$ is equal to the minimal number of generators of $G$.
Some less abelian-like properties are: if $G$ is a powerful p-group then:
$G^{p}$ is powerful.
Subgroups of $G$ are not necessarily powerful.
References
Lazard, Michel (1965), Groupes analytiques p-adiques, Publ. Math. IHÉS 26 (1965), 389–603.
P-groups
Properties of groups
|
https://en.wikipedia.org/wiki/Cartan%27s%20equivalence%20method
|
In mathematics, Cartan's equivalence method is a technique in differential geometry for determining whether two geometrical structures are the same up to a diffeomorphism. For example, if M and N are two Riemannian manifolds with metrics g and h, respectively,
when is there a diffeomorphism
$$\phi \colon M \to N$$
such that
$$\phi^{*} h = g\,?$$
Although the answer to this particular question was known in dimension 2 to Gauss and in higher dimensions to Christoffel and perhaps Riemann as well, Élie Cartan and his intellectual heirs developed a technique for answering similar questions for radically different geometric structures. (For example see the Cartan–Karlhede algorithm.)
Cartan successfully applied his equivalence method to many such structures, including projective structures, CR structures, and complex structures, as well as ostensibly non-geometrical structures such as the equivalence of Lagrangians and ordinary differential equations. (His techniques were later developed more fully by many others, such as D. C. Spencer and Shiing-Shen Chern.)
The equivalence method is an essentially algorithmic procedure for determining when two geometric structures are identical. For Cartan, the primary geometrical information was expressed in a coframe or collection of coframes on a differentiable manifold. See method of moving frames.
Overview
Specifically, suppose that M and N are a pair of manifolds each carrying a G-structure for a structure group G. This amounts to giving a special class of coframes on M and N. Cartan's method addresses the question of whether there exists a local diffeomorphism φ:M→N under which the G-structure on N pulls back to the given G-structure on M. An equivalence problem has been "solved" if one can give a complete set of structural invariants for the G-structure: meaning that such a diffeomorphism exists if and only if all of the structural invariants agree in a suitably defined sense.
Explicitly, local systems of one-forms θi and γi are given on M and N, respectively, which span the respective cotangent bundles (i.e., are coframes). The question is whether there is a local diffeomorphism φ:M→N such that the pullback of the coframe on N satisfies
$$\phi^{*}\gamma^{i} = g^{i}_{\;j}\,\theta^{j}, \qquad (1)$$
where the coefficient g is a function on M taking values in the Lie group G. For example, if M and N are Riemannian manifolds, then G=O(n) is the orthogonal group and θi and γi are orthonormal coframes of M and N respectively. The question of whether two Riemannian manifolds are isometric is then a question of whether there exists a diffeomorphism φ satisfying (1).
The first step in the Cartan method is to express the pullback relation (1) in as invariant a way as possible through the use of a "prolongation". The most economical way to do this is to use a G-subbundle PM of the principal bundle of linear coframes LM, although this approach can lead to unnecessary complications when performing actual calculations. In particular, later on this article uses a different approach. But for the purp
|
https://en.wikipedia.org/wiki/Strike%20rate
|
Strike rate refers to two different statistics in the sport of cricket. Batting strike rate is a measure of how quickly a batter achieves the primary goal of batting, namely scoring runs, measured in runs per 100 balls; higher is better. Bowling strike rate is a measure of how quickly a bowler achieves the primary goal of bowling, namely taking wickets (i.e. getting batters out), measured in balls per wicket; lower is better. For bowlers, economy rate is a more frequently discussed statistic.
Both strike rates are relatively new statistics, having only been invented and considered of importance after the introduction of One Day International cricket in the 1970s.
Batting strike rate
Batting strike rate (s/r) is defined for a batter as the average number of runs scored per 100 balls faced. The higher the strike rate, the more effective a batter is at scoring quickly.
In Test cricket, a batter's strike rate is of secondary importance to ability to score runs without getting out. This means a Test batter's most important statistic is generally considered to be batting average, rather than strike rate.
In limited overs cricket, strike rates are of considerably more importance. Since each team only faces a limited number of balls in an innings, the faster a batter scores, the more runs the team will be able to accumulate. Strike rates of over 150 are becoming common in Twenty20 cricket. Strike rate is probably considered by most as the key factor for a batter in one-day cricket. Accordingly, batters with higher strike rates, especially in Twenty20 matches, are more valued than those with lower strike rates. Strike rate is also used to compare a batter's ability to score runs against differing forms of bowling (e.g. spin bowling, fast bowling), often giving an indication to the bowling team as to how best to limit a batter's ability to score.
Highest career strike rate (T20I)
Highest career strike rate (ODI)
Bowling strike rate
Bowling strike rate is defined for a bowler as the average number of balls bowled per wicket taken. The lower the strike rate, the more effective a bowler is at taking wickets quickly.
Although introduced as a statistic complementary to the batting strike rate during the ascension of one-day cricket in the 1980s, bowling strike rates are arguably of more importance in Test cricket than One-day Internationals. This is because the primary goal of a bowler in Test cricket is to take wickets, whereas in a one-day match it is often sufficient to bowl economically - giving away as few runs as possible even if this means taking fewer wickets.
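Both statistics are simple ratios; a minimal sketch with made-up figures:

```python
# Batting strike rate: runs scored per 100 balls faced (higher is better).
# Bowling strike rate: balls bowled per wicket taken (lower is better).
# The figures below are invented for illustration.
def batting_strike_rate(runs: int, balls_faced: int) -> float:
    return 100.0 * runs / balls_faced

def bowling_strike_rate(balls_bowled: int, wickets: int) -> float:
    return balls_bowled / wickets

print(batting_strike_rate(75, 50))    # 150.0 runs per 100 balls
print(bowling_strike_rate(300, 12))   # 25.0 balls per wicket
```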
Best career strike rate (ODI and T20I)
Best career strike rate (Tests)
References
Cricket terminology
Cricket records and statistics
Rates
|
https://en.wikipedia.org/wiki/Pro-p%20group
|
In mathematics, a pro-p group (for some prime number p) is a profinite group such that for any open normal subgroup the quotient group is a p-group. Note that, as profinite groups are compact, the open subgroups are exactly the closed subgroups of finite index, so that the discrete quotient group is always finite.
Alternatively, one can define a pro-p group to be the inverse limit of an inverse system of discrete finite p-groups.
The best-understood (and historically most important) class of pro-p groups is the p-adic analytic groups: groups with the structure of an analytic manifold over the p-adic numbers such that group multiplication and inversion are both analytic functions.
The work of Lubotzky and Mann, combined with Michel Lazard's solution to Hilbert's fifth problem over the p-adic numbers, shows that a pro-p group is p-adic analytic if and only if it has finite rank, i.e. there exists a positive integer r such that every closed subgroup has a topological generating set with no more than r elements. More generally it was shown that a finitely generated profinite group is a compact p-adic Lie group if and only if it has an open subgroup that is a uniformly powerful pro-p-group.
The Coclass Theorems have been proved in 1994 by A. Shalev and independently by C. R. Leedham-Green. Theorem D is one of these theorems and asserts that, for any prime number p and any positive integer r, there exist only finitely many pro-p groups of coclass r. This finiteness result is fundamental for the classification of finite p-groups by means of directed coclass graphs.
Examples
The canonical example is the p-adic integers
The group of invertible n by n matrices over the p-adic integers has an open subgroup U consisting of all matrices congruent to the identity matrix modulo p. This U is a pro-p group. In fact the p-adic analytic groups mentioned above can all be found as closed subgroups of the group of invertible n by n matrices over the p-adic integers, for some integer n.
Any finite p-group is also a pro-p-group (with respect to the constant inverse system).
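To make the inverse-limit description of the canonical example concrete, the following Python sketch (the prime, the element and the truncation depth are arbitrary choices for illustration) stores a p-adic integer as its residues modulo p, p^2, p^3, ... and checks the compatibility condition that defines the inverse limit.

    # Sketch: the p-adic integers as the inverse limit of the finite p-groups
    # Z/p^n Z.  An element is stored as its residues mod p^n for n = 1..depth;
    # compatibility means each residue reduces to the previous one.

    def padic_residues(x: int, p: int, depth: int) -> list[int]:
        """Residues of the ordinary integer x modulo p, p^2, ..., p^depth."""
        return [x % p**n for n in range(1, depth + 1)]

    def is_compatible(residues: list[int], p: int) -> bool:
        """Inverse-limit condition: the (n+1)-st residue reduces to the n-th mod p^n."""
        return all(residues[n + 1] % p**(n + 1) == residues[n]
                   for n in range(len(residues) - 1))

    seq = padic_residues(-1, p=3, depth=5)    # -1 as a 3-adic integer: 2, 8, 26, 80, 242
    print(seq, is_compatible(seq, p=3))       # True: the residues are compatible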
Fact: A finite homomorphic image of a pro-p group is a p-group. (due to J.P. Serre)
See also
Residual property (mathematics)
Profinite group (See Property or Fact 5)
References
Infinite group theory
Topological groups
P-groups
Properties of groups
|
https://en.wikipedia.org/wiki/Municipality%20of%20the%20District%20of%20East%20Hants
|
East Hants, officially named the Municipality of the District of East Hants, is a district municipality in Hants County, Nova Scotia, Canada. Statistics Canada classifies the district municipality as a municipal district.
With its administrative seat in Elmsdale, the district municipality occupies the eastern half of Hants County from the Minas Basin to the boundary with Halifax County, and shares its western boundary with the West Hants Regional Municipality. It was formed in 1861 from the former townships of Uniacke, Rawdon, Douglas, Walton, Shubenacadie and Maitland. Its most settled area is in the Shubenacadie Valley.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, the Municipality of the District of East Hants had a population of living in of its total private dwellings, a change of from its 2016 population of . With a land area of , it had a population density of in 2021.
Public works
The Public Works division operates two water utility distribution sites and three sewage collection and treatment systems for communities in the serviced areas adjacent to Highway 102 and along the Shubenacadie River. The division also operates an engineered spring which draws additional water from Grand Lake to the Shubenacadie River during low water level events.
Drinking water is distributed across 71.0 kilometers of main distribution lines, and wastewater is collected through 80.5 kilometers of wastewater collection mains.
The Environmental Services division works closely with Public Works. This division monitors and reviews data to ensure compliance with operating approvals. Environmental Services also runs a watershed protection program focused on building awareness of issues affecting watersheds of interest to the municipality.
Notable people
Hip hop artist Buck 65 is from Mount Uniacke, East Hants. Born Richard Terfry, he is also a radio host on CBC Radio.
Luke Boyd, international recording artist better known as Classified, was born in Enfield, East Hants.
Communities
Education
Riverside Educational Centre middle school is located in Milford Station
Elmsdale District School is located in Elmsdale
Kennetcook District Elementary is located in Kennetcook
Uniacke District School is located in Mount Uniacke
Hants East Rural High School is located in Milford Station
Hants North Rural High School is located in Kennetcook
Cobequid District School is located in Noel
Rawdon District School is located in Rawdon
See also
List of municipalities in Nova Scotia
References
External links
East Hants
East Hants
1879 establishments in Canada
|
https://en.wikipedia.org/wiki/Rice%20distribution
|
In probability theory, the Rice distribution or Rician distribution (or, less commonly, Ricean distribution) is the probability distribution of the magnitude of a circularly-symmetric bivariate normal random variable, possibly with non-zero mean (noncentral). It was named after Stephen O. Rice (1907–1986).
Characterization
The probability density function is
f(x ∣ ν, σ) = (x / σ²) exp(−(x² + ν²) / (2σ²)) I0(xν / σ²) for x ≥ 0,
where I0(z) is the modified Bessel function of the first kind with order zero.
In the context of Rician fading, the distribution is often also rewritten using the shape parameter K = ν² / (2σ²), defined as the ratio of the power contribution of the line-of-sight path to that of the remaining multipaths, and the scale parameter Ω = ν² + 2σ², defined as the total power received in all paths.
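The characterization as the magnitude of a bivariate normal translates directly into a sampling recipe. A minimal sketch, assuming NumPy and SciPy are available (the values of ν and σ, the sample size and the seed are arbitrary): draw the two Gaussian components and compare the empirical mean of the magnitudes with scipy.stats.rice, whose shape parameter is b = ν/σ and whose scale is σ.

    import numpy as np
    from scipy import stats

    nu, sigma = 2.0, 1.0                      # arbitrary example parameters
    rng = np.random.default_rng(0)

    # Magnitude of a bivariate normal with mean (nu, 0) and i.i.d. N(0, sigma^2) components.
    x = rng.normal(nu, sigma, size=100_000)
    y = rng.normal(0.0, sigma, size=100_000)
    r = np.hypot(x, y)                        # Rice(nu, sigma) samples

    # SciPy parameterizes the same distribution via b = nu/sigma and scale = sigma.
    dist = stats.rice(b=nu / sigma, scale=sigma)
    print(r.mean(), dist.mean())              # the two means should agree closely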
The characteristic function of the Rice distribution is given as:
where is one of Horn's confluent hypergeometric functions with two variables and convergent for all finite values of and . It is given by:
where
is the rising factorial.
Properties
Moments
The first few raw moments are:
and, in general, the raw moments are given by
Here Lq(x) denotes a Laguerre polynomial:
where is the confluent hypergeometric function of the first kind. When k is even, the raw moments become simple polynomials in σ and ν, as in the examples above.
For the case q = 1/2:
The second central moment, the variance, is
Note that indicates the square of the Laguerre polynomial , not the generalized Laguerre polynomial
Related distributions
R ∼ Rice(|ν|, σ) if R = √(X² + Y²), where X ∼ N(ν cos θ, σ²) and Y ∼ N(ν sin θ, σ²) are statistically independent normal random variables and θ is any real number.
Another case where comes from the following steps:
Generate having a Poisson distribution with parameter (also mean, for a Poisson)
Generate having a chi-squared distribution with degrees of freedom.
Set
If then has a noncentral chi-squared distribution with two degrees of freedom and noncentrality parameter .
If then has a noncentral chi distribution with two degrees of freedom and noncentrality parameter .
If ν = 0, i.e., for the special case of the Rice distribution given by ν = 0, the distribution becomes the Rayleigh distribution, for which the variance is σ²(4 − π)/2.
If then has an exponential distribution.
If then has an Inverse Rician distribution.
The folded normal distribution is the univariate special case of the Rice distribution.
Limiting cases
For large values of the argument, the Laguerre polynomial becomes
It is seen that as ν becomes large or σ becomes small the mean becomes ν and the variance becomes σ2.
The transition to a Gaussian approximation proceeds as follows. From Bessel function theory we have
so, in the large region, an asymptotic expansion of the Rician distribution:
Moreover, when the density is concentrated around and because of the Gaussian exponent, we can also write and finally get the Normal approximation
The approximation becomes usable for
Parameter estimation (the Koay inversion technique)
There are three different methods for estimating the parameters of the
|
https://en.wikipedia.org/wiki/Peter%20Woit
|
Peter Woit (; born September 11, 1957) is an American theoretical physicist. He is a senior lecturer in the Mathematics department at Columbia University. Woit, a critic of string theory, has published a book Not Even Wrong (2006) and writes a blog of the same name.
Career
Woit graduated in 1979 from Harvard University with bachelor's and master's degrees in physics. He obtained his PhD in particle physics from Princeton University in 1985, followed by postdoctoral work in theoretical physics at State University of New York at Stony Brook and mathematics at the Mathematical Sciences Research Institute (MSRI) in Berkeley. He spent four years as an assistant professor at Columbia. He now holds a permanent position in the mathematics department, as senior lecturer and as departmental computer administrator.
Woit is a U.S. citizen and also has a Latvian passport. His father was born in Riga and became exiled with his own parents at the beginning of the Soviet occupation of Latvia.
Criticism of string theory
He is critical of string theory on the grounds that it lacks testable predictions and is promoted with public money despite its failures so far, and has authored both scientific papers and popular polemics on this topic. His writings claim that excessive media attention and funding of this one particular mainstream endeavour, which he considers speculative, risks undermining public faith in the freedom of scientific research. His moderated weblog on string theory and other topics is titled "Not Even Wrong", a derogatory term for scientifically useless arguments coined by Wolfgang Pauli.
"The String Wars"
A discussion in 2006 took place between University of California, Santa Barbara physicists at the Kavli Institute for Theoretical Physics and science journalist George Johnson regarding the controversy caused by the books of Lee Smolin (The Trouble with Physics) and Woit (Not Even Wrong). The meeting was titled "The String Wars".
Selected publications
1988, "Supersymmetric quantum mechanics, spinors and the standard model," Nuclear Physics B303: 329-42.
1990, "Topological quantum theories and representation theory" in Ling-Lie Chau and Werner Nahm, eds., Differential Geometric Methods in Theoretical Physics: Physics and Geometry, Proceedings of NATO Advanced Research Workshop. Plenum Press: 533-45.
2006. Not Even Wrong: The Failure of String Theory & the Continuing Challenge to Unify the Laws of Physics. Jonathan Cape; Basic Books.
2017. Quantum Theory, Groups and Representations. Springer International Publishing (hardcover and eBook).
See also
The Trouble with Physics
Lee Smolin
References
External links
Peter Woit's Home Page.
Not Even Wrong, Woit's weblog.
Video of discussion/debate with Peter Woit on Bloggingheads.tv
1957 births
Harvard College alumni
Princeton University alumni
Columbia University faculty
21st-century American physicists
American bloggers
American people of Latvian descent
Living people
Science blogg
|
https://en.wikipedia.org/wiki/Brownian%20bridge
|
A Brownian bridge is a continuous-time stochastic process B(t) whose probability distribution is the conditional probability distribution of a standard Wiener process W(t) (a mathematical model of Brownian motion) subject to the condition (when standardized) that W(T) = 0, so that the process is pinned to the same value at both t = 0 and t = T. More precisely:
The expected value of the bridge at any t in the interval [0,T] is zero, with variance t(T − t)/T, implying that the most uncertainty is in the middle of the bridge, with zero uncertainty at the nodes. The covariance of B(s) and B(t) is min(s, t)(T − max(s, t))/T, or s(T − t)/T if s < t.
The increments in a Brownian bridge are not independent.
Relation to other stochastic processes
If W(t) is a standard Wiener process (i.e., for t ≥ 0, W(t) is normally distributed with expected value 0 and variance t, and the increments are stationary and independent), then
B(t) = W(t) − (t/T) W(T) is a Brownian bridge for t ∈ [0, T]. It is independent of W(T).
Conversely, if B(t) is a Brownian bridge on [0, 1] and Z is a standard normal random variable independent of B, then the process
W(t) = B(t) + tZ
is a Wiener process for t ∈ [0, 1]. More generally, a Wiener process W(t) for t ∈ [0, T] can be decomposed into
W(t) = B(t) + (t/T) W(T).
Another representation of the Brownian bridge based on the Brownian motion is, for t ∈ [0, T]
Conversely, for t ∈ [0, ∞]
The Brownian bridge may also be represented as a Fourier series with stochastic coefficients, as
where are independent identically distributed standard normal random variables (see the Karhunen–Loève theorem).
A Brownian bridge is the result of Donsker's theorem in the area of empirical processes. It is also used in the Kolmogorov–Smirnov test in the area of statistical inference.
Intuitive remarks
A standard Wiener process satisfies W(0) = 0 and is therefore "tied down" to the origin, but other points are not restricted. In a Brownian bridge process on the other hand, not only is B(0) = 0 but we also require that B(T) = 0, that is the process is "tied down" at t = T as well. Just as a literal bridge is supported by pylons at both ends, a Brownian Bridge is required to satisfy conditions at both ends of the interval [0,T]. (In a slight generalization, one sometimes requires B(t1) = a and B(t2) = b where t1, t2, a and b are known constants.)
Suppose we have generated a number of points W(0), W(1), W(2), W(3), etc. of a Wiener process path by computer simulation. It is now desired to fill in additional points in the interval [0,T], that is to interpolate between the already generated points W(0) and W(T). The solution is to use a Brownian bridge that is required to go through the values W(0) and W(T).
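A minimal sketch of this interpolation idea (NumPy assumed; the horizon, step count and seed are arbitrary choices): simulate an auxiliary Wiener path, turn it into a standard bridge that vanishes at both ends, and add the straight line through the two known endpoint values.

    import numpy as np

    rng = np.random.default_rng(42)
    T, n = 1.0, 1000                          # arbitrary horizon and number of sub-steps
    t = np.linspace(0.0, T, n + 1)

    # Endpoint values of the coarse Wiener path that we want to interpolate between.
    w0, wT = 0.0, rng.normal(0.0, np.sqrt(T))

    # Build a standard Brownian bridge B(t) = W'(t) - (t/T) W'(T) from an auxiliary
    # Wiener path W', then pin it to the required endpoint values.
    dW = rng.normal(0.0, np.sqrt(np.diff(t)))
    W_aux = np.concatenate([[0.0], np.cumsum(dW)])
    bridge = W_aux - (t / T) * W_aux[-1]      # vanishes at t = 0 and t = T
    path = w0 + (t / T) * (wT - w0) + bridge  # interpolated Wiener path

    print(path[0], path[-1])                  # reproduces w0 and wT exactly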
General case
For the general case when B(t1) = a and B(t2) = b, the distribution of B at time t ∈ (t1, t2) is normal, with mean
a + ((t − t1)/(t2 − t1)) (b − a)
and variance
((t2 − t)(t − t1))/(t2 − t1),
and the covariance between B(s) and B(t), with s < t, is
((t2 − t)(s − t1))/(t2 − t1).
References
Wiener process
Empirical process
|
https://en.wikipedia.org/wiki/Locally%20compact%20quantum%20group
|
In mathematics and theoretical physics, a locally compact quantum group is a relatively new C*-algebraic approach toward quantum groups that generalizes the Kac algebra, compact-quantum-group and Hopf-algebra approaches. Earlier attempts at a unifying definition of quantum groups using, for example, multiplicative unitaries have enjoyed some success but have also encountered several technical problems.
One of the main features distinguishing this new approach from its predecessors is the axiomatic existence of left and right invariant weights. This gives a noncommutative analogue of left and right Haar measures on a locally compact Hausdorff group.
Definitions
Before we can even begin to properly define a locally compact quantum group, we first need to define a number of preliminary concepts and also state a few theorems.
Definition (weight). Let be a C*-algebra, and let denote the set of positive elements of . A weight on is a function such that
for all , and
for all and .
Some notation for weights. Let be a weight on a C*-algebra . We use the following notation:
, which is called the set of all positive -integrable elements of .
, which is called the set of all -square-integrable elements of .
, which is called the set of all -integrable elements of .
Types of weights. Let be a weight on a C*-algebra .
We say that is faithful if and only if for each non-zero .
We say that is lower semi-continuous if and only if the set is a closed subset of for every .
We say that is densely defined if and only if is a dense subset of , or equivalently, if and only if either or is a dense subset of .
We say that is proper if and only if it is non-zero, lower semi-continuous and densely defined.
Definition (one-parameter group). Let be a C*-algebra. A one-parameter group on is a family of *-automorphisms of that satisfies for all . We say that is norm-continuous if and only if for every , the mapping defined by is continuous.
Definition (analytic extension of a one-parameter group). Given a norm-continuous one-parameter group on a C*-algebra , we are going to define an analytic extension of . For each , let
,
which is a horizontal strip in the complex plane. We call a function norm-regular if and only if the following conditions hold:
It is analytic on the interior of , i.e., for each in the interior of , the limit exists with respect to the norm topology on .
It is norm-bounded on .
It is norm-continuous on .
Suppose now that , and let
Define by . The function is uniquely determined (by the theory of complex-analytic functions), so is well-defined indeed. The family is then called the analytic extension of .
Theorem 1. The set , called the set of analytic elements of , is a dense subset of .
Definition (K.M.S. weight). Let be a C*-algebra and a weight on . We say that is a K.M.S. weight ('K.M.S.' stands for 'Kubo-Martin-Schwinger') on if and only if
|
https://en.wikipedia.org/wiki/Charles%20Hellaby
|
Charles William Hellaby is a South African mathematician who is an associate professor of applied mathematics at the University of Cape Town, South Africa, working in the field of cosmology. He is a member of the International Astronomical Union and a member of the Baháʼí Faith.
Life
Hellaby was born to Rev. William Allen Meldrum Hellaby and Emily Madeline Hellaby. His twin brother, Mark Edwin Hellaby, pursued a career in literature while his younger brother, Julian Meldrum Hellaby, took to music as a career. He obtained a BSc (Physics & Astronomy) at the University of St Andrews, Scotland in 1977. He completed his MSc (Relativity) at Queen's University, Kingston, Ontario in 1981 and his PhD (Relativity) at Queen's University in 1985.
From 1985 to 1988 he was a Post Doctoral Researcher at the University of Cape Town under George Ellis. In 1989 he was appointed a lecturer at the University of Cape Town.
Hellaby is a member of the International Astronomical Union (Division J Galaxies and Cosmology), having previously been a member of Division VIII Galaxies & the Universe and subsequently Commission 47 Cosmology.
Research
His research interests include:
Inhomogeneous cosmology. Standard cosmology assumes a smooth homogeneous universe, but the real universe is very lumpy
Inhomogeneous cosmological models - their evolution, geometry and singularities
Non-linear structure formation in the universe
Extracting the geometry of the cosmos from observations
The Lemaitre–Tolman cosmological model
The Szekeres cosmological model
Junction conditions in GR
Dense black holes
Local inhomogeneities and the Swiss cheese model
He has also worked on
The models of Vaidya, Schwarzschild–Kruskal–Szekeres & Kinnersley
Classical signature change
Cosmic strings
Gravitational collapse
Hellaby co-authored Structures in the Universe by Exact Methods: Formation, Evolution, Interactions, in which applications of inhomogeneous solutions to Albert Einstein's field equations of cosmology are reviewed. The structure of galaxy clusters, galaxies with central black holes and supernova dimming can be studied with the aid of inhomogeneous models.
References
External links
Living people
21st-century South African physicists
Academic staff of the University of Cape Town
South African mathematicians
Year of birth missing (living people)
|
https://en.wikipedia.org/wiki/Luigi%20Cremona
|
Antonio Luigi Gaudenzio Giuseppe Cremona (7 December 1830 – 10 June 1903) was an Italian mathematician. His life was devoted to the study of geometry and reforming advanced mathematical teaching in Italy. He worked on algebraic curves and algebraic surfaces, particularly through his paper Introduzione ad una teoria geometrica delle curve piane ("Introduction to a geometrical theory of the plane curves"), and was a founder of the Italian school of algebraic geometry.
Biography
Luigi Cremona was born in Pavia (Lombardy), then part of the Austrian-controlled Kingdom of Lombardy–Venetia. His youngest brother was the painter Tranquillo Cremona.
In 1848, when Milan and Venice rose against Austria, Cremona, then only seventeen, joined the ranks of the Italian volunteers. He remained with them, fighting on behalf of his country's freedom, until, in 1849, the capitulation of Venice put an end to the campaign.
He then returned to Pavia, where he pursued his studies at the university under Francesco Brioschi, and determined to seek a career as teacher of mathematics. He graduated in 1853 as dottore negli studi di ingegnere civile e architetto.
Cremona is noted for the important role he played in bringing about the great geometrical advances in Italy. While, at the beginning of the nineteenth century, Italy had very little mathematical standing, the end of the century found Italy in the lead along geometric lines, largely as a result of the work of Cremona. He was very influential in bringing about reforms in the secondary schools of Italy and became a leader in questions of mathematical pedagogy as well as in those relating to the advancement of knowledge. The mathematical advances which Italy made since the middle of the nineteenth century were largely guided by Cremona, Brioschi, and Beltrami.
His first appointment was as elementary mathematical master at the gymnasium and lyceum of Cremona, and he afterwards obtained a similar post at Milan. In 1860 he was appointed to the professorship of higher geometry at the University of Bologna, and in 1866 to that of higher geometry and graphical statics at the higher technical college of Milan. In this same year he competed for the Steiner Prize of the Berlin Academy, with a treatise entitled Memoria sulle superfici del terzo ordine, and shared the award with J. C. F. Sturm. Two years later the same prize was conferred on him without competition.
As early as 1856 Cremona had begun to contribute to the Annali di scienze matematiche e fisiche, and to the Annali di matematica, of which he became afterwards joint editor. Papers by him appeared in the mathematical journals of Italy, France, Germany and England, and he published several important works, many of which have been translated into other languages. His manual Graphical Statics and his Elements of Projective Geometry (translated by Thomas Hudson Beare and C. Leudesdorf respectively) were published in English by the Clarendon Press.
In 1873 he was call
|
https://en.wikipedia.org/wiki/SUP
|
Sup or SUP may refer to:
Saskatchewan United Party, a political party in Saskatchewan
Supremum or sup, in mathematics, the least upper bound
Societas unius personae, proposed EU type of single-person company
SUP Media or Sup Fabrik, a Russian internet company
Sailors' Union of the Pacific
Scottish Unionist Party (1986), established in the mid-1980s
Simple Update Protocol, dropped proposal to speed RSS and Atom
Software Upgrade Protocol
Standup paddleboarding
Stanford University Press
Sydney University Press
Syracuse University Press
Sup squark, the supersymmetric partner of the up quark
<sup>, an HTML tag for superscript
Supangle or sup, a Turkish dessert
See also
Socialist Unity Party (disambiguation)
Syriac Union Party (disambiguation)
Supper, a meal that is consumed before bed
Super (disambiguation)
|
https://en.wikipedia.org/wiki/Indian%20mathematics
|
Indian mathematics emerged in the Indian subcontinent from 1200 BCE until the end of the 18th century. In the classical period of Indian mathematics (400 CE to 1200 CE), important contributions were made by scholars like Aryabhata, Brahmagupta, Bhaskara II, and Varāhamihira. The decimal number system in use today was first recorded in Indian mathematics. Indian mathematicians made early contributions to the study of the concept of zero as a number, negative numbers, arithmetic, and algebra. In addition, trigonometry
was further advanced in India, and, in particular, the modern definitions of sine and cosine were developed there. These mathematical concepts were transmitted to the Middle East, China, and Europe and led to further developments that now form the foundations of many areas of mathematics.
Ancient and medieval Indian mathematical works, all composed in Sanskrit, usually consisted of a section of sutras in which a set of rules or problems were stated with great economy in verse in order to aid memorization by a student. This was followed by a second section consisting of a prose commentary (sometimes multiple commentaries by different scholars) that explained the problem in more detail and provided justification for the solution. In the prose section, the form (and therefore its memorization) was not considered so important as the ideas involved. All mathematical works were orally transmitted until approximately 500 BCE; thereafter, they were transmitted both orally and in manuscript form. The oldest extant mathematical document produced on the Indian subcontinent is the birch bark Bakhshali Manuscript, discovered in 1881 in the village of Bakhshali, near Peshawar (modern day Pakistan) and is likely from the 7th century CE.
A later landmark in Indian mathematics was the development of the series expansions for trigonometric functions (sine, cosine, and arc tangent) by mathematicians of the Kerala school in the 15th century CE. Their work, completed two centuries before the invention of calculus in Europe, provided what is now considered the first example of a power series (apart from geometric series). However, they did not formulate a systematic theory of differentiation and integration, nor is there any direct evidence of their results being transmitted outside Kerala.
Prehistory
Excavations at Harappa, Mohenjo-daro and other sites of the Indus Valley civilisation have uncovered evidence of the use of "practical mathematics". The people of the Indus Valley Civilization manufactured bricks whose dimensions were in the proportion 4:2:1, considered favourable for the stability of a brick structure. They used a standardised system of weights based on the ratios: 1/20, 1/10, 1/5, 1/2, 1, 2, 5, 10, 20, 50, 100, 200, and 500, with the unit weight equaling approximately 28 grams (and approximately equal to the English ounce or Greek uncia). They mass-produced weights in regular geometrical shapes, which included hexahedra, barrels, co
|
https://en.wikipedia.org/wiki/0.999...
|
In mathematics, 0.999... (also written as 0.9 with an overline or a dot over the 9, or as 0.(9)) is a notation for the repeating decimal consisting of an unending sequence of 9s after the decimal point. This repeating decimal is a numeral that represents the smallest number no less than every number in the sequence (0.9, 0.99, 0.999, ...); that is, the supremum of this sequence. This number is equal to 1. In other words, "0.999..." is not "almost exactly" or "very, very nearly but not quite" 1; rather, "0.999..." and "1" represent the same number.
There are many ways of showing this equality, from intuitive arguments to mathematically rigorous proofs. The technique used depends on the target audience, background assumptions, historical context, and preferred development of the real numbers, the system within which 0.999... is commonly defined. In other systems, 0.999... can have the same meaning, a different definition, or be undefined.
More generally, every nonzero terminating decimal has two equal representations (for example, 8.32 and 8.31999...), which is a property of all positional numeral system representations regardless of base. The utilitarian preference for the terminating decimal representation contributes to the misconception that it is the only representation. For this and other reasons—such as rigorous proofs relying on non-elementary techniques, properties, or disciplines—some people can find the equality sufficiently counterintuitive that they question or reject it. This has been the subject of several studies in mathematics education.
Elementary proof
There is an elementary proof of the equation 0.999... = 1, which uses just the mathematical tools of comparison and addition of (finite) decimal numbers, without any reference to more advanced topics such as series, limits, or the formal construction of real numbers. The proof, given below, is a direct formalization of the intuitive fact that, if one draws 0.9, 0.99, 0.999, etc. on the number line, there is no room left for placing a number between them and 1. The meaning of the notation 0.999... is the least point on the number line lying to the right of all of the numbers 0.9, 0.99, 0.999, etc. Because there is ultimately no room between 1 and these numbers, the point 1 must be this least point, and so 0.999... = 1.
Intuitive explanation
If one places 0.9, 0.99, 0.999, etc. on the number line, one sees immediately that all these points are to the left of 1, and that they get closer and closer to 1.
More precisely, the distance from 0.9 to 1 is 1/10, the distance from 0.99 to 1 is 1/100, and so on. The distance to 1 from the nth point (the one with n 9s after the decimal point) is 1/10ⁿ.
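These distances can be checked in exact arithmetic; a small sketch using Python's fractions module (the values of n are arbitrary):

    from fractions import Fraction

    def distance_to_one(n: int) -> Fraction:
        """Exact distance from 0.99...9 (n nines) to 1."""
        partial = sum(Fraction(9, 10**k) for k in range(1, n + 1))   # 0.9 + 0.09 + ...
        return 1 - partial

    for n in (1, 2, 5, 10):
        print(n, distance_to_one(n))          # 1/10, 1/100, 1/100000, 1/10000000000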
Therefore, if 1 were not the smallest number greater than 0.9, 0.99, 0.999, etc., then there would be a point on the number line that lies between 1 and all these points. This point would be at a positive distance from 1 that is less than 1/10ⁿ for every positive integer n. In the standard number systems (the rational numbers and the real numbers), there is no positive number that
|
https://en.wikipedia.org/wiki/Picard%20group
|
In mathematics, the Picard group of a ringed space X, denoted by Pic(X), is the group of isomorphism classes of invertible sheaves (or line bundles) on X, with the group operation being tensor product. This construction is a global version of the construction of the divisor class group, or ideal class group, and is much used in algebraic geometry and the theory of complex manifolds.
Alternatively, the Picard group can be defined as the sheaf cohomology group H¹(X, O_X*).
For integral schemes the Picard group is isomorphic to the class group of Cartier divisors. For complex manifolds the exponential sheaf sequence gives basic information on the Picard group.
The name is in honour of Émile Picard's theories, in particular of divisors on algebraic surfaces.
Examples
The Picard group of the spectrum of a Dedekind domain is its ideal class group.
The invertible sheaves on projective space Pn(k), for k a field, are the twisting sheaves O(m) with m an integer, so the Picard group of Pn(k) is isomorphic to Z.
The Picard group of the affine line with two origins over k is isomorphic to Z.
The Picard group of the -dimensional complex affine space: , indeed the exponential sequence yields the following long exact sequence in cohomology
and since we have because is contractible, then and we can apply the Dolbeault isomorphism to calculate by the Dolbeault-Grothendieck lemma.
Picard scheme
The construction of a scheme structure on (representable functor version of) the Picard group, the Picard scheme, is an important step in algebraic geometry, in particular in the duality theory of abelian varieties. It was constructed by , and also described by and .
In the cases of most importance to classical algebraic geometry, for a non-singular complete variety V over a field of characteristic zero, the connected component of the identity in the Picard scheme is an abelian variety called the Picard variety and denoted Pic0(V). The dual of the Picard variety is the Albanese variety, and in the particular case where V is a curve, the Picard variety is naturally isomorphic to the Jacobian variety of V. For fields of positive characteristic however, Igusa constructed an example of a smooth projective surface S with Pic0(S) non-reduced, and hence not an abelian variety.
The quotient Pic(V)/Pic0(V) is a finitely-generated abelian group denoted NS(V), the Néron–Severi group of V. In other words the Picard group fits into an exact sequence
The fact that the rank of NS(V) is finite is Francesco Severi's theorem of the base; the rank is the Picard number of V, often denoted ρ(V). Geometrically NS(V) describes the algebraic equivalence classes of divisors on V; that is, using a stronger, non-linear equivalence relation in place of linear equivalence of divisors, the classification becomes amenable to discrete invariants. Algebraic equivalence is closely related to numerical equivalence, an essentially topological classification by intersection numbers.
Relative Picard scheme
Let f: X →S be a m
|
https://en.wikipedia.org/wiki/Fermat%27s%20theorem%20on%20sums%20of%20two%20squares
|
In additive number theory, Fermat's theorem on sums of two squares states that an odd prime p can be expressed as
p = x² + y²
with x and y integers, if and only if
p ≡ 1 (mod 4).
The prime numbers for which this is true are called Pythagorean primes.
For example, the primes 5, 13, 17, 29, 37 and 41 are all congruent to 1 modulo 4, and they can be expressed as sums of two squares in the following ways:
5 = 1² + 2², 13 = 2² + 3², 17 = 1² + 4², 29 = 2² + 5², 37 = 1² + 6², 41 = 4² + 5².
On the other hand, the primes 3, 7, 11, 19, 23 and 31 are all congruent to 3 modulo 4, and none of them can be expressed as the sum of two squares. This is the easier part of the theorem, and follows immediately from the observation that all squares are congruent to 0 or 1 modulo 4.
Since the Diophantus identity implies that the product of two integers each of which can be written as the sum of two squares is itself expressible as the sum of two squares, by applying Fermat's theorem to the prime factorization of any positive integer n, we see that if all the prime factors of n congruent to 3 modulo 4 occur to an even exponent, then n is expressible as a sum of two squares. The converse also holds. This generalization of Fermat's theorem is known as the sum of two squares theorem.
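A brute-force search illustrates the statement; the following Python sketch (the list of test primes is arbitrary) looks for a decomposition p = x² + y² and finds one exactly for the primes congruent to 1 modulo 4.

    from math import isqrt

    def two_squares(p: int):
        """Return (x, y) with x**2 + y**2 == p, or None if no such pair exists."""
        for x in range(1, isqrt(p) + 1):
            y2 = p - x * x
            y = isqrt(y2)
            if y * y == y2:
                return x, y
        return None

    for p in (5, 13, 17, 29, 37, 41):         # primes congruent to 1 mod 4
        print(p, two_squares(p))
    print(31, two_squares(31))                # 31 is congruent to 3 mod 4: None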
History
Albert Girard was the first to make the observation, characterizing the positive integers (not necessarily primes) that are expressible as the sum of two squares of positive integers; this was published in 1625. The statement that every prime p of the form 4n+1 is the sum of two squares is sometimes called Girard's theorem. For his part, Fermat wrote an elaborate version of the statement (in which he also gave the number of possible expressions of the powers of p as a sum of two squares) in a letter to Marin Mersenne dated December 25, 1640: for this reason this version of the theorem is sometimes called Fermat's Christmas theorem.
Gaussian primes
Fermat's theorem on sums of two squares is strongly related with the theory of Gaussian primes.
A Gaussian integer is a complex number a + bi such that a and b are integers. The norm N(a + bi) = a² + b² of a Gaussian integer is an integer equal to the square of the absolute value of the Gaussian integer. The norm of a product of Gaussian integers is the product of their norms. This is the Diophantus identity, which results immediately from the similar property of the absolute value.
Gaussian integers form a principal ideal domain. This implies that Gaussian primes can be defined similarly to prime numbers, that is, as those Gaussian integers that are not the product of two non-units (here the units are 1, −1, i and −i).
The multiplicative property of the norm implies that a prime number p is either a Gaussian prime or the norm of a Gaussian prime. Fermat's theorem asserts that the first case occurs when p ≡ 3 (mod 4), and that the second case occurs when p ≡ 1 (mod 4) or p = 2. The last case is not considered in Fermat's statement, but is trivial, as 2 = 1² + 1² = −i(1 + i)².
Related results
Above point of view on Fermat's theorem is a special case of the theory of factorization of ideals in rings of quadratic integers. In summa
|
https://en.wikipedia.org/wiki/Freedman%E2%80%93Diaconis%20rule
|
In statistics, the Freedman–Diaconis rule can be used to select the width of the bins to be used in a histogram. It is named after David A. Freedman and Persi Diaconis.
For a set of empirical measurements sampled from some probability distribution, the Freedman-Diaconis rule is designed roughly to minimize the integral of the squared difference between the histogram (i.e., relative frequency density) and the density of the theoretical probability distribution.
The general equation for the rule is:
Bin width = 2 · IQR(x) / n^(1/3),
where IQR(x) is the interquartile range of the data x and n is the number of observations in the sample.
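A minimal sketch of the rule in Python (NumPy assumed; the sample data are arbitrary):

    import numpy as np

    def freedman_diaconis_bin_width(data) -> float:
        """Bin width 2 * IQR(x) / n**(1/3) from the Freedman-Diaconis rule."""
        data = np.asarray(data)
        q75, q25 = np.percentile(data, [75, 25])
        return 2.0 * (q75 - q25) / len(data) ** (1.0 / 3.0)

    rng = np.random.default_rng(0)
    sample = rng.normal(size=1000)            # arbitrary example data
    h = freedman_diaconis_bin_width(sample)
    n_bins = int(np.ceil((sample.max() - sample.min()) / h))
    print(h, n_bins)

NumPy also exposes this estimator directly as the "fd" option of numpy.histogram_bin_edges.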
Other approaches
With the factor 2 replaced by approximately 2.59, the Freedman-Diaconis rule asymptotically matches Scott's normal reference rule for data sampled
from a normal distribution.
Another approach is to use Sturges' rule: choose the bin width so large that there are about 1 + log₂(n) non-empty bins (Scott, 2009). This works well for n under 200, but was found to be inaccurate for large n.
For a discussion and an alternative approach, see Birgé and Rozenholc.
References
Rules of thumb
Statistical charts and diagrams
Infographics
|
https://en.wikipedia.org/wiki/Dyck%20language
|
In the theory of formal languages of computer science, mathematics, and linguistics, a Dyck word is a balanced string of brackets.
The set of Dyck words forms a Dyck language. The simplest, D1, uses just two matching brackets, e.g. ( and ).
Dyck words and language are named after the mathematician Walther von Dyck. They have applications in the parsing of expressions that must have a correctly nested sequence of brackets, such as arithmetic or algebraic expressions.
Formal definition
Let be the alphabet consisting of the symbols [ and ]. Let denote its Kleene closure.
The Dyck language is defined as the set of all words over Σ in which no prefix contains more ]'s than ['s and the total numbers of ['s and ]'s are equal.
Context-free grammar
It may be helpful to define the Dyck language via a context-free grammar in some situations.
The Dyck language is generated by the context-free grammar with a single non-terminal S, and the production:
S → ε | "[" S "]" S
That is, S is either the empty string (ε) or is "[", an element of the Dyck language, the matching "]", and an element of the Dyck language.
An alternative context-free grammar for the Dyck language is given by the production:
S → ("[" S "]")*
That is, S is zero or more occurrences of the combination of "[", an element of the Dyck language, and a matching "]", where multiple elements of the Dyck language on the right side of the production are free to differ from each other.
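The defining condition (every prefix has at least as many opening as closing brackets, and the totals agree) can be checked with a single counter; a minimal Python sketch:

    def is_dyck_word(s: str) -> bool:
        """Check balance with a counter: no prefix closes more brackets than it
        has opened, and the whole word opens and closes equally often."""
        depth = 0
        for ch in s:
            if ch == "[":
                depth += 1
            elif ch == "]":
                depth -= 1
                if depth < 0:                 # a prefix with more ]'s than ['s
                    return False
            else:
                return False                  # symbol outside the alphabet {[, ]}
        return depth == 0

    print(is_dyck_word("[[][]]"))             # True
    print(is_dyck_word("[]]["))               # False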
Alternative definition
In yet other contexts it may instead be helpful to define the Dyck language by splitting into equivalence classes, as follows.
For any element of length , we define partial functions and by
is with "" inserted into the th position
is with "" deleted from the th position
with the understanding that is undefined for and is undefined if . We define an equivalence relation on as follows: for elements we have if and only if there exists a sequence of zero or more applications of the and functions starting with and ending with . That the sequence of zero operations is allowed accounts for the reflexivity of . Symmetry follows from the observation that any finite sequence of applications of to a string can be undone with a finite sequence of applications of . Transitivity is clear from the definition.
The equivalence relation partitions the language into equivalence classes. If we take to denote the empty string, then the language corresponding to the equivalence class is called the Dyck language.
Properties
The Dyck language is closed under the operation of concatenation.
By treating as an algebraic monoid under concatenation we see that the monoid structure transfers onto the quotient , resulting in the syntactic monoid of the Dyck language. The class will be denoted .
The syntactic monoid of the Dyck language is not commutative: if and then .
With the notation above, but neither nor are invertible in .
The syntactic monoid of the Dyck language is isomorphic to the bicyclic semigroup by virtue of the properties of and described above.
By the Chomsky–Schützenberger representation theorem, any context-fr
|
https://en.wikipedia.org/wiki/Volume%20form
|
In mathematics, a volume form or top-dimensional form is a differential form of degree equal to the differentiable manifold dimension. Thus on a manifold of dimension , a volume form is an -form. It is an element of the space of sections of the line bundle , denoted as . A manifold admits a nowhere-vanishing volume form if and only if it is orientable. An orientable manifold has infinitely many volume forms, since multiplying a volume form by a nowhere-vanishing real valued function yields another volume form. On non-orientable manifolds, one may instead define the weaker notion of a density.
A volume form provides a means to define the integral of a function on a differentiable manifold. In other words, a volume form gives rise to a measure with respect to which functions can be integrated by the appropriate Lebesgue integral. The absolute value of a volume form is a volume element, which is also known variously as a twisted volume form or pseudo-volume form. It also defines a measure, but exists on any differentiable manifold, orientable or not.
Kähler manifolds, being complex manifolds, are naturally oriented, and so possess a volume form. More generally, the th exterior power of the symplectic form on a symplectic manifold is a volume form. Many classes of manifolds have canonical volume forms: they have extra structure which allows the choice of a preferred volume form. Oriented pseudo-Riemannian manifolds have an associated canonical volume form.
Orientation
The following will only be about orientability of differentiable manifolds (it's a more general notion defined on any topological manifold).
A manifold is orientable if it has a coordinate atlas all of whose transition functions have positive Jacobian determinants. A selection of a maximal such atlas is an orientation on A volume form on gives rise to an orientation in a natural way as the atlas of coordinate charts on that send to a positive multiple of the Euclidean volume form
A volume form also allows for the specification of a preferred class of frames on Call a basis of tangent vectors right-handed if
The collection of all right-handed frames is acted upon by the group of general linear mappings in dimensions with positive determinant. They form a principal sub-bundle of the linear frame bundle of and so the orientation associated to a volume form gives a canonical reduction of the frame bundle of to a sub-bundle with structure group That is to say that a volume form gives rise to -structure on More reduction is clearly possible by considering frames that have
Thus a volume form gives rise to an -structure as well. Conversely, given an -structure, one can recover a volume form by imposing () for the special linear frames and then solving for the required -form by requiring homogeneity in its arguments.
A manifold is orientable if and only if it has a nowhere-vanishing volume form. Indeed, is a deformation retract since where the positive reals a
|
https://en.wikipedia.org/wiki/Tautological%20one-form
|
In mathematics, the tautological one-form is a special 1-form defined on the cotangent bundle of a manifold In physics, it is used to create a correspondence between the velocity of a point in a mechanical system and its momentum, thus providing a bridge between Lagrangian mechanics and Hamiltonian mechanics (on the manifold ).
The exterior derivative of this form defines a symplectic form giving the structure of a symplectic manifold. The tautological one-form plays an important role in relating the formalism of Hamiltonian mechanics and Lagrangian mechanics. The tautological one-form is sometimes also called the Liouville one-form, the Poincaré one-form, the canonical one-form, or the symplectic potential. A similar object is the canonical vector field on the tangent bundle.
To define the tautological one-form, select a coordinate chart on and a canonical coordinate system on Pick an arbitrary point By definition of cotangent bundle, where and The tautological one-form is given by
with and being the coordinate representation of
Any coordinates on that preserve this definition, up to a total differential (exact form), may be called canonical coordinates; transformations between different canonical coordinate systems are known as canonical transformations.
The canonical symplectic form, also known as the Poincaré two-form, is given by
The extension of this concept to general fibre bundles is known as the solder form. By convention, one uses the phrase "canonical form" whenever the form has a unique, canonical definition, and one uses the term "solder form", whenever an arbitrary choice has to be made. In algebraic geometry and complex geometry the term "canonical" is discouraged, due to confusion with the canonical class, and the term "tautological" is preferred, as in tautological bundle.
Coordinate-free definition
The tautological 1-form can also be defined rather abstractly as a form on phase space. Let be a manifold and be the cotangent bundle or phase space. Let
be the canonical fiber bundle projection, and let
be the induced tangent map. Let be a point on Since is the cotangent bundle, we can understand to be a map of the tangent space at :
That is, we have that is in the fiber of The tautological one-form at point is then defined to be
It is a linear map
and so
Symplectic potential
The symplectic potential is generally defined a bit more freely, and also only defined locally: it is any one-form such that ; in effect, symplectic potentials differ from the canonical 1-form by a closed form.
Properties
The tautological one-form is the unique one-form that "cancels" pullback. That is, let
be a 1-form on is a section For an arbitrary 1-form on the pullback of by is, by definition, Here, is the pushforward of Like is a 1-form on The tautological one-form is the only form with the property that for every 1-form on
So, by the commutation between the pull-back and the exterior deriv
|
https://en.wikipedia.org/wiki/Rademacher%20distribution
|
In probability theory and statistics, the Rademacher distribution (which is named after Hans Rademacher) is a discrete probability distribution where a random variate X has a 50% chance of being +1 and a 50% chance of being -1.
A series (that is, a sum) of Rademacher distributed variables can be regarded as a simple symmetrical random walk where the step size is 1.
Mathematical formulation
The probability mass function of this distribution is
f(k) = 1/2 if k = +1, 1/2 if k = −1, and 0 otherwise.
It can also be written, in terms of the Dirac delta function, as
f(x) = (δ(x − 1) + δ(x + 1)) / 2.
Bounds on sums of independent Rademacher variables
There are various results in probability theory around analyzing the sum of i.i.d. Rademacher variables, including concentration inequalities such as Bernstein inequalities as well as anti-concentration inequalities like Tomaszewski's conjecture.
Concentration inequalities
Let {xi} be a set of random variables with a Rademacher distribution. Let {ai} be a sequence of real numbers. Then
Pr(Σi xiai > t ||a||2) ≤ exp(−t²/2),
where ||a||2 is the Euclidean norm of the sequence {ai}, t > 0 is a real number and Pr(Z) is the probability of event Z.
Let Y = Σ xiai and let Y be an almost surely convergent series in a Banach space. Then for t > 0 and s ≥ 1 we have
for some constant c.
Let p be a positive real number. Then the Khintchine inequality says that
where c1 and c2 are constants dependent only on p.
For p ≥ 1,
Tomaszewski’s conjecture
In 1986, Bogusław Tomaszewski proposed a question about the distribution of the sum of independent Rademacher variables. A series of works on this question culminated in a proof in 2020 by Nathan Keller and Ohad Klein of the following conjecture.
Conjecture. Let X = Σi aixi, where Σi ai² = 1 and the xi's are independent Rademacher variables. Then
Pr(|X| ≤ 1) ≥ 1/2.
For example, when ai = 1/√n for all i, one gets the following bound, first shown by Van Zuijlen:
Pr(|Σi xi| / √n ≤ 1) ≥ 1/2.
The bound is sharp and better than that which can be derived from the normal distribution (approximately Pr > 0.31).
Applications
The Rademacher distribution has been used in bootstrapping.
The Rademacher distribution can be used to show that two random variables that are normally distributed and uncorrelated need not be independent.
Random vectors with components sampled independently from the Rademacher distribution are useful for various stochastic approximations, for example:
The Hutchinson trace estimator, which can be used to efficiently approximate the trace of a matrix whose elements are not directly accessible, but rather implicitly defined via matrix-vector products (see the sketch after this list).
SPSA, a computationally cheap, derivative-free, stochastic gradient approximation, useful for numerical optimization.
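A minimal sketch of the Hutchinson estimator mentioned above (NumPy assumed; the test matrix, sample count and seed are arbitrary):

    import numpy as np

    def hutchinson_trace(matvec, dim: int, num_samples: int = 200, seed: int = 0) -> float:
        """Estimate tr(A) as the average of z^T (A z) over Rademacher vectors z,
        using only matrix-vector products with A."""
        rng = np.random.default_rng(seed)
        total = 0.0
        for _ in range(num_samples):
            z = rng.choice([-1.0, 1.0], size=dim)   # Rademacher vector
            total += z @ matvec(z)
        return total / num_samples

    A = np.diag(np.arange(1.0, 11.0))               # arbitrary test matrix with trace 55
    print(hutchinson_trace(lambda v: A @ v, dim=10), np.trace(A))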
Rademacher random variables are used in the Symmetrization Inequality.
Related distributions
Bernoulli distribution: If X has a Rademacher distribution, then (X + 1)/2 has a Bernoulli(1/2) distribution.
Laplace distribution: If X has a Rademacher distribution and Y ~ Exp(λ) is independent from X, then XY ~ Laplace(0, 1/λ).
References
Discrete distributions
|
https://en.wikipedia.org/wiki/Graph%20manifold
|
In topology, a graph manifold (in German: Graphenmannigfaltigkeit) is a 3-manifold which is obtained by gluing some circle bundles. They were discovered and classified by the German topologist Friedhelm Waldhausen in 1967. This definition allows a very convenient combinatorial description as a graph whose vertices are the fundamental parts and (decorated) edges stand for the description of the gluing, hence the name.
Two very important classes of examples are given by the Seifert bundles and the Solv manifolds. This leads to a more modern definition: a graph manifold is either a Solv manifold, a manifold having only Seifert pieces in its JSJ decomposition, or a connected sum of manifolds of the previous two categories. From this perspective, Waldhausen's article can be seen as the first breakthrough towards the discovery of JSJ decomposition.
One of the numerous consequences of the Thurston-Perelman geometrization theorem is that graph manifolds are precisely the 3-manifolds whose Gromov norm vanishes.
References
3-manifolds
Topological graph theory
|
https://en.wikipedia.org/wiki/Omega%20function
|
In mathematics, omega function refers to a function using the Greek letter omega, written ω or Ω.
(big omega) may refer to:
The lower bound in Big O notation, f ∈ Ω(g), meaning that the function f asymptotically dominates g in some limit
The prime omega function Ω(n), giving the total number of prime factors of n, counting them with their multiplicity.
The Lambert W function W(x), the inverse of f(w) = we^w, also denoted Ω.
Absolute Infinity
(omega) may refer to:
The Wright omega function ω(z), related to the Lambert W function
The Pearson–Cunningham function
The prime omega function ω(n), giving the number of distinct prime factors of n.
|
https://en.wikipedia.org/wiki/Su%20Song
|
Su Song (, 1020–1101), courtesy name Zirong (), was a Chinese polymathic scientist and statesman. Excelling in a variety of fields, he was accomplished in mathematics, astronomy, cartography, geography, horology, pharmacology, mineralogy, metallurgy, zoology, botany, mechanical engineering, hydraulic engineering, civil engineering, invention, art, poetry, philosophy, antiquities, and statesmanship during the Song dynasty (960–1279).
Su Song was the engineer for a hydro-mechanical astronomical clock tower in medieval Kaifeng, which employed an early escapement mechanism. The escapement mechanism of Su's clock tower had been invented by Tang dynasty Buddhist monk Yi Xing and government official Liang Lingzan in 725 AD to operate a water-powered armillary sphere, although Su's armillary sphere was the first to be provided with a mechanical clock drive. Su's clock tower also featured the oldest known endless power-transmitting chain drive, called the tian ti (), or "celestial ladder", as depicted in his horological treatise. The clock tower had 133 different clock jacks to indicate and sound the hours. Su Song's treatise about the clock tower, Xinyi Xiangfayao (), has survived since its written form in 1092 and official printed publication in 1094. The book has been analyzed by many historians, such as the British biochemist, historian, and sinologist Joseph Needham. The clock itself, however, was dismantled by the invading Jurchen army in 1127 AD, and although attempts were made to reassemble it, the tower was never successfully reinstated.
The Xinyi Xiangfayao was Su's best-known treatise, but the polymath compiled other works as well. He completed a large celestial atlas of several star maps, several terrestrial maps, as well as a treatise on pharmacology. The latter discussed related subjects on mineralogy, zoology, botany, and metallurgy.
European Jesuit visitors to China like Matteo Ricci and Nicolas Trigault briefly wrote about Chinese clocks with wheel drives, but others mistakenly believed that the Chinese had never advanced beyond the stage of the clepsydra, incense clock, and sundial. They thought that advanced mechanical clockworks were new to China and that these mechanisms were something valuable that Europeans could offer to the Chinese. Although not as prominent as in the Song period, contemporary Chinese texts of the Ming dynasty (1368–1644) described a relatively unbroken history of mechanical clocks in China, from the 13th century to the 16th. However, Su Song's clock tower still relied on the use of a waterwheel to power it, and was thus not fully mechanical like late medieval European clocks.
Life and works
Career as a scholar-official
Su Song was of Hokkien ancestry, born in modern-day Fujian near medieval Quanzhou. Like his contemporary, Shen Kuo (1031–1095), Su Song was a polymath, a person whose expertise spans a significant number of different fields of study. It was written by his junior colleague and Hanlin
|
https://en.wikipedia.org/wiki/Geocomputation
|
Geocomputation (sometimes GeoComputation) is a field of study at the intersection of geography and computation.
See also
Geoinformatics
Geomathematics
Geographic information system
Bibliography
Openshaw, S., and R. J. Abrahart. (1996). “Geocomputation.” In Proceedings of the 1st International Conference on GeoComputation, 665–6, edited by R. J. Abrahart. Leeds, U.K.: University of Leeds
Longley, P. A., S. M. Brooks, R. McDonnell, and W. D. Macmillan. (1998). Geocomputation: A Primer. Chichester, U.K.: John Wiley & Sons
Gahegan, M. (1999). “Guest Editorial: What is Geocomputation?” Transactions in GIS 3(3), 203–6.
Brunsdon, C., and A. D. Singleton. (2015). Geocomputation: A Practical Primer. London: Sage
Harris, R., D. O’Sullivan, M. Gahegan, M. Charlton, L. Comber, P. Longley, C. Brunsdon, N. Malleson, A. Heppenstall, A. Singleton, D. Arribas-Bel, and A. Evans. (2017). “More Bark than Bytes? Reflections on 21+ Years of Geocomputation.” Environment and Planning B 44(4), 598–617.
Geographic data and information fields of study
Computational fields of study
|
https://en.wikipedia.org/wiki/Millennium%20Prize
|
Millennium Prize may refer to:
Millennium Prize Problems of Clay Mathematics Institute
Millennium Technology Prize of Finland
|
https://en.wikipedia.org/wiki/Dual%20code
|
In coding theory, the dual code of a linear code C ⊆ Fqⁿ
is the linear code defined by
C⊥ = {x ∈ Fqⁿ : ⟨x, c⟩ = 0 for all c ∈ C},
where
⟨x, c⟩ = Σi xici
is a scalar product. In linear algebra terms, the dual code is the annihilator of C with respect to this bilinear form. The dimension of C and its dual always add up to the length n: dim C + dim C⊥ = n.
A generator matrix for the dual code is the parity-check matrix for the original code and vice versa. The dual of the dual code is always the original code.
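As a concrete check of the dimension formula, the following Python sketch (NumPy assumed; the repetition code is an arbitrary example) computes the dual of a small binary code by brute force.

    import itertools
    import numpy as np

    def dual_code(codewords, n: int):
        """Brute-force dual of a binary linear code of length n: all vectors whose
        dot product with every codeword is 0 modulo 2."""
        return [v for v in itertools.product([0, 1], repeat=n)
                if all(np.dot(v, c) % 2 == 0 for c in codewords)]

    # The [3, 1] binary repetition code {000, 111}; its dual is the [3, 2] even-weight code.
    C = [(0, 0, 0), (1, 1, 1)]
    D = dual_code(C, n=3)
    print(D)                                  # 4 words of even weight; dim C + dim dual = 3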
Self-dual codes
A self-dual code is one which is its own dual. This implies that n is even and dim C = n/2. If a self-dual code is such that each codeword's weight is a multiple of some constant c, then it is of one of the following four types:
Type I codes are binary self-dual codes which are not doubly even. Type I codes are always even (every codeword has even Hamming weight).
Type II codes are binary self-dual codes which are doubly even.
Type III codes are ternary self-dual codes. Every codeword in a Type III code has Hamming weight divisible by 3.
Type IV codes are self-dual codes over F4. These are again even.
Codes of types I, II, III, or IV exist only if the length n is a multiple of 2, 8, 4, or 2 respectively.
If a self-dual code has a generator matrix of the form G = [I | A], then the dual code has generator matrix [−Aᵀ | I], where I is the (n/2) × (n/2) identity matrix and Aᵀ is the transpose of A.
References
External links
MATH32031: Coding Theory - Dual Code - pdf with some examples and explanations
Coding theory
|
https://en.wikipedia.org/wiki/Hans%20Rademacher
|
Hans Adolph Rademacher (; 3 April 1892 – 7 February 1969) was a German-born American mathematician, known for work in mathematical analysis and number theory.
Biography
Rademacher received his Ph.D. in 1916 from Georg-August-Universität Göttingen; Constantin Carathéodory supervised his dissertation. In 1919, he became privatdozent under Constantin Carathéodory at University of Berlin. In 1922, he became an assistant professor at the University of Hamburg, where he supervised budding mathematicians like Theodor Estermann. He was dismissed from his position at the University of Breslau by the Nazis in 1933 due to his public support of the Weimar Republic, and emigrated from Europe in 1934.
After leaving Germany, he moved to Philadelphia and worked at the University of Pennsylvania until his retirement in 1962; he held the Thomas A. Scott Professorship of Mathematics at Pennsylvania from 1956 to 1962. Rademacher had a number of well-known students, including George Andrews, Paul T. Bateman, Theodor Estermann and Emil Grosswald.
Research
Rademacher performed research in analytic number theory, mathematical genetics, the theory of functions of a real variable, and quantum theory. Most notably, he developed the theory of Dedekind sums. In 1937 Rademacher discovered an exact convergent series for the partition function P(n), the number of integer partitions of a number, improving upon Ramanujan's asymptotic non-convergent series and validating Ramanujan's supposition that an exact series representation existed.
Awards and honors
With his retirement from the University of Pennsylvania, a group of mathematicians provided the seed funding for The Hans A. Rademacher Instructorships, and honored him with an honorary degree as Doctor of Science.
Rademacher is the co-author (with Otto Toeplitz) of the popular mathematics book The Enjoyment of Mathematics, published in German in 1930 and still in print.
Works
with Otto Toeplitz: Von Zahlen und Figuren. 1930. 2nd edn. 1933. Springer 2001, .
The Enjoyment of Mathematics. Von Zahlen und Figuren translated into English by Herbert Zuckerman, Princeton University Press, 1957
with Ernst Steinitz Vorlesungen über die Theorie der Polyeder- unter Einschluss der Elemente der Topologie. Springer 1932, 1976.
Generalization of the Reciprocity Formula for Dedekind Sums. In: Duke Math. Journal. Vol. 21, 1954, pp. 391–397.
Lectures on analytic number theory. 1955.
Lectures on elementary number theory. Blaisdell, New York 1964, Krieger 1977.
with Grosswald: Dedekind sums. Carus Mathematical Monographs 1972.
Topics in analytic number theory. ed. Grosswald. Springer Verlag, 1973 (Grundlehren der mathematischen Wissenschaften).
Collected papers. 2 vols. ed. Grosswald. MIT press, 1974.
Higher mathematics from an elementary point of view. Birkhäuser 1983.
Further reading
George E. Andrews, David M. Bressoud, L. Alayne Parson (eds.) The Rademacher legacy to mathematics. American Mathematical Society, 1994.
|
https://en.wikipedia.org/wiki/Canonical%20ring
|
In mathematics, the pluricanonical ring of an algebraic variety V (which is nonsingular), or of a complex manifold, is the graded ring
$$R(V, K) = \bigoplus_{n \ge 0} H^0(V, K^{n})$$
of sections of powers of the canonical bundle K. Its nth graded component (for $n \ge 0$) is
$$R_n := H^0(V, K^{n}),$$
that is, the space of sections of the n-th tensor product Kn of the canonical bundle K.
The 0th graded component is sections of the trivial bundle, and is one-dimensional as V is projective. The projective variety defined by this graded ring is called the canonical model of V, and the dimension of the canonical model is called the Kodaira dimension of V.
One can define an analogous ring for any line bundle L over V; the analogous dimension is called the Iitaka dimension. A line bundle is called big if the Iitaka dimension equals the dimension of the variety.
Properties
Birational invariance
The canonical ring and therefore likewise the Kodaira dimension is a birational invariant: Any birational map between smooth compact complex manifolds induces an isomorphism between the respective canonical rings. As a consequence one can define the Kodaira dimension of a singular space as the Kodaira dimension of a desingularization. Due to the birational invariance this is well defined, i.e., independent of the choice of the desingularization.
Fundamental conjecture of birational geometry
A basic conjecture is that the pluricanonical ring is finitely generated. This is considered a major step in the Mori program.
Birkar, Cascini, Hacon, and McKernan (2010) proved this conjecture.
The plurigenera
The dimension
$$P_n = \dim H^0(V, K^{n})$$
is the classically defined n-th plurigenus of V. The pluricanonical divisor $nK$, via the corresponding linear system of divisors, gives a map to projective space $\mathbf{P}^{P_n - 1}$, called the n-canonical map.
The size of R is a basic invariant of V, and is called the Kodaira dimension.
Notes
References
Algebraic geometry
Birational geometry
Structures on manifolds
|
https://en.wikipedia.org/wiki/Kodaira%20dimension
|
In algebraic geometry, the Kodaira dimension κ(X) measures the size of the canonical model of a projective variety X.
Igor Shafarevich in a seminar introduced an important numerical invariant of surfaces with the notation κ. Shigeru Iitaka extended it and defined the Kodaira dimension for higher dimensional varieties (under the name of canonical dimension), and later named it after Kunihiko Kodaira.
The plurigenera
The canonical bundle of a smooth algebraic variety X of dimension n over a field is the line bundle of n-forms,
$$K_X = \bigwedge^{n} \Omega^1_X,$$
which is the nth exterior power of the cotangent bundle of X.
For an integer d, the dth tensor power of KX is again a line bundle.
For d ≥ 0, the vector space of global sections H0(X,KXd) has the remarkable property that it is a birational invariant of smooth projective varieties X. That is, this vector space is canonically identified with the corresponding space for any smooth projective variety which is isomorphic to X outside lower-dimensional subsets.
For d ≥ 0, the
dth plurigenus of X is defined as the dimension of the vector space
of global sections of KXd:
The plurigenera are important birational invariants of an algebraic variety. In particular, the simplest way to prove that a variety is not rational (that is, not birational to projective space) is to show that some plurigenus Pd with d > 0
is not zero. If the space of sections of KXd is nonzero, then there is a natural rational map from X to the projective space $\mathbf{P}^{P_d - 1}$, called the d-canonical map. The canonical ring R(KX) of a variety X is the graded ring
$$R(K_X) = \bigoplus_{d \ge 0} H^0(X, K_X^{d}).$$
Also see geometric genus and arithmetic genus.
The Kodaira dimension of X is defined to be $-\infty$ if the plurigenera Pd are zero for all d > 0; otherwise, it is the minimum κ such that $P_d/d^{\kappa}$ is bounded. The Kodaira dimension of an n-dimensional variety is either $-\infty$ or an integer in the range from 0 to n.
Interpretations of the Kodaira dimension
The following integers are equal if they are non-negative. A good reference is , Theorem 2.1.33.
The dimension of the Proj construction $\operatorname{Proj} R(X, K_X)$, a projective variety called the canonical model of X depending only on the birational equivalence class of X. (This is defined only if the canonical ring is finitely generated, which is true in characteristic zero and conjectured in general.)
The dimension of the image of the d-canonical mapping for all positive multiples d of some positive integer $d_0$.
The transcendence degree of the fraction field of R, minus one; i.e. $t - 1$, where t is the number of algebraically independent generators one can find.
The rate of growth of the plurigenera: that is, the smallest number κ such that $P_d/d^{\kappa}$ is bounded. In Big O notation, it is the minimal κ such that $P_d = O(d^{\kappa})$.
When one of these numbers is undefined or negative, then all of them are. In this case, the Kodaira dimension is said to be negative or to be $-\infty$. Some historical references define it to be −1, but then the formula $\kappa(X \times Y) = \kappa(X) + \kappa(Y)$ does not always hold, and the statement of the Iitaka conjecture becomes more complicated. For
|
https://en.wikipedia.org/wiki/Binary%20quadratic%20form
|
In mathematics, a binary quadratic form is a quadratic homogeneous polynomial in two variables
$$q(x, y) = ax^2 + bxy + cy^2,$$
where a, b, c are the coefficients. When the coefficients can be arbitrary complex numbers, most results are not specific to the case of two variables, so they are described in quadratic form. A quadratic form with integer coefficients is called an integral binary quadratic form, often abbreviated to binary quadratic form.
This article is entirely devoted to integral binary quadratic forms. This choice is motivated by their status as the driving force behind the development of algebraic number theory. Since the late nineteenth century, binary quadratic forms have given up their preeminence in algebraic number theory to quadratic and more general number fields, but advances specific to binary quadratic forms still occur on occasion.
Pierre Fermat stated that if p is an odd prime then the equation $p = x^2 + y^2$ has a solution iff $p \equiv 1 \pmod 4$, and he made similar statements about the equations $p = x^2 + 2y^2$, $p = x^2 + 3y^2$, and others.
The expressions $x^2 + y^2$, $x^2 + 2y^2$, $x^2 + 3y^2$, and so on are quadratic forms, and the theory of quadratic forms gives a unified way of looking at and proving these theorems.
Another instance of quadratic forms is Pell's equation $x^2 - ny^2 = 1$.
Binary quadratic forms are closely related to ideals in quadratic fields; this allows the class number of a quadratic field to be calculated by counting the number of reduced binary quadratic forms of a given discriminant.
The classical theta function of 2 variables is $\sum_{(m,n) \in \mathbb{Z}^2} q^{m^2 + n^2}$; if $f(x,y)$ is a positive definite quadratic form, then $\sum_{(m,n) \in \mathbb{Z}^2} q^{f(m,n)}$ is a theta function.
Equivalence
Two forms f and g are called equivalent if there exist integers $\alpha, \beta, \gamma, \delta$ such that the following conditions hold:
$$\alpha\delta - \beta\gamma = 1 \quad\text{and}\quad g(x, y) = f(\alpha x + \beta y, \gamma x + \delta y).$$
For example, with and , , , and , we find that f is equivalent to , which simplifies to .
The above equivalence conditions define an equivalence relation on the set of integral quadratic forms. It follows that the quadratic forms are partitioned into equivalence classes, called classes of quadratic forms. A class invariant can mean either a function defined on equivalence classes of forms or a property shared by all forms in the same class.
Lagrange used a different notion of equivalence, in which the second condition is replaced by . Since Gauss it has been recognized that this definition is inferior to that given above. If there is a need to distinguish, sometimes forms are called properly equivalent using the definition above and improperly equivalent if they are equivalent in Lagrange's sense.
In matrix terminology, which is used occasionally below, when
$$M = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}$$
has integer entries and determinant 1, the map $f \mapsto f(\alpha x + \beta y, \gamma x + \delta y)$ is a (right) group action of $SL_2(\mathbb{Z})$ on the set of binary quadratic forms. The equivalence relation above then arises from the general theory of group actions.
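As an illustration of this action, the following Python sketch (the helper names act and disc are illustrative) applies an $SL_2(\mathbb{Z})$ substitution to a form given by its coefficient triple (a, b, c) and checks that the discriminant is unchanged; the coefficient formulas come from expanding $f(\alpha x + \beta y, \gamma x + \delta y)$.

```python
def act(form, M):
    """Right action of M = ((alpha, beta), (gamma, delta)) on the form (a, b, c):
    substitute x -> alpha*x + beta*y and y -> gamma*x + delta*y."""
    a, b, c = form
    (al, be), (ga, de) = M
    return (a*al*al + b*al*ga + c*ga*ga,
            2*a*al*be + b*(al*de + be*ga) + 2*c*ga*de,
            a*be*be + b*be*de + c*de*de)

def disc(form):
    a, b, c = form
    return b*b - 4*a*c

f = (1, 0, 5)                    # the form x^2 + 5y^2
M = ((1, 3), (0, 1))             # an element of SL_2(Z), determinant 1
g = act(f, M)

print(g, disc(f), disc(g))       # the discriminant is unchanged: a class invariant
```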
If $f(x, y) = ax^2 + bxy + cy^2$, then important invariants include
The discriminant $\Delta = b^2 - 4ac$.
The content, equal to the greatest common divisor of a, b, and c.
Terminology has arisen for classifying classes and their forms in terms of their invariants. A form of discriminant is definite if , de
|
https://en.wikipedia.org/wiki/Implicit
|
Implicit may refer to:
Mathematics
Implicit function
Implicit function theorem
Implicit curve
Implicit surface
Implicit differential equation
Other uses
Implicit assumption, in logic
Implicit-association test, in social psychology
Implicit bit, in floating-point arithmetic
Implicit learning, in learning psychology
Implicit memory, in long-term human memory
Implicit solvation, in computational chemistry
Implicit stereotype (implicit bias), in social identity theory
Implicit type conversion, in computing
See also
Implicit and explicit atheism, types of atheism coined by George H. Smith
Implication (disambiguation)
Implicature
|
https://en.wikipedia.org/wiki/Cobweb%20plot
|
A cobweb plot, or Verhulst diagram, is a visual tool used in the dynamical systems field of mathematics to investigate the qualitative behaviour of one-dimensional iterated functions, such as the logistic map. Using a cobweb plot, it is possible to infer the long term status of an initial condition under repeated application of a map.
Method
For a given iterated function $f$, the plot consists of a diagonal line ($y = x$) and a curve representing $y = f(x)$. To plot the behaviour of a value $x_0$, apply the following steps.
Find the point on the function curve with an x-coordinate of $x_0$. This has the coordinates $(x_0, f(x_0))$.
Plot horizontally across from this point to the diagonal line. This has the coordinates $(f(x_0), f(x_0))$.
Plot vertically from the point on the diagonal to the function curve. This has the coordinates $(f(x_0), f(f(x_0)))$.
Repeat from step 2 as required.
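The steps above translate directly into a short plotting routine. The following Python sketch (the function name cobweb and the choice of the logistic map are illustrative) draws the diagonal, the curve $y = f(x)$, and the alternating vertical and horizontal segments.

```python
import numpy as np
import matplotlib.pyplot as plt

def cobweb(f, x0, steps=50, xmin=0.0, xmax=1.0):
    """Draw a cobweb plot for the one-dimensional map x -> f(x)."""
    xs = np.linspace(xmin, xmax, 400)
    plt.plot(xs, f(xs), label="y = f(x)")
    plt.plot(xs, xs, "k--", label="y = x")

    x = x0
    for _ in range(steps):
        y = f(x)
        plt.plot([x, x], [x, y], "r", lw=0.7)   # vertical segment up/down to the curve
        plt.plot([x, y], [y, y], "r", lw=0.7)   # horizontal segment across to the diagonal
        x = y

    plt.legend()
    plt.show()

# Example: the logistic map with r = 3.4, which settles into a period-2 orbit
# (visible as a rectangle in the plot).
cobweb(lambda x: 3.4 * x * (1 - x), x0=0.2)
```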
Interpretation
On the cobweb plot, a stable fixed point corresponds to an inward spiral, while an unstable fixed point is an outward one. It follows from the definition of a fixed point that these spirals will center at a point where the diagonal y=x line crosses the function graph. A period 2 orbit is represented by a rectangle, while greater period cycles produce further, more complex closed loops. A chaotic orbit would show a 'filled out' area, indicating an infinite number of non-repeating values.
See also
Jones diagram – similar plotting technique
Fixed-point iteration – iterative algorithm to find fixed points (produces a cobweb plot)
References
Plots (graphics)
Dynamical systems
|
https://en.wikipedia.org/wiki/Survey%20of%20Activities%20of%20Young%20People
|
The Survey of Activities of Young People (SAYP) is a national household-based survey of work-related activities among South African children, conducted for the first time in 1999 by Statistics South Africa.
The official results, released in October 2002, provide a national, quantitative picture. They also give an indication of the different categories of working children who are most in need or who are at the greatest risk of exploitation in work and employment.
The survey constituted the first step in the development of the South African Child Labour Programme of Action which was provisionally adopted in September 2003.
A household-based survey cannot pick up some of the worst forms of child labour — for this reason, qualitative research projects are undertaken or planned by the "Towards the Elimination of the worst forms of Child Labour" (TECL) Programme.
External links
The results of the SAYP from the Department of Labour.
Child Labour Programme of Action (South Africa)
1999 establishments in South Africa
Demographics of South Africa
|
https://en.wikipedia.org/wiki/Zero%20ring
|
In ring theory, a branch of mathematics, the zero ring or trivial ring is the unique ring (up to isomorphism) consisting of one element. (Less commonly, the term "zero ring" is used to refer to any rng of square zero, i.e., a rng in which $xy = 0$ for all x and y. This article refers to the one-element ring.)
In the category of rings, the zero ring is the terminal object, whereas the ring of integers Z is the initial object.
Definition
The zero ring, denoted {0} or simply 0, consists of the one-element set {0} with the operations + and · defined such that 0 + 0 = 0 and 0 · 0 = 0.
Properties
The zero ring is the unique ring in which the additive identity 0 and multiplicative identity 1 coincide. (Proof: If $0 = 1$ in a ring R, then for all r in R, we have $r = r \cdot 1 = r \cdot 0 = 0$, using the fact that $r \cdot 0 = 0$ holds in any ring.)
The zero ring is commutative.
The element 0 in the zero ring is a unit, serving as its own multiplicative inverse.
The unit group of the zero ring is the trivial group {0}.
The element 0 in the zero ring is not a zero divisor.
The only ideal in the zero ring is the zero ideal {0}, which is also the unit ideal, equal to the whole ring. This ideal is neither maximal nor prime.
The zero ring is generally excluded from fields, though it is occasionally referred to as the trivial field. Excluding it agrees with the fact that its zero ideal is not maximal. (When mathematicians speak of the "field with one element", they are referring to a non-existent object, and their intention is to define the category that would be the category of schemes over this object if it existed.)
The zero ring is generally excluded from integral domains. Whether the zero ring is considered to be a domain at all is a matter of convention, but there are two advantages to considering it not to be a domain. First, this agrees with the definition that a domain is a ring in which 0 is the only zero divisor (in particular, 0 is required to be a zero divisor, which fails in the zero ring). Second, this way, for a positive integer n, the ring Z/nZ is a domain if and only if n is prime, but 1 is not prime.
For each ring A, there is a unique ring homomorphism from A to the zero ring. Thus the zero ring is a terminal object in the category of rings.
If A is a nonzero ring, then there is no ring homomorphism from the zero ring to A. In particular, the zero ring is not a subring of any nonzero ring.
The zero ring is the unique ring of characteristic 1.
The only module for the zero ring is the zero module. It is free of rank א for any cardinal number א.
The zero ring is not a local ring. It is, however, a semilocal ring.
The zero ring is Artinian and (therefore) Noetherian.
The spectrum of the zero ring is the empty scheme.
The Krull dimension of the zero ring is −∞.
The zero ring is semisimple but not simple.
The zero ring is not a central simple algebra over any field.
The total quotient ring of the zero ring is itself.
Constructions
For any ring A and ideal I of A, the quotient A/I
|
https://en.wikipedia.org/wiki/Hypoexponential%20distribution
|
In probability theory the hypoexponential distribution or the generalized Erlang distribution is a continuous distribution that has found use in the same fields as the Erlang distribution, such as queueing theory, teletraffic engineering and more generally in stochastic processes. It is called the hypoexponential distribution as it has a coefficient of variation less than one, in contrast to the hyper-exponential distribution, which has a coefficient of variation greater than one, and the exponential distribution, which has a coefficient of variation of exactly one.
Overview
The Erlang distribution is a series of k exponential distributions all with rate $\lambda$. The hypoexponential is a series of k exponential distributions each with its own rate $\lambda_i$, the rate of the ith exponential distribution. If we have k independently distributed exponential random variables $X_i$, then the random variable
$$X = \sum_{i=1}^{k} X_i$$
is hypoexponentially distributed. The hypoexponential has a minimum coefficient of variation of $1/\sqrt{k}$.
Relation to the phase-type distribution
As a result of the definition it is easier to consider this distribution as a special case of the phase-type distribution. The phase-type distribution is the time to absorption of a finite state Markov process. If we have a k+1 state process, where the first k states are transient and the state k+1 is an absorbing state, then the distribution of time from the start of the process until the absorbing state is reached is phase-type distributed. This becomes the hypoexponential if we start in the first state and move skip-free from state i to i+1 with rate $\lambda_i$ until state k transitions with rate $\lambda_k$ to the absorbing state k+1. This can be written in the form of a subgenerator matrix,
$$A = \begin{pmatrix} -\lambda_1 & \lambda_1 & 0 & \cdots & 0 \\ 0 & -\lambda_2 & \lambda_2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & -\lambda_{k-1} & \lambda_{k-1} \\ 0 & \cdots & 0 & 0 & -\lambda_k \end{pmatrix}.$$
For simplicity denote the above matrix $A$. If the probability of starting in each of the k states is
$$\boldsymbol\alpha = (1, 0, \dots, 0),$$
then the hypoexponential distribution is the phase-type distribution with representation $(\boldsymbol\alpha, A)$.
Two parameter case
Where the distribution has two parameters ($\lambda_1 \ne \lambda_2$) the explicit forms of the probability functions and the associated statistics are:
CDF: $F(x) = 1 - \frac{\lambda_2}{\lambda_2 - \lambda_1} e^{-\lambda_1 x} + \frac{\lambda_1}{\lambda_2 - \lambda_1} e^{-\lambda_2 x}$
PDF: $f(x) = \frac{\lambda_1 \lambda_2}{\lambda_2 - \lambda_1} \left( e^{-\lambda_1 x} - e^{-\lambda_2 x} \right)$
Mean: $\frac{1}{\lambda_1} + \frac{1}{\lambda_2}$
Variance: $\frac{1}{\lambda_1^2} + \frac{1}{\lambda_2^2}$
Coefficient of variation: $\frac{\sqrt{\lambda_1^2 + \lambda_2^2}}{\lambda_1 + \lambda_2}$
The coefficient of variation is always less than 1.
Given the sample mean ($\bar{x}$) and sample coefficient of variation ($c$), the parameters $\lambda_1$ and $\lambda_2$ can be estimated as follows:
$$\frac{1}{\lambda_{1,2}} = \frac{\bar{x}}{2} \left( 1 \pm \sqrt{2c^2 - 1} \right).$$
These estimators can be derived from the methods of moments by setting $\frac{1}{\lambda_1} + \frac{1}{\lambda_2} = \bar{x}$ and $\frac{\sqrt{\lambda_1^2 + \lambda_2^2}}{\lambda_1 + \lambda_2} = c$.
The resulting parameters $\lambda_1$ and $\lambda_2$ are real (and positive) values if $\tfrac{1}{2} \le c^2 \le 1$.
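A quick numerical check of the two-parameter case, assuming the moment-matching formulas given above (the rates, sample size, and variable names are illustrative): the Python sketch below simulates the sum of two independent exponentials, confirms that the sample coefficient of variation is below one, and inverts the sample mean and coefficient of variation to recover the rates.

```python
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2 = 2.0, 5.0

# A hypoexponential sample: the sum of two independent exponentials with different rates.
x = rng.exponential(1/lam1, 100_000) + rng.exponential(1/lam2, 100_000)

m  = x.mean()
cv = x.std() / m
print(cv < 1)                    # the coefficient of variation is always below 1

# Method-of-moments inversion (real-valued only when cv**2 >= 0.5).
root = np.sqrt(2 * cv**2 - 1)
lam1_hat = 2 / (m * (1 + root))
lam2_hat = 2 / (m * (1 - root))
print(sorted((lam1_hat, lam2_hat)), sorted((lam1, lam2)))   # estimates vs. true rates
```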
Characterization
A random variable $X \sim \operatorname{Hypo}(\lambda_1, \dots, \lambda_k)$ has cumulative distribution function given by
$$F(x) = 1 - \boldsymbol\alpha\, e^{xA}\, \mathbf{1},$$
and density function,
$$f(x) = -\boldsymbol\alpha\, e^{xA} A\, \mathbf{1},$$
where $\mathbf{1}$ is a column vector of ones of the size k and $e^{xA}$ is the matrix exponential of A. When $\lambda_i \ne \lambda_j$ for all $i \ne j$, the density function can be written as
$$f(x) = \sum_{i=1}^{k} \lambda_i e^{-\lambda_i x} \left( \prod_{j=1,\, j \ne i}^{k} \frac{\lambda_j}{\lambda_j - \lambda_i} \right) = \sum_{i=1}^{k} \ell_i(0)\, \lambda_i e^{-\lambda_i x},$$
where $\ell_1(x), \ldots, \ell_k(x)$ are the Lagrange basis polynomials associated with the points $\lambda_1, \ldots, \lambda_k$.
The distribution has Laplace transform
$$\mathcal{L}\{f(x)\}(s) = \prod_{i=1}^{k} \frac{\lambda_i}{\lambda_i + s},$$
which can be used to find moments,
$$E[X^n] = (-1)^n \left. \frac{d^n}{ds^n} \mathcal{L}\{f(x)\}(s) \right|_{s=0}.$$
General case
In the general case
where there are distinct sums of exponential distributions
with rates and a number of terms in each
sum equals to respectively. The cumulative
distribution function for is given by
with
with the additional convention .
Uses
This distribution has been u
|
https://en.wikipedia.org/wiki/Philip%20Dawid
|
Alexander Philip Dawid (pronounced 'David'; born 1 February 1946) is Emeritus Professor of Statistics of the University of Cambridge, and a Fellow of Darwin College, Cambridge. He is a leading proponent of Bayesian statistics.
Education
Dawid was educated at the City of London School, Trinity Hall, Cambridge and Darwin College, Cambridge.
Career and research
Dawid has made fundamental contributions to both the philosophical underpinnings and the practical applications of statistics. His theory of conditional independence is a keystone of modern statistical theory and methods, and he has demonstrated its usefulness in a host of applications, including computation in probabilistic expert systems, causal inference, and forensic identification.
Dawid was lecturer in statistics at University College London from 1969 to 1978. He was subsequently Professor of Statistics at City University, London until 1981, when he returned to UCL as a reader, becoming Pearson Professor of Statistics there in 1982. He moved to the University of Cambridge where he was appointed Professor of Statistics in 2007, retiring in 2013.
Awards and honours
He was elected a member of the International Statistical Institute in 1978, and a Chartered Statistician of the Royal Statistical Society in 1993. He was editor of Biometrika from 1992 to 1996 and President of the International Society for Bayesian Analysis in 2000. He is also an elected Fellow of the Institute of Mathematical Statistics and of the Royal Society. He received the 1977 George W. Snedecor Award from the Committee of Presidents of Statistical Societies.
Dawid was awarded the 1978 Guy Medal in Bronze and the 2001 Guy Medal in Silver by the Royal Statistical Society.
His book Probabilistic Networks and Expert Systems, written jointly with Robert G. Cowell, Steffen Lauritzen, and David Spiegelhalter, received the 2001 DeGroot Prize from the International Society for Bayesian Analysis.
References
1946 births
Fellows of the Institute of Mathematical Statistics
Elected Members of the International Statistical Institute
20th-century British mathematicians
Academics of City, University of London
Academics of University College London
Alumni of Trinity Hall, Cambridge
Alumni of Darwin College, Cambridge
Bayesian statisticians
English statisticians
Fellows of Darwin College, Cambridge
Living people
Cambridge mathematicians
People educated at the City of London School
|
https://en.wikipedia.org/wiki/N%C3%A9ron%E2%80%93Severi%20group
|
In algebraic geometry, the Néron–Severi group of a variety is
the group of divisors modulo algebraic equivalence; in other words it is the group of components of the Picard scheme of a variety. Its rank is called the Picard number. It is named after Francesco Severi and André Néron.
Definition
In the cases of most importance to classical algebraic geometry, for a complete variety V that is non-singular, the connected component of the Picard scheme is an abelian variety written
Pic0(V).
The quotient
Pic(V)/Pic0(V)
is an abelian group NS(V), called the Néron–Severi group of V. This is a finitely-generated abelian group by the Néron–Severi theorem, which was proved by Severi over the complex numbers and by Néron over more general fields.
In other words, the Picard group fits into an exact sequence
$$0 \to \mathrm{Pic}^0(V) \to \mathrm{Pic}(V) \to \mathrm{NS}(V) \to 0.$$
The fact that the rank is finite is Francesco Severi's theorem of the base; the rank is the Picard number of V, often denoted ρ(V). The elements of finite order are called Severi divisors, and form a finite group which is a birational invariant and whose order is called the Severi number. Geometrically NS(V) describes the algebraic equivalence classes of divisors on V; that is, using a stronger, non-linear equivalence relation in place of linear equivalence of divisors, the classification becomes amenable to discrete invariants. Algebraic equivalence is closely related to numerical equivalence, an essentially topological classification by intersection numbers.
First Chern class and integral valued 2-cocycles
The exponential sheaf sequence
gives rise to a long exact sequence featuring
$$H^1(X, \mathcal{O}_X) \longrightarrow H^1(X, \mathcal{O}_X^{*}) \longrightarrow H^2(X, \mathbb{Z}) \longrightarrow H^2(X, \mathcal{O}_X).$$
The first arrow is the first Chern class on the Picard group,
$$c_1 : \mathrm{Pic}(X) \to H^2(X, \mathbb{Z}),$$
and the Néron–Severi group can be identified with its image.
Equivalently, by exactness, the Néron–Severi group is the kernel of the second arrow
$$H^2(X, \mathbb{Z}) \to H^2(X, \mathcal{O}_X).$$
In the complex case, the Neron-Severi group is therefore the group of 2-cocycles whose Poincaré dual is represented by a complex hypersurface, that is, a Weil divisor.
For complex tori
Complex tori are special because they have multiple equivalent definitions of the Néron–Severi group. One definition uses its complex structure for the definition (pg. 30). For a complex torus $X = V/\Lambda$, where $V$ is a complex vector space of dimension $n$ and $\Lambda$ is a lattice of rank $2n$ embedded in $V$, the first Chern class makes it possible to identify the Néron–Severi group with the group of Hermitian forms $H$ on $V$ such that $\operatorname{Im} H(\Lambda, \Lambda) \subseteq \mathbb{Z}$. Note that $\operatorname{Im} H$ is an alternating integral form on the lattice $\Lambda$.
See also
Complex torus
References
A. Néron, Problèmes arithmétiques et géometriques attachée à la notion de rang d'une courbe algébrique dans un corps Bull. Soc. Math. France, 80 (1952) pp. 101–166
A. Néron, La théorie de la base pour les diviseurs sur les variétés algébriques, Coll. Géom. Alg. Liège, G. Thone (1952) pp. 119–126
F. Severi, La base per le varietà algebriche di dimensione qualunque contenute in una data e la teoria generale delle corrispondénze fra i punti di due superficie algebriche
|
https://en.wikipedia.org/wiki/False%20statement
|
A false statement is a statement that is not true. Although the word fallacy is sometimes used as a synonym for false statement, that is not how the word is used in philosophy, mathematics, logic and most formal contexts.
A false statement does not need to be a lie. A lie is a statement that is known to be untrue and is used to mislead. A false statement is a statement that is untrue but not necessarily told to mislead, as a statement given by someone who does not know it is untrue.
Examples of false statements
Misleading statement (lie)
John told his little brother that sea otters aren't mammals, but fish, even though John himself was a marine biologist and knew otherwise. John simply wanted to see his little brother fail his class report, in order to teach him to begin projects early, which would help him develop the skills necessary to succeed in life.
Statement made out of ignorance
James, John's brother, stated in his class report that sea otters were fish. James got an F after his teacher pointed out why that statement was false. James did not know that sea otters were in fact mammals because he heard that sea otters were fish from his older brother John, a marine biologist.
In law
In some jurisdictions, false statement is a crime similar to perjury.
United States
In U.S. law, a "false statement" generally refers to United States federal false statements statute, contained in . Most commonly, prosecutors use this statute to reach cover-up crimes such as perjury, false declarations, and obstruction of justice and government fraud cases. Its earliest progenitor was the False Claims Act of 1863, and in 1934 the requirement of an intent to defraud was eliminated to enforce the National Industrial Recovery Act of 1933 (NIRA) against producers of "hot oil", oil produced in violation of production restrictions established pursuant to the NIRA.
The statute criminalizes a government official who "knowingly and willfully":
(1) falsifies, conceals, or covers up by any trick, scheme, or device a material fact;(2) makes any materially false, fictitious, or fraudulent statement or representation; or(3) makes or uses any false writing or document knowing the same to contain any materially false, fictitious, or fraudulent statement or entry.
See also
Misinformation
Fake news
False accusation
False statements of fact
Jumping to conclusions
Making false statements
References
Statements
|
https://en.wikipedia.org/wiki/Connection
|
Connection may refer to:
Mathematics
Connection (algebraic framework)
Connection (mathematics), a way of specifying a derivative of a geometrical object along a vector field on a manifold
Connection (affine bundle)
Connection (composite bundle)
Connection (fibred manifold)
Connection (principal bundle), gives the derivative of a section of a principal bundle
Connection (vector bundle), differentiates a section of a vector bundle along a vector field
Cartan connection, achieved by identifying tangent spaces with the tangent space of a certain model Klein geometry
Ehresmann connection, gives a manner for differentiating sections of a general fibre bundle
Electrical connection, allows the flow of electrons
Galois connection, a type of correspondence between two partially ordered sets
Affine connection, a geometric object on a smooth manifold which connects nearby tangent spaces
Levi-Civita connection, used in differential geometry and general relativity; differentiates a vector field along another vector field
Music
Connection (The Green Children album), 2013
Connection (Don Ellis album), 1972
Connection (Up10tion album), 2021
Connection (EP), a 2000 split EP by Home Grown and Limbeck
Connection, a 2019 EP by Seyong
"Connection" (Elastica song) (1994)
"Connection" (OneRepublic song) (2018)
"Connection" (Rolling Stones song) (1967)
"Connection", a song by Avail from Satiate
"Connection", a 1976 song by Can from Unlimited Edition
"Connection", a song by the Kooks from 10 Tracks to Echo in the Dark (2022)
Other uses
Connection (film), a 2017 Konkani film in Goa
Connection (dance), a means of communication between the lead and the follow
Layover or connection, a transfer from one means of transport to another
See also
Connected sum
Connectedness
Connecting (TV series)
Connections (disambiguation)
Connexion (disambiguation)
Contiguity (disambiguation)
Database connection
Disconnection (disambiguation)
Link (disambiguation)
PC Connection, a Fortune 1000, National Technology Solutions Provider, based in Merrimack, New Hampshire
Rapport
Six degrees of separation
Telecommunication circuit, the complete path between two terminals
The Connection (disambiguation)
Virtual connection, also known as a virtual circuit
|
https://en.wikipedia.org/wiki/Hamming%20bound
|
In mathematics and computer science, in the field of coding theory, the Hamming bound is a limit on the parameters of an arbitrary block code: it is also known as the sphere-packing bound or the volume bound from an interpretation in terms of packing balls in the Hamming metric into the space of all possible words. It gives an important limitation on the efficiency with which any error-correcting code can utilize the space in which its code words are embedded. A code that attains the Hamming bound is said to be a perfect code.
Background on error-correcting codes
An original message and an encoded version are both composed in an alphabet of q letters. Each code word contains n letters. The original message (of length m) is shorter than n letters. The message is converted into an n-letter codeword by an encoding algorithm, transmitted over a noisy channel, and finally decoded by the receiver. The decoding process interprets a garbled codeword, referred to as simply a word, as the valid codeword "nearest" the n-letter received string.
Mathematically, there are exactly qm possible messages of length m, and each message can be regarded as a vector of length m. The encoding scheme converts an m-dimensional vector into an n-dimensional vector. Exactly qm valid codewords are possible, but any one of qn words can be received because the noisy channel might distort one or more of the n letters when a codeword is transmitted.
Statement of the bound
Preliminary definitions
An alphabet set $\mathcal{A}_q$ is a set of symbols with $q$ elements. The set of strings of length $n$ on the alphabet set $\mathcal{A}_q$ is denoted $\mathcal{A}_q^n$. (There are $q^n$ distinct strings in this set of strings.) A $q$-ary block code of length $n$ is a subset of the strings of $\mathcal{A}_q^n$, where the alphabet set $\mathcal{A}_q$ is any alphabet set having $q$ elements.
Defining the bound
Let $A_q(n, d)$ denote the maximum possible size of a $q$-ary block code $C$ of length $n$ and minimum Hamming distance $d$ between elements of the block code (necessarily positive when the code contains more than one codeword).
Then, the Hamming bound is:
$$A_q(n, d) \le \frac{q^n}{\sum_{k=0}^{t} \binom{n}{k}(q-1)^k},$$
where
$$t = \left\lfloor \frac{d-1}{2} \right\rfloor.$$
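Assuming the bound as stated above, it is straightforward to evaluate numerically. The following Python sketch (the function name hamming_bound is illustrative) computes the right-hand side and checks it against the binary Hamming code of length 7, which attains the bound and is therefore perfect.

```python
from math import comb

def hamming_bound(q, n, d):
    """Sphere-packing upper bound on A_q(n, d): q**n divided by the volume
    of a Hamming ball of radius t = floor((d - 1) / 2)."""
    t = (d - 1) // 2
    ball_volume = sum(comb(n, k) * (q - 1) ** k for k in range(t + 1))
    return q ** n / ball_volume

# The binary [7, 4] Hamming code has n = 7, d = 3 and 2**4 = 16 codewords,
# which meets the bound exactly.
print(hamming_bound(2, 7, 3))    # 16.0
```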
Proof
It follows from the definition of $d$ that if at most
$$t = \left\lfloor \frac{d-1}{2} \right\rfloor$$
errors are made during transmission of a codeword then minimum distance decoding will decode it correctly (i.e., it decodes the received word as the codeword that was sent). Thus the code is said to be capable of correcting $t$ errors.
For each codeword $c \in C$, consider a ball of fixed radius $t$ around $c$. Every pair of these balls (Hamming spheres) are non-intersecting by the $t$-error-correcting property. Let $m$ be the number of words in each ball (in other words, the volume of the ball). A word that is in such a ball can deviate in at most $t$ components from those of the ball's centre, which is a codeword. The number of such words is then obtained by choosing up to $t$ of the $n$ components of a codeword to deviate to one of $(q-1)$ possible other values (recall, the code is $q$-ary: it takes values in $\mathcal{A}_q^n$). Thus,
$$m = \sum_{k=0}^{t} \binom{n}{k}(q-1)^k.$$
$A_q(n, d)$ is the (maximum) total number of codewords in $C$, and so, by the definition of $t$, the greatest number of balls with no two balls having a wo
|
https://en.wikipedia.org/wiki/Lagrangian%20foliation
|
In mathematics, a Lagrangian foliation or polarization is a foliation of a symplectic manifold, whose leaves are Lagrangian submanifolds. It is one of the steps involved in the geometric quantization of square-integrable functions on a symplectic manifold.
References
Kenji FUKAYA, Floer homology of Lagrangian Foliation and Noncommutative Mirror Symmetry, (2000)
Symplectic geometry
Foliations
Mathematical quantization
|
https://en.wikipedia.org/wiki/Freudenthal%20suspension%20theorem
|
In mathematics, and specifically in the field of homotopy theory, the Freudenthal suspension theorem is the fundamental result leading to the concept of stabilization of homotopy groups and ultimately to stable homotopy theory. It explains the behavior of simultaneously taking suspensions and increasing the index of the homotopy groups of the space in question. It was proved in 1937 by Hans Freudenthal.
The theorem is a corollary of the homotopy excision theorem.
Statement of the theorem
Let X be an n-connected pointed space (a pointed CW-complex or pointed simplicial set). The map
$$X \to \Omega(\Sigma X)$$
induces a map
$$\pi_k(X) \to \pi_k(\Omega(\Sigma X))$$
on homotopy groups, where Ω denotes the loop functor and Σ denotes the reduced suspension functor. The suspension theorem then states that the induced map on homotopy groups is an isomorphism if k ≤ 2n and an epimorphism if k = 2n + 1.
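In symbols, the statement just given can be summarized as follows (a LaTeX rendering of the suspension homomorphism for an n-connected space X):

```latex
\Sigma_* \colon \pi_k(X) \longrightarrow \pi_{k+1}(\Sigma X)
\quad\text{is}\quad
\begin{cases}
  \text{an isomorphism} & \text{if } k \le 2n,\\
  \text{an epimorphism} & \text{if } k = 2n + 1.
\end{cases}
```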
A basic result on loop spaces gives the relation
$$\pi_k(\Omega(\Sigma X)) \cong \pi_{k+1}(\Sigma X),$$
so the theorem could otherwise be stated in terms of the map
$$\pi_k(X) \to \pi_{k+1}(\Sigma X),$$
with the small caveat that in this case one must be careful with the indexing.
Proof
As mentioned above, the Freudenthal suspension theorem follows quickly from homotopy excision; this proof is in terms of the natural map . If a space is -connected, then the pair of spaces is -connected, where is the reduced cone over ; this follows from the relative homotopy long exact sequence. We can decompose as two copies of , say , whose intersection is . Then, homotopy excision says the inclusion map:
induces isomorphisms on and a surjection on . From the same relative long exact sequence, and since in addition cones are contractible,
Putting this all together, we get
for , i.e. , as claimed above; for the left and right maps are isomorphisms, regardless of how connected is, and the middle one is a surjection by excision, so the composition is a surjection as claimed.
Corollary 1
Let Sn denote the n-sphere and note that it is (n − 1)-connected, so that the groups $\pi_{n+k}(S^n)$ stabilize for $n \ge k + 2$ by the Freudenthal theorem. These groups represent the kth stable homotopy group of spheres.
Corollary 2
More generally, for fixed k ≥ 1, k ≤ 2n for sufficiently large n, so that any n-connected space X will have corresponding stabilized homotopy groups. These groups are actually the homotopy groups of an object corresponding to X in the stable homotopy category.
References
.
.
.
Theorems in homotopy theory
|
https://en.wikipedia.org/wiki/Factor%20theorem
|
In algebra, the factor theorem connects polynomial factors with polynomial roots. Specifically, if $f(x)$ is a polynomial, then $x - a$ is a factor of $f(x)$ if and only if $f(a) = 0$ (that is, $a$ is a root of the polynomial). The theorem is a special case of the polynomial remainder theorem.
The theorem results from basic properties of addition and multiplication. It follows that the theorem holds also when the coefficients and the element belong to any commutative ring, and not just a field.
In particular, since multivariate polynomials can be viewed as univariate in one of their variables, the following generalization holds: if $f$ and $g$ are multivariate polynomials and $g$ is independent of the variable $x$, then $x - g$ is a factor of $f$ if and only if $f$ with $x$ replaced by $g$ is the zero polynomial.
Factorization of polynomials
Two problems where the factor theorem is commonly applied are those of factoring a polynomial and finding the roots of a polynomial equation; it is a direct consequence of the theorem that these problems are essentially equivalent.
The factor theorem is also used to remove known zeros from a polynomial while leaving all unknown zeros intact, thus producing a lower degree polynomial whose zeros may be easier to find. Abstractly, the method is as follows:
Deduce a candidate zero $a$ of the polynomial $f$ from its leading coefficient $a_n$ and constant term $a_0$. (See Rational Root Theorem.)
Use the factor theorem to conclude that $(x - a)$ is a factor of $f(x)$.
Compute the polynomial $g(x) = f(x) / (x - a)$, for example using polynomial long division or synthetic division.
Conclude that any root $x \neq a$ of $f(x)$ is a root of $g(x)$. Since the polynomial degree of $g$ is one less than that of $f$, it is "simpler" to find the remaining zeros by studying $g$.
Continue the process until the polynomial is factored completely into factors that are irreducible over the rationals or the reals (a code sketch of this procedure follows the list).
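A minimal sketch of the procedure in Python (the function names and the example cubic are illustrative; the root search uses the rational root theorem, and the remainder-zero test plays the role of the factor theorem):

```python
from itertools import chain

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def synthetic_division(coeffs, r):
    """Divide a polynomial (coefficients from highest degree down) by (x - r).
    Returns (quotient coefficients, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    return out[:-1], out[-1]

def rational_root_factors(coeffs):
    """Peel off a linear factor (x - r) for each rational root r found via the
    rational root theorem; return the roots and the remaining quotient."""
    roots = []
    while len(coeffs) >= 2:
        candidates = chain.from_iterable(
            (p / s, -p / s) for p in divisors(coeffs[-1]) for s in divisors(coeffs[0]))
        for r in candidates:
            quotient, remainder = synthetic_division(coeffs, r)
            if remainder == 0:        # factor theorem: (x - r) divides the polynomial
                roots.append(r)
                coeffs = quotient
                break
        else:
            break                     # no further rational roots found
    return roots, coeffs

# Example: x^3 - 2x^2 - 5x + 6 = (x - 1)(x + 2)(x - 3)
print(rational_root_factors([1, -2, -5, 6]))   # ([1.0, -2.0, 3.0], [1])
```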
Example
Find the factors of
Solution: Let be the above polynomial
Constant term = 2
Coefficient of
All possible factors of 2 are and . Substituting , we get:
So, , i.e., is a factor of . On dividing by , we get
Quotient =
Hence,
Out of these, the quadratic factor can be further factored using the quadratic formula, which gives as roots of the quadratic Thus the three irreducible factors of the original polynomial are and
Proof
Several proofs of the theorem are presented here.
If $x - a$ is a factor of $f(x)$, it is immediate that $f(a) = 0$. So, only the converse will be proved in the following.
Proof 1
This argument begins by verifying the theorem for $a = 0$. That is, it aims to show that for any polynomial $f(x)$ for which $f(0) = 0$ it is true that $f(x) = x \cdot g(x)$ for some polynomial $g(x)$. To that end, write $f(x)$ explicitly as $c_n x^n + c_{n-1} x^{n-1} + \cdots + c_1 x + c_0$. Now observe that $0 = f(0) = c_0$, so $c_0 = 0$. Thus, $f(x) = x(c_n x^{n-1} + c_{n-1} x^{n-2} + \cdots + c_1) = x \cdot g(x)$. This case is now proven.
What remains is to prove the theorem for general $a$ by reducing to the $a = 0$ case. To that end, observe that $f(x + a)$ is a polynomial with a root at $x = 0$. By what has been shown above, it follows that $f(x + a) = x \cdot g(x)$ for some polynomial $g(x)$. Finally, $f(x) = f((x - a) + a) = (x - a)\, g(x - a)$.
Proof 2
First, observe that whenever and belong to any commutative ring (the same one) then the identity is true. This is shown by multiplyi
|
https://en.wikipedia.org/wiki/CO3
|
CO3 or Co3 may refer to:
Carbon trioxide
Carbonate
MT-CO3
A postcode district in Colchester, UK
Conway group Co3 in mathematics
Co3, Australian contemporary dance company listed in Australian contemporary dance
Company 3
Colorado's 3rd congressional district
|
https://en.wikipedia.org/wiki/Exponential%20sheaf%20sequence
|
In mathematics, the exponential sheaf sequence is a fundamental short exact sequence of sheaves used in complex geometry.
Let M be a complex manifold, and write OM for the sheaf of holomorphic functions on M. Let OM* be the subsheaf consisting of the non-vanishing holomorphic functions. These are both sheaves of abelian groups. The exponential function gives a sheaf homomorphism
$$\exp : \mathcal{O}_M \to \mathcal{O}_M^{*},$$
because for a holomorphic function f, exp(f) is a non-vanishing holomorphic function, and exp(f + g) = exp(f)exp(g). Its kernel is the sheaf 2πiZ of locally constant functions on M taking the values 2πin, with n an integer. The exponential sheaf sequence is therefore
$$0 \to 2\pi i\,\mathbb{Z} \to \mathcal{O}_M \xrightarrow{\ \exp\ } \mathcal{O}_M^{*} \to 0.$$
The exponential mapping here is not always a surjective map on sections; this can be seen for example when M is a punctured disk in the complex plane. The exponential map is surjective on the stalks: given a germ g of a holomorphic function at a point P such that g(P) ≠ 0, one can take the logarithm of g in a neighborhood of P. The long exact sequence of sheaf cohomology shows that we have an exact sequence
$$H^0(\mathcal{O}_M|_U) \to H^0(\mathcal{O}_M^{*}|_U) \to H^1(2\pi i\,\mathbb{Z}|_U) \to H^1(\mathcal{O}_M|_U)$$
for any open set U of M. Here H0 means simply the sections over U, and the sheaf cohomology H1(2πiZ|U) is the singular cohomology of U.
One can think of H1(2πiZ|U) as associating an integer to each loop in U. For each section of OM*, the connecting homomorphism to H1(2πiZ|U) gives the winding number for each loop. So this homomorphism is therefore a generalized winding number and measures the failure of U to be contractible. In other words, there is a potential topological obstruction to taking a global logarithm of a non-vanishing holomorphic function, something that is always locally possible.
A further consequence of the sequence is the exactness of
$$H^1(\mathcal{O}_M) \to H^1(\mathcal{O}_M^{*}) \to H^2(2\pi i\,\mathbb{Z}) \to H^2(\mathcal{O}_M).$$
Here H1(OM*) can be identified with the Picard group of holomorphic line bundles on M. The connecting homomorphism sends a line bundle to its first Chern class.
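Putting the last two statements together, the connecting homomorphism of the long exact sequence can be written (using the usual identification of the coefficient sheaf 2πiZ with Z) as

```latex
c_1 \colon \operatorname{Pic}(M) \;\cong\; H^1(M, \mathcal{O}_M^{*})
\;\longrightarrow\; H^2(M, 2\pi i\,\mathbb{Z}) \;\cong\; H^2(M, \mathbb{Z}).
```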
References
, see especially p. 37 and p. 139
Complex manifolds
Sheaf theory
|
https://en.wikipedia.org/wiki/List%20of%20algebraic%20coding%20theory%20topics
|
This is a list of algebraic coding theory topics.
Algebraic coding theory
|
https://en.wikipedia.org/wiki/Algebraic%20geometry%20code
|
Algebraic geometry codes, often abbreviated AG codes, are a type of linear code that generalize Reed–Solomon codes. The Russian mathematician V. D. Goppa constructed these codes for the first time in 1982.
History
The name of these codes has evolved since the publication of Goppa's paper describing them. Historically these codes have also been referred to as geometric Goppa codes; however, this is no longer the standard term used in coding theory literature. This is due to the fact that Goppa codes are a distinct class of codes which were also constructed by Goppa in the early 1970s.
These codes attracted interest in the coding theory community because they have the ability to surpass the Gilbert–Varshamov bound; at the time this was discovered, the Gilbert–Varshamov bound had not been broken in the 30 years since its discovery. This was demonstrated by Tsfasman, Vladut, and Zink in the same year as the code construction was published, in their paper "Modular curves, Shimura curves, and Goppa codes, better than Varshamov-Gilbert bound". The name of this paper may be one source of confusion affecting references to algebraic geometry codes throughout 1980s and 1990s coding theory literature.
Construction
In this section the construction of algebraic geometry codes is described. The section starts with the ideas behind Reed–Solomon codes, which are used to motivate the construction of algebraic geometry codes.
Reed–Solomon codes
Algebraic geometry codes are a generalization of Reed–Solomon codes. Constructed by Irving Reed and Gustave Solomon in 1960, Reed–Solomon codes use univariate polynomials to form codewords, by evaluating polynomials of sufficiently small degree at the points in a finite field $\mathbb{F}_q$.
Formally, Reed–Solomon codes are defined in the following way. Let $\mathbb{F}_q = \{\alpha_1, \ldots, \alpha_q\}$. Set a positive integer $k \le q$. Let
$$\mathbb{F}_q[x]_{<k} = \{\, f \in \mathbb{F}_q[x] : \deg f < k \,\}.$$
The Reed–Solomon code is the evaluation code
$$RS(q, k) = \{\, (f(\alpha_1), f(\alpha_2), \ldots, f(\alpha_q)) : f \in \mathbb{F}_q[x]_{<k} \,\}.$$
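A small Python sketch of the evaluation construction (the field size, dimension, and helper names are illustrative): it builds a code over the prime field with 7 elements by evaluating every polynomial of degree less than 3 at all field elements, then checks the code size and the minimum distance $n - k + 1$.

```python
from itertools import product

q, k = 7, 3                      # the prime field F_7 and polynomials of degree < 3
points = list(range(q))          # evaluate at every element of F_7

def evaluate(poly):
    """Evaluate a polynomial (coefficients c_0, ..., c_{k-1}) at all field elements."""
    return tuple(sum(c * pow(a, i, q) for i, c in enumerate(poly)) % q for a in points)

# The Reed-Solomon code: the set of all evaluation vectors.
codewords = {evaluate(poly) for poly in product(range(q), repeat=k)}

print(len(codewords) == q ** k)  # q^k distinct codewords: evaluation is injective for k <= q
min_dist = min(sum(a != b for a, b in zip(u, v))
               for u in codewords for v in codewords if u != v)
print(min_dist == q - k + 1)     # MDS property: minimum distance n - k + 1
```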
Codes from algebraic curves
Goppa observed that $\mathbb{F}_q$ can be considered as an affine line, with corresponding projective line $\mathbb{P}^1$. Then, the polynomials in $\mathbb{F}_q[x]_{<k}$ (i.e. the polynomials of degree less than $k$ over $\mathbb{F}_q$) can be thought of as polynomials with pole allowance no more than $k - 1$ at the point at infinity in $\mathbb{P}^1$.
With this idea in mind, Goppa looked toward the Riemann–Roch theorem. The elements of a Riemann–Roch space are exactly those functions with pole order restricted below a given threshold, with the restriction being encoded in the coefficients of a corresponding divisor. Evaluating those functions at the rational points on an algebraic curve $X$ over $\mathbb{F}_q$ (that is, the points of the curve with coordinates in $\mathbb{F}_q$) gives a code in the same sense as the Reed–Solomon construction.
However, because the parameters of algebraic geometry codes are connected to algebraic function fields, the definitions of the codes are often given in the language of algebraic function fields over finite fields. Nevertheless, it is important to remember the connection to algebraic curves, as this provides a more geometrically intuitive method of thinking about AG
|
https://en.wikipedia.org/wiki/Richard%20H.%20Schwartz
|
Richard H. Schwartz is a professor emeritus of mathematics at the College of Staten Island; president emeritus of the Jewish Vegetarians of North America (JVNA); and co-founder and coordinator of the Society of Ethical and Religious Vegetarians (SERV). He is best known as a Jewish vegetarian activist and advocate for animal rights in the United States and Israel.
Early life
Schwartz was born in Arverne, New York in 1934. His father, Joseph (Zundel), was 31 at the time, and his mother, Rose, was 29. They were not vegetarians, nor was he a vegetarian as a youth. He describes his upbringing as being a "meat and potatoes person" whose favorite dish was pot roast. In 1975, he began teaching a course called "Mathematics and the Environment" at the College of Staten Island.
Schwartz married Loretta Suskind on February 14, 1960, at the Utopia Jewish Center in Queens. He reports that he became vegetarian in 1977 and vegan in 2000.
Activism
As an Orthodox Jew, Schwartz began to explore what Judaism had to say about diet, ecology, and the proper treatment of animals. The result was his best-known book, Judaism and Vegetarianism. It was first published in 1982, with later, expanded editions published in 1988 and 2001. It explores vegetarianism from the standpoint of biblical, Talmudic, and rabbinical references, and concludes that vegetarianism is the highest form of kosher and the best diet for Jews in the modern world. The second edition was a B'nai Brith Book Club Selection that same year.
Schwartz argues that the realities of animal-based diets and agriculture conflict with basic Jewish mandates to preserve human health, treat animals with compassion, protect the environment, conserve natural resources, help hungry people, and pursue peace. He has been active in a variety of vegetarian and animal rights organizations, and in July 2005 was inducted into the Vegetarian Hall of Fame by the North American Vegetarian Society (NAVS). The ceremony was held at the 31st Annual NAVS Summerfest on the University of Pittsburgh campus. Schwartz also spoke at the Summerfest on "Judaism and Vegetarianism" and "Ten Approaches to Obtain a Vegetarian-Conscious World by 2010."
In 2010, Schwartz served as a Green Zionist Alliance delegate to the World Zionist Congress.
Schwartz was involved in the formation of the Jewish Vegetarians of North America. He became president of the organization in 2002 and continues to serve as president emeritus.
Schwartz also reaches out to vegetarians from other religions, and his writings helped inspire the formation of the Christian Vegetarian Association, and their original campaign and literature, namely "What Would Jesus Eat...Today?" This campaign has more recently evolved into the broader "Honoring God's Creation" campaign and has strongly influenced the Christian vegetarian movement. He also is president of the interfaith group, "Society of Ethical and Religious Vegetarians" (SERV), which he cofounded.
A Sacred Duty
Schwart
|
https://en.wikipedia.org/wiki/Society%20for%20Industrial%20and%20Applied%20Mathematics
|
Society for Industrial and Applied Mathematics (SIAM) is a professional society dedicated to applied mathematics, computational science, and data science through research, publications, and community. SIAM is the world's largest scientific society devoted to applied mathematics, and roughly two-thirds of its membership resides within the United States. Founded in 1951, the organization began holding annual national meetings in 1954, and now hosts conferences, publishes books and scholarly journals, and engages in advocacy in issues of interest to its membership. Members include engineers, scientists, and mathematicians, both those employed in academia and those working in industry. The society supports educational institutions promoting applied mathematics.
SIAM is one of the four member organizations of the Joint Policy Board for Mathematics.
Membership
Membership is open to both individuals and organizations. By the end of its first full year of operation, SIAM had 130 members; by 1968, it had 3,700.
Student members can join SIAM chapters affiliated and run by students and faculty at universities. Most universities with SIAM chapters are in the United States (including Harvard and MIT), but SIAM chapters also exist in other countries, for example at Oxford, at the École Polytechnique Fédérale de Lausanne and at Peking University. SIAM publishes the SIAM Undergraduate Research Online, a venue for undergraduate research in applied and computational mathematics. (SIAM also offers the SIAM Visiting Lecture Program, which helps arrange visits from industrial mathematicians to speak to student groups about applied mathematics and their own professional experiences.)
In 2009, SIAM instituted a Fellows program to recognize certain members who have made outstanding contributions to the fields that SIAM serves.
Activity groups
The society includes a number of activity groups (SIAGs) to allow for more focused group discussions and collaborations. Activity groups organize domain-specific conferences and minisymposia, and award prizes.
Unlike special interest groups in similar academic associations like ACM, activity groups are chartered for a fixed period of time, typically for two years, and require submitting a petition to the SIAM Council and Board for renewal. Charter approval is largely based on group size, as topics that were considered hot at one time may have fewer active researchers later.
Current Activity Groups:
Algebraic Geometry
Analysis of Partial Differential Equations
Applied and Computational Discrete Algorithms
Applied Mathematics Education
Computational Science and Engineering
Control and Systems Theory
Data Science
Discrete Mathematics
Dynamical Systems
Financial Mathematics and Engineering
Geometric Design
Geosciences
Imaging Science
Life Sciences
Linear Algebra
Mathematical Aspects of Materials Science
Mathematics of Planet Earth
Nonlinear Waves and Coherent Structures
Optimization
Orthogonal Polynomials and Special Function
|
https://en.wikipedia.org/wiki/Shulba%20Sutras
|
The Shulva Sutras or Śulbasūtras (Sanskrit: शुल्बसूत्र; śulba: "string, cord, rope") are sutra texts belonging to the Śrauta ritual and containing geometry related to fire-altar construction.
Purpose and origins
The Shulba Sutras are part of the larger corpus of texts called the Shrauta Sutras, considered to be appendices to the Vedas. They are the only sources of knowledge of Indian mathematics from the Vedic period. Unique fire-altar shapes were associated with unique gifts from the Gods. For instance, "he who desires heaven is to construct a fire-altar in the form of a falcon"; "a fire-altar in the form of a tortoise is to be constructed by one desiring to win the world of Brahman" and "those who wish to destroy existing and future enemies should construct a fire-altar in the form of a rhombus".
The four major Shulba Sutras, which are mathematically the most significant, are those attributed to Baudhayana, Manava, Apastamba and Katyayana. Their language is late Vedic Sanskrit, pointing to a composition roughly during the 1st millennium BCE. The oldest is the sutra attributed to Baudhayana, possibly compiled around 800 BCE to 500 BCE. Pingree says that the Apastamba is likely the next oldest; he places the Katyayana and the Manava third and fourth chronologically, on the basis of apparent borrowings. According to Plofker, the Katyayana was composed after "the great grammatical codification of Sanskrit by Pāṇini in probably the mid-fourth century BCE", but she places the Manava in the same period as the Baudhayana.
With regard to the composition of Vedic texts, Plofker writes,The Vedic veneration of Sanskrit as a sacred speech, whose divinely revealed texts were meant to be recited, heard, and memorized rather than transmitted in writing, helped shape Sanskrit literature in general. ... Thus texts were composed in formats that could be easily memorized: either condensed prose aphorisms (sūtras, a word later applied to mean a rule or algorithm in general) or verse, particularly in the Classical period. Naturally, ease of memorization sometimes interfered with ease of comprehension. As a result, most treatises were supplemented by one or more prose commentaries ..." There are multiple commentaries for each of the Shulba Sutras, but these were written long after the original works. The commentary of Sundararāja on the Apastamba, for example, comes from the late 15th century CE and the commentary of Dvārakãnātha on the Baudhayana appears to borrow from Sundararāja. According to Staal, certain aspects of the tradition described in the Shulba Sutras would have been "transmitted orally", and he points to places in southern India where the fire-altar ritual is still practiced and an oral tradition preserved. The fire-altar tradition largely died out in India, however, and Plofker warns that those pockets where the practice remains may reflect a later Vedic revival rather than an unbroken tradition. Archaeological evidence of the altar constru
|
https://en.wikipedia.org/wiki/Brahmagupta%20theorem
|
In geometry, Brahmagupta's theorem states that if a cyclic quadrilateral is orthodiagonal (that is, has perpendicular diagonals), then the perpendicular to a side from the point of intersection of the diagonals always bisects the opposite side. It is named after the Indian mathematician Brahmagupta (598-668).
More specifically, let A, B, C and D be four points on a circle such that the lines AC and BD are perpendicular. Denote the intersection of AC and BD by M. Drop the perpendicular from M to the line BC, calling the intersection E. Let F be the intersection of the line EM and the edge AD. Then, the theorem states that F is the midpoint of AD.
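The statement is easy to check numerically. The following Python sketch (coordinates chosen so that AC is vertical and BD is horizontal, hence perpendicular) constructs such a configuration on the unit circle and verifies that F is the midpoint of AD.

```python
import numpy as np

def intersect(p, u, q, v):
    """Intersection of the lines p + t*u and q + s*v."""
    t, _ = np.linalg.solve(np.column_stack((u, -v)), q - p)
    return p + t * u

# Four points on the unit circle with AC vertical and BD horizontal (so AC is perpendicular to BD).
a, b = 0.3, -0.4
A = np.array([a,  np.sqrt(1 - a**2)])
C = np.array([a, -np.sqrt(1 - a**2)])
B = np.array([ np.sqrt(1 - b**2), b])
D = np.array([-np.sqrt(1 - b**2), b])
M = np.array([a, b])                         # intersection of the diagonals AC and BD

# E: foot of the perpendicular dropped from M onto the side BC.
BC = C - B
E = B + np.dot(M - B, BC) / np.dot(BC, BC) * BC

# F: intersection of the line EM with the side AD.
F = intersect(E, M - E, A, D - A)

print(np.allclose(F, (A + D) / 2))           # True: F is the midpoint of AD
```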
Proof
We need to prove that AF = FD. We will prove that both AF and FD are in fact equal to FM.
To prove that AF = FM, first note that the angles FAM and CBM are equal, because they are inscribed angles that intercept the same arc of the circle. Furthermore, the angles CBM and CME are both complementary to angle BCM (i.e., they add up to 90°), and are therefore equal. Finally, the angles CME and FMA are the same. Hence, AFM is an isosceles triangle, and thus the sides AF and FM are equal.
The proof that FD = FM goes similarly: the angles FDM, BCM, BME and DMF are all equal, so DFM is an isosceles triangle, so FD = FM. It follows that AF = FD, as the theorem claims.
See also
Brahmagupta's formula for the area of a cyclic quadrilateral
References
External links
Brahmagupta's Theorem at cut-the-knot
Brahmagupta
Theorems about quadrilaterals and circles
Articles containing proofs
|
https://en.wikipedia.org/wiki/Wigner%20quasiprobability%20distribution
|
The Wigner quasiprobability distribution (also called the Wigner function or the Wigner–Ville distribution, after Eugene Wigner and Jean-André Ville) is a quasiprobability distribution. It was introduced by Eugene Wigner in 1932 to study quantum corrections to classical statistical mechanics. The goal was to link the wavefunction that appears in Schrödinger's equation to a probability distribution in phase space.
It is a generating function for all spatial autocorrelation functions of a given quantum-mechanical wavefunction ψ(x).
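For a pure state with wavefunction ψ(x), the distribution is conventionally written (in one common choice of sign and normalization) as

```latex
W(x, p) \;=\; \frac{1}{\pi\hbar} \int_{-\infty}^{\infty}
  \psi^{*}(x + y)\,\psi(x - y)\,e^{2ipy/\hbar}\,dy .
```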
Thus, it maps on the quantum density matrix in the map between real phase-space functions and Hermitian operators introduced by Hermann Weyl in 1927, in a context related to representation theory in mathematics (see Weyl quantization). In effect, it is the Wigner–Weyl transform of the density matrix, so the realization of that operator in phase space. It was later rederived by Jean Ville in 1948 as a quadratic (in signal) representation of the local time-frequency energy of a signal, effectively a spectrogram.
In 1949, José Enrique Moyal, who had derived it independently, recognized it as the quantum moment-generating functional, and thus as the basis of an elegant encoding of all quantum expectation values, and hence quantum mechanics, in phase space (see Phase-space formulation). It has applications in statistical mechanics, quantum chemistry, quantum optics, classical optics and signal analysis in diverse fields, such as electrical engineering, seismology, time–frequency analysis for music signals, spectrograms in biology and speech processing, and engine design.
Relation to classical mechanics
A classical particle has a definite position and momentum, and hence it is represented by a point in phase space. Given a collection (ensemble) of particles, the probability of finding a particle at a certain position in phase space is specified by a probability distribution, the Liouville density. This strict interpretation fails
for a quantum particle, due to the uncertainty principle. Instead, the above quasiprobability Wigner distribution plays an analogous role, but does not satisfy all the properties of a conventional probability distribution; and, conversely, satisfies boundedness properties unavailable to classical distributions.
For instance, the Wigner distribution can and normally does take on negative values for states which have no classical model—and is a convenient indicator of quantum-mechanical interference. (See below for a characterization of pure states whose Wigner functions are non-negative.)
Smoothing the Wigner distribution through a filter of size larger than (e.g., convolving with a
phase-space Gaussian, a Weierstrass transform, to yield the Husimi representation, below), results in a positive-semidefinite function, i.e., it may be thought to have been coarsened to a semi-classical one.
Regions of such negative value are provable (by convolving them with a small Gaussian) to be "small": they cannot
|
https://en.wikipedia.org/wiki/Semiregular%20space
|
A semiregular space is a topological space whose regular open sets (sets that equal the interiors of their closures) form a base for the topology.
Examples and sufficient conditions
Every regular space is semiregular, and every topological space may be embedded into a semiregular space.
The space with the double origin topology and the Arens square are examples of spaces that are Hausdorff semiregular, but not regular.
See also
Notes
References
Lynn Arthur Steen and J. Arthur Seebach, Jr., Counterexamples in Topology. Springer-Verlag, New York, 1978. Reprinted by Dover Publications, New York, 1995. (Dover edition).
Properties of topological spaces
Separation axioms
|
https://en.wikipedia.org/wiki/Research%20Experiences%20for%20Undergraduates
|
Research Experiences for Undergraduates (or REUs) are competitive summer research programs in the United States for undergraduates studying science, engineering, or mathematics. The programs are sponsored by the National Science Foundation, and are hosted in various universities. REUs tend to be specialized in a particular field of science. There are REUs in many scientific fields such as mathematics, physics, chemistry, geology, biology, psychology, and computer science.
There are two kinds of REU experiences: REU individual experiences (funded by NSF via their REU Supplements category of grant supplements) and REU sites (funded by NSF via their REU Sites category of grant proposals).
How students apply to participate
REU sites typically consist of ten undergraduates working in the research program of the host institution either in the US or abroad, for example, CERN. As the program is funded by the NSF, undergraduates must be citizens or permanent residents of the US or its possessions to be eligible for funding. However, some REU sites accept "self-funder" international students. Applications are typically due between February and the end of April. The length of the application ranges from a single letter of reference without supporting materials all the way up to something comparable to a college admissions application. The programs generally require between one and three letters of reference, a transcript, 0-2 essays, a letter of interest, a resume, a biographical form, or some combination thereof. Although all eligible students are encouraged to apply, there is an emphasis on including populations underrepresented in science—women, underrepresented minorities, and persons with disabilities.
REU individual experiences typically consist of one undergraduate student, or two undergraduate students working together. Sometimes these undergraduates work with a larger research team that includes graduate students. These REU experiences take place at the student's current university, and can last anywhere from a few weeks to an entire year. The application process varies by the particular faculty member who plans to work with these students.
Compensation
Students participating in REU sites are generally provided with a modest stipend ($4,000–$6,000 for 10 weeks of work), housing, transportation to and from the site, and often arrangements for food. REU individual experiences pay (stipends or on an hourly basis) at about the same rate as REU sites.
History
Research grants which included undergraduate research assistants have been funded from the very beginning of the NSF. But in 1958, the NSF established the Undergraduate Research Participation Program, and funding for that program continued until FY 1982, when it was abolished in the Reagan Administration cuts of NSF education funding. A program to enhance research experiences for undergraduates was reestablished in FY 1987 with the title Research Experiences for Undergraduates. One long-run
|
https://en.wikipedia.org/wiki/Polygon%20%28disambiguation%29
|
A polygon is a geometric figure.
Polygon may also refer to:
Mathematics and computing
Simple polygon, a single contiguous closed region, the more common usage of "polygon"
Star polygon, a star-like polygon
Polygon (computer graphics), a representation of a polygon in computer graphics
Companies
Polygon (blockchain)
Polygon Bikes, an Indonesian bike company
Polygon Books, an imprint of Birlinn Limited
Polygon Pictures, Japanese 3DCG anime studio
Polygon Records, a 1950s record company
Places
Semipalatinsk Test Site, a nuclear test site near Semey, Kazakhstan
The Polygon, Southampton, a district in the city of Southampton
Polygon Wood, Zonnebeke, Belgium, site of the Battle of Passchendaele in World War I
Other uses
Polygon (film), a 1977 Soviet animated film
Polygon (website), a video game website
POLYGON experiment, an experiment in physical oceanography which established the existence of mesoscale eddies
Polygon Man, the former mascot for the Sony PlayStation in North America
Polygon, a Danish magazine (pencil and paper) version of the strategy board game Hex
Polygon, a chemical compound also known as sodium triphosphate
Polygons, a type of patterned ground created by permafrost expanding and contracting
See also
Poligon Creative Centre
Polygone, an electronic warfare tactics range between Germany and France
The Polygon (disambiguation)
|
https://en.wikipedia.org/wiki/Zvonimir%20Janko
|
Zvonimir Janko (26 July 1932 – 12 April 2022) was a Croatian mathematician who was the eponym of the Janko groups, sporadic simple groups in group theory. The first few sporadic simple groups, discovered by Émile Léonard Mathieu, were then called the Mathieu groups. Ninety years after the discovery of the last Mathieu group, Zvonimir Janko constructed a new sporadic simple group in 1964. In his honour, this group is now called J1. This discovery launched the modern theory of sporadic groups and it was an important milestone in the classification of finite simple groups.
Biography
Janko was born in Bjelovar, Croatia. He studied at the University of Zagreb where he received Ph.D. in 1960, with advisor Vladimir Devidé. The title of the thesis was Dekompozicija nekih klasa nedegeneriranih Rédeiovih grupa na Schreierova proširenja (Decomposition of some classes of nondegenerate Rédei Groups on Schreier extensions), in which he solved a problem posed by László Rédei. He then taught physics at a high school in Široki Brijeg in Bosnia and Herzegovina.
In 1962 Janko decided to leave Yugoslavia for Australia, where he first taught at Monash University in Melbourne. In 1964 he joined as full professor the Australian National University in Canberra. He then moved in 1968 to the United States, where he first was a visiting professor at Princeton University, and then full professor at the Ohio State University. In 1970 he was an invited speaker at the International Congress of Mathematicians in Nice. In 1972 Janko moved to Germany, where he was a full professor at Heidelberg University until his retirement in 2000.
Janko discovered his first sporadic simple group (called J1) in 1964, when he was at the Australian National University. This was followed in 1966 by the prediction of J2, whose existence was established in 1968 by Marshall Hall and David Wales, and J3, whose existence was established in 1969 by Graham Higman and John McKay. Finally, Janko found the group J4 in 1975; its existence was confirmed in 1980 by Simon P. Norton and others using computer calculations.
See also
Iwasawa group
Lyons group
Thin group (finite group theory)
References
1932 births
2022 deaths
People from Bjelovar
20th-century Croatian mathematicians
Yugoslav mathematicians
Group theorists
Faculty of Science, University of Zagreb alumni
Academic staff of the University of Zagreb
Ohio State University faculty
Academic staff of Heidelberg University
Academic staff of Monash University
Academic staff of the Australian National University
|
https://en.wikipedia.org/wiki/PPRM
|
PPRM may refer to:
Positive Polarity Reed-Muller: representation of a boolean function as a single algebraic sum (xor) of one or more conjunctions of one or more literals
Greater Romania Party
|
https://en.wikipedia.org/wiki/Fantasy%20wrestling
|
Fantasy wrestling is an umbrella term representing the genre of role-playing and statistics-based games which are set in the world of professional wrestling. Several variants of fantasy wrestling exist which may be differentiated by the way they are transmitted (through websites, message boards, e-mail, postal mail, face-to-face, etc.), the method in which the storyline is determined, (via roleplay, "angles", strategy- or statistics-based systems, etc.) and how the roster is composed (are characters created by the players, are real wrestlers "imported" into the game, etc.).
Fantasy wrestling's roots lie in the play-by-mail wrestling games often featured in professional wrestling magazines that became prominent in the mid-to-late 1980s during one of professional wrestling's boom periods. By the late 1980s, fantasy wrestling games had started to appear on the internet. In the early 1990s, the advent of national bulletin board services like Prodigy, AOL, and Compuserve allowed players to use e-mail and bulletin boards to more easily trade information and post roleplays. As technology progressed and the internet evolved, fantasy wrestling enthusiasts took advantage, using websites and newsgroups to connect and build broader communities for gameplay.
History and progression
Creation of character(s)
In order to begin fantasy wrestling, one must create a custom character. Some people will elect to use pro-wrestlers over their own, custom characters; this can be either allowed or rejected by the administrator of the federation. Usually, one will decide the following: physical appearance (height, weight, sex; these can be copied through "renders" - images of pro-wrestlers - or through their imagination/CAW mechanics through wrestling games), moveset (style, signature moves, finishing moves, etc.), fan reaction (heel or face; booed or cheered), and entrance music. Once this is all decided, the character is named and the player can begin to promo in the federation. Some players create multiple characters, either for choice for playing or to freshen the experience/start new.
Play-by-mail
Early versions of the game began in the 1980s using play-by-mail formats. Based on the moves and any strategies applied the adjudicator would then decide the outcome. Play-by-mail leagues often included a 'pay to play' model where handlers paid a fee per match and/or 'strategy' applied. The later expansion into email was a natural progression, often using the same mechanics as the pbm format.
Internet email play
In the late 1980s, e-wrestling got started on the Internet, played by email and often advertised via Usenet, including rec.games.pbm and rec.games.frp. The early games followed the model of a simplified role-playing game with "combat systems" of varying complexity, resolved by the Federation Head, or "Fedhead." The role-playing aspect was significant and the fast turn-around of email allowed for collaboration in the creation of "promos" and the formation of
|
https://en.wikipedia.org/wiki/Plebanski%20action
|
General relativity and supergravity in all dimensions meet each other at a common assumption:
Any configuration space can be coordinatized by gauge fields A^i_a(x), where the index i is a Lie algebra index and a is a spatial manifold index.
Using these assumptions one can construct an effective field theory in low energies for both. In this form the action of general relativity can be written in the form of the Plebanski action which can be constructed using the Palatini action to derive Einstein's field equations of general relativity.
The form of the action introduced by Plebanski is:
where
are internal indices, is a curvature on the orthogonal group and the connection variables (the gauge fields) are denoted by . The symbol is the Lagrangian multiplier and is the antisymmetric symbol valued over .
The specific definition
formally satisfies Einstein's field equations of general relativity.
One application is to the Barrett–Crane model.
See also
Tetradic Palatini action
Barrett–Crane model
BF model
References
Variational formalism of general relativity
|
https://en.wikipedia.org/wiki/Riemann%20series%20theorem
|
In mathematics, the Riemann series theorem, also called the Riemann rearrangement theorem, named after 19th-century German mathematician Bernhard Riemann, says that if an infinite series of real numbers is conditionally convergent, then its terms can be arranged in a permutation so that the new series converges to an arbitrary real number, or diverges. This implies that a series of real numbers is absolutely convergent if and only if it is unconditionally convergent.
As an example, the series 1 − 1 + 1/2 − 1/2 + 1/3 − 1/3 + ⋯ converges to 0 (for a sufficiently large number of terms, the partial sum gets arbitrarily near to 0); but replacing all terms with their absolute values gives 1 + 1 + 1/2 + 1/2 + 1/3 + 1/3 + ⋯, which sums to infinity. Thus the original series is conditionally convergent, and can be rearranged (by taking the first two positive terms followed by the first negative term, followed by the next two positive terms and then the next negative term, etc.) to give a series that converges to a different sum: 1 + 1/2 − 1 + 1/3 + 1/4 − 1/2 + ⋯ = ln 2. More generally, using this procedure with p positives followed by q negatives gives the sum ln(p/q). Other rearrangements give other finite sums or do not converge to any sum.
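This rearrangement is easy to examine numerically. The following Python sketch (an illustration, not part of the article) sums the series in repeated blocks of p as-yet-unused positive terms followed by q as-yet-unused negative terms and compares the partial sums with ln(p/q).

import math
from itertools import count

def rearranged_partial_sum(p, q, blocks):
    # Partial sum of 1 - 1 + 1/2 - 1/2 + ... rearranged so that each block takes the
    # next p unused positive terms (1/n) followed by the next q unused negative terms (-1/n).
    pos, neg = count(1), count(1)   # index of the next unused positive / negative term
    total = 0.0
    for _ in range(blocks):
        for _ in range(p):
            total += 1.0 / next(pos)
        for _ in range(q):
            total -= 1.0 / next(neg)
    return total

print(rearranged_partial_sum(1, 1, 100000))                  # original ordering: tends to 0
print(rearranged_partial_sum(2, 1, 100000), math.log(2))     # two positives per negative: ln 2
print(rearranged_partial_sum(3, 2, 100000), math.log(3 / 2)) # p = 3, q = 2: ln(3/2)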
History
It is a basic result that the sum of finitely many numbers does not depend on the order in which they are added. For example, . The observation that the sum of an infinite sequence of numbers can depend on the ordering of the summands is commonly attributed to Augustin-Louis Cauchy in 1833. He analyzed the alternating harmonic series, showing that certain rearrangements of its summands result in different limits. Around the same time, Peter Gustav Lejeune Dirichlet highlighted that such phenomena are ruled out in the context of absolute convergence, and gave further examples of Cauchy's phenomenon for some other series which fail to be absolutely convergent.
In the course of his analysis of Fourier series and the theory of Riemann integration, Bernhard Riemann gave a full characterization of the rearrangement phenomena. He proved that in the case of a convergent series which does not converge absolutely (known as conditional convergence), rearrangements can be found so that the new series converges to any arbitrarily prescribed real number. Riemann's theorem is now considered as a basic part of the field of mathematical analysis.
For any series, one may consider the set of all possible sums, corresponding to all possible rearrangements of the summands. Riemann's theorem can be formulated as saying that, for a series of real numbers, this set is either empty, a single point (in the case of absolute convergence), or the entire real number line (in the case of conditional convergence). In this formulation, Riemann's theorem was extended by Paul Lévy and Ernst Steinitz to series whose summands are complex numbers or, even more generally, elements of a finite-dimensional real vector space. They proved that the se
|
https://en.wikipedia.org/wiki/List%20of%20stochastic%20processes%20topics
|
In the mathematics of probability, a stochastic process is a random function. In practical applications, the domain over which the function is defined is a time interval (time series) or a region of space (random field).
Familiar examples of time series include stock market and exchange rate fluctuations, signals such as speech, audio and video; medical data such as a patient's EKG, EEG, blood pressure or temperature; and random movement such as Brownian motion or random walks.
Examples of random fields include static images, random topographies (landscapes), or composition variations of an inhomogeneous material.
Stochastic processes topics
This list is currently incomplete. See also :Category:Stochastic processes
Basic affine jump diffusion
Bernoulli process: discrete-time processes with two possible states.
Bernoulli schemes: discrete-time processes with N possible states; every stationary process in N outcomes is a Bernoulli scheme, and vice versa.
Bessel process
Birth–death process
Branching process
Branching random walk
Brownian bridge
Brownian motion
Chinese restaurant process
CIR process
Continuous stochastic process
Cox process
Dirichlet processes
Finite-dimensional distribution
First passage time
Galton–Watson process
Gamma process
Gaussian process – a process where all linear combinations of coordinates are normally distributed random variables.
Gauss–Markov process (cf. below)
GenI process
Girsanov's theorem
Hawkes process
Homogeneous processes: processes where the domain has some symmetry and the finite-dimensional probability distributions also have that symmetry. Special cases include stationary processes, also called time-homogeneous.
Karhunen–Loève theorem
Lévy process
Local time (mathematics)
Loop-erased random walk
Markov processes are those in which the future is conditionally independent of the past given the present.
Markov chain
Markov chain central limit theorem
Continuous-time Markov process
Markov process
Semi-Markov process
Gauss–Markov processes: processes that are both Gaussian and Markov
Martingales – processes with constraints on the expectation
Onsager–Machlup function
Ornstein–Uhlenbeck process
Percolation theory
Point processes: random arrangements of points in a space S. They can be modelled as stochastic processes where the domain is a sufficiently large family of subsets of S, ordered by inclusion; the range is the set of natural numbers; and, if A is a subset of B, ƒ(A) ≤ ƒ(B) with probability 1.
Poisson process
Compound Poisson process
Population process
Probabilistic cellular automaton
Queueing theory
Queue
Random field
Gaussian random field
Markov random field
Sample-continuous process
Stationary process
Stochastic calculus
Itô calculus
Malliavin calculus
Semimartingale
Stratonovich integral
Stochastic control
Stochastic differential equation
Stochastic proces
|
https://en.wikipedia.org/wiki/Area%20of%20a%20circle
|
In geometry, the area enclosed by a circle of radius r is πr². Here the Greek letter π represents the constant ratio of the circumference of any circle to its diameter, approximately equal to 3.14159.
One method of deriving this formula, which originated with Archimedes, involves viewing the circle as the limit of a sequence of regular polygons with an increasing number of sides. The area of a regular polygon is half its perimeter multiplied by the distance from its center to its sides, and because the sequence tends to a circle, the corresponding formula–that the area is half the circumference times the radius–namely, A = (1/2) × 2πr × r = πr², holds for a circle.
Terminology
Although often referred to as the area of a circle in informal contexts, strictly speaking the term disk refers to the interior region of the circle, while circle is reserved for the boundary only, which is a curve and covers no area itself. Therefore, the area of a disk is the more precise phrase for the area enclosed by a circle.
History
Modern mathematics can obtain the area using the methods of integral calculus or its more sophisticated offspring, real analysis. However, the area of a disk was studied by the Ancient Greeks. Eudoxus of Cnidus in the fifth century B.C. had found that the area of a disk is proportional to its radius squared. Archimedes used the tools of Euclidean geometry to show that the area inside a circle is equal to that of a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius in his book Measurement of a Circle. The circumference is 2πr, and the area of a triangle is half the base times the height, yielding the area πr² for the disk. Prior to Archimedes, Hippocrates of Chios was the first to show that the area of a disk is proportional to the square of its diameter, as part of his quadrature of the lune of Hippocrates, but did not identify the constant of proportionality.
Historical arguments
A variety of arguments have been advanced historically to establish the equation to varying degrees of mathematical rigor. The most famous of these is Archimedes' method of exhaustion, one of the earliest uses of the mathematical concept of a limit, as well as the origin of Archimedes' axiom which remains part of the standard analytical treatment of the real number system. The original proof of Archimedes is not rigorous by modern standards, because it assumes that we can compare the length of arc of a circle to the length of a secant and a tangent line, and similar statements about the area, as geometrically evident.
Using polygons
The area of a regular polygon is half its perimeter times the apothem. As the number of sides of the regular polygon increases, the polygon tends to a circle, and the apothem tends to the radius. This suggests that the area of a disk is half the circumference of its bounding circle times the radius.
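The following Python sketch (illustrative only) makes this limiting argument concrete for a unit circle: the area of an inscribed regular n-gon, computed as half the perimeter times the apothem, approaches π as n grows.

import math

def regular_polygon_area(n, r=1.0):
    # Area of a regular n-gon inscribed in a circle of radius r:
    # half the perimeter times the apothem.
    side = 2 * r * math.sin(math.pi / n)    # side length of the inscribed n-gon
    apothem = r * math.cos(math.pi / n)     # distance from the centre to a side
    return 0.5 * (n * side) * apothem

for n in (6, 12, 96, 1000, 100000):         # Archimedes worked with a 96-gon
    print(n, regular_polygon_area(n))
print("pi * r^2 =", math.pi)                # limiting value for r = 1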
Archimedes's proof
Following Archimedes' argument in The Measurement of a Circle (c. 260 BC
|
https://en.wikipedia.org/wiki/Elliott%E2%80%93Halberstam%20conjecture
|
In number theory, the Elliott–Halberstam conjecture is a conjecture about the distribution of prime numbers in arithmetic progressions. It has many applications in sieve theory. It is named for Peter D. T. A. Elliott and Heini Halberstam, who stated the conjecture in 1968.
Stating the conjecture requires some notation. Let π(x), the prime-counting function, denote the number of primes less than or equal to x. If q is a positive integer and a is coprime to q, we let π(x; q, a) denote the number of primes less than or equal to x which are equal to a modulo q. Dirichlet's theorem on primes in arithmetic progressions then tells us that
π(x; q, a) ≈ π(x)/φ(q),
where φ is Euler's totient function. If we then define the error function
E(x; q) = max_a | π(x; q, a) − π(x)/φ(q) |,
where the max is taken over all a coprime to q, then the Elliott–Halberstam conjecture is the assertion that
for every θ < 1 and A > 0 there exists a constant C > 0 such that
Σ_{1 ≤ q ≤ x^θ} E(x; q) ≤ C x / (log x)^A
for all x > 2.
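For small x the error term can be examined directly. The following Python sketch (an illustration using a naive sieve, not part of the article) computes E(x; q) from the definitions above for a few moduli q.

from math import gcd

def primes_up_to(x):
    # Simple sieve of Eratosthenes.
    sieve = bytearray([1]) * (x + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def error_term(x, q):
    # E(x; q) = max over a coprime to q of | pi(x; q, a) - pi(x) / phi(q) |.
    primes = primes_up_to(x)
    residues = [a for a in range(1, q + 1) if gcd(a, q) == 1]
    counts = dict.fromkeys(residues, 0)
    for p in primes:
        r = p % q
        if r in counts:
            counts[r] += 1
    expected = len(primes) / len(residues)   # pi(x) / phi(q)
    return max(abs(counts[a] - expected) for a in residues)

x = 10 ** 5
for q in (3, 4, 5, 101):
    print(q, round(error_term(x, q), 2))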
This conjecture was proven for all θ < 1/2 by Enrico Bombieri and A. I. Vinogradov (the Bombieri–Vinogradov theorem, sometimes known simply as "Bombieri's theorem"); this result is already quite useful, being an averaged form of the generalized Riemann hypothesis. It is known that the conjecture fails at the endpoint θ = 1.
The Elliott–Halberstam conjecture has several consequences. One striking one is the result announced by Dan Goldston, János Pintz, and Cem Yıldırım, which shows (assuming this conjecture) that there are infinitely many pairs of primes which differ by at most 16. In November 2013, James Maynard showed that subject to the Elliott–Halberstam conjecture, one can show the existence of infinitely many pairs of consecutive primes that differ by at most 12. In August 2014, the Polymath group showed that subject to the generalized Elliott–Halberstam conjecture, one can show the existence of infinitely many pairs of consecutive primes that differ by at most 6. Without assuming any form of the conjecture, the lowest proven bound is 246.
See also
Barban–Davenport–Halberstam theorem
Sexy prime
Siegel–Walfisz theorem
Notes
Analytic number theory
Conjectures about prime numbers
Unsolved problems in number theory
|
https://en.wikipedia.org/wiki/Robinson%E2%80%93Schensted%20correspondence
|
In mathematics, the Robinson–Schensted correspondence is a bijective correspondence between permutations and pairs of standard Young tableaux of the same shape. It has various descriptions, all of which are of algorithmic nature, it has many remarkable properties, and it has applications in combinatorics and other areas such as representation theory. The correspondence has been generalized in numerous ways, notably by Knuth to what is known as the Robinson–Schensted–Knuth correspondence, and a further generalization to pictures by Zelevinsky.
The simplest description of the correspondence is using the Schensted algorithm, a procedure that constructs one tableau by successively inserting the values of the permutation according to a specific rule, while the other tableau records the evolution of the shape during construction. The correspondence had been described, in a rather different form, much earlier by Robinson, in an attempt to prove the Littlewood–Richardson rule. The correspondence is often referred to as the Robinson–Schensted algorithm, although the procedure used by Robinson is radically different from the Schensted algorithm, and almost entirely forgotten. Other methods of defining the correspondence include a nondeterministic algorithm in terms of jeu de taquin.
The bijective nature of the correspondence relates it to the enumerative identity
n! = Σλ (tλ)²,
where the sum runs over the set of partitions λ of n (or of Young diagrams with n squares), and tλ denotes the number of standard Young tableaux of shape λ.
The Schensted algorithm
The Schensted algorithm starts from the permutation σ written in two-line notation
σ = ( 1 2 … n ; σ1 σ2 … σn ),
where σi = σ(i), and proceeds by constructing sequentially a sequence of (intermediate) ordered pairs of Young tableaux of the same shape:
(P0, Q0), (P1, Q1), …, (Pn, Qn),
where P0 = Q0 are empty tableaux. The output tableaux are P = Pn and Q = Qn. Once Pi−1 is constructed, one forms Pi by inserting σi into Pi−1, and then Qi by adding an entry i to Qi−1 in the square added to the shape by the insertion (so that Pi and Qi have equal shapes for all i). Because of the more passive role of the tableaux Qi, the final one Qn, which is part of the output and from which the previous Qi are easily read off, is called the recording tableau; by contrast the tableaux Pi are called insertion tableaux.
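The construction just described is short to implement. The following Python sketch (an illustration, not from the article) performs the row-insertions detailed in the next subsection and returns the insertion tableau P and the recording tableau Q for a permutation given in one-line notation.

from bisect import bisect_right

def robinson_schensted(permutation):
    # Build the insertion tableau P and the recording tableau Q by successive
    # Schensted row-insertions of the values of the permutation.
    P, Q = [], []
    for step, value in enumerate(permutation, start=1):
        row = 0
        while True:
            if row == len(P):                        # start a new row at the bottom
                P.append([value])
                Q.append([step])
                break
            pos = bisect_right(P[row], value)        # leftmost entry strictly larger than value
            if pos == len(P[row]):                   # no larger entry: append at the end
                P[row].append(value)
                Q[row].append(step)
                break
            P[row][pos], value = value, P[row][pos]  # bump the displaced entry to the next row
            row += 1
    return P, Q

P, Q = robinson_schensted([3, 1, 4, 2, 5])
print(P)    # [[1, 2, 5], [3, 4]]
print(Q)    # [[1, 3, 5], [2, 4]] -- same shape, recording when each square was added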
Insertion
The basic procedure used to insert each is called Schensted insertion or row-insertion (to distinguish it from a variant procedure called column-insertion). Its simplest form is defined in terms of "incomplete standard tableaux": like standard tableaux they have distinct entries, forming increasing rows and columns, but some values (still to be inserted) may be absent as entries. The procedure takes as arguments such a tableau and a value not present as entry of ; it produces as output a new tableau denoted and a square by which its shape has grown. The value appears in the first row of , either having been added at the end (if no entries larger than were present), or otherwise replacing the first entry in the first row of . In the
|
https://en.wikipedia.org/wiki/Jeffreys%20prior
|
In Bayesian probability, the Jeffreys prior, named after Sir Harold Jeffreys, is a non-informative prior distribution for a parameter space; its density function is proportional to the square root of the determinant of the Fisher information matrix:
p(θ) ∝ √det I(θ).
It has the key feature that it is invariant under a change of coordinates for the parameter vector . That is, the relative probability assigned to a volume of a probability space using a Jeffreys prior will be the same regardless of the parameterization used to define the Jeffreys prior. This makes it of special interest for use with scale parameters.
In maximum likelihood estimation of exponential family models, penalty terms based on the Jeffreys prior were shown to reduce asymptotic bias in point estimates.
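As a concrete illustration (the Bernoulli model with success probability p is chosen here for the example), the following Python sketch computes the unnormalized Jeffreys prior directly from the definition, as the square root of the Fisher information of a single observation; the result is proportional to a Beta(1/2, 1/2) density.

import numpy as np

def jeffreys_prior_bernoulli(p):
    # The score of one Bernoulli observation x is x/p - (1 - x)/(1 - p), and the Fisher
    # information I(p) = E[score^2] works out to 1/(p(1 - p)); the prior is sqrt(I(p)).
    score_x1 = 1.0 / p             # score at x = 1
    score_x0 = -1.0 / (1.0 - p)    # score at x = 0
    fisher = p * score_x1 ** 2 + (1 - p) * score_x0 ** 2
    return np.sqrt(fisher)

ps = np.linspace(0.1, 0.9, 9)
print(np.round([jeffreys_prior_bernoulli(p) for p in ps], 3))
print(np.round(1.0 / np.sqrt(ps * (1 - ps)), 3))   # the same values, via the closed form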
Reparameterization
One-parameter case
If θ and φ are two possible parametrizations of a statistical model, and θ is a continuously differentiable function of φ, we say that the prior p_θ(θ) is "invariant" under a reparametrization if
p_φ(φ) = p_θ(θ) |dθ/dφ|,
that is, if the priors p_θ(θ) and p_φ(φ) are related by the usual change of variables theorem.
Since the Fisher information transforms under reparametrization as
I_φ(φ) = I_θ(θ) (dθ/dφ)²,
defining the priors as p_θ(θ) ∝ √I_θ(θ) and p_φ(φ) ∝ √I_φ(φ) gives us the desired "invariance".
Multiple-parameter case
Analogous to the one-parameter case, let θ and φ be two possible parametrizations of a statistical model, with θ a continuously differentiable function of φ. We call the prior p_θ(θ) "invariant" under reparametrization if
p_φ(φ) = p_θ(θ) |det J|,
where J is the Jacobian matrix with entries
J_ik = ∂θ_i / ∂φ_k.
Since the Fisher information matrix transforms under reparametrization as
I_φ(φ) = Jᵀ I_θ(θ) J,
we have that
det I_φ(φ) = det I_θ(θ) (det J)²,
and thus defining the priors as p_θ(θ) ∝ √det I_θ(θ) and p_φ(φ) ∝ √det I_φ(φ) gives us the desired "invariance".
Attributes
From a practical and mathematical standpoint, a valid reason to use this non-informative prior instead of others, like the ones obtained through a limit in conjugate families of distributions, is that the relative probability of a volume of the probability space is not dependent upon the set of parameter variables that is chosen to describe parameter space.
Sometimes the Jeffreys prior cannot be normalized, and is thus an improper prior. For example, the Jeffreys prior for the distribution mean is uniform over the entire real line in the case of a Gaussian distribution of known variance.
Use of the Jeffreys prior violates the strong version of the likelihood principle, which is accepted by many, but by no means all, statisticians. When using the Jeffreys prior, inferences about depend not just on the probability of the observed data as a function of , but also on the universe of all possible experimental outcomes, as determined by the experimental design, because the Fisher information is computed from an expectation over the chosen universe. Accordingly, the Jeffreys prior, and hence the inferences made using it, may be different for two experiments involving the same parameter even when the likelihood functions for the two experiments are the same—a violation of the strong likelihood principle.
Mi
|
https://en.wikipedia.org/wiki/Trigonal%20bipyramidal%20molecular%20geometry
|
In chemistry, a trigonal bipyramid formation is a molecular geometry with one atom at the center and 5 more atoms at the corners of a triangular bipyramid. This is one geometry for which the bond angles surrounding the central atom are not identical (see also pentagonal bipyramid), because there is no geometrical arrangement with five terminal atoms in equivalent positions. Examples of this molecular geometry are phosphorus pentafluoride (), and phosphorus pentachloride () in the gas phase.
Axial (or apical) and equatorial positions
The five atoms bonded to the central atom are not all equivalent, and two different types of position are defined. For phosphorus pentachloride as an example, the phosphorus atom shares a plane with three chlorine atoms at 120° angles to each other in equatorial positions, and two more chlorine atoms above and below the plane (axial or apical positions).
According to the VSEPR theory of molecular geometry, an axial position is more crowded because an axial atom has three neighboring equatorial atoms (on the same central atom) at a 90° bond angle, whereas an equatorial atom has only two neighboring axial atoms at a 90° bond angle. For molecules with five identical ligands, the axial bond lengths tend to be longer because the ligand atom cannot approach the central atom as closely. As examples, in PF5 the axial P−F bond length is 158 pm and the equatorial is 152 pm, and in PCl5 the axial and equatorial are 214 and 202 pm respectively.
In the mixed halide PF3Cl2 the chlorines occupy two of the equatorial positions, indicating that fluorine has a greater apicophilicity or tendency to occupy an axial position. In general ligand apicophilicity increases with electronegativity and also with pi-electron withdrawing ability, as in the sequence Cl < F < CN. Both factors decrease electron density in the bonding region near the central atom so that crowding in the axial position is less important.
Related geometries with lone pairs
The VSEPR theory also predicts that substitution of a ligand at a central atom by a lone pair of valence electrons leaves the general form of the electron arrangement unchanged with the lone pair now occupying one position. For molecules with five pairs of valence electrons including both bonding pairs and lone pairs, the electron pairs are still arranged in a trigonal bipyramid but one or more equatorial positions is not attached to a ligand atom so that the molecular geometry (for the nuclei only) is different.
The seesaw molecular geometry is found in sulfur tetrafluoride (SF4) with a central sulfur atom surrounded by four fluorine atoms occupying two axial and two equatorial positions, as well as one equatorial lone pair, corresponding to an AX4E molecule in the AXE notation. A T-shaped molecular geometry is found in chlorine trifluoride (ClF3), an AX3E2 molecule with fluorine atoms in two axial and one equatorial position, as well as two equatorial lone pairs. Finally, the triiodide ion () i
|
https://en.wikipedia.org/wiki/Chi%20distribution
|
In probability theory and statistics, the chi distribution is a continuous probability distribution over the non-negative real line. It is the distribution of the positive square root of a sum of squared independent Gaussian random variables. Equivalently, it is the distribution of the Euclidean distance between a multivariate Gaussian random variable and the origin. It is thus related to the chi-squared distribution by describing the distribution of the positive square roots of a variable obeying a chi-squared distribution.
If Z1, …, Zk are independent, normally distributed random variables with mean 0 and standard deviation 1, then the statistic
Y = √(Z1² + ⋯ + Zk²)
is distributed according to the chi distribution. The chi distribution has one positive integer parameter k, which specifies the degrees of freedom (i.e. the number of random variables Zi).
The most familiar examples are the Rayleigh distribution (chi distribution with two degrees of freedom) and the Maxwell–Boltzmann distribution of the molecular speeds in an ideal gas (chi distribution with three degrees of freedom).
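A quick Monte Carlo check of this characterization (illustrative only, using the density given in the Definitions section below): the following Python sketch draws k independent standard normal variables, takes the Euclidean norm, and compares the histogram of the resulting samples with the chi probability density function.

import numpy as np
from math import gamma

def chi_pdf(x, k):
    # Probability density of the chi distribution with k degrees of freedom.
    return x ** (k - 1) * np.exp(-x ** 2 / 2) / (2 ** (k / 2 - 1) * gamma(k / 2))

rng = np.random.default_rng(0)
k = 3                                              # k = 3 gives the Maxwell-Boltzmann shape
samples = np.linalg.norm(rng.standard_normal((100_000, k)), axis=1)

hist, edges = np.histogram(samples, bins=50, range=(0, 5), density=True)
centers = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(hist - chi_pdf(centers, k))))  # small: the histogram tracks the pdf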
Definitions
Probability density function
The probability density function (pdf) of the chi-distribution is
f(x; k) = x^(k−1) e^(−x²/2) / (2^(k/2 − 1) Γ(k/2)) for x ≥ 0 (and 0 otherwise),
where Γ is the gamma function.
Cumulative distribution function
The cumulative distribution function is given by:
F(x; k) = P(k/2, x²/2),
where P(s, t) is the regularized gamma function.
Generating functions
The moment-generating function is given by:
where is Kummer's confluent hypergeometric function. The characteristic function is given by:
Properties
Moments
The raw moments are then given by:
where is the gamma function. Thus the first few raw moments are:
where the rightmost expressions are derived using the recurrence relationship for the gamma function:
From these expressions we may derive the following relationships:
Mean: which is close to for large .
Variance: which approaches as increases.
Skewness:
Kurtosis excess:
Entropy
The entropy is given by:
where is the polygamma function.
Large n approximation
We find the large n=k+1 approximation of the mean and variance of chi distribution. This has application e.g. in finding the distribution of standard deviation of a sample of normally distributed population, where n is the sample size.
The mean is then:
We use the Legendre duplication formula to write:
,
so that:
Using Stirling's approximation for Gamma function, we get the following expression for the mean:
And thus the variance is:
Related distributions
If then (chi-squared distribution)
(Normal distribution)
If then
If then (half-normal distribution) for any
(Rayleigh distribution)
(Maxwell distribution)
, the Euclidean norm of a standard normal random vector of with dimensions, is distributed according to a chi distribution with degrees of freedom
chi distribution is a special case of the generalized gamma distribution or the Nakagami distribution or the noncentral chi distribution
The mean of the chi distribution (scaled by the square root of ) yields the c
|
https://en.wikipedia.org/wiki/Transformation%20%28function%29
|
In mathematics, a transformation is a function f, usually with some geometrical underpinning, that maps a set X to itself, i.e. f: X → X.
Examples include linear transformations of vector spaces and geometric transformations, which include projective transformations, affine transformations, and specific affine transformations, such as rotations, reflections and translations.
Partial transformations
While it is common to use the term transformation for any function of a set into itself (especially in terms like "transformation semigroup" and similar), there exists an alternative form of terminological convention in which the term "transformation" is reserved only for bijections. When such a narrow notion of transformation is generalized to partial functions, then a partial transformation is a function f: A → B, where both A and B are subsets of some set X.
Algebraic structures
The set of all transformations on a given base set, together with function composition, forms a regular semigroup.
Combinatorics
For a finite set of cardinality n, there are n^n transformations and (n + 1)^n partial transformations.
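For a small set these counts can be checked by direct enumeration, as in the following Python sketch (illustrative only).

from itertools import product

X = (0, 1, 2)
n = len(X)

# A transformation of X is any function X -> X; encode one as the tuple of its values.
transformations = list(product(X, repeat=n))
print(len(transformations), n ** n)                 # 27 = 3^3

# A partial transformation may also be undefined at a point (encoded here as None).
partial_transformations = list(product(X + (None,), repeat=n))
print(len(partial_transformations), (n + 1) ** n)   # 64 = 4^3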
See also
Coordinate transformation
Data transformation (statistics)
Geometric transformation
Infinitesimal transformation
Linear transformation
Rigid transformation
Transformation geometry
Transformation semigroup
Transformation group
Transformation matrix
References
External links
Functions and mappings
|
https://en.wikipedia.org/wiki/Vilho%20V%C3%A4is%C3%A4l%C3%A4
|
Vilho Väisälä (; September 28, 1889 – August 12, 1969) was a Finnish meteorologist and physicist, and founder of Vaisala Oyj.
After graduation in mathematics in 1912, Väisälä worked for the Finnish Meteorological Institute in aerological measurements, specializing in the research of the higher troposphere. At the time the measurements were conducted by attaching a thermograph to a kite.
In 1917 he published his dissertation in mathematics Ensimmäisen lajin elliptisen integralin käänteisfunktion yksikäsitteisyys (The single-valuedness of the inverse function of the elliptic integral of the first kind). His dissertation was the first and still is the only mathematical doctoral thesis written in the Finnish language.
Väisälä participated in development of radiosonde, a device attached to a balloon and launched to measure air in the higher atmosphere. In 1936 he started his own company, manufacturing radiosondes and — later — other meteorological instruments.
In 1948 Väisälä was appointed Professor of Meteorology at the University of Helsinki.
Vilho Väisälä's two brothers, Kalle Väisälä and Yrjö Väisälä, also made successful careers in science.
Vilho Väisälä knew Esperanto, and played an active role in the Esperanto movement. During the World Congress of Esperanto of 1969, which was held in Helsinki shortly before his death, he served as the rector of the so-called Internacia Kongresa Universitato ("International Congressual University"), and coordinated the specialistic lectures in Esperanto given by various academicians to the congressists.
See also
Brunt–Väisälä frequency
References
External links
Finnish meteorologists
20th-century Finnish physicists
Academic staff of the University of Helsinki
Finnish Esperantists
1889 births
1969 deaths
|
https://en.wikipedia.org/wiki/Carl%20St%C3%B8rmer
|
Fredrik Carl Mülertz Størmer (3 September 1874 – 13 August 1957) was a Norwegian mathematician and astrophysicist. In mathematics, he is known for his work in number theory, including the calculation of π and Størmer's theorem on consecutive smooth numbers. In physics, he is known for studying the movement of charged particles in the magnetosphere and the formation of aurorae, and for his book on these subjects, From the Depths of Space to the Heart of the Atom. He worked for many years as a professor of mathematics at the University of Oslo in Norway. A crater on the far side of the Moon is named after him.
Personal life and career
Størmer was born on 3 September 1874 in Skien, the only child of a pharmacist Georg Ludvig Størmer (1842–1930) and Elisabeth Amalie Johanne Henriette Mülertz (1844–1916). His uncle was the entrepreneur and inventor Henrik Christian Fredrik Størmer.
Størmer studied mathematics at the Royal Frederick University in Kristiania, Norway (now the University of Oslo, in Oslo) from 1892 to 1897, earning the rank of candidatus realium in 1898. He then studied with Picard, Poincaré, Painlevé, Jordan, Darboux, and Goursat at the Sorbonne in Paris from 1898 to 1900. He returned to Kristiania in 1900 as a research fellow in mathematics, visited the University of Göttingen in 1902, and returned to Kristiania in 1903, where he was appointed as a professor of mathematics, a position he held for 43 years. After he received a permanent position in Kristiania, Størmer published his subsequent writings under a shortened version of his name, Carl Størmer. In 1918, he was elected as the first president of the newly formed Norwegian Mathematical Society. He participated regularly in Scandinavian mathematical congresses, and was president of the 1936 International Congress of Mathematicians in Oslo (from 1924 the new name of Kristiania). Størmer was also affiliated with the Institute of Theoretical Astrophysics at the University of Oslo, which was founded in 1934. He died on 13 August 1957, at Blindern.
He was also an amateur street photographer, beginning in his student days. Near the age of 70 he put on an exhibition in Oslo of the photographs of celebrities that he had taken over the years. For instance it included one of Henrik Ibsen strolling down Karl Johans gate, the main road in Oslo. He was also a supervisory council member of the insurance company Forsikringsselskapet Norden.
In February 1900 he married consul's daughter Ada Clauson (1877–1973), with whom he eventually had five children. Their son Leif Størmer became a professor of historical geology at the University of Oslo. His daughter Henny married landowner Carl Otto Løvenskiold. Carl Størmer is also the grandfather of the mathematician Erling Størmer.
Mathematical research
Størmer's first mathematical publication, published when he was a beginning student at the age of 18, concerned trigonometric series generalizing the Taylor expansion of the arcsine function. He revisi
|
https://en.wikipedia.org/wiki/Function%20composition%20%28computer%20science%29
|
In computer science, function composition is an act or mechanism to combine simple functions to build more complicated ones. Like the usual composition of functions in mathematics, the result of each function is passed as the argument of the next, and the result of the last one is the result of the whole.
Programmers frequently apply functions to results of other functions, and almost all programming languages allow it. In some cases, the composition of functions is interesting as a function in its own right, to be used later. Such a function can always be defined but languages with first-class functions make it easier.
The ability to easily compose functions encourages factoring (breaking apart) functions for maintainability and code reuse. More generally, big systems might be built by composing whole programs.
Narrowly speaking, function composition applies to functions that operate on a finite amount of data, each step sequentially processing it before handing it to the next. Functions that operate on potentially infinite data (a stream or other codata) are known as filters, and are instead connected in a pipeline, which is analogous to function composition and can execute concurrently.
Composing function calls
For example, suppose we have two functions f and g, as in y = g(x) and z = f(y). Composing them means we first compute y = g(x), and then use y to compute z = f(y). Here is the example in the C language:
float x, y, z;
// ...
y = g(x);
z = f(y);
The steps can be combined if we don't give a name to the intermediate result:
z = f(g(x));
Despite differences in length, these two implementations compute the same result. The second implementation requires only one line of code and is colloquially referred to as a "highly composed" form. Readability and hence maintainability is one advantage of highly composed forms, since they require fewer lines of code, minimizing a program's "surface area". DeMarco and Lister empirically verify an inverse relationship between surface area and maintainability. On the other hand, it may be possible to overuse highly composed forms. A nesting of too many functions may have the opposite effect, making the code less maintainable.
In a stack-based language, functional composition is even more natural: it is performed by concatenation, and is usually the primary method of program design. The above example in Forth:
g f
Which will take whatever was on the stack before, apply g, then f, and leave the result on the stack. See postfix composition notation for the corresponding mathematical notation.
Naming the composition of functions
Now suppose that the combination of calling f() on the result of g() is frequently useful, so we want to name it foo() and use it as a function in its own right.
In most languages, we can define a new function implemented by composition. Example in C:
float foo(float x) {
return f(g(x));
}
(the long form with intermediates would work as well.) Example in Forth:
: foo g f ;
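For comparison, in a language with first-class functions the named composition can be produced once and for all by a generic higher-order helper. The following Python sketch is only an illustration; the f and g here are arbitrary stand-ins, not the functions of the examples above.

def compose(f, g):
    # Return a new function computing f(g(x)).
    return lambda x: f(g(x))

def g(x):
    return x + 1.0   # stand-in for g

def f(y):
    return 2.0 * y   # stand-in for f

foo = compose(f, g)
print(foo(3.0))      # 8.0, the same as f(g(3.0))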
In languages such as C
|
https://en.wikipedia.org/wiki/Bernoulli%20differential%20equation
|
In mathematics, an ordinary differential equation is called a Bernoulli differential equation if it is of the form
y′ + P(x) y = Q(x) y^n,
where n is a real number. Some authors allow any real n, whereas others require that n not be 0 or 1. The equation was first discussed in a work of 1695 by Jacob Bernoulli, after whom it is named. The earliest solution, however, was offered by Gottfried Leibniz, who published his result in the same year and whose method is the one still used today.
Bernoulli equations are special because they are nonlinear differential equations with known exact solutions. A notable special case of the Bernoulli equation is the logistic differential equation.
Transformation to a linear differential equation
When n = 0, the differential equation is linear. When n = 1, it is separable. In these cases, standard techniques for solving equations of those forms can be applied. For n ≠ 0 and n ≠ 1, the substitution u = y^(1−n) reduces any Bernoulli equation to a linear differential equation
u′ + (1 − n) P(x) u = (1 − n) Q(x).
For example, in the case n = 2, making the substitution u = y^(−1) in the differential equation produces the equation u′ − P(x) u = −Q(x), which is a linear differential equation.
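As a concrete illustration (the logistic equation y′ = y − y², a Bernoulli equation with P = −1, Q = −1 and n = 2, is chosen here for the example), the substitution u = 1/y turns the equation into the linear equation u′ + u = 1. The following Python sketch compares the solution obtained this way with a direct numerical integration of the original nonlinear equation.

import numpy as np

y0 = 0.1
x = np.linspace(0.0, 8.0, 801)

# Closed-form solution of the linear equation u' + u = 1 with u(0) = 1/y0,
# then transformed back through y = 1/u.
u = 1.0 + (1.0 / y0 - 1.0) * np.exp(-x)
y_from_substitution = 1.0 / u

# Cross-check: crude forward Euler integration of the nonlinear equation y' = y - y**2.
y = np.empty_like(x)
y[0] = y0
for i in range(len(x) - 1):
    h = x[i + 1] - x[i]
    y[i + 1] = y[i] + h * (y[i] - y[i] ** 2)

print(np.max(np.abs(y - y_from_substitution)))   # small (Euler discretization error only)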
Solution
Let and
be a solution of the linear differential equation
Then we have that is a solution of
And for every such differential equation, for all we have as solution for .
Example
Consider the Bernoulli equation
(in this case, more specifically a Riccati equation).
The constant function is a solution.
Division by yields
Changing variables gives the equations
which can be solved using the integrating factor
Multiplying by
The left side can be represented as the derivative of by reversing the product rule. Applying the chain rule and integrating both sides with respect to results in the equations
The solution for is
Notes
References
. Cited in .
.
External links
Index of differential equations
Ordinary differential equations
|
https://en.wikipedia.org/wiki/Conjugacy%20class%20sum
|
In abstract algebra, a conjugacy class sum, or simply class sum, is a function defined for each conjugacy class of a finite group G as the sum of the elements in that conjugacy class. The class sums of a group form a basis for the center of the associated group algebra.
Definition
Let G be a finite group, and let C1, ..., Ck be the distinct conjugacy classes of G. For 1 ≤ i ≤ k, define
C̄i = Σ g, the sum running over all g in Ci.
The functions C̄1, ..., C̄k are the class sums of G.
In the group algebra
Let CG be the complex group algebra over G. Then the center of CG, denoted Z(CG), is defined by
Z(CG) = { z ∈ CG : zr = rz for all r ∈ CG }.
This is equal to the set of all class functions (functions which are constant on conjugacy classes). To see this, note that f is central if and only if f(yx) = f(xy) for all x, y in G. Replacing y by yx⁻¹, this condition becomes
f(y) = f(xyx⁻¹) for all x, y in G,
so that f takes the same value on every element of a conjugacy class.
The class sums are a basis for the set of all class functions, and thus they are a basis for the center of the algebra.
In particular, this shows that the dimension of Z(CG) is equal to the number of class sums of G.
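This can be verified directly for a small group. The following Python sketch (illustrative only) represents elements of the group algebra of S3 as dictionaries of coefficients and checks that every class sum commutes with every group element.

from itertools import permutations

# The symmetric group S3, with an element written as the tuple p meaning i -> p[i].
G = list(permutations(range(3)))

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    return tuple(sorted(range(3), key=lambda i: p[i]))

def conjugacy_class(g):
    return frozenset(compose(compose(x, g), inverse(x)) for x in G)

classes = {conjugacy_class(g) for g in G}

def multiply(a, b):
    # Convolution product of two group-algebra elements given as {element: coefficient}.
    out = {}
    for g, cg in a.items():
        for h, ch in b.items():
            k = compose(g, h)
            out[k] = out.get(k, 0) + cg * ch
    return out

for C in classes:
    class_sum = {g: 1 for g in C}
    for x in G:
        assert multiply({x: 1}, class_sum) == multiply(class_sum, {x: 1})
print(len(classes), "class sums, all central")   # S3 has 3 conjugacy classes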
References
Goodman, Roe; and Wallach, Nolan (2009). Symmetry, Representations, and Invariants. Springer. . See chapter 4, especially 4.3.
James, Gordon; and Liebeck, Martin (2001). Representations and Characters of Groups (2nd ed.). Cambridge University Press. . See chapter 12.
Group theory
|
https://en.wikipedia.org/wiki/Burnside%27s%20theorem
|
In mathematics, Burnside's theorem in group theory states that if G is a finite group of order p^a q^b where p and q are prime numbers, and a and b are non-negative integers, then G is solvable. Hence each
non-Abelian finite simple group has order divisible by at least three distinct primes.
History
The theorem was proved by William Burnside (1904) using the representation theory of finite groups. Several special cases of the theorem had previously been proved by Burnside, Jordan, and Frobenius. John Thompson pointed out that a proof avoiding the use of representation theory could be extracted from his work on the N-group theorem, and this was done explicitly by Goldschmidt (1970) for groups of odd order, and by Bender (1972) for groups of even order. Matsuyama (1973) simplified the proofs.
Proof
The following proof — using more background than Burnside's — is by contradiction. Let p^a q^b be the smallest product of two prime powers, such that there is a non-solvable group G whose order is equal to this number.
G is a simple group with trivial center and a is not zero.
If G had a nontrivial proper normal subgroup H, then (because of the minimality of G), H and G/H would be solvable, so G as well, which would contradict our assumption. So G is simple.
If a were zero, G would be a finite q-group, hence nilpotent, and therefore solvable.
Similarly, G cannot be abelian, otherwise it would be solvable. As G is simple, its center must therefore be trivial.
There is an element g of G which has q^d conjugates, for some d > 0.
By the first statement of Sylow's theorem, G has a subgroup S of order p^a. Because S is a nontrivial p-group, its center Z(S) is nontrivial. Fix a nontrivial element g ∈ Z(S). The number of conjugates of g is equal to the index of its stabilizer subgroup Gg, which divides the index q^b of S (because S is a subgroup of Gg). Hence this number is of the form q^d. Moreover, the integer d is strictly positive, since g is nontrivial and therefore not central in G.
There exists a nontrivial irreducible representation ρ with character χ, such that its dimension n is not divisible by q and the complex number χ(g) is not zero.
Let (χi)1 ≤ i ≤ h be the family of irreducible characters of G over (here χ1 denotes the trivial character). Because g is not in the same conjugacy class as 1, the orthogonality relation for the columns of the group's character table gives:
Now the χi(g) are algebraic integers, because they are sums of roots of unity. If all the nontrivial irreducible characters which don't vanish at g take a value divisible by q at 1, we deduce that
is an algebraic integer (since it is a sum of integer multiples of algebraic integers), which is absurd. This proves the statement.
The complex number q^d χ(g)/n is an algebraic integer.
The set of integer-valued class functions on G, Z(ℤ[G]), is a commutative ring, finitely generated over ℤ. All of its elements are thus integral over ℤ, in particular the mapping u which takes the value 1 on the conjugacy class of g and 0 elsewhere.
The mapping which sends a c
|
https://en.wikipedia.org/wiki/Integrating%20factor
|
In mathematics, an integrating factor is a function that is chosen to facilitate the solving of a given equation involving differentials. It is commonly used to solve ordinary differential equations, but is also used within multivariable calculus when multiplying through by an integrating factor allows an inexact differential to be made into an exact differential (which can then be integrated to give a scalar field). This is especially useful in thermodynamics where temperature becomes the integrating factor that makes entropy an exact differential.
Use
An integrating factor is any expression that a differential equation is multiplied by to facilitate integration. For example, the nonlinear second order equation
admits as an integrating factor:
To integrate, note that both sides of the equation may be expressed as derivatives by going backwards with the chain rule:
Therefore,
where is a constant.
This form may be more useful, depending on application. Performing a separation of variables will give
This is an implicit solution which involves a nonelementary integral. This same method is used to solve the period of a simple pendulum.
Solving first order linear ordinary differential equations
Integrating factors are useful for solving ordinary differential equations that can be expressed in the form
y′ + P(x) y = Q(x).
The basic idea is to find some function, say M(x), called the "integrating factor", which we can multiply through our differential equation in order to bring the left-hand side under a common derivative. For the canonical first-order linear differential equation shown above, the integrating factor is M(x) = e^(∫ P(x) dx).
Note that it is not necessary to include the arbitrary constant in the integral, or absolute values in case the integral of involves a logarithm. Firstly, we only need one integrating factor to solve the equation, not all possible ones; secondly, such constants and absolute values will cancel out even if included. For absolute values, this can be seen by writing , where refers to the sign function, which will be constant on an interval if is continuous. As is undefined when , and a logarithm in the antiderivative only appears when the original function involved a logarithm or a reciprocal (neither of which are defined for 0), such an interval will be the interval of validity of our solution.
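As a concrete illustration (the equation y′ + 2y = x, with y(0) = 1, is chosen here for the example and does not come from the article), the integrating factor is e^(2x); multiplying through gives (e^(2x) y)′ = x e^(2x), and integrating yields y = x/2 − 1/4 + C e^(−2x). The following Python sketch checks numerically that this closed form solves the equation.

import numpy as np

x = np.linspace(0.0, 3.0, 3001)
y0 = 1.0
C = y0 + 0.25                          # fixed by the initial condition y(0) = 1
y = x / 2 - 0.25 + C * np.exp(-2 * x)  # solution obtained via the integrating factor exp(2x)

# Residual of y' + 2y - x, with y' estimated by central differences; it should be negligible.
dydx = np.gradient(y, x)
print(np.max(np.abs(dydx + 2 * y - x)[1:-1]))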
To derive this, let be the integrating factor of a first order linear differential equation such that multiplication by transforms a partial derivative into a total derivative, then:
Going from step 2 to step 3 requires that , which is a separable differential equation, whose solution yields in terms of :
To verify, multiplying by gives
By applying the product rule in reverse, we see that the left-hand side can be expressed as a single derivative in
We use this fact to simplify our expression to
Integrating both sides with respect to
where is a constant.
Moving the exponential to the right-hand side, the general solution to O
|
https://en.wikipedia.org/wiki/Wei-Liang%20Chow
|
Chow Wei-Liang (; October 1, 1911, Shanghai – August 10, 1995, Baltimore) was a Chinese mathematician and stamp collector born in Shanghai, known for his work in algebraic geometry.
Biography
Chow was a student in the US, graduating from the University of Chicago in 1931. In 1932 he attended the University of Göttingen, then transferred to the Leipzig University where he worked with van der Waerden. They produced a series of joint papers on intersection theory, introducing in particular the use of what are now generally called Chow coordinates (which were in some form familiar to Arthur Cayley).
He married Margot Victor in 1936, and took a position at the National Central University in Nanjing. His mathematical work was seriously affected by the wartime situation in China. He taught at the National Tung-Chi University in Shanghai in the academic year 1946–47, and then went to the Institute for Advanced Study in Princeton, where he returned to his research. From 1948 to 1977 he was a professor at Johns Hopkins University.
He was also a stamp collector, known for his book Shanghai Large Dragons, The First Issue of The Shanghai Local Post, published in 1996.
Research
According to Shiing-Shen Chern,
"Wei-Liang was an original and versatile mathematician, although his major field was algebraic geometry. He made several fundamental contributions to mathematics:
A fundamental issue in algebraic geometry is intersection theory. The Chow ring has many advantages and is widely used.
The Chow associated forms give a description of the moduli space of the algebraic varieties in projective space. It gives a beautiful solution of an important problem.
His theorem that a compact analytic variety in a projective space is algebraic is justly famous. The theorem shows the close analogy between algebraic geometry and algebraic number theory.
Generalizing a result of Caratheodory on thermodynamics, he formulated a theorem on accessibility of differential spaces. The theorem plays a fundamental role in control theory.
A lesser-known paper of his on homogeneous spaces gives a beautiful treatment of the geometry known as the projective geometry of matrices and treated by elaborate calculations. His discussions are valid in a more general context."
See also
Chow ring
Chow's theorem
Chow's moving lemma
Chow's lemma
Chow–Rashevskii theorem
References
External links
Catalog listing for Shanghai Large Dragons, The First Issue of The Shanghai Local Post
1911 births
1995 deaths
Algebraic geometers
20th-century Chinese mathematicians
Educators from Shanghai
University of Chicago alumni
Academic staff of Tongji University
Johns Hopkins University faculty
Mathematicians from Shanghai
Chinese emigrants to the United States
|