https://en.wikipedia.org/wiki/List%20of%20cities%20in%20Brazil%20by%20population
|
Brazil has a high level of urbanization with 87.8% of the population residing in urban and metropolitan areas. The criteria used by the IBGE (Brazilian Institute of Geography and Statistics) in determining whether households are urban or rural, however, are based on political divisions, not on the developed environment.
Nowadays, the country has 5,570 cities, with 5,568 municipalities plus the capital and the Island of Fernando de Noronha.
With two exceptions, the state capitals are all the largest cities in their respective states: Florianópolis, the capital of Santa Catarina, is that state's second-largest city after Joinville, while Vitória is only the fourth-largest city in Espírito Santo, although it is located in that state's largest metropolitan area.
Most populous cities in Brazil
This is a list of the most populous cities based on the population of the municipality where the city is located, rather than its metropolitan area. As IBGE considers the entire Federal District synonymous with Brasília, the population of the Federal District is shown for Brasília.
Population figures are taken from the 2010 IBGE Census (counts as of 1 August 2010) and the 2022 IBGE Census.
State capitals are in bold and states' largest cities are in italics.
Brazil's population, as recorded by the 2010 census, was 190,755,799 inhabitants (22.40 inhabitants per square kilometer), with 84.36% of the population defined as urban. The population is heavily concentrated in the Southeast (80.4 million) and Northeast (53.1 million).
Distribution
Largest metropolitan areas
See also
Municipalities of Brazil
List of municipalities of Brazil
List of largest cities in Brazil by state
List of metropolitan areas in the Americas
Largest cities in the Americas
Brazilian Institute of Geography and Statistics
References
Largest cities
Brazil
|
https://en.wikipedia.org/wiki/L%C3%A9vy%20distribution
|
In probability theory and statistics, the Lévy distribution, named after Paul Lévy, is a continuous probability distribution for a non-negative random variable. In spectroscopy, this distribution, with frequency as the dependent variable, is known as a van der Waals profile. It is a special case of the inverse-gamma distribution. It is a stable distribution.
Definition
The probability density function of the Lévy distribution over the domain x ≥ μ is
f(x; μ, c) = √(c/(2π)) · exp(−c/(2(x − μ))) / (x − μ)^(3/2),
where μ is the location parameter and c is the scale parameter. The cumulative distribution function is
F(x; μ, c) = erfc(√(c/(2(x − μ)))) = 2 − 2Φ(√(c/(x − μ))),
where erfc is the complementary error function and Φ is the Laplace function (the CDF of the standard normal distribution). The shift parameter μ has the effect of shifting the curve to the right by an amount μ, changing the support to the interval [μ, ∞). Like all stable distributions, the Lévy distribution has a standard form f(x; 0, 1) which has the following property:
f(x; μ, c) dx = f(y; 0, 1) dy,
where y is defined as y = (x − μ)/c.
The characteristic function of the Lévy distribution is given by
φ(t; μ, c) = E[exp(itX)] = exp(iμt − √(−2ict)).
Note that the characteristic function can also be written in the same form used for the stable distribution with α = 1/2 and β = 1:
φ(t; μ, c) = exp(iμt − |ct|^(1/2) (1 − i sign(t))).
Assuming μ = 0, the nth moment of the unshifted Lévy distribution is formally defined by
m_n = √(c/(2π)) ∫₀^∞ x^(n − 3/2) exp(−c/(2x)) dx,
which diverges for all n ≥ 1/2, so that the integer moments of the Lévy distribution do not exist (only some fractional moments, those of order less than 1/2, do).
The moment generating function would be formally defined by
M(t; c) = √(c/(2π)) ∫₀^∞ x^(−3/2) exp(tx − c/(2x)) dx;
however, this diverges for t > 0 and is therefore not defined on an interval around zero, so the moment generating function is not defined per se.
Like all stable distributions except the normal distribution, the wing of the probability density function exhibits heavy-tail behavior falling off according to a power law:
f(x; μ, c) ~ √(c/(2π)) · x^(−3/2) as x → ∞,
which shows that the Lévy distribution is not just heavy-tailed but also fat-tailed. On a log–log plot, the probability density functions for various values of c and μ therefore approach straight lines of slope −3/2 for large x.
The standard Lévy distribution satisfies the condition of being stable:
X₁ + X₂ + ⋯ + X_n ~ n² X,
where X, X₁, …, X_n are independent standard Lévy variables with μ = 0 and c = 1.
Related distributions
If then
If then (inverse gamma distribution)Here, the Lévy distribution is a special case of a Pearson type V distribution
If (Normal distribution) then
If then
If then (Stable distribution)
If then (Scaled-inverse-chi-squared distribution)
If then (Folded normal distribution)
Random sample generation
Random samples from the Lévy distribution can be generated using inverse transform sampling. Given a random variate U drawn from the uniform distribution on the unit interval (0, 1], the variate X given by
X = μ + c / (Φ⁻¹(1 − U/2))²
is Lévy-distributed with location μ and scale c. Here Φ(x) is the cumulative distribution function of the standard normal distribution.
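A minimal sketch of this recipe in Python with NumPy/SciPy (the parameter values μ = 0, c = 1 and the sample size are arbitrary choices); the empirical CDF is compared with scipy.stats.levy as a sanity check:

```python
import numpy as np
from scipy.stats import norm, levy

rng = np.random.default_rng(0)
mu, c = 0.0, 1.0                      # location and scale (illustrative values)

# Inverse transform sampling: X = mu + c / (Phi^{-1}(1 - U/2))^2
u = rng.uniform(0.0, 1.0, size=100_000)
x = mu + c / norm.ppf(1.0 - u / 2.0) ** 2

# Sanity check: empirical CDF at a few points vs. scipy's Levy CDF
for q in (1.0, 5.0, 25.0):
    emp = np.mean(x <= q)
    print(f"P(X <= {q:>4}) empirical={emp:.4f}  exact={levy.cdf(q, loc=mu, scale=c):.4f}")
```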
Applications
The frequency of geomagnetic reversals appears to follow a Lévy distribution
The time of hitting a single point, at distance from the starting point, by the Brownian motion has the Lévy distribution with . (For a Brownian m
|
https://en.wikipedia.org/wiki/Bell%20series
|
In mathematics, the Bell series is a formal power series used to study properties of arithmetical functions. Bell series were introduced and developed by Eric Temple Bell.
Given an arithmetic function f and a prime p, define the formal power series f_p(x), called the Bell series of f modulo p, as:
f_p(x) = Σ_{n≥0} f(p^n) x^n.
Two multiplicative functions can be shown to be identical if all of their Bell series are equal; this is sometimes called the uniqueness theorem: given multiplicative functions f and g, one has f = g if and only if:
f_p(x) = g_p(x) for all primes p.
Two series may be multiplied (sometimes called the multiplication theorem): for any two arithmetic functions f and g, let h = f * g be their Dirichlet convolution. Then for every prime p, one has:
h_p(x) = f_p(x) g_p(x).
In particular, this makes it trivial to find the Bell series of a Dirichlet inverse.
If f is completely multiplicative, so that f(p^n) = f(p)^n, then formally:
f_p(x) = 1/(1 − f(p)x).
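As a quick sanity check of the multiplication theorem in plain Python (the prime p = 5 and the number of coefficients are arbitrary), the Bell-series coefficients of Euler's totient φ = μ * Id at powers of p should equal the Cauchy product of the coefficients for μ and Id:

```python
# Bell series coefficients of f modulo p are simply the values f(1), f(p), f(p^2), ...
# For a Dirichlet convolution h = f * g, the multiplication theorem says the
# coefficient sequence of h_p(x) is the Cauchy product of those of f_p(x) and g_p(x).

def mobius_pp(p, n):   return 1 if n == 0 else (-1 if n == 1 else 0)   # mu(p^n)
def id_pp(p, n):       return p ** n                                   # Id(p^n) = p^n
def totient_pp(p, n):  return 1 if n == 0 else p ** n - p ** (n - 1)   # phi(p^n)

p, N = 5, 8
mu  = [mobius_pp(p, n)  for n in range(N)]
idf = [id_pp(p, n)      for n in range(N)]
phi = [totient_pp(p, n) for n in range(N)]

# Cauchy product of the coefficient sequences = Bell series of mu * Id
conv = [sum(mu[i] * idf[n - i] for i in range(n + 1)) for n in range(N)]

print(phi)   # [1, 4, 20, 100, 500, 2500, 12500, 62500]
print(conv)  # identical, since phi = mu * Id
assert conv == phi
```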
Examples
The following is a table of the Bell series of well-known arithmetic functions.
The Möbius function μ has μ_p(x) = 1 − x.
The Möbius function squared has (μ²)_p(x) = 1 + x.
Euler's totient φ has φ_p(x) = (1 − x)/(1 − px).
The multiplicative identity of the Dirichlet convolution δ has δ_p(x) = 1.
The Liouville function λ has λ_p(x) = 1/(1 + x).
The power function Id_k has (Id_k)_p(x) = 1/(1 − p^k x). Here, Id_k is the completely multiplicative function Id_k(n) = n^k.
The divisor function σ_k has (σ_k)_p(x) = 1/((1 − x)(1 − p^k x)).
The constant function, with value 1, satisfies 1_p(x) = 1/(1 − x) = Σ_{n≥0} x^n, i.e., it is the geometric series.
If f(n) = 2^ω(n) is the power of the prime omega function, then f_p(x) = (1 + x)/(1 − x).
Suppose that f is multiplicative and g is any arithmetic function satisfying for all primes p and . Then
If denotes the Möbius function of order k, then
See also
Bell numbers
References
Arithmetic functions
Mathematical series
|
https://en.wikipedia.org/wiki/Normalized%20number
|
In applied mathematics, a number is normalized when it is written in scientific notation with one non-zero decimal digit before the decimal point. Thus, a real number, when written out in normalized scientific notation, is as follows:
± d₀.d₁d₂d₃… × 10^n,
where n is an integer, d₀, d₁, d₂, d₃, … are the digits of the number in base 10, and d₀ is not zero. That is, its leading (i.e., leftmost) digit is not zero and is followed by the decimal point. Simply speaking, a number is normalized when it is written in the form a × 10^n where 1 ≤ |a| < 10, without leading zeros in a. This is the standard form of scientific notation. An alternative style is to have the first non-zero digit after the decimal point.
Examples
As an example, the number 918.082 in normalized form is 9.18082 × 10².
Clearly, any non-zero real number can be normalized.
Other bases
The same definition holds if the number is represented in another radix (that is, base of enumeration), rather than base 10.
In base b, a normalized number will have the form
± d₀.d₁d₂d₃… × b^n,
where again d₀ ≠ 0 and the digits d₀, d₁, d₂, d₃, … are integers between 0 and b − 1.
In many computer systems, binary floating-point numbers are represented internally using this normalized form for their representations; for details, see normal number (computing). Although the point is described as floating, for a normalized floating-point number, its position is fixed, the movement being reflected in the different values of the power.
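A small Python sketch of this normalization (the function normalize and the example values are ours); the last lines show the binary analogue via math.frexp, which follows the alternative convention of a mantissa in [0.5, 1), i.e. the first non-zero digit after the point:

```python
import math

def normalize(x: float, base: int = 10) -> tuple[float, int]:
    """Return (a, n) with x = a * base**n and 1 <= |a| < base (x must be non-zero)."""
    if x == 0:
        raise ValueError("zero cannot be normalized")
    n = math.floor(math.log(abs(x), base))
    a = x / base ** n
    # Guard against rounding right at powers of the base, e.g. log10(1000) = 2.999...
    if abs(a) >= base:
        a, n = a / base, n + 1
    elif abs(a) < 1:
        a, n = a * base, n - 1
    return a, n

print(normalize(918.082))    # approx (9.18082, 2), i.e. 9.18082 x 10^2
print(normalize(-0.00057))   # approx (-5.7, -4),   i.e. -5.7 x 10^-4
m, e = math.frexp(918.082)   # binary: 918.082 = m * 2**e with m in [0.5, 1)
print(m, e)                  # approx 0.8966, 10
```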
See also
Significand
Normal number (computing)
References
Computer arithmetic
|
https://en.wikipedia.org/wiki/Essential%20spectrum
|
In mathematics, the essential spectrum of a bounded operator (or, more generally, of a densely defined closed linear operator) is a certain subset of its spectrum, defined by a condition of the type that says, roughly speaking, "fails badly to be invertible".
The essential spectrum of self-adjoint operators
In formal terms, let X be a Hilbert space and let T be a self-adjoint operator on X.
Definition
The essential spectrum of T, usually denoted σess(T), is the set of all complex numbers λ such that
T − λI_X
is not a Fredholm operator, where I_X denotes the identity operator on X, so that I_X(x) = x for all x in X.
(An operator is Fredholm if its kernel and cokernel are finite-dimensional.)
Properties
The essential spectrum is always closed, and it is a subset of the spectrum. Since T is self-adjoint, the spectrum is contained on the real axis.
The essential spectrum is invariant under compact perturbations. That is, if K is a compact self-adjoint operator on X, then the essential spectra of T and of T + K coincide. This explains why it is called the essential spectrum: Weyl (1910) originally defined the essential spectrum of a certain differential operator to be the spectrum independent of boundary conditions.
Weyl's criterion for the essential spectrum is as follows. First, a number λ is in the spectrum of T if and only if there exists a sequence {ψk} in the space X such that ‖ψk‖ = 1 and
‖(T − λ)ψk‖ → 0 as k → ∞.
Furthermore, λ is in the essential spectrum if there is a sequence satisfying this condition, but such that it contains no convergent subsequence (this is the case if, for example, {ψk} is an orthonormal sequence); such a sequence is called a singular sequence.
The discrete spectrum
The essential spectrum is a subset of the spectrum σ, and its complement is called the discrete spectrum, so
σ_disc(T) = σ(T) \ σ_ess(T).
If T is self-adjoint, then, by definition, a number λ is in the discrete spectrum of T if it is an isolated eigenvalue of finite multiplicity, meaning that the eigenspace
{x ∈ X : Tx = λx}
has finite but non-zero dimension and that there is an ε > 0 such that μ ∈ σ(T) and |μ−λ| < ε imply that μ and λ are equal.
(For general nonselfadjoint operators in Banach spaces, by definition, a number is in the discrete spectrum if it is a normal eigenvalue; or, equivalently, if it is an isolated point of the spectrum and the rank of the corresponding Riesz projector is finite.)
The essential spectrum of closed operators in Banach spaces
Let X be a Banach space
and let T be a closed linear operator on X with dense domain D(T). There are several definitions of the essential spectrum, which are not equivalent.
The essential spectrum σ_ess,1(T) is the set of all λ such that T − λ is not semi-Fredholm (an operator is semi-Fredholm if its range is closed and its kernel or its cokernel is finite-dimensional).
The essential spectrum σ_ess,2(T) is the set of all λ such that the range of T − λ is not closed or the kernel of T − λ is infinite-dimensional.
The essential spectrum σ_ess,3(T) is the set of all λ such that T − λ is not Fredholm (an operator is Fredholm if i
|
https://en.wikipedia.org/wiki/Knot%20complement
|
In mathematics, the knot complement of a tame knot K is the space where the knot is not. If a knot is embedded in the 3-sphere, then the complement is the 3-sphere minus the space near the knot. To make this precise, suppose that K is a knot in a three-manifold M (most often, M is the 3-sphere). Let N be a tubular neighborhood of K; so N is a solid torus. The knot complement is then the complement of N, that is, X_K = M − interior(N).
The knot complement XK is a compact 3-manifold; the boundary of XK and the boundary of the neighborhood N are homeomorphic to a two-torus. Sometimes the ambient manifold M is understood to be the 3-sphere. Context is needed to determine the usage. There are analogous definitions for the link complement.
Many knot invariants, such as the knot group, are really invariants of the complement of the knot. When the ambient space is the three-sphere no information is lost: the Gordon–Luecke theorem states that a knot is determined by its complement. That is, if K and K′ are two knots with homeomorphic complements then there is a homeomorphism of the three-sphere taking one knot to the other.
Knot complements are Haken manifolds. More generally complements of links are Haken manifolds.
See also
Knot genus
Seifert surface
Further reading
C. Gordon and J. Luecke, "Knots are determined by their Complements", J. Amer. Math. Soc., 2 (1989), 371–415.
References
Knot theory
|
https://en.wikipedia.org/wiki/Toda%20field%20theory
|
In mathematics and physics, specifically the study of field theory and partial differential equations, a Toda field theory, named after Morikazu Toda, is specified by a choice of Lie algebra and a specific Lagrangian.
Formulation
Fixing the Lie algebra to have rank , that is, the Cartan subalgebra of the algebra has dimension , the Lagrangian can be written
The background spacetime is 2-dimensional Minkowski space, with space-like coordinate and timelike coordinate . Greek indices indicate spacetime coordinates.
For some choice of root basis, is the th simple root. This provides a basis for the Cartan subalgebra, allowing it to be identified with .
Then the field content is a collection of scalar fields , which are scalar in the sense that they transform trivially under Lorentz transformations of the underlying spacetime.
The inner product is the restriction of the Killing form to the Cartan subalgebra.
The are integer constants, known as Kac labels or Dynkin labels.
The physical constants are the mass and the coupling constant .
Classification of Toda field theories
Toda field theories are classified according to their associated Lie algebra.
Toda field theories usually refer to theories with a finite Lie algebra. If the Lie algebra is an affine Lie algebra, it is called an affine Toda field theory (after the component of φ which decouples is removed). If it is hyperbolic, it is called a hyperbolic Toda field theory.
Toda field theories are integrable models and their solutions describe solitons.
Examples
Liouville field theory is associated to the A1 Cartan matrix, which corresponds to the Lie algebra in the classification of Lie algebras by Cartan matrices. The algebra has only a single simple root.
The sinh-Gordon model is the affine Toda field theory with the generalized Cartan matrix
and a positive value for β after we project out a component of φ which decouples.
The sine-Gordon model is the model with the same Cartan matrix but an imaginary β. This Cartan matrix corresponds to the Lie algebra . This has a single simple root, and Coxeter label , but the Lagrangian is modified for the affine theory: there is also an affine root and Coxeter label . One can expand as , but for the affine root , so the component decouples.
The sum is Then if is purely imaginary, with real and, without loss of generality, positive, then this is . The Lagrangian is then
which is the sine-Gordon Lagrangian.
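As a point of reference, the endpoint of this computation in one common convention (normalizations of m and β, and the overall sign, vary between references) reads:

```latex
\mathcal{L} \;=\; \tfrac{1}{2}\,\partial_\mu\varphi\,\partial^\mu\varphi
  \;+\; \frac{m^{2}}{\beta^{2}}\bigl(\cos\beta\varphi - 1\bigr),
\qquad
\partial_\mu\partial^\mu\varphi \;+\; \frac{m^{2}}{\beta}\,\sin\beta\varphi \;=\; 0 .
```

Replacing β by iβ turns the cosine into a hyperbolic cosine and recovers the sinh-Gordon model described above.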
References
Quantum field theory
Lattice models
Lie algebras
Exactly solvable models
Integrable systems
|
https://en.wikipedia.org/wiki/Affine%20Lie%20algebra
|
In mathematics, an affine Lie algebra is an infinite-dimensional Lie algebra that is constructed in a canonical fashion out of a finite-dimensional simple Lie algebra. Given an affine Lie algebra, one can also form the associated affine Kac-Moody algebra, as described below. From a purely mathematical point of view, affine Lie algebras are interesting because their representation theory, like representation theory of finite-dimensional semisimple Lie algebras, is much better understood than that of general Kac–Moody algebras. As observed by Victor Kac, the character formula for representations of affine Lie algebras implies certain combinatorial identities, the Macdonald identities.
Affine Lie algebras play an important role in string theory and two-dimensional conformal field theory due to the way they are constructed: starting from a simple Lie algebra , one considers the loop algebra, , formed by the -valued functions on a circle (interpreted as the closed string) with pointwise commutator. The affine Lie algebra is obtained by adding one extra dimension to the loop algebra and modifying the commutator in a non-trivial way, which physicists call a quantum anomaly (in this case, the anomaly of the WZW model) and mathematicians a central extension. More generally,
if σ is an automorphism of the simple Lie algebra associated to an automorphism of its Dynkin diagram, the twisted loop algebra consists of -valued functions f on the real line which satisfy
the twisted periodicity condition . Their central extensions are precisely the twisted affine Lie algebras. The point of view of string theory helps to understand many deep properties of affine Lie algebras, such as the fact that the characters of their representations transform amongst themselves under the modular group.
Affine Lie algebras from simple Lie algebras
Definition
If is a finite-dimensional simple Lie algebra, the corresponding
affine Lie algebra is constructed as a central extension of the loop algebra , with one-dimensional center
As a vector space,
where is the complex vector space of Laurent polynomials in the indeterminate t. The Lie bracket is defined by the formula
for all and , where is the Lie bracket in the Lie algebra and is the Cartan-Killing form on
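For concreteness, a standard form of these formulas under one common normalization (sign and scaling conventions for the central term vary across references; this is a reference sketch, not necessarily the exact convention intended here) is:

```latex
\hat{\mathfrak g} \;=\; \mathfrak g \otimes \mathbb{C}[t,t^{-1}] \;\oplus\; \mathbb{C}c,
\qquad
[\,a\otimes t^{m}+\alpha c,\; b\otimes t^{n}+\beta c\,]
  \;=\; [a,b]\otimes t^{m+n} \;+\; m\,\delta_{m+n,0}\,\langle a,b\rangle\, c,
```

with a, b in the simple Lie algebra, ⟨·,·⟩ the Cartan–Killing form, and the distinguished derivation acting as δ(a ⊗ t^m) = m a ⊗ t^m, δ(c) = 0 (that is, δ = t d/dt on the loop part).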
The affine Lie algebra corresponding to a finite-dimensional semisimple Lie algebra is the direct sum of the affine Lie algebras corresponding to its simple summands. There is a distinguished derivation of the affine Lie algebra defined by
The corresponding affine Kac–Moody algebra is defined as a semidirect product by adding an extra generator d that satisfies [d, A] = δ(A).
Constructing the Dynkin diagrams
The Dynkin diagram of each affine Lie algebra consists of that of the corresponding simple Lie algebra plus an additional node, which corresponds to the addition of an imaginary root. Of course, such a node cannot be attached to the Dynkin diagram in just any location, but for each simple
|
https://en.wikipedia.org/wiki/Eugene%20Dynkin
|
Eugene Borisovich Dynkin (; 11 May 1924 – 14 November 2014) was a Soviet and American mathematician. He made contributions to the fields of probability and algebra, especially semisimple Lie groups, Lie algebras, and Markov processes. The Dynkin diagram, the Dynkin system, and Dynkin's lemma are named after him.
Biography
Dynkin was born into a Jewish family, living in Leningrad until 1935, when his family was exiled to Kazakhstan. Two years later, when Dynkin was 13, his father disappeared in the Gulag.
Moscow University
At the age of 16, in 1940, Dynkin was admitted to Moscow University. He avoided military service in World War II because of his poor eyesight, and received his MS in 1945 and his PhD in 1948. He became an assistant professor at Moscow, but was not awarded a "chair" until 1954 because of his political undesirability. His academic progress was made difficult due to his father's fate, as well as Dynkin's Jewish origin; the special efforts of Andrey Kolmogorov, his PhD supervisor, made it possible for Dynkin to progress through graduate school into a teaching position.
USSR Academy of Sciences
In 1968, Dynkin was forced to transfer from the Moscow University to the Central Economic Mathematical Institute of the USSR Academy of Sciences. He worked there on the theory of economic growth and economic equilibrium.
Cornell
He remained at the Institute until 1976, when he emigrated to the United States. In 1977, he became a professor at Cornell University.
Death
Dynkin died at the Cayuga Medical Center in Ithaca, New York, aged 90. Dynkin was an atheist.
Mathematical work
Dynkin is considered to be a rare example of a mathematician who made fundamental contributions to two very distinct areas of mathematics: algebra and probability theory. The algebraic period of Dynkin's mathematical work was between 1944 and 1954, though even during this time a probabilistic theme was noticeable. Indeed, Dynkin's first publication, written in 1945 jointly with N. A. Dmitriev, solved a problem on the eigenvalues of stochastic matrices. This problem was raised at Kolmogorov's seminar on Markov chains, while both Dynkin and Dmitriev were undergraduates.
Lie Theory
While Dynkin was a student at Moscow University, he attended Israel Gelfand's seminar on Lie groups. In 1944, Gelfand asked him to prepare a survey on the structure and classification of semisimple Lie groups, based on the papers by Hermann Weyl and Bartel Leendert van der Waerden. Dynkin found the papers difficult to read, and in an attempt to better understand the results, he invented the notion of a "simple root" in a root system. He represented the pairwise angles between these simple roots in the form of a Dynkin diagram. In this way he obtained a cleaner exposition of the classification of complex semisimple Lie algebras. Of Dynkin's 1947 paper "Structure of semisimple Lie algebras", Bertram Kostant wrote:
Dynkin's 1952 influential paper "Semisimple subalgebras of semisimple Lie
|
https://en.wikipedia.org/wiki/D-ring
|
A D-ring is a D-shaped metal ring used primarily as a lashing point in a tie-down system. Depending on their function D-rings may vary in composition, geometry, weight, finish, and load capacity. They may be screwed or welded in place, or attached to the end of a cord or a strap.
In permanent applications recessed tie-down rings minimize obstruction when the ring is not in use.
A D-ring may be also used as a permanent lifting point, or as a part of a tether. The most basic carabiner is a D-ring with a pivoting gate.
References
Hardware (mechanical)
|
https://en.wikipedia.org/wiki/Tautological
|
In mathematics, tautological may refer to:
Logic:
Tautological consequence
Geometry, where it is used as an alternative to canonical:
Tautological bundle
Tautological line bundle
Tautological one-form
Tautology (grammar), unnecessary repetition, or more words than necessary, to say the same thing.
See also
Tautology (disambiguation)
List of tautological place names
|
https://en.wikipedia.org/wiki/Modulo%20%28mathematics%29
|
In mathematics, the term modulo ("with respect to a modulus of", the Latin ablative of modulus which itself means "a small measure") is often used to assert that two distinct mathematical objects can be regarded as equivalent—if their difference is accounted for by an additional factor. It was initially introduced into mathematics in the context of modular arithmetic by Carl Friedrich Gauss in 1801. Since then, the term has gained many meanings—some exact and some imprecise (such as equating "modulo" with "except for"). For the most part, the term often occurs in statements of the form:
A is the same as B modulo C
which is often equivalent to "A is the same as B up to C", and means
A and B are the same—except for differences accounted for or explained by C.
History
Modulo is a mathematical jargon that was introduced into mathematics in the book Disquisitiones Arithmeticae by Carl Friedrich Gauss in 1801. Given the integers a, b and n, the expression "a ≡ b (mod n)", pronounced "a is congruent to b modulo n", means that a − b is an integer multiple of n, or equivalently, a and b both share the same remainder when divided by n. It is the Latin ablative of modulus, which itself means "a small measure."
The term has gained many meanings over the years—some exact and some imprecise. The most general precise definition is simply in terms of an equivalence relation R, where a is equivalent (or congruent) to b modulo R if aRb. More informally, the term is found in statements of the form:
A is the same as B modulo C
which means
A and B are the same—except for differences accounted for or explained by C.
Usage
Original use
Gauss originally intended to use "modulo" as follows: given the integers a, b and n, the expression a ≡ b (mod n) (pronounced "a is congruent to b modulo n") means that a − b is an integer multiple of n, or equivalently, a and b both leave the same remainder when divided by n. For example:
13 is congruent to 63 modulo 10
means that
13 − 63 is a multiple of 10 (equiv., 13 and 63 differ by a multiple of 10).
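A minimal Python check of this congruence (the helper name congruent is ours):

```python
def congruent(a: int, b: int, n: int) -> bool:
    """True when a is congruent to b modulo n, i.e. n divides a - b."""
    return (a - b) % n == 0

print(congruent(13, 63, 10))   # True: 13 - 63 = -50 is a multiple of 10
print(13 % 10, 63 % 10)        # 3 3 -- both leave the same remainder
print(congruent(13, 64, 10))   # False
```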
Computing
In computing and computer science, the term can be used in several ways:
In computing, it is typically the modulo operation: given two numbers (either integer or real), a and n, a modulo n is the remainder of the numerical division of a by n, under certain constraints.
In category theory as applied to functional programming, "operating modulo" is special jargon which refers to mapping a functor to a category by highlighting or defining remainders.
Structures
The term "modulo" can be used differently—when referring to different mathematical structures. For example:
Two members a and b of a group are congruent modulo a normal subgroup, if and only if ab−1 is a member of the normal subgroup (see quotient group and isomorphism theorem for more).
Two members of a ring or an algebra are congruent modulo an ideal, if the difference between them is in the ideal.
Used as a verb, the act of factoring out a normal su
|
https://en.wikipedia.org/wiki/Curve%20%28disambiguation%29
|
A curve is a geometrical object in mathematics.
Curve(s) may also refer to:
Arts, entertainment, and media
Music
Curve (band), an English alternative rock music group
Curve (album), a 2012 album by Our Lady Peace
"Curve" (song), a 2017 song by Gucci Mane featuring The Weeknd
Curve, a 2001 album by Doc Walker
"Curve", a song by John Petrucci from Suspended Animation, 2005
"Curve", a song by Cam'ron from the album Crime Pays, 2009
Periodicals
Curve (design magazine), an industrial design magazine
Curve (magazine), a U.S. lesbian magazine
Other uses in arts, entertainment, and media
Curve (film), a 2015 film
BBC Two "Curve" idents, various animations based around a curve motif
Brands and enterprises
Curve (payment card), a payment card that aggregates multiple payment cards
Curve (theatre), a theatre in Leicester, United Kingdom
Curve, fragrance by Liz Claiborne
BlackBerry Curve, a series of phones from Research in Motion
Curves International, an international fitness franchise
Other uses
Bézier curve, a type of parametric curve used in computer graphics and related fields
Curve (tonality), a software technique for image manipulation
Curveball, a baseball pitch often called simply a "curve"
Female body shape or curves
French curve, a template made out of plastic, metal or wood used to draw smooth curves
Grading curve, a system of grading students
Yield curve, a representation of predicted value of a fixed income security for different durations
See also
Curvature
Flat spline, a very flexible rule used to draw curves
The Curve (disambiguation)
|
https://en.wikipedia.org/wiki/Gaussian%20binomial%20coefficient
|
In mathematics, the Gaussian binomial coefficients (also called Gaussian coefficients, Gaussian polynomials, or q-binomial coefficients) are q-analogs of the binomial coefficients. The Gaussian binomial coefficient, written here as C(n, k)_q, is a polynomial in q with integer coefficients, whose value when q is set to a prime power counts the number of subspaces of dimension k in a vector space of dimension n over F_q, a finite field with q elements; i.e., it is the number of points in the finite Grassmannian Gr(k, F_q^n).
Definition
The Gaussian binomial coefficients are defined by:
C(m, r)_q = ((1 − q^m)(1 − q^(m−1)) ⋯ (1 − q^(m−r+1))) / ((1 − q)(1 − q²) ⋯ (1 − q^r)),
where m and r are non-negative integers. If r > m, this evaluates to 0. For r = 0, the value is 1 since both the numerator and denominator are empty products.
Although the formula at first appears to be a rational function, it actually is a polynomial, because the division is exact in Z[q].
All of the factors in the numerator and the denominator are divisible by 1 − q, and the quotient is the q-number:
[k]_q = (1 − q^k)/(1 − q) = 1 + q + q² + ⋯ + q^(k−1).
Dividing out these factors gives the equivalent formula
C(m, r)_q = ([m]_q [m−1]_q ⋯ [m−r+1]_q) / ([1]_q [2]_q ⋯ [r]_q)   (for r ≤ m).
In terms of the q-factorial [n]_q! = [1]_q [2]_q ⋯ [n]_q, the formula can be stated as
C(m, r)_q = [m]_q! / ([r]_q! [m−r]_q!)   (for r ≤ m).
Substituting q = 1 into these formulas gives the ordinary binomial coefficient C(m, r).
The Gaussian binomial coefficient has finite values as m → ∞:
C(∞, r)_q = 1 / ((1 − q)(1 − q²) ⋯ (1 − q^r)).
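A short sympy sketch of the product formula above (the helper gauss_binomial is ours); cancel() confirms the quotient really is a polynomial in q, and substituting q = 1 recovers the ordinary binomial coefficient:

```python
import sympy as sp

q = sp.symbols('q')

def gauss_binomial(m: int, r: int):
    """C(m, r)_q as a polynomial in q, via the product formula."""
    if r > m:
        return sp.Integer(0)
    num, den = sp.Integer(1), sp.Integer(1)
    for i in range(r):
        num *= 1 - q**(m - i)
        den *= 1 - q**(i + 1)
    return sp.expand(sp.cancel(num / den))   # exact division in Z[q]

poly = gauss_binomial(4, 2)
print(poly)             # q**4 + q**3 + 2*q**2 + q + 1
print(poly.subs(q, 1))  # 6, the ordinary binomial coefficient C(4, 2)
```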
Examples
Combinatorial descriptions
Inversions
One combinatorial description of Gaussian binomial coefficients involves inversions.
The ordinary binomial coefficient C(m, r) counts the r-combinations chosen from an m-element set. If one takes those m elements to be the different character positions in a word of length m, then each r-combination corresponds to a word of length m using an alphabet of two letters, say {0, 1}, with r copies of the letter 1 (indicating the positions in the chosen combination) and m − r letters 0 (for the remaining positions).
So, for example, the words of length 4 using two 0s and two 1s are 0011, 0101, 0110, 1001, 1010, 1100.
To obtain the Gaussian binomial coefficient C(m, r)_q, each word is associated with a factor q^d, where d is the number of inversions of the word; in this case, an inversion is a pair of positions where the left position of the pair holds the letter 1 and the right position holds the letter 0.
With the example above, there is one word with 0 inversions, 0011, one word with 1 inversion, 0101, two words with 2 inversions, 0110 and 1001, one word with 3 inversions, 1010, and one word with 4 inversions, 1100. This is also the number of left-shifts of the 1s from the initial position.
These correspond to the coefficients in C(4, 2)_q = 1 + q + 2q² + q³ + q⁴.
Another way to see this is to associate each word with a path across a rectangular grid with height and width , going from the bottom left corner to the top right corner. The path takes a step right for each 0 and a step up for each 1. An inversion switches the directions of a step (right+up becomes up+right and vice versa), hence the number of inversions equals the area under the path.
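The following plain-Python sketch (reusing the length-4 example) enumerates the 0/1 words, tallies inversions, and recovers the coefficients of C(4, 2)_q:

```python
from itertools import combinations
from collections import Counter

def inversion_counts(m: int, r: int) -> Counter:
    """Histogram of inversion numbers over all 0/1 words of length m with r ones."""
    hist = Counter()
    for ones in combinations(range(m), r):          # positions of the 1s
        word = [1 if i in ones else 0 for i in range(m)]
        inv = sum(1 for i in range(m) for j in range(i + 1, m)
                  if word[i] == 1 and word[j] == 0)
        hist[inv] += 1
    return hist

hist = inversion_counts(4, 2)
print(sorted(hist.items()))   # [(0, 1), (1, 1), (2, 2), (3, 1), (4, 1)]
# These are the coefficients of C(4, 2)_q = 1 + q + 2q^2 + q^3 + q^4, as claimed.
```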
Balls into bins
Let be the number of ways of throwing indistinguishable balls into indistinguishable bins, where each bin can contain up to balls.
The Gaussian binomial coefficient can be used to characterize .
Indeed,
whe
|
https://en.wikipedia.org/wiki/Pell%20number
|
In mathematics, the Pell numbers are an infinite sequence of integers, known since ancient times, that comprise the denominators of the closest rational approximations to the square root of 2. This sequence of approximations begins 1/1, 3/2, 7/5, 17/12, and 41/29, so the sequence of Pell numbers begins with 1, 2, 5, 12, and 29. The numerators of the same sequence of approximations are half the companion Pell numbers or Pell–Lucas numbers; these numbers form a second infinite sequence that begins with 2, 6, 14, 34, and 82.
Both the Pell numbers and the companion Pell numbers may be calculated by means of a recurrence relation similar to that for the Fibonacci numbers, and both sequences of numbers grow exponentially, proportionally to powers of the silver ratio 1 + √2. As well as being used to approximate the square root of two, Pell numbers can be used to find square triangular numbers, to construct integer approximations to the right isosceles triangle, and to solve certain combinatorial enumeration problems.
As with Pell's equation, the name of the Pell numbers stems from Leonhard Euler's mistaken attribution of the equation and the numbers derived from it to John Pell. The Pell–Lucas numbers are also named after Édouard Lucas, who studied sequences defined by recurrences of this type; the Pell and companion Pell numbers are Lucas sequences.
Pell numbers
The Pell numbers are defined by the recurrence relation:
P_0 = 0,  P_1 = 1,  P_n = 2P_(n−1) + P_(n−2) for n ≥ 2.
In words, the sequence of Pell numbers starts with 0 and 1, and then each Pell number is the sum of twice the previous Pell number and the Pell number before that. The first few terms of the sequence are
0, 1, 2, 5, 12, 29, 70, 169, 408, 985, 2378, 5741, 13860, … .
Analogously to the Binet formula, the Pell numbers can also be expressed by the closed-form formula:
P_n = ((1 + √2)^n − (1 − √2)^n) / (2√2).
For large values of n, the (1 + √2)^n term dominates this expression, so the Pell numbers are approximately proportional to powers of the silver ratio 1 + √2, analogous to the growth rate of Fibonacci numbers as powers of the golden ratio.
A third definition is possible, from the matrix formula
[[P_(n+1), P_n], [P_n, P_(n−1)]] = [[2, 1], [1, 0]]^n.
Many identities can be derived or proven from these definitions; for instance, an identity analogous to Cassini's identity for Fibonacci numbers,
P_(n+1) P_(n−1) − P_n² = (−1)^n,
is an immediate consequence of the matrix formula (found by considering the determinants of the matrices on the left and right sides of the matrix formula).
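A short Python sketch tying these definitions together (the range of n checked is arbitrary): the recurrence, the Binet-like closed form, and the Cassini-like identity:

```python
import math

def pell(n: int) -> int:
    """Pell numbers via the recurrence P(0)=0, P(1)=1, P(n)=2*P(n-1)+P(n-2)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, 2 * b + a
    return a

def pell_closed_form(n: int) -> int:
    """Binet-like formula ((1+sqrt2)^n - (1-sqrt2)^n) / (2*sqrt2), rounded."""
    s = math.sqrt(2)
    return round(((1 + s) ** n - (1 - s) ** n) / (2 * s))

print([pell(n) for n in range(13)])
# [0, 1, 2, 5, 12, 29, 70, 169, 408, 985, 2378, 5741, 13860]

for n in range(1, 13):
    assert pell(n) == pell_closed_form(n)                          # closed form agrees
    assert pell(n + 1) * pell(n - 1) - pell(n) ** 2 == (-1) ** n   # Cassini analogue
```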
Approximation to the square root of two
Pell numbers arise historically and most notably in the rational approximation to √2. If two large integers x and y form a solution to the Pell equation
x² − 2y² = ±1,
then their ratio x/y provides a close approximation to √2. The sequence of approximations of this form is
1/1, 3/2, 7/5, 17/12, 41/29, 99/70, …,
where the denominator of each fraction is a Pell number and the numerator is the sum of a Pell number and its predecessor in the sequence. That is, the solutions have the form
(P_(n−1) + P_n) / P_n.
The approximation √2 ≈ 577/408
of this type was known to Indian mathematicians in the third or fourth century B.C. The Greek mathematicians of the fifth century B.
|
https://en.wikipedia.org/wiki/Artin%E2%80%93Tits%20group
|
In the mathematical area of group theory, Artin groups, also known as Artin–Tits groups or generalized braid groups, are a family of infinite discrete groups defined by simple presentations. They are closely related with Coxeter groups. Examples are free groups, free abelian groups, braid groups, and right-angled Artin–Tits groups, among others.
The groups are named after Emil Artin, due to his early work on braid groups in the 1920s to 1940s, and Jacques Tits who developed the theory of a more general class of groups in the 1960s.
Definition
An Artin–Tits presentation is a group presentation ⟨S | R⟩ where S is a (usually finite) set of generators and R is a set of Artin–Tits relations, namely relations of the form stst… = tsts… for distinct s, t in S, where both sides have equal lengths, and there exists at most one relation for each pair of distinct generators s, t. An Artin–Tits group is a group that admits an Artin–Tits presentation. Likewise, an Artin–Tits monoid is a monoid that, as a monoid, admits an Artin–Tits presentation.
Alternatively, an Artin–Tits group can be specified by the set of generators S and, for every pair s, t in S, the natural number m(s, t) ≥ 2 that is the common length of the two words in the relation connecting s and t, if any. By convention, one puts m(s, t) = ∞ when there is no relation between s and t. Formally, if we define ⟨s, t⟩^m to denote the alternating product of s and t of length m, beginning with s — so that ⟨s, t⟩² = st, ⟨s, t⟩³ = sts, etc. — the Artin–Tits relations take the form
⟨s, t⟩^m(s,t) = ⟨t, s⟩^m(t,s),   where m(s, t) = m(t, s).
The integers m(s, t) can be organized into a symmetric matrix, known as the Coxeter matrix of the group.
If ⟨S | R⟩ is an Artin–Tits presentation of an Artin–Tits group A, the quotient of A obtained by adding the relation s² = 1 for each s of S is a Coxeter group. Conversely, if W is a Coxeter group presented by reflections and the relations s² = 1 are removed, the extension thus obtained is an Artin–Tits group. For instance, the Coxeter group associated with the n-strand braid group is the symmetric group of all permutations of {1, …, n}.
Examples
If m(s, t) = ∞ for all s ≠ t, the Artin–Tits group is the free group based on S.
If m(s, t) = 2 for all s ≠ t, it is the free abelian group based on S.
If S = {σ₁, …, σ_(n−1)} with m(σ_i, σ_j) = 3 for |i − j| = 1 and m(σ_i, σ_j) = 2 for |i − j| ≥ 2, it is the braid group on n strands.
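As a concrete illustration, a small Python sketch (the helper names are ours) that turns the Coxeter-matrix data m(s, t) into the corresponding Artin–Tits relations, here for the braid-group example with three generators:

```python
from math import inf

def alternating(s: str, t: str, m: int) -> str:
    """The alternating product <s, t>^m = stst... of length m, beginning with s."""
    return "".join(s if i % 2 == 0 else t for i in range(m))

def artin_tits_relations(gens, m):
    """Relations <s,t>^m(s,t) = <t,s>^m(s,t) for all pairs with m(s,t) < infinity."""
    rels = []
    for i, s in enumerate(gens):
        for t in gens[i + 1:]:
            order = m(s, t)
            if order != inf:
                rels.append((alternating(s, t, order), alternating(t, s, order)))
    return rels

# Braid group B_4: generators a, b, c with m = 3 for adjacent pairs and 2 otherwise.
gens = ["a", "b", "c"]
m = lambda s, t: 3 if abs(gens.index(s) - gens.index(t)) == 1 else 2
print(artin_tits_relations(gens, m))
# [('aba', 'bab'), ('ac', 'ca'), ('bcb', 'cbc')]
```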
General properties
Artin–Tits monoids are eligible for Garside methods based on the investigation of their divisibility relations, and are well understood:
Artin–Tits monoids are cancellative, and they admit greatest common divisors and conditional least common multiples (a least common multiple exists whenever a common multiple does).
If is an Artin–Tits monoid, and if is the associated Coxeter group, there is a (set-theoretic) section of into , and every element of admits a distinguished decomposition as a sequence of elements in the image of ("greedy normal form").
Very few results are known for general Artin–Tits groups. In particular, the following basic questions remain open in the general case:
– solving the word and conjugacy problems — which are conjectured to be decidable,
– determining torsion — which is conjectured to be trivial,
– determ
|
https://en.wikipedia.org/wiki/Reuleaux%20polygon
|
In geometry, a Reuleaux polygon is a curve of constant width made up of circular arcs of constant radius. These shapes are named after their prototypical example, the Reuleaux triangle, which in turn, is named after 19th-century German engineer Franz Reuleaux. The Reuleaux triangle can be constructed from an equilateral triangle by connecting each two vertices by a circular arc centered on the third vertex, and Reuleaux polygons can be formed by a similar construction from any regular polygon with an odd number of sides, or from certain irregular polygons. Every curve of constant width can be accurately approximated by Reuleaux polygons. They have been applied in coinage shapes.
Construction
If is a convex polygon with an odd number of sides, in which each vertex is equidistant to the two opposite vertices and closer to all other vertices, then replacing each side of by an arc centered at its opposite vertex produces a Reuleaux polygon. As a special case, this construction is possible for every regular polygon with an odd number of sides.
Every Reuleaux polygon must have an odd number of circular-arc sides, and can be constructed in this way from a polygon, the convex hull of its arc endpoints. However, it is possible for other curves of constant width to be made of an even number of arcs with varying radii.
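A numerical Python/NumPy sketch of this construction (the choice n = 7 and the sampling densities are arbitrary): replace each side of a regular 7-gon by an arc centered at the opposite vertex, then verify that the width of the resulting boundary is essentially the same in every direction:

```python
import numpy as np

n = 7                                              # odd number of sides
angles = 2 * np.pi * np.arange(n) / n
verts = np.c_[np.cos(angles), np.sin(angles)]      # regular n-gon, circumradius 1

pts = []
for k in range(n):
    a, b = verts[k], verts[(k + 1) % n]
    center = verts[(k + (n + 1) // 2) % n]         # vertex opposite the side (a, b)
    r = np.linalg.norm(a - center)
    th_a = np.arctan2(*(a - center)[::-1])
    th_b = np.arctan2(*(b - center)[::-1])
    dth = (th_b - th_a + np.pi) % (2 * np.pi) - np.pi   # shorter angular sweep
    th = th_a + dth * np.linspace(0, 1, 200)
    pts.append(center + r * np.c_[np.cos(th), np.sin(th)])
boundary = np.vstack(pts)

# Width in direction u = max projection - min projection over the boundary.
dirs = np.linspace(0, np.pi, 360)
u = np.c_[np.cos(dirs), np.sin(dirs)]
proj = boundary @ u.T
widths = proj.max(axis=0) - proj.min(axis=0)
print(widths.min(), widths.max())                  # nearly identical: constant width
```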
Properties
The Reuleaux polygons based on regular polygons are the only curves of constant width whose boundaries are formed by finitely many circular arcs of equal length.
Every curve of constant width can be approximated arbitrarily closely by a (possibly irregular) Reuleaux polygon of the same width.
A regular Reuleaux polygon has sides of equal length. More generally, when a Reuleaux polygon has sides that can be split into arcs of equal length, the convex hull of the arc endpoints is a Reinhardt polygon. These polygons are optimal in multiple ways: they have the largest possible perimeter for their diameter, the largest possible width for their diameter, and the largest possible width for their perimeter.
Applications
The constant width of these shapes allows their use as coins that can be used in coin-operated machines. For instance, the United Kingdom has made 20-pence and 50-pence coins in the shape of a regular Reuleaux heptagon. The Canadian loonie dollar coin uses another regular Reuleaux polygon with 11 sides. However, some coins with rounded-polygon sides, such as the 12-sided 2017 British pound coin, do not have constant width and are not Reuleaux polygons.
Although Chinese inventor Guan Baihua has made a bicycle with Reuleaux polygon wheels, the invention has not caught on.
References
Piecewise-circular curves
Constant width
|
https://en.wikipedia.org/wiki/Fibered%20knot
|
In knot theory, a branch of mathematics, a knot or link K
in the 3-dimensional sphere S³ is called fibered or fibred (sometimes Neuwirth knot in older texts, after Lee Neuwirth) if there is a 1-parameter family F_t of Seifert surfaces for K, where the parameter t runs through the points of the unit circle S¹, such that if s is not equal to t
then the intersection of F_s and F_t is exactly K.
Examples
Knots that are fibered
For example:
The unknot, trefoil knot, and figure-eight knot are fibered knots.
The Hopf link is a fibered link.
Knots that are not fibered
The Alexander polynomial of a fibered knot is monic, i.e. the coefficients of the highest and lowest powers of t are plus or minus 1. Examples of knots with nonmonic Alexander polynomials abound, for example the twist knots have Alexander polynomials , where q is the number of half-twists. In particular the stevedore knot is not fibered.
Related constructions
Fibered knots and links arise naturally, but not exclusively, in complex algebraic geometry. For instance, each singular point of a complex plane curve can be described
topologically as the cone on a fibered knot or link called the link of the singularity. The trefoil knot is the link of the cusp singularity z² + w³ = 0; the Hopf link (oriented correctly) is the link of the node singularity z² + w² = 0. In these cases, the family of Seifert surfaces is an aspect of the Milnor fibration of the singularity.
A knot is fibered if and only if it is the binding of some open book decomposition of S³.
See also
(−2,3,7) pretzel knot
References
External links
|
https://en.wikipedia.org/wiki/Nicolae%20Popescu
|
Nicolae Popescu (; 22 September 1937 – 29 July 2010) was a Romanian mathematician and professor at the University of Bucharest. He also held a research position at the Institute of Mathematics of the Romanian Academy, and was elected corresponding Member of the Romanian Academy in 1997.
He is best known for his contributions to algebra and the theory of abelian categories. From 1964 to 2007 he collaborated with Pierre Gabriel on the characterization of abelian categories; their best-known result is the Gabriel–Popescu theorem, published in 1964. His areas of expertise were category theory, abelian categories with applications to rings and modules, adjoint functors, limits and colimits, the theory of sheaves, the theory of rings, fields and polynomials, and valuation theory. He also had interests and published in algebraic topology, algebraic geometry, commutative algebra, K-theory, class field theory, and algebraic function theory.
Biography
Popescu was born on September 22, 1937, in Strehaia-Comanda, Mehedinți County, Romania. In 1954 he graduated from the Carol I High School in Craiova and went on to study mathematics at the University of Iași. In his third year of studies he was expelled from the university, having been deemed "hostile to the regime" for remarking that "the achievements of American scientists are also worthy of consideration." He then went back home to Strehaia, where he worked for a year in a collective farm, after which he was admitted in 1959 at the University of Bucharest, only to start anew as a freshman. Popescu earned his M.S. degree in mathematics in 1964, and his Ph.D. degree in mathematics in 1967, with thesis Krull–Remak–Schmidt Theorem and Theory of Decomposition written under the direction of . He was awarded a D. Phil. degree (Doctor Docent) in 1972, also by the University of Bucharest.
While still a student, Popescu focused on category theory. He first approached the general theory, with its connections to homological algebra and algebraic topology, then shifted his focus on theory of Abelian categories, being one of the main promoters of this theory in Romania. He carried out mathematics studies at the Institute of Mathematics of the Romanian Academy in the Algebra research group, and also had international collaborations on three continents. He shared many moral, ethical, and religious values with Alexander Grothendieck, who visited the Faculty of Mathematics in Bucharest in 1968. Like Grothendieck, he had a long-standing interest in category theory and number theory, and supported promising young mathematicians in his fields of interest. He also promoted the early developments of category theory applications in relational biology and mathematical biophysics/mathematical biology.
Academic positions
Popescu was appointed as a Lecturer at the University of Bucharest in 1968 where he taught graduate students until 1972. Starting in 1964 he also held a research appointment at the Institute of Mathematics
|
https://en.wikipedia.org/wiki/Ian%20Grojnowski
|
Ian Grojnowski is a mathematician working at the Department of Pure Mathematics and Mathematical Statistics at the University of Cambridge.
Awards and honours
Grojnowski was the first recipient of the Fröhlich Prize of the London Mathematical Society in 2004 for his work in representation theory and algebraic geometry. The citation reads
References
20th-century British mathematicians
21st-century British mathematicians
Australian mathematicians
Living people
Cambridge mathematicians
Year of birth missing (living people)
Massachusetts Institute of Technology alumni
|
https://en.wikipedia.org/wiki/Ansatz
|
In physics and mathematics, an ansatz (plural ansätze; from the German word meaning "initial placement of a tool at a work piece") is an educated guess or an additional assumption made to help solve a problem, and which may later be verified to be part of the solution by its results.
Use
An ansatz is the establishment of the starting equation(s), the theorem(s), or the value(s) describing a mathematical or physical problem or solution. It typically provides an initial estimate or framework to the solution of a mathematical problem, and can also take into consideration the boundary conditions (in fact, an ansatz is sometimes thought of as a "trial answer" and an important technique in solving differential equations).
After an ansatz, which constitutes nothing more than an assumption, has been established, the equations are solved more precisely for the general function of interest, which then constitutes a confirmation of the assumption. In essence, an ansatz makes assumptions about the form of the solution to a problem so as to make the solution easier to find.
It has been demonstrated that machine learning techniques can be applied to provide initial estimates similar to those invented by humans and to discover new ones in case no ansatz is available.
Examples
Given a set of experimental data that looks to be clustered about a line, a linear ansatz could be made to find the parameters of the line by a least squares curve fit. Variational approximation methods use ansätze and then fit the parameters.
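A minimal Python/NumPy illustration (the synthetic data with slope 2 and intercept 1 is made up for the example): assume the linear ansatz y ≈ a·x + b and determine a and b by least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.size)   # noisy data clustered about a line

# Ansatz: y = a*x + b.  Least squares picks the parameters of that assumed form.
A = np.c_[x, np.ones_like(x)]
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(a, b)   # close to 2.0 and 1.0
```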
Another example could be the mass, energy, and entropy balance equations that, considered simultaneously for purposes of the elementary operations of linear algebra, are the ansatz to most basic problems of thermodynamics.
Another example of an ansatz is to suppose the solution of a homogeneous linear differential equation to take an exponential form, or a power form in the case of a difference equation. More generally, one can guess a particular solution of a system of equations, and test such an ansatz by directly substituting the solution into the system of equations. In many cases, the assumed form of the solution is general enough that it can represent arbitrary functions, in such a way that the set of solutions found this way is a full set of all the solutions.
See also
Method of undetermined coefficients
Bayesian inference
Bethe ansatz
Coupled cluster, a technique for solving the many-body problem that is based on an exponential Ansatz
Demarcation problem
Guesstimate
Heuristic
Hypothesis
Trial and error
Train of thought
References
Bibliography
Philosophy of physics
Concepts in physics
Mathematical terminology
German_words_and_phrases
|
https://en.wikipedia.org/wiki/GLS
|
GLS may refer to:
Science and technology
GBAS landing system, an aircraft landing system
General Lighting Service, a type of light bulb
Generalized least squares, in statistics
Global location sensor
Glutaminase, a gene and enzyme
Gray leaf spot, a fungal plant disease
Guided local search, a search algorithm
Organisations
General Logistics Systems, a Dutch logistics company
Genesis Lease (NYSE: GLS), a former Bermudan aircraft leasing company
Government Legal Service, former name of a UK Government group
University of Chicago Graduate Library School
Glasgow Literary Society, Scotland
Global Linguist Solutions, an American translation company
GLS Bank, a German ethical bank
GLS University, in Ahmedabad, India
Events
Games, Learning & Society Conference
Georgetown Leadership Seminar
Global Leaders' Summit
Places
Glaisdale railway station (Station code), in England
Gloucestershire, England
Scholes International Airport at Galveston (IATA and FAA LID codes), Texas, US
Other uses
Guy L. Steele Jr. (born 1954), American computer scientist
Mercedes-Benz GLS, an automobile
|
https://en.wikipedia.org/wiki/Pregeometry%20%28model%20theory%29
|
Pregeometry, and in full combinatorial pregeometry, are essentially synonyms for "matroid". They were introduced by Gian-Carlo Rota with the intention of providing a less "ineffably cacophonous" alternative term. Also, the term combinatorial geometry, sometimes abbreviated to geometry, was intended to replace "simple matroid". These terms are now infrequently used in the study of matroids.
It turns out that many fundamental concepts of linear algebra – closure, independence, subspace, basis, dimension – are available in the general framework of pregeometries.
In the branch of mathematical logic called model theory, infinite finitary matroids, there called "pregeometries" (and "geometries" if they are simple matroids), are used in the discussion of independence phenomena. The study of how pregeometries, geometries, and abstract closure operators influence the structure of first-order models is called geometric stability theory.
Motivation
If V is a vector space over some field and A ⊆ V, we define cl(A) to be the set of all linear combinations of vectors from A, also known as the span of A. Then we have A ⊆ cl(A), cl(cl(A)) = cl(A), and A ⊆ B ⟹ cl(A) ⊆ cl(B). The Steinitz exchange lemma is equivalent to the statement: if b ∈ cl(A ∪ {c}) \ cl(A), then c ∈ cl(A ∪ {b}).
The linear algebra concepts of independent set, generating set, basis and dimension can all be expressed using the cl-operator alone. A pregeometry is an abstraction of this situation: we start with an arbitrary set S and an arbitrary operator cl which assigns to each subset A of S a subset cl(A) of S, satisfying the properties above. Then we can define the "linear algebra" concepts also in this more general setting.
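A small sympy sketch of the vector-space case (the vectors chosen are arbitrary): membership in cl(A) becomes a rank test, and the exchange property can be checked directly:

```python
import sympy as sp

def in_cl(v, A):
    """v is in cl(A) iff adding v to A does not increase the rank of the span (A non-empty)."""
    M_A = sp.Matrix([list(a) for a in A])
    M_Av = sp.Matrix([list(a) for a in A] + [list(v)])
    return M_Av.rank() == M_A.rank()

A = [(1, 0, 0)]            # a line in Q^3
b, c = (0, 1, 0), (1, 1, 0)

# Exchange: b is in cl(A + {c}) but not in cl(A), hence c must be in cl(A + {b}).
assert in_cl(b, A + [c]) and not in_cl(b, A)
assert in_cl(c, A + [b])
print("exchange property verified for this example")
```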
This generalized notion of dimension is very useful in model theory, where in certain situation one can argue as follows: two models with the same cardinality must have the same dimension and two models with the same dimension must be isomorphic.
Definitions
Pregeometries and geometries
A combinatorial pregeometry (also known as a finitary matroid) is a pair (S, cl), where S is a set and cl : P(S) → P(S) (called the closure map) satisfies the following axioms. For all a, b ∈ S and A, B ⊆ S:
cl is monotone increasing and dominates the identity (i.e. A ⊆ B implies cl(A) ⊆ cl(B), and A ⊆ cl(A)), and is idempotent (i.e. cl(cl(A)) = cl(A)).
Finite character: For each a ∈ cl(A) there is some finite F ⊆ A with a ∈ cl(F).
Exchange principle: If a ∈ cl(A ∪ {b}) \ cl(A), then b ∈ cl(A ∪ {a}) (and hence by monotonicity and idempotence in fact cl(A ∪ {a}) = cl(A ∪ {b})).
Sets of the form for some are called closed. It is then clear that finite intersections of closed sets are closed and that is the smallest closed set containing .
A geometry is a pregeometry in which the closures of singletons are singletons and the closure of the empty set is the empty set.
Independence, bases and dimension
Given sets , is independent over if for any . We say that is independent if it is independent over the empty set.
A set is a basis for over if it is independent over and .
A basis is the same as a maximal independent subset, and using Zorn's lemma one can show that every set has a basis. Since a pregeometry satisfies the Steinitz exchange property all bases ar
|
https://en.wikipedia.org/wiki/Gauss%E2%80%93Kuzmin%E2%80%93Wirsing%20operator
|
In mathematics, the Gauss–Kuzmin–Wirsing operator is the transfer operator of the Gauss map that takes a positive number to the fractional part of its reciprocal. (This is not the same as the Gauss map in differential geometry.) It is named after Carl Gauss, Rodion Kuzmin, and Eduard Wirsing. It occurs in the study of continued fractions; it is also related to the Riemann zeta function.
Relationship to the maps and continued fractions
The Gauss map
The Gauss function (map) h is:
h(x) = 1/x − ⌊1/x⌋,
where ⌊·⌋ denotes the floor function.
It has an infinite number of jump discontinuities at x = 1/n, for positive integers n. It is hard to approximate it by a single smooth polynomial.
Operator on the maps
The Gauss–Kuzmin–Wirsing operator G acts on functions f as
[Gf](x) = Σ_{n≥1} 1/(x + n)² · f(1/(x + n)).
Eigenvalues of the operator
The first eigenfunction of this operator is
1/(ln 2) · 1/(1 + x),
which corresponds to an eigenvalue of λ1 = 1. This eigenfunction gives the probability of the occurrence of a given integer in a continued fraction expansion, and is known as the Gauss–Kuzmin distribution. This follows in part because the Gauss map acts as a truncating shift operator for the continued fractions: if
x = [0; a1, a2, a3, …]
is the continued fraction representation of a number 0 < x < 1, then
h(x) = [0; a2, a3, …].
Because is conjugate to a Bernoulli shift, the eigenvalue is simple, and since the operator leaves invariant the Gauss–Kuzmin measure, the operator is ergodic with respect to the measure. This fact allows a short proof of the existence of Khinchin's constant.
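As an empirical illustration of the Gauss–Kuzmin distribution (the sample size, burn-in of 5 iterations, and digits checked are arbitrary choices), iterating the Gauss map on random starting points reproduces the known digit probabilities log2(1 + 1/(k(k+2))):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=200_000)

digits = []
for i in range(15):                 # iterate the Gauss map h(x) = 1/x - floor(1/x)
    a = np.floor(1.0 / x)           # the continued-fraction digit produced at this step
    if i >= 5:                      # burn-in so the digits are near the invariant measure
        digits.append(a)
    x = 1.0 / x - a
    x[x == 0] = rng.uniform(0, 1, size=np.count_nonzero(x == 0))  # avoid division by zero
digits = np.concatenate(digits)

for k in (1, 2, 3, 4):
    emp = np.mean(digits == k)
    exact = np.log2(1 + 1 / (k * (k + 2)))   # Gauss-Kuzmin probability of digit k
    print(f"digit {k}: empirical {emp:.4f}, Gauss-Kuzmin {exact:.4f}")
```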
Additional eigenvalues can be computed numerically; the next eigenvalue is λ2 = −0.3036630029...
and its absolute value is known as the Gauss–Kuzmin–Wirsing constant. Analytic forms for additional eigenfunctions are not known. It is not known if the eigenvalues are irrational.
Let us arrange the eigenvalues of the Gauss–Kuzmin–Wirsing operator according to an absolute value:
It was conjectured in 1995 by Philippe Flajolet and Brigitte Vallée that
In 2018, Giedrius Alkauskas gave a convincing argument that this conjecture can be refined to a much stronger statement:
here the function is bounded, and is the Riemann zeta function.
Continuous spectrum
The eigenvalues form a discrete spectrum, when the operator is limited to act on functions on the unit interval of the real number line. More broadly, since the Gauss map is the shift operator on Baire space , the GKW operator can also be viewed as an operator on the function space (considered as a Banach space, with basis functions taken to be the indicator functions on the cylinders of the product topology). In the later case, it has a continuous spectrum, with eigenvalues in the unit disk of the complex plane. That is, given the cylinder , the operator G shifts it to the left: . Taking to be the indicator function which is 1 on the cylinder (when ), and zero otherwise, one has that . The series
then is an eigenfunction with eigenvalue . That is, one has whenever the summation converges: that is, when .
A special case arises
|
https://en.wikipedia.org/wiki/Klein%20transformation
|
In quantum field theory, the Klein transformation is a redefinition of the fields to amend the spin-statistics theorem.
Bose–Einstein
Suppose φ and χ are fields such that, if x and y are spacelike-separated points and i and j represent the spinor/tensor indices,
Also suppose χ is invariant under the Z2 parity (nothing to do with spatial reflections!) mapping χ to −χ but leaving φ invariant. Obviously, free field theories always satisfy this property. Then, the Z2 parity of the number of χ particles is well defined and is conserved in time. Let's denote this parity by the operator Kχ which maps χ-even states to itself and χ-odd states into their negative. Then, Kχ is involutive, Hermitian and unitary.
Needless to say, the fields φ and χ above don't have the proper statistics relations for either a boson or a fermion. i.e. they are bosonic with respect to themselves but fermionic with respect to each other. But if you look at the statistical properties alone, we find it has exactly the same statistics as the Bose–Einstein statistics. Here's why:
Define two new fields φ' and χ' as follows:
and
This redefinition is invertible (because Kχ is). Now, the spacelike commutation relations become
Fermi–Dirac
Now, let's work with the example where
(spacelike-separated as usual).
Assume once again we have a Z2 conserved parity operator Kχ acting upon χ alone.
Let
and
Then
More than two fields
If there are more than two fields, then one can keep applying the Klein transformation to each pair of fields with the "wrong" commutation/anticommutation relations until the desired result is obtained.
This explains the equivalence between parastatistics and the more familiar Bose–Einstein/Fermi–Dirac statistics.
References
See also
Jordan–Schwinger transformation
Jordan–Wigner transformation
Bogoliubov–Valatin transformation
Holstein–Primakoff transformation
Quantum field theory
|
https://en.wikipedia.org/wiki/Integrally%20closed
|
In mathematics, more specifically in abstract algebra, the concept of integrally closed has three meanings:
A commutative ring contained in a commutative ring is said to be integrally closed in if is equal to the integral closure of in .
An integral domain is said to be integrally closed if it is equal to its integral closure in its field of fractions.
An ordered group G is called integrally closed if for all elements a and b of G, if a^n ≤ b for all natural numbers n, then a ≤ 1.
|
https://en.wikipedia.org/wiki/Further%20Mathematics
|
Further Mathematics is the title given to a number of advanced secondary mathematics courses. The term "Higher and Further Mathematics", and the term "Advanced Level Mathematics", may also refer to any of several advanced mathematics courses at many institutions.
In the United Kingdom, Further Mathematics describes a course studied in addition to the standard mathematics AS-Level and A-Level courses. In the state of Victoria in Australia, it describes a course delivered as part of the Victorian Certificate of Education (see § Australia (Victoria) for a more detailed explanation). Globally, it describes a course studied in addition to GCE AS-Level and A-Level Mathematics, or one which is delivered as part of the International Baccalaureate Diploma.
In other words, Further Mathematics denotes additional, more advanced mathematics studied beyond the standard advanced-level mathematics course.
United Kingdom
Background
A qualification in Further Mathematics involves studying both pure and applied modules. Whilst the pure modules (formerly known as Pure 4–6 or Core 4–6, now known as Further Pure 1–3, where 4 exists for the AQA board) build on knowledge from the core mathematics modules, the applied modules may start from first principles.
The structure of the qualification varies between exam boards.
With regard to Mathematics degrees, most universities do not require Further Mathematics, and may incorporate foundation math modules or offer "catch-up" classes covering any additional content. Exceptions include the University of Warwick and the University of Cambridge, which require Further Mathematics to at least AS level; University College London, which requires or recommends an A2 in Further Maths for its maths courses; and Imperial College, which requires an A in A-level Further Maths. Other universities may recommend it or may promise lower offers in return. Some schools and colleges do not offer Further Mathematics, but online resources are available.
Although the subject has about 60% of its cohort obtaining "A" grades, students choosing the subject are assumed to be more proficient in mathematics, and there is much more overlap of topics compared to base mathematics courses at A level.
Some medicine courses do not count maths and further maths as separate subjects for the purposes of making offers. This is due to the overlap in content, and the potentially narrow education a candidate with maths, further maths and just one other subject may have.
Support
There are numerous sources of support for both teachers and students. The AMSP (formerly FMSP) is a government-funded organisation that offers professional development, enrichment activities and is a source of additional materials via its website. Registering with AMSP gives access to Integral, another source of both teaching and learning materials hosted by Mathematics Education Innovation (MEI). Underground Mathematics is another resource in active development which reflects the emphasis on problem solving and reasoning in the UK c
|
https://en.wikipedia.org/wiki/Kleinian%20group
|
In mathematics, a Kleinian group is a discrete subgroup of the group of orientation-preserving isometries of hyperbolic 3-space . The latter, identifiable with , is the quotient group of the 2 by 2 complex matrices of determinant 1 by their center, which consists of the identity matrix and its product by . has a natural representation as orientation-preserving conformal transformations of the Riemann sphere, and as orientation-preserving conformal transformations of the open unit ball in . The group of Möbius transformations is also related as the non-orientation-preserving isometry group of , . So, a Kleinian group can be regarded as a discrete subgroup acting on one of these spaces.
History
The theory of general Kleinian groups was founded by and , who named them after Felix Klein. The special case of Schottky groups had been studied a few years earlier, in 1877, by Schottky.
Definitions
One modern definition of Kleinian group is as a group which acts on the 3-ball as a discrete group of hyperbolic isometries. Hyperbolic 3-space has a natural boundary; in the ball model, this can be identified with the 2-sphere. We call it the sphere at infinity, and denote it by . A hyperbolic isometry extends to a conformal homeomorphism of the sphere at infinity (and conversely, every conformal homeomorphism on the sphere at infinity extends uniquely to a hyperbolic isometry on the ball by Poincaré extension). It is a standard result from complex analysis that conformal homeomorphisms on the Riemann sphere are exactly the Möbius transformations, which can further be identified as elements of the projective linear group PGL(2,C). Thus, a Kleinian group can also be defined as a discrete subgroup Γ of PGL(2,C). Classically, a Kleinian group was required to act properly discontinuously on a non-empty open subset of the Riemann sphere, but modern usage allows any discrete subgroup.
When Γ is isomorphic to the fundamental group of a hyperbolic 3-manifold, then the quotient space H3/Γ becomes a Kleinian model of the manifold. Many authors use the terms Kleinian model and Kleinian group interchangeably, letting the one stand for the other.
Discreteness implies points in the interior of hyperbolic 3-space have finite stabilizers, and discrete orbits under the group Γ. On the other hand, the orbit Γp of a point p will typically accumulate on the boundary of the closed ball .
The set of accumulation points of Γp in is called the limit set of Γ, and usually denoted . The complement is called the domain of discontinuity or the ordinary set or the regular set. Ahlfors' finiteness theorem implies that if the group is finitely generated then is a Riemann surface orbifold of finite type.
The unit ball B3 with its conformal structure is the Poincaré model of hyperbolic 3-space. When we think of it metrically, with metric
it is a model of 3-dimensional hyperbolic space H3. The set of conformal self-maps of B3 becomes the set of isometries (i.e. distance-preserving
|
https://en.wikipedia.org/wiki/Kazhdan%27s%20property%20%28T%29
|
In mathematics, a locally compact topological group G has property (T) if the trivial representation is an isolated point in its unitary dual equipped with the Fell topology. Informally, this means that if G acts unitarily on a Hilbert space and has "almost invariant vectors", then it has a nonzero invariant vector. The formal definition, introduced by David Kazhdan (1967), gives this a precise, quantitative meaning.
Although originally defined in terms of irreducible representations, property (T) can often be checked even when there is little or no explicit knowledge of the unitary dual. Property (T) has important applications to group representation theory, lattices in algebraic groups over local fields, ergodic theory, geometric group theory, expanders, operator algebras and the theory of networks.
Definitions
Let G be a σ-compact, locally compact topological group and π : G → U(H) a unitary representation of G on a (complex) Hilbert space H. If ε > 0 and K is a compact subset of G, then a unit vector ξ in H is called an (ε, K)-invariant vector if
The following conditions on G are all equivalent to G having property (T) of Kazhdan, and any of them can be used as the definition of property (T).
(1) The trivial representation is an isolated point of the unitary dual of G with Fell topology.
(2) Any sequence of continuous positive definite functions on G converging to 1 uniformly on compact subsets, converges to 1 uniformly on G.
(3) Every unitary representation of G that has an (ε, K)-invariant unit vector for any ε > 0 and any compact subset K, has a non-zero invariant vector.
(4) There exists an ε > 0 and a compact subset K of G such that every unitary representation of G that has an (ε, K)-invariant unit vector, has a nonzero invariant vector.
(5) Every continuous affine isometric action of G on a real Hilbert space has a fixed point (property (FH)).
If H is a closed subgroup of G, the pair (G,H) is said to have relative property (T) of Margulis if there exists an ε > 0 and a compact subset K of G such that whenever a unitary representation of G has an (ε, K)-invariant unit vector, then it has a non-zero vector fixed by H.
Discussion
Definition (4) evidently implies definition (3). To show the converse, let G be a locally compact group satisfying (3), and assume for contradiction that for every K and ε there is a unitary representation that has a (K, ε)-invariant unit vector but no nonzero invariant vector. The direct sum of all such representations then has (K, ε)-invariant unit vectors for every K and ε, yet no nonzero invariant vector, contradicting (3).
The equivalence of (4) and (5) (Property (FH)) is the Delorme-Guichardet theorem. The fact that (5) implies (4) requires the assumption that G is σ-compact (and locally compact) (Bekka et al., Theorem 2.12.4).
General properties
Property (T) is preserved under quotients: if G has property (T) and H is a quotient group of G then H has property (T). Equivalently, if a homomorphic image of a group G does not have property (T) then G itself do
|
https://en.wikipedia.org/wiki/4-manifold
|
In mathematics, a 4-manifold is a 4-dimensional topological manifold. A smooth 4-manifold is a 4-manifold with a smooth structure. In dimension four, in marked contrast with lower dimensions, topological and smooth manifolds are quite different. There exist some topological 4-manifolds which admit no smooth structure, and even if there exists a smooth structure, it need not be unique (i.e. there are smooth 4-manifolds which are homeomorphic but not diffeomorphic).
4-manifolds are important in physics because in General Relativity, spacetime is modeled as a pseudo-Riemannian 4-manifold.
Topological 4-manifolds
The homotopy type of a simply connected compact 4-manifold only depends on the intersection form on the middle dimensional homology. A famous theorem of Michael Freedman implies that the homeomorphism type of the manifold only depends on this intersection form, and on an invariant called the Kirby–Siebenmann invariant, and moreover that every combination of unimodular form and Kirby–Siebenmann invariant can arise, except that if the form is even, then the Kirby–Siebenmann invariant must be the signature/8 (mod 2).
Examples:
In the special case when the form is 0, this implies the 4-dimensional topological Poincaré conjecture.
If the form is the E8 lattice, this gives a manifold called the E8 manifold, a manifold not homeomorphic to any simplicial complex.
If the form is , there are two manifolds depending on the Kirby–Siebenmann invariant: one is 2-dimensional complex projective space, and the other is a fake projective space, with the same homotopy type but not homeomorphic (and with no smooth structure).
When the rank of the form is greater than about 28, the number of positive definite unimodular forms starts to increase extremely rapidly with the rank, so there are huge numbers of corresponding simply connected topological 4-manifolds (most of which seem to be of almost no interest).
Freedman's classification can be extended to some cases when the fundamental group is not too complicated; for example, when it is , there is a classification similar to the one above using Hermitian forms over the group ring of . If the fundamental group is too large (for example, a free group on 2 generators), then Freedman's techniques seem to fail and very little is known about such manifolds.
For any finitely presented group it is easy to construct a (smooth) compact 4-manifold with it as its fundamental group. As there is no algorithm to tell whether two finitely presented groups are isomorphic (even if one is known to be trivial) there is no algorithm to tell if two 4-manifolds have the same fundamental group. This is one reason why much of the work on 4-manifolds just considers the simply connected case: the general case of many problems is already known to be intractable.
Smooth 4-manifolds
For manifolds of dimension at most 6, any piecewise linear (PL) structure can be smoothed in an essentially unique way, so in particular the theory of 4 dimensional PL m
|
https://en.wikipedia.org/wiki/Quaternion%20%28disambiguation%29
|
In mathematics
The quaternions form a number system that extends the complex numbers.
Quaternion rotation
Quaternion group, a non-abelian group of order 8
Symbols
Imperial quaternions (heraldry of the Holy Roman Empire)
Quaternion Eagle
Military uses
A group of four soldiers in the Roman legion
A fireteam
Other
Quaternion (gathering), four folded sheets as a unit in bookbinding
Quaternion (poetry), a style of poetry with four parts
See also
|
https://en.wikipedia.org/wiki/Binary%20data
|
Binary data is data whose unit can take on only two possible states. These are often labelled as 0 and 1 in accordance with the binary numeral system and Boolean algebra.
Binary data occurs in many different technical and scientific fields, where it can be called by different names including bit (binary digit) in computer science, truth value in mathematical logic and related domains and binary variable in statistics.
Mathematical and combinatoric foundations
A discrete variable that can take only one state contains zero information, and 2 is the next natural number after 1. That is why the bit, a variable with only two possible values, is a standard primary unit of information.
A collection of n bits may have 2^n states: see binary number for details. The number of states of a collection of discrete variables depends exponentially on the number of variables, and only as a power law on the number of states of each variable. Ten bits have more (1024) states than three decimal digits (1000). Bits are therefore more than sufficient to represent any information that can be stored in decimal digits, and information contained in discrete variables with 3, 4, 5, 6, 7, 8, 9, 10, ... states can always be superseded by allocating two, three, or four times more bits. So, the use of any small number other than 2 does not provide an advantage.
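A short Python sketch (illustrative only) makes the comparison concrete: a collection of n two-state variables has 2^n states, while n ten-state variables have 10^n states.

# Illustrative sketch: counting the states of collections of bits and of decimal digits.
for bits, digits in [(10, 3), (20, 6), (30, 9)]:
    print(f"{bits} bits: {2 ** bits} states;  {digits} decimal digits: {10 ** digits} states")
# 10 bits already give 1024 states, more than the 1000 states of 3 decimal digits.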
Moreover, Boolean algebra provides a convenient mathematical structure for collection of bits, with a semantic of a collection of propositional variables. Boolean algebra operations are known as "bitwise operations" in computer science. Boolean functions are also well-studied theoretically and easily implementable, either with computer programs or by so-named logic gates in digital electronics. This contributes to the use of bits to represent different data, even those originally not binary.
In statistics
In statistics, binary data is a statistical data type consisting of categorical data that can take exactly two possible values, such as "A" and "B", or "heads" and "tails". It is also called dichotomous data, and an older term is quantal data. The two values are often referred to generically as "success" and "failure". As a form of categorical data, binary data is nominal data, meaning the values are qualitatively different and cannot be compared numerically. However, the values are frequently represented as 1 or 0, which corresponds to counting the number of successes in a single trial: 1 (success) or 0 (failure).
Often, binary data is used to represent one of two conceptually opposed values, e.g.:
the outcome of an experiment ("success" or "failure")
the response to a yes–no question ("yes" or "no")
presence or absence of some feature ("is present" or "is not present")
the truth or falsehood of a proposition ("true" or "false", "correct" or "incorrect")
However, it can also be used for data that is assumed to have only two possible values, even if they are not conceptually opposed or conceptually represent all possible
|
https://en.wikipedia.org/wiki/Lambert%20series
|
In mathematics, a Lambert series, named for Johann Heinrich Lambert, is a series taking the form
It can be resummed formally by expanding the denominator:
where the coefficients of the new series are given by the Dirichlet convolution of an with the constant function 1(n) = 1:
This series may be inverted by means of the Möbius inversion formula, and is an example of a Möbius transform.
Examples
Since this last sum is a typical number-theoretic sum, almost any natural multiplicative function will be exactly summable when used in a Lambert series. Thus, for example, one has
where is the number of positive divisors of the number n.
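This resummation is easy to check with truncated power series. The following Python sketch (illustrative, not part of the article) expands Σ aₙ qⁿ/(1 − qⁿ) to a fixed order and verifies that, with aₙ = 1, the coefficient of qᵐ is the number of divisors of m:

# Illustrative sketch: expanding sum_{n>=1} a_n * q^n / (1 - q^n) gives coefficients
# b_m = sum_{d | m} a_d, the Dirichlet convolution of a with the constant function 1.
N = 20  # truncate all power series at degree N

def lambert_coeffs(a, N):
    b = [0] * (N + 1)
    for n in range(1, N + 1):
        # q^n / (1 - q^n) = q^n + q^(2n) + q^(3n) + ...
        for m in range(n, N + 1, n):
            b[m] += a[n]
    return b

a = [0] + [1] * N  # a_n = 1 for every n >= 1
divisor_counts = [0] + [sum(1 for d in range(1, m + 1) if m % d == 0) for m in range(1, N + 1)]
assert lambert_coeffs(a, N) == divisor_counts
print(lambert_coeffs(a, N)[1:])  # 1, 2, 2, 3, 2, 4, ... -- the number-of-divisors function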
For the higher order sum-of-divisor functions, one has
where is any complex number and
is the divisor function. In particular, for , the Lambert series one gets is
which is (up to the factor of ) the logarithmic derivative of the usual generating function for partition numbers
Additional Lambert series related to the previous identity include those for the variants of the
Möbius function given below
Related Lambert series over the Moebius function include the following identities for any
prime :
The proof of the first identity above follows from a multi-section (or bisection) identity of these
Lambert series generating functions in the following form where we denote
to be the Lambert series generating function of the arithmetic function f:
The second identity in the previous equations follows from the fact that the coefficients of the left-hand-side sum are given by
where the function is the multiplicative identity with respect to the operation of Dirichlet convolution of arithmetic functions.
For Euler's totient function :
For Von Mangoldt function :
For Liouville's function :
with the sum on the right similar to the Ramanujan theta function, or Jacobi theta function . Note that Lambert series in which the an are trigonometric functions, for example, an = sin(2n x), can be evaluated by various combinations of the logarithmic derivatives of Jacobi theta functions.
Generally speaking, we can extend the previous generating function expansion by letting denote the characteristic function of the powers, , for positive natural numbers and defining the generalized m-Liouville lambda function to be the arithmetic function satisfying . This definition of clearly implies that , which in turn shows that
We also have a slightly more generalized Lambert series expansion generating the sum of squares function in the form of
In general, if we write the Lambert series over which generates the arithmetic functions , the next pairs of functions correspond to other well-known convolutions expressed by their Lambert series generating functions in the forms of
where is the multiplicative identity for Dirichlet convolutions, is the identity function for powers, denotes the characteristic function for the squares, which counts the number of distinct prime factors of (see prime omega function),
|
https://en.wikipedia.org/wiki/Band%20sum
|
In geometric topology, a band sum of two n-dimensional knots K1 and K2 along an (n + 1)-dimensional 1-handle h called a band is an n-dimensional knot K such that:
There is an (n + 1)-dimensional 1-handle h connected to (K1, K2) embedded in Sn+2.
There are points and such that is attached to along .
K is the n-dimensional knot obtained by this surgery.
A band sum is thus a generalization of the usual connected sum of knots.
See also
Manifold decomposition
References
.
.
Topology
Differential topology
Knot theory
Operations on structures
|
https://en.wikipedia.org/wiki/Noncentral%20F-distribution
|
In probability theory and statistics, the noncentral F-distribution is a continuous probability distribution that is a noncentral generalization of the (ordinary) F-distribution. It describes the distribution of the quotient (X/n1)/(Y/n2), where the numerator X has a noncentral chi-squared distribution with n1 degrees of freedom and the denominator Y has a central chi-squared distribution with n2 degrees of freedom. It is also required that X and Y are statistically independent of each other.
It is the distribution of the test statistic in analysis of variance problems when the null hypothesis is false. The noncentral F-distribution is used to find the power function of such a test.
Occurrence and specification
If is a noncentral chi-squared random variable with noncentrality parameter and degrees of freedom, and is a chi-squared random variable with degrees of freedom that is statistically independent of , then
is a noncentral F-distributed random variable.
The probability density function (pdf) for the noncentral F-distribution is
when and zero otherwise.
The degrees of freedom and are positive.
The term is the beta function, where
The cumulative distribution function for the noncentral F-distribution is
where is the regularized incomplete beta function.
The mean and variance of the noncentral F-distribution are
and
Special cases
When λ = 0, the noncentral F-distribution becomes the
F-distribution.
Related distributions
Z has a noncentral chi-squared distribution if
where F has a noncentral F-distribution.
See also noncentral t-distribution.
Implementations
The noncentral F-distribution is implemented in the R language (e.g., pf function), in MATLAB (ncfcdf, ncfinv, ncfpdf, ncfrnd and ncfstat functions in the statistics toolbox) in Mathematica (NoncentralFRatioDistribution function), in NumPy (random.noncentral_f), and in Boost C++ Libraries.
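As an illustrative check (not taken from the article), the NumPy sampler mentioned above can be compared against the standard closed-form mean ν₂(ν₁ + λ)/(ν₁(ν₂ − 2)), which is valid for ν₂ > 2:

import numpy as np

# Illustrative Monte Carlo sketch: draw noncentral-F samples with numpy and compare
# the empirical mean with the closed-form mean nu2*(nu1 + lam) / (nu1*(nu2 - 2)).
rng = np.random.default_rng(0)
nu1, nu2, lam = 5, 12, 3.0
sample = rng.noncentral_f(dfnum=nu1, dfden=nu2, nonc=lam, size=1_000_000)
closed_form_mean = nu2 * (nu1 + lam) / (nu1 * (nu2 - 2))
print(sample.mean(), closed_form_mean)  # the two values should agree closely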
A collaborative wiki page implements an interactive online calculator, programmed in the R language, for the noncentral t, chi-squared, and F distributions, at the Institute of Statistics and Econometrics, School of Business and Economics, Humboldt-Universität zu Berlin.
Notes
References
Continuous distributions
F
|
https://en.wikipedia.org/wiki/Longdean%20School
|
Longdean School is a secondary school and sixth form with academy status, located in the southeast of Hemel Hempstead, Hertfordshire. The academy specialises in Maths and Computing.
History
Grammar school
Originally called Apsley Grammar School, it began as a state grammar school in Hemel Hempstead. It was founded in 1955 as part of the development of the town after its designation as a new town and the need for expanded secondary school provision. Although named for the nearby village of Apsley, the school is actually situated about one mile away, in the Bennetts End district of the town. Its first Head Teacher was Valentine (V.J.) Wrigley.
Comprehensive
The name of the school changed to Longdean School in 1970 on the amalgamation with the adjacent Bennett's End Secondary Modern School to form what was the third-largest comprehensive school in Hertfordshire at the time.
The school motto of Rejoice in Thy Youth was retained after the amalgamation.
Since September 2012 the headmaster has been Mr Graham Cunningham, replacing the previous headmaster, Mr Rhodri Bryant. The most recent Ofsted report classed the school as 'Good'.
The school operates community facilities in the form of a sports centre, small Astro pitch, grass pitches and Multi Use Games Area.
Academy
During the summer term of 2011, Longdean School attained academy status.
The school works in consortium with two neighbouring schools to enhance post-16 provision. The group consists of Adeyfield Academy, Astley Cooper School and Longdean School. Staff development and well-being are also coordinated at consortium level.
In May 2012, Longdean was included in the Government's £2 billion Priority School Building Programme. Longdean's inclusion was based upon the condition of its existing buildings that have exceeded their 25-year life expectancy. As a result, a completely new school building was constructed by Interserve/Kajima on former playing fields and both the existing premises were demolished. The new school opened at the end of 2016.
Admissions
Longdean is a non-selective coeducational school within the state education system, accepting pupils from its catchment area of Bennetts End, Nash Mills, Leverstock Green and adjacent areas.
Notable former pupils
Apsley Grammar School
Paul Boateng, (now Baron Boateng) – the UK's first black Cabinet minister, and British High Commissioner to South Africa from March 2005 to April 2009.
Prof Hugh Loxdale MBE, entomologist, Professor of Ecology from 2009 to 2010 at the Institute of Ecology, University of Jena, and President from 2004–6 of the Royal Entomological Society of London
Andy Powell – guitarist in the rock group Wishbone Ash
Sue Hayes - London Film Commissioner, award-winning documentary producer and director of Edinburgh International Television Festival
Longdean School
Chris Eagles – Professional football player, enrolled in Manchester United youth academy before turning pro, now playing for Ross County F.C.
Jake Howells –
|
https://en.wikipedia.org/wiki/Hellmuth%20Kneser
|
Hellmuth Kneser (16 April 1898 – 23 August 1973) was a Baltic German mathematician, who made notable contributions to group theory and topology. His most famous result may be his theorem on the existence of a prime decomposition for 3-manifolds. His proof originated the concept of normal surface, a fundamental cornerstone of the theory of 3-manifolds.
He was born in Dorpat, Russian Empire (now Tartu, Estonia) and died in Tübingen, Germany. He was the son of the mathematician Adolf Kneser and the father of the mathematician Martin Kneser. He assisted Wilhelm Süss in the founding of the Mathematical Research Institute of Oberwolfach and served as the director of the institute from 1958 to 1959.
He was an editor of Mathematische Zeitschrift, Archiv der Mathematik and Aequationes Mathematicae.
Kneser formulated the problem of non-integer iteration of functions and proved the existence of the entire Abel function of the exponential; on the basis of this Abel function, he constructed the functional square root of the exponential function as a half-iteration of the exponential, i.e. a function f such that f(f(x)) = exp(x).
Kneser was a student of David Hilbert. He was an advisor of a number of notable mathematicians, including Reinhold Baer.
Hellmuth Kneser was a member of the NSDAP and also the SA. In July 1934 he wrote to Ludwig Bieberbach a short note supporting his anti-semitic views and stating: "May God grant German science a unitary, powerful and continued political position."
Selected publications
Funktionentheorie. Studia Mathematica, Göttingen, 1958; 2nd edition 1966.
Gerhard Betsch, Karl H. Hofmann (eds.): Gesammelte Abhandlungen, De Gruyter 2005; 2011 pbk reprint
References
External links
1898 births
1973 deaths
People from Tartu
People from Kreis Dorpat
Baltic-German people
Emigrants from the Russian Empire to Germany
Nazi Party members
20th-century German mathematicians
Group theorists
Topologists
Academic staff of the University of Greifswald
Sturmabteilung personnel
|
https://en.wikipedia.org/wiki/Oberwolfach%20Research%20Institute%20for%20Mathematics
|
The Oberwolfach Research Institute for Mathematics () is a center for mathematical research in Oberwolfach, Germany. It was founded by mathematician Wilhelm Süss in 1944.
It organizes weekly workshops on diverse topics where mathematicians and scientists from all over the world come to do collaborative research.
The Institute is a member of the Leibniz Association, funded mainly by the German Federal Ministry of Education and Research and by the state of Baden-Württemberg. It also receives substantial funding from the Friends of Oberwolfach foundation, from the Oberwolfach Foundation and from numerous donors.
History
The Oberwolfach Research Institute for Mathematics (MFO) was founded as the Reich Institute of Mathematics (German: Reichsinstitut für Mathematik) on 1 September 1944. It was one of several research institutes founded by the Nazis in order to further the German war effort, which at that time was clearly failing. The location was selected to be remote so as not to be a target for Allied bombing. Originally it was housed in a building called the Lorenzenhof, a large Black Forest hunting lodge. After the war, Süss, a member of the Nazi party, was suspended for two months in 1945 as part of the country's denazification efforts, but thereafter remained head of the institute. Though the institute lost its government funding, Süss was able to keep it going with other grants, and contributed to rebuilding mathematics in Germany following the fall of the Third Reich by hosting international mathematical conferences. Some of these were organised by Reinhold Baer, a mathematician who was expelled from the University of Halle in 1933 for being Jewish, but later returned to Germany in 1956 at the University of Frankfurt.
After Süss's death in 1958, Hellmuth Kneser was briefly director before Theodor Schneider permanently took over in the role in 1959. In that year, he and others formed the mathematical society Gesellschaft für Mathematische Forschung e. V. in order to run the MFO.
On 10 October 1967, the guest house of the Oberwolfach Research Institute for Mathematics, a gift from the Volkswagen Foundation, was inaugurated. On 13 June 1975, the library and meeting building of the MFO was inaugurated, replacing the old castle. This new building was also a gift from the Volkswagen Foundation.
On 26 May 1989, an extension to the guest building at the MFO was inaugurated.
In 1995, the MFO established the research program "Research in Pairs".
On January 1, 2005, Oberwolfach Research Institute for Mathematics became a member of the Leibniz Association. From 2005 to 2010, there was a general restoration of the guest house and the library building at the MFO.
Post-doctoral program "Oberwolfach Leibniz Fellows" was established in 2007. On May 5, of the same year, an extension to the library was inaugurated, the extension was a gift from the Klaus Tschira Stiftung a
|
https://en.wikipedia.org/wiki/Wilhelm%20S%C3%BCss
|
Wilhelm Süss (7 March 1895 – 21 May 1958) was a German mathematician. He was founder and first director of the Oberwolfach Research Institute for Mathematics.
Biography
He was born in Frankfurt, Germany, and died in Freiburg im Breisgau, Germany.
Süss earned a Ph.D. degree in 1922 from Goethe University Frankfurt, for a thesis written under the direction of Ludwig Bieberbach. In 1928, he took a lecturing position at the University of Greifswald, and in 1934 he became a Professor at the University of Freiburg.
Wilhelm Süss was a member of the Nazi Party and the National Socialist German Lecturers League; he joined the Stahlhelm to avoid being automatically enrolled in the Sturmabteilung, but later he, along with all Stahlhelm members, was absorbed into the Sturmabteilung. The extent to which he worked with the Nazis or only cooperated as little as possible is a matter of debate among historians.
In 1936–1940, he was an editor of the journal Deutsche Mathematik.
References
External links
Nazi Party members
Goethe University Frankfurt alumni
Academic staff of the University of Freiburg
Academic staff of the University of Greifswald
|
https://en.wikipedia.org/wiki/Recurring
|
Recurring means occurring repeatedly and can refer to several different things:
Mathematics and finance
Recurring expense, an ongoing (continual) expenditure
Repeating decimal, or recurring decimal, a real number in the decimal numeral system in which a sequence of digits repeats infinitely
Curiously recurring template pattern (CRTP), a software design pattern
Processes
Recursion, the process of repeating items in a self-similar way
Recurring dream, a dream that someone repeatedly experiences over an extended period
Television
Recurring character, a character, usually on a television series, that appears from time to time and may grow into a larger role
Recurring status, condition whereby a soap opera actor may be used for extended period without being under contract
Other uses
Recurring (album), a 1991 album by the British psychedelic-rock group, Spacemen 3
See also
|
https://en.wikipedia.org/wiki/Polar%20decomposition
|
In mathematics, the polar decomposition of a square real or complex matrix is a factorization of the form , where is a unitary matrix and is a positive semi-definite Hermitian matrix ( is an orthogonal matrix and is a positive semi-definite symmetric matrix in the real case), both square and of the same size.
Intuitively, if a real matrix is interpreted as a linear transformation of -dimensional space , the polar decomposition separates it into a rotation or reflection of , and a scaling of the space along a set of orthogonal axes.
The polar decomposition of a square matrix always exists. If is invertible, the decomposition is unique, and the factor will be positive-definite. In that case, can be written uniquely in the form , where is unitary and is the unique self-adjoint logarithm of the matrix . This decomposition is useful in computing the fundamental group of (matrix) Lie groups.
The polar decomposition can also be defined as where is a symmetric positive-definite matrix with the same eigenvalues as but different eigenvectors.
The polar decomposition of a matrix can be seen as the matrix analog of the polar form of a complex number as , where is its absolute value (a non-negative real number), and is a complex number with unit norm (an element of the circle group).
The definition may be extended to rectangular matrices by requiring to be a semi-unitary matrix and to be a positive-semidefinite Hermitian matrix. The decomposition always exists and is always unique. The matrix is unique if and only if has full rank.
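A minimal NumPy sketch (illustrative, not part of the article) computes the right polar decomposition A = UP from a singular value decomposition A = WΣV*: one may take U = WV* and P = VΣV*.

import numpy as np

# Illustrative sketch: right polar decomposition A = U P obtained from the SVD A = W S V*.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
W, S, Vh = np.linalg.svd(A)
U = W @ Vh                          # unitary factor
P = Vh.conj().T @ np.diag(S) @ Vh   # positive semi-definite Hermitian factor
assert np.allclose(A, U @ P)
assert np.allclose(U.conj().T @ U, np.eye(4))   # U is unitary
assert np.allclose(P, P.conj().T)               # P is Hermitian

(SciPy users can obtain the same factors directly from scipy.linalg.polar.)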
Intuitive interpretation
A real square matrix can be interpreted as the linear transformation of that takes a column vector to . Then, in the polar decomposition , the factor is an real orthonormal matrix. The polar decomposition then can be seen as expressing the linear transformation defined by into a scaling of the space along each eigenvector of by a scale factor (the action of ), followed by a rotation of (the action of ).
Alternatively, the decomposition expresses the transformation defined by as a rotation () followed by a scaling () along certain orthogonal directions. The scale factors are the same, but the directions are different.
Properties
The polar decomposition of the complex conjugate of is given by . Note that gives the corresponding polar decomposition of the determinant of A, since and . In particular, if has determinant 1 then both and have determinant 1.
The positive-semidefinite matrix P is always unique, even if A is singular, and is denoted as , where denotes the conjugate transpose of . The uniqueness of P ensures that this expression is well-defined. The uniqueness is guaranteed by the fact that is a positive-semidefinite Hermitian matrix and, therefore, has a unique positive-semidefinite Hermitian square root. If A is invertible, then P is positive-definite, thus also invertible and the matrix U is uniquely determined by
Relation to the SVD
|
https://en.wikipedia.org/wiki/Alfred%20Clebsch
|
Rudolf Friedrich Alfred Clebsch (19 January 1833 – 7 November 1872) was a German mathematician who made important contributions to algebraic geometry and invariant theory. He attended the University of Königsberg and was habilitated at Berlin. He subsequently taught in Berlin and Karlsruhe. His collaboration with Paul Gordan in Giessen led to the introduction of Clebsch–Gordan coefficients for spherical harmonics, which are now widely used in quantum mechanics.
Together with Carl Neumann at Göttingen, he founded the mathematical research journal Mathematische Annalen in 1868.
In 1883, Saint-Venant translated Clebsch's work on elasticity into French and published it as Théorie de l'élasticité des Corps Solides.
Books by A. Clebsch
Vorlesungen über Geometrie (Teubner, Leipzig, 1876-1891) edited by Ferdinand Lindemann.
Theorie der binären algebraischen Formen (Teubner, 1872)
Theorie der Abelschen Functionen with P. Gordan (B. G. Teubner, 1866)
Theorie der Elasticität fester Körper (B. G. Teubner, 1862)
See also
Clebsch graph
Clebsch representation
Clebsch surface
Eigenvalues and eigenvectors
Helmholtz equation
Hyperboloid model
Pentagram map
Quaternary cubic
References
External links
1833 births
1872 deaths
19th-century German mathematicians
Algebraic geometers
Scientists from Königsberg
People from the Province of Prussia
University of Königsberg alumni
Academic staff of the Humboldt University of Berlin
Academic staff of the Karlsruhe Institute of Technology
Academic staff of the University of Giessen
Academic staff of the University of Göttingen
Mathematicians from the Kingdom of Prussia
|
https://en.wikipedia.org/wiki/Sha%20Tin%20Government%20Secondary%20School
|
Coordinates: 22.376369°N, 114.184577°E
Sha Tin Government Secondary School (STGSS; 沙田官立中學) is located in Sha Tin, Hong Kong. It was founded in September 1972 and has now become a full-fledged co-educational grammar school. There are 25 classes with an enrollment of approximately 840 students in the year 2022-23. The current principal is Ms. CHOI Fung-man (蔡鳳雯).
School information
Achievements
STGSS counts 6 winners (PANG Wai Sum Diana 1990, LUK Man Chung 1993, YEUNG Chok Hang 1996, CHAN Ting Ting 2006, LEUNG Ka Wing Connie 2007, LUK Man Ping Maggie 2010) of the prestigious Hong Kong Outstanding Students Awards, ranking 12th (tied with Diocesan Boys' School, St. Paul's Co-educational College and Marymount Secondary School) among all secondary schools in Hong Kong. In 2012, four senior students, Yiu Shing Fung (5C), Tai Tsz Long (6B), Tai Tsz Fung (6B) and Chen Kwan Kin (6C), were awarded the Champion title in the 45th Joint School Science Exhibition Proposal Competition and the Overall Champion in the 45th Joint School Science Exhibition ITC Innovation Award with the theme of "Disaster Counteraction Scientific Innovation".
School organisation
The school is a co-educational secondary school founded by the Hong Kong Government. School policies are basically devised in accordance with the educational ordinances and policies of the Education Bureau. The School Management Committee (SMC) is the top decision-making body. Its chairperson is an official appointed by the Education Bureau. The SMC also includes the school principal, two teacher representatives, two parent representatives, two alumni representatives and two community members. The present chairperson of the SMC is Mr. NG Ka-shing, Principal Education Officer (Curriculum Development) of the Education Bureau. The community members are Professor POON Wai-yin, Isabella and Mrs TONG AU Yin-man. The SMC is responsible for setting the direction of school development and managing the school budgets. The principal, with the help of two assistant principals, is responsible for the daily operation of the school.
Facilities
There are 32 classrooms, 4 science laboratories, 2 computer rooms, and a number of special rooms such as the English Room, Visual Arts Room, Geography Room, and Music Room. All classrooms are air-conditioned and most are equipped with audio-visual facilities. The playground is accessible to all students for sports and leisure in a restricted time. Other facilities include the air-conditioned School Hall, Library, Lecture Theatre, Conference Room, Teachers' Resources Center, Prefects' Room, Broadcasting Room, Social Worker's Room and Student Council Room.
The English room
The English Room is a meeting place specially designated for the English Debating Club and the English Club of the school. National/Regional
|
https://en.wikipedia.org/wiki/R%C3%B3bert%20Szelepcs%C3%A9nyi
|
Róbert Szelepcsényi (; born 19 August 1966, Žilina) is a Slovak computer scientist of Hungarian descent and a member of the Faculty of Mathematics, Physics and Informatics of Comenius University in Bratislava.
His results on the closure of non-deterministic space under complement, independently obtained in 1987 also by Neil Immerman (the result known as the Immerman–Szelepcsényi theorem), brought the Gödel Prize of ACM and EATCS to both of them in 1995.
Scientific articles
Róbert Szelepcsényi: The Method of Forced Enumeration for Nondeterministic Automata. Acta Informatica 26(3): 279-284 (1988)
References
Slovak computer scientists
Hungarian computer scientists
20th-century Hungarian mathematicians
21st-century Hungarian mathematicians
Theoretical computer scientists
Comenius University alumni
Gödel Prize laureates
Hungarians in Slovakia
Slovak people of Hungarian descent
Living people
1966 births
|
https://en.wikipedia.org/wiki/Cole%20Prize
|
The Frank Nelson Cole Prize, or Cole Prize for short, is one of twenty-two prizes awarded to mathematicians by the American Mathematical Society; it is given in two categories, one for an outstanding contribution to algebra and the other for an outstanding contribution to number theory. The prize is named after Frank Nelson Cole, who served the Society for 25 years. The Cole Prize in algebra was funded by Cole himself, from funds given to him as a retirement gift; the prize fund was later augmented by his son, leading to the double award.
The prizes recognize a notable research work in algebra (given every three years) or number theory (given every three years) that has appeared in the last six years. The work must be published in a recognized, peer-reviewed venue. The first award for algebra was made in 1928 to L. E. Dickson, while the first award for number theory was made in 1931 to H. S. Vandiver.
Frank Nelson Cole Prize in Algebra
Frank Nelson Cole Prize in Number Theory
For full citations, see external links.
See also
List of mathematics awards
References
External links
Frank Nelson Cole Prize in Algebra
Frank Nelson Cole Prize in Number Theory
Awards of the American Mathematical Society
Awards established in 1928
Triennial events
1928 establishments in the United States
Algebra
|
https://en.wikipedia.org/wiki/Highly%20cototient%20number
|
In number theory, a branch of mathematics, a highly cototient number is a positive integer k > 1 which has more solutions to the equation x − φ(x) = k than any other integer below k and above 1. Here, φ is Euler's totient function. There are infinitely many solutions to the equation for k = 1, so this value is excluded in the definition. The first few highly cototient numbers are:
2, 4, 8, 23, 35, 47, 59, 63, 83, 89, 113, 119, 167, 209, 269, 299, 329, 389, 419, 509, 629, 659, 779, 839, 1049, 1169, 1259, 1469, 1649, 1679, 1889, ...
Many of the highly cototient numbers are odd. In fact, after 8, all the numbers listed above are odd, and after 167 all the numbers listed above are congruent to 29 modulo 30.
The concept is somewhat analogous to that of highly composite numbers. Just as there are infinitely many highly composite numbers, there are also infinitely many highly cototient numbers. Computations become harder, since integer factorization becomes harder as the numbers get larger.
Example
The cototient of n is defined as n − φ(n), i.e. the number of positive integers less than or equal to n that have at least one prime factor in common with n. For example, the cototient of 6 is 4 since these four positive integers have a prime factor in common with 6: 2, 3, 4, 6. The cototient of 8 is also 4, this time with these integers: 2, 4, 6, 8. There are exactly two numbers, 6 and 8, which have cototient 4. There are fewer numbers which have cototient 2 and cototient 3 (one number in each case), so 4 is a highly cototient number.
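A brute-force Python sketch (illustrative only) reproduces the start of the sequence above by counting, for each k, the solutions n of n − φ(n) = k and recording the values of k that set a new record:

# Illustrative sketch: for k > 1 every solution n of n - phi(n) = k is composite and
# satisfies n <= k^2, so a finite search suffices.
def totients(limit):
    phi = list(range(limit + 1))
    for p in range(2, limit + 1):
        if phi[p] == p:                      # p is prime
            for m in range(p, limit + 1, p):
                phi[m] -= phi[m] // p
    return phi

K_MAX = 200
LIMIT = K_MAX * K_MAX
phi = totients(LIMIT)
counts = {}
for n in range(2, LIMIT + 1):
    counts[n - phi[n]] = counts.get(n - phi[n], 0) + 1

records, best = [], 0
for k in range(2, K_MAX):                    # k = 1 is excluded (infinitely many solutions)
    if counts.get(k, 0) > best:
        best = counts[k]
        records.append(k)
print(records)  # [2, 4, 8, 23, 35, 47, 59, 63, 83, 89, 113, 119, 167]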
Primes
The first few highly cototient numbers which are primes are
2, 23, 47, 59, 83, 89, 113, 167, 269, 389, 419, 509, 659, 839, 1049, 1259, 1889, 2099, 2309, 2729, 3359, 3989, 4289, 4409, 5879, 6089, 6719, 9029, 9239, ...
See also
Highly totient number
References
Integer sequences
|
https://en.wikipedia.org/wiki/Discrete%20symmetry
|
In mathematics and geometry, a discrete symmetry is a symmetry that describes non-continuous changes in a system. For example, a square possesses discrete rotational symmetry, as only rotations by multiples of right angles will preserve the square's original appearance. Discrete symmetries sometimes involve some type of 'swapping', these swaps usually being called reflections or interchanges. In mathematics and theoretical physics, a discrete symmetry is a symmetry under the transformations of a discrete group—e.g. a topological group with a discrete topology whose elements form a finite or a countable set.
One of the most prominent discrete symmetries in physics is parity symmetry. It manifests itself in various elementary physical quantum systems, such as the quantum harmonic oscillator and the electron orbitals of hydrogen-like atoms, by forcing wavefunctions to be even or odd. This in turn gives rise to selection rules that determine which transition lines are visible in atomic absorption spectra.
References
Slavik V. Jablan, Symmetry, Ornament and Modularity, Volume 30 of K & E Series on Knots and Everything, World Scientific, 2002.
Group theory
Theoretical physics
Symmetry
|
https://en.wikipedia.org/wiki/Rooted%20graph
|
In mathematics, and, in particular, in graph theory, a rooted graph is a graph in which one vertex has been distinguished as the root. Both directed and undirected versions of rooted graphs have been studied, and there are also variant definitions that allow multiple roots.
Rooted graphs may also be known (depending on their application) as pointed graphs or flow graphs. In some of the applications of these graphs, there is an additional requirement that the whole graph be reachable from the root vertex.
Variations
In topological graph theory, the notion of a rooted graph may be extended to consider multiple vertices or multiple edges as roots. The former are sometimes called vertex-rooted graphs in order to distinguish them from edge-rooted graphs in this context. Graphs with multiple nodes designated as roots are also of some interest in combinatorics, in the area of random graphs. These graphs are also called multiply rooted graphs.
The terms rooted directed graph or rooted digraph also see variation in definitions. The obvious transplant is to consider a digraph rooted by identifying a particular node as root. However, in computer science, these terms commonly refer to a narrower notion; namely, a rooted directed graph is a digraph with a distinguished node r, such that there is a directed path from r to any node other than r. Authors who give the more general definition may refer to digraphs satisfying this narrower, reachability-based condition as connected rooted digraphs or accessible rooted graphs.
The Art of Computer Programming defines rooted digraphs slightly more broadly, namely, a directed graph is called rooted if it has at least one node that can reach all the other nodes. Knuth notes that the notion thus defined is a sort of intermediate between the notions of strongly connected and connected digraph.
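A short Python sketch (illustrative, not from the article) tests the computer-science condition directly: a digraph is rooted at r exactly when every node is reachable from r by a directed path.

from collections import deque

# Illustrative sketch: breadth-first search from r; the digraph (an adjacency dict)
# is rooted at r when the search visits every node.
def is_rooted_at(graph, r):
    seen, queue = {r}, deque([r])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == set(graph)

g = {"r": ["a", "b"], "a": ["c"], "b": [], "c": []}
print(is_rooted_at(g, "r"))  # True: every node is reachable from r
print(is_rooted_at(g, "a"))  # False: r and b cannot be reached from a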
Applications
Flow graphs
In computer science, rooted graphs in which the root vertex can reach all other vertices are called flow graphs or flowgraphs. Sometimes an additional restriction is added specifying that a flow graph must have a single exit (sink) vertex.
Flow graphs may be viewed as abstractions of flow charts, with the non-structural elements (node contents and types) removed. Perhaps the best known sub-class of flow graphs are control-flow graphs, used in compilers and program analysis. An arbitrary flow graph may be converted to a control-flow graph by performing an edge contraction on every edge that is the only outgoing edge from its source and the only incoming edge into its target. Another type of flow graph commonly used is the call graph, in which nodes correspond to entire subroutines.
The general notion of flow graph has been called program graph, but the same term has also been used to denote only control-flow graphs. Flow graphs have also been called unlabeled flowgraphs and proper flowgraphs. These graphs are sometimes used in software testing.
When required to have a single exit, flow graphs have two properties not shared with directed graphs in genera
|
https://en.wikipedia.org/wiki/Additive%20polynomial
|
In mathematics, the additive polynomials are an important topic in classical algebraic number theory.
Definition
Let k be a field of prime characteristic p. A polynomial P(x) with coefficients in k is called an additive polynomial, or a Frobenius polynomial, if
as polynomials in a and b. It is equivalent to assume that this equality holds for all a and b in some infinite field containing k, such as its algebraic closure.
Occasionally absolutely additive is used for the condition above, and additive is used for the weaker condition that P(a + b) = P(a) + P(b) for all a and b in the field. For infinite fields the conditions are equivalent, but for finite fields they are not, and the weaker condition is the "wrong" one, as it does not behave well. For example, over a field of order q any multiple P of x^q − x will satisfy P(a + b) = P(a) + P(b) for all a and b in the field, but will usually not be (absolutely) additive.
Examples
The polynomial x^p is additive. Indeed, for any a and b in the algebraic closure of k one has by the binomial theorem
Since p is prime, for all n = 1, ..., p−1 the binomial coefficient is divisible by p, which implies that
as polynomials in a and b.
Similarly all the polynomials of the form x^(p^n) are additive, where n is a non-negative integer.
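The argument for x^p can be checked mechanically. The following Python sketch (illustrative only, and assuming the SymPy library for the symbolic part) verifies that p divides the inner binomial coefficients and that (a + b)^p − a^p − b^p vanishes modulo p as a polynomial:

from math import comb
from sympy import symbols, expand, Poly

# Illustrative sketch: in characteristic p the binomial coefficients C(p, k) with
# 0 < k < p are divisible by p, so (a + b)^p = a^p + b^p as a polynomial identity
# over a field of characteristic p; hence x^p is an additive (Frobenius) polynomial.
p = 5
assert all(comb(p, k) % p == 0 for k in range(1, p))

a, b = symbols("a b")
difference = Poly(expand((a + b) ** p - a**p - b**p), a, b)
assert all(c % p == 0 for c in difference.coeffs())   # every coefficient vanishes mod p
print("(a + b)^%d == a^%d + b^%d in characteristic %d" % (p, p, p, p))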
The definition makes sense even if k is a field of characteristic zero, but in this case the only additive polynomials are those of the form ax for some a in k.
The ring of additive polynomials
It is quite easy to prove that any linear combination of the polynomials x^(p^n) with coefficients in k is also an additive polynomial. An interesting question is whether there are other additive polynomials except these linear combinations. The answer is that these are the only ones.
One can check that if P(x) and M(x) are additive polynomials, then so are P(x) + M(x) and P(M(x)). These imply that the additive polynomials form a ring under polynomial addition and composition. This ring is denoted
This ring is not commutative unless k is the prime field F_p = Z/pZ (see modular arithmetic). Indeed, consider the additive polynomials ax and x^p for a coefficient a in k. For them to commute under composition, we must have a^p x^p = a x^p, and hence a^p − a = 0. This is false for a not a root of this equation, that is, for a outside F_p.
The fundamental theorem of additive polynomials
Let P(x) be a polynomial with coefficients in k. If its roots are distinct (that is, P(x) is separable), then P(x) is additive if and only if its set of roots forms a group under the field addition.
See also
Drinfeld module
Additive map
References
David Goss, Basic Structures of Function Field Arithmetic, 1996, Springer, Berlin. .
External links
Algebraic number theory
Modular arithmetic
Polynomials
|
https://en.wikipedia.org/wiki/Functional%20%28mathematics%29
|
In mathematics, a functional is a certain type of function. The exact definition of the term varies depending on the subfield (and sometimes even the author).
In linear algebra, it is synonymous with a linear form, which is a linear mapping from a vector space into its field of scalars (that is, it is an element of the dual space )
In functional analysis and related fields, it refers more generally to a mapping from a space into the field of real or complex numbers. In functional analysis, the term is a synonym of linear form; that is, it is a scalar-valued linear map. Depending on the author, such mappings may or may not be assumed to be linear, or to be defined on the whole space
In computer science, it is synonymous with a higher-order function, which is a function that takes one or more functions as arguments or returns them.
This article is mainly concerned with the second concept, which arose in the early 18th century as part of the calculus of variations. The first concept, which is more modern and abstract, is discussed in detail in a separate article, under the name linear form. The third concept is detailed in the computer science article on higher-order functions.
In the case where the space is a space of functions, the functional is a "function of a function", and some older authors actually define the term "functional" to mean "function of a function".
However, the fact that is a space of functions is not mathematically essential, so this older definition is no longer prevalent.
The term originates from the calculus of variations, where one searches for a function that minimizes (or maximizes) a given functional. A particularly important application in physics is the search for a state of a system that minimizes (or maximizes) the action, or in other words the time integral of the Lagrangian.
Details
Duality
The mapping x ↦ f(x)
is a function, where x is an argument of the function f.
At the same time, the mapping of a function to its value at a point, f ↦ f(x₀),
is a functional; here, x₀ is a parameter.
Provided that is a linear function from a vector space to the underlying scalar field, the above linear maps are dual to each other, and in functional analysis both are called linear functionals.
Definite integral
Integrals such as
form a special class of functionals. They map a function f into a real number, provided that f is real-valued. Examples include
the area underneath the graph of a positive function
norm of a function on a set
the arclength of a curve in 2-dimensional Euclidean space
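A small numerical sketch (illustrative, not part of the article) of such functionals: each takes an entire function as input and returns a single real number.

import math

# Illustrative sketch: integral functionals map a whole function f to one real number.
def integrate(f, a, b, steps=100000):
    # Midpoint-rule approximation of the integral of f over [a, b].
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

def area(f, a, b):
    return integrate(f, a, b)                     # area under the graph of a positive f

def arclength(f, a, b, eps=1e-6):
    slope = lambda x: (f(x + eps) - f(x - eps)) / (2 * eps)
    return integrate(lambda x: math.sqrt(1.0 + slope(x) ** 2), a, b)

print(area(math.sin, 0.0, math.pi))     # about 2.0
print(arclength(lambda x: x, 0.0, 1.0)) # about 1.4142 = sqrt(2)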
Inner product spaces
Given an inner product space and a fixed vector the map defined by is a linear functional on The set of vectors such that is zero is a vector subspace of called the null space or kernel of the functional, or the orthogonal complement of denoted
For example, taking the inner product with a fixed function defines a (linear) functional on the Hilbert space of square integrable functions on
Locality
If
|
https://en.wikipedia.org/wiki/Landscape%20engineering
|
Landscape engineering is the application of mathematics and science to shape land and waterscapes. It can also be described as green engineering, but the design professionals best known for landscape engineering are landscape architects. Landscape engineering is the interdisciplinary application of engineering and other applied sciences to the design and creation of anthropogenic landscapes. It differs from, but embraces traditional reclamation. It includes scientific disciplines: Agronomy, Botany, Ecology, Forestry, Geology, Geochemistry, Hydrogeology, and Wildlife Biology. It also draws upon applied sciences: Agricultural & Horticultural Sciences, Engineering Geomorphology, landscape architecture, and Mining, Geotechnical, and Civil, Agricultural & Irrigation Engineering.
Landscape engineering builds on the engineering strengths of declaring goals, determining initial conditions, iteratively designing, predicting performance based on knowledge of the design, monitoring performance, and adjusting designs to meet the declared goals. It builds on the strengths and history of reclamation practice. Its distinguishing feature is the marriage of landforms, substrates, and vegetation throughout all phases of design and construction, which previously have been kept as separate disciplines.
Though landscape engineering embodies all elements of traditional engineering (planning, investigation, design, construction, operation, assessment, research, management, and training), it is focused on three main areas. The first is closure planning – which includes goal setting and design of the landscape as a whole. The second division is landscape design more focused on the design of individual landforms to reliably meet the goals as set out in the closure planning process. Landscape performance assessment is critical to both of these, and is also important for estimating liability and levels of financial assurance. The iterative process of planning, design, and performance assessment by a multidisciplinary team is the basis of landscape engineering.
Source: McKenna, G.T., 2002. Sustainable mine reclamation and landscape engineering. PhD Thesis, University of Alberta, Edmonton, Canada 661p.
Example
An example of contemporary landscape engineering and natural resources management, related to the Biosphere 2 and Seawater farming projects, is the IBTS Greenhouse, formerly the Forest City designed for the Emirate of Ras al Khaimah. The IBTS rests on a thoroughly integrated design drawing on more than 340 different engineering, science and technology disciplines. It was created for the greening of hot, arid deserts and is optimized for fresh water production from saline or brackish water. The Integrated Biotectural System is based on a wetland, more specifically a mangrove ecosystem, designed for food and fodder production of 80 tons per hectare per year, a practice also called mariculture.
The atmosphere inside the IBTS is turned into a potent water source and harvested with a co
|
https://en.wikipedia.org/wiki/Hans%20Zassenhaus
|
Hans Julius Zassenhaus (28 May 1912 – 21 November 1991) was a German mathematician, known for work in many parts of abstract algebra, and as a pioneer of computer algebra.
Biography
He was born in Koblenz in 1912.
His father was a historian and advocate for Reverence for Life as expressed by Albert Schweitzer. Hans had two brothers, Guenther and Wilfred, and sister Hiltgunt, who wrote an autobiography in 1974. According to her, their father lost his position as school principal due to his philosophy. She wrote:
Hans, my eldest brother, studied mathematics. My brothers Guenther and Wilfred were in medical school. ... only students who participated in Nazi activities would get scholarships. That left us out. Together we made an all-out effort. ... soon our house became a beehive. Day in and day out for the next four years a small army of children of all ages would arrive to be tutored.
At the University of Hamburg Zassenhaus came under the influence of Emil Artin. As he wrote later:
His introductory course in analysis that I attended at the age of 17 converted me from a theoretical physicist to a mathematician.
When just 21, Zassenhaus was studying composition series in group theory. He proved his butterfly lemma, which is used to refine two normal chains into chains with isomorphic factors. Inspired by Artin, Zassenhaus wrote a textbook, Lehrbuch der Gruppentheorie, that was later translated as Theory of Groups.
His thesis was on doubly transitive permutation groups with Frobenius groups as stabilizers. These groups are now called Zassenhaus groups. They have had a deep impact on the classification of finite simple groups.
He obtained his doctorate in June 1934, and took the teachers’ exam the next May. He became a scientific assistant at University of Rostock. In 1936 he became assistant to Artin back in Hamburg, but Artin departed for the USA the following year. Zassenhaus gave his Habilitation in 1938.
According to his sister Hiltgunt, Hans was "called up as a research scientist at a weather station" for his part in the German war effort.
Zassenhaus married Lieselotte Lohmann in 1942. The couple raised three children: Michael (born 1943), Angela (born 1947), and Peter (born 1949). In 1943 Zassenhaus became extraordinary professor. He became managing director of the Hamburg Mathematical Seminar.
After the war, and as a fellow of the British Council, Zassenhaus visited the University of Glasgow in 1948. There he was given an honorary Master of Arts degree. The following year he joined the faculty of McGill University where the endowments of Peter Redpath financed a professorship. He was at McGill for a decade with leaves of absence to the Institute for Advanced Study (1955/6) and California Institute of Technology (1958/9). There he was using computers to advance number theory. In 1959 Zassenhaus began teaching at University of Notre Dame and became director of its computing center in 1964.
Zassenhaus was a Mershon visiting professor at Ohio
|
https://en.wikipedia.org/wiki/Edouard%20Zeckendorf
|
Edouard Zeckendorf (2 May 1901 – 16 May 1983) was a Belgian doctor, army officer and amateur mathematician. In mathematics, he is best known for his work on Fibonacci numbers and in particular for proving Zeckendorf's theorem, though he published over 20 papers, mostly in number theory.
Zeckendorf was born in Liège in 1901. He was the son of Abraham Zeckendorf, Dutch dentist and practicing Jew. In 1925, Zeckendorf graduated as a medical doctor from the University of Liège and joined the Belgian Army medical corps. When Germany invaded Belgium in 1940, Zeckendorf was taken prisoner and remained a prisoner of war until 1945. During this period, he provided medical care to other allied POWs.
Zeckendorf retired from the army in 1957 as a colonel.
References
20th-century Belgian mathematicians
1901 births
1983 deaths
University of Liège alumni
Physicians from Liège
Belgian military personnel of World War II
Belgian prisoners of war in World War II
Amateur mathematicians
Belgian people of Dutch descent
People of Dutch-Jewish descent
Jewish physicians
Belgian Army officers
World War II prisoners of war held by Germany
|
https://en.wikipedia.org/wiki/Zeckendorf%27s%20theorem
|
In mathematics, Zeckendorf's theorem, named after Belgian amateur mathematician Edouard Zeckendorf, is a theorem about the representation of integers as sums of Fibonacci numbers.
Zeckendorf's theorem states that every positive integer can be represented uniquely as the sum of one or more distinct Fibonacci numbers in such a way that the sum does not include any two consecutive Fibonacci numbers. More precisely, if N is any positive integer, there exist positive integers c_i ≥ 2, with c_{i+1} > c_i + 1, such that
N = F_{c_1} + F_{c_2} + ... + F_{c_k},
where F_n is the nth Fibonacci number. Such a sum is called the Zeckendorf representation of N. The Fibonacci coding of N can be derived from its Zeckendorf representation.
For example, the Zeckendorf representation of 64 is
64 = 55 + 8 + 1.
There are other ways of representing 64 as the sum of Fibonacci numbers, for example
64 = 55 + 5 + 3 + 1
64 = 34 + 21 + 8 + 1,
but these are not Zeckendorf representations because 34 and 21 are consecutive Fibonacci numbers, as are 5 and 3.
For any given positive integer, its Zeckendorf representation can be found by using a greedy algorithm, choosing the largest possible Fibonacci number at each stage.
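A minimal Python sketch of this greedy construction (illustrative only; not taken from the article):

```python
def zeckendorf(n):
    """Return the Zeckendorf representation of a positive integer n
    as a list of distinct, non-consecutive Fibonacci numbers."""
    if n <= 0:
        raise ValueError("n must be a positive integer")
    # Build Fibonacci numbers up to n (starting from 1, 2 so terms are distinct).
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    rep, remaining = [], n
    # Greedily take the largest Fibonacci number not exceeding the remainder.
    for f in reversed(fibs):
        if f <= remaining:
            rep.append(f)
            remaining -= f
    return rep

print(zeckendorf(64))   # [55, 8, 1]
print(zeckendorf(100))  # [89, 8, 3]
```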
History
While the theorem is named after Zeckendorf, who published his paper in 1972, the same result had been published 20 years earlier by Gerrit Lekkerkerker. As such, the theorem is an example of Stigler's law of eponymy.
Proof
Zeckendorf's theorem has two parts:
Existence: every positive integer has a Zeckendorf representation.
Uniqueness: no positive integer has two different Zeckendorf representations.
The first part of Zeckendorf's theorem (existence) can be proven by induction. For n = 1, 2, 3 it is clearly true (as these are Fibonacci numbers), and for n = 4 we have 4 = 3 + 1. If n is a Fibonacci number then there is nothing to prove. Otherwise there exists j such that F_j < n < F_{j+1}. Now suppose each positive integer a < n has a Zeckendorf representation (induction hypothesis) and consider a = n − F_j. Since a < n, a has a Zeckendorf representation by the induction hypothesis. At the same time, a = n − F_j < F_{j+1} − F_j = F_{j−1} (we apply the definition of Fibonacci number in the last equality), so the Zeckendorf representation of a does not contain F_{j−1}, and hence also does not contain F_j. As a result, n can be represented as the sum of F_j and the Zeckendorf representation of a, such that the Fibonacci numbers involved in the sum are distinct.
The second part of Zeckendorf's theorem (uniqueness) requires the following lemma:
Lemma: The sum of any non-empty set of distinct, non-consecutive Fibonacci numbers whose largest member is F_j is strictly less than the next larger Fibonacci number F_{j+1}.
The lemma can be proven by induction on j.
Now take two non-empty sets S and T of distinct non-consecutive Fibonacci numbers which have the same sum. Consider sets S′ and T′ which are equal to S and T from which the common elements have been removed (i.e. S′ = S \ T and T′ = T \ S). Since S and T had equal sum, and we have removed exactly the elements of S ∩ T from both sets, S′ and T′ must have the same sum as well.
Now we will show by contradiction that at least one of S′ and T′ is empty. Assume the contrary, i.
|
https://en.wikipedia.org/wiki/Covering%20group
|
In mathematics, a covering group of a topological group H is a covering space G of H such that G is a topological group and the covering map p : G → H is a continuous group homomorphism. The map p is called the covering homomorphism. A frequently occurring case is a double covering group, a topological double cover in which the kernel of the covering homomorphism has order 2; examples include the spin groups, pin groups, and metaplectic groups.
Roughly explained, saying that for example the metaplectic group Mp2n is a double cover of the symplectic group Sp2n means that there are always two elements in the metaplectic group representing one element in the symplectic group.
Properties
Let G be a covering group of H. The kernel K of the covering homomorphism is just the fiber over the identity in H and is a discrete normal subgroup of G. The kernel K is closed in G if and only if G is Hausdorff (and if and only if H is Hausdorff). Going in the other direction, if G is any topological group and K is a discrete normal subgroup of G then the quotient map p : G → G/K is a covering homomorphism.
If G is connected then K, being a discrete normal subgroup, necessarily lies in the center of G and is therefore abelian. In this case, the center of H = G/K is given by Z(H) = Z(G)/K.
As with all covering spaces, the fundamental group of G injects into the fundamental group of H. Since the fundamental group of a topological group is always abelian, every covering group is a normal covering space. In particular, if G is path-connected then the quotient group π1(H)/p∗(π1(G)) is isomorphic to K. The group K acts simply transitively on the fibers (which are just left cosets) by right multiplication. The group G is then a principal K-bundle over H.
If G is a covering group of H then the groups G and H are locally isomorphic. Moreover, given any two connected locally isomorphic groups H1 and H2, there exists a topological group G with discrete normal subgroups K1 and K2 such that H1 is isomorphic to G/K1 and H2 is isomorphic to G/K2.
Group structure on a covering space
Let H be a topological group and let G be a covering space of H. If G and H are both path-connected and locally path-connected, then for any choice of element e* in the fiber over e ∈ H, there exists a unique topological group structure on G, with e* as the identity, for which the covering map p : G → H is a homomorphism.
The construction is as follows. Let a and b be elements of G and let f and g be paths in G starting at e* and terminating at a and b respectively. Define a path h : I → H by h(t) = p(f(t))p(g(t)). By the path-lifting property of covering spaces there is a unique lift of h to G with initial point e*. The product ab is defined as the endpoint of this path. By construction we have p(ab) = p(a)p(b). One must show that this definition is independent of the choice of paths f and g, and also that the group operations are continuous.
Alternatively, the group law on G can be constructed by lifting the group law H × H → H to G, using the lifting property
|
https://en.wikipedia.org/wiki/Orthogonal%20transformation
|
In linear algebra, an orthogonal transformation is a linear transformation T : V → V on a real inner product space V that preserves the inner product. That is, for each pair u, v of elements of V, we have
⟨u, v⟩ = ⟨Tu, Tv⟩.
Since the lengths of vectors and the angles between them are defined through the inner product, orthogonal transformations preserve lengths of vectors and angles between them. In particular, orthogonal transformations map orthonormal bases to orthonormal bases.
Orthogonal transformations are injective: if Tv = 0 then 0 = ⟨Tv, Tv⟩ = ⟨v, v⟩, hence v = 0, so the kernel of T is trivial.
Orthogonal transformations in two- or three-dimensional Euclidean space are stiff rotations, reflections, or combinations of a rotation and a reflection (also known as improper rotations). Reflections are transformations that reverse the direction front to back, orthogonal to the mirror plane, like (real-world) mirrors do. The matrices corresponding to proper rotations (without reflection) have a determinant of +1. Transformations with reflection are represented by matrices with a determinant of −1. This allows the concept of rotation and reflection to be generalized to higher dimensions.
In finite-dimensional spaces, the matrix representation (with respect to an orthonormal basis) of an orthogonal transformation is an orthogonal matrix. Its rows are mutually orthogonal vectors with unit norm, so that the rows constitute an orthonormal basis of V. The columns of the matrix form another orthonormal basis of V.
If an orthogonal transformation is invertible (which is always the case when V is finite-dimensional) then its inverse is another orthogonal transformation. Its matrix representation is the transpose of the matrix representation of the original transformation.
Examples
Consider the inner-product space R² with the standard Euclidean inner product and standard basis. Then, the matrix transformation
T = [[cos θ, −sin θ], [sin θ, cos θ]], θ ∈ [0, 2π),
is orthogonal. To see this, consider
Te₁ = (cos θ, sin θ), Te₂ = (−sin θ, cos θ).
Then,
⟨Te₁, Te₁⟩ = cos²θ + sin²θ = 1, ⟨Te₂, Te₂⟩ = 1, ⟨Te₁, Te₂⟩ = −cos θ sin θ + sin θ cos θ = 0,
so T preserves the inner products of the basis vectors, and hence, by linearity, of all vectors.
The previous example can be extended to construct all orthogonal transformations. For example, the following matrices define orthogonal transformations on R²: the rotations [[cos θ, −sin θ], [sin θ, cos θ]] and the reflections [[cos θ, sin θ], [sin θ, −cos θ]].
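As a quick numerical illustration of these properties (a sketch assuming the standard rotation and reflection matrices above; not part of the original article):

```python
import numpy as np

theta = 0.7
# A rotation (determinant +1) and a reflection (determinant -1) in the plane.
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[np.cos(theta),  np.sin(theta)],
                       [np.sin(theta), -np.cos(theta)]])

for name, T in [("rotation", rotation), ("reflection", reflection)]:
    # Orthogonality: T^T T = I, i.e. the rows and columns are orthonormal.
    print(name, np.allclose(T.T @ T, np.eye(2)), round(np.linalg.det(T), 6))

# Inner products (and hence lengths and angles) are preserved.
u, v = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
print(np.isclose(u @ v, (rotation @ u) @ (rotation @ v)))
```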
See also
Improper rotation
Linear transformation
Orthogonal matrix
Unitary transformation
References
Linear algebra
|
https://en.wikipedia.org/wiki/Low-probability-of-intercept%20radar
|
A low-probability-of-intercept radar (LPIR) is a radar employing measures to avoid detection by passive radar detection equipment (such as a radar warning receiver (RWR), or electronic support receiver) while it is searching for a target or engaged in target tracking. This characteristic is desirable in a radar because it allows finding and tracking an opponent without alerting them to the radar's presence. This also protects the radar installation from anti-radiation missiles (ARMs).
LPI measures include:
Power management and high duty cycle, meaning the transmitter is on most of the time (long integration times)
Wide bandwidth (or Ultra-wideband)
Frequency Agility, and frequency selection
Advanced/irregular scan patterns
Coded pulses (coherent detection)
High processing gain
Low sidelobe antennas
Rationale
Radar systems work by sending out a signal and then listening for its echo off distant objects. Each of these paths, to and from the target, is subject to the inverse square law of propagation in both the transmitted signal and the signal reflected back. That means that a radar's received energy drops with the fourth power of the distance, which is why radar systems require high powers, often in the megawatt range, to be effective at long range.
The radar signal being sent out is a simple radio signal, and can be received with a simple radio receiver. Military aircraft and ships have defensive receivers, called radar warning receivers (RWR), which detect when an enemy radar beam is on them, thus revealing the position of the enemy. Unlike the radar unit, which must send the pulse out and then receive its reflection, the target's receiver does not need the reflection and thus the signal drops off only as the square of distance. This means that the receiver is always at an advantage [neglecting disparity in antenna size] over the radar in terms of range - it will always be able to detect the signal long before the radar can see the target's echo. Since the position of the radar is extremely useful information in an attack on that platform, this means that radars generally must be turned off for lengthy periods if they are subject to attack; this is common on ships, for instance.
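A small illustrative calculation of this scaling (all parameters are made-up, lumped constants, not real radar specifications):

```python
import numpy as np

P_t = 1e6        # transmitted power in W (assumed)
G = 1e3          # antenna gains and other factors lumped into one constant (assumed)
sigma = 10.0     # target radar cross-section in m^2 (assumed)

ranges = np.array([10e3, 50e3, 100e3, 200e3])     # metres
echo = P_t * G * sigma / ranges**4                # two-way path: falls as 1/R^4
intercept = P_t * G / ranges**2                   # one-way path: falls as 1/R^2

for R, pe, pi in zip(ranges, echo, intercept):
    print(f"R = {R/1e3:6.0f} km   echo ~ {pe:.3e}   intercepted ~ {pi:.3e}")
# Doubling the range cuts the echo by a factor of 16 but the intercepted signal
# only by a factor of 4, which is the receiver's fundamental range advantage.
```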
Unlike the radar, which knows in which direction it is sending its signal, the receiver simply gets a pulse of energy and has to interpret it. Since the radio spectrum is filled with noise, the receiver's signal is integrated over a short period of time, making periodic sources like a radar add up and stand out over the random background. The rough direction can be calculated using a rotating antenna, or similar passive array using phase or amplitude comparison. Typically RWRs store the detected pulses for a short period of time, and compare their broadcast frequency and pulse repetition frequency against a database of known radars. The direction to the source is normally combined with symbology indicating the likely purpose of the radar – Airborne early warning an
|
https://en.wikipedia.org/wiki/Character%20group
|
In mathematics, a character group is the group of representations of a group by complex-valued functions. These functions can be thought of as one-dimensional matrix representations and so are special cases of the group characters that arise in the related context of character theory. Whenever a group is represented by matrices, the function defined by the trace of the matrices is called a character; however, these traces do not in general form a group. Some important properties of these one-dimensional characters apply to characters in general:
Characters are invariant on conjugacy classes.
The characters of irreducible representations are orthogonal.
The primary importance of the character group for finite abelian groups is in number theory, where it is used to construct Dirichlet characters. The character group of the cyclic group also appears in the theory of the discrete Fourier transform. For locally compact abelian groups, the character group (with an assumption of continuity) is central to Fourier analysis.
Preliminaries
Let G be an abelian group. A function f mapping the group to the non-zero complex numbers is called a character of G if it is a group homomorphism from G to C^×, that is, if f(g₁g₂) = f(g₁) f(g₂) for all g₁, g₂ ∈ G.
If f is a character of a finite group G, then each function value f(g) is a root of unity, since for each g ∈ G there exists a positive integer k such that g^k = e, and hence f(g)^k = f(g^k) = f(e) = 1.
Each character f is constant on conjugacy classes of G, that is, f(hgh−1) = f(g). For this reason, a character is sometimes called a class function.
A finite abelian group of order n has exactly n distinct characters. These are denoted by f1, ..., fn. The function f1 is the trivial representation, which is given by f1(g) = 1 for all g ∈ G. It is called the principal character of G; the others are called the non-principal characters.
Definition
If G is an abelian group, then the set of characters fk forms an abelian group under pointwise multiplication. That is, the product of characters fj and fk is defined by (fj fk)(g) = fj(g) fk(g) for all g ∈ G. This group is the character group of G and is sometimes denoted as Ĝ. The identity element of Ĝ is the principal character f1, and the inverse of a character fk is its reciprocal 1/fk. If G is finite of order n, then Ĝ is also of order n. In this case, since |fk(g)| = 1 for all g ∈ G, the inverse of a character is equal to its complex conjugate.
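A short Python sketch (illustrative; the choice of the group Z/nZ and the function names are assumptions, not taken from the article) showing the characters of a cyclic group, their homomorphism property, and their orthogonality:

```python
import cmath

def characters_of_Zn(n):
    """Return the n characters of the cyclic group Z/nZ as Python functions.
    The k-th character sends the class of g to exp(2*pi*i*k*g/n)."""
    return [lambda g, k=k: cmath.exp(2j * cmath.pi * k * g / n) for k in range(n)]

n = 6
chars = characters_of_Zn(n)

# Homomorphism property: f(a + b) = f(a) * f(b) for every character.
f = chars[2]
print(cmath.isclose(f(3 + 4), f(3) * f(4)))

# Orthogonality: sum over the group of f_j(g) * conj(f_k(g)) is n if j == k, else 0.
for j in (0, 1, 2):
    for k in (0, 1, 2):
        s = sum(chars[j](g) * chars[k](g).conjugate() for g in range(n))
        print(j, k, round(s.real, 9), round(s.imag, 9))
```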
Alternative definition
There is another definition of character group which uses the circle group U(1) as the target instead of just the non-zero complex numbers. This is useful while studying complex tori, because the character group of the lattice in a complex torus is canonically isomorphic to the dual torus via the Appell–Humbert theorem. We can express explicit elements in the character group as follows: recall that elements in U(1) can be expressed as e^(2πix) for x ∈ R. If we consider the lattice as a subgroup of the underlying real vector space of the torus, then a homomorphism from the lattice to U(1) can be factored as a map through R followed by the exponential map; this follows from elementary properties of homomorphisms and gives the desired factorization.
|
https://en.wikipedia.org/wiki/Mamadou%20Diallo%20%28footballer%2C%20born%201982%29
|
Mamadou Diallo (born 17 April 1982) is a Malian former professional footballer who played as a striker. He spent most of his professional career in France.
Career statistics
Scores and results list Mali's goal tally first, score column indicates score after each Diallo goal.
References
External links
Mamadou Diallo Interview
Living people
1982 births
Footballers from Bamako
Men's association football forwards
Malian men's footballers
JS Centre Salif Keita players
USM Alger players
FC Nantes players
Qatar SC players
Al Jazira Club players
Le Havre AC players
CS Sedan Ardennes players
Stade Lavallois players
Royale Union Tubize-Braine players
Algerian Ligue Professionnelle 1 players
Ligue 1 players
Ligue 2 players
UAE Pro League players
Qatar Stars League players
Challenger Pro League players
Malian expatriate men's footballers
Mali men's international footballers
2008 Africa Cup of Nations players
2010 Africa Cup of Nations players
Expatriate men's footballers in Belgium
Expatriate men's footballers in France
Expatriate men's footballers in Qatar
Expatriate men's footballers in the United Arab Emirates
Footballers at the 2004 Summer Olympics
Olympic footballers for Mali
Malian expatriate sportspeople in Algeria
Expatriate men's footballers in Algeria
21st-century Malian people
|
https://en.wikipedia.org/wiki/Long%20tail
|
In statistics and business, a long tail of some distributions of numbers is the portion of the distribution having many occurrences far from the "head" or central part of the distribution. The distribution could involve popularities, random numbers of occurrences of events with various probabilities, etc. The term is often used loosely, with no definition or an arbitrary definition, but precise definitions are possible.
In statistics, the term long-tailed distribution has a narrow technical meaning, and is a subtype of heavy-tailed distribution. Intuitively, a distribution is (right) long-tailed if, for any fixed amount, when a quantity exceeds a high level, it almost certainly exceeds it by at least that amount: large quantities are probably even larger. Note that there is no sense of the "long tail" of a distribution, but only the property of a distribution being long-tailed.
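A minimal numerical sketch of this defining property, comparing a Pareto (power-law) tail with an exponential tail (the parameters are arbitrary choices for illustration):

```python
import numpy as np

# Long-tailed means P(X > x + t | X > x) -> 1 as x grows, for any fixed t.
def pareto_sf(x, alpha=1.5, xm=1.0):
    return (xm / x) ** alpha          # survival function of a Pareto tail

def exp_sf(x, lam=1.0):
    return np.exp(-lam * x)           # survival function of an exponential tail

t = 10.0
for x in [10.0, 50.0, 100.0, 500.0]:
    pareto_ratio = pareto_sf(x + t) / pareto_sf(x)   # tends to 1: long-tailed
    exp_ratio = exp_sf(x + t) / exp_sf(x)            # stays at exp(-t): not long-tailed
    print(f"x={x:6.0f}  Pareto: {pareto_ratio:.4f}  Exponential: {exp_ratio:.6f}")
```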
In business, the term long tail is applied to rank-size distributions or rank-frequency distributions (primarily of popularity), which often form power laws and are thus long-tailed distributions in the statistical sense. This is used to describe the retailing strategy of selling many unique items with relatively small quantities sold of each (the "long tail")—usually in addition to selling fewer popular items in large quantities (the "head"). Sometimes an intermediate category is also included, variously called the body, belly, torso, or middle. The specific cutoff of what part of a distribution is the "long tail" is often arbitrary, but in some cases may be specified objectively; see segmentation of rank-size distributions.
The long tail concept has found some ground for application, research, and experimentation. It is a term used in online business, mass media, micro-finance (Grameen Bank, for example), user-driven innovation (Eric von Hippel), knowledge management, and social network mechanisms (e.g. crowdsourcing, crowdcasting, peer-to-peer), economic models, marketing (viral marketing), and IT Security threat hunting within a SOC (Information security operations center).
History
Frequency distributions with long tails have been studied by statisticians since at least 1946. The term has also been used in the finance and insurance business for many years. The work of Benoît Mandelbrot in the 1950s and later has led to him being referred to as "the father of long tails".
The long tail was popularized by Chris Anderson in an October 2004 Wired magazine article, in which he mentioned Amazon.com, Apple and Yahoo! as examples of businesses applying this strategy. Anderson elaborated the concept in his book The Long Tail: Why the Future of Business Is Selling Less of More.
Business
The distribution and inventory costs of businesses successfully applying a long tail strategy allow them to realize significant profit out of selling small volumes of hard-to-find items to many customers instead of only selling large volumes of a reduced number of popular items. The total s
|
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Mathematics
|
The Max Planck Institute for Mathematics (, MPIM) is a prestigious research institute located in Bonn, Germany. It is named in honor of the German physicist Max Planck
and forms part of the Max Planck Society (Max-Planck-Gesellschaft), an association of 84 institutes engaging in fundamental research in the arts and the sciences. The MPIM is the only Max Planck institute specializing in pure mathematics.
The Institute was founded by Friedrich Hirzebruch in 1980, having emerged from the collaborative research center "Theoretical Mathematics" (Sonderforschungsbereich "Theoretische Mathematik"). Hirzebruch shaped the institute as its director until his retirement in 1995. Currently, the institute is managed by a board of five directors consisting of Peter Teichner (managing director), Werner Ballmann, Gerd Faltings, Peter Scholze, and Don Zagier. Friedrich Hirzebruch and Yuri Manin were, and Günter Harder is, acting as emeriti.
Research
The Max Planck Institute for Mathematics offers mathematicians from around the world the opportunity to visit Bonn and engage in sabbatical work lasting from weeks to several months.
This guest program distinguishes the MPIM from other Max Planck institutes, and results in only a limited number of permanent positions and the absence of separate departments within the institute.
The research of the members and guests of the institute can be classified into the following areas:
Algebraic Geometry and Complex Geometry
Algebraic Groups
Algebraic Topology
Arithmetic Geometry
Differential Geometry and Topology
Dynamical Systems
Global Analysis
Mathematical Physics
Noncommutative Geometry
Number Theory
Representation Theory
References
External links
Homepage
Research institutes established in 1980
Mathematical institutes
Mathematics
Max-Planck-Institut fur Mathematik
Mathematics in Germany
|
https://en.wikipedia.org/wiki/Table%20of%20Lie%20groups
|
This article gives a table of some common Lie groups and their associated Lie algebras.
The following are noted: the topological properties of the group (dimension; connectedness; compactness; the nature of the fundamental group; and whether or not they are simply connected) as well as their algebraic properties (abelian; simple; semisimple).
For more examples of Lie groups and other related topics, see the list of simple Lie groups; the Bianchi classification of groups of up to three dimensions; the classification of low-dimensional real Lie algebras for up to four dimensions; and the list of Lie group topics.
Real Lie groups and their algebras
Column legend
Cpt: Is this group G compact? (Yes or No)
: Gives the group of components of G. The order of the component group gives the number of connected components. The group is connected if and only if the component group is trivial (denoted by 0).
: Gives the fundamental group of G whenever G is connected. The group is simply connected if and only if the fundamental group is trivial (denoted by 0).
UC: If G is not simply connected, gives the universal cover of G.
Real Lie algebras
Complex Lie groups and their algebras
Note that a "complex Lie group" is defined as a complex analytic manifold that is also a group whose multiplication and inversion are each given by a holomorphic map. The dimensions in the table below are dimensions over C. Note that every complex Lie group/algebra can also be viewed as a real Lie group/algebra of twice the dimension.
Complex Lie algebras
The dimensions given are dimensions over C. Note that every complex Lie algebra can also be viewed as a real Lie algebra of twice the dimension.
The Lie algebra of affine transformations of dimension two, in fact, exists for any field. An instance has already been listed in the first table for real Lie algebras.
See also
Classification of low-dimensional real Lie algebras
Simple Lie group#Full classification
References
Lie groups
Lie algebras
|
https://en.wikipedia.org/wiki/Gaussian%20noise
|
In signal processing theory, Gaussian noise, named after Carl Friedrich Gauss, is a kind of signal noise that has a probability density function (pdf) equal to that of the normal distribution (which is also known as the Gaussian distribution). In other words, the values that the noise can take are Gaussian-distributed.
The probability density function p of a Gaussian random variable z is given by:
p_G(z) = (1 / (σ √(2π))) e^(−(z − μ)² / (2σ²)),
where z represents the grey level, μ the mean grey value and σ its standard deviation.
A special case is white Gaussian noise, in which the values at any pair of times are identically distributed and statistically independent (and hence uncorrelated). In communication channel testing and modelling, Gaussian noise is used as additive white noise to generate additive white Gaussian noise.
In telecommunications and computer networking, communication channels can be affected by wideband Gaussian noise coming from many natural sources, such as the thermal vibrations of atoms in conductors (referred to as thermal noise or Johnson–Nyquist noise), shot noise, black-body radiation from the earth and other warm objects, and from celestial sources such as the Sun.
Gaussian noise in digital images
Principal sources of Gaussian noise in digital images arise during acquisition e.g. sensor noise caused by poor illumination and/or high temperature, and/or transmission e.g. electronic circuit noise. In digital image processing Gaussian noise can be reduced using a spatial filter, though when smoothing an image, an undesirable outcome may result in the blurring of fine-scaled image edges and details because they also correspond to blocked high frequencies. Conventional spatial filtering techniques for noise removal include: mean (convolution) filtering, median filtering and Gaussian smoothing.
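A minimal sketch of adding Gaussian noise to a signal and smoothing it with a mean filter (the toy "image" and all parameters are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "image": a smooth ramp followed by a sharp edge.
image = np.concatenate([np.linspace(0, 50, 64), np.full(64, 200.0)])

# Additive Gaussian noise with mean 0 and standard deviation 10.
noisy = image + rng.normal(loc=0.0, scale=10.0, size=image.shape)

# A simple 1-D mean (box) filter as a stand-in for spatial smoothing.
kernel = np.ones(5) / 5.0
smoothed = np.convolve(noisy, kernel, mode="same")

print("noise level:", np.std(noisy - image).round(2))
print("residual after smoothing:", np.std(smoothed - image).round(2))
```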
See also
Gaussian process
Gaussian smoothing
References
Stochastic processes
Normal distribution
Acoustics
|
https://en.wikipedia.org/wiki/System%20of%20imprimitivity
|
The concept of system of imprimitivity is used in mathematics, particularly in algebra and analysis, both within the context of the theory of group representations. It was used by George Mackey as the basis for his theory of induced unitary representations of locally compact groups.
The simplest case, and the context in which the idea was first noticed, is that of finite groups (see primitive permutation group). Consider a group G and subgroups H and K, with K contained in H. Then the left cosets of H in G are each the union of left cosets of K. Not only that, but translation (on one side) by any element g of G respects this decomposition. The connection with induced representations is that the permutation representation on cosets is the special case of induced representation, in which a representation is induced from a trivial representation. The structure, combinatorial in this case, respected by translation shows that either K is a maximal subgroup of G, or there is a system of imprimitivity (roughly, a lack of full "mixing"). In order to generalise this to other cases, the concept is re-expressed: first in terms of functions on G constant on K-cosets, and then in terms of projection operators (for example the averaging over K-cosets of elements of the group algebra).
Mackey also used the idea for his explication of quantization theory based on preservation of relativity groups acting on configuration space. This generalized work of Eugene Wigner and others and is often considered to be one of the pioneering ideas in canonical quantization.
Example
To motivate the general definitions, a definition is first formulated, in the case of finite groups and their representations on finite-dimensional vector spaces.
Suppose G is a finite group and U is a representation of G on a finite-dimensional complex vector space H. The action of G on elements of H induces an action of G on the vector subspaces W of H in this way: U_g W = {U_g w : w ∈ W}.
If X is a set of subspaces of H such that
the elements of X are permuted by the action of G on subspaces and
H is the (internal) algebraic direct sum of the elements of X, i.e., H = ⊕_{W ∈ X} W,
Then (U,X) is a system of imprimitivity for G.
Two assertions must hold in the definition above:
the spaces W for W ∈ X must span H, and
the spaces W ∈ X must be linearly independent, that is,
Σ_{W ∈ X} c_W w_W = 0 (with w_W ∈ W)
holds only when all the coefficients c_W are zero.
If the action of G on the elements of X is transitive, then we say this is a transitive system of imprimitivity.
Suppose G is a finite group and G0 is a subgroup of G. A representation U of G is induced from a representation V of G0 if and only if there exist the following:
a transitive system of imprimitivity (U, X) and
a subspace W0 ∈ X
such that G0 is the stabilizer subgroup of W0 under the action of G, i.e. G0 = {g ∈ G : U_g W0 = W0},
and V is equivalent to the representation of G0
on W0 given by Uh | W0 for h ∈ G0. Note that by this definition, induced by is a relation between representations. We would like to show that there is actually a map
|
https://en.wikipedia.org/wiki/Gilbert%20Walker%20%28physicist%29
|
Sir Gilbert Thomas Walker (14 June 1868 – 4 November 1958) was an English physicist and statistician of the 20th century. Walker studied mathematics and applied it to a variety of fields including aerodynamics, electromagnetism and the analysis of time-series data before taking up a teaching position at the University of Cambridge. Although he had no experience in meteorology, he was recruited for a post in the Indian Meteorological Department where he worked on statistical approaches to predict the monsoons. He developed methods for the analysis of time-series data that are now called the Yule–Walker equations. He is known for his groundbreaking description of the Southern Oscillation, a major phenomenon of global climate, for discovering what is named after him as the Walker circulation, and for greatly advancing the study of climate in general. He was also instrumental in aiding the early career of the Indian mathematical prodigy, Srinivasa Ramanujan.
Early life and education
Walker was born in Rochdale, Lancashire on 14 June 1868, the fourth child and eldest son of Thomas Walker and Elizabeth Charlotte Haslehurst. Thomas was Borough Engineer of Croydon and had pioneered the use of concrete for town reservoirs. He attended Whitgift School where he showed an interest in mathematics and got a scholarship to study at St Paul's School. He attended Trinity College, Cambridge where he was Senior Wrangler in 1889. His hard studies led to ill-health and he spent several winters recuperating in Switzerland where he learnt skating and became quite expert. He became a lecturer at Trinity College from 1895.
Career
Henry Francis Blanford, the founding director of the Indian Meteorological Department, had noticed the pattern that the summer monsoon in India and Burma was correlated with the spring snow cover in the Himalayas and it became routine to use this to make predictions of the Indian monsoons. By 1892 however, these predictions began to fail and the second director John Eliot began to use several other correlations including strength of the trade winds, anticyclones, Nile floods and data from Australia and Africa. Eliot's forecasts from 1899 to 1901 failed so badly, with a drought and famine when he predicted higher than normal rains, that he was criticized severely by the newspapers leading to forecasts being made confidential from 1902 to 1905. A growing interest in the work of Lockyer on cycles led him to choose a mathematically inclined successor who would be Walker, despite his lack of experience in meteorology. Eliot himself was an able mathematician, a Second Wrangler at Cambridge, while Walker had been a Senior Wrangler. Walker was an established applied mathematician at the University of Cambridge and gave up a Fellowship at Trinity to take up a position as assistant to the meteorological reporter in 1903. He was elevated to the position of director general of observatories in India in 1904. Walker developed Blanford's idea with
|
https://en.wikipedia.org/wiki/Transactions%20of%20the%20American%20Mathematical%20Society
|
The Transactions of the American Mathematical Society is a monthly peer-reviewed scientific journal of mathematics published by the American Mathematical Society. It was established in 1900. As a requirement, all articles must be more than 15 printed pages.
See also
Bulletin of the American Mathematical Society
Journal of the American Mathematical Society
Memoirs of the American Mathematical Society
Notices of the American Mathematical Society
Proceedings of the American Mathematical Society
External links
Transactions of the American Mathematical Society on JSTOR
American Mathematical Society academic journals
Mathematics journals
Publications established in 1900
|
https://en.wikipedia.org/wiki/Separated
|
Separated can refer to:
Marital separation of spouses
Legal separation of spouses
"Separated" (song), song by Avant
Separated sets, a concept in mathematical topology
Separated space, a synonym for Hausdorff space, a concept in mathematical topology
Separated morphism, a concept in algebraic geometry analogous to that of separated space in topology
Separation of conjoined twins, a procedure that allows them to live independently.
Separation (United States military), status of U.S. military personnel after release from active duty, but still having reserve obligations
|
https://en.wikipedia.org/wiki/Loop%20group
|
In mathematics, a loop group is a group of loops in a topological group G with multiplication defined pointwise.
Definition
In its most general form a loop group is a group of continuous mappings from a manifold M to a topological group G.
More specifically, let M = S¹, the circle in the complex plane, and let LG denote the space of continuous maps S¹ → G, i.e.
LG = {γ : S¹ → G : γ is continuous},
equipped with the compact-open topology. An element of LG is called a loop in G.
Pointwise multiplication of such loops gives LG the structure of a topological group. Parametrize S¹ with θ,
and define multiplication in LG by
(γ₁ γ₂)(θ) = γ₁(θ) γ₂(θ).
Associativity follows from associativity in G. The inverse is given by
γ^{-1}(θ) = γ(θ)^{-1},
and the identity by
e(θ) = e ∈ G.
The space LG is called the free loop group on G. A loop group is any subgroup of the free loop group LG.
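A small numerical sketch of pointwise multiplication of loops, taking G = U(1) and sampling loops at finitely many points of the circle (an illustration, not a construction from the article):

```python
import numpy as np

# Loops in the circle group U(1), sampled at N points of S^1 and multiplied pointwise.
N = 8
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

def loop(winding):
    """A loop S^1 -> U(1), here theta |-> exp(i * winding * theta), as sample values."""
    return np.exp(1j * winding * theta)

g1, g2 = loop(1), loop(2)

product = g1 * g2                  # pointwise multiplication of loops
inverse = np.conj(g1)              # pointwise inverse: conjugation inverts elements of U(1)
identity = np.ones(N, dtype=complex)

print(np.allclose(product, loop(3)))          # winding numbers add under the product
print(np.allclose(g1 * inverse, identity))    # gamma * gamma^(-1) is the constant identity loop
```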
Examples
An important example of a loop group is the group
ΩG
of based loops on G. It is defined to be the kernel of the evaluation map
e₁ : LG → G, γ ↦ γ(1),
and hence is a closed normal subgroup of LG. (Here, e₁ is the map that sends a loop to its value at 1.) Note that we may embed G into LG as the subgroup of constant loops. Consequently, we arrive at a split exact sequence
1 → ΩG → LG → G → 1.
The space LG splits as a semi-direct product,
LG = ΩG ⋊ G.
We may also think of ΩG as the loop space on G. From this point of view, ΩG is an H-space with respect to concatenation of loops. On the face of it, this seems to provide ΩG with two very different product maps. However, it can be shown that concatenation and pointwise multiplication are homotopic. Thus, in terms of the homotopy theory of ΩG, these maps are interchangeable.
Loop groups were used to explain the phenomenon of Bäcklund transforms in soliton equations by Chuu-Lian Terng and Karen Uhlenbeck.
Notes
References
See also
Loop space
Loop algebra
Quasigroup
Topological groups
Solitons
|
https://en.wikipedia.org/wiki/Robert%20Sempill
|
Robert Sempill (the elder) (c. 1530–1595), in all probability a cadet of illegitimate birth of the noble house of Sempill or Semple, was a Scottish ballad-writer and satirist.
Very little is known of Sempill's life. He was probably a soldier, and must have held some office at the Scottish court, as his name appears in the Lord Treasurer's books in February 1567 – 1568, and his writings show him to have had an intimate knowledge of court affairs. As a Protestant, he was a bitter opponent of Queen Mary and of the Catholic Church, authoring ballads supporting action against Queen Mary. Sempill was present at the siege of Leith (1559-1560) and at the siege of Edinburgh Castle, serving with the army of James Douglas, Earl of Morton. He was in Paris in 1572, but fled the country after the massacre of St Bartholomew. Three of his poems appear in the Bannatyne Manuscript.
His chief works are:
The Ballat maid vpoun Margret Fleming callit the Flemyng bark
The defence of Crissell Sande-landis
The Claith Merchant or Ballat of Jonet Reid, ane Violet and Ane Quhyt, all three in the Bannatyne manuscript
They are characterized by extreme coarseness, and are probably among his earlier works. His chief political poems are:
The Regentis Tragedie, a broadside of 1570
The Sege of the Castel of Edinburgh (1573), interesting from an historical point of view
Ane Complaint vpon fortoun ... (1581)
The Legend of the Bischop of St Androis Lyfe callit Mr Patrik Adamsone (1583)
Some of his poems and ballads were intended to advance the cause of the King's side during the Marian civil war. He was a mid-ranking King's Party supporter, prominently known despite being outside of party leadership. He assuredly authored twelve poems out of a collection of twenty-five broadsides arguing against Queen Mary as a part of the King's Party's political campaign, which collectively are known as the "Sempill ballads". Anonymous printed ballads such as The tressoun of Dumbertane, Robert Lekprevik, Edinburgh (1570), have been attributed to Sempill. The Tressoun describes Lord Fleming's failed ambush of the English commander William Drury at Dumbarton Castle.
See Chronicle of Scottish Poetry (ed. James Sibbald, Edinburgh, 1802); and Essays on the Poets of Renfrewshire by William Motherwell, in The Harp of Renfrewshire (Paisley, 1819; reprinted 1872).
Modern editions of Sempill are: Sege of the Castel of Edinburgh, a facsimile reprint with introduction by David Constable (1813); The Sempill Ballates (T. G. Stevenson, Edinburgh, 1872) containing all the poems; Satirical poems of the Reformation (ed. James Cranstoun, Scottish Text Soc., 2 vols, 1889-1893) with a memoir of Sempill and a bibliography of his poems.
References
1530s births
1595 deaths
Scottish male songwriters
Scottish soldiers
Scottish satirists
16th-century Scottish writers
16th-century male writers
16th-century Scottish poets
Middle Scots poets
|
https://en.wikipedia.org/wiki/Reflection%20symmetry
|
In mathematics, reflection symmetry, line symmetry, mirror symmetry, or mirror-image symmetry is symmetry with respect to a reflection. That is, a figure which does not change upon undergoing a reflection has reflectional symmetry.
In 2D there is a line/axis of symmetry, in 3D a plane of symmetry. An object or figure which is indistinguishable from its transformed image is called mirror symmetric. In other words, a line of symmetry splits the shape in half, and those halves should be identical.
Symmetric function
In formal terms, a mathematical object is symmetric with respect to a given operation such as reflection, rotation or translation, if, when applied to the object, this operation preserves some property of the object. The set of operations that preserve a given property of the object form a group. Two objects are symmetric to each other with respect to a given group of operations if one is obtained from the other by some of the operations (and vice versa).
The symmetric function of a two-dimensional figure is a line such that, for each perpendicular constructed, if the perpendicular intersects the figure at a distance 'd' from the axis along the perpendicular, then there exists another intersection of the shape and the perpendicular, at the same distance 'd' from the axis, in the opposite direction along the perpendicular.
Another way to think about the symmetric function is that if the shape were to be folded in half over the axis, the two halves would be identical: the two halves are each other's mirror images.
Thus a square has four axes of symmetry, because there are four different ways to fold it and have the edges all match. A circle has infinitely many axes of symmetry.
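A small sketch of checking mirror symmetry of a finite point set numerically (the choice of the y-axis as the mirror line and the example shapes are illustrative assumptions):

```python
import numpy as np

def reflect_across_y_axis(points):
    reflected = points.copy()
    reflected[:, 0] *= -1.0   # (x, y) -> (-x, y)
    return reflected

def is_mirror_symmetric(points):
    # The point set is symmetric if the reflection maps it onto itself (as a set).
    original = {tuple(np.round(p, 9)) for p in points}
    image = {tuple(np.round(p, 9)) for p in reflect_across_y_axis(points)}
    return original == image

square = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
skewed = np.array([[1.0, 1.0], [-2.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])

print(is_mirror_symmetric(square))  # True: the y-axis is an axis of symmetry
print(is_mirror_symmetric(skewed))  # False
```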
Symmetric geometrical shapes
Triangles with reflection symmetry are isosceles. Quadrilaterals with reflection symmetry are kites, (concave) deltoids, rhombi, and isosceles trapezoids. All even-sided polygons have two simple reflective forms, one with lines of reflections through vertices, and one through edges.
For an arbitrary shape, the axiality of the shape measures how close it is to being bilaterally symmetric. It equals 1 for shapes with reflection symmetry, and between 2/3 and 1 for any convex shape.
Advanced types of reflection symmetry
For more general types of reflection there are correspondingly more general types of reflection symmetry. For example:
with respect to a non-isometric affine involution (an oblique reflection in a line, plane, etc.)
with respect to circle inversion.
In nature
Animals that are bilaterally symmetric have reflection symmetry in the sagittal plane, which divides the body vertically into left and right halves, with one of each sense organ and limb pair on either side. Most animals are bilaterally symmetric, likely because this supports forward movement and streamlining.
In architecture
Mirror symmetry is often used in architecture, as in the facade of Santa Maria Novella, Florence. It is also found in the design of
|
https://en.wikipedia.org/wiki/Stiefel%20manifold
|
In mathematics, the Stiefel manifold V_k(R^n) is the set of all orthonormal k-frames in R^n. That is, it is the set of ordered orthonormal k-tuples of vectors in R^n. It is named after Swiss mathematician Eduard Stiefel. Likewise one can define the complex Stiefel manifold V_k(C^n) of orthonormal k-frames in C^n and the quaternionic Stiefel manifold V_k(H^n) of orthonormal k-frames in H^n. More generally, the construction applies to any real, complex, or quaternionic inner product space.
In some contexts, a non-compact Stiefel manifold is defined as the set of all linearly independent k-frames in R^n, C^n, or H^n; this is homotopy equivalent, as the compact Stiefel manifold is a deformation retract of the non-compact one, by Gram–Schmidt. Statements about the non-compact form correspond to those for the compact form, replacing the orthogonal group (or unitary or symplectic group) with the general linear group.
Topology
Let F stand for R, C, or H. The Stiefel manifold V_k(F^n) can be thought of as a set of n × k matrices by writing a k-frame as a matrix of k column vectors in F^n. The orthonormality condition is expressed by A*A = I_k, where A* denotes the conjugate transpose of A and I_k denotes the k × k identity matrix. We then have
V_k(F^n) = {A ∈ F^{n×k} : A*A = I_k}.
The topology on V_k(F^n) is the subspace topology inherited from F^{n×k}. With this topology V_k(F^n) is a compact manifold whose dimension is given by
dim V_k(R^n) = nk − k(k+1)/2, dim V_k(C^n) = 2nk − k², dim V_k(H^n) = 4nk − k(2k−1).
As a homogeneous space
Each of the Stiefel manifolds can be viewed as a homogeneous space for the action of a classical group in a natural manner.
Every orthogonal transformation of a k-frame in R^n results in another k-frame, and any two k-frames are related by some orthogonal transformation. In other words, the orthogonal group O(n) acts transitively on V_k(R^n). The stabilizer subgroup of a given frame is the subgroup isomorphic to O(n−k) which acts nontrivially on the orthogonal complement of the space spanned by that frame.
Likewise the unitary group U(n) acts transitively on V_k(C^n) with stabilizer subgroup U(n−k), and the symplectic group Sp(n) acts transitively on V_k(H^n) with stabilizer subgroup Sp(n−k).
In each case V_k(F^n) can be viewed as a homogeneous space:
V_k(R^n) ≅ O(n)/O(n−k), V_k(C^n) ≅ U(n)/U(n−k), V_k(H^n) ≅ Sp(n)/Sp(n−k).
When k = n, the corresponding action is free so that the Stiefel manifold is a principal homogeneous space for the corresponding classical group.
When k is strictly less than n then the special orthogonal group SO(n) also acts transitively on V_k(R^n) with stabilizer subgroup isomorphic to SO(n−k), so that
V_k(R^n) ≅ SO(n)/SO(n−k) for k < n.
The same holds for the action of the special unitary group on V_k(C^n): V_k(C^n) ≅ SU(n)/SU(n−k) for k < n.
Thus for k = n − 1, the Stiefel manifold is a principal homogeneous space for the corresponding special classical group.
Uniform measure
The Stiefel manifold V_k(R^n) can be equipped with a uniform measure, i.e. a Borel measure that is invariant under the action of the groups noted above. For example, V_1(R²), which is isomorphic to the unit circle in the Euclidean plane, has as its uniform measure the obvious uniform measure (arc length) on the circle. It is straightforward to sample this measure on V_k(R^n) using Gaussian random matrices: if A is an n × k random matrix with independent entries
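A minimal sketch of this sampling idea, using the standard Gaussian-plus-QR orthonormalization (the sign convention and variable names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_stiefel(n, k):
    """Draw a point of V_k(R^n) from the invariant (uniform) measure:
    take an n x k Gaussian matrix and orthonormalize its columns via QR."""
    A = rng.normal(size=(n, k))
    Q, R = np.linalg.qr(A)
    # Fix the signs so the map A -> Q is well defined (standard convention).
    return Q * np.sign(np.diag(R))

X = sample_stiefel(5, 2)
print(np.allclose(X.T @ X, np.eye(2)))   # columns form an orthonormal 2-frame in R^5
```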
|
https://en.wikipedia.org/wiki/Mu%20Alpha%20Theta
|
Mu Alpha Theta () is the United States mathematics honor society for high school and two-year college students. In June 2015, it served over 108,000 student members in over 2,200 chapters in the United States and in 20 foreign countries. Its main goals are to inspire keen interest in mathematics, develop strong scholarship in the subject, and promote the enjoyment of mathematics in high school and two year college students. The name is a rough transliteration of math into Greek (Mu Alpha Theta). Buchholz High School won first place in 2023 for the 15th time in the annually held national convention.
History
The Mu Alpha Theta National High School and Three-Year College Mathematics Honor Society was founded in by Dr. Richard V. Andree and his wife, Josephine Andree, at the University of Oklahoma. In Andree's words, Mu Alpha Theta is "an organization dedicated to promoting scholarship in mathematics and establishing math as an integral part of high school and junior college education". The name Mu Alpha Theta was constructed from the Greek lettering for the phonemes "m", "a", and "th".
Pi Mu Epsilon, the National Collegiate Honor Society of Mathematics, contributed funds for the organization's initial expenses; the University of Oklahoma provided space, clerical help and technical assistance. The Mathematical Association of America, a primary sponsor of the organization since , and the National Council of Teachers of Mathematics nominated the first officers and Board of Governors. The Society for Industrial and Applied Mathematics became an official sponsor in , followed by The American Mathematical Association of Two-Year Colleges in .
The official journal of Mu Alpha Theta, The Mathematical Log, was first issued in on mimeograph and was in printed form starting in . It was published four times during the school year until and featured articles, reports, news and problems for students.
Several different awards are given by Mu Alpha Theta, including the Kalin Award to outstanding students. The Andree award is awarded to students who plan to become mathematics teachers. Chapter sponsors are also recognized by Regional Sponsor Awards, the Sister Scholastica, and the Huneke awards for the most dedicated sponsors. The Rubin Award is presented to a chapter doing volunteer work to help others to enjoy mathematics.
Mu Alpha Theta presents numerous scholarships and grants to its members. Information about the organization can be viewed at mualphatheta.org.
The first Mu Alpha Theta National Convention was held at Trinity University in San Antonio, Texas in . Each year the convention brings together hundreds of teachers and students from across the country for five days of math-related events.
Recent National Conventions
The location of each national convention is announced at the convention held the previous year.
Competition levels
Competition is divided into six levels or divisions, Calculus, Pre-calculus, Algebra II, Geometry, Algebra I, and S
|
https://en.wikipedia.org/wiki/Shemiran
|
[GeoJSON map data: a polygon outlining the boundary of the Shemiran district; coordinate list omitted.]
|
https://en.wikipedia.org/wiki/PGL2
|
PGL2 may refer to
SDHAF2, a gene on chromosome 11 in humans
PGL(2), a group in mathematics; see projective linear group and modular group
|
https://en.wikipedia.org/wiki/Discrete%20Laplace%20operator
|
In mathematics, the discrete Laplace operator is an analog of the continuous Laplace operator, defined so that it has meaning on a graph or a discrete grid. For the case of a finite-dimensional graph (having a finite number of edges and vertices), the discrete Laplace operator is more commonly called the Laplacian matrix.
The discrete Laplace operator occurs in physics problems such as the Ising model and loop quantum gravity, as well as in the study of discrete dynamical systems. It is also used in numerical analysis as a stand-in for the continuous Laplace operator. Common applications include image processing, where it is known as the Laplace filter, and in machine learning for clustering and semi-supervised learning on neighborhood graphs.
Definitions
Graph Laplacians
There are various definitions of the discrete Laplacian for graphs, differing by sign and scale factor (sometimes one averages over the neighboring vertices, other times one just sums; this makes no difference for a regular graph). The traditional definition of the graph Laplacian, given below, corresponds to the negative continuous Laplacian on a domain with a free boundary.
Let G = (V, E) be a graph with vertices V and edges E. Let φ : V → R be a function of the vertices taking values in a ring. Then, the discrete Laplacian Δ acting on φ is defined by
(Δφ)(v) = Σ_{w : d(w,v) = 1} [φ(v) − φ(w)],
where d(w, v) is the graph distance between vertices w and v. Thus, this sum is over the nearest neighbors of the vertex v. For a graph with a finite number of edges and vertices, this definition is identical to that of the Laplacian matrix. That is, φ can be written as a column vector; and so Δφ is the product of the column vector and the Laplacian matrix, while (Δφ)(v) is just the vth entry of the product vector.
If the graph has weighted edges, that is, a weighting function γ : E → R is given, then the definition can be generalized to
(Δ_γ φ)(v) = Σ_{w : d(w,v) = 1} γ_{wv} [φ(v) − φ(w)],
where γ_{wv} is the weight value on the edge wv ∈ E.
Closely related to the discrete Laplacian is the averaging operator:
(Mφ)(v) = (1 / deg v) Σ_{w : d(w,v) = 1} φ(w).
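A minimal sketch constructing the Laplacian matrix L = D − A of a small graph and checking it against the definition above (the example graph is arbitrary):

```python
import numpy as np

# A 4-cycle with one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
D = np.diag(A.sum(axis=1))      # degree matrix
L = D - A                       # graph Laplacian

phi = np.array([1.0, 0.0, 2.0, -1.0])   # a function on the vertices
print(L @ phi)
# Entry v equals the sum over neighbours w of (phi[v] - phi[w]), matching the definition.
print([sum(phi[v] - phi[w] for w in range(n) if A[v, w]) for v in range(n)])
```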
Mesh Laplacians
In addition to considering the connectivity of nodes and edges in a graph, mesh Laplace operators take into account the geometry of a surface (e.g. the angles at the nodes). For a two-dimensional manifold triangle mesh, the Laplace-Beltrami operator of a scalar function u at a vertex i can be approximated as
(Δu)_i ≈ (1 / (2 A_i)) Σ_j (cot α_ij + cot β_ij)(u_j − u_i),
where the sum is over all adjacent vertices j of i, α_ij and β_ij are the two angles opposite of the edge ij, and A_i is the vertex area of i; that is, e.g. one third of the summed areas of triangles incident to i.
It is important to note that the sign of the discrete Laplace-Beltrami operator is conventionally opposite the sign of the ordinary Laplace operator.
The above cotangent formula can be derived using many different methods among which are piecewise linear finite elements, finite volumes, and discrete exterior calculus.
To facilitate computation, the Laplacian is encoded in a matrix L such that L u = (Δu). Let C be the (sparse) cotangent matrix with entries
C_ij = (cot α_ij + cot β_ij) / 2 for j ∈ N(i), C_ii = −Σ_{j ∈ N(i)} C_ij, and C_ij = 0 otherwise,
where N(i) denotes the neighborhood of i.
And let be t
|
https://en.wikipedia.org/wiki/Dual%20lattice
|
In the theory of lattices, the dual lattice is a construction analogous to that of a dual vector space. In certain respects, the geometry of the dual lattice L* of a lattice L is the reciprocal of the geometry of L, a perspective which underlies many of its uses.
Dual lattices have many applications inside of lattice theory, theoretical computer science, cryptography and mathematics more broadly. For instance, it is used in the statement of the Poisson summation formula, transference theorems provide connections between the geometry of a lattice and that of its dual, and many lattice algorithms exploit the dual lattice.
For an article with emphasis on the physics / chemistry applications, see Reciprocal lattice. This article focuses on the mathematical notion of a dual lattice.
Definition
Let L ⊂ R^n be a lattice. That is, L = B Z^k for some matrix B with linearly independent columns.
The dual lattice L* is the set of linear functionals on L which take integer values on each point of L:
L* = {f ∈ (span(L))* : f(x) ∈ Z for all x ∈ L}.
If (R^n)* is identified with R^n using the dot-product, we can write
L* = {v ∈ span(L) : ⟨v, x⟩ ∈ Z for all x ∈ L}.
It is important to restrict to vectors in the span of L, otherwise the resulting object is not a lattice.
Despite this identification of ambient Euclidean spaces, it should be emphasized that a lattice and its dual are fundamentally different kinds of objects; one consists of vectors in Euclidean space, and the other consists of a set of linear functionals on that space. Along these lines, one can also give a more abstract definition as follows:
However, we note that the dual is not considered just as an abstract Abelian group of functionals, but comes with a natural inner product: , where is an orthonormal basis of . (Equivalently, one can declare that, for an orthonormal basis of , the dual vectors , defined by are an orthonormal basis.) One of the key uses of duality in lattice theory is the relationship of the geometry of the primal lattice with the geometry of its dual, for which we need this inner product. In the concrete description given above, the inner product on the dual is generally implicit.
Properties
We list some elementary properties of the dual lattice:
If B is a matrix giving a basis for the lattice L, then z ∈ span(L) satisfies z ∈ L* if and only if B^T z ∈ Z^k.
If B is a matrix giving a basis for the lattice L, then B(B^T B)^{-1} gives a basis for the dual lattice. If L is full rank, (B^T)^{-1} gives a basis for the dual lattice: L* = (B^T)^{-1} Z^n.
The previous fact shows that (L*)* = L. This equality holds under the usual identifications of a vector space with its double dual, or in the setting where the inner product has identified R^n with its dual.
Fix two lattices L and M. Then L ⊆ M if and only if M* ⊆ L*.
The determinant of a lattice is the reciprocal of the determinant of its dual: det(L*) = 1 / det(L).
If c is a nonzero scalar, then (cL)* = (1/c) L*.
If R is a rotation matrix, then (RL)* = R L*.
A lattice L is said to be integral if ⟨x, y⟩ ∈ Z for all x, y ∈ L. Assume that the lattice L is full rank. Under the identification of Euclidean space with its dual, we have that L ⊆ L* for integral lattices L. Recall that, if L₁ ⊆ L₂ and det(L₁) = det(L₂), then L₁ = L₂. From this it follows that an integral lattice with det(L) = 1 satisfies L = L*.
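A small numerical sketch of these facts for a full-rank lattice (the basis matrix below is an arbitrary example):

```python
import numpy as np

# Compute a basis of the dual lattice of a full-rank lattice L = B * Z^n
# via D = (B^{-1})^T, and check the defining integrality property.
B = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # columns generate the lattice L
D = np.linalg.inv(B).T              # columns generate the dual lattice L*

# Every dual basis vector pairs integrally with every lattice basis vector:
print(np.round(D.T @ B, 10))        # the identity matrix, so all pairings are integers

# Determinants are reciprocal: det(L*) = 1 / det(L).
print(abs(np.linalg.det(D)), 1.0 / abs(np.linalg.det(B)))
```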
An integral latti
|
https://en.wikipedia.org/wiki/Bicubic%20interpolation
|
In mathematics, bicubic interpolation is an extension of cubic spline interpolation (a method of applying cubic interpolation to a data set) for interpolating data points on a two-dimensional regular grid. The interpolated surface (meaning the kernel shape, not the image) is smoother than corresponding surfaces obtained by bilinear interpolation or nearest-neighbor interpolation. Bicubic interpolation can be accomplished using Lagrange polynomials, cubic splines, or the cubic convolution algorithm.
In image processing, bicubic interpolation is often chosen over bilinear or nearest-neighbor interpolation in image resampling, when speed is not an issue. In contrast to bilinear interpolation, which only takes 4 pixels (2×2) into account, bicubic interpolation considers 16 pixels (4×4). Images resampled with bicubic interpolation can have different interpolation artifacts, depending on the b and c values chosen.
Computation
Suppose the function values and the derivatives , and are known at the four corners , , , and of the unit square. The interpolated surface can then be written as
The interpolation problem consists of determining the 16 coefficients .
Matching with the function values yields four equations:
Likewise, eight equations for the derivatives in the and the directions:
And four equations for the mixed partial derivative:
The expressions above have used the following identities:
This procedure yields a surface on the unit square that is continuous and has continuous derivatives. Bicubic interpolation on an arbitrarily sized regular grid can then be accomplished by patching together such bicubic surfaces, ensuring that the derivatives match on the boundaries.
Grouping the unknown parameters in a vector
and letting
the above system of equations can be reformulated into a matrix for the linear equation .
Inverting the matrix gives the more useful linear equation , where
which allows to be calculated quickly and easily.
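As a minimal numerical sketch of this computation, one can also build the 16 equations directly from the monomial basis and hand them to a generic linear solver rather than using the closed-form inverse matrix; the function names and the test function below are illustrative only.

import numpy as np

def bicubic_patch(f, fx, fy, fxy):
    """Coefficients a[i, j] of p(x, y) = sum_{i,j} a[i,j] x^i y^j on the unit square.

    f, fx, fy, fxy are 2x2 arrays of values, x-derivatives, y-derivatives and mixed
    derivatives at the four corners, indexed as f[i][j] = f(x=i, y=j).
    """
    def mono(t, k, d):
        # d-th derivative (d in {0, 1}) of the monomial t**k, evaluated at t
        if d == 0:
            return float(t) ** k
        return 0.0 if k == 0 else k * float(t) ** (k - 1)

    A, rhs = [], []
    data = {(0, 0): f, (1, 0): fx, (0, 1): fy, (1, 1): fxy}
    for (dx, dy), values in data.items():
        for x in (0, 1):
            for y in (0, 1):
                A.append([mono(x, i, dx) * mono(y, j, dy)
                          for i in range(4) for j in range(4)])
                rhs.append(values[x][y])
    return np.linalg.solve(np.array(A), np.array(rhs)).reshape(4, 4)

def eval_patch(a, x, y):
    return sum(a[i, j] * x**i * y**j for i in range(4) for j in range(4))

# Interpolating f(x, y) = x*y on the unit square reproduces it exactly:
f   = [[0, 0], [0, 1]]          # f(i, j) = i*j
fx  = [[0, 1], [0, 1]]          # df/dx = y
fy  = [[0, 0], [1, 1]]          # df/dy = x
fxy = [[1, 1], [1, 1]]          # d2f/dxdy = 1
a = bicubic_patch(f, fx, fy, fxy)
print(eval_patch(a, 0.5, 0.5))  # -> 0.25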
There can be another concise matrix form for 16 coefficients:
or
where
Extension to rectilinear grids
Often, applications call for bicubic interpolation using data on a rectilinear grid, rather than the unit square. In this case, the identities for and become
where is the spacing of the cell containing the point and similar for .
In this case, the most practical approach to computing the coefficients is to let
then to solve with as before. Next, the normalized interpolating variables are computed as
where and are the and coordinates of the grid points surrounding the point . Then, the interpolating surface becomes
Finding derivatives from function values
If the derivatives are unknown, they are typically approximated from the function values at points neighbouring the corners of the unit square, e.g. using finite differences.
To find either of the single derivatives, or , using that method, find the slope between the two surrounding points in th
|
https://en.wikipedia.org/wiki/Levi%20graph
|
In combinatorial mathematics, a Levi graph or incidence graph is a bipartite graph associated with an incidence structure. From a collection of points and lines in an incidence geometry or a projective configuration, we form a graph with one vertex per point, one vertex per line, and an edge for every incidence between a point and a line. They are named for Friedrich Wilhelm Levi, who wrote about them in 1942.
The Levi graph of a system of points and lines usually has girth at least six: any 4-cycle would correspond to two lines through the same two points. Conversely, any bipartite graph with girth at least six can be viewed as the Levi graph of an abstract incidence structure. Levi graphs of configurations are biregular, and every biregular graph with girth at least six can be viewed as the Levi graph of an abstract configuration.
Levi graphs may also be defined for other types of incidence structure, such as the incidences between points and planes in Euclidean space. For every Levi graph, there is an equivalent hypergraph, and vice versa.
Examples
The Desargues graph is the Levi graph of the Desargues configuration, composed of 10 points and 10 lines. There are 3 points on each line, and 3 lines passing through each point. The Desargues graph can also be viewed as the generalized Petersen graph G(10,3) or the bipartite Kneser graph with parameters 5,2. It is 3-regular with 20 vertices.
The Heawood graph is the Levi graph of the Fano plane. It is also known as the (3,6)-cage, and is 3-regular with 14 vertices.
The Möbius–Kantor graph is the Levi graph of the Möbius–Kantor configuration, a system of 8 points and 8 lines that cannot be realized by straight lines in the Euclidean plane. It is 3-regular with 16 vertices.
The Pappus graph is the Levi graph of the Pappus configuration, composed of 9 points and 9 lines. Like the Desargues configuration there are 3 points on each line and 3 lines passing through each point. It is 3-regular with 18 vertices.
The Gray graph is the Levi graph of a configuration that can be realized in as a grid of 27 points and the 27 orthogonal lines through them.
The Tutte eight-cage is the Levi graph of the Cremona–Richmond configuration. It is also known as the (3,8)-cage, and is 3-regular with 30 vertices.
The four-dimensional hypercube graph is the Levi graph of the Möbius configuration formed by the points and planes of two mutually incident tetrahedra.
The Ljubljana graph on 112 vertices is the Levi graph of the Ljubljana configuration.
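A small plain-Python sketch (the helper names are ours) that builds the Levi graph of the Fano plane as an adjacency structure and checks the properties mentioned above for the Heawood graph: 14 vertices, 3-regular, girth 6.

from collections import deque

# Fano plane: 7 points (0..6) and 7 lines, each pair of points on exactly one line.
lines = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5)]

# Levi graph: one vertex per point ('p', i), one per line ('l', j), an edge for each incidence.
adj = {('p', i): set() for i in range(7)}
adj.update({('l', j): set() for j in range(7)})
for j, line in enumerate(lines):
    for i in line:
        adj[('p', i)].add(('l', j))
        adj[('l', j)].add(('p', i))

def girth(adj):
    """Length of a shortest cycle, via a BFS from every vertex."""
    best = float('inf')
    for s in adj:
        dist, parent = {s: 0}, {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    q.append(v)
                elif parent[u] != v:          # non-tree edge: closes a cycle through s (or a shorter one)
                    best = min(best, dist[u] + dist[v] + 1)
    return best

assert len(adj) == 14                                 # 7 + 7 vertices
assert all(len(nbrs) == 3 for nbrs in adj.values())   # 3-regular
assert girth(adj) == 6                                # no two lines share two points
print("Levi graph of the Fano plane: 14 vertices, 3-regular, girth 6")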
References
External links
Families of sets
Configurations (geometry)
Geometric graphs
|
https://en.wikipedia.org/wiki/Closed%20monoidal%20category
|
In mathematics, especially in category theory, a closed monoidal category (or a monoidal closed category) is a category that is both a monoidal category and a closed category in such a way that the structures are compatible.
A classic example is the category of sets, Set, where the monoidal product of sets and is the usual cartesian product , and the internal Hom is the set of functions from to . A non-cartesian example is the category of vector spaces, K-Vect, over a field . Here the monoidal product is the usual tensor product of vector spaces, and the internal Hom is the vector space of linear maps from one vector space to another.
The internal language of closed symmetric monoidal categories is linear logic and the type system is the linear type system. Many examples of closed monoidal categories are symmetric. However, this need not always be the case, as non-symmetric monoidal categories can be encountered in category-theoretic formulations of linguistics; roughly speaking, this is because word-order in natural language matters.
Definition
A closed monoidal category is a monoidal category such that for every object the functor given by right tensoring with
has a right adjoint, written
This means that there exists a bijection, called 'currying', between the Hom-sets
that is natural in both A and C. In a different, but common notation, one would say that the functor
has a right adjoint
Equivalently, a closed monoidal category is a category equipped, for every two objects A and B, with
an object ,
a morphism ,
satisfying the following universal property: for every morphism
there exists a unique morphism
such that
It can be shown that this construction defines a functor . This functor is called the internal Hom functor, and the object is called the internal Hom of and . Many other notations are in common use for the internal Hom. When the tensor product on is the cartesian product, the usual notation is and this object is called the exponential object.
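In the Set example from the introduction, the adjunction is ordinary currying of functions; a minimal illustration (all names are ours):

from typing import Callable, Tuple, TypeVar

A = TypeVar("A"); B = TypeVar("B"); C = TypeVar("C")

def curry(f: Callable[[Tuple[A, B]], C]) -> Callable[[A], Callable[[B], C]]:
    # Hom(A x B, C) -> Hom(A, C^B)
    return lambda a: lambda b: f((a, b))

def uncurry(g: Callable[[A], Callable[[B], C]]) -> Callable[[Tuple[A, B]], C]:
    # Hom(A, C^B) -> Hom(A x B, C)
    return lambda ab: g(ab[0])(ab[1])

# The two maps are mutually inverse, realizing the natural bijection of the adjunction:
add = lambda ab: ab[0] + ab[1]
assert uncurry(curry(add))((2, 3)) == add((2, 3)) == 5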
Biclosed and symmetric categories
Strictly speaking, we have defined a right closed monoidal category, since we required that right tensoring with any object has a right adjoint. In a left closed monoidal category, we instead demand that the functor of left tensoring with any object
have a right adjoint
A biclosed monoidal category is a monoidal category that is both left and right closed.
A symmetric monoidal category is left closed if and only if it is right closed. Thus we may safely speak of a 'symmetric monoidal closed category' without specifying whether it is left or right closed. In fact, the same is true more generally for braided monoidal categories: since the braiding makes naturally isomorphic to , the distinction between tensoring on the left and tensoring on the right becomes immaterial, so every right closed braided monoidal category becomes left closed in a canonical way, and vice versa.
We have described closed monoidal categori
|
https://en.wikipedia.org/wiki/Closed%20category
|
In category theory, a branch of mathematics, a closed category is a special kind of category.
In a locally small category, the external hom (x, y) maps a pair of objects to a set of morphisms. So in the category of sets, this is an object of the category itself. In the same vein, in a closed category, the (object of) morphisms from one object to another can be seen as lying inside the category. This is the internal hom [x, y].
Every closed category has a forgetful functor to the category of sets, which in particular takes the internal hom to the external hom.
Definition
A closed category can be defined as a category with a so-called internal Hom functor
with left Yoneda arrows
natural in and and dinatural in , and a fixed object of with a natural isomorphism
and a dinatural transformation
,
all satisfying certain coherence conditions.
Examples
Cartesian closed categories are closed categories. In particular, any topos is closed. The canonical example is the category of sets.
Compact closed categories are closed categories. The canonical example is the category FdVect with finite-dimensional vector spaces as objects and linear maps as morphisms.
More generally, any monoidal closed category is a closed category. In this case, the object is the monoidal unit.
References
|
https://en.wikipedia.org/wiki/Overlap
|
Overlap may refer to:
In set theory, an overlap of elements shared between sets is called an intersection, as in a Venn diagram.
In music theory, overlap is a synonym for reinterpretation of a chord at the boundary of two musical phrases
Overlap (railway signalling), the length of track beyond a stop signal that is proved to be clear of obstructions as a safety margin
Overlap (road), a place where multiple road numbers overlap
Overlap (term rewriting), in mathematics, computer science, and logic, a property of the reduction rules in term rewriting systems
Overlap add, an efficient convolution method using FFT
Overlap coefficient, a similarity measure between sets
Orbital overlap, important concept in quantum mechanics describing a type of orbital interaction that affects bond strength
Overlap, publisher of the light novel series Arifureta: From Commonplace to World's Strongest
Overlapping can refer to:
"Reaching over", term in Schenkerian theory, see Schenkerian analysis#Lines between voices, reaching over
See also
Overlay (disambiguation)
Overload (disambiguation)
|
https://en.wikipedia.org/wiki/Walk-to-strikeout%20ratio
|
In baseball statistics, walk-to-strikeout ratio (BB/K) is a measure of a hitter's plate discipline and knowledge of the strike zone. Generally, a hitter with a good walk-to-strikeout ratio must exhibit enough patience at the plate to refrain from swinging at bad pitches and take a base on balls, but he must also have the ability to recognize pitches within the strike zone and avoid striking out. Joe Morgan and Wade Boggs are two examples of hitters with a good walk-to-strikeout ratio. A hit by pitch is not counted statistically as a walk and therefore not counted in the walk-to-strikeout ratio.
The inverse of this, the strikeout-to-walk ratio, is used to compare pitchers.
Leaders
Best single-season walk-to-strikeout ratios from 1913 to 2011:
In 2018, Jose Ramirez had the best BB/K ratio in the major leagues, at 1.33.
References
See also
On-base percentage
Walk percentage
Batting statistics
|
https://en.wikipedia.org/wiki/Strikeout-to-walk%20ratio
|
In baseball statistics, strikeout-to-walk ratio (K/BB) is a measure of a pitcher's ability to control pitches, calculated as strikeouts divided by bases on balls.
A hit by pitch is not counted statistically as a walk, and therefore not counted in the strikeout-to-walk ratio.
The inverse of this calculation is the related statistic for hitters, walk-to-strikeout ratio (BB/K).
Leaders
A pitcher who possesses a great K/BB ratio is usually a dominant power pitcher, such as Randy Johnson, Pedro Martínez, Curt Schilling, or Mariano Rivera. However, in 2005, Minnesota Twins starting pitcher Carlos Silva easily led the major leagues in K/BB ratio with 7.89:1, despite striking out only 71 batters over 188⅓ innings pitched; he walked only nine batters.
Through 2022, the all-time career leaders among starting pitchers were Chris Sale (5.3333), Jacob deGrom (5.3036), and Tommy Bond (5.0363).
Through May 22, 2019, the all-time career leaders among relievers were Koji Uehara (7.94), Sean Doolittle (6.41), and Roberto Osuna (6.33).
The player with the highest single regular season K/BB ratio through 2022 was Minnesota Twins pitcher Phil Hughes in 2014, with a ratio of 11.625 (186 strikeouts and 16 walks). He is followed by Bret Saberhagen (11.00 in 1994) and Cliff Lee (10.28 in 2010).
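A trivial sketch of the calculation, checked against the figures quoted above (the function name is ours):

def k_per_bb(strikeouts: int, walks: int) -> float:
    # strikeout-to-walk ratio; left undefined (infinite) when a pitcher issues no walks
    if walks == 0:
        return float("inf")
    return strikeouts / walks

# Phil Hughes, 2014: 186 strikeouts, 16 walks
assert round(k_per_bb(186, 16), 3) == 11.625
# Carlos Silva, 2005: 71 strikeouts, 9 walks
print(round(k_per_bb(71, 9), 2))   # -> 7.89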
References
Pitching statistics
Statistical ratios
|
https://en.wikipedia.org/wiki/Probability%20function
|
Probability function may refer to:
Probability distribution
Probability axioms, which define a probability function
Probability measure, a real-valued function on a probability space
See also
Probability distribution function (disambiguation)
|
https://en.wikipedia.org/wiki/Generic%20polynomial
|
In mathematics, a generic polynomial refers usually to a polynomial whose coefficients are indeterminates. For example, if , , and are indeterminates, the generic polynomial of degree two in is
However in Galois theory, a branch of algebra, and in this article, the term generic polynomial has a different, although related, meaning: a generic polynomial for a finite group G and a field F is a monic polynomial P with coefficients in the field of rational functions L = F(t1, ..., tn) in n indeterminates over F, such that the splitting field M of P has Galois group G over L, and such that every extension K/F with Galois group G can be obtained as the splitting field of a polynomial which is the specialization of P resulting from setting the n indeterminates to n elements of F. This is sometimes called F-generic or relative to the field F; a Q-generic polynomial, which is generic relative to the rational numbers is called simply generic.
The existence, and especially the construction, of a generic polynomial for a given Galois group provides a complete solution to the inverse Galois problem for that group. However, not all Galois groups have generic polynomials, a counterexample being the cyclic group of order eight.
Groups with generic polynomials
The symmetric group Sn. This is trivial, as
is a generic polynomial for Sn.
Cyclic groups Cn, where n is not divisible by eight. Lenstra showed that a cyclic group does not have a generic polynomial if n is divisible by eight, and G. W. Smith explicitly constructs such a polynomial in case n is not divisible by eight.
The cyclic group construction leads to other classes of generic polynomials; in particular the dihedral group Dn has a generic polynomial if and only if n is not divisible by eight.
The quaternion group Q8.
Heisenberg groups for any odd prime p.
The alternating group A4.
The alternating group A5.
Reflection groups defined over Q, including in particular groups of the root systems for E6, E7, and E8.
Any group which is a direct product of two groups both of which have generic polynomials.
Any group which is a wreath product of two groups both of which have generic polynomials.
Examples of generic polynomials
Generic polynomials are known for all transitive groups of degree 5 or less.
Generic Dimension
The generic dimension for a finite group G over a field F, denoted , is defined as the minimal number of parameters in a generic polynomial for G over F, or if no generic polynomial exists.
Examples:
Publications
Jensen, Christian U., Ledet, Arne, and Yui, Noriko, Generic Polynomials, Cambridge University Press, 2002
Field (mathematics)
Galois theory
|
https://en.wikipedia.org/wiki/Chen%27s%20theorem
|
In number theory, Chen's theorem states that every sufficiently large even number can be written as the sum of either two primes, or a prime and a semiprime (the product of two primes).
It is a weakened form of Goldbach's conjecture, which states that every even number is the sum of two primes.
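The statement can be checked empirically for small even numbers; the following SymPy sketch (an illustration only, not part of any proof) searches for a decomposition into a prime plus a prime or semiprime.

from sympy import isprime, factorint

def is_prime_or_semiprime(n: int) -> bool:
    # True if n is a prime or a product of two primes (counted with multiplicity)
    return n > 1 and sum(factorint(n).values()) <= 2

def chen_decomposition(n: int):
    """Return (p, q) with n = p + q, p prime, q prime or semiprime, if one exists."""
    for p in range(2, n - 1):
        if isprime(p) and is_prime_or_semiprime(n - p):
            return p, n - p
    return None

# Every even number in this small range admits such a decomposition.
for n in range(4, 2001, 2):
    assert chen_decomposition(n) is not None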
History
The theorem was first stated by Chinese mathematician Chen Jingrun in 1966, with further details of the proof in 1973. His original proof was much simplified by P. M. Ross in 1975. Chen's theorem is a major step towards Goldbach's conjecture, and a celebrated result of the sieve methods.
Chen's theorem represents the strengthening of a previous result due to Alfréd Rényi, who in 1947 had shown there exists a finite K such that any even number can be written as the sum of a prime number and the product of at most K primes.
Variations
Chen's 1973 paper stated two results with nearly identical proofs. His Theorem I, on the Goldbach conjecture, was stated above. His Theorem II is a result on the twin prime conjecture. It states that if h is a positive even integer, there are infinitely many primes p such that p + h is either prime or the product of two primes.
Ying Chun Cai proved the following in 2002:
Tomohiro Yamada claimed a proof of the following explicit version of Chen's theorem in 2015:
In 2022, Matteo Bordignon identified gaps in Yamada's proof, which he addresses in his PhD thesis.
References
Citations
Books
Chapter 10.
External links
Jean-Claude Evard, Almost twin primes and Chen's theorem
Theorems in analytic number theory
Theorems about prime numbers
Chinese mathematical discoveries
|
https://en.wikipedia.org/wiki/Christoffel%20symbols
|
In mathematics and physics, the Christoffel symbols are an array of numbers describing a metric connection. The metric connection is a specialization of the affine connection to surfaces or other manifolds endowed with a metric, allowing distances to be measured on that surface. In differential geometry, an affine connection can be defined without reference to a metric, and many additional concepts follow: parallel transport, covariant derivatives, geodesics, etc. also do not require the concept of a metric. However, when a metric is available, these concepts can be directly tied to the "shape" of the manifold itself; that shape is determined by how the tangent space is attached to the cotangent space by the metric tensor. Abstractly, one would say that the manifold has an associated (orthonormal) frame bundle, with each "frame" being a possible choice of a coordinate frame. An invariant metric implies that the structure group of the frame bundle is the orthogonal group . As a result, such a manifold is necessarily a (pseudo-)Riemannian manifold. The Christoffel symbols provide a concrete representation of the connection of (pseudo-)Riemannian geometry in terms of coordinates on the manifold. Additional concepts, such as parallel transport, geodesics, etc. can then be expressed in terms of Christoffel symbols.
In general, there are an infinite number of metric connections for a given metric tensor; however, there is a unique connection that is free of torsion, the Levi-Civita connection. It is common in physics and general relativity to work almost exclusively with the Levi-Civita connection, by working in coordinate frames (called holonomic coordinates) where the torsion vanishes. For example, in Euclidean spaces, the Christoffel symbols describe how the local coordinate bases change from point to point.
At each point of the underlying -dimensional manifold, for any local coordinate system around that point, the Christoffel symbols are denoted for . Each entry of this array is a real number. Under linear coordinate transformations on the manifold, the Christoffel symbols transform like the components of a tensor, but under general coordinate transformations (diffeomorphisms) they do not. Most of the algebraic properties of the Christoffel symbols follow from their relationship to the affine connection; only a few follow from the fact that the structure group is the orthogonal group (or the Lorentz group for general relativity).
Christoffel symbols are used for performing practical calculations. For example, the Riemann curvature tensor can be expressed entirely in terms of the Christoffel symbols and their first partial derivatives. In general relativity, the connection plays the role of the gravitational force field with the corresponding gravitational potential being the metric tensor. When the coordinate system and the metric tensor share some symmetry, many of the are zero.
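As a concrete example of such a practical calculation, the following SymPy sketch computes the Christoffel symbols of the Levi-Civita connection for the round metric on the unit 2-sphere from the standard formula Gamma^k_ij = (1/2) g^{kl} (d_i g_{lj} + d_j g_{li} - d_l g_{ij}); the variable names are ours.

import sympy as sp

theta, phi = sp.symbols('theta phi')
coords = [theta, phi]

# Round metric on the unit 2-sphere: ds^2 = dtheta^2 + sin(theta)^2 dphi^2
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
g_inv = g.inv()
n = len(coords)

# Levi-Civita connection coefficients, Gamma[k][i][j] = Gamma^k_{ij}
Gamma = [[[sp.simplify(sum(g_inv[k, l] * (sp.diff(g[l, j], coords[i])
                                          + sp.diff(g[l, i], coords[j])
                                          - sp.diff(g[i, j], coords[l]))
                           for l in range(n)) / 2)
           for j in range(n)]
          for i in range(n)]
         for k in range(n)]

print(Gamma[0][1][1])   # Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
print(Gamma[1][0][1])   # Gamma^phi_{theta phi} = cot(theta)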
The Christoffel symbols are named for Elwin Bruno Christoffel.
|
https://en.wikipedia.org/wiki/Vaghela%20dynasty
|
{
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": { "marker-symbol": "monument", "title": "Abu" },
"geometry": { "type": "Point", "coordinates": [72.7156274, 24.5925909] }
},
{
"type": "Feature",
"properties": { "marker-symbol": "monument", "title": "Ahmedabad" },
"geometry": { "type": "Point", "coordinates": [72.5713621, 23.022505] }
},
{
"type": "Feature",
"properties": { "marker-symbol": "monument", "title": "Amran (Amaran)" },
"geometry": { "type": "Point", "coordinates": [70.5629575, 22.8311109] }
},
{
"type": "Feature",
"properties": { "marker-symbol": "monument", "title": "Anavada" },
"geometry": { "type": "Point", "coordinates": [72.0901453, 23.850845] }
},
{
"type": "Feature",
"properties": { "marker-symbol": "monument", "title": "Bharana" },
"geometry": { "type": "Point", "coordinates": [69.7078423, 22.373282] }
},
{
"type": "Feature",
"properties": { "marker-symbol": "monument", "title": "Dabhoi" },
"geometry": { "type": "Point", "coordinates": [73.4121277, 22.1323391] }
},
{
"type": "Feature",
"properties": { "marker-symbol": "monument", "title": "Desan", "description": "Muralidhar Temple in Bhiloda taluka" },
"geometry": { "type": "Point", "coordinates": [73.2190318, 23.7504668] }
},
{
"type": "Feature",
"properties": { "marker-symbol": "monument", "title": "Girnar" },
"geometry": { "type": "Point", "coordinates": [70.5502916, 21.5178869] }
},
{
"type": "Feature",
"properties": { "marker-symbol": "monument", "title": "Kadi" },
"geometry": { "type": "Point", "coordinates": [72.3310025, 23.29785] }
},
{
"type": "Feature",
"properties": { "marker-symbol": "monument", "title": "Kantela" },
"geometry": { "type": "Point", "coordinates": [69.5201652, 21.7146257] }
},
{
"type": "Feature",
"properties": { "marker-symbol": "monument", "title": "Khambhat (Cambay)" },
"geometry": { "type": "Point", "coordinates": [72.6189845, 22.3180817] }
},
{
"type": "Feature",
"properties": { "marker-symbol": "monument", "title": "Khokhra" },
"geometry": { "type": "Point", "coordinates": [69.9848103, 23.2210259] }
},
{
"type": "Feature",
"properties": { "marker-symbol": "monument", "title": "Mangrol" },
"geometry": { "type": "Point", "coordinates": [70.1158113, 21.1171698] }
},
{
"type": "Feature",
"properties": { "marker-symbol": "monument", "title": "Patan", "description": "Vaidyanatha Mahadeva Temple" },
"geometry": { "type": "Point", "coordinates": [72.1266255, 23.8493246] }
},
{
"type": "Feature",
"properties": { "marker-symbol": "monument", "title": "Porbandar" },
"geometry": { "type": "Point", "coordinates": [69.6292654, 21.6417069] }
},
{
"type": "Feature",
"properties": { "marker-symbol": "monument", "title": "Rava (Rav)" },
"geometry": { "type": "Point", "coordinates": [69.0381597, 23.2058476] }
},
{
"type":
|
https://en.wikipedia.org/wiki/Morley%20rank
|
In mathematical logic, Morley rank, introduced by , is a means of measuring the size of a subset of a model of a theory, generalizing the notion of dimension in algebraic geometry.
Definition
Fix a theory T with a model M. The Morley rank of a formula φ defining a definable (with parameters) subset S of M
is an ordinal or −1 or ∞, defined by first recursively defining what it means for a formula to have Morley rank at least α for some ordinal α.
The Morley rank is at least 0 if S is non-empty.
For α a successor ordinal, the Morley rank is at least α if in some elementary extension N of M, the set S has countably infinitely many disjoint definable subsets Si, each of rank at least α − 1.
For α a non-zero limit ordinal, the Morley rank is at least α if it is at least β for all β less than α.
The Morley rank is then defined to be α if it is at least α but not at least α + 1, and is defined to be ∞ if it is at least α for all ordinals α, and is defined to be −1 if S is empty.
For a definable subset of a model M (defined by a formula φ) the Morley rank is defined to be the Morley rank of φ in any ℵ0-saturated elementary extension of M. In particular for ℵ0-saturated models the Morley rank of a subset is the Morley rank of any formula defining the subset.
If φ defining S has rank α, and S breaks up into no more than n < ω subsets of rank α, then φ is said to have Morley degree n. A formula defining a finite set has Morley rank 0. A formula with Morley rank 1 and Morley degree 1 is called strongly minimal. A strongly minimal structure is one where the trivial formula x = x is strongly minimal. Morley rank and strongly minimal structures are key tools in the proof of Morley's categoricity theorem and in the larger area of model theoretic stability theory.
Examples
The empty set has Morley rank −1, and conversely anything of Morley rank −1 is empty.
A subset has Morley rank 0 if and only if it is finite and non-empty.
If V is an algebraic set in Kn, for an algebraically closed field K, then the Morley rank of V is the same as its usual Krull dimension. The Morley degree of V is the number of irreducible components of maximal dimension; this is not the same as its degree in algebraic geometry, except when its components of maximal dimension are linear spaces.
The rational numbers, considered as an ordered set, has Morley rank ∞, as it contains a countable disjoint union of definable subsets isomorphic to itself.
See also
Cherlin–Zilber conjecture
Group of finite Morley rank
U-rank
References
Alexandre Borovik, Ali Nesin, "Groups of finite Morley rank", Oxford Univ. Press (1994)
B. Hart Stability theory and its variants (2000) pp. 131–148 in Model theory, algebra and geometry, edited by D. Haskell et al., Math. Sci. Res. Inst. Publ. 39, Cambridge Univ. Press, New York, 2000. Contains a formal definition of Morley rank.
David Marker Model Theory of Differential Fields (2000) pp. 53–63 in Model theory, algebra and geometry, edited by D. Has
|
https://en.wikipedia.org/wiki/%C4%BDudov%C3%ADt%20La%C4%8Dn%C3%BD
|
Ľudovít Lačný (December 8, 1926 – December 25, 2019) was a Slovak chess problem composer and judge.
Lačný was born in Banská Štiavnica and studied mathematics, working as a teacher and as a computer programmer.
In 1956 Lačný was appointed an International Judge of Chess Compositions and in 2005 was awarded the International Master for Chess Composition title. He is best known as the eponym of the Lacny cycle, a theme he invented in 1949.
External links
Lacny's page on Juraj Lorinc's website
References
Chess composers
Slovak chess players
1926 births
2019 deaths
People from Banská Štiavnica
International Judges of Chess Compositions
|
https://en.wikipedia.org/wiki/Pandiagonal%20magic%20cube
|
In recreational mathematics, a pandiagonal magic cube is a magic cube with the additional property that all broken diagonals (parallel to exactly two of the three coordinate axes) have the same sum as each other. Pandiagonal magic cubes are extensions of diagonal magic cubes (in which only the unbroken diagonals need to have the same sum as the rows of the cube) and generalize pandiagonal magic squares to three dimensions.
In a pandiagonal magic cube, all 3m planar arrays must be panmagic squares. The 6 oblique squares are always magic. Several of them may be panmagic squares.
A proper pandiagonal magic cube has exactly 9m² lines plus the 4 main space diagonals summing correctly (no broken space diagonals have the correct sum).
The smallest pandiagonal magic cube has order 7.
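A verification sketch (the helper names are ours) for the condition stated above, namely that all 3m axis-aligned planar slices of an order-m cube are panmagic squares; it checks a given cube rather than constructing one.

import numpy as np

def is_panmagic_square(M: np.ndarray) -> bool:
    """Check that every row, column, and broken diagonal of M has the same sum."""
    m = M.shape[0]
    s = M[0].sum()
    rows = all(M[i].sum() == s for i in range(m))
    cols = all(M[:, j].sum() == s for j in range(m))
    diags = all(sum(M[i, (i + k) % m] for i in range(m)) == s and
                sum(M[i, (k - i) % m] for i in range(m)) == s
                for k in range(m))
    return rows and cols and diags

def is_pandiagonal_magic_cube(A: np.ndarray) -> bool:
    """All 3m axis-aligned planar slices must be panmagic squares."""
    m = A.shape[0]
    return (all(is_panmagic_square(A[i, :, :]) for i in range(m)) and
            all(is_panmagic_square(A[:, j, :]) for j in range(m)) and
            all(is_panmagic_square(A[:, :, k]) for k in range(m)))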
See also
Magic cube classes
References
Hendricks, J.R; Magic Squares to Tesseracts by Computer, Self-published 1999.
Hendricks, J.R.; Perfect n-Dimensional Magic Hypercubes of Order 2n, Self-published 1999.
Harvey Heinz: All about magic cubes
Magic squares
|
https://en.wikipedia.org/wiki/Ultraparallel%20theorem
|
In hyperbolic geometry, two lines are said to be ultraparallel if they do not intersect and are not limiting parallel.
The ultraparallel theorem states that every pair of (distinct) ultraparallel lines has a unique common perpendicular (a hyperbolic line which is perpendicular to both lines).
Hilbert's construction
Let r and s be two ultraparallel lines.
From any two distinct points A and C on s draw AB and CB' perpendicular to r with B and B' on r.
If it happens that AB = CB', then the desired common perpendicular joins the midpoints of AC and BB' (by the symmetry of the Saccheri quadrilateral ACB'B).
If not, we may suppose AB < CB' without loss of generality. Let E be a point on the line s on the opposite side of A from C. Take A' on CB' so that A'B' = AB. Through A' draw a line s' (A'E') on the side closer to E, so that the angle B'A'E' is the same as angle BAE. Then s' meets s in an ordinary point D'. Construct a point D on ray AE so that AD = A'D'.
Then D' ≠ D. They are the same distance from r and both lie on s. So the perpendicular bisector of D'D (a segment of s) is also perpendicular to r.
(If r and s were asymptotically parallel rather than ultraparallel, this construction would fail because s' would not meet s. Rather s' would be asymptotically parallel to both s and r.)
Proof in the Poincaré half-plane model
Let
be four distinct points on the abscissa of the Cartesian plane. Let and be semicircles above the abscissa with diameters and respectively. Then in the Poincaré half-plane model HP, and represent ultraparallel lines.
Compose the following two hyperbolic motions:
Then
Now continue with these two hyperbolic motions:
Then stays at , , , (say). The unique semicircle, with center at the origin, perpendicular to the one on must have a radius tangent to the radius of the other. The right triangle formed by the abscissa and the perpendicular radii has hypotenuse of length . Since is the radius of the semicircle on , the common perpendicular sought has radius-square
The four hyperbolic motions that produced above can each be inverted and applied in reverse order to the semicircle centered at the origin and of radius to yield the unique hyperbolic line perpendicular to both ultraparallels and .
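Rather than composing hyperbolic motions as in the proof above, one can also find the common perpendicular numerically from the orthogonality condition for circles centred on the abscissa; a small sketch (the function name and example values are ours):

import math

def common_perpendicular(c1: float, r1: float, c2: float, r2: float):
    """Common perpendicular of the half-plane geodesics |z - c1| = r1 and |z - c2| = r2.

    A circle centred at p on the real axis with radius rho meets |z - c| = r at a right
    angle iff (p - c)^2 = rho^2 + r^2; solving this for both geodesics gives p and rho.
    """
    if c1 == c2:
        raise ValueError("concentric geodesics: the common perpendicular is a vertical line")
    p = (c1**2 - c2**2 - r1**2 + r2**2) / (2 * (c1 - c2))
    rho_sq = (p - c1)**2 - r1**2
    if rho_sq <= 0:
        raise ValueError("the geodesics are not ultraparallel")
    return p, math.sqrt(rho_sq)

# Two disjoint semicircles (ultraparallel lines) and their common perpendicular:
print(common_perpendicular(0.0, 1.0, 5.0, 2.0))   # centre 2.2, radius sqrt(3.84)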
Proof in the Beltrami-Klein model
In the Beltrami-Klein model of the hyperbolic geometry:
two ultraparallel lines correspond to two non-intersecting chords.
The poles of these two lines are the respective intersections of the tangent lines to the boundary circle at the endpoints of the chords.
Lines perpendicular to line l are modeled by chords whose extension passes through the pole of l.
Hence we draw the unique line between the poles of the two given lines, and intersect it with the boundary circle ; the chord of intersection will be the desired common perpendicular of the ultraparallel lines.
If one of the chords happens to be a diameter, we do not have a pole, but in this case any chord perpendicular
|
https://en.wikipedia.org/wiki/Midpoint%20method
|
In numerical analysis, a branch of applied mathematics, the midpoint method is a one-step method for numerically solving the differential equation,
The explicit midpoint method is given by the formula
the implicit midpoint method by
for Here, is the step size, a small positive number, and is the computed approximate value of The explicit midpoint method is sometimes also known as the modified Euler method, the implicit method is the simplest collocation method, and, applied to Hamiltonian dynamics, a symplectic integrator. Note that the modified Euler method can refer to Heun's method; for further clarity see List of Runge–Kutta methods.
The name of the method comes from the fact that in the formula above, the function giving the slope of the solution is evaluated at the midpoint between at which the value of is known and at which the value of needs to be found.
A geometric interpretation may give a better intuitive understanding of the method (see figure at right). In the basic Euler's method, the tangent of the curve at is computed using . The next value is found where the tangent intersects the vertical line . However, if the second derivative is only positive between and , or only negative (as in the diagram), the curve will increasingly veer away from the tangent, leading to larger errors as increases. The diagram illustrates that the tangent at the midpoint (upper, green line segment) would most likely give a more accurate approximation of the curve in that interval. However, this midpoint tangent could not be accurately calculated because we do not know the curve (that is what is to be calculated). Instead, this tangent is estimated by using the original Euler's method to estimate the value of at the midpoint, then computing the slope of the tangent with . Finally, the improved tangent is used to calculate the value of from . This last step is represented by the red chord in the diagram. Note that the red chord is not exactly parallel to the green segment (the true tangent), due to the error in estimating the value of at the midpoint.
The local error at each step of the midpoint method is of order , giving a global error of order . Thus, while more computationally intensive than Euler's method, the midpoint method's error generally decreases faster as .
The methods are examples of a class of higher-order methods known as Runge–Kutta methods.
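A minimal sketch of the explicit midpoint method applied to y' = y, y(0) = 1, whose exact solution is e^t (the step size and names are of our choosing):

import math

def explicit_midpoint(f, t0, y0, h, steps):
    """Explicit midpoint method for y' = f(t, y)."""
    t, y = t0, y0
    for _ in range(steps):
        k = f(t + h / 2, y + (h / 2) * f(t, y))   # slope estimated at the midpoint
        y += h * k
        t += h
    return y

f = lambda t, y: y
approx = explicit_midpoint(f, 0.0, 1.0, 0.1, 10)
print(approx, math.e)   # midpoint method: ~2.7141, exact: 2.71828...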
Derivation of the midpoint method
The midpoint method is a refinement of the Euler method
and is derived in a similar manner.
The key to deriving Euler's method is the approximate equality
which is obtained from the slope formula
and keeping in mind that
For the midpoint methods, one replaces (3) with the more accurate
when instead of (2) we find
One cannot use this equation to find as one does not know at . The solution is then to use a Taylor series expansion exactly as if using the Euler method to solve for :
which, when plugged in (4), gives us
and
|
https://en.wikipedia.org/wiki/Times%20on%20base
|
In baseball statistics, the term times on base (TOB) is the cumulative total number of times a batter has reached base as a result of a hit, base on balls, or hit by pitch. This statistic does not include times reaching base by way of an error, uncaught third strike, fielder's obstruction or a fielder's choice, making the statistic somewhat of a misnomer.
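The definition amounts to a simple sum; a trivial sketch with made-up figures:

def times_on_base(hits: int, walks: int, hit_by_pitch: int) -> int:
    # Only hits, bases on balls, and times hit by pitch count toward TOB;
    # errors, fielder's choices, etc. do not.
    return hits + walks + hit_by_pitch

# Illustrative (made-up) season line: 200 hits, 80 walks, 5 HBP
print(times_on_base(200, 80, 5))   # -> 285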
Times on base leaders in Major League Baseball
Career
As of the end of the 2021 season, the following are the top 10 players in career times on base.
Pete Rose – 5929
Barry Bonds – 5599
Ty Cobb – 5532
Rickey Henderson – 5343
Carl Yastrzemski – 5304
Stan Musial – 5282
Hank Aaron – 5205
Tris Speaker – 4998
Babe Ruth – 4978
Eddie Collins – 4891
Single-season
Babe Ruth, Yankees (1923) – 379
Barry Bonds, Giants (2004) – 376
Ted Williams, Red Sox (1949) – 358
Barry Bonds, Giants (2002) – 356
Billy Hamilton, Phillies (1894) – 355
Babe Ruth, Yankees (1921) – 353
Babe Ruth, Yankees (1924) – 346
Ted Williams, Red Sox (1947) – 345
Three players are tied for ninth:
Lou Gehrig, Yankees (1936) – 342
Wade Boggs, Red Sox (1988) – 342
Barry Bonds, Giants (2001) – 342
Single game
Three players have had 9 TOB in a single game:
Max Carey, July 7, 1922 – six hits, three walks (18-inning game)
Johnny Burnett, July 10, 1932 – nine hits (18-inning game)
Stan Hack, August 9, 1942 – five hits, four walks (18-inning game)
Burnett's nine hits are the record for most hits in a single game in MLB history, albeit in extra innings.
See also
On-base percentage (OBP), which is the ratio of TOB to the sum of at bats, base on balls, hit by pitch, and sacrifice flies
References
External links
All-time career leaders from baseball-reference.com
All-time single-season leaders from baseball-reference.com
Batting statistics
Baseball terminology
|
https://en.wikipedia.org/wiki/AA%20postulate
|
In Euclidean geometry, the AA postulate states that two triangles are similar if they have two corresponding angles congruent.
The AA postulate follows from the fact that the sum of the interior angles of a triangle is always equal to 180°. Knowing two angles, such as 32° and 64°, determines the third angle, here 84°, because 180 − (32 + 64) = 84. (This is sometimes referred to as the AAA postulate, which is true in all respects, but two angles are entirely sufficient.)
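The arithmetic is just the triangle angle sum; a one-line check:

def third_angle(a: float, b: float) -> float:
    # Euclidean triangles have an angle sum of 180 degrees, so two angles determine the third.
    return 180.0 - (a + b)

print(third_angle(32, 64))   # -> 84.0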
The postulate can be better understood by working in reverse order. The two triangles on grids A and B are similar, related by a 1.5 dilation from A to B. If they are aligned, as in grid C, it is apparent that the angle at the origin of one triangle is congruent with that of the other (D). We also know that the pair of sides opposite the origin are parallel, because the sides adjacent to them are proportional, stem from the same point, and line up with each other. The sides around these parallels can then be viewed as transversals, so the corresponding angles are congruent. By this reasoning, similar triangles have congruent angles.
The AA postulate can be used, for example, in proving the Angle Bisector Theorem. It is one of several criteria for determining the similarity of triangles.
References
Elementary geometry
Triangle geometry
Euclidean plane geometry
|
https://en.wikipedia.org/wiki/Radon%27s%20theorem
|
In geometry, Radon's theorem on convex sets, published by Johann Radon in 1921, states that: Any set of d + 2 points in Rd can be partitioned into two sets whose convex hulls intersect. A point in the intersection of these convex hulls is called a Radon point of the set. For example, in the case d = 2, any set of four points in the Euclidean plane can be partitioned in one of two ways. It may form a triple and a singleton, where the convex hull of the triple (a triangle) contains the singleton; alternatively, it may form two pairs of points that form the endpoints of two intersecting line segments.
Proof and construction
Consider any set of d + 2 points in d-dimensional space. Then there exists a set of multipliers a1, ..., ad + 2, not all of which are zero, solving the system of linear equations
because there are d + 2 unknowns (the multipliers) but only d + 1 equations that they must satisfy (one for each coordinate of the points, together with a final equation requiring the sum of the multipliers to be zero). Fix some particular nonzero solution a1, ..., ad + 2. Let be the set of points with positive multipliers, and let be the set of points with multipliers that are negative or zero. Then and form the required partition of the points into two subsets with intersecting convex hulls.
The convex hulls of and must intersect, because they both contain the point
where
The left hand side of the formula for expresses this point as a convex combination of the points in , and the right hand side expresses it as a convex combination of the points in . Therefore, belongs to both convex hulls, completing the proof.
This proof method allows for the efficient construction of a Radon point, in an amount of time that is polynomial in the dimension, by using Gaussian elimination or other efficient algorithms to solve the system of equations for the multipliers.
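A numerical sketch of this construction with NumPy (the function name is ours): find a null-space vector of the homogeneous system for the multipliers, then split the points by the signs of the multipliers.

import numpy as np

def radon_partition(points: np.ndarray):
    """Radon partition and Radon point for d + 2 points in R^d (given as the rows of `points`).

    Following the proof above: find nonzero multipliers a with sum_i a_i x_i = 0 and
    sum_i a_i = 0, then split the points by the sign of a_i.
    """
    n, d = points.shape
    assert n == d + 2
    A = np.vstack([points.T, np.ones(n)])        # d + 1 homogeneous equations in n unknowns
    a = np.linalg.svd(A)[2][-1]                  # a null-space vector (smallest singular value)
    I = np.where(a > 1e-12)[0]
    J = np.where(a <= 1e-12)[0]
    radon_point = points[I].T @ a[I] / a[I].sum()
    return I, J, radon_point

# Four points in the plane: (1, 1) lies inside the triangle formed by the other three.
pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [1.0, 1.0]])
print(radon_partition(pts))                      # the Radon point is (1, 1)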
Topological Radon theorem
An equivalent formulation of Radon's theorem is:If ƒ is any affine function from a (d + 1)-dimensional simplex Δd+1 to Rd, then there are two disjoint faces of Δd+1 whose images under ƒ intersect.They are equivalent because any affine function on a simplex is uniquely determined by the images of its vertices. Formally, let ƒ be an affine function from Δd+1 to Rd. Let be the vertices of Δd+1, and let be their images under ƒ. By the original formulation, the can be partitioned into two disjoint subsets, e.g. (xi)i in I and (xj)j in J, with overlapping convex hull. Because f is affine, the convex hull of (xi)i in I is the image of the face spanned by the vertices (vi)i in I, and similarly the convex hull of (xj)j in J is the image of the face spanned by the vertices (vj)j in j. These two faces are disjoint, and their images under f intersect - as claimed by the new formulation.
The topological Radon theorem generalizes this formulation. It allows ƒ to be any continuous function, not necessarily affine: If ƒ is any continuous function from a (d + 1)-dimensional s
|
https://en.wikipedia.org/wiki/CCR%20and%20CAR%20algebras
|
In mathematics and physics CCR algebras (after canonical commutation relations) and CAR algebras (after canonical anticommutation relations) arise from the quantum mechanical study of bosons and fermions respectively. They play a prominent role in quantum statistical mechanics and quantum field theory.
CCR and CAR as *-algebras
Let be a real vector space equipped with a nonsingular real antisymmetric bilinear form (i.e. a symplectic vector space). The unital *-algebra generated by elements of subject to the relations
for any in is called the canonical commutation relations (CCR) algebra. The uniqueness of the representations of this algebra when is finite dimensional is discussed in the Stone–von Neumann theorem.
If is equipped with a nonsingular real symmetric bilinear form instead, the unital *-algebra generated by the elements of subject to the relations
for any in is called the canonical anticommutation relations (CAR) algebra.
The C*-algebra of CCR
There is a distinct, but closely related meaning of CCR algebra, called the CCR C*-algebra. Let be a real symplectic vector space with nonsingular symplectic form . In the theory of operator algebras, the CCR algebra over is the unital C*-algebra generated by elements subject to
These are called the Weyl form of the canonical commutation relations and, in particular, they imply that each is unitary and . It is well known that the CCR algebra is a simple non-separable algebra and is unique up to isomorphism.
When is a Hilbert space and is given by the imaginary part of the inner-product, the CCR algebra is faithfully represented on the symmetric Fock space over by setting
for any . The field operators are defined for each as the generator of the one-parameter unitary group on the symmetric Fock space. These are self-adjoint unbounded operators, however they formally satisfy
As the assignment is real-linear, the operators define a CCR algebra over in the sense of Section 1.
The C*-algebra of CAR
Let be a Hilbert space. In the theory of operator algebras the CAR algebra is the unique C*-completion of the complex unital *-algebra generated by elements subject to the relations
for any , .
When is separable the CAR algebra is an AF algebra and in the special case is infinite dimensional it is often written as .
Let be the antisymmetric Fock space over and let be the orthogonal projection onto antisymmetric vectors:
The CAR algebra is faithfully represented on by setting
for all and . The fact that these form a C*-algebra is due to the fact that creation and annihilation operators on antisymmetric Fock space are bona-fide bounded operators. Moreover, the field operators satisfy
giving the relationship with Section 1.
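For a finite number of modes the CAR relations can be realized concretely by matrices via the Jordan–Wigner construction (not described in the text above); a NumPy sketch that builds annihilation operators for n modes and verifies the anticommutation relations:

import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])      # sigma^-: annihilation on a single mode

def kron_all(mats):
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

def annihilator(k, n):
    # a_k = Z x ... x Z x sigma^- x I x ... x I on (C^2)^{tensor n}
    return kron_all([Z] * k + [sm] + [I2] * (n - k - 1))

def anticomm(x, y):
    return x @ y + y @ x

n = 3
a = [annihilator(k, n) for k in range(n)]

# CAR relations: {a_i, a_j} = 0 and {a_i, a_j^*} = delta_ij * Id
for i in range(n):
    for j in range(n):
        assert np.allclose(anticomm(a[i], a[j]), 0)
        expected = np.eye(2**n) if i == j else np.zeros((2**n, 2**n))
        assert np.allclose(anticomm(a[i], a[j].conj().T), expected)
print("CAR relations verified for", n, "modes")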
Superalgebra generalization
Let be a real -graded vector space equipped with a nonsingular antisymmetric bilinear superform (i.e. ) such that is real if either or is an even element and imaginary if both of them are odd. The uni
|
https://en.wikipedia.org/wiki/The%20Compendious%20Book%20on%20Calculation%20by%20Completion%20and%20Balancing
|
The Compendious Book on Calculation by Completion and Balancing (, ; ), also known as al-Jabr (Arabic: ), is an Arabic mathematical treatise on algebra written in Baghdad around 820 CE by the Persian polymath Muḥammad ibn Mūsā al-Khwārizmī. It was a landmark work in the history of mathematics, establishing algebra as an independent discipline.
Al-Jabr provided an exhaustive account of solving for the positive roots of polynomial equations up to the second degree. It was the first text to teach elementary algebra, and the first to teach algebra for its own sake. It also introduced the fundamental concept of "reduction" and "balancing" (which the term al-jabr originally referred to), the transposition of subtracted terms to the other side of an equation, i.e. the cancellation of like terms on opposite sides of the equation. Mathematics historian Victor J. Katz regards Al-Jabr as the first true algebra text that is still extant. Translated into Latin by Robert of Chester in 1145, it was used until the sixteenth century as the principal mathematical textbook of European universities.
Several authors have also published texts under this name, including Abū Ḥanīfa al-Dīnawarī, Abū Kāmil Shujā ibn Aslam, Abū Muḥammad al-ʿAdlī, Abū Yūsuf al-Miṣṣīṣī, 'Abd al-Hamīd ibn Turk, Sind ibn ʿAlī, Sahl ibn Bišr, and Šarafaddīn al-Ṭūsī.
Legacy
R. Rashed and Angela Armstrong write:
J. J. O'Connor and E. F. Robertson wrote in the MacTutor History of Mathematics archive:
The book
The book was a compilation and extension of known rules for solving quadratic equations and for some other problems, and considered to be the foundation of algebra, establishing it as an independent discipline. The word algebra is derived from the name of one of the basic operations with equations described in this book, following its Latin translation by Robert of Chester.
Quadratic equations
The book classifies quadratic equations to one of the six basic types and provides algebraic and geometric methods to solve the basic ones. Historian Carl Boyer notes the following regarding the lack of modern abstract notations in the book:
Thus the equations are verbally described in terms of "squares" (what would today be "x²"), "roots" (what would today be "x") and "numbers" ("constants": ordinary spelled out numbers, like 'forty-two'). The six types, with modern notations, are:
squares equal roots (ax² = bx)
squares equal number (ax² = c)
roots equal number (bx = c)
squares and roots equal number (ax² + bx = c)
squares and number equal roots (ax² + c = bx)
roots and number equal squares (bx + c = ax²)
Islamic mathematicians, unlike the Hindus, did not deal with negative numbers at all; hence an equation like bx + c = 0 does not appear in the classification, because it has no positive solutions if all the coefficients are positive. Similarly equation types 4, 5 and 6, which look equivalent to the modern eye, were distinguished because the coefficients must all be positive.
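For instance, type 4 ("squares and roots equal number") has a single positive root obtained by completing the square; a small sketch (the function name is ours), applied to the classic example x² + 10x = 39:

import math

def positive_root_type4(a: float, b: float, c: float) -> float:
    """Positive root of 'squares and roots equal number': a x^2 + b x = c, with a, b, c > 0.

    Completing the square gives x = sqrt((b / 2a)^2 + c / a) - b / (2a).
    """
    half = b / (2 * a)
    return math.sqrt(half**2 + c / a) - half

print(positive_root_type4(1, 10, 39))   # x^2 + 10x = 39  ->  3.0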
The al-ğa
|
https://en.wikipedia.org/wiki/Don%20Zagier
|
Don Bernard Zagier (born 29 June 1951) is an American-German mathematician whose main area of work is number theory. He is currently one of the directors of the Max Planck Institute for Mathematics in Bonn, Germany. He was a professor at the Collège de France in Paris from 2006 to 2014. Since October 2014, he is also a Distinguished Staff Associate at the International Centre for Theoretical Physics (ICTP).
Background
Zagier was born in Heidelberg, West Germany. His mother was a psychiatrist, and his father was the dean of instruction at the American College of Switzerland. His father held five different citizenships, and he spent his youth living in many different countries. After finishing high school (at age 13) and attending Winchester College for a year, he studied for three years at MIT, completing his bachelor's and master's degrees and being named a Putnam Fellow in 1967 at the age of 16. He then wrote a doctoral dissertation on characteristic classes under Friedrich Hirzebruch at Bonn, receiving his PhD at 20. He received his Habilitation at the age of 23, and was named professor at the age of 24.
Work
Zagier collaborated with Hirzebruch in work on Hilbert modular surfaces. Hirzebruch and Zagier coauthored Intersection numbers of curves on Hilbert modular surfaces and modular forms of Nebentypus, where they proved that intersection numbers of algebraic cycles on a Hilbert modular surface occur as Fourier coefficients of a modular form. Stephen Kudla, John Millson and others generalized this result to intersection numbers of algebraic cycles on arithmetic quotients of symmetric spaces.
One of his results is a joint work with Benedict Gross (the so-called Gross–Zagier formula). This formula relates the first derivative of the complex L-series of an elliptic curve evaluated at 1 to the height of a certain Heegner point. This theorem has some applications, including implying cases of the Birch and Swinnerton-Dyer conjecture, along with being an ingredient to Dorian Goldfeld's solution of the class number problem. As a part of their work, Gross and Zagier found a formula for norms of differences of singular moduli. Zagier later found a formula for traces of singular moduli as Fourier coefficients of a weight 3/2 modular form.
Zagier collaborated with John Harer to calculate the orbifold Euler characteristics of moduli spaces of algebraic curves, relating them to special values of the Riemann zeta function.
Zagier found a formula for the value of the Dedekind zeta function of an arbitrary number field at s = 2 in terms of the dilogarithm function, by studying arithmetic hyperbolic 3-manifolds. He later formulated a general conjecture giving formulas for special values of Dedekind zeta functions in terms of polylogarithm functions.
He discovered a short and elementary proof of Fermat's theorem on sums of two squares.
Zagier won the Cole Prize in Number Theory in 1987, the Chauvenet Prize in 2000, the von Staudt Prize in 2001 and the
|
https://en.wikipedia.org/wiki/Helly%27s%20theorem
|
Helly's theorem is a basic result in discrete geometry on the intersection of convex sets. It was discovered by Eduard Helly in 1913, but not published by him until 1923, by which time alternative proofs by and had already appeared. Helly's theorem gave rise to the notion of a Helly family.
Statement
Let be a finite collection of convex subsets of , with . If the intersection of every of these sets is nonempty, then the whole collection has a nonempty intersection; that is,
For infinite collections one has to assume compactness:
Let be a collection of compact convex subsets of , such that every subcollection of cardinality at most has nonempty intersection. Then the whole collection has nonempty intersection.
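In dimension d = 1 the convex sets are intervals, and the theorem says that pairwise intersection already forces a common point; a small sketch (the helper name is ours) checking this on a concrete family:

import itertools

def helly_intervals(intervals):
    """Helly's theorem in d = 1: if every 2 of the intervals meet, they all meet.

    Each interval is a pair (lo, hi) with lo <= hi, a compact convex subset of R.
    """
    pairwise = all(max(a[0], b[0]) <= min(a[1], b[1])
                   for a, b in itertools.combinations(intervals, 2))
    common = max(lo for lo, _ in intervals) <= min(hi for _, hi in intervals)
    assert not pairwise or common      # the d = 1 case of the theorem, for this family
    return common

print(helly_intervals([(0, 3), (1, 5), (2, 4)]))   # True: all three contain the point 2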
Proof
We prove the finite version, using Radon's theorem as in the proof by . The infinite version then follows by the finite intersection property characterization of compactness: a collection of closed subsets of a compact space has a non-empty intersection if and only if every finite subcollection has a non-empty intersection (once a single set is fixed, the intersections of all the others with it are closed subsets of a fixed compact space).
The proof is by induction:
Base case: Let . By our assumptions, for every there is a point that is in the common intersection of all with the possible exception of . Now we apply Radon's theorem to the set which furnishes us with disjoint subsets of such that the convex hull of intersects the convex hull of . Suppose that is a point in the intersection of these two convex hulls. We claim that
Indeed, consider any We shall prove that Note that the only element of that may not be in is . If , then , and therefore . Since is convex, it then also contains the convex hull of and therefore also . Likewise, if , then , and by the same reasoning . Since is in every , it must also be in the intersection.
Above, we have assumed that the points are all distinct. If this is not the case, say for some , then is in every one of the sets , and again we conclude that the intersection is nonempty. This completes the proof in the case .
Inductive Step: Suppose and that the statement is true for . The argument above shows that any subcollection of sets will have nonempty intersection. We may then consider the collection where we replace the two sets and with the single set . In this new collection, every subcollection of sets will have nonempty intersection. The inductive hypothesis therefore applies, and shows that this new collection has nonempty intersection. This implies the same for the original collection, and completes the proof.
Colorful Helly theorem
The colorful Helly theorem is an extension of Helly's theorem in which, instead of one collection, there are d+1 collections of convex subsets of .
If, for every choice of a transversal – one set from every collection – there is a point in common to all the chosen sets, then for at least one of the collections, there is a point in common to all of the sets in the collection.
|